\section{Introduction}
The development of distributed computing and machine learning provides strong support to various industries such as the internet, finance, and medicine. However, with increasing concerns over data leakage and the introduction of relevant laws and regulations, enterprises and institutions are strictly restricted from exchanging internal data with others. Thus, the data held by different parties is isolated and its value cannot be fully exploited. This problem is known as the \emph{data island problem}~\cite{yang2019federated}. In recent years, there has been increasing interest in applying privacy computing to solve the data island problem. Several privacy computing solutions have been proposed, including Secure Multi-party Computation (MPC)~\cite{goldreich1998secure}, Trusted Execution Environment (TEE)~\cite{ohrimenko2016oblivious}, and Federated Learning (FL)~\cite{konevcny2016federated}. In the area of multi-party collaborative learning, FL is the most promising technique. The most distinctive idea of FL is that during distributed model building, the datasets are only accessible by local devices and are never shared among different parties, which strongly guarantees the protection of data privacy. Currently, federated logistic regression (FLR) is one of the most widely applied federated models and has been implemented in almost all FL frameworks.
A number of alternative security techniques can be used to ensure data privacy~\cite{yang2019federatedbook}, including Homomorphic Encryption (HE)~\cite{rivest1978data}, Oblivious Transfer (OT)~\cite{rabin2005exchange}, Garbled Circuit (GC)~\cite{yao1986generate}, and Differential Privacy (DP)~\cite{dwork2006calibrating}. Owing to its ability to perform calculations directly over ciphertext, HE has been widely applied in existing industrial implementations of FLR. With HE, the participants in FL can achieve collaborative modeling by exchanging encrypted parameters instead of user data or models. However, the computational complexity of HE introduces significant overhead in FLR, and the cost of homomorphic computation eventually becomes the performance bottleneck of the model training process. Most existing FL frameworks, such as FATE~\cite{webank19}, rely purely on the CPU to accomplish huge amounts of homomorphic computation. However, the computing power of the CPU is quite limited, even when multiple cores are utilized concurrently.
Unlike CPUs, various specifically designed hardware processors are well-suited for handling highly parallel operations. In recent years, heterogeneous systems that combine a CPU with hardware accelerators have become a popular choice in scenarios where high computing throughput is required. An encryption framework based on Field Programmable Gate Arrays (FPGA) was proposed in~\cite{yang2020fpga} for computational acceleration. In addition, the Graphics Processing Unit (GPU) is widely used to offload computational workload from the CPU because of its high instruction throughput and programming flexibility.
Following this trend, we aim to design a GPU-CPU heterogeneous system to accelerate the homomorphic computation of FLR. The calculations of the encrypted gradient and loss function in FLR have high computational complexity, and they are executed in every iteration of the training process until convergence is achieved. These complicated and repetitive calculations are a good match for GPU acceleration.
Despite the great opportunities offered by a GPU-based heterogeneous system, several challenges remain. First, the algorithms of FLR are complex and involve very diverse arithmetic expressions, so it is difficult to directly offload all the computation tasks onto the GPU. Second, the discrete data storage and sequential workflow of existing FL frameworks make it hard for the GPU to perform large-batch parallel computation. Third, frequent data copies and I/O transfers between different devices introduce additional delays.
Motivated by the above challenges, we propose HAFLO\xspace, a GPU-based acceleration solution for FLR. To achieve high performance and reduce the complexity of computation offloading, we summarize a set of performance-critical homomorphic operators (HOs) used by FLR and jointly optimize storage, IO, and computation to accelerate the execution of these operators on the GPU.
Our contributions are summarized as follows:
\begin{itemize}
\item \emph{Storage optimization.} We propose an aggregated data storage scheme, including data format conversion and storage management, to achieve high computation efficiency on the GPU.
\item \emph{Computation optimization.} Instead of directly accelerating the original complex algorithms, we summarize a set of performance-critical HOs used by FLR, which have lower implementation complexity and are much easier to accelerate on the GPU.
\item \emph{IO optimization.} We propose a scheme that temporarily stores intermediate results in GPU memory, and we implement the corresponding storage management processes, which can be controlled by upper-layer frameworks to reduce data transfers between the GPU and the CPU.
\end{itemize}
The rest of this paper is organized as follows. In section~\ref{sec:background}, we introduce the necessary background on FLR and the applications of GPUs in general-purpose computing, and we analyze the challenges of applying GPU acceleration to FLR. In section~\ref{sec:design}, we describe the design of our acceleration scheme in detail. Finally, we present the preliminary testing results of our implementation in section~\ref{sec:result} and conclude in section~\ref{sec:conclusion}.
\section{Background}\label{sec:background}
\subsection{Federated Logistic Regression}\label{sec:FLR}
FLR~\cite{hardy2017private} is a variant of traditional logistic regression. Depending on whether the datasets of different parties share a similar feature space or a similar instance ID space, FLR is classified into homogeneous LR and heterogeneous LR. The basic mathematical principles of both models are identical. To protect privacy while keeping data operable in FLR, Homomorphic Encryption (HE) is adopted to encrypt data before remote parameter sharing. The Paillier Cryptosystem~\cite{paillier1999public}, a pragmatic additively homomorphic encryption algorithm, is a popular choice for FLR.
In every iteration of training, stochastic gradient descent is performed. The stochastic gradients of logistic regression are
$$ \nabla l_{S^{\prime}}\left({\bf {\theta}}\right)=\frac{1}{s^{\prime}}\sum_{i\in S^{\prime}}\left(\frac{1}{1+e^{-y_i{\bf{\theta}}^\top{\bf{x}}_i}}-1\right)y_i{\bf{x}}_i, $$ \label{algo1}
where $ S^{\prime} $ is a mini-batch of size $s^{\prime}$.
The training loss on the dataset $S$ of $n$ instances is
$$ l_S(\theta)=\frac{1}{n}\sum_{i\in S}\log(1+e^{-y_i{\bf{\theta}}^{\top}{\bf{x}}_i}). $$ \label{algo2}
However, the Paillier Cryptosystem limits the operations available on ciphertexts to addition and multiplication by a plaintext. The formulas above do not satisfy this limitation. Therefore, they need to be replaced by their second-order approximations, obtained by taking the Taylor series expansions of the exponential functions. After approximation, the gradients and the loss function are respectively
$$ \nabla l_{S^{\prime}}\left({\bf {\theta}}\right)\approx\frac{1}{s^{\prime}}\sum_{i\in S^{\prime}}\left(\frac{1}{4}{\bf{\theta}}^{\top}{\bf{x}}_i-\frac{1}{2}y_i\right){\bf{x}}_i, $$ \label{algo3}
and
$$ l_{S}(\theta)\approx\frac{1}{n}\sum_{i\in S}\left(\log 2-\frac{1}{2}y_i{\bf{\theta}}^\top{\bf{x}}_i+\frac{1}{8}\left({\bf{\theta}}^\top{\bf{x}}_i\right)^2\right). $$ \label{algo4}
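As a plaintext sanity check, the Taylor-approximated gradient can be evaluated with a few lines of Python. This is our own illustrative sketch with made-up names (\texttt{approx\_gradient}, \texttt{theta}, \texttt{X}, \texttt{y}); in the actual protocol these quantities are computed homomorphically over ciphertexts.

```python
import numpy as np

# Plaintext evaluation of the approximated gradient:
# grad ~= (1/s') * sum_i (0.25 * theta^T x_i - 0.5 * y_i) * x_i

def approx_gradient(theta, X, y):
    # X: (s', d) mini-batch features; y: (s',) labels in {-1, +1}
    residual = 0.25 * (X @ theta) - 0.5 * y        # (1/4) theta^T x_i - (1/2) y_i
    return (residual[:, None] * X).mean(axis=0)    # average over the mini-batch

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.choice([-1.0, 1.0], size=8)
theta = np.zeros(3)
g = approx_gradient(theta, X, y)
```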
Combining the above principles with HE, multiple parties can build FLR models cooperatively.
In a typical FLR workflow, the participants start every iteration by training their local models separately. During this process, they exchange encrypted intermediate data. Afterwards, they transfer encrypted local training results, such as local gradients and losses, to a coordinator. The coordinator aggregates the ciphertexts from the different participants and performs decryption. After decryption, it sends the aggregated results back to the participants so that they can update their local models.
\subsection{GPU Acceleration for AI Framework}
A GPU is a computing device whose architecture is well-suited for parallel workloads. It was traditionally used for graphics processing, but growing programmability and increasing performance have made GPUs attractive for general-purpose computation, including database queries, scientific computing, etc. Benefiting from hundreds of small processor cores and a unique hierarchical storage structure, a GPU can execute far more arithmetic tasks concurrently than a CPU. GPUs have been applied to accelerate Artificial Intelligence for decades. Open-source machine learning frameworks such as TensorFlow~\cite{abadi2016tensorflow} and PyTorch~\cite{paszke2019pytorch} provide specialized GPU acceleration. As developers become increasingly reliant on open-source frameworks, the GPU has become an indispensable part of machine learning.
\subsection{Opportunities and Challenges}\label{sec:challenges}
As a necessary privacy technique, HE inevitably introduces heavy time overhead to FL systems. The common key length used today is 1024 bits, which means that the original floating-point operations are converted into operations on large integers thousands of bits in length. Moreover, the Paillier Cryptosystem~\cite{paillier1999public} has the following properties:
$$ E(a+b)=E(a)*E(b)\mod\ n^2, $$
$$ E(a*b)=E(a)^b\mod\ n^2, $$
where $E$ is the Paillier encryption function.
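These properties can be checked with a minimal textbook Paillier implementation. The sketch below uses toy primes and is NOT secure (real deployments use a modulus of at least 1024 bits); all names are our own.

```python
import math
import random

# Minimal textbook Paillier: public key (n, g), secret (lam, mu).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                  # standard generator choice
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:             # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 1234, 5678
add_ct = (encrypt(a) * encrypt(b)) % n2    # E(a) * E(b)  ->  decrypts to a + b
mul_ct = pow(encrypt(a), b, n2)            # E(a) ** b    ->  decrypts to a * b
```

The two ciphertext operations mirror the addition and scalar-multiplication properties above.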
In common situations, large numbers of calculation instances are fed into the system, a typical scenario for data stream processing.
Under these circumstances, the computing power of the CPU is far from sufficient to meet the demands of applications. In contrast, the GPU handles such tasks efficiently with pipelined processing. Calculations on large integers can be divided into multiple stages and assigned to different threads for acceleration on the GPU. Moreover, multiple instances can be processed concurrently with proper GPU resource allocation to maximize performance.
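The stage-wise decomposition can be sketched by splitting a 1024-bit addition into 32-bit limbs, the kind of partitioning that lets a GPU assign limbs to different threads. This is an illustration with our own names; the carry here is propagated sequentially, whereas a real kernel would use a parallel carry-resolution scheme.

```python
import random

LIMB_BITS = 32
MASK = (1 << LIMB_BITS) - 1

def to_limbs(x, n_limbs):
    # Split a big integer into little-endian 32-bit limbs.
    return [(x >> (LIMB_BITS * i)) & MASK for i in range(n_limbs)]

def from_limbs(limbs):
    return sum(l << (LIMB_BITS * i) for i, l in enumerate(limbs))

def limb_add(a_limbs, b_limbs):
    out, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):     # one "thread" per limb on a GPU
        s = a + b + carry
        out.append(s & MASK)
        carry = s >> LIMB_BITS
    out.append(carry)                      # final carry limb
    return out

random.seed(1)
x = random.getrandbits(1024)
y = random.getrandbits(1024)
z = from_limbs(limb_add(to_limbs(x, 32), to_limbs(y, 32)))
```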
Therefore, offloading homomorphic calculations onto the GPU is a straightforward solution for relieving this bottleneck.
However, leveraging the GPU introduces other problems.
\begin{itemize}
\item To facilitate data exchange, major FL frameworks apply distributed computation systems such as Spark~\cite{zaharia2012resilient} to manage basic calculation and communication instructions. Making data in GPU memory compatible with different databases introduces time-consuming data format conversions and memory access operations. Meanwhile, most basic instructions in FL frameworks are performed on only a small amount of data because of instance-ID-based data partitioning; in some cases, only the data of a single instance may be handled concurrently. However, the GPU follows the SIMD (Single Instruction, Multiple Data) architecture, so such operations greatly waste GPU computing resources and data bandwidth, resulting in catastrophic performance loss.
\item The algorithms in FLR are complex, considering the calculations of various parameters along with encryption and decryption operations. Different FL frameworks use divergent implementations and workflows, so it is hard to achieve highly compatible acceleration without a large amount of work.
\item Given the frequent parameter exchanges in FLR and the limited bandwidth of the physical data link, cross-device data transmission between the CPU and the GPU leads to non-negligible overhead in the training process.
\end{itemize}
\section{Design}\label{sec:design}
\subsection{Design Overview}
We propose HAFLO\xspace, a GPU-based heterogeneous system for accelerating the homomorphic calculations in FLR. As shown in Figure~\ref{eq:Arch_Overview}, HAFLO\xspace has three main components: an Operator-level Computation Optimizer for computation optimization, an Operator-level Storage Optimizer for storage optimization, and an Operator-level IO Optimizer for IO optimization. In our system, all operations are at operator-level granularity.
HAFLO\xspace provides a set of APIs that allow an upper-layer FL framework to execute operator-level homomorphic calculations over the delivered raw data on the GPU. In the first step, the Storage Optimizer performs data preprocessing over the raw data, including aggregation and serialization, which significantly facilitates SIMD batch calculation on the GPU. After data preprocessing, the Operand Management module prepares all the operands needed by the following operator calculation. When an operator calculation depends on the results of previous operator calculations, the Operand Management module also receives the addresses of the cached results stored in GPU memory. After all the operands are prepared, the Operator Binding module interprets the task information and generates instructions to invoke the corresponding HO kernel(s) to execute the calculation. Meanwhile, the preprocessed operands are copied to GPU global memory.
After calculation, Operator Binding reads the results stored in GPU memory and sends them back to Operand Management. Next, the Storage Optimizer performs data postprocessing, including deserialization, over the results before they are returned to the FL framework. If required, developers can also choose to cache the results in GPU memory by setting the corresponding API parameters; in this case, the data is not copied back to the CPU.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.44\textwidth]{Overview.pdf}
\caption{Architecture overview of HAFLO\xspace}
\label{eq:Arch_Overview}
\end{figure}
\subsection{Operator-level Storage Optimizer}\label{sec:storage}
The Storage Optimizer is responsible for data format conversion between the FL framework and the GPU operators. To optimize both the data transfer between the CPU and the GPU and the utilization of GPU resources, data aggregation and format conversion are necessary for every operator. To improve efficiency, proper memory management is also indispensable.
\subsubsection{Storage Management}\label{sec:storagemanagement}
Storage Management is the basic component of the Storage Optimizer. It can be leveraged to allocate or free data buffers in CPU memory. Specifically, Storage Management assigns a contiguous page-locked memory region for every allocation request on the CPU. Unlike pageable memory, page-locked memory can be directly accessed by the GPU device, which greatly reduces transfer time.
However, the allocation of page-locked memory is quite slow, in some cases even slower than the cross-device data transfer itself. To solve this problem, several facts about FLR are worth noting. Stochastic Gradient Descent (SGD) is a common algorithm for logistic regression, and we observe from major frameworks that SGD is performed with the same mini-batch separation in different epochs during training. Moreover, in a training task, most mini-batches have the same batch size. Motivated by these observations, we designed a dynamic memory allocation scheme to reuse page-locked memory. Whenever a new memory space is allocated, its ID, size, availability, and offset address are recorded in a table. When another memory allocation request arrives, Storage Management checks the table for an available memory space of the same size. If such a space exists, Storage Management changes its status to unavailable and returns its address. Similarly, when the data stored in a memory space is no longer needed, Storage Management changes the status of this memory space back to available. To optimize the search process, we use the memory size as the table index.
A garbage collection mechanism is also applied to prevent overloading the system memory with too much redundant space. We use the LRU (Least Recently Used) algorithm to manage memory space: when the total size of the allocated memory space exceeds a tolerance threshold, the least recently used memory space is released.
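A size-indexed reuse table with LRU eviction might be sketched as follows. This is our own illustration: \texttt{bytearray} stands in for an actual page-locked buffer (e.g. one obtained via \texttt{cudaHostAlloc}), and all class and method names are hypothetical.

```python
from collections import OrderedDict

class PinnedPool:
    def __init__(self, capacity_bytes):
        self.free = {}              # size -> list of reusable released buffers
        self.lru = OrderedDict()    # buffer id -> (size, buffer), oldest first
        self.capacity = capacity_bytes
        self.total = 0              # total bytes ever allocated and still held

    def alloc(self, size):
        bucket = self.free.get(size)
        if bucket:                  # reuse a buffer of the same size
            buf = bucket.pop()
            self.lru.pop(id(buf))
            return buf
        self.total += size
        self._evict()
        return bytearray(size)      # stand-in for a fresh pinned allocation

    def release(self, buf):         # mark available again, track recency
        self.free.setdefault(len(buf), []).append(buf)
        self.lru[id(buf)] = (len(buf), buf)

    def _evict(self):               # LRU garbage collection of released buffers
        while self.total > self.capacity and self.lru:
            _, (size, buf) = self.lru.popitem(last=False)
            self.free[size].remove(buf)
            self.total -= size

pool = PinnedPool(capacity_bytes=1 << 20)
a = pool.alloc(4096)
pool.release(a)
b = pool.alloc(4096)                # same-size request: the buffer is reused
```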
\subsubsection{Aggregation and Serialization}\label{sec:preprocessor}
As stated in section~\ref{sec:challenges}, most of the calculations in major frameworks are executed over a small amount of data because of data partitioning. To solve this problem, dataset aggregation is introduced in data preprocessing. The main goal of this component is to reconstruct the datasets for efficient batch processing on the GPU. In FLR, the Storage Optimizer loads raw data from the database and densely packs the features and labels that come from the same mini-batch together in an array-like structure. As discussed before, the mini-batch separation remains the same in different epochs. Based on this workflow, we only need to execute aggregation once after the creation of the mini-batches, since the aggregated datasets can be reused in the following epochs.
After aggregation, serialization is performed to convert the data format. To optimize GPU performance and fully utilize the bandwidth between the CPU and the GPU, data structures should be converted into a byte stream before being sent to the GPU. To this end, the Storage Optimizer rearranges the data and stores the serialized result in the page-locked memory space preassigned by Storage Management. After serialization, the byte stream is forwarded to the subsequent modules for calculation.
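The aggregation and serialization steps can be sketched with NumPy as follows. This is illustrative only: the real system packs serialized large-integer ciphertexts rather than floats, and all names are ours.

```python
import numpy as np

# Scattered per-instance (features, label) records, as a partitioned FL
# framework would hold them, are densely packed and flattened to one byte
# stream so a single host-to-device copy can feed a large SIMD batch.
records = [(np.array([0.1, 0.2, 0.3]), 1.0),
           (np.array([0.4, 0.5, 0.6]), -1.0),
           (np.array([0.7, 0.8, 0.9]), 1.0)]

features = np.stack([f for f, _ in records])   # (batch, dim), contiguous
labels = np.array([l for _, l in records])     # (batch,)
payload = features.astype(np.float64).tobytes() + labels.astype(np.float64).tobytes()

# Deserialization (the postprocessing direction) is the exact inverse.
flat = np.frombuffer(payload, dtype=np.float64)
restored_features = flat[:features.size].reshape(features.shape)
restored_labels = flat[features.size:]
```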
\subsubsection{Data Postprocessing}\label{sec:postprocessor}
When all the calculations are completed, deserialization is necessary to recover the data for the FL framework, such as the recovery of the overall gradient obtained after decryption. This is the reverse of serialization: the calculation results are reconstructed from the byte stream stored in the page-locked memory space. After data recovery, Storage Management is requested to free the memory.
\subsection{Operator-level Computation Optimizer}\label{sec:operators}
After data preprocessing, operator calculation is ready for execution. HAFLO\xspace mainly has two computation optimizations. First, several high-performance GPU-based operators are built to accelerate the process of arithmetic computation. Second, we introduce an operand management scheme to reuse intermediate data temporarily cached in GPU memory, which effectively reduces memory copy and cross-device data transfer.
\subsubsection{Operator Binding}
As mentioned in section~\ref{sec:challenges}, given the complex computing process and divergent implementations, it is infeasible to individually accelerate every calculation task with the GPU. In addition, some computations are not cost-effective to offload to the GPU because of their low computational complexity. According to the principles and workflow of FLR described in section~\ref{sec:FLR}, the most time-consuming parts of the training process are the homomorphic calculations over encrypted intermediate data. With the Taylor approximation, such calculations in FLR mainly consist of addition and multiplication. Combined with the properties of HE, we can identify all the basic operators required.
Therefore, we summarize a set of performance-critical HOs with high computational complexity from the FLR algorithms and build corresponding GPU kernels for their acceleration. Examples of the implemented operator kernels are listed in Table~\ref{tab:operators}. For every kernel in our design, multiple computing cores are utilized for parallel calculation. Given the size of the on-board memory in modern GPUs, more than 10 million homomorphic tasks can be handled simultaneously. These highly efficient kernels can easily compose most instructions used during training, including the calculations of the gradient and the loss.
\begin{table}[!htbp]
\centering
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{ll}
\hline
\multicolumn{1}{c}{\textbf{Operator Names}} & \multicolumn{1}{c}{\textbf{Mathematical formulas}} \\ \hline
Homomorphic Encryption & $ E(m) =g^m*r^n\mod\ n^2 $ \\ \hline
Homomorphic Multiplication & $ E(m_1*m_2)=[E(m_1)]^{m_2}\mod\ n^2 $ \\ \hline
Homomorphic Addition & $ E(m_1+m_2)=E(m_1)*E(m_2)\mod\ n^2 $ \\ \hline
Homomorphic Summation & $ E(\sum m_i)=\prod E(m_i)\mod\ n^2 $ \\ \hline
\end{tabular}%
}
\caption{Examples of basic operators implemented on GPU. $E$ is the encryption function; $(n, g)$ is the public key.}
\label{tab:operators}
\end{table}
Based on the HO kernels, we designed Operator Binding, which transfers data to and from the GPU and invokes HO kernel(s) according to the task configurations it receives. Operator Binding achieves a high throughput rate through careful GPU resource allocation and workflow management. Specifically, the number of threads and blocks allocated for each kernel matches the characteristics of the large integers in homomorphic calculations. Meanwhile, we increase parallelism by overlapping GPU operations, CPU operations, and data transmission.
\subsubsection{Operand Management}\label{sec:operand_management}
Operand Management is designed to prepare data for operator execution. In the most basic workflow, Operand Management receives serialized data from the Preprocessor and passes it to Operator Binding together with the task information. In certain cases, however, Operand Management can reduce efficiency loss with the cooperation of the IO Optimizer. As mentioned in section~\ref{sec:challenges}, the data bandwidth between the CPU and the GPU is limited and can become a performance bottleneck. In addition, the serialization and deserialization of ciphertexts performed in the Storage Optimizer are expensive, considering the large number of memory operations involved. Hence it is important to reduce these operations.
We note that in the workflow of FLR, there are several consecutive operations whose calculation results are only required by the next few steps. Therefore, to avoid unnecessary cross-device data IO and data format conversion for every single operator calculation, HAFLO\xspace allows intermediate results to be temporarily stored in GPU memory until they are no longer needed by subsequent operator calculation(s). In such cases, the GPU memory addresses of the data are also treated as operands in HAFLO\xspace. We detail the complete workflow in section~\ref{sec:IO}.
Due to the existence of the above operations, Operand Management is required to identify and transfer operands of different data types. In addition, for each piece of data stored in GPU memory, Operand Management constructs a data structure that contains useful information, including the memory address, data size, etc. This structure is returned to the FL framework after calculation for convenient information preservation during the training process. Conversely, Operand Management extracts the information from the data structure and passes it to Operator Binding when the FL framework sends the structure back for subsequent calculations.
\subsection{IO-Optimized Computing Workflow}\label{sec:IO}
In section~\ref{sec:operand_management}, we mentioned the scheme of caching intermediate results in GPU memory. The IO Optimizer is designed to manage the GPU memory space for the related operations. It provides a flexible and effective scheme for caching results in GPU memory and thus significantly reduces the delay spent on data IO.
\subsubsection{GPU Memory Management}
For operators without data dependency on the results of previous operator(s), GPU memory is allocated directly by Operator Binding to store the required data. However, for operators with data dependencies, the addresses of the cached results must be exposed to the FL framework so that subsequent operators can fetch them. This leads to potential problems, including memory leaks and invalid accesses. To ensure memory safety, we use an isolated module (i.e., GPU Memory Management) to manage the GPU memory space for caching results. Based on the requirements of the training process, GPU Memory Management is called by Operand Management to initialize GPU memory allocation and release requests for cached calculation results. For clarity, we differentiate between the GPU memory space used to store preprocessed operands (i.e., Preprocessed Operands Storage in Figure~\ref{eq:Arch_Overview}) and that used to cache intermediate results (i.e., Cached Results Storage in Figure~\ref{eq:Arch_Overview}).
\subsubsection{Workflow of Temporary Storage in GPU Memory}
To reuse the intermediate results cached in GPU memory, Operand Management cooperates with GPU Memory Management according to the task information. Following the properties of the FLR algorithms, we modify the instructions in the FL framework with a proper arrangement of data storage and transmission, as in the example presented in Figure~\ref{eq:Steaming_Flow}. In this example, the guest computes the fore gradient with minimal data transfer. In the calculation process, HAFLO\xspace needs to perform eight basic operators under the instructions of the FL framework. Each operator follows the same work process, described below.
\begin{itemize}
\item If the results are required to be cached in GPU memory, Operand Management leverages GPU Memory Management to request memory allocation on the GPU and then sends the address of the allocated memory space to Operator Binding for storing the results. After calculation, the address is packed into a data structure containing the data information and returned to the FL framework as an intermediate result. In such operations, Data Postprocessing is no longer necessary and can be bypassed.
\item If some operand(s) of the current operation are previous intermediate results already cached in GPU memory, the FL framework is required to pass in the data structure containing the corresponding information via the Homomorphic APIs. Because this data can be directly accessed by the GPU, Preprocessing is similarly skipped. Operand Management retrieves the data information, including the memory addresses, from the data structure and sends it to Operator Binding. If the data is no longer needed in the following operations, Operand Management leverages GPU Memory Management to release the memory space after the current calculation.
\end{itemize}
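The caching workflow can be illustrated with a pure-Python simulation in which a dict stands in for GPU memory, plaintext arithmetic stands in for the homomorphic kernels, and a handle carrying only an address and a size is what the FL framework sees. Every name here is hypothetical; the computed quantity mirrors the plaintext form of the fore gradient from Figure~\ref{eq:Steaming_Flow}, $0.25\,(logits_{host}+logits_{guest})-0.5\cdot label$.

```python
gpu_mem = {}
_next_addr = [0]

def gpu_alloc(value):
    addr = _next_addr[0]
    _next_addr[0] += 1
    gpu_mem[addr] = value
    return {"addr": addr, "size": len(value)}   # handle, not the data itself

def gpu_free(handle):
    del gpu_mem[handle["addr"]]

def op_scale_add(h_a, h_b, alpha, beta, cache=True):
    # Stand-in for a fused HO kernel computing alpha * a + beta * b.
    a, b = gpu_mem[h_a["addr"]], gpu_mem[h_b["addr"]]
    out = [alpha * x + beta * y for x, y in zip(a, b)]
    return gpu_alloc(out) if cache else out     # cache=True keeps it "on GPU"

logits_host = gpu_alloc([1.0, 2.0])
logits_guest = gpu_alloc([0.5, 0.5])
label = gpu_alloc([1.0, -1.0])

h_sum = op_scale_add(logits_host, logits_guest, 1.0, 1.0)       # cached result
fore_gradient = op_scale_add(h_sum, label, 0.25, -0.5, cache=False)
for h in (logits_host, logits_guest, label, h_sum):
    gpu_free(h)                                 # release once no longer needed
```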
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.42\textwidth]{Streaming_Flow.pdf}
\caption{The workflow for guest to calculate $ [[fore\_gradient]] = [[0.25 * (logits_{host} + logits_{guest}) - 0.5 * label]]$, where $[[x]]$ represents corresponding ciphertexts of data $x$. The texts with boxes represent kernel functions. Other texts represent data.}
\label{eq:Steaming_Flow}
\end{figure}
\begin{table*}[!htbp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccc}
\hline
\multicolumn{1}{c}{\textbf{Operators}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Throughput of GPU operators \\ (instances per second)\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Throughput of CPU operators with 1 core \\ (instances per second)\end{tabular}}} & \multicolumn{1}{c}{\textbf{Acceleration Ratio}} \\ \hline
Encoding & 15108620 & 77232 & 195 \\ \hline
Decoding & 22343405 & 587587 & 38 \\ \hline
Homomorphic Encryption & 83909 & 445 & 189 \\ \hline
Homomorphic Decryption & 287273 & 1465 & 196 \\ \hline
Homomorphic Multiplication & 991946 & 6410 & 155 \\ \hline
Homomorphic Addition & 1226803 & 27118 & 45 \\ \hline
Homomorphic Matrix Multiplication & 931942 & 4112 & 227 \\ \hline
Homomorphic Summation & 19575768 & 128893 & 152 \\ \hline
\end{tabular}%
}
\caption{Throughput of GPU HOs and CPU HOs with $10^5$ instances in each test. The bit-length of public key is 1024.}
\label{tab:Operator_Comparison}
\end{table*}
Through the above processes, redundant data transmissions are completely avoided. As GPU memory is limited, it may be insufficient to cache all the calculation results. In our implementation, we use the LRU algorithm to select which cached results are sent back to the CPU and erased from GPU memory.
\section{Preliminary Results}\label{sec:result}
\subsection{Implementation}
Our implementation mainly consists of two parts: the construction of the GPU operators and HAFLO\xspace itself. The GPU operators are developed with the CUDA Toolkit. HAFLO\xspace is implemented on the open-source version of the Federated AI Technology Enabler (FATE) 1.5.1~\cite{webank19}. We ran the framework on two CPU servers to simulate a federated training process between two parties. The hardware configurations of both servers are identical, each with an \textit{Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz}. We use an NVIDIA Tesla V100 PCIe 32GB as the GPU accelerator for each server.
\subsection{Evaluation}
We conducted our experiments from two aspects. First, we evaluate the performance of the basic GPU operators, as they are the fundamental components of the heterogeneous system. Second, to verify how well our system is compatible with FL frameworks, we train logistic regression models with vanilla FATE and with GPU-accelerated FATE separately.
The evaluation of the operators is conducted by comparing the computing throughput of the GPU and the CPU. The CPU operators are implemented with the \texttt{gmpy2} library, the same as in FATE. Table~\ref{tab:Operator_Comparison} presents the comparison results. The GPU operators significantly outperform their CPU counterparts. In particular, for operators such as Homomorphic Multiplication, which involve large amounts of modular exponentiation, the throughput of the GPU operators is more than 100 times that of the CPU operators.
To check the compatibility of HAFLO\xspace, we compare the performance of logistic regression model training with vanilla FATE and with GPU-accelerated FATE. Model training is performed on a credit card dataset\footnote{https://www.kaggle.com/arslanali4343/credit-card-cheating-detection-cccd}. We partitioned the dataset vertically and horizontally for heterogeneous LR and homogeneous LR, respectively.
As shown in Figure~\ref{eq:Performance}, GPU-accelerated FATE achieves a remarkable performance increase for each training epoch compared to vanilla FATE. For heterogeneous LR, the modified FATE achieves an acceleration ratio of 49.9 over vanilla FATE in finishing one training epoch. For homogeneous LR, which is more computationally intensive, GPU-accelerated FATE achieves an acceleration ratio of 88.4.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.38\textwidth]{performance.png}
\caption{Performance of vanilla FATE and GPU-accelerated FATE}
\label{eq:Performance}
\end{figure}
\section{Conclusion and Future work}\label{sec:conclusion}
In this paper, we propose HAFLO\xspace, a GPU-based solution for accelerating FLR. To maximize the utilization of cross-device transmission bandwidth and GPU computing resources, we optimize data storage with aggregation and serialization. Aiming at high computing throughput, several GPU-based homomorphic operators are implemented. Furthermore, an IO-optimized workflow is introduced to minimize memory copies and cross-device data transfers by caching results in GPU memory. Based on our system, we accelerated an industrial FL framework and conducted logistic regression model training. The experimental results demonstrate the performance advantage of our solution.
As the need for FL in practical applications continues to grow, we have encountered some new challenges.
\emph{Need for computational acceleration in various FL algorithms.} In addition to FLR, many other popular FL algorithms suffer from the performance bottleneck of HE. According to our analysis, the performance degradation caused by data encryption is considerable in algorithms such as SecureBoost and Federated Transfer Learning~\cite{jing2019quantifying}. Despite the progress in related research, it is still hard for newly proposed algorithms to balance the trade-off between security and performance. Federated Matrix Factorization~\cite{chai2020secure} is a distributed framework that can be widely employed in scenarios such as Federated Recommendation Systems~\cite{yang2020federated}, but more efficient encryption operations are urgently required to make the framework practical in real-world applications. Therefore, it is essential to build a more comprehensive system. In the future, we will extend the functionality of HAFLO\xspace with more high-performance operators and generalized interfaces to support the acceleration of more FL algorithms.
\emph{Communication overhead in distributed systems.} Due to the large amounts of data and model parameters involved, a participant in FL commonly deploys a distributed system in a data center network (DCN) for data storage and computation. Many FL frameworks use HTTP/2-based gRPC for data transmission, which may cause congestion in the DCN. In some machine learning platforms such as TensorFlow, end-to-end performance can be improved by using RDMA to reduce network latency~\cite{yi2017towards}. We will explore methods of combining GPU-accelerated FL with RDMA to reduce communication overhead while optimizing computation performance.
\bibliographystyle{named}
\section{Introduction}
Let $S$ be a
collection of $n$ points $p_1,\ldots,p_n$ in the Euclidean plane.
We want to
find a connected set that contains $S$ whose length is minimal, namely
\begin{equation}\label{ste}
\inf \{\mathcal{H}^1(K) : K \subset \mathbb{R}^2, \mbox{ connected and such that } S
\subset K\}\,.
\end{equation}
The latter is commonly known as the \emph{Steiner problem}.
Although the existence of minimizers is known,
finding a solution explicitly is
extremely challenging, even numerically.
For this reason every method to determine solutions is welcome.
A classical tool is the notion of calibration,
introduced in the framework of minimal surfaces~\cite{berger, realcochains, CGE}
(see also~\cite[\S~6.5]{morganbook} for an overview of the history of calibrations):
given $M$ a $k$--dimensional oriented manifold in $\mathbb{R}^{d}$, a calibration
for $M$
is a closed $k$--form $\omega$
such that $\vert\omega\vert\leq 1$ and $\langle \omega, \xi\rangle = 1$
for every unit simple $k$--vector $\xi$ orienting the
tangent space of $M$.
The existence of a calibration for $M$ implies that the
manifold is area minimizing in its homology class. Indeed
given an oriented $k$--dimensional manifold $N$ such that $\partial M =
\partial N$
we have
\begin{equation*}
\mbox{Vol}(M) = \int_{M} \omega = \int_N \omega \leq \mbox{Vol}(N)\, ,
\end{equation*}
where we applied the properties required on the calibration $\omega$
and we used Stokes' theorem in the second equality.
\medskip
This definition of calibration is not suitable for the Steiner
Problem~\eqref{ste}
simply for the reason that
neither the competitors nor the minimizers of the problem
admit an orientation which is compatible with their boundary.
To overcome this issue
several variants have been
defined
starting from the \emph{paired calibrations}
by Morgan and Lawlor in~\cite{lawmor},
where the Steiner problem is seen as a problem of minimal partitions.
In~\cite{annalisaandrea}
Marchese and Massaccesi
rephrase the Steiner Problem as a mass minimization for
$1$--rectifiable currents with coefficients in a group
and this leads to a suitable definition of calibrations
(see also~\cite{orlandi}).
Finally reviving the approach via covering space by Brakke~\cite{brakke}
(see~\cite{cover} for the existence theory)
another notion of calibrations
has been produced~\cite{calicipi}.
\medskip
A natural question is whether
the previously mentioned notions of calibrations are equivalent. In the first part of the paper we answer this question.
When the points of $S$ lie on the boundary of a convex set
(actually the only case in which paired calibrations are defined)
calibrations on coverings are nothing but paired calibrations.
On the other hand an equivalence does not exist between
calibrations on coverings and calibrations for currents with coefficients in $\mathbb{R}^n$;
in particular the notion of calibrations for currents
is stronger than the one on coverings.
In other words it is easier to find a calibration on coverings.
Let us now discuss in more depth the relation between the two notions.
The definition of calibrations for currents with coefficients in
$\mathbb{R}^n$
(see Definition~\ref{calicurrents})
depends on the choice of the norm on $\mathbb{R}^n$ (see \cite{morgancluster} where different norms are used to study clusters with multiplicities).
The norm considered in~\cite{annalisaandrea} (see also \cite{morgancluster}), here denoted by $\Vert\cdot\Vert_\flat$,
is the one that produces the weakest notion of calibrations and still gives the equivalence with the Steiner problem
in $\mathbb{R}^d$ with $d\geq 2$: the ``best possible" norm in a certain
sense.
It turns out that this notion of calibration is stronger than the one
on coverings.
Indeed in Theorem~\ref{dacalicurrentacalicovering} we are able to prove that
if a calibration for a mass minimizing current with coefficients in $\mathbb{R}^n$ exists, then
there exists also a calibration for a perimeter minimizing set in a given
covering,
but the converse does not hold.
To prove a sort of converse one has to abandon the idea
of working in the general setting of Marchese and
Massaccesi~\cite{annalisaandrea}
and take full advantage of restricting to $\mathbb{R}^2$.
To this aim we slightly change the mass minimization problem and
we define a different norm on $\mathbb{R}^n$ denoted by
$\Vert\cdot\Vert_\natural$
(the unit ball of $\Vert\cdot\Vert_\natural$ is smaller than the one of
$\Vert\cdot\Vert_\flat$ as one can see (at least in $\mathbb{R}^3$)
from their Frank diagram depicted in Figure~\ref{norm}).
The $\Vert\cdot\Vert_\natural$ notion of calibration is equivalent with the definition of calibrations on coverings in $\mathbb{R}^2$.
\medskip
The second part of the paper has a different focus and it can be seen as a
completion of~\cite{calicipi} as we restrict our attention to
calibrations on coverings.
In Theorem~\ref{impli} we prove that the existence of a calibration for a
constrained set $E$
in a covering $Y$ implies the minimality of $E$ not only among
(constrained) finite perimeter sets,
but also in the larger class of finite linear combinations of characteristic
functions of finite perimeter sets (satisfying a suitable constraint).
This apparently harmless result has some remarkable consequences.
First of all it is directly related to the convexification of the problem
naturally associated with the notion of calibration.
This convexification $G$ is the
so--called ``local convex envelope" and it has been defined
by Chambolle, Cremers and Pock.
In~\cite{chambollecremerspock} they are able to prove that it is
the tightest among the convexifications with an integral form.
Unfortunately it does not coincide with the convex envelope of the
functional,
whose characterization is unknown.
We show that $G$ equals the total variation
on constrained $BV$ functions with a finite number of values.
In other words, the local convex envelope ``outperforms" the total variation
only when evaluated on constrained $BV$ functions
whose derivatives have absolutely continuous parts
with respect to $\mathscr{L}^2$.
As a second consequence of Theorem~\ref{impli}
we produce a counterexample to the existence of calibrations.
It has already been exhibited in the setting of normal currents by
Bonafini~\cite{bonafini}
and because of the result of Section~\ref{equivalence} we had to
``translate" it in our framework.
It is specific to the case
in which $S$ is composed of five points, the vertices of a regular pentagon, and
cannot be easily generalized to vertices of other regular polygons.
\medskip
We summarize here the structure of the paper.
In Section~\ref{problem} we recap the different approaches to the
Steiner Problem and the consequent notions of calibrations.
Section~\ref{equivalence}
is devoted to the relations among different definitions of calibrations.
Then in Section~\ref{convexification} we generalize the theorem
``existence of calibrations implies minimality", and this allows us to
complement a result by Chambolle, Cremers and Pock on the
convexification of the problem.
An example of nonexistence of calibrations is given in
Section~\ref{nonexistence}.
The paper is concluded with some remarks about the calibrations
in families presented in~\cite{calicipi} that underline the effectiveness
of our method.
\section{Notions of calibrations for minimal Steiner networks}\label{problem}
In this section we briefly review the approaches to the Steiner Problem and the related notions of calibrations presented in the literature~\cite{calicipi, lawmor, annalisaandrea}.
\subsection{Covering space approach \cite{cover,brakke,calicipi}}
We begin by explaining the approach via covering space by Brakke~\cite{brakke}
and Amato, Bellettini and Paolini~\cite{cover}.
They proved that minimizing the perimeter among
constrained sets on a suitably defined
covering space of $\mathbb{R}^2\setminus S=:M$
is equivalent to minimizing the length among all networks that connect the points of $S$.
We refer to both~\cite{cover} and~\cite{calicipi} for details.
\medskip
Consider a \emph{covering space} $(Y, p)$ where $p:Y\to M$
is the projection onto the base space.
Consider $\ell$ a loop in $\mathbb{R}^2$ around at most $n-1$ points of $S$.
Heuristically $Y$ is composed of $n$ copies of $\mathbb{R}^2$
(the sheets of the covering space) glued in
such a way that going along $p^{-1}(\ell)$ in $Y$,
one ``visits'' all the $n$ sheets.
We avoid repeating here the explicit construction of $Y$ presented in~\cite{cover}
but it is relevant to keep in mind
how points of different copies of $\mathbb{R}^2\setminus S$ are identified.
First the $n$ points of $S$ in $\mathbb{R}^2$ are connected
with a \emph{cut} $\Sigma \subset \mathbb{R}^2$
given by the union of injective Lipschitz curves $\Sigma_i$ from $p_i$ to $p_{i+1}$
(with $i\in\{1,\ldots, n-1\}$) not intersecting each other.
Then each $\Sigma_i$ is lifted to the $n$ sheets of the covering and
the points of $\Sigma_i$ of the $j$--th sheet are identified with points of
$\Sigma_i$ of the $k$--th sheet
via the equivalence relation
\begin{equation*}
k\equiv j+i\, (\mathrm{mod} \;n) \qquad\text{with}\; i=1,\ldots, n-1\; \text{and}\; j=1,\ldots, n\,.
\end{equation*}
This equivalence relation produces a non--trivial covering of $M$.
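Heuristically, the non--triviality can be checked on a loop: for $n=3$, a loop $\ell$ winding once around $p_1$ crosses only $\Sigma_1$, and each crossing of (a lift of) $\Sigma_1$ moves from the $j$--th sheet to the $(j+1)$--th sheet $(\mathrm{mod}\;3)$; hence the lift of $\ell$ closes up only after three turns, visiting all the sheets.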
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw[white]
(-2,-1.85)to[out= 0,in=180, looseness=1] (2,-1.85)
(-2,1.85)to[out= 0,in=180, looseness=1] (2,1.85)
(-2,-1.85)to[out= 90,in=-90, looseness=1] (-2,1.85)
(2,-1.85)to[out= 90,in=-90, looseness=1] (2,1.85);
\fill[black](0,1) circle (1.7pt);
\fill[black](-0.85,-0.5) circle (1.7pt);
\fill[black](0.85,-0.5) circle (1.7pt);
\path[font=\normalsize]
(-0.86,-0.5)node[left]{$p_1$}
(0.86,-0.5)node[right]{$p_2$}
(0,1)node[above]{$p_3$};
\path[font=\small]
(-1.5,-1.85) node[above]{$\mathbb{R}^2$};
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.8]
\draw[black!50!white, dashed]
(-2,-1.85)to[out= 0,in=180, looseness=1] (2,-1.85)
(-2,1.85)to[out= 0,in=180, looseness=1] (2,1.85)
(-2,-1.85)to[out= 90,in=-90, looseness=1] (-2,1.85)
(2,-1.85)to[out= 90,in=-90, looseness=1] (2,1.85);
\draw[color=black, dashed, very thick]
(0.86,-0.5)to[out= 90,in=-30, looseness=1] (0,1)
(-0.86,-0.5)to[out= -30,in=-150, looseness=1] (0.86,-0.5);
\draw[color=black,scale=1,domain=-3.141: 3.141,
smooth,variable=\t,shift={(0,-0.7)},rotate=0]plot({0.4*sin(\t r)},
{0.4*cos(\t r)});
\fill[black](0,1) circle (1.7pt);
\fill[black](-0.85,-0.5) circle (1.7pt);
\fill[black](0.85,-0.5) circle (1.7pt);
\path[font=\normalsize]
(-0.86,-0.5)node[left]{$p_1$}
(0.86,-0.5)node[right]{$p_2$}
(0,1)node[above]{$p_3$};
\path[font=\small]
(-1.5,-1.85) node[above]{$D_1$};
\filldraw[fill=white, color=black, pattern=dots, pattern color=black]
(-0.4,-0.7)to[out= -90,in=180, looseness=1](0,-1.1)--
(0,-1.1)to[out= 0,in=-90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white, color=blue, pattern=grid, pattern color=blue]
(-0.4,-0.7)to[out= 90,in=180, looseness=1](0,-0.3)--
(0,-0.3)to[out= 0,in=90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white, color=green(munsell), pattern=dots, pattern color=green(munsell)]
(0.825,-0.1)to[out= 10,in=-90, looseness=1](1.1,0.25)--
(1.1,0.25)to[out= 90,in=35, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\filldraw[fill=white, color=yellow, pattern=grid, pattern color=yellow]
(0.825,-0.1)to[out= -170,in=-90, looseness=1](0.3,0.25)--
(0.3,0.25)to[out= 90,in=-145, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.8]
\draw[black!50!white, dashed]
(-2,-1.85)to[out= 0,in=180, looseness=1] (2,-1.85)
(-2,1.85)to[out= 0,in=180, looseness=1] (2,1.85)
(-2,-1.85)to[out= 90,in=-90, looseness=1] (-2,1.85)
(2,-1.85)to[out= 90,in=-90, looseness=1] (2,1.85);
\draw[color=black, dashed, very thick]
(0.86,-0.5)to[out= 90,in=-30, looseness=1] (0,1)
(-0.86,-0.5)to[out= -30,in=-150, looseness=1] (0.86,-0.5);
\draw[color=black,scale=1,domain=-3.141: 3.141,
smooth,variable=\t,shift={(0,-0.7)},rotate=0]plot({0.4*sin(\t r)},
{0.4*cos(\t r)});
\fill[black](0,1) circle (1.7pt);
\fill[black](-0.85,-0.5) circle (1.7pt);
\fill[black](0.85,-0.5) circle (1.7pt);
\path[font=\normalsize]
(-0.86,-0.5)node[left]{$p_1$}
(0.86,-0.5)node[right]{$p_2$}
(0,1)node[above]{$p_3$};
\path[font=\small]
(-1.5,-1.85) node[above]{$D_2$};
\filldraw[fill=white, color=lavenderblue, pattern=north west lines, pattern color=lavenderblue]
(-0.4,-0.7)to[out= 90,in=180, looseness=1](0,-0.3)--
(0,-0.3)to[out= 0,in=90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white, color=blue, pattern=grid, pattern color=blue]
(-0.4,-0.7)to[out= -90,in=180, looseness=1](0,-1.1)--
(0,-1.1)to[out= 0,in=-90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white, color=tigerseye, pattern=north west lines, pattern color=tigerseye]
(0.825,-0.1)to[out= 10,in=-90, looseness=1](1.1,0.25)--
(1.1,0.25)to[out= 90,in=35, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\filldraw[fill=white, color=green(munsell), pattern=dots, pattern color=green(munsell)]
(0.825,-0.1)to[out= -170,in=-90, looseness=1](0.3,0.25)--
(0.3,0.25)to[out= 90,in=-145, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.8]
\draw[black!50!white, dashed]
(-2,-1.85)to[out= 0,in=180, looseness=1] (2,-1.85)
(-2,1.85)to[out= 0,in=180, looseness=1] (2,1.85)
(-2,-1.85)to[out= 90,in=-90, looseness=1] (-2,1.85)
(2,-1.85)to[out= 90,in=-90, looseness=1] (2,1.85);
\draw[color=black, dashed, very thick]
(0.86,-0.5)to[out= 90,in=-30, looseness=1] (0,1)
(-0.86,-0.5)to[out= -30,in=-150, looseness=1] (0.86,-0.5);
\draw[color=black,scale=1,domain=-3.141: 3.141,
smooth,variable=\t,shift={(0,-0.7)},rotate=0]plot({0.4*sin(\t r)},
{0.4*cos(\t r)});
\fill[black](0,1) circle (1.7pt);
\fill[black](-0.85,-0.5) circle (1.7pt);
\fill[black](0.85,-0.5) circle (1.7pt);
\path[font=\normalsize]
(-0.86,-0.5)node[left]{$p_1$}
(0.86,-0.5)node[right]{$p_2$}
(0,1)node[above]{$p_3$};
\path[font=\small]
(-1.5,-1.85) node[above]{$D_3$};
\filldraw[fill=white, pattern=dots]
(-0.4,-0.7)to[out= 90,in=180, looseness=1](0,-0.3)--
(0,-0.3)to[out= 0,in=90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white,color=lavenderblue, pattern=north west lines, pattern color=lavenderblue]
(-0.4,-0.7)to[out= -90,in=180, looseness=1](0,-1.1)--
(0,-1.1)to[out= 0,in=-90, looseness=1](0.4,-0.7)--
(0.4,-0.7)to[out= -165,in=-15, looseness=1](-0.4,-0.7);
\filldraw[fill=white,color=yellow, pattern=grid, pattern color=yellow]
(0.825,-0.1)to[out= 10,in=-90, looseness=1](1.1,0.25)--
(1.1,0.25)to[out= 90,in=35, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\filldraw[fill=white,color=tigerseye, pattern=north west lines, pattern color=tigerseye]
(0.825,-0.1)to[out= -170,in=-90, looseness=1](0.3,0.25)--
(0.3,0.25)to[out= 90,in=-145, looseness=1](0.5,0.6)--
(0.5,0.6)to[out= -60,in=100, looseness=1](0.825,-0.1);
\end{tikzpicture}
\end{center}
\caption{A closer look at the topology of the covering $Y$ of $\mathbb{R}^2\setminus\{p_1,p_2,p_3\}$. Half--balls identified by the equivalence relation are drawn with the same texture and color.}
\end{figure}
We remark that for technical reasons the construction in~\cite{cover}
requires the definition of a \emph{pair of cuts} joining the point $p_i$ with $p_{i+1}$.
Then the equivalence relation is defined identifying the open sets enclosed by the pairs of cuts.
\begin{rem}\label{presc}
Given a function $f: Y \rightarrow\mathbb{R}^m$
it is possible to define the parametrizations of $f$ on the sheet $j$
as a function $f^j : M \rightarrow \mathbb{R}^m$ for every $j=1,\ldots,n$
(see~\cite[Definition 2.5, Definition 2.6]{calicipi} for further details).
It is then possible to define functions $f$ (resp. sets $E$)
on the covering space $Y$ prescribing the parametrizations $f^j : M \rightarrow \mathbb{R}^m$
(resp. sets $E^j$) for every $j=1,\ldots,n$.
The set $E^j$ is the set determined by the parametrization of $\chi_E$ on the sheet $j$.
\end{rem}
We define now the class of sets $\mathscr{P}_{constr}(Y)$ that we will consider
to get the equivalence with the Steiner Problem.
A set $E$ belongs to the space $\mathscr{P}_{constr}(Y)$
if it is a set of finite perimeter in $Y$ such that,
for almost every $x$ in the base space, there exists exactly one point $y$ of $E$
with $p(y)=x$, and if it satisfies a suitable boundary condition at infinity.
More precisely, fixing an open, regular and bounded set
$\Lambda\subset\mathbb{R}^2$ such that
$\Sigma\subset \Lambda$ and $\mathrm{Conv}(S)\subset \Lambda$ (here $\mathrm{Conv}(S)$ denotes the convex envelope of $S$), we define rigorously $\mathscr{P}_{constr}(Y)$ as follows:
\begin{dfnz}[Constrained sets]\label{zerounoconstr}
We denote by $\mathscr{P}_{constr}(Y)$ the space of
the sets of finite perimeter in $Y$ such that
\begin{itemize}\label{zerouno}
\item [$i)$] $\sum_{p(y) = x} \chi_E(y) = 1\ \ $ for almost every $x\in M$,
\item [$ii)$] $\chi_{E^1}(x) = 1\ \ $ for every $x\in \mathbb{R}^2 \setminus \Lambda$.
\end{itemize}
\end{dfnz}
We look for
\begin{equation}\label{minpro2}
\min \left\{P(E) : E\in\mathscr{P}_{constr}(Y)\right\}\,.
\end{equation}
\begin{rem}\label{independent}
Problem~\eqref{minpro2} does not depend on the choice of the cut $\Sigma$
in the definition of the covering space $Y$ (see \cite{cover}).
Moreover given $E_{min}$ a minimizer for~\eqref{minpro2}
it is always possible to label the points $S$ in such a way
that the cut $\Sigma$ does not intersect the projection of the reduced boundary of
$E_{\min}$ (see~\cite[Proposition 2.28]{calicipi}).
From now on we always make this choice of the labeling of $S$.
\end{rem}
\begin{thm}
The Steiner Problem is equivalent to Problem~\eqref{minpro2}.
\end{thm}
\begin{proof}
See \cite[Theorem 2.30]{calicipi}.
\end{proof}
Once we have reduced the Steiner Problem to Problem~\eqref{minpro2},
a notion of calibration follows extremely naturally.
\begin{dfnz}[Calibration on coverings]\label{caliconvering}
Given $E\in\mathscr{P}_{constr}(Y)$,
a calibration for $E$
is an approximately regular vector field $\widetilde{\Phi} :Y\to\mathbb{R}^2$ (Definition \ref{approxi}) such that:
\begin{enumerate}
\item [(\textbf{1})] $\div\widetilde{\Phi} =0$;
\item [(\textbf{2})] $\vert \widetilde{\Phi} ^i (x) - \widetilde{\Phi} ^j (x)\vert \leq 2$
for every $i,j = 1,\ldots,n$ and for every $x\in M$;
\item [(\textbf{3})] $\int_{Y} \widetilde{\Phi} \cdot D\chi_E=P(E)$.
\end{enumerate}
\end{dfnz}
As desired we have that
if $\widetilde{\Phi} :Y\to\mathbb{R}^{2}$ is a calibration for $E$,
then $E$ is a minimizer of Problem~\eqref{minpro2}~\cite[Theorem 3.5]{calicipi}.
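In analogy with the classical argument recalled in the Introduction, the mechanism behind this result can be sketched as follows: given any competitor $F\in\mathscr{P}_{constr}(Y)$ one has
\begin{equation*}
P(E)=\int_{Y} \widetilde{\Phi} \cdot D\chi_E=\int_{Y} \widetilde{\Phi} \cdot D\chi_F\leq P(F)\,,
\end{equation*}
where the first equality is condition (\textbf{3}), the second follows from condition (\textbf{1}) together with the constraint, and the inequality is a consequence of condition (\textbf{2}), as the reduced boundary of a constrained set appears in pairs lying on different sheets of $Y$.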
\medskip
We recall that we can reformulate the problem in terms of
$BV$ functions with values in $\{0,1\}$:
we define $BV_{constr}(Y,\{0,1\})$ as the space of
functions $u \in BV(Y,\{0,1\})$ such that for almost every
$x\in M $ it holds
$\sum_{p(y) = x} u(y) = 1$
and $u^1(x) = 1$ for every $x\in \mathbb{R}^2 \setminus \Lambda$.
Then we minimize the total variation among functions in
$BV_{constr}(Y,\{0,1\})$.
\subsection{Minimal partitions problem and paired calibrations \cite{lawmor}}\label{partizioni}
We provide here the definition of \emph{paired calibrations}~\cite{lawmor}
of a minimal partition in the plane
(see for example~\cite{chambollecremerspock} for this formulation).
To speak about minimal partitions and paired calibration
we have to suppose that the points of
$S$ lie on the boundary of an open smooth convex set $\Omega$.
We define
\begin{displaymath}
\mathcal{B} := \left\{u=(u_1,\ldots,u_n)\in BV(\Omega, \{0,1\}^n) \mbox{ such that }
\sum_{i=1}^n u_i(x) = 1\ a.e.\mbox{ in } \Omega\right\}
\end{displaymath}
and a function $\overline{u} \in \mathcal{B}$ such that $\overline{u}_i = 1$
on the part of $\partial \Omega$ that connects $p_i$ with $p_{i+1}$.
We then define the energy:
\begin{equation*}
\mathcal{E}(u):= \sum_{i=1}^n |Du_i|(\Omega)\,.
\end{equation*}
\begin{dfnz}\label{min}
A function $u_{min} \in \mathcal{B}$ is a minimizer
for the partition problem if $u_{min} = \overline{u}$ on $\partial \Omega$ and
\begin{equation*}
\mathcal{E}(u_{min}) \leq \mathcal{E}(v)
\end{equation*}
for every $v\in \mathcal{B}$ such that $v = \overline u$ on $\partial \Omega$.
\end{dfnz}
\begin{dfnz}[Paired calibration]\label{paired}
A \emph{paired calibration} for $u\in \mathcal{B}$ is a collection of
$n$ approximately regular
vector fields $\phi_1,\ldots,\phi_n: \Omega \rightarrow \mathbb{R}^2$ such that
\begin{itemize}
\item $\div \phi_i = 0$ \quad for every $i=1,\ldots,n$,
\item $|\phi_i - \phi_j|\leq 2$ \quad a.e. in $\Omega$ and for every $i,j =1,\ldots,n$,
\item $(\phi_i - \phi_j)\cdot \nu_{ij} = 2$ \quad $\mathcal{H}^1$--a.e. in $J_{u_i} \cap J_{u_j}$
and for every $i,j =1,\ldots,n$,
\end{itemize}
where $J_{u_i}$ is the jump set of the function $u_i$ and
$\nu_{ij}$ denotes the normal to $J_{u_i} \cap J_{u_j}$.
\end{dfnz}
With this definition Morgan and Lawlor proved in~\cite{lawmor} that
if there exists a paired calibration for a given $u\in \mathcal{B}$,
then the latter is a minimizer of the minimal partition problem according to Definition~\ref{min}.
\medskip
Given $u=(u_1,\ldots,u_n)$ a minimizer for the partition problem
the union of the jump sets of $u_i$ is a minimal Steiner network.
Conversely given a minimal Steiner network $\mathcal{S}$ it is possible to construct
$v =(v_1,\ldots,v_n) \in \mathcal{B}$ such that the union of the jump sets of $v_i$
is the network $\mathcal{S}$. Such a $v$ is a minimizer for the partition problem.
Therefore Definition~\ref{paired} is a legitimate
notion of calibration for the Steiner Problem as well.
\begin{rem}\label{pairedecovering}
Calibrations on coverings in Definition~\ref{caliconvering}
are a generalization of paired calibrations.
Indeed when the points of $S$ lie on the boundary of a convex set
(the only case in which paired calibrations are defined)
the two notions are equivalent.
Suppose that the points of $S$ lie on the boundary of a convex set $\Omega$.
Then in the construction of $Y$ we can choose the cut $\Sigma$
outside $\Omega$.
Consider $u=(u_1,\ldots,u_n) \in \mathcal{B}$ a minimizer for the minimal partition problem
and a paired calibration $(\phi_1,\ldots,\phi_n)$ for $u$.
Define then $\widetilde{u}\in BV_{constr}(Y,\{0,1\})$
prescribing the parametrization on each sheet of $Y$ as
\begin{equation*}
\widetilde{u}^i = u_{n+1-i} \qquad \mbox{for } i=1,\ldots,n\,.
\end{equation*}
Notice that with this choice
$|D\widetilde u|(Y) = \mathcal{E}(u)$.
Define a vector field $\widetilde \Phi : Y \rightarrow \mathbb{R}^2$
prescribing its parametrizations on the sheets of the covering spaces (see Remark~\ref{presc}) as
\begin{equation*}
\widetilde{\Phi}^i=\phi_{n+1-i} \qquad \mbox{for } i=1,\ldots,n\,.
\end{equation*}
It is easy to check that $\widetilde{u}$ is a minimizer for Problem~\eqref{minpro2} and that
$\widetilde{\Phi}$ is a calibration for $\widetilde{u}$
according to Definition~\ref{caliconvering}.
Similarly, given a calibration $\widetilde \Phi$ for $\widetilde u \in BV_{constr}(Y,\{0,1\})$
minimizer for Problem~\eqref{minpro2} one can construct a paired calibration for
$u \in \mathcal{B}$ minimizer for the minimal partition problem.
\end{rem}
\subsection{Currents with coefficients in $\mathbb{R}^n$ \cite{annalisaandrea}}\label{unocorrenti}
We briefly summarize here the theory of
currents with coefficients in $\mathbb{R}^n$
with the approach given in~\cite{annalisaandrea}.
The notion of
currents with coefficients in a group was introduced by W.~Fleming~\cite{fleming}. We mention also
the work of B.~White~\cite{white1, white2}.\\
Consider the normed space $(\mathbb{R}^n, \Vert\cdot\Vert)$ and
denote by $\Vert\cdot\Vert_{\ast}$ the dual norm.
For $k=0,1,2$ we call $\Lambda_k(\mathbb{R}^2)$ the space of $k$--vectors in $\mathbb{R}^2$.
\begin{dfnz}[$k$--covector with values in $\mathbb{R}^n$]
A $k$--covector with values in $\mathbb{R}^n$ is a linear map from
$\Lambda_k(\mathbb{R}^2)$ to $\mathbb{R}^n$.
We denote by $\Lambda^k_{n}(\mathbb{R}^2)$
the space of $k$--covectors with values in $\mathbb{R}^n$.
\end{dfnz}
We define the comass norm of a covector
$\omega\in \Lambda^k_{n}(\mathbb{R}^2)$ as
\begin{equation*}
\vert \omega\vert_{com}:=\sup\left\lbrace
\Vert \omega(\tau)\Vert_\ast\,:\;\tau\in \Lambda_k(\mathbb{R}^2)\;\text{with}\,
\vert \tau\vert\leq1 \mbox{ and } \tau \mbox{ simple}
\right\rbrace\,.
\end{equation*}
Then the $k$--forms with values in $\mathbb{R}^n$ are defined as the vector fields
$\omega\in C^\infty_c(\mathbb{R}^2, \Lambda^k_{n}(\mathbb{R}^2))$
and their comass is given by
\begin{equation*}
\Vert\omega\Vert_{com}:=\sup_{x\in\mathbb{R}^2} \vert\omega(x)\vert_{com}\,.
\end{equation*}
\begin{rem}
Notice that the definition of the space
$C^\infty_c(\mathbb{R}^2, \Lambda^k_{n}(\mathbb{R}^2))$
is equivalent to the one presented in~\cite{annalisaandrea}.
Indeed they consider $k$--covectors $\omega$ defined as bilinear maps
\begin{equation*}
\omega : \Lambda_k(\mathbb{R}^2) \times \mathbb{R}^n \rightarrow \mathbb{R}\,,
\end{equation*}
that can be seen as $k$--covectors with values in $(\mathbb{R}^n)'$.
\end{rem}
Thanks to the just defined notions we are able to introduce the definition
of $k$--current with coefficients in $\mathbb{R}^n$.
\begin{dfnz}[$k$--current with coefficients in $\mathbb{R}^n$]
A $k$--current with coefficients in $\mathbb{R}^n$
is a linear and continuous map
\begin{equation*}
T:C^\infty_c(\mathbb{R}^2, \Lambda^k_{n}(\mathbb{R}^2))\to \mathbb{R}\,.
\end{equation*}
\end{dfnz}
The \emph{boundary} of a
$k$--current $T$ with coefficients in $\mathbb{R}^n$ is a $(k-1)$--current
defined as
\begin{equation*}
\partial T(\omega):=-T(d\omega)\,,
\end{equation*}
where $d\omega$ is defined component--wise.
\begin{dfnz}[Mass]
Given $T$ a $k$--current with coefficients in $\mathbb{R}^n$ its mass is
\begin{equation*}
\mathbb{M}(T):=\sup\left\lbrace T(\omega)\,:\;\omega\in
C^\infty_c(\mathbb{R}^2, \Lambda^k_{n}(\mathbb{R}^2))
\;\text{with}\,\Vert\omega\Vert_{com}\leq 1
\right\rbrace\,.
\end{equation*}
\end{dfnz}
A $k$--current $T$ with coefficients in $\mathbb{R}^n$ is said to be \emph{normal}
if $\mathbb{M}(T)<\infty$ and $\mathbb{M}(\partial T)<\infty$.
\begin{dfnz}[$1$--rectifiable current with coefficients in $\mathbb{Z}^n$]
Given $\Sigma$ a $1$--rectifiable set oriented by $\tau\in\Lambda_1(\mathbb{R}^2)$, simple, such that
$\vert\tau(x)\vert=1$ for a.e. $x\in \Sigma$ and $\theta:\Sigma\to\mathbb{Z}^{n}$ in $L^1(\mathcal{H}^1)$,
a $1$--current $T$ is rectifiable with coefficients in $\mathbb{Z}^n$
if it admits the following representation:
\begin{equation*}
T(\omega)=\int_{\Sigma} \left\langle
\omega(x)(\tau(x)), \theta(x)\right\rangle\,\mathrm{d}\mathcal{H}^1\,.
\end{equation*}
A $1$--rectifiable current with coefficients in $\mathbb{Z}^n$
will be denoted by the triple $T=[\Sigma, \tau, \theta]$.
\end{dfnz}
Notice that if $T=[\Sigma, \tau, \theta]$ is a
$1$--rectifiable current with coefficients in $\mathbb{Z}^n$
one can write its mass as
\begin{equation*}
\mathbb{M}(T)=\int_{\Sigma}\Vert \theta(x)\Vert\,\mathrm{d}\mathcal{H}^1\,.
\end{equation*}
\begin{rem}
The space of $1$--covectors with values in $\mathbb{R}^n$ can be identified
with the set of matrices $M^{n\times 2}(\mathbb{R})$.
In what follows we will assume this identification
and we will denote the set of $1$--forms by $C_c^\infty(\mathbb{R}^2,M^{n\times 2}(\mathbb{R}))$.
Moreover given $\omega \in C_c^\infty(\mathbb{R}^2,M^{n\times 2}(\mathbb{R}))$ we write it as
\begin{equation*}
\omega=
\begin{bmatrix}
\omega_1(x) \\
\vdots \\
\omega_n(x) \\
\end{bmatrix}\,,
\end{equation*}
where $\omega_i : \mathbb{R}^2 \rightarrow \mathbb{R}^2$. Notice that each $\omega_i(x)$ is a classical $1$-form, hence (identifying $2$--forms with functions via the canonical Hodge dual) its differential can be written as
\begin{displaymath}
d\omega_i = \frac{\partial (\omega_i)_1}{\partial x_2} - \frac{\partial (\omega_i)_2}{\partial x_1} = \div \omega_i^\perp\,,
\qquad \omega_i^\perp:=(-(\omega_i)_2,(\omega_i)_1)\,,
\end{displaymath}
and therefore we can define $d\omega$ as
\begin{equation*}
d\omega=
\begin{bmatrix}
\div \omega^\perp_1 \\
\vdots \\
\div \omega^\perp_n \\
\end{bmatrix}\,.
\end{equation*}
\end{rem}
Let $(g_i)_{i=1,\ldots,n-1}$ be the canonical base of $\mathbb{R}^{n-1}$.
Define $g_n = -\sum_{i=1}^{n-1} g_i$.
Given $B = g_1 \delta_{p_1} + \ldots + g_n \delta_{p_n}$
we consider the following minimization problem:
\begin{equation}\label{minprocurrents}
\inf \left\{\mathbb{M}(T)\ : \ T\mbox{ is a } 1\mbox{--rectifiable
current with coefficients in } \mathbb{Z}^{n-1} ,\ \partial T = B\right\}\,.
\end{equation}
To have the equivalence between Problem~\eqref{minprocurrents}
and the Steiner Problem~\eqref{ste} the choice
of the norm of $\mathbb{R}^{n-1}$ plays an important role.
Indeed given $\mathcal{I}$ any subset of $\{1,\ldots,n-1\}$ it is required in \cite{annalisaandrea} that
\begin{equation}\label{prope}
\left\lVert \sum_{i\in \mathcal{I}} g_i\right\rVert =1\,.
\end{equation}
\begin{thm}\label{zu}
Choosing a norm satisfying \eqref{prope}, the Steiner Problem is equivalent to Problem \eqref{minprocurrents}.
\end{thm}
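As an illustration, let $n=3$ and let $T=[\Sigma,\tau,\theta]$ be supported on a triple junction $\Sigma$ whose three branches are oriented from the triple point $q$ towards $p_1$, $p_2$ and $p_3$ and carry multiplicities $\theta=g_1$, $g_2$ and $g_3=-(g_1+g_2)$ respectively. The contributions at $q$ cancel because $g_1+g_2+g_3=0$, so that $\partial T=g_1\delta_{p_1}+g_2\delta_{p_2}+g_3\delta_{p_3}=B$, and any norm satisfying \eqref{prope} gives $\Vert\theta(x)\Vert=1$ for $\mathcal{H}^1$--a.e. $x\in\Sigma$, whence
\begin{equation*}
\mathbb{M}(T)=\int_\Sigma \Vert\theta(x)\Vert\,\mathrm{d}\mathcal{H}^1=\mathcal{H}^1(\Sigma)\,,
\end{equation*}
the length of the network.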
The notion of calibration associated to the mass minimization problem \eqref{minprocurrents} introduced in \cite{annalisaandrea} is the following:
\begin{dfnz}[Calibration for $1$--rectifiable currents]\label{calicurrents}
Let $T=[\Sigma, \tau, \theta]$ be a
$1$--rectifiable current with coefficients in $\mathbb{Z}^{n-1}$ and
$\Phi\in C_c^\infty(\mathbb{R}^2,M^{{n-1}\times 2}(\mathbb{R}))$.
Then
$\Phi$ is a calibration for $T$ if
\begin{itemize}
\item[(i)] $d\Phi = 0$;
\item[(ii)] $\|\Phi\|_{com} \leq 1$;
\item[(iii)] $\langle \Phi(x)\tau(x),\theta(x)\rangle
= \|\theta(x)\|$ for $\mathcal{H}^1$-a.e. $x \in \Sigma$.
\end{itemize}
\end{dfnz}
If $\Phi\in C_c^\infty(\mathbb{R}^2,M^{{n-1}\times 2}(\mathbb{R}))$ is a calibration
for $T=[\Sigma, \tau, \theta]$ a
$1$--rectifiable current with coefficients in $\mathbb{Z}^{n-1}$,
then $T$ is a minimizer of Problem~\eqref{minprocurrents}.
To be more precise $T$ is a minimizer among normal currents
with coefficients in $\mathbb{R}^{n-1}$ \cite{annalisaandrea}.
\begin{rem}
In Proposition~\ref{approximately} in the appendix we prove that it is possible
to weaken the regularity of the calibration $\Phi$ and consider
$\Phi : \mathbb{R}^2 \rightarrow M^{{n-1}\times 2}(\mathbb{R})$ such that each row
is an approximately regular vector field (see also \cite{annalisaandrea} for a definition of calibration with weaker regularity assumptions on the vector fields).
In the next section we assume implicitly that $\Phi$ is approximately regular.
\end{rem}
\section{Relations among the different notions of
calibrations}\label{equivalence}
We have already discussed the equivalence between paired calibrations
and calibrations on coverings (see Remark~\ref{pairedecovering}).
We focus now on the relation with Definition~\ref{calicurrents}.
Definition~\ref{calicurrents} depends on the choice of the norm on $\mathbb{R}^n$.
Define $\Vert \cdot\Vert _{\flat}$ as
\begin{equation*}
\Vert x\Vert _{\flat}:=\sup_{x_i>0} x_i - \inf_{x_i\leq 0} x_i
\end{equation*}
for every $x\in \mathbb{R}^n$
(with the convention that the supremum and the infimum over the empty set are $0$).
This is the norm considered by Marchese and Massaccesi~\cite{annalisaandrea} and in particular it satisfies property \eqref{prope}.
In~\cite{annalisaandrea} it is also proved that
the dual norm $\Vert \cdot\Vert_{\flat,\ast}$ can be characterized as
follows:
\begin{equation}\label{dualnorm}
\Vert x\Vert _{\flat,\ast}= \max\left\{
\sum_{x_i > 0} x_i,
\sum_{x_i \leq 0}|x_i|
\right\}\,.
\end{equation}
\medskip
\textbf{From calibrations for currents to calibrations on coverings}
\medskip
From here on we endow $\mathbb{R}^n$
with $\Vert \cdot\Vert=\Vert \cdot\Vert _{\flat}$.
With this choice,
we show that if there exists a calibration for a
$1$--rectifiable current with coefficients in $\mathbb{Z}^{n-1}$,
then there exists a calibration for $E\in\mathscr{P}_{constr}$
minimizer for Problem~\eqref{minpro2}.
\begin{lemma}\label{construction}
Given $S=\{p_1,\ldots,p_n\}$ with the points $p_i$
lying on the boundary of a convex set $\Omega$
labelled in an anticlockwise sense and $u=(u_1,\ldots,u_n)$
a competitor of the minimal partition problem, it is possible to
construct a $1$--rectifiable current $T=[\Sigma, \tau, \theta]$
with coefficients in $\mathbb{Z}^{n-1}$ such that
$2\mathbb{M}(T)=\mathcal{E}(u)$,
$\partial T=g_1\delta_{p_1}+\ldots+g_n\delta_{p_n}$
and for $\mathcal{H}^1$--a.e. $x\in J_{u_i} \cap J_{u_j}$
\begin{equation}\label{summ}
\theta(x) = \sum_{k=i}^{j-1} g_k\,.
\end{equation}
\end{lemma}
\begin{proof}
For $i=1,\ldots,n$ let $A_i$ be the
phases of the partition induced by $u=(u_1,\ldots,u_n)$, that is
$u_i=\chi_{A_i}$.
Notice that for every $i\in\{1,\ldots,n\}$
the set $\partial^\ast A_i$ is a $1$--rectifiable set in $\mathbb{R}^2$
with tangent $\tau_i$ almost everywhere and it joins the points $p_{i-1}$ and
$p_{i}$ (with the convention that $p_0 = p_n$).
For $i\in\{1,\ldots,n\}$
we define $T_i=[\partial^\ast A_i, \tau_i, a_i]$,
where the multiplicities $a_i$ are chosen in such a way that
$a_{i} - a_{i+1} = g_i$ for $i=1,\ldots, n-1$.
Then
for every $i,j= 1,\ldots, n$
\begin{equation}\label{molt}
a_i - a_j = \sum_{k=i}^{j-1} g_k\,.
\end{equation}
We set
\begin{equation*}
T = \sum_{i=1}^n T_i\,.
\end{equation*}
Denoting by $\theta_{T}$ the multiplicity of $T$,
by construction $\theta_{T}(x) = a_i - a_j$ for $\mathcal{H}^1$--a.e.
$x \in \partial^\ast A_i \cap \partial^\ast A_j$, which thanks to \eqref{molt} gives \eqref{summ}.
Moreover, since $\partial T_{i} = a_i (\delta_{p_{i}} - \delta_{p_{i-1}})$
(again with the convention that $p_{0} = p_1$ is replaced by $p_0 = p_n$), we infer
\begin{eqnarray*}
\partial T &=& \sum_{i=1}^n \partial T_i
= \sum_{i=1}^n a_{i}(\delta_{p_{i}} - \delta_{p_{i-1}}) = \sum_{i=1}^n
a_{i}\delta_{p_{i}} - \sum_{i=1}^n a_{i}\delta_{p_{i-1}} \\
&=& \sum_{i=1}^{n} a_{i}\delta_{p_{i}} - \sum_{i=0}^{n-1}
a_{i+1}\delta_{p_i} = (a_n - a_1)\delta_{p_n}
+ \sum_{i=1}^{n-1} (a_i - a_{i+1}) \delta_{p_i}\\
&=& \sum_{i=1}^n g_i \delta_{p_i}\,,
\end{eqnarray*}
where in the last equality we used that $a_i - a_{i+1} = g_i$ for $i=1,\ldots,n-1$ and that $a_n - a_1 = -\sum_{k=1}^{n-1} g_k = g_n$.
\end{proof}
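To illustrate the choice of the multiplicities in the proof above, consider $n=3$ (an example of ours): one can take
\begin{equation*}
a_3=0\,,\qquad a_2=g_2\,,\qquad a_1=g_1+g_2\,,
\end{equation*}
so that $a_1-a_2=g_1$ and $a_2-a_3=g_2$; then \eqref{molt} gives for instance $a_1-a_3=g_1+g_2=\sum_{k=1}^{2}g_k$, which is exactly the multiplicity prescribed by \eqref{summ} on $J_{u_1}\cap J_{u_3}$.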
\begin{thm}\label{dacalicurrentacalicovering}
Given $S=\{p_1,\ldots,p_n\}$ lying on the boundary of a convex set $\Omega$,
let $E\in\mathscr{P}_{constr}$ be a
minimizer for Problem~\eqref{minpro2}.
Let $\Phi$ be a calibration
according to Definition~\ref{calicurrents}
for $T=[\Sigma,\tau,\theta]$ a $1$--rectifiable current with
coefficients in
$\mathbb{Z}^{n-1}$ minimizer of Problem~\eqref{minprocurrents}.
Then there exists $\widetilde{\Phi}$ a calibration for $E$
(according to Definition~\ref{caliconvering}).
\end{thm}
\begin{proof}
We label the $n$ points of $S$ in an anticlockwise sense.
By choosing the cuts $\Sigma$ outside $\Omega$, calibrations on coverings (Definition~\ref{caliconvering}) reduce to
paired calibrations (Definition~\ref{paired}).
Hence we are looking for a collection $\widetilde{\Phi}$
of $n$ vector fields $\widetilde{\Phi}^i:\Omega\to \mathbb{R}^2$
for $u=(u_1,\ldots,u_n)\in\mathcal{B}$ where $u_{n+1-i}=\chi_{E^i}$ (see
Remark \ref{pairedecovering}).
Let $\Phi_1,\ldots,\Phi_{n-1}$ be the rows of the matrix $\Phi$.
We claim that the collection
of $n$ vector fields $\widetilde{\Phi}^i$ defined by
\begin{equation}\label{relation}
\widetilde{\Phi}^i-\widetilde{\Phi}^{i+1}=2\Phi_i^\perp \quad \mbox{for }
i=1,\ldots, n-1
\end{equation}
is a calibration for $E$. Notice that from~\eqref{relation} we deduce that
\begin{equation*}
\widetilde{\Phi}^{n}-\widetilde{\Phi}^1
=-2\sum_{i=1}^{n-1}\Phi_i^\perp\,.
\end{equation*}
We have to show that $\widetilde\Phi$ satisfies conditions (\textbf{1}),
(\textbf{2}), (\textbf{3}) of Definition \ref{caliconvering}.
\begin{itemize}
\item[(\textbf{1})] The divergence of $\widetilde\Phi$
is zero: since $d\Phi = 0$, each $\Phi_i^\perp$ is divergence free, hence by~\eqref{relation} one can choose the fields $\widetilde\Phi^i$ so that $\div \widetilde \Phi^i = 0$ in $\Omega$
for every $i=1,\ldots,n$ (notice that we have taken the cuts $\Sigma$
outside $\Omega$).
\item[(\textbf{2})] By Definition~\ref{calicurrents} it holds
that $\|\Phi\|_{com} \leq 1$, hence for every $x\in\mathbb{R}^2$ we have
\begin{equation*}
\sup\left\lbrace\Vert\Phi(x)\tau\Vert_{\flat, \ast}
\;\Big\vert\, \tau\in\mathbb{R}^2,\;\vert\tau\vert \leq 1\right\rbrace\leq
1\,.
\end{equation*}
Writing the $n-1$ components of
$\Phi(x)\tau$ as $\left\langle\Phi_1(x),\tau\right\rangle,
\ldots, \left\langle\Phi_{n-1}(x),\tau\right\rangle$ and
using~\eqref{dualnorm},
for every $\tau \in \mathbb{R}^2$ such that $|\tau| \leq 1$ and $i,j =
1,\ldots,n$ with $i \leq j-1$ we obtain
\begin{align*}
1 \geq &\Vert\Phi(x)\tau\Vert_{\flat, \ast}
=\max\left\lbrace
\sum_{\left\langle\Phi_k,\tau\right\rangle>0}
\left\langle\Phi_k,\tau\right\rangle,
-\sum_{\left\langle\Phi_k,\tau\right\rangle<0}\left\langle\Phi_k,\tau\right\rangle
\right\rbrace
\\
& \geq \left\lvert \sum_{k=i}^{j-1}\left\langle\Phi_k,\tau\right\rangle
\right\rvert
= \left\lvert \left\langle
\sum_{k=i}^{j-1} \Phi_{k}, \tau
\right\rangle\right\rvert\,,
\end{align*}
where the second inequality holds because any partial sum of the $\left\langle\Phi_k,\tau\right\rangle$ is bounded from above by the sum of its positive terms and from below by the sum of its negative ones.
Therefore
\begin{equation*}
\left|\sum_{k=i}^{j-1} \Phi_k^\perp\right| \leq 1\,.
\end{equation*}
Notice that from \eqref{relation} we obtain that for every
$i,j\in\{1,\ldots,n\}$
\begin{equation*}
\vert \widetilde{\Phi}^i-\widetilde{\Phi}^j\vert
=2\left\vert \sum_{k=i}^{j-1}\Phi_k^\perp\right\vert\,.
\end{equation*}
Hence
for every $i,j\in\{1,\ldots,n\}$ condition
$\vert \widetilde{\Phi}^i(x)-\widetilde{\Phi}^j(x)\vert\leq 2$ is
fulfilled for every $x\in \Omega$.
\item[(\textbf{3})]
We can apply
the construction of Lemma~\ref{construction} with $E^i=A_{n+1-i}$ to
produce a $1$--rectifiable current $\overline{T}$
with coefficients in $\mathbb{Z}^{n-1}$ such that
$\partial \overline{T} = \partial T$ and
$2\mathbb{M}(\overline{T}) = P(E)$.
Moreover,
thanks to the fact that $E$ is a minimizer for~\eqref{minpro2} and $T$
for~\eqref{minprocurrents},
we get
\begin{equation*}
2\mathbb{M}(\overline{T}) =
P(E)=2\mathcal{H}^1(\mathcal{S})=2\mathbb{M}(T)\,,
\end{equation*}
where $\mathcal{S}$ is a minimizer for the Steiner Problem~\eqref{ste}.
The current $\overline{T}$ has the same boundary and the same mass as $T$,
hence it is a minimizer for Problem~\eqref{minprocurrents} as well.
Therefore $\Phi$ is a calibration also for $\overline{T}$.
Then for $\mathcal{H}^1$--a.e.
$x\in \overline{p(\partial^\ast E)}$ we have
\begin{displaymath}
\langle \Phi \tau, \theta_{\overline{T}} \rangle = 1\,.
\end{displaymath}
Using~\eqref{summ}, for $\mathcal{H}^1$--a.e. $x \in \partial^\ast A_i \cap
\partial^\ast A_j$
the previous equation reads as
\begin{equation*}
1 = \langle \Phi \tau, \sum_{k=i}^{j-1} g_k \rangle = \sum_{k=i}^{j-1}
\langle \Phi \tau, g_k \rangle =
\sum_{k=i}^{j-1} \langle \Phi_k, \tau\rangle=
\sum_{k=i}^{j-1} \langle \Phi^\perp_k, \nu_{ij}\rangle =
\frac{1}{2}\langle \widetilde{\Phi}^i - \widetilde{\Phi}^j, \nu_{ij}
\rangle\,,
\end{equation*}
which is the third condition of the paired calibrations and, in our setting,
is equivalent to (\textbf{3}) of Definition \ref{caliconvering}.
\end{itemize}
\end{proof}
When the points of $S$ do not lie on the boundary of a convex set,
we cannot take advantage of the equivalence between calibrations on coverings and
paired calibrations
(that are not defined if the points of $S$ are not on the boundary of a convex set).
Indeed in this case we look for a unique vector field $\widetilde{\Phi}:Y\to\mathbb{R}^2$
defined on the whole space $Y$ and
satisfying the requirements of Definition~\ref{caliconvering}.
As one can guess from Step \textbf{(3)} in the proof of
Theorem~\ref{dacalicurrentacalicovering},
relation~\eqref{relation}
has to be satisfied locally around the jumps.
This is suggested by
the \emph{local} equivalence between the minimal partition problem
and Problem~\eqref{minpro2}, which hints at a local
equivalence between calibrations on coverings and paired calibrations.
Thanks to Remark~\ref{presc},
once $\widetilde{\Phi}^i$
is defined on each sheet
one can construct $\widetilde{\Phi}$,
but in such an extension/identification
procedure it is not guaranteed that the divergence of
$\widetilde{\Phi}$ is zero.
It seems to us that the existence of a calibration $\widetilde{\Phi}$
is plausible, but the extension of the field
has to be treated case by case.
At the moment we do not have a procedure to construct $\widetilde{\Phi}$ globally.
\medskip
\textbf{From calibrations on coverings to calibrations for currents}
\medskip
Given a calibration for $E\in\mathscr{P}_{constr}$
minimizer for Problem~\eqref{minpro2}
we want now to construct a calibration for $T$,
a $1$--rectifiable current with coefficients in $\mathbb{Z}^{n-1}$
minimizer for Problem~\eqref{minprocurrents}.
Notice that given any competitor $\overline{T}=[\overline{\Sigma},\overline{\tau},\overline{\theta}]$, testing Condition ii) of Definition~\ref{calicurrents} on $\overline{T}$ reduces to showing that
\begin{equation*}
\left\langle\Phi(x)\overline{\tau}(x),\overline{\theta}(x)\right\rangle
\leq \Vert \overline{\theta}(x)\Vert_\flat \quad \mbox{for } \mathcal{H}^1\mbox{--a.e. } x\in\overline{\Sigma}\,.
\end{equation*}
Moreover it suffices to evaluate
$\left\langle\Phi(x)\overline{\tau}(x),\cdot \right\rangle$
on the extremal points of the unit ball of the norm $\Vert\cdot\Vert_\flat$
that are $P_{\mathcal{I}} = \sum_{i\in\mathcal{I}} g_i$ for every nonempty $\mathcal{I}\subset\{1,\ldots,n-1\}$
(see~\cite[Example 3.4]{annalisaandrea}).
Hence proving Condition ii) reduces to verifying $2^{n-1}-1$ inequalities.
On the other hand Condition (\textbf{2}) of Definition~\ref{caliconvering}
requires verifying $\frac{n(n-1)}{2}$ inequalities.
Apart from the case of $2$ and $3$ points,
Condition (\textbf{2}) of Definition~\ref{caliconvering} is weaker than
Condition ii) of Definition~\ref{calicurrents}.
Hence in general one cannot construct a calibration for $T$
starting from a calibration for $E$.
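To quantify the comparison (an elementary count of ours, based on the two formulas above): listing the two numbers of inequalities for increasing $n$ one finds
\begin{equation*}
2^{n-1}-1 = 1,\,3,\,7,\,15,\,31,\ldots \qquad \frac{n(n-1)}{2} = 1,\,3,\,6,\,10,\,15,\ldots \qquad (n=2,3,4,5,6)\,,
\end{equation*}
so the two counts coincide exactly for $n=2,3$, while Condition ii) becomes strictly more demanding from $n=4$ on.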
\medskip
To restore an equivalence result we slightly change Problem~\eqref{minprocurrents}.
Define a norm $\Vert\cdot\Vert_\natural$ on $\mathbb{R}^{n-1}$
characterized by the property that its unit ball is the smallest convex set, symmetric with respect to the origin, such that
\begin{equation*}
\left\Vert \sum_{k=i}^{j-1} g_k \right\Vert_\natural=1 \quad\text{with}\ i\leq j-1\ ,\ i,j \in \{1,\ldots,n\}\,.
\end{equation*}
For $n=4$ (in this case the admissible coefficients are $g_1,g_2$ and $g_3$) the unit ball of the norm $\Vert\cdot\Vert_{\natural}$ is depicted in Figure~\ref{norm}.
Notice that if we consider the norm $\Vert\cdot\Vert_\flat$, the mass of all curves appearing in Figure~\ref{exampleA} coincides with the length.
If instead we use the norm $\Vert\cdot\Vert_{\natural}$, the mass
of the curve with multiplicity $g_1+g_3$ is strictly bigger than its length. This is still a natural choice if we want to prove an equivalence with the Steiner problem as in Theorem \ref{zu} for the norm $\Vert\cdot\Vert_{\natural}$. Indeed for a specific labelling of the points, the curve with multiplicity $g_1 + g_3$ has to lie outside the convex envelope of $p_1,\ldots,p_4$ and therefore the rightmost competitor in Figure~\ref{exampleA} cannot be a minimizer for the Steiner problem.
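For example, identifying $g_1,g_2,g_3$ with the standard basis of $\mathbb{R}^3$ (an explicit computation of ours for $n=4$), one finds
\begin{equation*}
\Vert g_1+g_3\Vert_{\flat}=1\,,\qquad \Vert g_1+g_3\Vert_{\natural}=2\,.
\end{equation*}
Indeed $g_1+g_3=(g_1+g_2+g_3)-g_2$ yields $\Vert g_1+g_3\Vert_{\natural}\leq 2$, while the functional $\phi=(1,-1,1)$ satisfies $\vert\langle\phi,P\rangle\vert\leq 1$ on all the extremal points $P=\pm\sum_{k=i}^{j-1}g_k$ of the unit ball of $\Vert\cdot\Vert_{\natural}$ and $\langle\phi,g_1+g_3\rangle=2$, so that $\Vert g_1+g_3\Vert_{\natural}\geq 2$.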
From now on we write either $\mathbb{M}_\natural$ or $\mathbb{M}_\flat$
to distinguish when the mass is computed using
either the norm $\Vert\cdot\Vert_\natural$ or $\Vert\cdot\Vert_\flat$.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw[black!50!white]
(0,0)--(0,5)
(0,0)--(-2.5,-4.33)
(0,0)--(4.33, -2.5)
(0,0)--(2.5,4.33)
(0,0)--(0,-5)
(0,0)--(-4.33, 2.5);
\draw[black]
(-1.5,-2.6)--(-1.5,0.4) %
(2.6, -1.5)--(2.6, 1.5) %
(0,3)--(-1.5,0.4)
(0,3)--(2.6, 1.5)
(-1.5,0.4)--(1.1,-1.09)
(2.6, 1.5)--(1.1,-1.09)
(1.1,-1.09)--(1.1,-4.09)
(-1.5,-2.6)--(1.1,-4.09)
(2.6, -1.5)--(1.1,-4.09);
\draw[black, dashed]
(0,-3)--(1.1,-4.09)%
(0,-3)--(1.5,-0.4)%
(0,-3)--(-2.6, -1.5)%
(1.5,-0.4)--(1.5,2.6)
(1.5,-0.4)--(-1.1,1.09)
(-2.6, -1.5)--(-1.1,1.09)
(-1.1,1.09)--(-1.1,4.09)
(2.6, -1.5)--(1.5,-0.4);
\draw[black]
(1.5,2.6)--(-1.1,4.09)
(-2.6,1.5)--(-1.1,4.09)
(1.5,2.6)--(2.6, 1.5)
(-2.6, -1.5)--(-2.6,1.5)
(0,3)--(-1.1,4.09) %
(-2.6, 1.5)--(-1.5,0.4)
(-2.6, -1.5)--(-1.5,-2.6);
\fill[black](0,-3)circle (1.7pt);
\fill[black](-2.6, 1.5)circle (1.7pt);
\fill[black](1.5,2.6)circle (1.7pt);
\fill[black](-2.6, -1.5)circle (1.7pt);
\fill[black](-1.1,1.09)circle (1.7pt);
\fill[black](-1.1,4.09)circle (1.7pt);
\fill[black](0,3)circle (1.7pt);
\fill[black](2.6, 1.5)circle (1.7pt);
\fill[black](-1.5,0.4)circle (1.7pt);
\fill[black](1.1,-1.09)circle (1.7pt);
\fill[black](-1.5,-2.6)circle (1.7pt);
\fill[black](2.6, -1.5)circle (1.7pt);
\fill[black](1.1,-4.09)circle (1.7pt);
\path[font=\tiny]
(0.1,2.85)node[below]{$g_2$}
(2.6, 1.5)node[right]{$g_1+g_2$}
(-1.5,0.4)node[right]{$g_2+g_3$}
(1.1,-1.09)node[left]{$g_1+g_2+g_3$}
(-1.5,-2.6)node[left]{$g_3$}
(1.1,-4.09)node[below]{$g_1+g_3$}
(2.6, -1.4)node[right]{$g_1$};
\end{tikzpicture}\quad\quad\quad
\begin{tikzpicture}[scale=0.7]
\draw[black!50!white]
(0,0)--(0,5)
(0,0)--(-2.5,-4.33)
(0,0)--(4.33, -2.5);
\draw[black]
(-1.5,-2.6)--(-1.5,0.4)
(2.6, -1.5)--(2.6, 1.5)
(0,3)--(-1.5,0.4)
(0,3)--(2.6, 1.5)
(-1.5,0.4)--(1.1,-1.09)
(2.6, 1.5)--(1.1,-1.09)
(1.1,-1.09)--(-1.5,-2.6)
(1.1,-1.09)--(2.6, -1.5);
\draw[black, dashed]
(0,-3)--(1.5,-0.4)
(0,-3)--(-2.6, -1.5)
(1.5,-0.4)--(1.5,2.6)
(1.5,-0.4)--(-1.1,1.09)
(-2.6, -1.5)--(-1.1,1.09)
(-2.6,1.5)--(-1.1,1.09)
(1.5,2.6)--(-1.1,1.09)
(2.6, -1.5)--(1.5,-0.4);
\draw[black]
(1.5,2.6)--(0,3)
(-2.6, 1.5)--(0,3)
(-2.6, -1.5)--(-2.6,1.5)
(1.5,2.6)--(2.6, 1.5)
(-1.5,-2.6)--(0,-3)
(2.6, -1.5)--(0,-3)
(-2.6, 1.5)--(-1.5,0.4)
(-2.6, -1.5)--(-1.5,-2.6);
\fill[black](0,-3)circle (1.7pt);
\fill[black](-2.6, 1.5)circle (1.7pt);
\fill[black](1.5,2.6)circle (1.7pt);
\fill[black](-2.6, -1.5)circle (1.7pt);
\fill[black](-1.1,1.09)circle (1.7pt);
\fill[black](0,3)circle (1.7pt);
\fill[black](2.6, 1.5)circle (1.7pt);
\fill[black](-1.5,0.4)circle (1.7pt);
\fill[black](1.1,-1.09)circle (1.7pt);
\fill[black](-1.5,-2.6)circle (1.7pt);
\fill[black](2.6, -1.5)circle (1.7pt);
\path[font=\tiny, white]
(1.1,-4.09)node[below]{$g_1+g_3$};
\path[font=\tiny]
(0.1,2.85)node[below]{$g_2$}
(2.6, 1.5)node[right]{$g_1+g_2$}
(-1.5,0.4)node[right]{$g_2+g_3$}
(1.1,-1.09)node[left]{$g_1+g_2+g_3$}
(-1.5,-2.6)node[left]{$g_3$}
(2.6, -1.4)node[right]{$g_1$};
\end{tikzpicture}
\end{center}
\caption{Left: the unit ball of the norm $\Vert\cdot\Vert_\flat$. Right: the unit ball of the
norm $\Vert\cdot\Vert_\natural$.}\label{norm}
\end{figure}
\begin{lemma}\label{dellefoglie}
There exists a permutation $\sigma$ of the labelling of the points of $S$ such that defining
\begin{equation*}
B_\sigma = \sum_{i=1}^{n} g_{\sigma(i)} \delta_{p_{\sigma(i)}}
\end{equation*}
the problem
\begin{align}\label{sigmaminprocurrents}
\inf \left\{\mathbb{M}_\natural(T)\ :\ T=[\Sigma,\tau,\theta] \mbox{ is a } 1-\text{rectifiable
current with coefficients in } \mathbb{Z}^{n-1}, \, \partial T = B_\sigma\right\}
\end{align}
is equivalent to the Steiner Problem.
\end{lemma}
Define
\begin{equation*}
\mathcal{G}:=\left\{\sum_{k=i}^{j-1}g_k : i,j = 1,\ldots,n \mbox{ and } i \leq j -1\right\}
\end{equation*}
and notice that by the definition of $\Vert\cdot\Vert_\natural$ one has
\begin{equation}\label{minor}
\Vert\theta\Vert_\natural \geq \Vert\theta\Vert_\flat \quad \forall \theta \in \mathbb{R}^{n-1} \qquad \mbox{and} \qquad \Vert\theta\Vert_\natural = \Vert\theta\Vert_\flat=1 \quad\forall \theta \in \mathcal{G}.
\end{equation}
We obtain the definition of calibration for Problem~\eqref{sigmaminprocurrents} simply by repeating
Definition~\ref{calicurrents}
and replacing $\Vert\cdot\Vert_\flat$ by $\Vert\cdot\Vert_\natural$.
Clearly if $\Phi$ is a calibration for $T_\sigma$, then
$T_\sigma$ is a minimizer for Problem~\eqref{sigmaminprocurrents}
in its homology class.
We postpone the proof of Lemma \ref{dellefoglie} and we state the main result.
\begin{thm}
Given $S=\{p_1,\ldots,p_n\}$,
let $E\in\mathscr{P}_{constr}$ be a
minimizer for Problem~\eqref{minpro2}
and $T=[\Sigma,\tau,\theta]$ be a $1$--rectifiable current with coefficients in
$\mathbb{Z}^{n-1}$ minimizer of Problem~\eqref{sigmaminprocurrents}.
Suppose that there exists a calibration $\widetilde{\Phi}$ for $E$
according to Definition~\ref{caliconvering}. Then
there exists a calibration $\Phi$ for $T$
according to Definition~\ref{calicurrents} (where we consider
$\Vert\cdot\Vert=\Vert\cdot\Vert_\natural$).
\end{thm}
\begin{proof}
For simplicity we suppose that the $n$ points lie on the boundary of a
convex set $\Omega$ and the cuts $\Sigma$ are chosen outside $\Omega$. Hence $\widetilde{\Phi}:Y\to\mathbb{R}^2$ reduces to
a paired calibration: a collection of
$n$ approximately regular vector fields $\widetilde{\Phi}^i: \Omega \to\mathbb{R}^2$ satisfying the conditions of Definition \ref{paired}.
We define the candidate calibration $\Phi$ for $T$ as the matrix
whose rows $\Phi_1,\ldots,\Phi_{n-1}$ satisfy
\begin{equation*}
\Phi^\perp_i=\frac{1}{2}\left(\widetilde{\Phi}^i-\widetilde{\Phi}^{i+1}\right)
\quad\text{for}\;i=1,\ldots,n-1\,.
\end{equation*}
Condition i) is trivially satisfied and
adapting the proof of step \textbf{(3)} of Theorem~\ref{dacalicurrentacalicovering}
we also get Condition iii).
To conclude the proof it is enough to notice that when $\mathbb{R}^{n-1}$
is endowed with $\Vert\cdot\Vert_\natural$, condition
$\Vert\Phi\Vert_{com}\leq 1$ is fulfilled
if $\left\vert\sum_{k=i}^{j-1}\Phi^\perp_k\right\vert\leq 1$
for every $i,j\in\{1,\ldots,n\}$ with $i\leq j-1$, that is nothing else than
$\vert\widetilde{\Phi}^i-\widetilde{\Phi}^{j}\vert\leq 2$.
\end{proof}
We conclude this section proving Lemma \ref{dellefoglie}.
\medskip
\textit{Proof of Lemma~\ref{dellefoglie}.}
\medskip
Denoting by $\mathcal{S}$ a Steiner network connecting the points
of $S$, we repeat the construction of~\cite[Proposition 2.28]{calicipi},
obtaining a suitable labelling of the points of $S$ (and consequently $B_\sigma$). It is then possible to construct a current $T_\sigma=[\Sigma_\sigma,\tau_\sigma,\theta_\sigma]$
with boundary $B_\sigma$ such that $\theta_\sigma\in\mathcal{G}$
and $\mathbb{M}_\flat(T_\sigma)=\mathcal{H}^1(\mathcal{S})$: it is enough to define $T_i$ as the $1$--current supported on the branch of $\mathcal{S}$ connecting $p_i$ with $p_n$ with multiplicity $g_i$ and then build $T_\sigma = \sum_{i=1}^{n-1} T_i$.
Hence by the equivalence between the Steiner problem and Problem~\eqref{minprocurrents}
the current $T_\sigma=[\Sigma_\sigma,\tau_\sigma,\theta_\sigma]$
is a minimizer for Problem~\eqref{minprocurrents} with $\mathbb{M}= \mathbb{M}_\flat$ and $B = B_\sigma$.
We show that $T_\sigma$ is a minimizer also for Problem~\eqref{sigmaminprocurrents}.
By minimality of $T_\sigma$ it holds
$\mathbb{M}_{\flat}(T_{\sigma}) \leq \mathbb{M}_{\flat}(T)$
for every $1$--rectifiable current $T$ with coefficients in $\mathbb{Z}^{n-1}$ and $\partial T = B_\sigma$.
Then for all competitors $T$ it holds
\begin{equation*}
\mathbb{M}_\natural(T_\sigma)
=\mathbb{M}_\flat(T_\sigma)\leq \mathbb{M}_\flat(T)
\leq \mathbb{M}_\natural(T)\,,
\end{equation*}
where we used \eqref{minor}. This gives the minimality of $T_\sigma$ for Problem~\eqref{sigmaminprocurrents}
and concludes the proof.
\qed
\begin{figure}[H]
\begin{tikzpicture}[scale=1.3]
\path[font=\footnotesize]
(1,1) node[above]{$p_2$}
(1,-1) node[below]{$p_3$}
(-1,-1) node[below]{$p_4$}
(-1,1) node[above]{$p_1$}
(0,0) node[right]{$g_1+g_2$}
(0.7,0.5) node[above]{$g_2$}
(0.7,-0.45) node[below]{$g_3$}
(-1.2,-0.45) node[below]{$g_1+g_2+g_3$}
(-0.7,0.5) node[above]{$g_1$};
\fill[black](1,1) circle (1.7pt);
\fill[black](1,-1) circle (1.7pt);
\fill[black](-1,1) circle (1.7pt);
\fill[black](-1,-1) circle (1.7pt);
\draw[rotate=90]
(1,-1)--(0.42,0)
(1,1)--(0.42,0)
(0.42,0)--(-0.42,0)
(-0.42,0)--(-1,-1)
(-0.42,0)--(-1,1);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=1.3]
\path[font=\footnotesize]
(1,1) node[above]{$p_2$}
(1,-1) node[below]{$p_3$}
(-1,-1) node[below]{$p_4$}
(-1,1) node[above]{$p_1$}
(0,0) node[above]{$g_2+g_3$}
(0.6,0.5) node[above]{$g_2$}
(0.6,-0.45) node[below]{$g_3$}
(-1.5,-0.45) node[below]{$g_1+g_2+g_3$}
(-0.6,0.5) node[above]{$g_1$};
\fill[black](1,1) circle (1.7pt);
\fill[black](1,-1) circle (1.7pt);
\fill[black](-1,1) circle (1.7pt);
\fill[black](-1,-1) circle (1.7pt);
\draw[rotate=0]
(1,-1)--(0.42,0)
(1,1)--(0.42,0)
(0.42,0)--(-0.42,0)
(-0.42,0)--(-1,-1)
(-0.42,0)--(-1,1);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=1.3]
\path[font=\footnotesize]
(1,1) node[above]{$p_2$}
(1,-1) node[below]{$p_3$}
(-1,-1) node[below]{$p_4$}
(-1,1) node[above]{$p_1$}
(0,0) node[below]{$g_1+g_3$}
(0.4,0.6) node[above]{$g_2$}
(1.1,-0.45) node[below]{$g_3$}
(-1.5,-0.45) node[below]{$g_1+g_2+g_3$}
(-0.2,1) node[above]{$g_1$};
\fill[black](1,1) circle (1.7pt);
\fill[black](1,-1) circle (1.7pt);
\fill[black](-1,1) circle (1.7pt);
\fill[black](-1,-1) circle (1.7pt);
\draw[rotate=0]
(-0.42,0)to[out= 120,in=-120, looseness=1](1,1)
(-1,1)to[out= 30,in=90, looseness=1](1.3,1.1)
(1.3,1.1)to[out= -90,in=0, looseness=1](0.42,0)
(1,-1)--(0.42,0)
(0.42,0)--(-0.42,0)
(-0.42,0)--(-1,-1);
\end{tikzpicture}
\caption{With a suitable choice of the labelling of the points
$p_1,p_2,p_3,p_4$ it is not possible to construct a competitor without loops
that lies in the convex envelope of the points
and in which a curve has coefficient $g_1+g_3$.}\label{exampleA}
\end{figure}
\begin{rem}
Although in general giving explicitly
the permutation $\sigma$ is quite hard,
the choice of a suitable labelling of the points of $S$
becomes easy when the points
lie on the boundary of a convex set:
it is enough to label $p_1,\ldots,p_n$ in an anticlockwise sense (see Lemma \ref{construction}).
\end{rem}
\subsection{Extension to $\mathbb{R}^n$}
This paper is devoted to compare the
known notions of calibrations for minimal networks and minimal
partitions in $\mathbb{R}^2$.
\textit{Minimal partitions}
A natural question is whether it is possible to
generalize/modify the mentioned approaches to
minimal partition problems in higher dimension,
with the goal of comparing the related notions of calibrations.
Paired calibrations are already a tool to validate the minimality
of candidates
that span a given boundary and
divide the domain (a convex set in $\mathbb{R}^{n+1}$)
into a fixed number of phases.
At the moment
it is not known if
one can find a suitable group $\mathcal{G}$ and a suitable norm
such that
$n$--dimensional currents with coefficients in $\mathcal{G}$
represent a partition of $\mathbb{R}^{n+1}$.
Regarding instead the covering spaces approaches
several attempts to very specific problems
have been proposed in~\cite{cover,cover2,brakke}.
Despite this remarkable list of examples
it is still not clear if it is possible to systematically approach
Plateau's type problems and partition problems within the
covering space setting.
\textit{Minimal networks}
Because of the intrinsic nature of the notion of currents,
$1$--currents with group coefficients describe networks in any codimension.
Hence this approach is suitable for the Steiner problem in $\mathbb{R}^n$.
To conclude,
Section~\ref{unocorrenti}
is about minimizing $1$--dimensional objects in codimension $n$,
whereas Section~\ref{partizioni}
concerns the minimization of
$n$--dimensional objects in codimension $1$.
Clearly $n=2$ is the only case in which the two are comparable.
\section{Convexifications of the problem}\label{convexification}
\begin{dfnz}
We call
\begin{itemize}
\item $BV_{constr}(Y, [0,1])$
the space of
functions $u \in BV(Y,[0,1])$ such that for almost every
$x\in M $ it holds
$\sum_{p(y) = x} u(y) = 1$
and $u^1(x) = 1$ for every $x\in \mathbb{R}^2 \setminus \Omega$.
\item $BV_{constr}^\#(Y)$ the space of functions
in $BV_{constr}(Y, [0,1])$ taking a finite number of values
$\alpha_1,\ldots,\alpha_k$.
\end{itemize}
\end{dfnz}
In~\cite{calicipi} we have proven that if $\Phi$ is a calibration for
$u\in BV_{constr}(Y,\{0,1\})$, then $u$ is a minimizer in the same class,
but actually the following holds:
\begin{thm}\label{impli}
If $\Phi :Y\to\mathbb{R}^{2}$ is a calibration for $u\in BV_{constr}(Y,\{0,1\})$,
then $u$ is a minimizer among all functions in $BV_{constr}^\#(Y)$.
\end{thm}
For the proof of Theorem~\ref{impli} we need the following:
\begin{lemma}\label{miracle}
Let $\{\eta_i\}_{i=1,\ldots,n}$ and $\{t_i\}_{i=1,\ldots,n}$ be real numbers such that $\sum_{i=1}^n
\eta_i = 0$ and $|t_i - t_j| \leq 2$ for every $i,j \in \{1,\ldots,n\}$. Then
\begin{equation}\label{ooo}
\left|\sum_{i=1}^n t_i \eta_i \right| \leq \sum_{i=1}^n |\eta_i|\,.
\end{equation}
\end{lemma}
\begin{proof}
Notice that
\begin{eqnarray*}
\sum_{i=1}^n t_i \eta_i &=& \sum_{\eta_i > 0} t_i \eta_i +
\sum_{\eta_i < 0} t_i \eta_i \leq \max_i(t_i) \sum_{\eta_i > 0} \eta_i +
\min_i(t_i) \sum_{\eta_i < 0} \eta_i \\
&=& \left(\max_i(t_i) - \min_i(t_i)\right) \sum_{\eta_i > 0}
\eta_i \leq 2\sum_{\eta_i > 0} \eta_i = \sum_{i=1}^n |\eta_i|\,,
\end{eqnarray*}
where we used that $\sum_{\eta_i < 0} \eta_i = -\sum_{\eta_i > 0} \eta_i$ and that $2\sum_{\eta_i > 0} \eta_i = \sum_{i=1}^n |\eta_i|$. Applying the same estimate to $-t_1,\ldots,-t_n$ bounds $-\sum_{i=1}^n t_i \eta_i$ as well, and \eqref{ooo} follows.
\end{proof}
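As a sanity check of the sharpness of the constant (an example of ours): for $n=3$ take $\eta=(1,1,-2)$ and $t=(1,1,-1)$, so that $\sum_{i}\eta_i=0$ and $\vert t_i-t_j\vert\leq 2$; then
\begin{equation*}
\sum_{i=1}^3 t_i\eta_i = 1\cdot 1+1\cdot 1+(-1)\cdot(-2)=4=\sum_{i=1}^3\vert\eta_i\vert\,,
\end{equation*}
so equality in \eqref{ooo} is attained and the constant $2$ in the assumption cannot be improved.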
\begin{rem}\label{divteo}
We also note that given $u,w \in BV_{constr}(Y,[0,1])$ and
$\Phi : Y\to \mathbb{R}^2$ an approximately regular divergence free
vector field it holds
\begin{equation}\label{booh}
\int_{Y} \Phi \cdot Du = \int_{Y} \Phi \cdot Dw\,.
\end{equation}
This result can be proved adapting~\cite[Proposition 4.3]{calicipi}.
\end{rem}
\textit{Proof of Theorem~\ref{impli}.}
Consider $u\in BV_{constr}(Y,\{0,1\})$,
$\Phi :Y\to\mathbb{R}^{2}$ a calibration for $u$
and $w$ a competitor in $BV_{constr}^\#(Y)$.
Combining Remark~\ref{divteo} with Conditions (\textbf{1}) and (\textbf{3})
of Definition~\ref{caliconvering} we have
\begin{equation}\label{b}
|Du|(Y) = \int_{Y} \Phi \cdot Du= \int_{Y} \Phi \cdot Dw\,.
\end{equation}
Moreover by the representation formula for $Dw$ in the space $Y$ we get
\begin{equation*}
\int_{Y} \Phi \cdot Dw = \sum_{j=1}^m \int_{\mathbb{R}^2} \Phi^j \cdot Dw^j\,,
\end{equation*}
where
without loss of generality we have supposed that $p(J_w) \cap \Sigma=\emptyset$ (see Remark \ref{independent}).
Calling
$\eta_j (x) = (w^j)^+(x) - (w^j)^-(x)$
(we refer to Remark~\ref{presc} for the definition of $w^j$),
we notice that, as $w \in BV_{constr}^\#(Y)$, for almost every $x \in \mathbb{R}^2$
\begin{equation*}
\sum_{j=1}^m \eta_j (x) = 0 \,.
\end{equation*}
Then
\begin{align*}
\sum_{j=1}^m \int_{\mathbb{R}^2} \Phi^j \cdot Dw^j
&= \sum_{j=1}^m \int_{J_{w^j}} \eta_j \Phi^j \cdot \nu \, d\mathcal{H}^{1}
\nonumber\\
&= \int_{p(J_w)}\sum_{j=1}^m \eta_j (\Phi^j \cdot \nu) \chi_{J_w^j} \,
d\mathcal{H}^{1} \\
&\leq \int_{p(J_w)}\Big|\sum_{j=1}^m \Phi^j \eta_j \chi_{J_w^j} \Big|\,
d\mathcal{H}^{1} \,.
\end{align*}
Applying Lemma~\ref{miracle} one obtains
\begin{equation}\label{c}
\int_{Y} \Phi \cdot Dw \leq \int_{p(J_w)} \sum_{j=1}^m |\eta_j|\chi_{J_w^j}\,
d\mathcal{H}^1 = |Dw|(Y)\,.
\end{equation}
Hence combining \eqref{b} with \eqref{c} we conclude that
\begin{equation*}
|Du|(Y) = \int_{Y} \Phi \cdot Du
= \int_{Y} \Phi \cdot Dw\leq |Dw|(Y)\,.
\end{equation*}
\qed
\begin{rem}
The previous theorem is sharp, in the sense that one cannot replace
$BV_{constr}^\#(Y)$ by $BV_{constr}(Y, [0,1])$.
Indeed consider $S=\{p_1,p_2,p_3\}$ with $p_i$
vertices of an equilateral triangle.
Although the minimizer $u\in BV_{constr}(Y, \{0,1\})$
is calibrated (for the result in our setting see~\cite[Example 3.8]{calicipi}),
there exists a function in $BV_{constr}(Y, [0,1])\setminus BV_{constr}^\#(Y)$
whose total variation is strictly less than the total variation of $u$,
as shown in~\cite[Proposition 5.1]{chambollecremerspock}.
\end{rem}
We define now the convexification of
$\vert Du\vert$ with $u\in BV_{constr}(Y,\{0,1\})$ naturally associated to the notion of calibration for covering spaces.
It was introduced
by Chambolle, Cremers and Pock in~\cite{chambollecremerspock} in the context of minimal partitions.
\begin{dfnz}[Local convex envelope]
Let $u\in BV_{constr}(Y, [0,1])$.
We consider the functional
$G$ given by
\begin{equation*}
G(u):=\int_{Y}\Psi(Du)\,,
\end{equation*}
where
\begin{equation*}
\Psi(q) = \sup_{p\in K} \sum_{j=1}^n p^j \cdot q^j
\end{equation*}
and
\begin{equation*}
K = \{p=(p^1,\ldots,p^n) \in (\mathbb{R}^2)^n : \vert p^i - p^j\vert \leq 2 \mbox{ for every } i,j = 1,\ldots, n\} \,.
\end{equation*}
In analogy with~\cite{chambollecremerspock} we call $G$
local convex envelope.
\end{dfnz}
The local convex envelope is the tightest convexification in integral form, indeed:
\begin{prop}{(\cite{chambollecremerspock})}\label{best}
The local convex envelope $G$ is the largest
convex integral functional of the form
$H(v)=\int_{Y} \Psi(x, Dv)$
with $v\in BV_{constr}(Y, [0,1])$ and $\Psi(x,\cdot)$ non-negative,
even and convex such that
\begin{equation*}
H(v)=\vert Dv\vert(Y) \quad \text{for} \;v\in BV_{constr}(Y,\{0,1\})\,.
\end{equation*}
\end{prop}
As a consequence of Theorem~\ref{impli} we are able to prove:
\begin{prop}\label{equal}
It holds
\begin{equation*}
G(v)=\vert Dv\vert(Y) \quad \text{for} \;v\in BV_{constr}^\#(Y)\,.
\end{equation*}
\end{prop}
\begin{proof}
The inequality $G(v) \geq \vert Dv\vert(Y)$ is a consequence of
Proposition~\ref{best} choosing $\Psi(x,p) = |p|$. For the other inequality it is just enough
to notice that given $v \in BV^\#_{constr}(Y)$, from the proof of Theorem~\ref{impli} we obtain that
\begin{equation*}
\int_{Y} p \cdot Dv \leq \vert Dv\vert(Y)
\end{equation*}
for every $p\in K$. Therefore taking the supremum
on both sides and using that $|Dv|(Y) < +\infty$ we conclude that
\begin{equation*}
\int_{Y} \sup_{p\in K} p \cdot Dv \leq \vert Dv\vert(Y) \, .
\end{equation*}
\end{proof}
\begin{rem}
Proposition \ref{equal} shows that even if the local convex envelope is the best integral convexification of the problem, it ``outperforms'' the total variation only when evaluated on functions $u\in BV_{constr}(Y,[0,1]) \setminus BV_{constr}^\#(Y)$.
\end{rem}
\section{An example of nonexistence of calibrations}\label{nonexistence}
Finding a calibration for a candidate minimizer is not an easy task.
We wanted to understand at least
whether there exists a calibration when $S$ is composed
of points lying at the vertices of a regular polygon.
We have a positive answer only in the case of a triangle and of a
square~\cite{calicipi, annalisaandrea}.
As a byproduct of Theorem~\ref{impli} we are now able to
give a negative answer in the case of a regular pentagon.
\begin{ex}[Five vertices of a regular pentagon]\label{fivepoints}
Given $S=\{p_1,\ldots,p_5\}$ with $p_i$ the five vertices of a regular pentagon,
the minimizer of the Steiner problem is well known.
Following the canonical construction presented in~\cite[Proposition 2.28]{calicipi}
it is not difficult to construct the ``associated''
function $u\in BV_{constr}(Y,\{0,1\})$
here represented in Figure~\ref{pentaoncovering}.
By explicit computations one gets $\frac{1}{2}\vert Du\vert (Y)\approx 4.7653$.
Consider now the function $w\in BV_{constr}(Y, \{0,\frac12,1\})$
exhibited in Figure~\ref{pentaoncovering}.
It holds
$\frac{1}{2}\vert Dw\vert (Y) \approx 4.5677$.
Theorem~\ref{impli} tells us that if a calibration for the minimizer
$u$ exists, then $u$ has to be a minimizer also in the larger space
$BV_{constr}^\#(Y)$, but $\vert Dw\vert (Y) <\vert Du\vert(Y)$,
hence a calibration
for $u$ does not exist.
\begin{figure}[H]
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(0.88,1.24)--(0.88,-0.17)--
(0.88,-0.17)--(0,-0.7)--
(0,-0.7)--(-0.88,-0.17)--
(-0.88,1.24);
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_1$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick]
(0.88,1.24)--(0.88,-0.17)
(-0.88,1.24)--(-0.88,-0.17)
(-0.88,-0.17)--(0,-0.7)
(0.88,-0.17)--(0,-0.7);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45) --
(1.43,-0.45)--(0.88,-0.17)--
(0.88,-0.17)--(0.88,1.24);
\draw[green(munsell)]
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_2$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick]
(0.88,1.24)--(0.88,-0.17)
(0.88,-0.17)--(1.43,-0.45);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505) --
(0,-1.505)--(0,-0.7)--
(0,-0.7)--(0.88,-0.17)--
(0.88,-0.17)--(1.43,-0.45);
\draw[green(munsell)]
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_3$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick]
(1.43,-0.45)--(0.88,-0.17)
(0.88,-0.17)--(0,-0.7)
(0,-0.7)--(0,-1.505);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505) --
(0,-1.505)--(0,-0.7)--
(0,-0.7)--(-0.88,-0.17)--
(-0.88,-0.17)--(-1.43,-0.45);
\draw[green(munsell)]
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_4$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick]
(-1.43,-0.45)--(-0.88,-0.17)
(-0.88,-0.17)--(0,-0.7)
(0,-0.7)--(0,-1.505);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45) --
(-1.43,-0.45)--(-0.88,-0.17)--
(-0.88,-0.17)--(-0.88,1.24);
\draw[green(munsell)]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_5$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick]
(-0.88,1.24)--(-0.88,-0.17)
(-0.88,-0.17)--(-1.43,-0.45);
\end{tikzpicture}
\medskip
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(0.88,1.24)--(0,0.72)--
(-0.88,1.24);
\filldraw[fill=magicmint]
(0,0.72)--(-0.88,1.24)--
(-0.88,1.24)--(-0.67,0.22)--
(-0.67,0.22)--(0,0)--
(0,0)--(0.67,0.22)--
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72);
\draw[very thick, black]
(-0.88,1.24)--(-0.67,0.22)
(-0.67,0.22)--(0,0)
(0,0)--(0.67,0.22)
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72)
(-0.88,1.24)--(0,0.72);
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_1$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45) --
(1.43,-0.45)--(0,-0)--
(0,0)--(0.88,1.24);
\filldraw[fill=magicmint, rotate=-72]
(0,0.72)--(-0.88,1.24)--
(-0.88,1.24)--(-0.67,0.22)--
(-0.67,0.22)--(0,0)--
(0,0)--(0.67,0.22)--
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72);
\draw[green(munsell)]
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_2$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick, black, rotate=-72]
(-0.88,1.24)--(-0.67,0.22)
(-0.67,0.22)--(0,0)
(0,0)--(0.67,0.22)
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72)
(-0.88,1.24)--(0,0.72);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505) --
(0,-1.505)--(0,0)--
(0,0)--(1.43,-0.45);
\filldraw[fill=magicmint, rotate=-144]
(0,0.72)--(-0.88,1.24)--
(-0.88,1.24)--(-0.67,0.22)--
(-0.67,0.22)--(0,0)--
(0,0)--(0.67,0.22)--
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72);
\draw[green(munsell)]
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_3$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick, black, rotate=-144]
(-0.88,1.24)--(-0.67,0.22)
(-0.67,0.22)--(0,0)
(0,0)--(0.67,0.22)
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72)
(-0.88,1.24)--(0,0.72);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505) --
(0,-1.505)--(0,0)--
(0,0)--(-1.43,-0.45);
\filldraw[fill=magicmint, rotate=144]
(0,0.72)--(-0.88,1.24)--
(-0.88,1.24)--(-0.67,0.22)--
(-0.67,0.22)--(0,0)--
(0,0)--(0.67,0.22)--
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72);
\draw[green(munsell)]
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_4$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick, black, rotate=144]
(-0.88,1.24)--(-0.67,0.22)
(-0.67,0.22)--(0,0)
(0,0)--(0.67,0.22)
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72)
(-0.88,1.24)--(0,0.72);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=green(munsell)]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45) --
(-1.43,-0.45)--(0,0)--
(0,0)--(-0.88,1.24);
\filldraw[fill=magicmint, rotate=72]
(0,0.72)--(-0.88,1.24)--
(-0.88,1.24)--(-0.67,0.22)--
(-0.67,0.22)--(0,0)--
(0,0)--(0.67,0.22)--
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72);
\draw[green(munsell)]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45) ;
\draw[color=black, dashed, thick]
(-0.88,1.24)to[out= -120,in=100, looseness=1] (-1.43,-0.45)
(0.88,1.24)to[out= -60,in=80, looseness=1] (1.43,-0.45)
(-1.43,-0.45) to[out= -60,in=175, looseness=1] (0,-1.505)
(1.43,-0.45) to[out= -120,in=5, looseness=1] (0,-1.505);
\path[font=\footnotesize]
(-1.75,-1.85) node[above]{$D_5$};
\fill[black](-0.88,1.24) circle (1.7pt);
\fill[black](0.88,1.24) circle (1.7pt);
\fill[black](1.43,-0.45) circle (1.7pt);
\fill[black](-1.43,-0.45) circle (1.7pt);
\fill[black](0,-1.505) circle (1.7pt);
\draw[very thick, black, rotate=72]
(-0.88,1.24)--(-0.67,0.22)
(-0.67,0.22)--(0,0)
(0,0)--(0.67,0.22)
(0.67,0.22)--(0.88,1.24)
(0.88,1.24)--(0,0.72)
(-0.88,1.24)--(0,0.72);
\end{tikzpicture}
\caption{Up: the function $u\in BV_{constr}(Y,\{0,1\})$, a minimizer of Problem~\eqref{minpro2}.
Down: a function $w$ in $BV_{constr}(Y,\{0,1/2,1\})$ with
$\vert Dw\vert(Y)<\vert Du\vert(Y)$.
White corresponds to the value $0$, light green to $1/2$ and
dark green to $1$.}\label{pentaoncovering}
\end{figure}
Nonexistence of calibrations for minimal currents when
$S=\{p_1,\ldots,p_5\}$ with $p_i$ the five vertices of a regular pentagon
was already highlighted
in~\cite[Example 4.2]{bonafini}.
As we have shown that Definition~\ref{calicurrents}
is stronger than Definition~\ref{caliconvering}, it
was necessary to ``translate'' the example into our setting
to conclude that a calibration does not exist for the minimizer $u$ of Problem~\eqref{minpro2}.
\end{ex}
\begin{rem}
We refer also to~\cite[Example 4.6]{annalisaandrea}
where an example of nonexistence of calibrations is provided.
However in that case the ambient space is not $\mathbb{R}^2$
endowed with the standard Euclidean metric.
\end{rem}
\subsection{Remarks on calibrations in families}
Example~\ref{fivepoints} underlines a major issue
in the theory of calibrations.
\emph{Calibrations in families} (see~\cite[Section 4]{calicipi}) can
avoid the problem.
Indeed in~\cite[Example 4.9]{calicipi} we are able to find the minimal
Steiner network
for the five vertices of a regular pentagon via a calibration argument.
We explain briefly here the strategy we used and we validate it
with some remarks.
\begin{itemize}
\item First we divide the sets of $\mathscr{P}_{constr}(Y)$ into families.
The competitors that belong to the same class
share a property related to the projection of their essential boundary
onto the base set $M$.
In particular we define a family as
\begin{equation*}
\mathcal{F}(\mathcal{J}) := \{E \in \mathscr{P}_{constr}(Y_\Sigma):
\mathcal{H}^1(E^{i,j}) \neq 0 \mbox{ for every } (i,j) \in \mathcal{J}\},
\end{equation*}
where $\mathcal{J} \subset \{1,\ldots,m\}\times \{1,\ldots,m\}$ and
$E^{i,j}:= \partial^\ast E^i\cap \partial^\ast E^j$.
The union of the families has to cover $\mathscr{P}_{constr}(Y)$.
\item We consider a suitable notion of calibrations for $E$ in
$\mathcal{F}(\mathcal{J})$:
Condition \textbf{(2)} can be weakened to
$|\Phi^i(x) - \Phi^j(x)| \leq 2$ for every $i,j = 1,\ldots m$
such that $(i,j) \in \mathcal{J}$ and for every $x\in D$.
\item We calibrate the candidate minimizer in each family.
\item We compare the perimeter of each calibrated minimizer
to find the explicit global
minimizers of Problem~\eqref{minpro2}.
\end{itemize}
\textit{How to divide the competitors into families.}
We consider as competitors only the sets in
$\mathscr{P}_{constr}(Y)$ whose projection onto $M$
is a network without loops.
Since it is known that the minimizers are tree--like,
the previous choice is not restrictive.
\smallskip
Suppose that $S$ consists of $n$ points
located on the boundary of a convex set $\Omega$.
Then Problem~\eqref{minpro2} is equivalent to a minimal partition problem and
$E\in\mathscr{P}_{constr}(Y)$ induces a partition
$\{A_1,\ldots,A_n\}$ of $\Omega$.
We classify the sets in $\mathscr{P}_{constr}(Y)$ simply by prescribing
which phases ``touch'' each other (see~\cite[Lemma 4.8]{calicipi}).
The division into families depends on the topology of the complement of
the network.
\smallskip
Let us now pass to the general case of any configuration of $n$ points of
$S$.
The minimal Steiner networks are composed of
at most $m=2n-3$ segments.
Each segment of a minimizer $\mathcal{S}$ coincides with
$\overline{p(E^{i,j})}$
for $i\neq j\in \{1,\ldots,n\}$.
Different segments of $\mathcal{S}$ are associated with different $E^{i,j}$.
We take $\mathcal{J}$ composed of $2n-3$ different couples
of indices $(i,j)\in \{1,\ldots,n\}\times \{1,\ldots,n\}$.
The cover of $\mathscr{P}_{constr}(Y)$ is given by considering
all possible $\mathcal{J}$ satisfying the above property.
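For the pentagon ($n=5$) the size of this cover can be counted explicitly. The following toy Python enumeration is our own illustration (we treat the couples as unordered and pairwise distinct, which is an assumption on our part): there are $\binom{5}{2}=10$ couples, and $\binom{10}{7}=120$ ways to pick $2n-3=7$ of them.

```python
from itertools import combinations

n = 5                      # number of points of S (regular pentagon)
m = 2 * n - 3              # maximal number of segments of a Steiner tree

# Unordered couples (i, j) with i != j, indexing the interfaces E^{i,j}.
couples = list(combinations(range(1, n + 1), 2))

# Each candidate index set consists of m distinct couples.
index_sets = list(combinations(couples, m))

print(len(couples), m, len(index_sets))   # 10 7 120
```

Of course most of these index sets do not correspond to an admissible tree topology; the enumeration only bounds the number of families from above.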
\medskip
\textit{Existence of calibrations in families.}
The division into families just proposed is the finest possible one,
and it classifies the competitors according to their topological type.
Note that the length is a convex function of the location of the junctions.
As a consequence each stationary network
is the unique minimizer in its topological type
(see
for instance~\cite[Corollary 4.3]{morgancluster} and~\cite{choe} where more general situations
are treated)
and therefore a calibration in such a
family always exists.
\medskip
\textit{Exporting the idea of calibrations in families to currents.}
Once the families for sets in $\mathscr{P}_{constr}(Y)$ have been identified,
it is possible to produce families
for Problems~\eqref{minprocurrents} and~\eqref{sigmaminprocurrents}.
Take a competitor in each $\mathcal{F}(\mathcal{J})$.
When the points of $S$ lie on the boundary of a convex set
it is sufficient to apply Lemma~\ref{construction}
(recalling that $E^i=A_{n+1-i}$) to construct a current $T$.
Then one can identify the coefficients of $T$.
Hence in this case
the classification into families relies on which subsums of the $g_i$ are
present in the competitors.
To deal with the case of general configurations of points of $S$,
we have to generalize Lemma~\ref{construction}.
In the construction
we set $T_i=[\partial^\ast E^i,\tau_i,e_i]$ where $\tau_i$ are the tangent vectors to $\partial^\ast E^i$ and the multiplicities
are chosen
in such a way that $e_i-e_{i-1}=\tilde{g}_i$ with $\tilde{g}_i$ linearly
independent
vectors of $\mathbb{R}^{n-1}$.
Again we set $T=\sum T_i$. Now $\partial T$ is the sum of
$\tilde{g}_j\delta_{p_i}$
where $j$ can also be different from $i$.
We obtain a current with the desired boundary simply by substituting
$\tilde{g}_j$ with $g_i$ in order to satisfy
$\tilde{g}_j\delta_{p_i}=g_i\delta_{p_i}$.
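The requirement $e_i-e_{i-1}=\tilde g_i$ can always be met by the telescoping choice $e_0=0$, $e_i=\tilde g_1+\cdots+\tilde g_i$. A toy Python check of this bookkeeping (our own; the vectors below are placeholders, not the actual $\tilde g_i$ of the construction):

```python
# Telescoping choice of multiplicities: e_0 = 0 and e_i = g~_1 + ... + g~_i,
# so that consecutive differences recover the prescribed vectors g~_i.
g_tilde = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]   # linearly independent in R^3

e = [(0, 0, 0)]
for g in g_tilde:
    e.append(tuple(ei + gi for ei, gi in zip(e[-1], g)))

for i in range(1, len(e)):
    diff = tuple(a - b for a, b in zip(e[i], e[i - 1]))
    assert diff == g_tilde[i - 1]   # e_i - e_{i-1} = g~_i
```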
\section*{Appendix: Regularity of the calibration}
\begin{prop}[Constancy theorem for currents with coefficients in $\mathbb{R}^n$]\label{constancy}
Let $T$ be a normal $2$-current in $\mathbb{R}^2$ with coefficients in $\mathbb{R}^n$. Then there exists $u\in BV(\mathbb{R}^2,\mathbb{R}^n)$ such that for every $\omega \in C_c^\infty(\mathbb{R}^2,\mathbb{R}^n)$
\begin{equation}
T(\omega) = \int_{\mathbb{R}^2} \langle \omega, u\rangle \, d\mathscr{L}^2\,.
\end{equation}
\end{prop}
\begin{proof}
Notice firstly that the space of $2$--forms with values in $\mathbb{R}^n$ can be identified, via Hodge duality, with the space $C_c^\infty(\mathbb{R}^2,\mathbb{R}^n)$.
As $T$ is a normal current, by the Riesz representation theorem there exist $\sigma: \mathbb{R}^2 \rightarrow \mathbb{R}^n$ and a finite measure $\mu_T$ on $\mathbb{R}^2$ such that
\begin{equation}\label{const1}
T(\omega) = \int_{\mathbb{R}^2} \langle \omega , \sigma\rangle \, d\mu_T = \sum_{i=1}^n\int_{\mathbb{R}^2} \omega_i \sigma_i \, d\mu_T\,.
\end{equation}
Defining $T_i : C_c^\infty(\mathbb{R}^2,\mathbb{R}) \rightarrow \mathbb{R}$ as
\begin{equation*}
T_i(f) = \int_{\mathbb{R}^2} f\sigma_i \, d\mu_T
\end{equation*}
we know that $T_i$ is a normal $2$-current with coefficients in $\mathbb{R}$. Therefore we can apply the standard constancy theorem (see for instance~\cite[\S 3.2,~Theorem 3]{CCC}) and find $u_i \in BV(\mathbb{R}^2)$ such that
\begin{equation}\label{const2}
T_i(f) = \int_{\mathbb{R}^2} f u_i \, d\mathscr{L}^2
\end{equation}
for every $i=1,\ldots,n$.
Hence combining \eqref{const1} and \eqref{const2} we conclude.
\end{proof}
We recall the definition of approximately regular vector fields both on $\mathbb{R}^n$ and on the covering space $Y$ (\cite{mumford, calicipi}).
\begin{dfnz}[Approximately regular vector fields on $\mathbb{R}^n$]
Given $A\subset \mathbb{R}^{n}$, a Borel vector field $\Phi: A \rightarrow \mathbb{R}^{n}$ is approximately regular
if it is bounded and for every Lipschitz hypersurface $M$ in $\mathbb{R}^{n}$, $\Phi$ admits traces on $M$ on the two sides of $M$ (denoted by $\Phi^+$ and $\Phi^-$) and
\begin{equation}\label{app}
\Phi^+(x) \cdot \nu_M(x) = \Phi^-(x) \cdot \nu_M(x) = \Phi(x) \cdot \nu_M(x),
\end{equation}
for $\mathcal{H}^{n-1}$--a.e. $x \in M\cap A$.
\end{dfnz}
\begin{dfnz}[Approximately regular vector fields on the covering $Y$]\label{approxi}
Given $\Phi: Y\rightarrow \mathbb{R}^2$, we say that it is \emph{approximately regular} in $Y$ if
$\Phi^j$
is \emph{approximately regular} for every $j=1,\ldots,m$.
\end{dfnz}
\begin{thm}\label{approximately}
Suppose that $\Phi:\mathbb{R}^2 \rightarrow M^{n\times 2}(\mathbb{R})$ is a matrix valued vector field such that its rows are approximately regular vector fields.
Given $T=[\Sigma, \tau, \theta]$ a
$1$--rectifiable current with coefficients in $\mathbb{Z}^n$, assume that $\Phi$ satisfies conditions (i), (ii) and (iii) of Definition \ref{calicurrents}.
Then $T$ is mass minimizing among all rectifiable $1$--currents with coefficients
in $\mathbb{Z}^n$ in its homology class.
\end{thm}
\begin{proof}
Let $\Omega' \subset\subset \Omega \subset \mathbb{R}^2$ be open, bounded, smooth sets containing the convex envelope of~$S$.
Given the candidate minimizer $T$ we take a competitor
$\widetilde T = [\widetilde{\Sigma}, \widetilde \tau, \widetilde \theta]$: a rectifiable $1$--current in $\mathbb{R}^2$ with coefficients in $\mathbb{Z}^n$
such that
$\partial(T-\widetilde T)= 0$. Notice that we can suppose that $\Sigma, \widetilde \Sigma \subset \Omega'$.
There exists $U$ a normal $2$--current in $\mathbb{R}^2$ with coefficients in $\mathbb{Z}^n$
such that $T - \widetilde T = \partial U$.
By Proposition \ref{constancy} there exists $u\in BV(\mathbb{R}^2, \mathbb{R}^n)$ such that for every $\omega\in C_c^\infty(\mathbb{R}^2, \mathbb{R}^n)$
\begin{displaymath}
U(\omega) = \int_{\mathbb{R}^2} \langle \omega, u\rangle \, d\mathscr{L}^{2}\,.
\end{displaymath}
Notice that for every $\phi \in C_c^\infty(\mathbb{R}^2,M^{n\times 2})$ supported in $\mathbb{R}^2 \setminus \Omega'$ we have
\begin{equation*}
0 = T(\phi) - \widetilde T(\phi) = -U(d\phi) = - \int_{\mathbb{R}^2}\langle u,d\phi\rangle \, d\mathscr{L}^2 = \sum_{i=1}^n \int_{\mathbb{R}^2} u_i \div \phi_i^\perp\, d\mathscr{L}^2\,.
\end{equation*}
Taking the supremum over $\phi \in C_c^\infty(\mathbb{R}^2, M^{n\times 2})$ compactly supported in $\mathbb{R}^2 \setminus \Omega'$ such that $\|\phi\|_\infty \leq 1$,
we infer that $|Du|(\mathbb{R}^2 \setminus \Omega') = 0$, and therefore there exists a vector $c\in \mathbb{R}^n$ such that $u(x) = c$ for almost every $x \in \mathbb{R}^2 \setminus \Omega'$. Define then
\begin{equation}
U_0(\omega) = \int_{\mathbb{R}^2} \langle \omega,u^c\rangle\, d\mathscr{L}^2
\end{equation}
where $u^c(x) = u(x) - c$. It is easy to check that $U_0(d\phi) = U(d\phi)$ for every $\phi \in C_c^\infty(\mathbb{R}^2,M^{n\times 2})$.
Define now $\Phi_n \in C_c^\infty(\mathbb{R}^2, M^{n\times 2})$ as $\Phi_n = (\chi_{\Omega}\Phi)\star \rho_n$, where $\rho_n$ is a mollifier. Using the standard divergence theorem for $BV$ functions we obtain
\begin{eqnarray}
T(\Phi_n) - \widetilde T(\Phi_n) &=& \partial U (\Phi_n) = - U(d\Phi_n) = - U_0(d\Phi_n) = -\int_{\mathbb{R}^2} \langle u^c,d\Phi_n\rangle \, d\mathscr{L}^2 \nonumber \\
&=& -\sum_{i=1}^n \int_{\mathbb{R}^2} u^c_i\div (\Phi_n)_i^\perp \, d\mathscr{L}^2
= \sum_{i=1}^n \int_{\mathbb{R}^2} (\Phi_n)_i^\perp \cdot Du^c_i\,. \label{boh}
\end{eqnarray}
We observe that
\begin{equation*}
T(\Phi_n) = \int_{\Sigma}\langle \Phi_n\tau, \theta\rangle\, d\mathcal{H}^1 \rightarrow T(\Phi) \quad \mbox{as }n\rightarrow + \infty
\end{equation*}
because $\Sigma \subset \Omega$ and similarly for $\widetilde T$. Therefore taking the limit on both sides of \eqref{boh} we get
\begin{equation}
T(\Phi) - \widetilde T(\Phi) = \sum_{i=1}^n \int_{\Omega} \Phi_i^\perp \cdot Du^c_i\,.
\end{equation}
Finally applying the divergence theorem for approximately regular vector fields (\cite{mumford}) and using that $u^c = 0$ on $\mathbb{R}^2 \setminus\Omega'$ we get
\begin{equation*}
T(\Phi) - \widetilde T(\Phi) = \sum_{i=1}^n \int_{\Omega} u^c_i \div \Phi_i^\perp \, d\mathscr{L}^2 = 0
\end{equation*}
thanks to property (i) of a calibration.
Then the proof follows the same lines as Proposition 3.2 in \cite{annalisaandrea}.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}\label{s_intro}
Let~$\B(\ell^2)$ denote the space of bounded linear operators
on~$\ell^2$. The Schur multipliers of $\B(\ell^2)$ have attracted
considerable attention in the literature. These are the (necessarily
bounded) maps of the form \[M(\phi)\colon \B(\ell^2)\to
\B(\ell^2),\quad T\mapsto \phi\ast T\] where $\phi =
(\phi(i,j))_{i,j\in \bN}$ is a fixed matrix with the property that the
Schur, or entry-wise, product $\phi\ast T$ is in~$\B(\ell^2)$ for
every~$T\in \B(\ell^2)$. Here we identify operators in $\B(\ell^2)$
with matrices indexed by $\bN\times\bN$ in a canonical way. It is
well-known that if~$\phi$ is itself the matrix of an element
of~$\B(\ell^2)$, then $M(\phi)$ is a Schur multiplier, but that not
every Schur multiplier of~$\B(\ell^2)$ arises in this way.
In fact~\cite{pa}, Schur multipliers are precisely the normal
(weak*-weak* continuous) $\D$-bimodule maps on $\B(\ell^2)$,
where~$\D$ is the maximal abelian selfadjoint algebra, or masa,
consisting of the operators in~$\B(\ell^2)$ whose matrix is
diagonal. By a result of R. R. Smith~\cite{smith}, each of these maps
has completely bounded norm equal to its norm as a linear
map on~$\B(\ell^2)$. Moreover, it follows from a classical
result of A.~Grothendieck~\cite{Gro} that the space of Schur multipliers
of~$\B(\ell^2)$ can be identified with~$\D\otimes_{\eh}\D$,
where~$\otimes_{\eh}$ is the weak* (or extended) Haagerup tensor
product introduced by D.~P.~Blecher and R.~R.~Smith
in~\cite{blecher_smith}.
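To make the classical picture concrete, here is a toy finite-dimensional sketch in Python (our own, with made-up $3\times 3$ data): when the symbol is the rank-one matrix $\phi(i,j)=x_iy_j$, Schur multiplication by $\phi$ coincides with the $\D$-bimodule map $T\mapsto D_xTD_y$, where $D_x,D_y$ are the diagonal operators with entries $x$ and $y$; such elementary tensors are the building blocks of $\D\otimes_{\eh}\D$.

```python
# Schur (entrywise) product of two matrices, as plain lists of lists.
def schur(phi, T):
    return [[phi[i][j] * T[i][j] for j in range(len(T[0]))]
            for i in range(len(T))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = [1.0, 2.0, -1.0]
y = [0.5, 3.0, 1.0]
phi = [[xi * yj for yj in y] for xi in x]        # rank-one symbol x y^t
T = [[1.0, 2.0, 0.0], [0.0, 1.0, 4.0], [5.0, 0.0, 1.0]]

Dx = [[x[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
Dy = [[y[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

# M(phi) T = D_x T D_y, entry by entry: (x_i y_j) T_ij = x_i T_ij y_j.
assert schur(phi, T) == matmul(matmul(Dx, T), Dy)
```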
Recall \cite[Definition~3.1]{fm2} that a masa~$\A$ in a von Neumann
algebra~$\M$ is a Cartan masa if there is a faithful normal
conditional expectation of~$\M$ onto~$\A$, and the set of unitary
normalizers of~$\A$ in~$\M$ generates~$\M$.
Let $\R$ be the hyperfinite {\rm II}$_1$-factor. For each Cartan
masa~$\A\subseteq \R$, F.~Pop and R.~R.~Smith defined a Schur product
$\mathbin{\star}_\A\colon \R\times \R\to \R$ using the Schur products of finite
matrices and approximation techniques~\cite{ps}. Using this product,
they showed that every bounded $\A$-bimodule map $\R\to\R$ is
completely bounded, with completely bounded norm equal to its norm.
The separable von Neumann algebras~$\M$ containing a Cartan masa~$\A$
were coordinatised by J.~Feldman and C.~C.~Moore~\cite{fm1,fm2}. We use
this coordinatisation to define the Schur multipliers of~$(\M,\A)$.
Our definition
generalises the classical notion of a Schur multiplier
of~$\B(\ell^2)$, and for~$\M=\R$ and certain
masas~$\A\subseteq \R$, our definition of Schur
multiplication extends the Schur product~$\mathbin{\star}_\A$ of~\cite{ps}.
In fact, the Schur multipliers of~$\M$ turn out to be the adjoints of
the multipliers of the Fourier algebra of the groupoid underlying the
von Neumann algebra~$\M$ (see~\cite{rbook,r}). Our focus, however, is
on algebraic properties such as idempotence, characterisation problems
and connections with operator space tensor products, so we restrict
our attention to Schur multipliers of von Neumann algebras with Cartan
masas.
Our main results are as follows. Let~$\M$ be a separable von Neumann
algebra with a Cartan masa~$\A$. After defining the Schur multipliers
of~$(\M,\A)$, we show in Theorem~\ref{th_main} that these are
precisely the normal $\A$-bimodule maps $\M\to \M$, generalising the
well-known result for $\M=\B(\ell^2)$, $\A=\D$. However,
if~$\M\ne\B(\ell^2)$, then the extended Haagerup tensor product
$\A\otimes_{\eh}\A$ need not exhaust the Schur multipliers; indeed we
show that if~$\M$ contains a direct summand isomorphic to~$\R$,
then $\A\otimes_{\eh}\A$ does not contain every Schur multiplier
of~$\M$. This is perhaps surprising, since in~\cite{ps} Pop and Smith
show that every (completely) bounded $\A$-bimodule map on~$\R$ is the
weak* pointwise limit of transformations corresponding to elements of
$\A\otimes_{\eh}\A$. Our result is a corollary to
Theorem~\ref{th_ch}, in which we show that there are no non-trivial
idempotent Schur multipliers of Toeplitz type on~$\R$ that come from
$\A\otimes_{\eh}\A$.
\subsection*{Acknowledgements}
The authors are grateful to Adam Fuller and David Pitts for providing
Remark~\ref{r_autcb} and drawing our attention to~\cite{cpz}. We also
wish to thank Jean Renault for illuminating discussions during the
preparation of this paper.
\section{Feldman-Moore relations and Cartan pairs}\label{s_prel}
Here we recall some preliminary notions and results from the work of
Feldman and Moore~\cite{fm1,fm2}. Throughout, let~$X$ be a set and
let~$R\subseteq X\times X$ be an equivalence relation on~$X$. We write
$x\sim y$ to mean that $(x,y)\in R$.
For $n\in \bN$ with $n\ge2$, we
write
\[R^{(n)}=\{(x_0,x_1,\dots,x_n)\in X^{n+1}\colon x_0\sim
x_1\sim\dots\sim x_n\}.\] The $i$th coordinate projection of~$R$
onto~$X$ will be written as $\pi_i\colon R\to X$, $(x_1,x_2)\mapsto
x_i$.
\begin{definition}
A map~$\sigma\colon
R^{(2)}\to \bT$ is a \emph{$2$-cocycle on~$R$} if
\[
\sigma(x,y,z)\sigma(x,z,w)=\sigma(x,y,w)\sigma(y,z,w)
\] for %
all $(x,y,z,w)\in R^{(3)}$. We say~$\sigma$
is \emph{normalised} if~$\sigma(x,y,z)=1$ whenever two of $x$, $y$
and~$z$ are equal. By \cite[Proposition~7.8]{fm1}, any normalised
$2$-cocycle $\sigma$ is \emph{skew-symmetric}: for every
permutation~$\pi$ on three elements,
\[\sigma(\pi(x,y,z))=
\begin{cases}
\sigma(x,y,z)&\text{if $\pi$ is even},\\
\sigma(x,y,z)^{-1}&\text{if $\pi$ is odd}.
\end{cases}\]
\end{definition}
\begin{definition}
An equivalence relation~$R$ on~$X$ is \emph{countable} if for every
$x\in X$, the equivalence class $[x]_R=\{y\in X\colon x\sim y\}$ is
countable.
\end{definition}
Now let~$(X,\mu)$ be a standard Borel probability space and suppose
that~$R$ is a countable equivalence relation which is also a Borel
subset of~$X\times X$, when~$X\times X$ is equipped with the product
Borel structure.
\begin{definition}
For $\alpha\subseteq X$, let~$[\alpha]_R=\bigcup_{x\in \alpha}[x]_R$
be the $R$-saturation of~$\alpha$. We say that~$\mu$ is
\emph{quasi-invariant under~$R$} if
\[ \mu(\alpha)=0\iff \mu([\alpha]_R)=0\] for any measurable
set~$\alpha\subseteq X$.
\end{definition}
\begin{definition}
We say that~$(X,\mu,R,\sigma)$ is a \emph{Feldman-Moore relation} if
$(X,\mu)$ is a standard Borel probability space, $R$ is a countable
Borel equivalence relation on~$X$ so that~$\mu$ is quasi-invariant
under~$R$, and~$\sigma$ is a normalised $2$-cocycle on~$R$. When the
context makes this unambiguous, for brevity we will simply refer to
this Feldman-Moore relation as~$R$.
\end{definition}
Fix a Feldman-Moore relation~$(X,\mu,R,\sigma)$.
\begin{definition}
Let~$E\subseteq R$ and let~$x,y\in X$. The horizontal slice of~$E$
at~$y$ is
\[ %
E_y=\{z\in X\colon (z,y)\in E\}\times \{y\}\]
and the vertical slice of~$E$ at~$x$ is
\[E^x=\{x\}\times \{z\in X\colon (x,z)\in E\}.\] %
We define
\[ \bB(E)=\sup_{x,y\in X} \bigl(|E_x|+|E^y|\bigr),\] and say that~$E$ is
\emph{band limited} if~$\bB(E)<\infty$. We call a bounded Borel
function~$a\colon R\to \bC$ \emph{left finite} if the support of~$a$
is band limited, and we write
\[ \Sigma_0=\Sigma_0(R)\] for the set of all such left finite
functions on~$R$.
\end{definition}
\begin{definition}
Equip~$R$ with the relative Borel structure from~$X\times X$. The
\emph{right counting measure} for~$R$ is the measure~$\nu$ on~$R$
defined by
\[ \nu(E)=\int_X |E_y|\,d\mu(y)\] for each measurable
set~$E\subseteq R$.
\end{definition}
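For intuition, here is a toy computation of the right counting measure on a three-point space (the space, measure and relation are our own made-up data). Note in particular that $\nu(\Delta)=\mu(X)=1$, which is exactly the normalisation making $\chi_\Delta$ a unit vector.

```python
from fractions import Fraction

# Toy three-point probability space.
X = [0, 1, 2]
mu = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

# Countable (here: finite) equivalence relation with classes {0, 1} and {2}.
R = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}

def nu(E):
    """Right counting measure: integrate the size of the horizontal slices E_y."""
    return sum(sum(1 for (z, y) in E if y == y0) * mu[y0] for y0 in X)

delta = {(x, x) for x in X}
assert nu(delta) == 1              # nu(Delta) = mu(X) = 1
assert nu(R) == Fraction(7, 4)     # slices of sizes 2, 2, 1
```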
We shall also need a generalisation of the counting measure
$\nu$. For~$n\ge2$, let $\pi_{n+1}$ be the projection of $R^{(n)}$
onto~$X$ defined by
$\pi_{n+1}(x_0,x_1,\ldots, x_n)=x_{n}$, and let~$\nu^{(n)}$ be the
measure on~$R^{(n)}$ given by
\[\nu^{(n)}(E)=\int_X|\pi_{n+1}^{-1}(y)\cap E|\,d\mu(y).\]
Now consider the Hilbert space~$H=L^2(R,\nu)$, where~$\nu$ is the right
counting measure of~$R$.
\begin{definition}
We define a linear map
\[ L_0\colon \Sigma_0\to \B(H),\qquad L_0(a)\xi:=a*_\sigma \xi\]
for $a\in \Sigma_0$ and $\xi\in H$, where
\begin{equation} a *_\sigma \xi(x,z)=\sum_{y\sim x}
a(x,y)\xi(y,z)\sigma(x,y,z),\quad\text{for~$(x,z)\in
R$} \label{eq:starsigma}.\end{equation} As shown in~\cite{fm2},
this defines a bounded linear operator $L_0(a)\in \B(H)$ with
$\|L_0(a)\|\leq \bB(E)\|a\|_\infty$, where $E$ is the support
of~$a$.
\end{definition}
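For a finite set $X$ carrying the full relation $R=X\times X$ and the trivial cocycle $\sigma\equiv1$, the convolution~\eqref{eq:starsigma} is ordinary matrix multiplication of kernels. The following toy Python check (our own, with made-up integer kernels) also illustrates the identity $s(L_0(a))=a$ of~\eqref{eq_newn}, since convolving with $\chi_\Delta$ returns $a$:

```python
X = range(3)

def conv(a, xi):
    """a *_sigma xi for the full relation R = X x X and trivial cocycle sigma = 1."""
    return {(x, z): sum(a[(x, y)] * xi[(y, z)] for y in X)
            for x in X for z in X}

# Kernels on R, i.e. 3 x 3 matrices indexed by pairs of points.
a = {(x, y): x + 2 * y for x in X for y in X}
xi = {(x, y): 1 if x == y else 0 for x in X for y in X}   # chi_Delta

# Convolving with chi_Delta recovers a: a *_sigma chi_Delta = a.
assert conv(a, xi) == a
assert conv(xi, a) == a
```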
\begin{definition}
We define
\[ \M_0(R,\sigma)= L_0(\Sigma_0)\] to be the range
of~$L_0$. %
\end{definition}
\begin{definition}
The von Neumann algebra~$\M(R,\sigma)$ of the Feldman-Moore
relation $(X,\mu,R,\sigma)$ is
the von Neumann subalgebra of~$\B(H)$ generated
by~$\M_0(R,\sigma)$. We will abbreviate this as~$\M(R)$ or
simply~$\M$ where the context allows.
\end{definition}
Let~$\Delta=\{(x,x)\colon x\in X\}$ be the diagonal of~$R$, and let
$\chi_\Delta\colon R\to \bC$ be the characteristic function
of~$\Delta$. Note that $\chi_\Delta$ is a unit vector in~$H$, since
$\nu(\Delta)=\mu(X)=1$.
\begin{definition}
The \emph{symbol map} of~$R$ is the map
\[ s\colon \M\to H,\quad T\mapsto T\chi_\Delta.\]
The \emph{symbol set} for~$R$ is the range of~$s$:
\[ \Sigma(R,\sigma)=s(\M).\] We often abbreviate this
as~$\Sigma(R)$ or~$\Sigma$.
\end{definition}
Since~$\sigma$ is normalised, equation~\eqref{eq:starsigma} gives
\begin{equation}\label{eq_newn}
s(L_0(a))=a\quad\text{for $a\in \Sigma_0$,}
\end{equation}
where equality holds almost everywhere. So we may view the Borel
functions~$a\in \Sigma_0$ as elements of~$H=L^2(R,\nu)$. Moreover,
for~$T\in \M$ we have $\|s(T)\|_{\infty}\leq \|T\|$
by~\cite[Proposition~2.6]{fm2}. Hence
\begin{equation}\label{eq:sigma0-inclusion}
\Sigma_0\subseteq \Sigma\subseteq H\cap L^\infty(R,\nu).
\end{equation}
\begin{definition}
By~\cite{fm2}, $s$ is a bijection onto~$\Sigma$, and its inverse
\[ L\colon \Sigma\to \M\] extends~$L_0$. We call~$L$ the
\emph{inverse symbol map} of~$R$. In fact, for any $a\in \Sigma$ we
have $L(a)\xi=a*_\sigma \xi$ where $*_\sigma$ is the
convolution product formally defined by equation~\eqref{eq:starsigma}.
\end{definition}
If we equip~$\Sigma$ with the involution $a^*(x,y)=\overline{a(y,x)}$,
the pointwise sum and the convolution product~$*_\sigma$, then $s$ is
a $*$-isomorphism onto~$\Sigma$: for all $a,b\in \Sigma$ and
$\lambda,\mu\in\bC$, we have
\begin{align*}s(L(a)^*)(x,y)&=\overline{a(y,x)},\\ s(L(\lambda
a)+L(\mu b))&=\lambda a+\mu b\quad\text{and}\\ s(L(a)
L(b))&=a*_\sigma b.
\end{align*}
This is proven in~\cite{fm2}. By equation~(\ref{eq_newn}),
$\Sigma_0(R)$ is a $*$-subalgebra of~$\Sigma$, so $\M_0(R,\sigma)$ is
a $*$-subalgebra of~$\M(R,\sigma)$.
\begin{definition}
Given~$\alpha\in L^\infty(X,\mu)$, let~$d(\alpha)\colon R\to \bC$ be given by
\[ d(\alpha)(x,y)=
\begin{cases}
\alpha(x)&\text{if~$x=y$,}\\
0&\text{otherwise}.
\end{cases}\] Clearly $d(\alpha)\in \Sigma_0$. We write
$D(\alpha)=L(d(\alpha))\in\M$, and we define the \emph{Cartan masa
of~$R$} to be
\[ \A=\A(R)=\{ D(\alpha)\colon \alpha\in L^\infty(X,\mu)\}.\]
By~\cite{fm2},~$\A(R)$ is a Cartan masa in the von Neumann
algebra~$\M(R,\sigma)$.
Note that if $\xi\in H$ and $(x,y)\in R$, then
\begin{eqnarray*}
D(\alpha)\xi (x,y) & = & \sum_{z\sim x} d(\alpha)(x,z)\xi(z,y) \sigma(x,z,y) =
\alpha(x) \xi(x,y) \sigma(x,x,y)\\
& = & \alpha(x) \xi(x,y).
\end{eqnarray*}
Since this does not depend on the normalised $2$-cocycle~$\sigma$,
this shows that~$\A(R)$ does not depend on~$\sigma$.
\end{definition}
\begin{definition}
If~$\A$ is a Cartan masa in a von Neumann algebra~$\M$, then we say
that~$(\M,\A)$ is a \emph{Cartan pair}. If~$\M\subseteq \B(H)$
where~$H$ is a separable Hilbert space, then we say that~$(\M,\A)$
is a \emph{separably acting} Cartan pair.
We say that two Cartan pairs~$(\M_1,\A_1)$ and $(\M_2,\A_2)$ are
isomorphic, and write~$(\M_1,\A_1)\cong(\M_2,\A_2)$, if there is a
$*$-isomorphism of~$\M_1$ onto~$\M_2$ which carries $\A_1$
onto~$\A_2$.
A \emph{Feldman-Moore coordinatisation} of a Cartan pair~$(\M,\A)$
is a Feldman-Moore relation $(X,\mu,R,\sigma)$ so that
\[ (\M,\A)\cong (\M(R,\sigma),\A(R)).\]
\end{definition}
\begin{definition}\label{def:isorel}
For $i=1,2$, let $R_i=(X_i,\mu_i,R_i,\sigma_i)$ be a Feldman-Moore
relation with right counting measure $\nu_i$. We say that these are
isomorphic, and write $R_1\cong R_2$, if there is a Borel
isomorphism $\rho\colon X_1\to X_2$ so that
\begin{enumerate}
\item $\rho_*\mu_1$ is equivalent to~$\mu_2$, where
$\rho_*\mu_1(E)=\mu_1(\rho^{-1}(E))$ for $E\subseteq X_2$;
\item $\rho^2(R_1)=R_2$, up to a $\nu_{2}$-null set, where $\rho^2=\rho\times\rho$; and
\item $\sigma_2(\rho(x),\rho(y),\rho(z))=\sigma_1(x,y,z)$ for
a.e.~$(x,y,z)\in R_1^{(2)}$ with respect to $\nu_1^{(2)}$.
\end{enumerate}
\end{definition}
Our definition of the Schur multipliers of a von Neumann algebra~$\M$
with a Cartan masa~$\A$ will rest on:
\begin{theorem}[The Feldman-Moore coordinatisation~{\cite[Theorem~1]{fm2}}]
\label{thm:fmii1}
Every separably acting Cartan pair~$(\M,\A)$ has a Feldman-Moore
coordinatisation.
Moreover, if $R_i=(X_i,\mu_i,R_i,\sigma_i)$ is a Feldman-Moore
coordinatisation of~$(\M_i,\A_i)$ for $i=1,2$, then
\[ (\M_1,\A_1)\cong (\M_2,\A_2)\iff R_1\cong R_2.\]
\end{theorem}
\begin{remark}\label{rk:unitary}
Suppose that we have isomorphic Feldman-Moore relations $R_1$
and~$R_2$, with an isomorphism~$\rho\colon X_1\to X_2$ as in
Definition~\ref{def:isorel}. A calculation shows that if~$h\colon
X_2\to \bR$ is the Radon-Nikodym derivative of~$\rho_*\mu_1$ with
respect to~$\mu_2$, then the operator \[U\colon
L^2(R_2,\nu_2)\to L^2(R_1,\nu_1),\] given for $(x,y)\in
R_1$ and $f\in L^2(R_2,\nu_2)$ by
\[U(f)(x,y) = h(\rho(y))^{-1/2} f(\rho(x),\rho(y)),\]
is unitary.
Moreover, writing $L_i$ for the inverse
symbol map of~$R_i$, for $a\in \Sigma_0(R_1,\sigma_1)$ we have
\begin{equation}\label{eq:unitary-action}
U^*L_1(a)U=L_2( a\circ \rho^{-2})
\end{equation}
where
\[\rho^{-2}(u,v)=(\rho^{-1}(u),\rho^{-1}(v)),\quad(u,v)\in R_2.\] It
follows that
\[U^*\M(R_1,\sigma_1)U=\M(R_2,\sigma_2)\quad\text{and}\quad
U^*\A(R_1)U=\A(R_2),\] so conjugation by~$U$ implements an isomorphism
\[ (\M(R_1,\sigma_1),\A(R_1))\cong (\M(R_2,\sigma_2),\A(R_2))\] whose
existence is assured by Theorem~\ref{thm:fmii1}.
\end{remark}
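The isometry computation behind Remark~\ref{rk:unitary} can be sketched as
follows (recall that the right counting measure satisfies
$\int g\,d\nu_i=\int_{X_i}\sum_{x\sim y}g(x,y)\,d\mu_i(y)$). For
$f\in L^2(R_2,\nu_2)$,

```latex
\begin{align*}
\|U(f)\|_{L^2(R_1,\nu_1)}^2
&=\int_{X_1}\sum_{x\sim y} h(\rho(y))^{-1}\,\big|f(\rho(x),\rho(y))\big|^2\,d\mu_1(y)\\
&=\int_{X_2}\sum_{u\sim v} h(v)^{-1}\,|f(u,v)|^2\,d(\rho_*\mu_1)(v)
 =\int_{X_2}\sum_{u\sim v} |f(u,v)|^2\,d\mu_2(v)=\|f\|_{L^2(R_2,\nu_2)}^2,
\end{align*}
```

using the substitution $(u,v)=(\rho(x),\rho(y))$, which carries $R_1$ onto
$R_2$ up to a null set, and $d(\rho_*\mu_1)=h\,d\mu_2$; the analogous map in
the opposite direction inverts~$U$, so $U$ is unitary.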
\section{Algebraic preliminaries}\label{s_sap}
In this section, we collect some algebraic observations.
Fix a Feldman-Moore relation $R=(X,\mu,R,\sigma)$ with right counting
measure~$\nu$, let~$H=L^2(R,\nu)$, let~$\M=\M(R,\sigma)$ and
let~$\A=\A(R)$. Also let $\Sigma_0$ be the collection of left finite
functions on~$R$, and let~$s,L,\Sigma$ be the symbol
map, inverse symbol map and the symbol set of~$R$, respectively.
We can describe the bimodule action of~$\A$ on~$\M$ quite easily in
terms of the pointwise product of symbols.
\begin{definition}
For~$a,b\in L^\infty(R,\nu)$, let $ a\mathbin{\star} b$ be the pointwise product
of~$a$ and~$b$.
\end{definition}
\begin{definition}
For~$\alpha\in L^\infty(X,\mu)$ we write
\[c(\alpha)\colon R\to\bC,\quad (x,y)\mapsto
\alpha(x)\quad\text{and}\quad r(\alpha)\colon R\to\bC,\quad
(x,y)\mapsto \alpha(y).\]
\end{definition}
\begin{lemma}\label{lem:action}
For~$a\in \Sigma$ and~$\beta,\gamma\in L^\infty(X,\mu)$, we have
\[ D(\beta)L(a)D(\gamma)=L(c(\beta)\mathbin{\star} a\mathbin{\star} r(\gamma)).\]
\end{lemma}
\begin{proof}
The statement follows from the identity
$s\big(D(\beta)L(a)D(\gamma))=c(\beta)\mathbin{\star} a\mathbin{\star} r(\gamma)$;
its verification is straightforward, but we include it for completeness:
\begin{align*}
s\big(D(\beta)L(a)D(\gamma)\big)(x,y)
& = \big(D(\beta)L(a)D(\gamma)\chi_{\Delta}\big)(x,y) \\
& = \beta(x)\big(L(a)D(\gamma)\chi_{\Delta}\big)(x,y)\\
& = \beta(x)\sum_{z\sim x} a(x,z) \big(D(\gamma)\chi_{\Delta}\big)(z,y)\sigma(x,z,y)\\
& = \beta(x)\sum_{z\sim x} a(x,z) \gamma(z) \chi_{\Delta}(z,y)\sigma(x,z,y)\\
& = \beta(x) a(x,y) \gamma(y) \sigma(x,y,y)\\
& = \beta(x) a(x,y) \gamma(y) \\
&= \left(c(\beta)\mathbin{\star} a\mathbin{\star} r(\gamma)\right)(x,y).\qedhere
\end{align*}
\end{proof}
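In the matrix setting of Example~\ref{ex_bh} below, where $X=\bN$ and
$\sigma\equiv1$, identifying each operator with its matrix,
Lemma~\ref{lem:action} reduces to the familiar observation that conjugation
by diagonal matrices multiplies the entries by a rank-one function:

```latex
\[\big(D(\beta)\,L(a)\,D(\gamma)\big)_{ij}=\beta(i)\,a(i,j)\,\gamma(j)
 =\big(c(\beta)\mathbin{\star} a\mathbin{\star} r(\gamma)\big)(i,j),\qquad i,j\in\bN.\]
```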
Recall the standard way to associate an inverse semigroup to $R$.
Suppose that~$f\colon \delta\to \rho$ is a Borel isomorphism
between two Borel subsets~$\delta,\rho\subseteq X$. Such a map will be
called a \emph{partial Borel isomorphism of~$X$}. If~$g\colon
\delta'\to \rho'$ is another partial Borel isomorphism of~$X$, then we
can (partially) compose them as follows:
\[ g\circ f\colon f^{-1}(\rho\cap \delta')\to g(\rho\cap
\delta'),\quad x\mapsto g(f(x)).\] Let us write $\Gr f=\{(x,f(x))\colon
\text{$x$ is in the domain of~$f$}\}$ for the graph of~$f$. Under
(partial) composition, the set~\[ \I(R)=\{f\colon \text{$f$ is a partial Borel
isomorphism of~$X$ with $\Gr f\subseteq R$}\}\] is an inverse
semigroup, where the inverse of~$f\colon\delta\to\rho$ in~$\I(R)$ is
the inverse function~$f^{-1}\colon\rho\to \delta$.
If~$f\in\I(R)$, then~$\bB(\Gr f)\leq 2$, so~$\chi_{\Gr f}\in
\Sigma_0$. We define an operator~$V(f)\in \M$ by
\[ V(f)=L(\chi_{\Gr f}).\]
If~$\delta$ is a Borel subset of~$X$, we will write
$P(\delta)=V(\id_{\delta})$ where~$\id_{\delta}$ is the identity map
on the Borel set~$\delta\subseteq X$. Note that $P(\delta) =
D(\chi_{\delta})$.
\goodbreak
\begin{lemma}\label{l_pin}\leavevmode
\begin{enumerate}
\item If~$f\in \I(R)$, then $V(f)^*=V(f^{-1})$.
\item If~$f\in \I(R)$ and $\delta,\rho$ are Borel subsets of~$X$,
then \[P(\delta)V(f)P(\rho)=V(\id_\rho\circ f\circ \id_\delta).\]
\item If~$\delta$ is a Borel subset of~$X$, then $P(\delta)$ is a
projection in~$\A$, and every projection in~$\A$ is of this form.
\item If~$\rho$ is a Borel subset of~$X$, then $V(f) P(\rho) =
P(f^{-1}(\rho))V(f)$.
\item \label{pisom} If $f : \delta\rightarrow\rho$ is in~$\I(R)$,
then~$V(f)$ is a partial isometry %
with initial projection $P(\rho)$ and final projection $P(\delta)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) It is straightforward that $\chi_{\Gr (f^{-1})}=(\chi_{\Gr f})^*$
(where the ${}^*$ on the right hand side is the involution
on~$\Sigma$ discussed in~\sect\ref{s_prel} above). Since~$L$ is a $*$-isomorphism,
$V(f^{-1})=V(f)^*$.
(2) Note that
\[ (\delta\times X)\cap \Gr f\cap (X\times \rho) = \Gr(\id_\rho\circ f\circ \id_\delta),\]
so
\[ c(\chi_\delta)\mathbin{\star} \chi_{\Gr f}\mathbin{\star}
r(\chi_\rho)=\chi_{\Gr(\id_\rho\circ f\circ \id_\delta)}.\]
By Lemma~\ref{lem:action},
\begin{equation*}
P(\delta)V(f)P(\rho)
=L(c(\chi_\delta)\mathbin{\star} \chi_{\Gr f}\mathbin{\star} r(\chi_\rho))
= V(\id_\rho\circ f\circ \id_\delta).
\end{equation*}
(3) Taking $f=\id_\delta$ in~(1) shows
that~$P(\delta)=V(\id_\delta)$ is self-adjoint; and taking
$f=\id_X$ and $\delta=\rho$ in~(2), noting that $V(\id_X)=P(X)=I$,
shows that $P(\delta)$ is idempotent.
So $P(\delta)$ is a projection. Since
$P(\delta)=D(\chi_{\delta})$, we have $P(\delta)\in
\A$. Conversely, since~$L$ is a $*$-isomorphism, any
projection~$P$ in~$\A$ is equal to~$D(\alpha)$ for some
projection~$\alpha\in L^\infty(X,\mu)$. So $\alpha=\chi_\delta$ for some Borel
set~$\delta\subseteq X$, and hence $P = P(\delta)$.
(4) Since $\id_\rho\circ f=f\circ\id_{f^{-1}(\rho)}$ and~$P(X)=I$,
this follows by taking~$\delta=X$ in~(2).
(5) Using the fact that $\sigma$ is normalised, a simple
calculation yields
\[ \chi_{\Gr f}*_\sigma \chi_{\Gr f^{-1}} = \chi_{\Gr(\id_\delta)}.\]
Applying the $*$-isomorphism~$L$ and using~(1) gives
$V(f)V(f)^*=P(\delta)$
and replacing~$f$ with $f^{-1}$ gives
$V(f)^*V(f)=P(\rho)$.
\end{proof}
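For completeness, the simple calculation invoked in the proof of~(5) runs as
follows: for $(x,z)\in R$,

```latex
\[\big(\chi_{\Gr f}*_\sigma \chi_{\Gr f^{-1}}\big)(x,z)
 =\sum_{y\sim x}\chi_{\Gr f}(x,y)\,\chi_{\Gr f^{-1}}(y,z)\,\sigma(x,y,z),\]
```

and the summand is nonzero only when $y=f(x)$ (forcing $x\in\delta$) and
$z=f^{-1}(y)=x$, in which case $\sigma(x,f(x),x)=1$ by the normalisation
of~$\sigma$; so the sum equals $\chi_{\Gr(\id_\delta)}(x,z)$.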
\begin{proposition}\label{prop:bimod-symb}
Let~$\Phi\colon \M\to \M$ be a linear $\A$-bimodule map.
\begin{enumerate}
\item If $f\in \I(R)$ and~$V=V(f)$, then $s(\Phi(V))=
\chi_{\Gr f} \mathbin{\star} s(\Phi(V))$.
\item For $i=1,2$, let $f_i\colon \delta_i\to \rho_i$ be in~$\I(R)$
and let~$V_i=V(f_i)$. If $G=\Gr(f_1)\cap \Gr(f_2)$, then
\[ \chi_G\mathbin{\star} s(\Phi(V_1))= \chi_G\mathbin{\star} s(\Phi(V_2)).\]
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let~$f\in \I(R)$ and let $\rho\subseteq X$ be a Borel set.
Since $\Phi$ is an $\A$-bimodule map, Lemma~\ref{l_pin} implies
that
\begin{align*}V^*\Phi(V)P(\rho) &= V^*\Phi(V P(\rho))
= V^*\Phi(P(f^{-1}(\rho))V) \\&=
V^*P(f^{-1}(\rho))\Phi(V)\\&=
(P(f^{-1}(\rho))V)^*\Phi(V) \\&=
(VP(\rho))^*\Phi(V) = P(\rho) V^*\Phi(V).
\end{align*}
Hence $V^*\Phi(V)$ commutes with all projections in $\A$, and
since $\A$ is a masa, $V^*\Phi(V)\in \A$. If~$\delta$ is the
domain of~$f$, then by Lemma~\ref{l_pin}(\ref{pisom}), $P(\delta)$ is
the final projection of~$V$, and therefore
\[\Phi(V)=\Phi(P(\delta)V)=P(\delta)\Phi(V)=VV^*\Phi(V)\in
V\A.\] So $\Phi(V)=VD(\gamma)$ for some $\gamma\in
L^\infty(X,\mu)$. By Lemma~\ref{lem:action},
\[ s(\Phi(V))=s(VD(\gamma))=s(L(\chi_{\Gr
f})D(\gamma))=\chi_{\Gr f}\mathbin{\star} d(\gamma),\] so $s(\Phi(V))= \chi_{\Gr f} \mathbin{\star}
s(\Phi(V))$.
(2) Let~$\delta=\pi_1(G)$ where~$\pi_1(x,y)=x$ for $(x,y)\in
R$. It is easy to see that $\chi_G=c(\chi_\delta)\mathbin{\star} \chi_{\Gr
f_i}$ for $i=1,2$. By part~(1), $s(\Phi(V_i))=\chi_{\Gr f_i}\mathbin{\star}
s(\Phi(V_i))$. Hence by Lemmas~\ref{lem:action}
and~\ref{l_pin},
\begin{align*}
\chi_G \mathbin{\star} s(\Phi(V_i))&=c(\chi_\delta)\mathbin{\star} \chi_{\Gr f_i}\mathbin{\star}
s(\Phi(V_i))\\&=c(\chi_\delta)\mathbin{\star}
s(\Phi(V_i)) = s(P(\delta)\Phi(V_i))\\&=
s(\Phi(P(\delta)V_i)) = s(\Phi(V(f_i\circ \id_\delta))).
\end{align*}
The definition of~$\delta$ ensures that $f_1\circ
\id_\delta=f_2\circ \id_\delta$, so $\chi_G \mathbin{\star} s(\Phi(V_1)) =
\chi_G\mathbin{\star} s(\Phi(V_2))$.
\end{proof}
\section{Schur multipliers: definition and characterisation}\label{s_sm}
Let $(X,\mu,R,\sigma)$ be a Feldman-Moore coordinatisation of a
separably acting Cartan pair~$(\M,\A)$, and let $\Sigma_0,\Sigma$
be as in Section~\ref{s_prel}.
In this section we define the class $\fS(R,\sigma)$ of Schur
multipliers of the von Neumann algebra~$\M$ with respect to the
Feldman-Moore relation~$R$. The main result in this section,
Theorem~\ref{th_main}, characterises these multipliers as normal
bimodule maps. From this it follows that~$\fS(R,\sigma)$ depends only
on the Cartan pair~$(\M,\A)$. We also show that isomorphic
Feldman-Moore relations yield isomorphic classes of Schur multipliers.
\begin{definition}
\label{d_sh}
Let~$R=(X,\mu,R,\sigma)$ be a Feldman-Moore coordinatisation of a
Cartan pair~$(\M,\A)$. We say that $\phi\in L^\infty(R,\nu)$ is a
\emph{Schur multiplier of~$(\M,\A)$ with respect to~$R$}, or simply
a \emph{Schur multiplier of~$\M$}, if
\[ a\in \Sigma(R,\sigma)\implies \phi\mathbin{\star} a\in \Sigma(R,\sigma) \]
where~$\mathbin{\star}$ is the pointwise product on~$L^\infty(R,\nu)$.
We then write
\[ m(\phi)\colon \Sigma(R,\sigma)\to \Sigma(R,\sigma),\quad a\mapsto
\phi\mathbin{\star} a\] and
\[ M(\phi)\colon \M\to \M,\quad T\mapsto L (\phi\mathbin{\star} s(T)).\]
\end{definition}
Set
\[\fS = \fS(R,\sigma) = \{\phi\in L^\infty(R,\nu)\colon \text{$\phi$
is a Schur multiplier of $\M$}\}.\]
It is clear from Definition~\ref{d_sh} that $\fS(R,\sigma)$ is an algebra
with respect to pointwise addition and multiplication of functions.
\begin{example}\label{ex_bh}
For a suitable choice of Feldman-Moore coordinatisation,
$\fS(R,\sigma)$ is precisely the set of classical Schur multipliers
of~$\B(\ell^2)$. Indeed, let $X = \bN$, equipped with the (atomic)
probability measure $\mu$ given by $\mu(\{i\}) = p_i$, $i\in \bN$,
and set $R = X\times X$. If~$p_i>0$ for every~$i\in \bN$,
then~$\mu$ is quasi-invariant under~$R$. Let $\sigma$ be the trivial
$2$-cocycle $\sigma \equiv 1$. The right counting measure for the
Feldman-Moore relation~$(X,\mu,R,\sigma)$ is $\nu=\kappa\times \mu$
where~$\kappa$ is counting measure on~$\bN$. Indeed,
for~$E\subseteq R$,
\[\nu(E)=\sum_{y\in\bN} |E_y|\, \mu(\{y\})
=\sum_{y\in\bN} \kappa(E_y)\,\mu(\{y\})= (\kappa\times \mu)(E).\]
Hence $L^2(R,\nu)$ is canonically isometric to the Hilbert space
tensor product $\ell^2\otimes\ell^2(\bN,\mu)$. Let~$T\in
\M(R,\sigma)$. For an elementary tensor $\xi\otimes\eta\in
L^2(R,\nu)$, we have
\[T(\xi\otimes\eta) (i,j) = L(s(T))(\xi\otimes \eta)(i,j) =
\sum_{k=1}^{\infty} s(T)(i,k)\xi(k)\eta(j) = (A_{s(T)}\xi\otimes
\eta)(i,j)\] where $A_a\in \B(\ell^2)$ is the operator with
matrix~$a\colon \bN\times\bN\to \bC$. It follows that the map
$T\mapsto A_{s(T)}\otimes I$ is an isomorphism between $\M(R,\sigma)$ and
$\B(\ell^2)\otimes I$, so
\[\Sigma(R,\sigma)=\{a\colon\bN\times\bN\to \bC\mid \text{$a$ is the matrix of~$A$
for some~$A\in \B(\ell^2)$}\}.\] In particular, a function $\phi :
\bN\times\bN\rightarrow \bC$ is in~$\fS(R,\sigma)$ if and only
if $\phi$ is a (classical) Schur multiplier of~$\B(\ell^2)$.
\end{example}
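To make Example~\ref{ex_bh} concrete, here is a small numerical sketch (a
finite truncation to $n\times n$ matrices, using NumPy; the library and the
toy data are of course not part of the formal development). It checks that
Schur multiplication by a rank-one function $\phi(i,j)=\alpha(i)\beta(j)$
agrees with conjugation by the diagonal operators $D(\alpha)$ and $D(\beta)$,
as in Lemma~\ref{lem:action}:

```python
import numpy as np

# Finite truncation of Example ex_bh: Schur multiplication on B(l^2),
# restricted to n x n matrices, is the entrywise (Hadamard) product.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))        # the "matrix of an operator"

# A rank-one multiplier phi(i, j) = alpha(i) * beta(j).
alpha = rng.standard_normal(n)
beta = rng.standard_normal(n)
phi = np.outer(alpha, beta)

# Schur multiplication phi * A coincides with D(alpha) A D(beta),
# i.e. conjugation by diagonal operators (Lemma lem:action).
schur = phi * A
diag_action = np.diag(alpha) @ A @ np.diag(beta)
assert np.allclose(schur, diag_action)

# Hence the operator (spectral) norm of phi * A is controlled by
# max|alpha| * max|beta| times the operator norm of A.
bound = np.abs(alpha).max() * np.abs(beta).max()
assert np.linalg.norm(schur, 2) <= bound * np.linalg.norm(A, 2) + 1e-10
```

In particular, such rank-one functions are always Schur multipliers, with
multiplier norm at most $\|\alpha\|_\infty\|\beta\|_\infty$.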
\begin{example}\label{ex_d}
If~$(X,\mu,R,\sigma)$ is a Feldman-Moore relation and $\Delta$ is
the diagonal of~$R$, then $\chi_{\Delta}\in \fS(R,\sigma)$ since for
any~$a\in L^\infty(R,\nu)$, the function
\[\chi_\Delta\mathbin{\star} a=d(x\mapsto a(x,x))\]
belongs to $\Sigma_0$ and hence to $\Sigma$.
\end{example}
More generally:
\begin{proposition}\label{prop:sigma0}
For any Feldman-Moore relation~$(X,\mu,R,\sigma)$, we have
$\Sigma_0(R,\sigma)\subseteq \fS(R,\sigma)$.
\end{proposition}
\begin{proof}
Let~$\phi\in \Sigma_0(R,\sigma)$ and let~$a\in \Sigma(R,\sigma)$.
Recall that~$a\in L^\infty(R,\nu)$, so we can
choose a bounded Borel function~$\alpha\colon R\to\bC$ with
$\alpha=a$ almost everywhere with respect to~$\nu$. The function
$\phi\mathbin{\star} \alpha$ is then bounded, and its support is a subset of the
support of~$\phi$, which is band limited. Hence $\phi\mathbin{\star} \alpha\in
\Sigma_0$, and $\phi\mathbin{\star} \alpha=\phi\mathbin{\star} a$ almost everywhere. By
equation~(\ref{eq:sigma0-inclusion}), we have $\phi\mathbin{\star} a\in
\Sigma(R,\sigma)$, so $\phi\in \fS(R,\sigma)$.
\end{proof}
We now embark on the proof of our main result.
\begin{lemma}\label{lem:cgt}
Let~$\fX$ be a Banach space, let~$V$ be a complex normed vector
space, and let~$\alpha,\beta$ and $h$ be linear maps so that the
following diagram commutes:
\begin{diagram}
\fX & \rTo^{h} & V \\
\dTo^{\alpha{}} & & \dTo_{\beta{}}\\
\fX & \rTo^h & V
\end{diagram}
If~$h$ and $\beta$ are continuous and~$h$ is injective, then~$\alpha$
is continuous.
\end{lemma}
\begin{proof}
If $x_n\in \fX$ with $x_n\to 0$ and $\alpha(x_n)\to y$ as~$n\to \infty$ for some $y\in
\fX$, then
\begin{align*} h(y)=h(\lim_{n\to \infty} \alpha(x_n))&=\lim_{n\to
\infty}h(\alpha(x_n))
\\
&=\lim_{n\to \infty}\beta(h(x_n))=\beta(h(\lim_{n\to
\infty}x_n))=\beta(h(0))=0.
\end{align*}
Since~$h$ is injective, $y=0$ and $\alpha$ is continuous by the
closed graph theorem.
\end{proof}
If~$\phi$ is a Schur multiplier of~$\M$, then we have the following
commutative diagram of linear maps:
\begin{diagram}
\M & \pile{\lTo^L\\ \rTo_s} & \Sigma(R,\sigma) \\
\dTo^{M(\phi)} & & \dTo_{m(\phi)}\\
\M & \pile{\lTo^L\\ \rTo_s} & \Sigma(R,\sigma)
\end{diagram}
We now record some continuity properties of this diagram.
\begin{proposition}\label{prop:continuity}\leavevmode
Let~$(X,\mu,R,\sigma)$ be a Feldman-Moore relation,
let~$(\M,\A)=(\M(R,\sigma),\A(R))$, let~$\H=L^2(R,\nu)$ where~$\nu$
is the right counting measure of~$R$, and write
$\Sigma=\Sigma(R,\sigma)$. Let~$\phi\in \fS(R,\sigma)$.
\begin{enumerate}
\item $m(\phi)$ is continuous
as a map on $(\Sigma,\|\cdot\|_\infty)$.
\item \label{s-contraction} $s$ is a contraction from
$(\M,\|\cdot\|_{\B(\H)})$ to $(\Sigma,\|\cdot\|_\infty)$.
\item \label{Mphi-cts} $M(\phi)$ is norm-continuous.
\item $m(\phi)$ is continuous as a map on $(\Sigma,\|\cdot\|_2)$.
\item $s$ is a contraction from $(\M,\|\cdot\|_{\B(\H)})$
to $(\Sigma, \|\cdot\|_2)$.
\item\label{lem-obvious} $s$ is continuous from~$(\M,{\rm SOT})$ to
$(\Sigma,\|\cdot\|_2)$, where SOT is the strong operator topology
on~$\M$.
\end{enumerate}
\end{proposition}
\begin{proof}\leavevmode
(1) and~(4) follow from the fact that~$\phi$ is essentially bounded.
(2) See~\cite[Proposition~2.6]{fm2}.
(3) This follows from~(1), (\ref{s-contraction}) and Lemma~\ref{lem:cgt}.
(5) follows from the fact that $\chi_\Delta$ is a unit vector in~$\H$.
(6) Let $\{T_\lambda\}$ be a net in~$\M$ which converges
in the SOT to~$T\in\M$. Then
$s(T_\lambda)=T_\lambda(\chi_\Delta)\to T(\chi_\Delta)=s(T)$ in
$\|\cdot\|_2$.\qedhere
\end{proof}
If~$R$ is a Feldman-Moore relation with right counting measure~$\nu$,
let~$\nu^{-1}$ be the measure on~$R$ given by
\[\nu^{-1}(E)=\nu(\{(y,x)\colon (x,y)\in E\}).\] We will need the
following facts, which are established in~\cite{fm2}.
\begin{proposition}\leavevmode\label{prop:nuinverse}
\begin{enumerate}
\item $\nu$ and $\nu^{-1}$ are mutually absolutely continuous;
\item if~$d=\frac{d\nu^{-1}}{d\nu}$, then the
set~$d^{1/2}\Sigma_0=\{d^{1/2}a\colon a\in \Sigma_0\}$ of
\emph{right finite} functions on~$R$ has the property that
for~$b\in d^{1/2}\Sigma_0$, the formula
\[ R_0(b)\xi=\xi *_\sigma b,\quad \xi\in H \] defines a bounded
linear operator~$R_0(b)\in\B(H)$; and
\item for~$b\in d^{1/2}\Sigma_0$, we have $R_0(b)\in\M'$ and
$R_0(b)(\chi_\Delta)=b$.
\end{enumerate}
\end{proposition}
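For instance, in the coordinatisation of Example~\ref{ex_bh}, where
$\nu=\kappa\times\mu$ and $\mu(\{i\})=p_i$, we have
$\nu(E)=\sum_{(x,y)\in E}p_y$ and $\nu^{-1}(E)=\sum_{(x,y)\in E}p_x$, so

```latex
\[ d(x,y)=\frac{d\nu^{-1}}{d\nu}(x,y)=\frac{p_x}{p_y},\qquad (x,y)\in\bN\times\bN.\]
```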
We will now see that the SOT-convergence of a \emph{bounded} net
in~$\M$ is equivalent to the $\|\cdot\|_2$ convergence of its image
under~$s$.
\begin{proposition}\label{p_sconv}
Let $\{T_\lambda\}\subseteq \M(R)$ be a norm bounded net.
\begin{enumerate}
\item $\{T_\lambda\}$ converges in the {\rm SOT} if and only if
$\{s(T_\lambda)\}$ converges with respect to~$\|\cdot\|_2$.
\item For~$T\in\M$, we have
\[ T_\lambda\to_{{\rm SOT}} T\iff s(T_\lambda)\to_{\|\cdot\|_2} s(T).\]
\end{enumerate}
\end{proposition}
\begin{proof} (1) The ``only if'' direction follows from
Proposition~\ref{prop:continuity}(\ref{lem-obvious}).
Conversely, suppose that $s(T_\lambda)=T_\lambda(\chi_{\Delta})$
converges with respect to~$\|\cdot\|_2$ on~$H$. For a right
finite function $b\in d^{1/2}\Sigma_0$, we have
\[R_0(b)T_\lambda (\chi_{\Delta}) = T_\lambda R_0(b)(\chi_\Delta) =
T_\lambda (b)\] which converges
in~$H$. By~\cite[Proposition~2.3]{fm2}, the set of right finite
functions is dense in $H$. Since~$\{T_\lambda\}$ is bounded, we
conclude that $T_\lambda(\xi)$ converges for every $\xi\in H$. So we
may define a linear operator~$T\colon H\to H$
by~$T(\xi)=\lim_\lambda T_\lambda(\xi)$; then~$\|T(\xi)\|\leq
\sup_\lambda \|T_\lambda\| \|\xi\|$, so~$T\in\B(H)$. By
construction, $T_\lambda\to T$ strongly.
(2) The direction ``$\implies$'' follows from
Proposition~\ref{prop:continuity}(\ref{lem-obvious}). For the converse, apply~(1) to see that
if~$s(T_\lambda)\to_{\|\cdot\|_2} s(T)$, then $T_\lambda \to_{{\rm SOT}} S$
for some~$S\in \M$. Hence $s(T_\lambda)\to _{\|\cdot\|_2} s(S)$; therefore
$s(S)=s(T)$ and so $S=T$.
\end{proof}
The following argument is taken from the proof
of~\cite[Corollary~2.4]{ps}.
\begin{lemma}\label{lem-popsmith}
Let~$H$ be a separable Hilbert space and $\M\subseteq\B(H)$ be a von
Neumann algebra. Suppose that~$\Phi \colon\M\to\M$ is a bounded
linear map which is strongly sequentially continuous on bounded
sets, meaning that for every~$r>0$, whenever $X,X_1,X_2,X_3,\dots$
are operators in~$\M$ of norm at most~$r$ such that $X_n\to _{\rm
SOT}X$ as $n\to \infty$, we have $\Phi(X_n)\to_{{\rm
SOT}}\Phi(X)$. Then $\Phi$ is normal.
\end{lemma}
\begin{proof}
For~$\xi,\eta\in H$, let~$\omega_{\xi,\eta}$ be the vector
functional in~$\M_*$ given by $\omega_{\xi,\eta}(X)=\langle
X\xi,\eta\rangle$, $X\in \M$, and let
\[K=\ker\Phi^*(\omega_{\xi,\eta})\qtext{and} K_r=K\cap \{X\in \M\colon
\|X\|\leq r\},\ \ \ \text{for $r>0$}.\] Let~$r>0$. Since~$H$ is
separable, $\M_*$ is separable and so the strong operator topology
is metrizable on the bounded set~$K_r$. From the sequential strong
continuity of~$\Phi$ on $\{ X\in\M\colon \|X\|\leq r\}$, it follows
that~$K_r$ is strongly closed. Since~$K_r$ is bounded and convex,
each $K_r$ is ultraweakly closed. By the Krein-Smulian theorem, $K$
is ultraweakly closed, so $\Phi^*(\omega_{\xi,\eta})$ is ultraweakly
continuous; that is, it lies in $\M_*$. The linear span of
$\{\omega_{\xi,\eta}\colon \xi,\eta\in H\}$ is (norm) dense in~$\M_*$, so
this shows that $\Phi^*(\M_*)\subseteq\M_*$.
Define $\Psi\colon\M_*\to\M_*$ by
$\Psi(\omega)=\Phi^*(\omega)$. Then $\Phi=\Psi^*$, so $\Phi$ is
normal.
\end{proof}
\begin{remark}\label{rk:graph-partition}
Let~$R$ be a Feldman-Moore relation. It follows from the first part
of the proof of \cite[Theorem~1]{fm1} that there is a countable
family~$\{f_j\colon \delta_j\to\rho_j\colon j\ge0\}\subseteq \I(R)$
such that $\{\Gr f_j\colon j\ge0\}$ is a partition of~$R$. Indeed,
it is shown there that there are Borel sets~$\{D_j\colon j\ge1\}$
which partition $R\setminus\Delta$ so that $D_j=\Gr f_j$, where
$f_j\colon \pi_{1}(D_j)\to \pi_2(D_j)$ is a one-to-one map. Since
$\Gr f_j$ and $\Gr(f_j^{-1})$ are both Borel sets, each~$f_j$ is
in~$\I(R)$, and we can take $f_0$ to be the identity mapping on~$X$.
\end{remark}
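For example, in the case $X=\bN$, $R=\bN\times\bN$ of Example~\ref{ex_bh},
the off-diagonal part of~$R$ is partitioned by its diagonals: take
$f_0=\id_{\bN}$ and, for $k\ge1$,

```latex
\[ f_{2k-1}\colon \bN\to\{n\in\bN\colon n>k\},\quad x\mapsto x+k,
\qquad f_{2k}=f_{2k-1}^{-1},\]
```

so that $\Gr f_{2k-1}=\{(x,x+k)\colon x\in\bN\}$ and
$\Gr f_{2k}=\{(x+k,x)\colon x\in\bN\}$, and these graphs together
with~$\Delta$ partition~$R$.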
\begin{theorem}\label{th_main}
The set $\{M(\phi) : \phi\in \fS\}$ coincides with the set of
normal $\A$-bimodule maps on $\M$.
\end{theorem}
\begin{proof}
Let~$\phi\in \fS$. %
If~$a\in \Sigma$ and~$\beta,\gamma\in L^\infty(X,\mu)$, then by
Lemma~\ref{lem:action},
\begin{align*}
M(\phi)\big(D(\beta)L(a)D(\gamma)\big)&=M(\phi)\big(L(c(\beta) \mathbin{\star}
a\mathbin{\star} r(\gamma))\big)\\&=L(c(\beta)\mathbin{\star} \phi\mathbin{\star} a\mathbin{\star}
r(\gamma)) = D(\beta)M(\phi)(L(a))D(\gamma)
\end{align*}
and~$M(\phi)$ is plainly linear, so $M(\phi)$ is an~$\A$-bimodule
map.
Let $r>0$ and let $T_n,T\in \M$ for $n\in\bN$ with
$\|T_n\|,\|T\|\leq r$ and $T_n\to_{{\rm SOT}}T$.
By Proposition~\ref{prop:continuity}(\ref{lem-obvious}), $s(T_n)\to_{\|\cdot\|_2} s(T)$,
so by the $\|\cdot\|_2$ continuity of~$m(\phi)$,
\[ m(\phi)(s(T_n))\to_{\|\cdot\|_2} m(\phi)(s(T));\]
thus,
\[ s(M(\phi)(T_n))\to_{\|\cdot\|_2} s(M(\phi)(T)).\]
By Proposition~\ref{p_sconv},
\[ M(\phi)(T_n)\to_{{\rm SOT}} M(\phi)(T).\] Since $L^2(R,\nu)$ is
separable, Proposition~\ref{prop:continuity}(\ref{Mphi-cts}) and
Lemma~\ref{lem-popsmith} show that $M(\phi)$ is normal.
\medskip\goodbreak
Now suppose that~$\Phi$ is a normal~$\A$-bimodule map on~$\M$. By
Remark~\ref{rk:graph-partition}, we may write $R$ as a disjoint
union $R = \bigcup_{k=1}^{\infty} F_k$, where~$F_k=\Gr f_k$
and~$f_k\in \I(R)$, $k\in \bN$. Let
\[\phi : R\rightarrow \bC,\quad \phi(x,y) = \sum_{k\ge1}
s(\Phi(V(f_k)))(x,y).\]
Note that $\phi$ is well-defined since the sets
$F_k$ are pairwise disjoint and, by Proposition~\ref{prop:bimod-symb}(1),
$s(\Phi(V(f_k))) = s(\Phi(V(f_k)))\mathbin{\star} \chi_{F_k}$.
It now easily follows that $\phi$ is measurable. Moreover,
since each~$V(f_k)$ is a partial
isometry (see Lemma~\ref{l_pin}(\ref{pisom})),
by \cite[Proposition~2.6]{fm2} we have
\[ \|\phi\|_\infty=\sup_{k\ge1} \|s(\Phi(V(f_k)))\|_\infty
\leq\sup_{k\ge1} \|\Phi(V(f_k))\|\leq \|\Phi\|;\]
thus, $\phi$ is essentially bounded.
We claim that $s(\Phi(T))=\phi\mathbin{\star} s(T)$ for every~$T\in\M$. First we
consider the case $T=V(g)$ where $g\in \I(R)$. If we write $g_1=g$,
then for~$m\ge2$ we can find $g_m\in \I(R)$ with graph $G_m=\Gr g_m$
so that $R$ is the disjoint union $R=\bigcup_{m\ge1} G_m$. For
example, we can define~$g_m$ to be the partial Borel isomorphism
whose graph is~$F_{m-1}\setminus G_1$. Now let $\psi(x,y) =
\sum_{m\ge1} s(\Phi(V(g_m)))(x,y)$, $(x,y)\in R$. By
Proposition~\ref{prop:bimod-symb}(2), we have $\phi\mathbin{\star} \chi_{F_k\cap
G_m}=\psi\mathbin{\star} \chi_{F_k\cap G_m}$ for every~$k,m\ge1$, so
$\phi=\psi$. In particular, \[s(\Phi(V(g_1)))=\psi\mathbin{\star}
\chi_{G_1}=\phi\mathbin{\star}\chi_{G_1}=\phi\mathbin{\star} s(V(g_1)).\] %
Hence if~$T$ is in the left $\A$-module $\V$ generated
by~$\{V(f)\colon f\in \I(R)\}$, then $s(\Phi(T))=\phi\mathbin{\star} s(T)$.
On the other hand, by~\cite[Proposition~2.3]{fm2},
$\V=\M_0(R,\sigma)$ and hence
$\V$ is a strongly dense $*$-subalgebra of~$\M$.
Now let $T\in \M$. By Kaplansky's Density Theorem, there exists a
bounded net $\{T_\lambda\}\subseteq \V$ such that
$T_\lambda\rightarrow T$ strongly. For every $\lambda$, we have that
\[ s(\Phi(T_\lambda))=\phi\mathbin{\star} s(T_\lambda).\] By
Proposition~\ref{prop:continuity}(\ref{lem-obvious}),
$s(T_\lambda)\rightarrow_{\|\cdot\|_2} s(T)$ and, since $\phi\in
L^{\infty}(R)$, we have \[\phi\mathbin{\star}
s(T_\lambda)\rightarrow_{\|\cdot\|_2} \phi\mathbin{\star} s(T).\] On the other
hand, since $\Phi$ is normal, $\Phi(T_\lambda)\to \Phi(T)$
ultraweakly. Normal maps are bounded, so
$\{\Phi(T_\lambda)\}$ is a bounded net in~$\M$. By
Proposition~\ref{p_sconv}, $\Phi(T_\lambda)$ is strongly convergent.
Thus, $\Phi(T_\lambda)\to \Phi(T)$ strongly. Since $\Phi(T)\in \M$,
Proposition~\ref{p_sconv} yields
\[s(\Phi(T_\lambda))\to_ {\|\cdot\|_2} s(\Phi(T)).\] By uniqueness
of limits, $\phi\mathbin{\star} s(T)=s(\Phi(T))$. In particular, $\phi\mathbin{\star}
s(T)\in \Sigma$ so $\phi$ is a Schur multiplier, and $\Phi(T)=
L(\phi\mathbin{\star} s(T))=M(\phi)(T)$. It follows that $\Phi=
M(\phi)$.
\end{proof}
\begin{remark}\label{r_autcb}
The authors are grateful to Adam Fuller and David Pitts for bringing
the following to our attention. If~$(\M,\A)$ is a Cartan pair,
then~$\A$ is norming for~$\M$ in the sense of~\cite{pss},
by~\cite[Corollary 1.4.9]{cpz}. Hence by~\cite[Theorem 2.10]{pss},
if $\phi$ is a Schur multiplier, then the map $M(\phi)$ is competely
bounded with~$\|M(\phi)\|_{\cb}=\|M(\phi)\|$.
\end{remark}
We now show that up to isomorphism, the set of Schur multipliers of a
Cartan pair with respect to a Feldman-Moore coordinatisation~$R$
depends on~$(\M,\A)$, but not on~$R$.
\begin{proposition}\label{p_shpre}
Let $(X_i,\mu_i,R_i,\sigma_i)$, $i = 1,2$, be isomorphic
Feldman-Moore relations and let $\rho : X_1\to X_2$ be an
isomorphism from $R_1$ onto $R_2$. Then $\tilde{\rho}\colon
a\mapsto a\circ \rho^{-2}$ is a bijection
from~$\Sigma(R_1,\sigma_1)$ onto $\Sigma(R_2,\sigma_2)$, and an
isometric isomorphism from $\fS(R_1,\sigma_1)$ onto
$\fS(R_2,\sigma_2)$.
\end{proposition}
\begin{proof}
It suffices to show that $\tilde
\rho^{-1}(\Sigma(R_2,\sigma_2))\subseteq
\Sigma(R_1,\sigma_1)$. Indeed, by symmetry we would then have
$\tilde \rho(\Sigma(R_1,\sigma_1))\subseteq \Sigma(R_2,\sigma_2)$
and could conclude that these sets are equal. Since $\tilde \rho$ is
an isomorphism for the pointwise product, it then follows easily
that $\tilde \rho(\fS(R_1,\sigma_1))=\fS(R_2,\sigma_2)$.
For $i=1,2$, let~$s_i\colon \M(R_i,\sigma_i)\to
\Sigma(R_i,\sigma_i)$ and $L_i=s_i^{-1}$ be the symbol map and the
inverse symbol map for $R_i$, let $\nu_i$ be the right counting
measure of~$R_i$ and let~$H_i=L^2(R_i,\nu_i)$.
Let~$a\in \Sigma(R_2,\sigma_2)$ and let~$T=L_2(a)$. Since $T\in
\M(R_2,\sigma_2)$, the Kaplansky density theorem gives a bounded net
$\{T_\lambda\}\subseteq \M_0(R_2,\sigma_2)$ with $T_\lambda\to
_{\mathrm{SOT}}T$. Let $a_\lambda=s_2(T_\lambda)$, and note that $a=s_2(T)$. By
Proposition~\ref{prop:continuity}(6),
\[a_\lambda\to a\quad\text{in~$H_2$}\] so if $U\colon H_2\to H_1$ is
the unitary operator defined as in Remark~\ref{rk:unitary}, then
\[(a_\lambda\circ \rho^2)\mathbin{\star}\eta = Ua_\lambda \to U a = (a\circ
\rho^2)\mathbin{\star}\eta \quad\text{in~$H_1$}\]
where $\eta(x,y)=h(\rho(y))^{-1/2}$ and
$h=\frac{d(\rho_*\mu_1)}{d\mu_2}$. We can find a subnet, which can
in fact be chosen to be a sequence $\{(a_n\circ \rho^2) \mathbin{\star}
\eta\}$, that converges almost everywhere. Hence
\[a_n\circ\rho^2\to a\circ\rho^2\quad\text{almost everywhere}.\]
On the other hand, since $T_n$ converges to $T$ in the strong
operator topology, $UT_nU^*$ converges to $UTU^*$
strongly. Moreover, since $T_n\in \M_0(R_2,\sigma_2)$, Equation~(\ref{eq:unitary-action}) gives
$s_1(UT_nU^*)=a_n\circ \rho^2$.
Therefore
\[a_n\circ\rho^2 =s_1(UT_n U^*)\to
s_1(UTU^*)\quad\text{in~$H_1$}.\] %
So $\tilde \rho^{-1}(a)=a\circ\rho^2 = s_1(UTU^*)\in
\Sigma(R_1,\sigma_1)$.%
\end{proof}
\section{A class of Schur multipliers}\label{s_AR}
In this section, we examine a natural subclass of Schur multipliers on
$\M(R)$ which coincides, by a classical result of A. Grothendieck,
with the space of all Schur multipliers in the special case
$\M(R)=\B(\ell^2)$. Throughout, we fix a Feldman-Moore relation
$(X,\mu,R,\sigma)$, and we write $\M(R) = \M(R,\sigma)$. We first
recall some measure theoretic concepts \cite{a}. A measurable subset
$E\subseteq X\times X$ is said to be \emph{marginally null} if there
exists a $\mu$-null set $M\subseteq X$ such that $E\subseteq (M\times
X)\cup (X\times M)$. Measurable sets $E,F\subseteq X\times X$ are
called \emph{marginally equivalent} if their symmetric difference is
marginally null. The set $E$ is called \emph{$\omega$-open} if $E$ is
marginally equivalent to a subset of the form $\cup_{k=1}^{\infty}
\alpha_k\times\beta_k$, where $\alpha_k,\beta_k\subseteq X$ are
measurable.
In the sequel, we will use some notions from operator space theory;
we refer the reader to~\cite{blm} and~\cite{pa} for background material.
Recall that every
element $u$ of the extended Haagerup tensor product
$\A\otimes_{\eh}\A$ can be identified with a series \[u =
\sum_{i=1}^{\infty} A_i\otimes B_i,\] where $A_i,B_i\in \A$ and, for some constant $C > 0$,
we have
\[\left\|\sum_{i=1}^{\infty} A_i A_i^*\right\| \leq
C \qtext{and} \left\|\sum_{i=1}^{\infty} B_i^* B_i\right\| \leq C\]
(the series being convergent in the weak* topology). Let
$\A=\A(R)$. The element $u$ gives rise to a completely bounded
$\A'$-bimodule map $\Psi_u$ on $\B(L^2(R,\nu))$ defined by
\[\Psi_u(T) = \sum_{i=1}^{\infty} A_i T B_i, \quad T\in \B(L^2(R,\nu)).\]
For each~$T$, this series is $w^*$-convergent. Moreover, this element
$u\in \A\otimes_{\eh}\A$ also gives rise to a function $f_u\colon
X\times X\to \bC$, given by \[f_u(x,y) = \sum_{i=1}^{\infty}
a_i(x)b_i(y),\] where $a_i$ (resp. $b_i$) is the function in
$L^{\infty}(X,\mu)$ such that $D(a_i) = A_i$ (resp. $D(b_i) = B_i$),
$i\in \bN$. We write $u\sim \sum_{i=1}^{\infty} a_i\otimes b_i$.
Since
\begin{equation}\label{eq_C}
\left\|\sum_{i=1}^{\infty} |a_i|^2\right\|_{\infty} \leq
C \qtext{and} \left\|\sum_{i=1}^{\infty} |b_i|^2\right\|_{\infty} \leq C,
\end{equation}
the function $f_u$ is well-defined up to a marginally null set.
Moreover, $f_u$ is \emph{$\omega$-continuous} in the sense that
$f_u^{-1}(U)$ is an $\omega$-open subset of $X\times X$ for every open
set $U\subseteq \bC$, and~$f_u$ determines uniquely the corresponding
element~$u\in \A\otimes_{\eh}\A$ (see~\cite{kp}).
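In finite dimensions the relationship between $\Psi_u$ and the function $f_u$ is transparent: conjugating $T$ by the diagonal operators $D(a_i)$ and $D(b_i)$ multiplies the $(x,y)$ entry of $T$ by $\sum_i a_i(x)b_i(y)$. The following toy computation (purely illustrative; NumPy matrices stand in for the measure-theoretic setting, and all names are ours) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
a = rng.standard_normal((k, n))   # a_i : X -> R, with X = {0, ..., n-1}
b = rng.standard_normal((k, n))   # b_i : X -> R
T = rng.standard_normal((n, n))

# Psi_u(T) = sum_i D(a_i) T D(b_i), where D(.) is the diagonal operator
psi = sum(np.diag(a[i]) @ T @ np.diag(b[i]) for i in range(k))

# the symbol f_u(x, y) = sum_i a_i(x) b_i(y), acting by entrywise product
f_u = a.T @ b
assert np.allclose(psi, f_u * T)
```

In other words, on $M_n$ the map $\Psi_u$ is exactly Schur multiplication by the matrix $(f_u(x,y))$; this is the finite-dimensional prototype of the phenomenon studied below.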
\begin{definition}
Given~$u\in \A\otimes_{\eh}\A$, we write
\[ \phi_u\colon R\to \bC\] for the restriction of~$f_u$ to~$R$.
\end{definition}
In what follows, we identify $u\in \A\otimes_{\eh}\A$ with the
corresponding function~$f_u$, and write $\|\cdot\|_{\eh}$ for the norm
of $\A\otimes_{\eh}\A$.
\begin{lemma}\label{l_wd}
If $E\subseteq X\times X$ is a marginally null set, then $E\cap R$
is $\nu$-null. Thus, given $u\in \A\otimes_{\eh}\A$, the function
$\phi_u$ is well-defined as an element of
$L^{\infty}(R,\nu)$. Moreover, $\|\phi_u\|_{\infty}\leq
\|u\|_{\eh}$.
\end{lemma}
\begin{proof}
If $E \subseteq X\times M$, where $M\subseteq X$ is $\mu$-null, then
$(E\cap R)_y = \emptyset$ if $y\not\in M$, and hence $\nu(E\cap R) =
0$. Recall from Proposition~\ref{prop:nuinverse} that $\nu$ has the
same null sets as the measure $\nu^{-1}$; so if $E\subseteq M\times
X$, then $\nu(E\cap R)=0$. Hence any marginally null set is
$\nu$-null.
Since $\|u\|_{\eh}$ is the least possible constant $C$ so
that~(\ref{eq_C}) holds, the set $\{(x,y)\in X\times X\colon
|u(x,y)|>\|u\|_{\eh}\}$ is marginally null with respect to~$\mu$, so
its intersection with~$R$ is $\nu$-null. Hence
$\|\phi_u\|_{\infty}\leq \|u\|_{\eh}$.
\end{proof}
\begin{definition}
Let
\[\fA(R) = \{\phi_u : u\in \A\otimes_{\eh}\A\}.\]
By virtue of Lemma~\ref{l_wd}, $\fA(R)\subseteq
L^{\infty}(R,\nu)$.
\end{definition}
\begin{lemma}\label{l_elt}
If $a,b\in L^{\infty}(X,\mu)$ and $u = a\otimes b$, then for $T\in
\M(R,\sigma)$ we have \[M(\phi_u)(T) = D(a)TD(b).\] In particular,
$\phi_u\in \fS(R,\sigma)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:action},
\[s(D(a)TD(b))(x,y) = a(x)s(T)(x,y)b(y), \ \ \ (x,y)\in R.\] The
claim is now immediate.
\end{proof}
\begin{lemma}\label{lem:pw-weak}
Let $(Z,\theta)$ be a $\sigma$-finite measure space and let
$\{f_k\}_{k\in \bN}$ be a sequence in $L^2(Z,\theta)$ such that
\begin{enumerate}
\item[(i)] $f_k$ converges weakly to $f\in L^2(Z,\theta)$;
\item[(ii)] $f_k$ converges pointwise almost everywhere to $g\in L^2(Z,\theta)$; and
\item[(iii)] $\sup_{k\ge1}\|f_k\|_{\infty}<\infty$.
\end{enumerate}
Then $f=g$.
\end{lemma}
\begin{proof} Let $\xi \in L^2(Z,\theta)$. As $f_k$ converges weakly,
$\{\|f_k\|_2\}$ is bounded. Let $Y\subseteq Z$ be measurable with
$\theta(Y)<\infty$. If we write $B=\sup_{k\ge1} \|f_k\|_\infty$, then
\[|f_k\overline{\xi}\chi_Y|\leq B|\xi|\chi_Y.\] Since $B|\xi|\chi_Y$
is integrable,
\begin{eqnarray*}
\langle f\chi_Y,\xi\rangle & = & \langle f,\chi_Y\xi\rangle =
\lim_{k\to \infty} \langle f_k,\chi_Y\xi\rangle=\lim_{k\to \infty}\int
f_k \overline{\xi}\chi_Y\,d\theta\\
& = & \int g\overline{\xi}\chi_Y\,d\theta=
\langle g\chi_Y,\xi\rangle
\end{eqnarray*}
by the Lebesgue Dominated Convergence
Theorem. So $f\chi_Y = g\chi_Y$. Since $Z$ is $\sigma$-finite, this
yields $f = g$.
\end{proof}
\begin{theorem}\label{p_aeha}
If $u\in \A\otimes_{\eh}\A$, then
$M(\phi_u)$ is the restriction of\/~$\Psi_u$ to $\M(R,\sigma)$
and $\|M(\phi_u)\|\leq \|u\|_{\eh}$.
Hence %
\[ \fA(R)\subseteq \fS(R,\sigma).\]
\end{theorem}
\begin{proof}
Let~$H=L^2(R,\nu)$, let $u\in \A\otimes_{\eh}\A$ and
let~$\Psi=\Psi_u$; thus, $\Psi$ is a completely bounded map on
$\B(H)$. It is well-known that
$\|\Psi\|_{\cb}=\|u\|_{\eh}$. We have $u\sim \sum_{i=1}^\infty
a_i\otimes b_i$, for some $a_i,b_i\in \A$ with
\[C=\max\left\{\Big\|\sum_{i=1}^\infty |a_i|^2 \Big\|_\infty,\
\Big\|\sum_{i=1}^\infty |b_i|^2\Big\|_{\infty}\right\}<\infty.\] For $k\in
\bN$, set $u_k = \sum_{i=1}^k a_i\otimes b_i$ and $\Psi_k =
\Psi_{u_k}$. By Lemma~\ref{l_elt}, $\Psi_k$ leaves $\M(R,\sigma)$
invariant. Since $\Psi_k(T)\to _{w^*}\Psi(T)$ for each~$T\in
\B(H)$, it follows that $\Psi$ also leaves~$\M(R,\sigma)$ invariant.
Let $\Phi$ and $\Phi_k$ be the restrictions of $\Psi$ and $\Psi_k$,
respectively, to $\M(R,\sigma)$. Set $\phi_k = \phi_{u_k}$ for each
$k\in \bN$. Let $c\in \Sigma(R,\sigma)$ and let $T=L(c)$. By
Lemma~\ref{l_elt}, $\phi_k\in \fS(R,\sigma)$,
so $\phi_k\mathbin{\star} c\in \Sigma(R,\sigma)$ and
\[ L(\phi_k \mathbin{\star} c) = \Phi_k(T) \to_{w^*}\Phi(T)
\quad\text{as $k\to \infty$}.\] Hence for every $\eta\in
H$, we have
\[\langle \phi_k \mathbin{\star} c,\eta\rangle = \langle L(\phi_k \mathbin{\star} c)(\chi_{\Delta}),\eta\rangle \rightarrow
\langle \Phi(T)(\chi_{\Delta}),\eta\rangle = \langle
s(\Phi(T)),\eta\rangle.\] So \[\phi_k \mathbin{\star} c \rightarrow
s(\Phi(T))\quad\text{ weakly in $L^2(R,\nu)$.}\] However, $u_k\to u$
marginally almost everywhere, so by Lemma~\ref{l_wd}, $\phi_k\to
\phi_u$ almost everywhere, and thus \[\phi_k \mathbin{\star} c\rightarrow \phi_u \mathbin{\star}
c\quad\text{almost everywhere.}\] Since \[\sup_{k\ge1}\|\phi_k \mathbin{\star}
c\|_\infty \leq C\|c\|_\infty<\infty,\] Lemma~\ref{lem:pw-weak} shows
that $\phi_u \mathbin{\star} c = s(\Phi(T))$. Hence \[L(\phi_u\mathbin{\star} s(T)) = \Phi(T)
\in \M(R,\sigma)\] for every $T\in \M(R,\sigma)$, so $\phi_u$ is a Schur
multiplier and $M(\phi_u)=\Phi=\Psi|_{\M(R,\sigma)}$. Since
$\|M(\phi_u)\|\leq\|M(\phi_u)\|_{\cb}$ (and in fact we have equality
by Remark~\ref{r_autcb}), this shows that $\|M(\phi_u)\| \leq
\|\Psi\|_{\cb} = \|u\|_{\eh}$.
\end{proof}
\section{Schur multipliers of the hyperfinite II$_1$-factor}\label{s_h21}
Recall the following properties of the classical Schur multipliers
of~$\B(\ell^2)$.
\begin{enumerate}
\item Every symbol function is a Schur multiplier.
\item Every Schur multiplier is in $\fA(R)$.
\end{enumerate}
In this section, we consider a specific Feldman-Moore coordinatisation
of the hyperfinite II$_1$ factor, and show that in this context the
first property is satisfied but the second is not.
The coordinatisation we will work with is defined as follows. Let
$(X,\mu)$ be the probability space $X = [0,1)$ with Lebesgue measure
$\mu$, and equip~$X$ with the commutative group operation of addition
modulo $1$. For each integer~$n\geq 0$, let~$\bD_n$ be the finite subgroup of~$X$
given by
\[ \bD_n= \{\tfrac{i}{2^n} : 0\leq i \leq 2^n - 1\},\]
and let
\[\bD = \bigcup_{n=0}^{\infty} \bD_n.\]
The countable subgroup~$\bD$ acts on $X$ by translation;
let~$R\subseteq X\times X$ be the corresponding orbit equivalence
relation:
\[R = \{(x,x+r) : x\in X,\ r\in \bD\}.\] For~$r\in \bD$, define
\[\Delta_r=\{(x,x+r)\colon x\in X\}\]
and note that $\{\Delta_r\colon r\in\bD\}$ is a partition of~$R$.
Let ${\bf 1}$ be the $2$-cocycle on~$R$ taking the constant value $1$;
then~$(X,\mu,R,\bf 1)$ is a Feldman-Moore relation. Let~$\nu$ be the
corresponding right counting measure. Clearly, if~$E_r\subseteq
\Delta_r$ is measurable, then
$\nu(E_r)=\mu(\pi_1(E_r))=\mu(\pi_2(E_r))$. Hence if $E$ is a measurable
subset of~$R$, then for~$j=1,2$ we have
\begin{equation}\label{eq:nu}
\nu(E) = \sum_{r\in \bD} \nu(E\cap \Delta_r) =
\sum_{r\in \bD} \mu(\pi_j(E\cap \Delta_r)).
\end{equation}
It is well-known (see e.g.,~\cite{kr}) that $\R=\M(R,\bf1)$ is
(*-isomorphic to) the hyperfinite II$_1$-factor.
For $1\leq i,j\leq 2^n$, define
\[\Delta^n_{ij} = \left\{\left(x, x + \frac{j-i}{2^n}\right) : \frac{i-1}{2^n} \leq x <
\frac{i}{2^n}\right\}.\] Let~$\chi_{ij}^n$ be the characteristic
function of~$\Delta_{ij}^n$, and write
\[ \Sigma_n=\spn\{\chi_{ij}^n \colon 1\leq i,j\leq 2^n\}.\] Writing
$L$ for the inverse symbol map of~$R$, let~$\R_n\subseteq \R$ be given
by
\[ \R_n=\{L(a)\colon a\in \Sigma_n\}.\]
We also write
\[\iota_n\colon \R_n\to M_{2^n},\quad \sum_{i,j}
\alpha_{ij}L(\chi_{ij}^n)\mapsto (\alpha_{ij}).\] Recall that $\mathbin{\star}$
denotes pointwise multiplication of symbols. We write $A \odot B$ for
the Schur product of matrices $A,B\in M_k$ for some $k\in \bN$.
\begin{lemma}\leavevmode\label{lem:mus}
\begin{enumerate}
\item The set~$\{ L(\chi_{ij}^n)\colon 1\leq i,j\leq 2^n\}$ is a
matrix unit system in~$\R$.
\item The map~$\iota_n$ is a $*$-isomorphism. In particular,
$\iota_n$ is an isometry.
\item For~$a,b\in \Sigma_n$,
we have
\begin{enumerate}
\item $a\mathbin{\star} b\in \Sigma_n$;
\item $\iota_n(L(a\mathbin{\star} b))=\iota_n(L(a))\odot
\iota_n(L(b))$; and
\item $\|L(a\mathbin{\star} b)\|\leq \|L(a)\|\,\|L(b)\|$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
Checking (1) is an easy calculation, and~(2) is then
immediate. Statement~(3a) is obvious, and (3b) is plain from the
definition of~$\iota_n$. It is a classical result of matrix theory
that if~$A,B\in M_k$, then $\|A\odot B\|\leq
\|A\|\,\|B\|$. Statement~(3c) then follows from~(2) and~(3b).
\end{proof}
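The classical norm inequality $\|A\odot B\|\leq\|A\|\,\|B\|$ invoked in the proof of~(3c) is easy to check numerically; the following sketch (illustrative only, with the operator norm computed as the largest singular value) does so for a pair of random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))

# operator (spectral) norm = largest singular value
spec = lambda M: np.linalg.norm(M, 2)

# Schur (entrywise) product norm inequality: ||A o B|| <= ||A|| ||B||
assert spec(A * B) <= spec(A) * spec(B) + 1e-10
```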
Let $\tau\colon \R\to \bC$ be given by
\[\tau(L(a)) = \int_X a(x,x)\,d\mu(x).\]
Since $\nu=\nu^{-1}$, an easy calculation shows that~$\tau$ is a trace
on~$\R$.
For~$a\in L^\infty(R,\nu)$, let
\[ \lambda_{ij}^n(a)=2^n \int_{(i-1)/2^n}^{i/2^n}
a(x,x+(j-i)/2^n)\,d\mu(x)\] be the average value of~$a$
on~$\Delta_{ij}^n$, and define
\[ E_n\colon \Sigma(R,\mathbf1)\to \Sigma_n,\quad a\mapsto \sum_{i,j} \lambda_{ij}^n(a)\chi_{ij}^n\]
and
\[ \bE_n\colon \R\to \R_n,\quad L(a)\mapsto L(E_n(a)).\]
\begin{lemma}\leavevmode\label{lem:cond}
$\bE_n$ is the $\tau$-preserving conditional expectation
of\/~$\R$ onto~$\R_n$. In particular, $\bE_n$ is norm-reducing.
\end{lemma}
\begin{proof}
By~\cite[Lemma~3.6.2]{ss-book}, it suffices to show that~$\bE_n$ is
a $\tau$-preserving $\R_n$-bimodule map. For~$a\in \Sigma(R,\mathbf
1)$, we have
\begin{align*} \tau(\bE_n(L(a))) &= \tau(L(E_n(a)))\\
&= \int E_n(a)(x,x)\,d\mu(x)\\
&= \sum_{i=1}^{2^n} \lambda_{ii}^n(a) \mu([ (i-1)/2^n,i/2^n))\\
&= \tau(L(a)),
\end{align*}
so $\bE_n$ is $\tau$-preserving. For $b,c\in \Sigma_n$, a calculation gives
\[ E_n(b *_{\mathbf1} a*_{\mathbf1} c) = b*_{\mathbf 1}E_n(a)*_{\mathbf 1}c,\]
hence $\bE_n(BTC)=B\bE_n(T)C$ for $B,C\in \R_n$ and $T\in \R$.
\end{proof}
\begin{lemma}\label{lem:condconv}
Let $a\in \Sigma(R,\mathbf1)$.
\begin{enumerate}
\item $\|E_n(a)\|_\infty \leq \|a\|_\infty$.
\item $E_n(a)\to_{\|\cdot\|_2} a$ as $n\to \infty$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) follows directly from the definition of~$E_n$.

(2) For $T\in \R$, we have $\bE_n(T)\to_{\mathrm{SOT}} T$ as $n\to
\infty$ (see e.g.,~\cite{ps}). By
Proposition~\ref{prop:continuity}(6),
\[ E_n(a) = s(\bE_n(L(a))) \to_{\|\cdot\|_2} s(L(a))=a.\qedhere\]
\end{proof}
\begin{theorem}\label{th_inc}
We have $\Sigma(R,\mathbf1)\subseteq \fS(R,\mathbf1)$. Moreover, if
$a,b\in\Sigma(R,\mathbf1)$, then $\|L(a\mathbin{\star} b)\|\leq
\|L(a)\|\|L(b)\|$.
\end{theorem}
\begin{proof}
Let $a,b\in \Sigma(R,\mathbf1)$, and for $n\in\bN$, let $a_n=E_n(a)$
and $b_n=E_n(b)$. Lemmas~\ref{lem:mus} and~\ref{lem:cond} give
\begin{equation}\label{eq2}
\|L(a_n\mathbin{\star} b_n)\|
\leq \|L(a_n)\|\,\|L(b_n)\|
= \|\bE_n(L(a))\|\,\|\bE_n(L(b))\|
\leq \|L(a)\|\|L(b)\|.
\end{equation}
On the other hand,
\begin{align*}
\|a_n\mathbin{\star} b_n - a\mathbin{\star} b\|_2
& \leq \|a_n \mathbin{\star} (b_n - b)\|_2 + \|b\mathbin{\star} (a_n - a)\|_2\nonumber \\
& \leq \|a_n\|_{\infty} \|(b_n - b)\|_2 + \|b\|_{\infty} \|(a_n - a)\|_2
\end{align*}
so by Lemma~\ref{lem:condconv},
\[ a_n\mathbin{\star} b_n\to_{\|\cdot\|_2} a\mathbin{\star} b.\] Let $T_n=L(a_n\mathbin{\star}
b_n)$. Since~$(T_n)$ is bounded by~(\ref{eq2}),
Proposition~\ref{p_sconv} shows that $(T_n)$ converges in the strong
operator topology, say to $T\in \R$, and \[a_n\mathbin{\star}
b_n=s(T_n)\to_{\|\cdot\|_2} s(T).\] Hence $a\mathbin{\star} b=s(T)\in
\Sigma(R,\mathbf1)$, so $a\in \fS(R,\mathbf1)$.
Since $T_n\to_{\mathrm{SOT}}T$, we have $\|T\|\leq \limsup_{n\to
\infty} \|T_n\|$. Hence by~(\ref{eq2}),
\[\|L(a\mathbin{\star} b)\|\leq \limsup_{n\to \infty} \|L(a_n \mathbin{\star} b_n)\| \leq
\|L(a)\|\|L(b)\|.\qedhere\]
\end{proof}
\begin{remark}\label{remark:popsmith}
For each masa $\A\subseteq \R$, Pop and Smith define a Schur product
$\mathbin{\star}_\A \colon \R\times \R\to \R$ in~\cite{ps}. The proof of
Theorem~\ref{th_inc} shows that for the specific Feldman-Moore
coordinatisation $(X,\mu,R,\mathbf1)$ described above and the masa
$\A=\A(R)\subseteq \R=\M(R,\mathbf1)$, if we identify operators
in~$\R$ with their symbols, then Definition~\ref{d_sh} extends
$\mathbin{\star}_\A$ to a map $\fS(R,\mathbf1)\times \R\to \R$. It is easy to
see that this is a proper extension: the constant function
$\phi(x,y)=1$ is plainly in~$\fS(R,\mathbf1)$, but $\phi$ is not the
symbol of an operator in~$\R$ (\cite[Remark 3.3]{ps}).
\end{remark}
\begin{corollary}\label{cor:inc}
Let~$\R$ be the hyperfinite II$_1$ factor, and let~$\tilde \A$ be
any masa in~$\R$. For any Feldman-Moore coordinatisation $(\tilde
X,\tilde \mu,\tilde R,\tilde \sigma)$ of the Cartan pair~$(\R,\tilde
\A)$, we have $\Sigma(\tilde R,\tilde \sigma)\subseteq \fS(\tilde
R,\tilde \sigma)$.
\end{corollary}
\begin{proof}
By~\cite{cfw}, we have $(\R,\tilde \A)\cong (\R,\A)$. Hence by
Theorem~\ref{thm:fmii1}, \[(\tilde X,\tilde \mu, \tilde R,\tilde
\sigma)\cong (X,\mu,R,\mathbf1)\] via an isomorphism $\rho\colon
\tilde X\to X$. Consider the map $\tilde \rho\colon a\mapsto a\circ
\rho^{-2}$ as in Proposition~\ref{p_shpre}. By Theorem~\ref{th_inc},
\[ \Sigma(\tilde R,\tilde \sigma)=\tilde \rho(\Sigma(R,\mathbf1))
\subseteq \tilde\rho(\fS(R,\mathbf1))=\fS(\tilde
R,\tilde\sigma).\qedhere\]
\end{proof}
In view of Theorem~\ref{th_inc} and Proposition~\ref{prop:sigma0}, it
is natural to ask the following question.
\begin{question}
Does the inclusion $\Sigma(R,\sigma)\subseteq \fS(R,\sigma)$ hold
for an arbitrary Feldman-Moore relation $(X,\mu,R,\sigma)$?
\end{question}
We now turn to the inclusion
\[ \fA(R)\subseteq \fS(R,\sigma)\] established in
Section~\ref{s_AR}. While these sets are equal in the classical
case, we will show that in the current context this inclusion is
proper.
For~$D\subseteq\bD$, we define
\[\Delta(D)=\bigcup_{r\in D} \Delta_r.\]
Note that $\Delta(D)$ is marginally null only if $D=\emptyset$, and
its characteristic function $\chi_{\Delta(D)}$ is a ``Toeplitz''
idempotent element of $L^\infty(R,\nu)$.
\begin{proposition}\label{prop:dyad-A(Rd)}\leavevmode
\begin{enumerate}
\item If\/ $\emptyset\ne D\subsetneq \bD$ and either $D$ or\/
$\bD\setminus D$ is dense in $[0,1)$, then the characteristic
function~$\chi_{\Delta(D)}$ is not in\/~$\fA(R)$.
\item Let $0\ne \phi\in L^\infty(R,\nu)$ and
\[ E=\{ r\in \bD\colon \phi|_{\Delta_r}=0\
\mu\text{-a.e.}\}.\]If $E$ is dense in $[0,1)$, then $\phi\not\in
\fA(R)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Suppose first that $\bD\setminus D$ is dense in $[0,1)$ and, by
way of contradiction, that $\chi_{\Delta(D)}\in\fA(R)$. There is
an element $\sum_{i=1}^\infty a_i\otimes b_i\in \A\otimes_{\eh}\A$ and a
$\nu$-null set $N\subseteq R$ such
that \[\chi_{\Delta(D)}(x,y)=\sum_{i=1}^\infty a_i(x)b_i(y)\ \text{for all
$(x,y)\in R\setminus N$.}\]
Let $f\colon X\times X\to \bC$ be the extension of $\chi_{\Delta(D)}$
which is defined (up to a marginally null set) by
\[f(x,y)=\sum_{i=1}^\infty a_i(x)b_i(y)\ \text{for marginally almost
every~$(x,y)\in X\times X$}.\] By~\cite[Theorem~6.5]{eks}, $f$ is
$\omega$-continuous. Hence the set \[F=f^{-1}(\bC\setminus \{0\})\]
is $\omega$-open. Since $D\ne\emptyset$ and $\Delta(D)\subseteq F$,
the set~$F$ is not marginally null. So there exist Borel sets
$\alpha,\beta\subseteq [0,1)$ with non-zero Lebesgue measure so that
$\alpha\times \beta\subseteq F$. For~$j=1,2$, let $N_j=\pi_j(N)$. By
equation~(\ref{eq:nu}), $\mu(N_j)=0$. Let $\alpha'=\alpha\setminus
N_1$ and $\beta'=\beta\setminus N_2$; then $\alpha'$ and $\beta'$
have non-zero Lebesgue measure, and hence the set \[\beta'-\alpha' =
\{ y-x\colon x\in \alpha',\,y\in \beta'\}\] contains an open
interval by Steinhaus' theorem, so it intersects the dense
set~$\bD\setminus D$. So there exist~$r\in \bD\setminus D$ and $x\in
\alpha'$ with $x+r\in \beta'$. Now
\[ (x,x+r)\in F \setminus \Delta(D),\]
so
\[0\ne f(x,x+r)=\chi_{\Delta(D)}(x,x+r)=0,\] a contradiction. So
$\chi_{\Delta(D)}\not\in \fA(R)$ if $D\ne\emptyset$ and $\bD\setminus D$ is dense in
$[0,1)$.
If $D\ne \bD$ and~$D$ is dense in $[0,1)$ then
$\chi_{\Delta(\bD\setminus D)}\not\in \fA(R)$; since $\fA(R)$ is
a linear space containing the constant function $1$, this shows
that $1-\chi_{\Delta(\bD\setminus
D)}=\chi_{\Delta(D)}\not\in\fA(R)$.\medskip
(2) The argument is similar. If $\phi\in\fA(R)$ then there is a
$\nu$-null set $N\subseteq R$ such that $\phi(x,y)=\sum_{i=1}^{\infty}
a_i(x)b_i(y)$ for all $(x,y)\in R\setminus N$ where $\sum_{i=1}^{\infty}
a_i\otimes b_i\in \A\otimes_{\eh}\A$, and $\phi(x,y)=0$ for all
$(x,y)\in R\setminus N$ with the property $y-x\in E$. Let $f\colon
[0,1)^2\to \bC$, $f(x,y)=\sum_{i=1}^{\infty} a_i(x)b_i(y)$, $x,y\in [0,1)$.
Then $f$ is non-zero
and $\omega$-continuous, so $f^{-1}(\bC\setminus\{0\})$ contains
$\alpha'\times \beta'$ where $\alpha',\beta'$ are sets of non-zero
measure so that $(\alpha'\times \beta')\cap N = \emptyset$. Hence
$\beta'-\alpha'$ contains an open interval of $[0,1)$, and
intersects the dense set $E$ in at least one point $r\in \bD$; so
there is $x\in [0,1)$ such that $(x,x+r)\in (\alpha'\times\beta')\cap
(R\setminus N)$. Then $0=\phi(x,x+r)=f(x,x+r)\ne0$, a
contradiction.
\end{proof}
\begin{corollary}\label{cor:Rdinclusion}
The inclusion $\fA(R)\subseteq \fS(R,\mathbf1)$ is proper.
\end{corollary}
\begin{proof}
Since~$\Delta=\Delta(\{0\})$, Proposition~\ref{prop:dyad-A(Rd)}
shows that $\chi_\Delta\not\in \fA(R)$. It is easy to check (as in
Lemma~\ref{lem:cond}) that the Schur multiplication map
$M(\chi_{\Delta})$ is the conditional expectation of $\R$ onto $\A$,
so $\chi_\Delta\in \fS(R,\mathbf1)$.
\end{proof}
\begin{corollary}\label{c_noncbs}
Let~$(\tilde X,\tilde \mu,\tilde R,\tilde \sigma)$ be a
Feldman-Moore relation and suppose that $\M(\tilde R,\tilde \sigma)$
contains a direct summand isomorphic to the hyperfinite~{\rm II}$_1$
factor. Then the inclusion~$\fA(\tilde R)\subseteq \fS(\tilde
R, \tilde \sigma)$ is proper.
\end{corollary}
\begin{proof}
Let~$P$ be a central projection in~$\M(\tilde R,\tilde \sigma)$ so
that~$P\M(\tilde R, \tilde\sigma)$ is (isomorphic to) the
hyperfinite II$_1$ factor~$\R$. It is not difficult to verify
that~$\A_P=P\A(\tilde R)$ is a Cartan masa in~$\R$ (see the
arguments in the proof of~\cite[Theorem~1]{fm2}). By~\cite{cfw}, the
Cartan pair~$(\R,\A_P)$ is isomorphic to the Cartan pair~$(\R,\A)$
considered throughout this section. It follows from
Theorem~\ref{thm:fmii1} that there is a Borel isomorphism $\rho :
\tilde X\to X_0 \cup [0,1)$ (a disjoint union) with $\rho^2(\tilde
R) = R_0\cup R$ (again, a disjoint union), where $R_0\subseteq
X_0\times X_0$ is a standard equivalence relation and $R$ is the
equivalence relation defined at the start of the present section.
It is easy to check that $\rho^2(\fA(\tilde R)) = \fA(R_0\cup R)$.
We may thus assume that $\tilde X = X_0 \cup [0,1)$ and $\tilde R =
R_0\cup R$. %
Now suppose that $\fS(\tilde R,\tilde \sigma) = \fA(\tilde R)$. Let $P
= P([0,1))$. Given $\phi\in \fS(R,\mathbf1)$, let $\psi : \tilde R\to
\bC$ be its extension defined by letting $\psi(x,y) = 0$ if
$(x,y)\in R_0$. Then
\[M(\psi)(T\oplus S) = P M(\psi)(T\oplus S) P = M(\phi)(T)\oplus 0,
\quad T\in \M(R).\] So $\psi\in \fS(\tilde R,\tilde \sigma)$ and hence
$\psi\in \fA(\tilde R)$. It now easily follows that $\phi\in
\fA(R)$, contradicting Corollary~\ref{cor:Rdinclusion}.
\end{proof}
In fact, the only Toeplitz idempotent elements of~$\fA(R)$ are
trivial. To see this, we first explain how $\fS(R):=\fS(R, \mathbf1)$ can be obtained from
multipliers of the Fourier algebra of a measured groupoid. We refer
the reader to~\cite{rbook,r} for basic notions and results about
groupoids.
The set $\G=X\times\mathbb D$ becomes a groupoid under the partial product
\[ (x,r_1)\cdot (x+r_1,r_2)=(x,r_1+r_2)\quad\text{for $x\in X$,
$r_1,r_2\in \bD$}\]
where the set of composable pairs is
\[\G^2=\{\big((x_1,r_1),(x_2,r_2)\big): x_2=
x_1+r_1\}\] and inversion is given by
\[ (x,t)^{-1}=(x+t, -t).\] The domain and range maps in this case are
$d(x,t)=(x,t)^{-1}\cdot(x,t)=(x+t,0)$ and
$r(x,t)=(x,t)\cdot(x,t)^{-1}=(x,0)$, so the unit space, $\G_0$, of
this groupoid, which is the common image of $d$ and $r$, can be
identified with $X$. Let $\lambda$ be the Haar, that is, the
counting, measure on $\mathbb D$. The groupoid $\G$ can be equipped
with the Haar system $\{\lambda^x:x\in X\}$, where
$\lambda^x=\delta_x\times \lambda$ and $\delta_x$ is the point mass
at~$x$.
\newcommand{\nu_{\G}}{\nu_{\G}}
Recall that $\mu$ is Lebesgue measure on $X$. Consider the measure
$\nu_{\G}$ on~$\G$ given by $\nu_{\G}=\mu\times\lambda=\int\lambda^xd\mu(x)$.
Since~$\mu$ is translation invariant and~$\lambda$ is invariant under
the transformation $t\mapsto -t$, it is easy to see that
$\nu_{\G}^{-1}=\nu_{\G}$, where $\nu_{\G}^{-1}(E)=\nu_{\G}(\{e^{-1}\colon e\in
E\})$.%
Therefore $\G$ with the above Haar system and the measure~$\mu$
becomes a measured groupoid.
Consider the map
\[\theta:R\to X\times \bD,\quad \theta(x,x+r)=(x,r),\quad x\in X,\ r\in \bD.\]
Clearly $\theta$ is a continuous bijection (here~$\bD$ is equipped
with the discrete topology). We claim the measure~$\theta_*\nu \colon
E\mapsto \nu(\theta^{-1}(E))$ is equal to~$\nu_{\G}$, where, as
before,~$\nu$ is the right counting measure for the Feldman-Moore
relation~$(X,\mu,R,\mathbf1)$. Indeed, for~$E\subseteq\G$, we have
\begin{align*}
(\theta_*\nu)(E)
&=
\nu(\theta^{-1}(E))\\
&=
\sum_{r\in \bD} \mu(\pi_1(\theta^{-1}(E)\cap \Delta_r))\quad\text{by equation~(\ref{eq:nu})}
\\&=
\sum_{r\in \bD} \mu(\pi_1(E\cap (X\times \{r\}))) = (\mu\times \lambda)(E)=\nu_{\G}(E)
\end{align*}
since it is easily seen that $\pi_1(\theta^{-1}(E)\cap
\Delta_r)=\{x\in X\colon (x,r)\in E\}$.
It follows that the operator \[U:L^2(R,\nu)\to L^2(\G,\nu_{\G}),\quad \xi\mapsto \xi\circ \theta^{-1}\] is unitary.
Let $C_c(\G)$ be the space of compactly supported continuous functions
on $\G$. This becomes a $*$-algebra with respect to the convolution
given by
\[(f\ast g)(x,t)=\sum_{r\in \bD} f(x,r)g(x+r,t-r),\]
and involution given by $f^*(x,t)=\overline{f(x+t,-t)}$.
Let $\Reg$ be the representation of $C_c(\G)$ on the Hilbert space
$L^2(\G,\nu_{\G})$ given for~$\xi,\eta\in L^2(\G,\nu_{\G})$ by
\begin{align*}
\langle \Reg(f)\xi,\eta\rangle& = \int f(x,t)\xi((x,t)^{-1}(y,s))\overline{\eta(y,s)}d\lambda^{r(x,t)}(y,s)d\lambda^u(x,t)d\mu(u)\\
&=\int f(x,t)\xi(x+t,s-t)\overline{\eta(x,s)}d\lambda(s)d\lambda(t)d\mu(x)\\
&=\int f(x,t)\xi(x+t,s-t)\overline{\eta(x,s)}d\lambda(t)d\nu_{\G}(x,s)
\end{align*}
hence
\[(\Reg(f)\xi)(x,s)=
\int f(x,t)\xi(x+t,s-t)d\lambda(t)=\sum_t f(x,t)\xi(x+t,s-t).\]
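The convolution formula and the formula for $\Reg(f)$ just derived can be sanity-checked in a toy finite model: replace both $X$ and $\bD$ by the cyclic group $\mathbb Z/m\mathbb Z$ (an assumption made purely for computability; the function names below are ours). In that model $\Reg$ is multiplicative with respect to $\ast$:

```python
import numpy as np

m = 4
rng = np.random.default_rng(2)
f = rng.standard_normal((m, m))   # f(x, t), indices mod m
g = rng.standard_normal((m, m))
xi = rng.standard_normal((m, m))  # a vector in the toy L^2 space

def conv(f, g):
    # (f * g)(x, t) = sum_r f(x, r) g(x + r, t - r)
    h = np.zeros((m, m))
    for x in range(m):
        for t in range(m):
            h[x, t] = sum(f[x, r] * g[(x + r) % m, (t - r) % m] for r in range(m))
    return h

def reg(f, xi):
    # (Reg(f) xi)(x, s) = sum_t f(x, t) xi(x + t, s - t)
    out = np.zeros((m, m))
    for x in range(m):
        for s in range(m):
            out[x, s] = sum(f[x, t] * xi[(x + t) % m, (s - t) % m] for t in range(m))
    return out

# Reg(f * g) = Reg(f) Reg(g)
assert np.allclose(reg(conv(f, g), xi), reg(f, reg(g, xi)))
```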
In~\cite[Section 2.1]{r}, the von Neumann algebra $\VN(\G)$ of $\G$
is defined to be the bicommutant $\Reg(C_c(\G))''$.
If $f\in C_c(\G)$, then $f\circ \theta$ is supported on finitely many of the diagonals~$\Delta_r$, and
for~$\xi\in L^2(R,\nu)$, we have
\begin{align*}
(U^*\Reg(f)U\xi)(x,x+t)&=\sum_sf(x,s)\xi(x+s,x+t)\\
&=\sum_sf(\theta(x,x+s))\xi(x+s,x+t)\\
&=(L(f\circ \theta)\xi)(x,x+t).
\end{align*}
Hence
\begin{equation}
U^*\Reg(f)U = L(f\circ \theta)\label{equivalence}
\end{equation}
and so $\VN(\G)$ is spatially isomorphic to $\M(R)$.
The von Neumann algebra $\VN(\G)$ is the dual of the Fourier algebra
$A(\G)$ of the measured groupoid $\G$, which is a Banach algebra of
complex-valued functions on $\G$. If the operator $M_\phi$ on $A(\G)$
of multiplication by the function $\phi\in L^\infty(\G)$ is bounded,
then its adjoint $M_\phi^*$ is a bounded linear map on $\VN(\G)$.
Moreover, in this case we have $M_\phi^*\Reg(f)=\Reg(\phi f)$, for
$f\in C_c(\G)$. The function $\phi$ is then called a multiplier of
$A(\G)$ \cite{r} and we write $\phi\in MA(\G)$. If the map $M_\phi$
is also completely bounded then $\phi$ is called a completely bounded
multiplier of $A(\G)$ and we write $\phi\in M_0A(\G)$. By
equation~(\ref{equivalence}) and Remark~\ref{r_autcb}, we have
\begin{equation}
\phi\in M_0A(\G)
\iff \phi\circ \theta\in \fS(R,\mathbf1).\label{eq:renault}
\end{equation}
We are now ready to prove the following statement:
\begin{proposition}\label{prop:dyadmult}
If $D\subseteq \bD$, then the following are equivalent:
\begin{enumerate}
\item The function $\chi_{\Delta(D)}\in L^\infty(R,\nu)$ is in
$\fS(R)$.
\item The function $\chi_D\in \ell^\infty(\bD)$ is in the
Fourier-Stieltjes algebra $B(\bD)$ of $\bD$.
\item $D$ is in the coset ring of $\bD$.
\end{enumerate}
\end{proposition}
\begin{proof}
To see that $(1)$ and~$(2)$ are equivalent, observe that if
$\pi:\G\to \mathbb D$, $(x,t)\mapsto t$ is the projection
homomorphism of $\G$ onto $\mathbb D$, then
\[\chi_{\Delta(D)}=\chi_{D}\circ\pi\circ\theta.\]
Moreover, since~$\bD$ is commutative, we have $B(\bD)=M_0 A(\bD)$. So
\begin{align*}
\chi_D\in B(\bD)&\iff \chi_D\in M_0A(\bD)
\\&\iff \chi_D\circ \pi \in M_0A(\G)\ \text{by~\cite[Proposition~3.8]{r}}\\
&\iff \chi_{\Delta(D)}=\chi_D\circ \pi\circ \theta\in \fS(R,\mathbf1)\ \text{by~(\ref{eq:renault}).}
\end{align*}
The equivalence of $(2)$ and $(3)$ follows
from~\cite[Chapter~3]{rudin-fag}.
\end{proof}
\begin{theorem}\label{th_ch}
The only elements of $\fA(R)$ of the form $\chi_{\Delta(D)}$ for
some $D\subseteq \bD$ are $0$ and $1$.
\end{theorem}
\begin{proof}
If $\chi_{\Delta(D)}\in \fA(R)$ then $\chi_{\Delta(D)}\in \fS(R)$ by
Theorem~\ref{p_aeha}, so $D$ is in the coset ring of $\bD$ by
Proposition~\ref{prop:dyadmult}. All proper subgroups of $\bD$ are
finite, so $D$ is in the ring of finite or cofinite subsets of
$\bD$. Hence either $\bD\setminus D$ or $D$ is dense in $[0,1)$,
so either $D=\emptyset$ or $D = \bD$ by
Proposition~\ref{prop:dyad-A(Rd)}.
\end{proof}
\begin{remark}
We note that there exist non-trivial idempotent elements of
$\fA(R)$. For example, if~$\alpha,\beta$ are measurable subsets
of~$X$, then the characteristic function of $(\alpha\times
\beta)\cap R$ is always idempotent. Note that the sets of the form
$(\alpha\times\beta)\cap R$ are not unions of full diagonals
unless they are equivalent to either $R$ or the empty set.
\end{remark}
\smallskip
\section{Introduction}
Time-scale calculus is a recent and exciting mathematical theory
that unifies two existing approaches to dynamic modelling --- difference and differential equations ---
into a general framework called dynamic models on time scales \cite{BohnerDEOTS,Hilger97,moz}.
Being a more general approach to dynamic modelling, it allows one to consider more complex time domains,
such as $h\mathbb{Z}$, $q^{\mathbb{N}_0}$ or complex hybrid domains \cite{almeida:torres}.
Both inflation and unemployment inflict social losses. When a Phillips tradeoff exists between the two,
what would be the best combination of inflation and unemployment? A well-known approach
is to write the social loss function as a function of the rate of inflation $p$
and the rate of unemployment $u$, with different weights;
then, using relations between $p$, $u$ and the expected rate of inflation $\pi$,
to rewrite the social loss function as a function of $\pi$;
finally, to apply the theory of the calculus of variations
in order to find an optimal path $\pi$ that minimizes
the total social loss over a certain time interval $[0,T]$ under study.
Economists dealing with this question implement the above approach using both
continuous and discrete models \cite{ChiangEDO,Taylor}.
Here we propose a new, more general, time-scale model.
We claim that such a model describes reality better.
We compare solutions to three models --- the continuous, the discrete, and the time-scale model with
$\mathbb{T}=h\mathbb{Z}$ --- using real data from the USA over a period of 11 years,
from 2000 to 2010. Our results show that the solutions to the classical continuous and discrete models
do not approximate reality well. Therefore, when predicting the future, one cannot base
predictions on the two classical models only. The time-scale approach proposed here shows, however,
that the classical models are adequate if one uses an appropriate data sampling process.
Moreover, the proper times for data collection can be computed from the theory of time scales.
The paper is organized as follows. Section~\ref{prel} provides all the necessary definitions and results
of the delta-calculus on time scales, which will be used throughout the text. This section makes
the paper accessible to Economists with no previous contact with the time-scale calculus.
In Section~\ref{model} we present the economical model under our consideration, in continuous,
discrete, and time-scale settings. Section~\ref{main:results} contains our results.
Firstly, we derive in Section~\ref{main:theory} necessary (Theorem~\ref{cor1}
and Corollary~\ref{cor:ThZ}) and sufficient (Theorem~\ref{global}) optimality conditions
for the variational problem that models the economical situation. For the
time scale $\mathbb{T} = h\mathbb{Z}$ with appropriate values of $h$,
we obtain an explicit solution for the global minimizer
of the total social loss problem (Theorem~\ref{th:delf}). Secondly, we apply those conditions
to the model with real data of inflation \cite{rateinf} and unemployment
\cite{rateunemp} (Section~\ref{main:empirical}).
We end with Section~\ref{conclusions} of conclusions.
\section{Preliminaries}
\label{prel}
In this section we introduce basic definitions and theorems that will be useful in the sequel.
For more on the theory of time scales we refer to the gentle books
\cite{BohnerDEOTS,MBbook2001}. For general results on the calculus of variations on time scales
we refer the reader to \cite{Girejko,Malinowska,Martins} and references therein.
A time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of $\mathbb{R}$.
Let $a,b\in\mathbb{T}$ with $a<b$. We define the interval $[a,b]$ in $\mathbb{T}$ by
$[a,b]_{\mathbb{T}}:=[a,b]\cap\mathbb{T}=\left\{t\in\mathbb{T}: a\leq t\leq b\right\}$.
\begin{df}[\cite{BohnerDEOTS}]
\label{def:jump:op}
The backward jump operator $\rho:\mathbb{T} \rightarrow \mathbb{T}$
is defined by $\rho(t):=\sup\lbrace s\in\mathbb{T}: s<t\rbrace$ for
$t\neq \inf\mathbb{T}$ and $\rho(\inf\mathbb{T}) := \inf\mathbb{T}$ if $\inf\mathbb{T}>-\infty$.
The forward jump operator $\sigma:\mathbb{T} \rightarrow \mathbb{T}$ is defined by
$\sigma(t):=\inf\lbrace s\in\mathbb{T}: s>t\rbrace$ for $t\neq \sup\mathbb{T}$
and $\sigma(\sup\mathbb{T}) := \sup\mathbb{T}$ if $\sup\mathbb{T}<+\infty$.
The backward graininess function $\nu:\mathbb{T} \rightarrow [0,\infty)$
is defined by $\nu(t):=t-\rho(t)$, while the forward graininess function
$\mu:\mathbb{T} \rightarrow [0,\infty)$ is defined by $\mu(t):=\sigma(t)-t$.
\end{df}
\begin{ex}
The two classical time scales are $\mathbb{R}$ and $\mathbb{Z}$,
representing the continuous and the purely discrete time, respectively.
The other example of interest to the present study is the periodic time scale
$h\mathbb{Z}$. It follows from Definition~\ref{def:jump:op} that
if $\mathbb{T}=\mathbb{R}$, then
$\sigma (t)=t$, $\rho(t)=t$, and $\mu(t) = 0$ for all $t \in \mathbb{T}$;
if $\mathbb{T}=h\mathbb{Z}$, then $\sigma(t)= t+h$, $\rho(t)= t-h$,
and $\mu(t) = h$ for all $t\in\mathbb{T}$.
\end{ex}
A point $t\in\mathbb{T}$ is called \emph{right-dense},
\emph{right-scattered}, \emph{left-dense} or \emph{left-scattered}
if $\sigma(t)=t$, $\sigma(t)>t$, $\rho(t)=t$,
and $\rho(t)<t$, respectively. We say that $t$ is \emph{isolated}
if $\rho(t)<t<\sigma(t)$, and that $t$ is \emph{dense} if $\rho(t)=t=\sigma(t)$.
\subsection{The delta derivative and the delta integral}
We collect here the necessary theorems and properties
concerning differentiation and integration on a time scale.
To simplify the notation, we define $f^{\sigma}(t):=f(\sigma(t))$.
The delta derivative is defined for points in the set
$$
\mathbb{T}^{\kappa} :=
\begin{cases}
\mathbb{T}\setminus\left\{\sup\mathbb{T}\right\}
& \text{ if } \rho(\sup\mathbb{T})<\sup\mathbb{T}<\infty,\\
\mathbb{T}
& \hbox{ otherwise}.
\end{cases}
$$
\begin{df}[Section~1.1 of \cite{BohnerDEOTS}]
We say that a function $f:\mathbb{T}\rightarrow\mathbb{R}$ is
\emph{$\Delta$-differentiable} at
$t\in\mathbb{T}^\kappa$ if there is a number $f^{\Delta}(t)$
such that for all $\varepsilon>0$ there exists a neighborhood $O$
of $t$ such that
$$
|f^\sigma(t)-f(s)-f^{\Delta}(t)(\sigma(t)-s)|
\leq\varepsilon|\sigma(t)-s|
\quad \mbox{ for all $s\in O$}.
$$
We call $f^{\Delta}(t)$ the \emph{$\Delta$-derivative} of $f$ at $t$.
\end{df}
\begin{tw}[Theorem~1.16 of \cite{BohnerDEOTS}]
\label{differentiation}
Let $f:\mathbb{T} \rightarrow \mathbb{R}$
and $t\in\mathbb{T}^{\kappa}$. The following holds:
\begin{enumerate}
\item
If $f$ is differentiable at $t$, then $f$ is continuous at $t$.
\item
If $f$ is continuous at $t$ and $t$ is right-scattered,
then $f$ is differentiable at $t$ with
$$
f^{\Delta}(t)=\frac{f^\sigma(t)-f(t)}{\mu(t)}.
$$
\item
If $t$ is right-dense, then $f$ is differentiable at $t$
if, and only if, the limit
$$
\lim\limits_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}
$$
exists as a finite number. In this case,
$$
f^{\Delta}(t)=\lim\limits_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}.
$$
\item
If $f$ is differentiable at $t$, then
$f^\sigma(t)=f(t)+\mu(t)f^{\Delta}(t)$.
\end{enumerate}
\end{tw}
\begin{ex}
If $\mathbb{T}=\mathbb{R}$, then item~3
of Theorem~\ref{differentiation} yields that
$f:\mathbb{R} \rightarrow \mathbb{R}$
is delta differentiable at $t\in\mathbb{R}$ if, and only if,
$$
f^\Delta(t)=\lim\limits_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}
$$
exists, i.e., if, and only if, $f$ is differentiable (in the ordinary sense) at $t$:
$f^{\Delta}(t)=f'(t)$. If $\mathbb{T}=h\mathbb{Z}$, then item~2
of Theorem~\ref{differentiation} yields that
$f:h\mathbb{Z} \rightarrow \mathbb{R}$ is delta differentiable
at $t\in h\mathbb{Z}$ if, and only if,
\begin{equation}
\label{eq:delta:der:h}
f^{\Delta}(t)=\frac{f(\sigma(t))-f(t)}{\mu(t)}=\frac{f(t+h)-f(t)}{h}.
\end{equation}
In the particular case $h=1$, $f^{\Delta}(t)=\Delta f(t)$,
where $\Delta$ is the usual forward difference operator.
\end{ex}
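The forward-difference formula \eqref{eq:delta:der:h} on $h\mathbb{Z}$ is easy to check numerically. The following Python sketch (the helper name \texttt{delta} is ours, purely for illustration) verifies \eqref{eq:delta:der:h} for $f(t)=t^{2}$ and the identity $f^{\sigma}(t)=f(t)+\mu(t)f^{\Delta}(t)$ of item~4 of Theorem~\ref{differentiation}:

```python
# Delta derivative on T = hZ: the forward difference (f(t+h) - f(t)) / h.
# Illustrative sketch; "delta" and "f" are our own names.

def delta(f, t, h):
    """Delta derivative of f at t on the time scale hZ."""
    return (f(t + h) - f(t)) / h

h = 0.5
f = lambda t: t ** 2

# On hZ: (t+h)^2 - t^2 = 2*t*h + h^2, so f^Delta(t) = 2*t + h.
t = 3.0
assert abs(delta(f, t, h) - (2 * t + h)) < 1e-12

# Item 4 of Theorem 1.16: f(sigma(t)) = f(t) + mu(t) * f^Delta(t),
# with sigma(t) = t + h and mu(t) = h on hZ.
assert abs(f(t + h) - (f(t) + h * delta(f, t, h))) < 1e-12
```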
\begin{tw}[Theorem~1.20 of \cite{BohnerDEOTS}]
\label{tw:differpropdelta}
Assume $f,g:\mathbb{T}\rightarrow\mathbb{R}$
are $\Delta$-differentiable at $t\in\mathbb{T}^{\kappa}$. Then,
\begin{enumerate}
\item The sum $f+g:\mathbb{T}\rightarrow\mathbb{R}$ is
$\Delta$-differentiable at $t$ with
$(f+g)^{\Delta}(t)=f^{\Delta}(t)+g^{\Delta}(t)$.
\item
For any constant $\alpha$, $\alpha f:\mathbb{T}\rightarrow\mathbb{R}$
is $\Delta$-differentiable at $t$ with
$(\alpha f)^{\Delta}(t)=\alpha f^{\Delta}(t)$.
\item The product $fg:\mathbb{T} \rightarrow \mathbb{R}$
is $\Delta$-differentiable at $t$ with
\begin{equation*}
(fg)^{\Delta}(t)=f^{\Delta}(t)g(t)+f^{\sigma}(t)g^{\Delta}(t)
=f(t)g^{\Delta}(t)+f^{\Delta}(t)g^{\sigma}(t).
\end{equation*}
\item If $g(t)g^{\sigma}(t)\neq 0$,
then $f/g$ is $\Delta$-differentiable at $t$ with
\begin{equation*}
\left(\frac{f}{g}\right)^{\Delta}(t)
=\frac{f^{\Delta}(t)g(t)-f(t)g^{\Delta}(t)}{g(t)g^{\sigma}(t)}.
\end{equation*}
\end{enumerate}
\end{tw}
\begin{df}[Definition~1.71 of \cite{BohnerDEOTS}]
A function $F:\mathbb{T} \rightarrow \mathbb{R}$ is called
an antiderivative of $f:\mathbb{T} \rightarrow \mathbb{R}$ provided
$F^{\Delta}(t)=f(t)$ for all $t\in\mathbb{T}^{\kappa}$.
\end{df}
\begin{df}[\cite{BohnerDEOTS}]
A function $f:\mathbb{T} \rightarrow \mathbb{R}$ is called rd-continuous provided
it is continuous at right-dense points in $\mathbb{T}$ and its left-sided limits exist
(finite) at all left-dense points in $\mathbb{T}$.
\end{df}
The set of all rd-continuous functions $f:\mathbb{T} \rightarrow \mathbb{R}$
is denoted by $C_{rd} = C_{rd}(\mathbb{T}) = C_{rd}(\mathbb{T},\mathbb{R})$.
The set of functions $f:\mathbb{T} \rightarrow \mathbb{R}$ that are
$\Delta$-differentiable and whose derivative is rd-continuous is denoted by
$C^{1}_{rd}=C_{rd}^{1}(\mathbb{T})=C^{1}_{rd}(\mathbb{T},\mathbb{R})$.
\begin{tw}[Theorem~1.74 of \cite{BohnerDEOTS}]
Every rd-continuous function $f$ has an antiderivative $F$.
In particular, if $t_{0}\in\mathbb{T}$, then $F$ defined by
$$
F(t):=\int\limits_{t_{0}}^{t} f(\tau)\Delta \tau, \quad t\in\mathbb{T},
$$
is an antiderivative of $f$.
\end{tw}
\begin{df}
Let $\mathbb{T}$ be a time scale and $a,b\in\mathbb{T}$.
If $f:\mathbb{T}^{\kappa} \rightarrow \mathbb{R}$ is an rd-continuous
function and $F:\mathbb{T} \rightarrow \mathbb{R}$
is an antiderivative of $f$, then the $\Delta$-integral is defined by
$$
\int\limits_{a}^{b} f(t)\Delta t := F(b)-F(a).
$$
\end{df}
\begin{ex}
\label{int hZ}
Let $a,b\in\mathbb{T}$ and $f:\mathbb{T} \rightarrow \mathbb{R}$ be rd-continuous.
If $\mathbb{T}=\mathbb{R}$, then
\begin{equation*}
\int\limits_{a}^{b}f(t)\Delta t=\int\limits_{a}^{b}f(t)dt,
\end{equation*}
where the integral on the right side is the usual Riemann integral.
If $\mathbb{T}=h\mathbb{Z}$, $h>0$, then
\begin{equation*}
\int\limits_{a}^{b}f(t)\Delta t
=
\begin{cases}
\sum\limits_{k=\frac{a}{h}}^{\frac{b}{h}-1}f(kh)h, & \hbox{ if } a<b, \\
0, & \hbox{ if } a=b,\\
-\sum\limits_{k=\frac{b}{h}}^{\frac{a}{h}-1}f(kh)h, & \hbox{ if } a>b.
\end{cases}
\end{equation*}
\end{ex}
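Example~\ref{int hZ} shows that on $h\mathbb{Z}$ the delta integral is a finite Riemann-type sum. A short Python check (the helper names are ours) computes $\int_a^b f(t)\Delta t$ for $a\le b$ and confirms the antiderivative relation $F^{\Delta}=f$ on $h\mathbb{Z}$:

```python
# Delta integral on hZ for a <= b: sum of f(k*h)*h over k = a/h, ..., b/h - 1.
# Illustrative sketch with our own helper names.

def delta_integral(f, a, b, h):
    """Delta integral of f from a to b on the time scale hZ (a <= b)."""
    n = round((b - a) / h)
    return sum(f(a + k * h) * h for k in range(n))

h = 0.25
f = lambda t: 3 * t + 1

# F(t) := integral from 0 to t is an antiderivative: F^Delta(t) = f(t),
# where F^Delta is the forward difference (F(t+h) - F(t)) / h on hZ.
F = lambda t: delta_integral(f, 0.0, t, h)
t = 2.0
forward_diff = (F(t + h) - F(t)) / h
assert abs(forward_diff - f(t)) < 1e-9
```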
\begin{tw}[Theorem~1.75 of \cite{BohnerDEOTS}]
\label{eqDelta1}
If $f\in C_{rd}$ and $t\in \mathbb{T}^{\kappa}$, then
\begin{equation*}
\int\limits_{t}^{\sigma(t)}f(\tau)\Delta \tau=\mu(t)f(t).
\end{equation*}
\end{tw}
\begin{tw}[Theorem~1.77 of \cite{BohnerDEOTS}]
\label{intpropdelta}
If $a,b\in\mathbb{T}$, $a\leqslant c \leqslant b$,
$\alpha\in\mathbb{R}$, and $f,g \in C_{rd}(\mathbb{T}, \mathbb{R})$, then:
\begin{enumerate}
\item
$\int\limits_{a}^{b}(f(t)+g(t))\Delta t
=\int\limits_{a}^{b} f(t)\Delta t+\int\limits_{a}^{b}g(t)\Delta t$,
\item
$\int\limits_{a}^{b}\alpha f(t)\Delta t=\alpha \int\limits_{a}^{b} f(t)\Delta t$,
\item
$\int\limits_{a}^{b}f(t)\Delta t
=-\int\limits_{b}^{a}f(t)\Delta t$,
\item
$\int\limits_{a}^{b} f(t)\Delta t
=\int\limits_{a}^{c} f(t)\Delta t
+\int\limits_{c}^{b} f(t)\Delta t$,
\item
$\int\limits_{a}^{a} f(t)\Delta t=0$,
\item
$\int\limits_{a}^{b}f(t)g^{\Delta}(t)\Delta t
=\left.f(t)g(t)\right|^{t=b}_{t=a}
-\int\limits_{a}^{b} f^{\Delta}(t)g^\sigma(t)\Delta t$,
\item
$\int\limits_{a}^{b} f^\sigma(t) g^{\Delta}(t)\Delta t
=\left.f(t)g(t)\right|^{t=b}_{t=a}
-\int\limits_{a}^{b}f^{\Delta}(t)g(t)\Delta t$,
\item
if $f(t)\geqslant0$ for all $a\leqslant t < b$,
then $\int\limits_{a}^{b}f(t)\Delta t \geqslant 0$.
\end{enumerate}
\end{tw}
\subsection{Delta dynamic equations}
\label{equations}
We now recall the definition and main properties of the delta exponential function.
The general solution to a linear and homogenous second-order delta differential
equation with constant coefficients is given.
\begin{df}[Definition~2.25 of \cite{BohnerDEOTS}]
We say that a function $p:\mathbb{T} \rightarrow \mathbb{R}$ is regressive if
$$
1+\mu(t)p(t) \neq 0
$$
for all $t\in\mathbb{T}^{\kappa}$.
The set of all regressive and rd-continuous functions
$f:\mathbb{T} \rightarrow \mathbb{R}$ is denoted by
$\mathcal{R}=\mathcal{R}(\mathbb{T})=\mathcal{R}(\mathbb{T},\mathbb{R})$.
\end{df}
\begin{df}[Definition~2.30 of \cite{BohnerDEOTS}]
If $p\in\mathcal{R}$, then we define the exponential function by
\begin{equation*}
e_{p}(t,s):= \exp\left(\int\limits_{s}^{t}\xi_{\mu(\tau)}(p(\tau))\Delta\tau\right),
\quad s,t\in\mathbb{T},
\end{equation*}
where $\xi_{\mu}$ is the cylinder transformation (see \cite[Definition~2.21]{BohnerDEOTS}).
\end{df}
\begin{ex}
\label{ex:16}
Let $\mathbb{T}$ be a time scale, $t_0 \in \mathbb{T}$,
and $\alpha\in\mathcal{R}(\mathbb{T},\mathbb{R})$.
If $\mathbb{T}=\mathbb{R}$, then
$e_{\alpha}(t,t_{0})=e^{\alpha(t-t_{0})}$ for all $t \in\mathbb{T}$.
If $\mathbb{T}=h\mathbb{Z}$, $h>0$, and
$\alpha\in\mathbb{C}\backslash\left\{-\frac{1}{h}\right\}$ is a constant, then
\begin{equation}
\label{exp:in:hZ}
e_{\alpha}(t,t_{0})=\left(1+\alpha h\right)^{\frac{t-t_{0}}{h}}
\hbox{ for all } t \in \mathbb{T}.
\end{equation}
\end{ex}
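Formula \eqref{exp:in:hZ} can be verified numerically. The sketch below (our own names, for illustration only) checks that $e_{\alpha}(t,t_{0})=(1+\alpha h)^{(t-t_{0})/h}$ solves $y^{\Delta}=\alpha y$ on $h\mathbb{Z}$, and checks property~2 of Theorem~\ref{properties_exp_delta}:

```python
# The delta exponential on hZ: e_alpha(t, t0) = (1 + alpha*h)**((t - t0)/h).
# Illustrative sketch; the names are ours.

def e_exp(alpha, t, t0, h):
    return (1 + alpha * h) ** ((t - t0) / h)

alpha, h, t0 = 0.3, 0.5, 0.0
t = 2.0
y = lambda s: e_exp(alpha, s, t0, h)

# e_alpha solves y^Delta = alpha * y on hZ (forward difference):
assert abs((y(t + h) - y(t)) / h - alpha * y(t)) < 1e-9

# Property 2 of Theorem 2.36: e_p(sigma(t), s) = (1 + mu(t)*p) * e_p(t, s):
assert abs(y(t + h) - (1 + h * alpha) * y(t)) < 1e-9
```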
\begin{tw}[Theorem~2.36 of \cite{BohnerDEOTS}]
\label{properties_exp_delta}
Let $p,q\in\mathcal{R}$ and $\ominus p(t):=\frac{-p(t)}{1+\mu(t)p(t)}$.
The following holds:
\begin{enumerate}
\item
$e_{0}(t,s)\equiv 1 \hbox{ and } e_{p}(t,t)\equiv 1$;
\item
$e_{p}(\sigma(t),s)=(1+\mu(t)p(t))e_{p}(t,s)$;
\item
$\frac{1}{e_{p}(t,s)}=e_{\ominus p}(t,s)$;
\item
$e_{p}(t,s)=\frac{1}{e_{p}(s,t)}=e_{\ominus p}(s,t)$;
\item
$\left(\frac{1}{e_{p}(t,s)}\right)^{\Delta}
=\frac{-p(t)}{e_{p}^{\sigma}(t,s)}$.
\end{enumerate}
\end{tw}
\begin{tw}[Theorem~2.62 of \cite{BohnerDEOTS}]
Suppose $y^{\Delta}=p(t)y$ is regressive, that is, $p\in\mathcal{R}$.
Let $t_{0}\in\mathbb{T}$ and $y_{0}\in\mathbb{R}$.
The unique solution to the initial value problem
\begin{equation*}
y^{\Delta}(t) = p(t) y(t), \quad y(t_{0})=y_{0},
\end{equation*}
is given by $y(t)=e_{p}(t,t_{0})y_{0}$.
\end{tw}
Let us consider the following linear second-order dynamic
homogeneous equation with constant coefficients:
\begin{equation}
\label{eq1}
y^{\Delta\Delta}+\alpha y^{\Delta}+\beta y=0,
\quad \alpha, \beta \in \mathbb{R}.
\end{equation}
We say that the dynamic equation \eqref{eq1} is regressive if
$1-\alpha\mu(t)+\beta\mu^{2}(t)\neq 0$ for
$t\in\mathbb{T}^{\kappa}$, i.e., $\beta\mu-\alpha\in\mathcal{R}$.
\begin{df}[Definition~3.5 of \cite{BohnerDEOTS}]
Given two delta differentiable functions $y_{1}$ and $y_{2}$,
we define the Wronskian $W(y_{1},y_{2})(t)$ by
$$
W(y_{1},y_{2})(t) := \det \left[
\begin{array}{cc}
y_{1}(t)&y_{2}(t)\\
y_{1}^{\Delta}(t)&y_{2}^{\Delta}(t)
\end{array}\right].
$$
We say that two solutions $y_{1}$ and $y_{2}$
of \eqref{eq1} form a fundamental set of solutions
(or a fundamental system) for \eqref{eq1},
provided $W(y_{1},y_{2})(t)\neq 0$
for all $t\in\mathbb{T}^{\kappa}$.
\end{df}
\begin{tw}[Theorem~3.16 of \cite{BohnerDEOTS}]
\label{fund sys}
If \eqref{eq1} is regressive and $\alpha^{2}-4\beta\neq 0$,
then a fundamental system for \eqref{eq1} is given by
$e_{\lambda_{1}}(\cdot,t_{0})$ and $e_{\lambda_{2}}(\cdot,t_{0})$,
where $t_{0}\in\mathbb{T}^{\kappa}$ and $\lambda_{1}$ and $\lambda_{2}$ are given by
\begin{equation*}
\lambda_{1} := \frac{-\alpha-\sqrt{\alpha^{2}-4\beta}}{2},
\quad \lambda_{2} := \frac{-\alpha+\sqrt{\alpha^{2}-4\beta}}{2}.
\end{equation*}
\end{tw}
\begin{tw}[Theorem~3.32 of \cite{BohnerDEOTS}]
\label{fund sys2}
Suppose that $\alpha^{2}-4\beta < 0$. Define $p=\frac{-\alpha}{2}$
and $q=\frac{\sqrt{4\beta-\alpha^{2}}}{2}$. If $p$ and $\mu\beta-\alpha$
are regressive, then a fundamental system of \eqref{eq1} is given by
$\cos_{\frac{q}{(1+\mu p)}}(\cdot,t_{0})e_{p}(\cdot,t_{0})$
and $\sin_{\frac{q}{(1+\mu p)}}(\cdot,t_{0})e_{p}(\cdot,t_{0})$,
where $t_{0}\in\mathbb{T}^{\kappa}$.
\end{tw}
\begin{tw}[Theorem~3.34 of \cite{BohnerDEOTS}]
\label{fund sys3}
Suppose $\alpha^{2}-4\beta = 0$. Define $p=\frac{-\alpha}{2}$.
If $p\in\mathcal{R}$, then a fundamental system of \eqref{eq1} is given by
$$
e_{p}(t,t_{0})
\quad \hbox{ and } \quad
e_{p}(t,t_{0})\int\limits_{t_{0}}^{t}\frac{1}{1+p\mu(\tau)}\Delta \tau,
$$
where $t_{0}\in\mathbb{T}^{\kappa}$.
\end{tw}
\begin{tw}[Theorem~3.7 of \cite{BohnerDEOTS}]
\label{general sol}
If functions $y_{1}$ and $y_{2}$ form
a fundamental system of solutions for \eqref{eq1}, then
$y(t)=\alpha y_{1}(t)+ \beta y_{2}(t)$,
where $\alpha,\beta$ are constants, is a general solution to \eqref{eq1},
i.e., every function of this form is a solution to \eqref{eq1}
and every solution of \eqref{eq1} is of this form.
\end{tw}
\subsection{Calculus of variations on time scales}
\label{CV}
Consider the following problem of the calculus
of variations on time scales:
\begin{equation}
\label{problem}
\mathcal{L}(y)=\int\limits_{a}^{b}
L(t,y(t),y^{\Delta}(t))\Delta t \longrightarrow \min
\end{equation}
subject to the boundary conditions
\begin{equation}
\label{bcproblem}
y(a)=y_{a}, \quad\quad y(b)=y_{b},
\end{equation}
where $L:[a,b]_{\mathbb{T}}^{\kappa}\times\mathbb{R}^{2}\rightarrow\mathbb{R}$,
$(t,y,v) \mapsto L(t,y,v)$, is a given function, and $y_{a}$, $y_{b}\in\mathbb{R}$.
\begin{df}
A function $y \in C^{1}_{rd}([a,b]_{\mathbb{T}},\mathbb{R})$ is said to
be an admissible path to problem \eqref{problem} if it satisfies
the given boundary conditions \eqref{bcproblem}.
\end{df}
We assume that $L(t,\cdot,\cdot)$ is differentiable in $(y,v)$;
that $L(t,\cdot,\cdot)$, $L_{y}(t,\cdot,\cdot)$ and $L_{v}(t,\cdot,\cdot)$
are continuous at $(y,y^{\Delta})$, uniformly in $t$,
and rd-continuous in $t$, for any admissible path;
and that the functions $L(\cdot,y(\cdot),y^{\Delta}(\cdot))$, $L_{y}(\cdot,y(\cdot),y^{\Delta}(\cdot))$
and $L_{v}(\cdot,y(\cdot),y^{\Delta}(\cdot))$ are
$\Delta$-integrable on $[a,b]_{\mathbb{T}}$ for any admissible path $y$.
\begin{df}
We say that an admissible function $\hat{y}$ is a local minimizer
to problem \eqref{problem}--\eqref{bcproblem} if there exists $\delta >0$
such that $\mathcal{L}(\hat{y})\le\mathcal{L}(y)$ for all admissible
functions $y\in C^{1}_{rd}$ satisfying the inequality $||y-\hat{y}||<\delta$.
The following norm in $C^{1}_{rd}$ is considered:
\begin{equation*}
||y||:=\sup\limits_{t\in [a,b]^{\kappa}_{\mathbb{T}}} |y(t)|
+\sup\limits_{t\in [a,b]^{\kappa}_{\mathbb{T}}} \left|y^{\Delta}(t)\right|.
\end{equation*}
\end{df}
\begin{tw}[Corollary~1 of \cite{OptCondHigherDelta}]
\label{corE-Leq}
If $y$ is a local minimizer to problem \eqref{problem}--\eqref{bcproblem},
then $y$ satisfies the Euler--Lagrange equation
\begin{equation}
\label{E-L:eq:T}
L_{v}(t,y(t),y^{\Delta}(t))
=\int\limits_{a}^{\sigma(t)}
L_{y}(\tau,y(\tau),y^{\Delta}(\tau))\Delta\tau +c
\end{equation}
for some constant $c\in\mathbb{R}$ and all
$t\in \left[a,b\right]^{\kappa}_{\mathbb{T}}$.
\end{tw}
\section{Economic model}
\label{model}
The inflation rate, $p$, affects society's decisions regarding consumption and saving,
and therefore the aggregate demand for domestic production, which in turn affects the rate
of unemployment, $u$. The relationship between the inflation rate and the rate of unemployment
is described by the Phillips curve, the most commonly used tool in the analysis of inflation
and unemployment \cite{Samuelson}. Given a Phillips tradeoff between $u$ and $p$,
what is then the best combination of inflation and unemployment over time?
To answer this question, we follow the formulations presented in \cite{ChiangEDO,Taylor}.
The Phillips tradeoff between $u$ and $p$ is defined as
\begin{equation}
\label{inflation}
p := -\beta u+\pi , \quad \beta >0,
\end{equation}
where $\pi$ is the expected rate of inflation, whose evolution is captured by the equation
\begin{equation}
\label{expected}
\pi'=j(p-\pi), \quad 0< j \le 1.
\end{equation}
The government loss function, $\lambda$,
is specified in the following quadratic form:
\begin{equation}
\label{loss function}
\lambda = u^{2} + \alpha p^{2},
\end{equation}
where $\alpha>0$ is the weight attached to the government's distaste for
inflation relative to the loss from income deviating from its equilibrium level.
Combining \eqref{inflation} and \eqref{expected},
and substituting the result into \eqref{loss function}, we obtain that
\begin{equation*}
\lambda\left(\pi(t),\pi'(t)\right)
=\left(\frac{\pi'(t)}{\beta j}\right)^{2}
+\alpha \left(\frac{\pi'(t)}{j}+\pi(t)\right)^{2},
\end{equation*}
where $\alpha$, $\beta$, and $j$ are real positive parameters
that describe the relations between all variables that occur in the model \cite{Taylor}.
The problem consists in finding the optimal path $\pi$ that minimizes the total social loss
over the time interval $[0, T]$. The initial and the terminal values of $\pi$,
$\pi_{0}$ and $\pi_{T}$, respectively, are given with $\pi_{0},\pi_{T}>0$.
To recognize the importance of the present over the future,
all social losses are discounted to their present values via a positive discount rate $\delta$.
Two models are available in the literature: the \emph{continuous model}
\begin{equation}
\label{total_social_loss_con}
\Lambda_{C}(\pi)=\int\limits_{0}^{T}\lambda(\pi(t),\pi'(t))e^{-\delta t} dt\longrightarrow \min,
\end{equation}
subject to given boundary conditions
\begin{equation}
\label{eq:bc:cdm}
\pi(0)=\pi_{0}, \quad\pi(T)=\pi_{T},
\end{equation}
and the \emph{discrete model}
\begin{equation}
\label{total_social_loss_disc}
\Lambda_{D}(\pi)=\sum\limits_{t=0}^{T-1}
\lambda(\pi(t),\Delta\pi(t))(1+\delta)^{-t}\longrightarrow \min,
\end{equation}
also subject to given boundary conditions \eqref{eq:bc:cdm}.
In both cases \eqref{total_social_loss_con} and \eqref{total_social_loss_disc},
\begin{equation}
\label{def:lambda}
\lambda(t,\pi,\upsilon) := \left(\frac{\upsilon}{\beta j}\right)^{2}
+\alpha\left(\frac{\upsilon}{j}+\pi\right)^{2}.
\end{equation}
Here we propose the more general \emph{time-scale model}
\begin{equation}
\label{total:social:loss:scale}
\Lambda_{\mathbb{T}}(\pi)=\int\limits_{0}^{T}\lambda(t,\pi(t),\pi^{\Delta}(t))
e_{\ominus\delta}(t,0)\Delta t\longrightarrow \min
\end{equation}
subject to boundary conditions \eqref{eq:bc:cdm}
and with $\lambda$ defined by \eqref{def:lambda}.
Clearly, the time-scale model includes both the discrete
and continuous models as special cases:
our time-scale functional \eqref{total:social:loss:scale}
reduces to \eqref{total_social_loss_con} when $\mathbb{T} = \mathbb{R}$
and to \eqref{total_social_loss_disc} when $\mathbb{T} = \mathbb{Z}$.
\section{Main results}
\label{main:results}
Standard dynamic economic models are set up in either continuous or discrete time.
Since time scale calculus can be used to model dynamic processes whose time domains
are more complex than the set of integers or real numbers, the use of time scales in
economics is a flexible and powerful modelling technique.
In this section we show the advantage of using \eqref{total:social:loss:scale}
with the periodic time scale. We begin by obtaining in Section~\ref{main:theory}
a necessary and also a sufficient optimality condition
for our economic model \eqref{total:social:loss:scale}:
Theorem~\ref{cor1} and Theorem~\ref{global}, respectively.
For $\mathbb{T} = h\mathbb{Z}$, $h > 0$, the explicit solution
$\hat{\pi}$ to the problem \eqref{total:social:loss:scale}
subject to \eqref{eq:bc:cdm} is given (Theorem~\ref{th:delf}).
Afterwards, we use such results with empirical data
(Section~\ref{main:empirical}).
\subsection{Theoretical results}
\label{main:theory}
Let us consider the problem
\begin{equation}
\label{mainProblem}
\mathcal{L}(\pi)=\int\limits_{0}^{T}L(t,\pi(t),\pi^{\Delta}(t))\Delta t\longrightarrow \min
\end{equation}
subject to boundary conditions
\begin{equation}
\label{boun:con}
\pi(0)=\pi_{0}, \quad \pi(T)=\pi_{T}.
\end{equation}
As explained in Section~\ref{model}, we are particularly interested
in the situation where
\begin{equation}
\label{eq:pii}
L(t,\pi(t),\pi^{\Delta}(t))=\left[\left(\frac{\pi^{\Delta}(t)}{\beta j}\right)^{2}
+\alpha\left(\frac{\pi^{\Delta}(t)}{j}+\pi(t)\right)^{2}\right]e_{\ominus\delta}(t,0).
\end{equation}
For simplicity, in the sequel we use the notation
$[\pi](t) := (t,\pi(t),\pi^{\Delta}(t))$.
\begin{tw}
\label{cor1}
If $\hat{\pi}$ is a local minimizer to problem \eqref{mainProblem}--\eqref{boun:con}
and the graininess function $\mu$ is a $\Delta$-differentiable function
on $[0,T]^{\kappa}_{\mathbb{T}}$, then $\hat{\pi}$ satisfies the Euler--Lagrange equation
\begin{equation}
\label{E-LDelta}
\left(L_{v}[\pi](t)\right)^{\Delta}
=\left(1+\mu^{\Delta}(t)\right) L_{y}[\pi](t)
+\mu^{\sigma}(t)\left(L_{y}[\pi](t)\right)^{\Delta}
\end{equation}
for all $t\in \left[0,T\right]^{\kappa^{2}}_{\mathbb{T}}$.
\end{tw}
\begin{proof}
If $\hat{\pi}$ is a local minimizer to \eqref{mainProblem}--\eqref{boun:con},
then, by Theorem~\ref{corE-Leq}, $\hat{\pi}$ satisfies the following equation:
\begin{equation*}
L_{v}[\pi](t)=\int\limits_{0}^{\sigma(t)}L_{y}[\pi](\tau)\Delta\tau+c.
\end{equation*}
Using the properties of the $\Delta$-integral (see Theorem~\ref{eqDelta1}),
we can write that $\hat{\pi}$ satisfies
\begin{equation}
\label{eq:aux1}
L_{v}[\pi](t)=\int\limits_{0}^{t}L_{y}[\pi](\tau)\Delta\tau +\mu(t)L_{y}[\pi](t) + c.
\end{equation}
Taking the $\Delta$-derivative of both sides of \eqref{eq:aux1},
we obtain equation \eqref{E-LDelta}.
\end{proof}
Using Theorem~\ref{cor1}, we can immediately write the classical
Euler--Lagrange equations for the continuous \eqref{total_social_loss_con}
and the discrete \eqref{total_social_loss_disc} models.
\begin{ex}
\label{E-L_con}
Let $\mathbb{T} = \mathbb{R}$. Then, $\mu \equiv 0$ and
\eqref{E-LDelta} with the Lagrangian \eqref{eq:pii} reduces to
\begin{equation}
\label{eq:EL:ex25}
\left(1+\alpha\beta^{2}\right)\pi^{\prime\prime}(t)
-\delta\left(1+\alpha\beta^{2}\right)\pi^{\prime}(t)
-\alpha j \beta^{2}\left(\delta+j\right)\pi(t)=0.
\end{equation}
This is the Euler--Lagrange equation
for the continuous model \eqref{total_social_loss_con}.
\end{ex}
\begin{ex}
\label{E-L disc}
Let $\mathbb{T} = \mathbb{Z}$. Then, $\mu \equiv 1$ and
\eqref{E-LDelta} with the Lagrangian \eqref{eq:pii} reduces to
\begin{equation}
\label{eq:EL:ex26}
\left(\alpha j\beta^{2}-\alpha\beta^{2}-1\right)\Delta^{2}\pi(t)
+\left(\alpha j^{2}\beta^{2}+\delta\alpha\beta^{2}+\delta\right) \Delta\pi(t)
+\alpha j\beta^{2} \left(\delta+j\right)\pi(t)=0.
\end{equation}
This is the Euler--Lagrange equation for the discrete model
\eqref{total_social_loss_disc}.
\end{ex}
\begin{cor}
\label{cor:ThZ}
Let $\mathbb{T} = h\mathbb{Z}$, $h > 0$, $\pi_{0}, \pi_{T} \in\mathbb{R}$, and $T = N h$
for a certain integer $N > 2$. If $\hat{\pi}$ is a solution to the problem
\begin{gather*}
\Lambda_{h}(\pi)=\sum\limits_{t=0}^{T-h}L(t,\pi(t),\pi^{\Delta}(t))h \longrightarrow \min,\\
\pi(0)=\pi_{0},\quad \pi(T)=\pi_{T},
\end{gather*}
then $\hat{\pi}$ satisfies the Euler--Lagrange equation
\begin{equation}
\label{eq:cor}
\left(L_{v}[\pi](t)\right)^{\Delta}=L_{y}[\pi](t)+h\left(L_{y}[\pi](t)\right)^{\Delta}
\end{equation}
for all $t\in \{0,\ldots, T-2h\}$.
\end{cor}
\begin{proof}
Follows from Theorem~\ref{cor1} by choosing $\mathbb{T}$
to be the periodic time scale $h\mathbb{Z}$.
\end{proof}
\begin{ex}
\label{eq:exQNI}
The Euler--Lagrange equation for problem
\eqref{total:social:loss:scale} on $\mathbb{T} = h\mathbb{Z}$ is given by \eqref{eq:cor}:
\begin{equation}
\label{E-LeqhZ}
(1+\alpha\beta^{2}-\alpha\beta^{2}j h)\pi^{\Delta\Delta}
+(-\delta-\alpha\beta^{2}\delta
-\alpha\beta^{2}j^{2}h) \pi^{\Delta}
+ (-\alpha\beta^{2}\delta j-\alpha\beta^{2}j^{2})\pi = 0.
\end{equation}
Assume that $1+\alpha\beta^{2}-\alpha\beta^{2}j h\neq 0$.
Then equation \eqref{E-LeqhZ} is regressive and we can use
the well-known theorems from the theory of dynamic equations on time scales
(see Section~\ref{equations}) in order to find its general solution.
Introducing the quantities
\begin{equation}
\label{eq:O:A:B}
\Omega := 1+\alpha\beta^{2}-\alpha\beta^{2}jh,
\quad A := -\left(\delta+\alpha\beta^{2}\delta+\alpha\beta^{2}j^{2}h\right),
\quad B := \alpha\beta^{2}j(\delta +j),
\end{equation}
we rewrite equation \eqref{E-LeqhZ} as
\begin{equation}
\label{eqConstDelta}
\pi^{\Delta\Delta}+\frac{A}{\Omega}\pi^{\Delta}-\frac{B}{\Omega}\pi=0.
\end{equation}
The characteristic equation for \eqref{eqConstDelta} is
$$
\varphi(\lambda)=\lambda^{2}+\frac{A}{\Omega}\lambda-\frac{B}{\Omega}=0
$$
with discriminant
\begin{equation}
\label{determinant}
\zeta=\frac{A^{2}+4B\Omega}{\Omega^{2}}.
\end{equation}
In general there are three different cases, depending on the sign of the discriminant $\zeta$:
$\zeta >0$, $\zeta=0$ and $\zeta <0$. However, with our assumptions on the parameters, simple computations
show that the last case cannot occur. Therefore, we consider the two possible cases:
\begin{enumerate}
\item If $\zeta>0$, then we have two different characteristic roots:
\begin{equation*}
\lambda_{1}=\frac{-A+\sqrt{A^{2}+4B\Omega}}{2\Omega} >0
\hbox{ and }\lambda_{2}=\frac{-A-\sqrt{A^{2}+4B\Omega}}{2\Omega}<0,
\end{equation*}
and by Theorem~\ref{fund sys} and Theorem~\ref{general sol} we get that
\begin{equation}
\label{general solution}
\pi(t)=C_{1}e_{\lambda_{1}}(t,0)+C_{2}e_{\lambda_{2}}(t,0)
\end{equation}
is the general solution to \eqref{eqConstDelta},
where $C_{1}$ and $C_{2}$ are constants
determined using the given boundary conditions \eqref{eq:bc:cdm}.
Using \eqref{exp:in:hZ}, we rewrite \eqref{general solution} as
\begin{equation*}
\pi(t)=C_{1}\left(1+\lambda_{1}h\right)^{\frac{t}{h}}
+C_{2}\left(1+\lambda_{2}h\right)^{\frac{t}{h}}.
\end{equation*}
\item If $\zeta=0$, then by Theorems~\ref{fund sys3} and \ref{general sol} we get that
\begin{equation}
\label{general solution2}
\pi(t)=K_{1}e_{p}(t,0)+K_{2}e_{p}(t,0)\int\limits_{0}^{t}\frac{\Delta\tau}{1+p\mu(\tau)}
\end{equation}
is the general solution to \eqref{eqConstDelta},
where $K_{1}$ and $K_{2}$ are constants, determined using
the given boundary conditions \eqref{eq:bc:cdm},
and $p=-\frac{A}{2 \Omega} \in \mathcal{R}$.
Using Example~\ref{int hZ} and \eqref{exp:in:hZ},
we rewrite \eqref{general solution2} as
$$
\pi(t)= K_{1} \left(1-\frac{A}{2\Omega}h\right)^{\frac{t}{h}}
+K_{2} \left(1-\frac{A}{2\Omega}h\right)^{\frac{t}{h}}\frac{2\Omega t}{2\Omega-Ah}.
$$
\end{enumerate}
\end{ex}
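The computations of Example~\ref{eq:exQNI} can be verified numerically. The Python sketch below uses the parameter values of Section~\ref{main:empirical} ($\alpha=1/2$, $\beta=3$, $j=3/4$, $\delta=1/4$) with $h=1$, builds $\Omega$, $A$, $B$ from \eqref{eq:O:A:B}, and checks that $e_{\lambda_i}(t,0)=(1+\lambda_i h)^{t/h}$ satisfies \eqref{eqConstDelta} on $h\mathbb{Z}$ (the helper names are ours):

```python
import math

# Parameter values as in the empirical section: alpha=1/2, beta=3, j=3/4, delta=1/4.
alpha, beta, j, delta, h = 0.5, 3.0, 0.75, 0.25, 1.0

# Quantities Omega, A, B of the Euler-Lagrange equation on hZ.
Omega = 1 + alpha * beta**2 - alpha * beta**2 * j * h
A = -(delta + alpha * beta**2 * delta + alpha * beta**2 * j**2 * h)
B = alpha * beta**2 * j * (delta + j)

zeta = (A**2 + 4 * B * Omega) / Omega**2
assert zeta > 0  # the case of two distinct real characteristic roots

sq = math.sqrt(A**2 + 4 * B * Omega)
lam1 = (-A + sq) / (2 * Omega)
lam2 = (-A - sq) / (2 * Omega)

def check(lam):
    # pi(t) = (1 + lam*h)**(t/h); verify pi^DD + (A/Omega)*pi^D - (B/Omega)*pi = 0.
    pi = lambda t: (1 + lam * h) ** (t / h)
    d = lambda g, t: (g(t + h) - g(t)) / h            # delta derivative on hZ
    t = 3.0
    res = d(lambda s: d(pi, s), t) + (A / Omega) * d(pi, t) - (B / Omega) * pi(t)
    return abs(res) < 1e-9

assert check(lam1) and check(lam2)
```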
In certain cases one can show that the Euler--Lagrange
extremals are indeed minimizers. In particular,
this is true for the Lagrangian \eqref{eq:pii} under study.
We recall the notion of jointly convex function
(cf., e.g., \cite[Definition~1.6]{book:MT}).
\begin{df}
\label{def:conv}
Function $(t,u,v) \mapsto L(t,u,v)\in C^1\left([a,b]_\mathbb{T}\times\mathbb{R}^{2}; \mathbb{R}\right)$
is jointly convex in $(u,v)$ if
\begin{equation*}
L(t,u+u_0,v+v_0)-L(t,u,v) \geq \partial_2 L(t,u,v)u_0 +\partial_{3}L(t,u,v) v_0
\end{equation*}
for all $(t,u,v)$, $(t,u+u_{0},v+v_{0}) \in [a,b]_\mathbb{T} \times \mathbb{R}^{2}$.
\end{df}
\begin{tw}
\label{global}
Let $(t,u,v) \mapsto L(t,u,v)$ be jointly convex with respect to $(u,v)$
for all $t\in [a,b]_{\mathbb{T}}$. If $\hat{y}$ is a solution to the Euler--Lagrange
equation \eqref{E-L:eq:T}, then $\hat{y}$ is a global minimizer
to \eqref{problem}--\eqref{bcproblem}.
\end{tw}
\begin{proof}
Since $L$ is jointly convex with respect to $(u,v)$ for all $t\in [a,b]_{\mathbb{T}}$,
\begin{multline*}
\mathcal{L}(y)-\mathcal{L}(\hat{y})
=\int\limits_{a}^{b}[L(t,y(t),y^{\Delta}(t))-L(t,\hat{y}(t),\hat{y}^{\Delta}(t))]\Delta t\\
\ge\int\limits_{a}^{b}\left[\partial_{2}L(t,\hat{y}(t),\hat{y}^{\Delta}(t))
\cdot(y(t)-\hat{y}(t))+\partial_{3}L(t,\hat{y}(t),\hat{y}^{\Delta}(t))
\cdot(y^{\Delta}(t)-\hat{y}^{\Delta}(t))\right]\Delta t
\end{multline*}
for any admissible path $y$. Let $h(t) := y(t)-\hat{y}(t)$.
Using boundary conditions \eqref{bcproblem}, we obtain that
\begin{equation*}
\begin{split}
\mathcal{L}(y)-\mathcal{L}(\hat{y})
&\ge \int\limits_{a}^{b} h^{\Delta} (t)\left[-\int\limits_{a}^{\sigma(t)}
\partial_{2}L(\tau,\hat{y}(\tau),\hat{y}^{\Delta}(\tau))\Delta \tau
+\partial_{3}L(t,\hat{y}(t),\hat{y}^{\Delta}(t))\right]\Delta t\\
&\qquad +\left.h(t)\int\limits_{a}^{t}\partial_{2}L(\tau,\hat{y}(\tau),\hat{y}^{\Delta}(\tau))\Delta \tau\right|_{t=a}^{t=b}\\
&=\int\limits_{a}^{b} h^{\Delta} (t)\left[-\int\limits_{a}^{\sigma(t)}
\partial_{2}L(\tau,\hat{y}(\tau),\hat{y}^{\Delta}(\tau))\Delta \tau
+\partial_{3}L(t,\hat{y}(t),\hat{y}^{\Delta}(t))\right]\Delta t.
\end{split}
\end{equation*}
From \eqref{E-L:eq:T} we get
$$
\mathcal{L}(y)-\mathcal{L}(\hat{y})\ge\int\limits_{a}^{b}h^{\Delta}(t) c\Delta t=0
$$
for some $c\in\mathbb{R}$. Hence, $\mathcal{L}(y)-\mathcal{L}(\hat{y})\ge 0$.
\end{proof}
Combining Examples~\ref{ex:16} and \ref{eq:exQNI} and Theorem~\ref{global},
we obtain the central result to be applied in Section~\ref{main:empirical}.
\begin{tw}[Solution to the total social loss problem of the calculus of variations
in the time scale $\mathbb{T} = h\mathbb{Z}$, $h > 0$]
\label{th:delf}
Let us consider the economic problem
\begin{equation}
\label{functional hZ}
\begin{gathered}
\Lambda_{h}(\pi)=\sum\limits_{t=0}^{T-h}\left[
\left(\frac{\pi^{\Delta}(t)}{\beta j}\right)^{2}
+\alpha\left(\frac{\pi^{\Delta}(t)}{j}+\pi(t)\right)^{2}\right]
\left(1-\frac{h \delta}{1+h\delta} \right)^{\frac{t}{h}} h \longrightarrow \min,\\
\pi(0)=\pi_{0},\quad \pi(T)=\pi_{T},
\end{gathered}
\end{equation}
discussed in Section~\ref{model}
with $\mathbb{T} = h\mathbb{Z}$, $h > 0$, and the delta derivative
given by \eqref{eq:delta:der:h}. More precisely, let $T = N h$
for a certain integer $N > 2$, $\alpha, \beta, \delta, \pi_{0}, \pi_{T} \in\mathbb{R}^+$,
and $0 < j \le 1$ be such that $1+\alpha\beta^{2}-\alpha\beta^{2}j h\neq 0$.
Let $\Omega$, $A$ and $B$ be given as in \eqref{eq:O:A:B}.
\begin{enumerate}
\item If $A^{2}+4B\Omega > 0$, then
the solution $\hat{\pi}$ to problem \eqref{functional hZ} is given by
\begin{equation}
\label{eq:exp:rt:delf}
\hat{\pi}(t)=C\left(1-\frac{A-\sqrt{A^{2}+4B\Omega}}{2\Omega}h\right)^{\frac{t}{h}}
+(\pi_0 - C)\left(1-\frac{A+\sqrt{A^{2}+4B\Omega}}{2\Omega}h\right)^{\frac{t}{h}},
\end{equation}
$t\in \{0,\ldots, T-2h\}$, where
$$
C := \frac{\pi_T-\pi_0 \left(\frac {2\,\Omega-hA
-h\sqrt{{A}^{2}+4\,B\Omega}}{2\Omega} \right) ^{{\frac {T}{h}}}}{\left(
\frac {2\,\Omega-hA+h\sqrt{{A}^{2}+4\,B\Omega}}{2\Omega} \right)^{{\frac {T}{h}}}
- \left(\frac {2\,\Omega-hA-h\sqrt {{A}^{2}+4\,B\Omega}}{2\Omega} \right)^{{\frac {T}{h}}}}.
$$
\item If $A^{2}+4B\Omega = 0$, then
the solution $\hat{\pi}$ to problem \eqref{functional hZ} is given by
\begin{equation}
\label{eq:exp:rt:delf2}
\hat{\pi}(t)=\left(1-\frac{A}{2\Omega}h\right)^{\frac{t}{h}}\pi_{0}
+\left(1-\frac{A}{2\Omega}h\right)^{\frac{t}{h}}\left[\pi_{T}\left(
\frac{2\Omega}{2\Omega-Ah}\right)^{\frac{T}{h}}-\pi_{0}\right]\frac{t}{T},
\end{equation}
$t\in \{0,\ldots, T-2h\}$.
\end{enumerate}
\end{tw}
\begin{proof}
From Example~\ref{eq:exQNI}, $\hat{\pi}$ satisfies the Euler--Lagrange equation for problem
\eqref{functional hZ}. Moreover, the Lagrangian of functional $\Lambda_{h}$ of \eqref{functional hZ}
is a convex function because it is the sum of convex functions. Hence, by Theorem~\ref{global},
$\hat{\pi}$ is a global minimizer.
\end{proof}
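The closed-form solution \eqref{eq:exp:rt:delf} can also be checked numerically. The sketch below (the boundary values $\pi_0=2$, $\pi_T=1$ and $N=12$ are ours, chosen only for the test; $\alpha$, $\beta$, $j$, $\delta$ are those of the empirical section) verifies the boundary conditions and the Euler--Lagrange equation $\Omega\,\pi^{\Delta\Delta}+A\,\pi^{\Delta}-B\,\pi=0$ on $\{0,h,\ldots,T-2h\}$:

```python
import math

# Illustrative check of the closed-form solution (case A^2 + 4*B*Omega > 0).
# pi0, piT and N are our own test values; alpha, beta, j, delta as in the paper.
alpha, beta, j, delta, h = 0.5, 3.0, 0.75, 0.25, 1.0
pi0, piT, N = 2.0, 1.0, 12
T = N * h

Omega = 1 + alpha * beta**2 - alpha * beta**2 * j * h
A = -(delta + alpha * beta**2 * delta + alpha * beta**2 * j**2 * h)
B = alpha * beta**2 * j * (delta + j)
sq = math.sqrt(A**2 + 4 * B * Omega)

r1 = (2 * Omega - h * A + h * sq) / (2 * Omega)   # = 1 + lambda_1 * h
r2 = (2 * Omega - h * A - h * sq) / (2 * Omega)   # = 1 + lambda_2 * h
C = (piT - pi0 * r2 ** (T / h)) / (r1 ** (T / h) - r2 ** (T / h))

def pi_hat(t):
    return C * r1 ** (t / h) + (pi0 - C) * r2 ** (t / h)

# Boundary conditions:
assert abs(pi_hat(0) - pi0) < 1e-9 and abs(pi_hat(T) - piT) < 1e-9

# Euler-Lagrange equation Omega*pi^DD + A*pi^D - B*pi = 0 on {0, h, ..., T-2h}:
d = lambda g, t: (g(t + h) - g(t)) / h
for k in range(N - 1):
    t = k * h
    res = Omega * d(lambda s: d(pi_hat, s), t) + A * d(pi_hat, t) - B * pi_hat(t)
    assert abs(res) < 1e-6
```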
\subsection{Empirical results}
\label{main:empirical}
We have three forms for the total social loss: continuous \eqref{total_social_loss_con},
discrete \eqref{total_social_loss_disc}, and on a time scale $\mathbb{T}$ \eqref{total:social:loss:scale}.
Our idea is to compare the implications of one model with those of another using empirical data:
the rate of inflation $p$ from \cite{rateinf} and the rate of unemployment $u$ from \cite{rateunemp},
which were collected monthly in the USA over 11 years, from 2000 to 2010.
We consider the coefficients
$$
\beta := 3, \quad j := \frac{3}{4},\quad \alpha := \frac{1}{2},\quad \delta := \frac{1}{4},
$$
borrowed from \cite{ChiangEDO}. Therefore,
the time-scale total social loss functional for one year is
\begin{equation}
\label{eq2}
\Lambda_{\mathbb{T}}(\pi) = \int\limits_{0}^{11}\left[\frac{16}{9}\left(\pi^{\Delta}(t)\right)^{2}
+\frac{1}{2}\left(\frac{4}{3}\pi^{\Delta}(t)
+\pi(t)\right)^{2}\right]e_{\ominus\frac{1}{4}}(t,0) \Delta t.
\end{equation}
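On $\mathbb{T}=h\mathbb{Z}$ the functional \eqref{eq2} becomes a finite sum, using $e_{\ominus\delta}(t,0)=(1+h\delta)^{-t/h}$ as in \eqref{functional hZ}. The Python sketch below evaluates it for a given path (the function and its default coefficients mirror \eqref{eq2}; the linear path at the end is synthetic, not the empirical series of \cite{rateinf,rateunemp}):

```python
# Evaluate the total social loss functional on T = hZ:
#   Lambda_h(pi) = sum over t in {0, h, ..., T-h} of
#     [ (pi^Delta/(beta*j))^2 + alpha*(pi^Delta/j + pi)^2 ] * (1+h*delta)^(-t/h) * h.
# Illustrative sketch; the path below is synthetic, not the empirical data.

def total_social_loss(pi, h, T, alpha=0.5, beta=3.0, j=0.75, delta=0.25):
    loss = 0.0
    n = round(T / h)
    for k in range(n):
        t = k * h
        dpi = (pi(t + h) - pi(t)) / h                 # pi^Delta on hZ
        lam = (dpi / (beta * j)) ** 2 + alpha * (dpi / j + pi(t)) ** 2
        loss += lam * (1 + h * delta) ** (-t / h) * h
    return loss

# A synthetic, linearly decaying expected-inflation path over t in [0, 11]:
pi_lin = lambda t: 2.0 - 0.1 * t
value = total_social_loss(pi_lin, h=1.0, T=11.0)
assert value > 0.0
```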
Empirical values $\pi_{E}$ of the expected rate of inflation, $\pi$, for all months in each year,
are calculated using \eqref{inflation} and appropriate values of $p$ and $u$ \cite{rateinf,rateunemp}.
In the sequel, the boundary conditions $\pi(0)$ and $\pi(11)$ will be selected from empirical
data in January and December, respectively.
We shall compare the minimum values of the total social loss functional \eqref{eq2}
obtained from continuous and discrete models and the value for empirical data,
i.e., the value of the discrete functional $\Lambda_{D}(\pi_E) =: \Lambda_{E}$
computed with empirical data $\pi_E$.
In the continuous case we use the Euler--Lagrange equation \eqref{eq:EL:ex25}
with appropriate boundary conditions in order to find the optimal
path that minimizes $\Lambda_{C}$ over each year.
Then, we calculate the optimal values of $\Lambda_{C}$ for each year
(see 2nd column of Table~\ref{tbl:1}). In the 3rd column of Table~\ref{tbl:1}
we collect empirical values of total social loss $\Lambda_{E}$ for each year,
which are obtained by \eqref{total_social_loss_disc} from empirical data.
We find the optimal path that minimizes $\Lambda_{D}$ over each year using
the Euler--Lagrange equation \eqref{eq:EL:ex26}
with appropriate boundary conditions. The optimal values of $\Lambda_{D}$
for each year are given in the 5th column of Table~\ref{tbl:1}.
The paths obtained from the three approaches, using empirical data from 2000,
are presented in Figure~\ref{fig:1}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4,angle=-90]{3.eps}
\end{center}
\caption{\label{fig:1}The expected rate of inflation $\hat{\pi}(t)$
during the year of 2000 in USA, obtained from the classical discrete model \eqref{total_social_loss_disc}
(upper function) and the classical continuous model \eqref{total_social_loss_con}
(lower function), with boundary conditions \eqref{eq:bc:cdm} from January ($t=0$)
and December ($t=11$), together with the empirical rate of inflation
with real data from 2000 \cite{rateinf,rateunemp} (function in the middle).}
\end{figure}
The implications obtained from the three methods in a fixed year are very different,
regardless of the year we choose. Table~\ref{tbl:2} shows the relative errors between
$\Lambda_{C}$ and $\Lambda_{E}$ (the 3rd column), $\Lambda_{D}$ and $\Lambda_{E}$
(the 4th column). Our research was motivated by these discrepancies. Why are the results so different?
Is it caused by poor design of the model or maybe by something else?
We focus on the time sampling of the data collection and consider it as a cause of those differences in the results.
There may exist other reasons, but we examine here the data gathering.
Let us now turn to our time-scale model, in which functional \eqref{eq2} is considered
over a periodic time scale $\mathbb{T}=h\mathbb{Z}$.
In each year we change the time scale by changing $h$,
in such a way that the sum in the functional makes sense,
and we seek the value of $h$ for which the absolute error between the minimal
value of the functional \eqref{eq2} and $\Lambda_{E}$ is minimal.
In Table~\ref{tbl:1}, the 6th column presents the values of the most appropriate $h$ and the
4th column the minimal values of the total social loss that correspond to them.
Figure~\ref{fig:2} presents the optimal paths for the continuous, discrete and time-scale models
together with the empirical path, obtained using real data from 2000 \cite{rateinf,rateunemp}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4,angle=-90]{4.eps}
\end{center}
\caption{\label{fig:2}The three functions of Figure~\ref{fig:1} together with the one
obtained from our time-scale model \eqref{total:social:loss:scale} and Theorem~\ref{th:delf},
illustrating the fact that the expected rate of inflation given by \eqref{eq:exp:rt:delf} with $h = 0.22$
approximates well the empirical rate of inflation.}
\end{figure}
In the 2nd column of Table~\ref{tbl:2} we collect the relative errors between
the minimal values of functional $\Lambda_{E}$ and $\Lambda_{h}$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
year & \multicolumn{4}{c|}{The value of the functional in different approaches} & \ \\ \cline{2-5}
\ & continuous & empirical & time scales & discrete & the best $h$ \\ \hline \cline{2-5}
\ & $\Lambda_{C} $ & $\Lambda_{E} $ & $\Lambda_{h} $ & $\Lambda_{D} $ & \ \\ \hline
2000 & 37.08888039 & 457.1493181 & 487.1508715 & 2470 & 0.22 \\\hline
2001 & 52.78839446 & 522.8060796 & 536.0298868 & 3040 & 0.11 \\\hline
2002 & 63.88123645 & 673.399954 & 663.2573844 & 3820 & 0.11\\\hline
2003 & 62.01139398 & 811.1909476 & 853.5383036 & 4520 & 0.2\\\hline
2004 & 61.72908568 & 703.7663513 & 699.714732 & 4130 & 0.11\\\hline
2005 & 56.01553586 & 672.0977499 & 665.8735854 & 4060 & 0.1 \\\hline
2006 & 45.73885179 & 592.0374216 & 594.1793342 & 3700 & 0.1\\\hline
2007 & 53.65457721 & 505.8743517 & 511.5351347 & 2910 & 0.1 \\\hline
2008 & 73.4472459 & 785.9852316 & 746.8126214 & 4260 & 0.11\\\hline
2009 & 144.2965207 & 1352.738181 & 1357.167459 & 6330 & 0.22\\\hline
2010 & 153.4630805 & 1819.572063 & 1865.77131 & 11400 & 0.1\\\hline
11 years & 12.89356177 & 480.5729081 & 446.1625854 & $2\cdot 10^{91}$ & 0.11 \\\hline
\end{tabular}
\end{center}
\caption{\label{tbl:1}Comparison of the values of the total social loss functionals in different approaches.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
\ & \multicolumn{3}{c|}{Relative error (in \%) between the empirical value $\Lambda_{E}$ and the result in} \\ \cline{2-4}
year & time scale $h\mathbb{Z}$ with the best $h$ & continuous approach & discrete classic approach\\\cline{2-4}
\ & $\left|\frac{\Lambda_{h}-\Lambda_{E}}{\Lambda_{E}}\right|$ & $\left|\frac{\Lambda_{C}-\Lambda_{E}}{\Lambda_{E}}\right|$
\ & $\left|\frac{\Lambda_{D}-\Lambda_{E}}{\Lambda_{E}}\right|$\\ \cline{2-4} \hline
2000 & 6.562747053 & 91.88692208 & 440.3048637\\ \hline
2001 & 2.529390479 & 89.90287288 & 481.4775533\\ \hline
2002 & 1.506173195 & 90.51362625 & 467.270606\\ \hline
2003 & 5.220393068 & 92.35551208 & 457.2054291 \\ \hline
2004 & 0.575705174 & 91.2287529 & 486.8424929\\ \hline
2005 & 0.926080247 & 91.66556712 & 504.0787967\\ \hline
2006 & 0.361786602 & 92.27433096 & 524.9604949\\ \hline
2007 & 1.119009687 & 89.39369489 & 475.2416564\\\hline
2008 & 4.98388629 & 90.6553911 & 441.9949165\\ \hline
2009 & 0.327430545 & 89.33300451 & 367.939775\\ \hline
2010 & 2.539017164 & 91.56597952 & 526.5209404\\ \hline
11 years & 7.160271027 & 97.31704356 & $4.1617\cdot 10^{90}$ \\ \hline
\end{tabular}
\end{center}
\caption{\label{tbl:2}Relative errors.}
\end{table}
\section{Conclusions}
\label{conclusions}
We introduced a time-scale model to the total social loss over
a certain time interval under study. During examination
of the proposed time-scale model for $\mathbb{T}=h\mathbb{Z}$, $h > 0$,
we changed the graininess parameter. Our goal was to obtain the most similar value
of the total social loss functional $\Lambda_{h}$ to its real value,
i.e., the value from empirical data. We analyzed 11 years
with real data from \cite{rateinf,rateunemp}. With a well-chosen
time scale, we found a small relative error between the real value
of the total social loss and the value obtained
from our time-scale model (see the 2nd column of Table~\ref{tbl:2}).
We conclude that the lack of accurate results
from the classical models arises from an inappropriate frequency of data collection.
Indeed, if one measures the level of inflation and unemployment
about once a week, which is suggested by the values of $h$ obtained
from the time-scale model, e.g., $h=0.11$ or $h=0.2$ (here $h=1$ corresponds to one month),
the credibility of the results obtained from the classical methods will be much higher.
In other words, similar results to the ones obtained by our time-scale model
can be obtained with the classical models if a higher frequency of data collection were used.
In practical terms, however, to collect the levels of inflation and unemployment
on a weekly basis is not realizable, and the calculus of variations
on time scales \cite{Bartos,china-Xuzhou} assumes an important role.
\section*{Acknowledgements}
This work was supported by {\it FEDER} funds through
{\it COMPETE} --- Operational Programme Factors of Competitiveness
(``Programa Operacional Factores de Competitividade'')
and by Portuguese funds through the
{\it Center for Research and Development
in Mathematics and Applications} (University of Aveiro)
and the Portuguese Foundation for Science and Technology
(``FCT --- Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia''),
within project PEst-C/MAT/UI4106/2011
with COMPETE number FCOMP-01-0124-FEDER-022690.
Dryl was also supported by FCT through the Ph.D. fellowship
SFRH/BD/51163/2010; Malinowska by Bialystok
University of Technology grant S/WI/02/2011;
and Torres by EU funding under the 7th Framework Programme
FP7-PEOPLE-2010-ITN, grant agreement number 264735-SADCO.
The authors are grateful to two anonymous referees
for valuable suggestions and comments,
which improved the quality of the paper.
\section{Introduction}
Recently, the following system has been studied in the paper \cite{ADL}:
\begin{equation}\label{eq:SP}
\left\{
\begin{array}{ll}
-\Delta u=\eta|u|^{p-1}u + \varepsilon q\phi f(u)& \text{in } \O, \\
- \Delta \phi=2 qF(u)& \text{in } \O, \\
u=\phi=0 & \text{on }\partial \O,
\end{array}
\right.
\end{equation}
where $\O \subset \mathbb{R}^3$ is a bounded domain with smooth
boundary $\partial \O$, $1 < p < 5$, $q >0$, $\varepsilon,\eta = \pm 1$,
$f:{\rm I\! R}\to{\rm I\! R}$ is a continuous function
and $F(s)=\int_0^s f(t)\, dt.$\\
If $f(s)=s$, \eqref{eq:SP} becomes the well known
Schr\"odinger-Poisson system in a bounded domain which has been
investigated by many authors (see e.g. \cite{BF, CS, PS0, PS, RS, S}). In \cite{ADL} it has been
shown that if $f(s)$ grows at infinity as $s^4$, then a variational
approach based on the reduction method (e.g. as in \cite{BF}) becomes more difficult
because of a loss of compactness in the coupling term. In this case
problem \eqref{eq:SP} recalls, at least formally, the more known
Dirichlet problem
\begin{equation}\label{eq:BN}
\left\{
\begin{array}{ll}
-\Delta u =\l u^p + u^5& \text{in } \O, \\
u>0& \text{in } \O\\
u=0& \text{on } \partial\O
\end{array}
\right.
\end{equation}
which has been studied and solved for $p\in [1,5[$ by Brezis and
Nirenberg in the very celebrated paper \cite{BN}. In that paper, by
means of a deep investigation of the compactness property of
minimizing sequences of a suitable constrained functional, it was
shown that, if $p=1$ and $\O$ is a ball, the Palais-Smale condition holds at the level
of the infimum if and only if the parameter $\l$ lies into an interval
depending on the first eigenvalue of the operator $-\Delta$. In the same
spirit of \cite{BN} and \cite{ADL}, in this paper we are interested
in studying the following problem
\begin{equation} \label{P}\tag{$\mathcal P$}
\left\{
\begin{array}{ll}
-\Delta u = \l u + q |u|^3 u \phi
&
\hbox{in } B_R,\\
-\Delta \phi=q |u|^5
&
\hbox{in } B_R,\\
u=\phi=0
&
\hbox{on } \partial B_R.
\end{array}
\right.
\end{equation}
where $\l\in{\rm I\! R}$ and $B_R$ is the ball in ${\mathbb{R}^3}$ centered at $0$ with
radius $R$.
As is well known, problem \eqref{P} is equivalent to that of
finding critical points of a functional depending only on the
variable $u$ and which includes a nonlocal nonlinear term. Many
papers treated functionals presenting both a critically growing
nonlinearity and a nonlocal nonlinearity (see \cite{AP, CCM, C, ZZ}),
but, to our knowledge, the case in which the term with critical
growth coincides with the one containing the nonlocal nonlinearity
has never been considered.\\
From a technical point of view, the use of an approach similar to
that of Brezis and Nirenberg requires estimates different from
those used in the above-mentioned papers. Indeed, since
the nonlocal term of the functional is precisely the
cause of the lack of compactness, it seems natural to compare it
with the critical Lebesgue norm.\\
The main result we present is the following.
\begin{theorem}\label{th1}
Set $\l_1$ the first eigenvalue of $-\Delta$ in $B_R.$
If $\l \in \left]\frac{3}{10}\l_1,\l_1\right[$, then
problem \eqref{P} has
a positive ground state solution for any $q>0.$
\end{theorem}
The analogy with the problem \eqref{eq:BN} applies also to some
nonexistence results. Indeed, a classical Poho\v{z}aev obstruction holds for
\eqref{P} according to the following result.
\begin{theorem}\label{th2}
Problem \eqref{P} has no nontrivial solution if $\l\le 0$.
\end{theorem}
Actually, Theorem \ref{th2} holds also if the domain is a general smooth and
star-shaped open bounded set.
Moreover, a standard argument allows us also to prove that there
exists no solution to \eqref{P} if $\l\ge\l_1$ (see \cite[Remark 1.1]{BN}).
\\
It remains an open problem what happens if $\l\in ]0,\frac 3{10}
\l_1].$
The paper is so organized: Section \ref{sec:nonex} is devoted to prove the
nonexistence result which does not require any variational argument;
in Section \ref{sec:ex} we introduce our
variational approach and prove the existence of a positive ground state
solution.
\section{Nonexistence result}\label{sec:nonex}
In this section, following \cite{DM}, we adapt the Poho\v{z}aev arguments in \cite{P} to
our situation.
Let $\O \subset {\mathbb{R}^3}$ be a star-shaped domain and $(u,\phi)\in\H\times\H$ be a nontrivial
solution of (\ref{P}). If we multiply the first equation of (\ref{P}) by $x\cdot\n u$ and the second one by $x\cdot\n\phi$ we have that
\begin{align*}
0=&(\Delta u + \l u + q \phi |u|^3 u)(x\cdot\n u)\\
=&\operatorname{div}\left[(\n u)(x\cdot\n u)\right] - |\n u|^2
- x\cdot \n \left(\frac{|\n u|^2}{2}\right) + \frac{\l}{2}\, x\cdot\n (u^2)
+ \frac{q}{5} x\cdot\n\left(\phi |u|^5\right) - \frac{q}{5} (x\cdot\n\phi)|u|^5\\
=&\operatorname{div}\left[(\n u)(x\cdot\n u) - x \frac{|\n u|^2}{2} + \frac{\l}{2} x u^2
+\frac{q}{5} x \phi |u|^5 \right] + \frac{1}{2} |\n u|^2 - \frac{3}{2} \l u^2
- \frac{3}{5} q \phi |u|^5 - \frac{q}{5} (x\cdot\n\phi) |u|^5
\end{align*}
and
\begin{align*}
0=&(\Delta \phi + q |u|^5)(x\cdot\n \phi)\\
=&\operatorname{div}\left[(\n \phi)(x\cdot\n \phi)\right] - |\n \phi|^2
- x\cdot \n \left(\frac{|\n \phi|^2}{2}\right) + q (x\cdot\n\phi) |u|^5\\
=&\operatorname{div}\left[(\n \phi)(x\cdot\n \phi) - x \frac{|\n \phi|^2}{2}
\right] + \frac{1}{2} |\n \phi|^2 + q (x\cdot\n\phi) |u|^5.
\end{align*}
Integrating on $\O$, by boundary conditions, we obtain
\begin{equation}\label{eq:Poho1}
-\frac{1}{2} \|\n u \|_2^2 - \frac{1}{2} \int_{\partial\O} \left| \frac{\partial u}{\partial {\bf n}}\right|^2 x\cdot {\bf n} = - \frac{3}{2} \l \|u\|_2^2 -\frac{3}{5} q \int_{\Omega} \phi |u|^5 -\frac{q}{5} \int_{\Omega} (x\cdot\n\phi) |u|^5
\end{equation}
and
\begin{equation}\label{eq:Poho2}
-\frac{1}{2} \|\n\phi\|_2^2 -\frac{1}{2} \int_{\partial\O} \left| \frac{\partial \phi}{\partial {\bf n}}\right|^2 x\cdot {\bf n}
= q \int_{\Omega} (x \cdot \n \phi) |u|^5.
\end{equation}
Substituting \eqref{eq:Poho2} into \eqref{eq:Poho1} we have
\begin{equation}\label{eq:Pohoc}
-\frac{1}{2} \|\n u \|_2^2 - \frac{1}{2} \int_{\partial\O} \left| \frac{\partial u}{\partial {\bf n}}\right|^2 x\cdot {\bf n} = - \frac{3}{2} \l \|u\|_2^2 -\frac{3}{5} q \int_{\Omega} \phi |u|^5 + \frac{1}{10} \|\n\phi\|_2^2 + \frac{1}{10} \int_{\partial\O} \left| \frac{\partial \phi}{\partial {\bf n}}\right|^2 x\cdot {\bf n} .
\end{equation}
Moreover, multiplying the first equation of (\ref{P}) by $u$ and the second one by $\phi$ we get
\begin{equation}\label{eq:Ne1}
\|\n u \|_2^2= \l \| u \|_2^2 + q \int_{\Omega} \phi |u|^5
\end{equation}
and
\begin{equation}\label{eq:Ne2}
\|\n\phi\|_2^2 = q \int_{\Omega} \phi |u|^5.
\end{equation}
Hence, combining \eqref{eq:Pohoc}, \eqref{eq:Ne1} and \eqref{eq:Ne2}, we have
\[
-\l \| u \|_2^2 + \frac{1}{2} \int_{\partial\O} \left| \frac{\partial u}{\partial {\bf n}}\right|^2 x\cdot {\bf n} + \frac{1}{10} \int_{\partial\O} \left| \frac{\partial \phi}{\partial {\bf n}}\right|^2 x\cdot {\bf n} = 0
\]
Then, if $\l<0$ we get a contradiction.\\
If $\l=0$, then
\[
\int_{\partial\O} \left| \frac{\partial \phi}{\partial {\bf n}}\right|^2 x\cdot {\bf n} = 0
\]
and so $\frac{\partial \phi}{\partial {\bf n}}$ vanishes on $\partial\O$. Integrating the second equation of \eqref{P} over $\O$ then gives $q\|u\|_5^5 = -\int_{\partial\O} \frac{\partial \phi}{\partial {\bf n}} = 0$, so that $u=0$ and, in turn, $\phi=0$, which is a contradiction.
\section{Proof of Theorem \ref{th1}}\label{sec:ex}
Problem \eqref{P} is
variational and the related $C^1$ functional $F:H^1_0(B_R) \times
H^1_0(B_R) \rightarrow \mathbb{R}$ is given by
\[
F(u,\phi)
=\frac{1}{2}\int_{B_R} |\nabla u|^2
-\frac{\l}{2}\int_{B_R} u^2
-\frac{q}{5}\int_{B_R} |u|^5 \phi
+\frac{1}{10}\int_{B_R} |\nabla \phi|^2.
\]
The functional $F$ is strongly indefinite. To avoid this indefiniteness, we apply the following reduction argument.\\
First of all we give the following result.
\begin{lemma}
\label{le:inv} For every $u\in H^1_0(B_R)$ there exists a unique
$\phi_u\in H^1_0(B_R)$ solution of
\[
\left\{
\begin{array}{ll}
-\Delta \phi=q |u|^5
&
\hbox{in } B_R,\\
\phi=0
&
\hbox{on } \partial B_R.
\end{array}
\right.
\]
Moreover, for any $u\in H^1_0(B_R)$, $\phi_u \ge 0$ and the map
\begin{equation*
u \in H^1_0(B_R) \mapsto \phi_u \in H^1_0(B_R)
\end{equation*}
is continuously differentiable. Finally we have
\begin{equation}\label{eq:Ne2u}
\|\n\phi_u\|_2^2=q\int_{B_R} |u|^5 \phi_u
\end{equation}
and
\begin{equation}
\label{eq:essi}
\|\n\phi_u\|_2\leq \frac{q}{S^3} \|\n u\|_2^5
\end{equation}
where
\[
S=\inf_{v\in H^1_0(B_R)\setminus\{0\}}\frac{\|\n v\|_2^2}{\|v\|_6^2}.
\]
\end{lemma}
\begin{proof}
To prove the first part we can proceed as in \cite{BF}.\\ To show \eqref{eq:essi}, we argue in the following way. By applying the H\"older and Sobolev inequalities to \eqref{eq:Ne2u}, we get
\[
\|\n\phi_u\|_2^2 \le q \|\phi_u\|_6 \|u\|_6^5 \le \frac{q}{\sqrt{S}} \|\n\phi_u\|_2 \|u\|_6^5.
\]
Then
\[
\|\n\phi_u\|_2 \le \frac{q}{\sqrt{S}} \|u\|_6^5 \le \frac{q}{S^3} \| \n u \|_2^5.
\]
\end{proof}
So, using Lemma \ref{le:inv}, we can consider on $H^1_0(B_R)$ the $C^1$ one variable functional
\begin{equation*}
I(u):=F(u,\phi_u)=
\frac{1}{2}\int_{B_R} |\nabla u|^2
-\frac{\l}{2}\int_{B_R} u^2
-\frac{1}{10}\int_{B_R} |\n \phi_u|^2
\end{equation*}
By standard variational arguments as those in \cite{BF}, the
following result can be easily proved.
\begin{proposition
Let $(u,\phi)\in H^1_0(B_R)\times
H^1_0(B_R)$, then the following propositions are equivalent:
\begin{enumerate}[label=(\alph*), ref=\alph*]
\item $(u,\phi)$ is a critical point of functional $F$;
\item $u$ is a critical point of functional $I$ and
$\phi=\phi_u$.
\end{enumerate}
\end{proposition}
To find solutions of (\ref{P}), we look for critical points of $I$.
The functional $I$ satisfies the geometrical assumptions of the Mountain Pass Theorem (see \cite{AR}).\\
So, we set
\[
c=\inf_{\g \in \Gamma} \max_{t\in [0,1]} I(\g(t)),
\]
where $\Gamma=\left\{\g\in C([0,1],H^1_0(B_R)) \; \vline \; \g(0)=0, I(\g(1))<0\right\}$.\\
Now we proceed as follows:
\begin{enumerate}[label={\bf Step \arabic*:}, ref={Step \arabic*}]
\item \label{step1} we prove that there exists a nontrivial solution to the problem \eqref{P};
\item \label{step2} we show that such a solution is a ground state.
\end{enumerate}
\begin{remark}
Observe that standard elliptic arguments based on the maximum
principle work, so that we are allowed to assume that $u$ and $\phi_u$, solutions of \eqref{P}, are both positive.
\end{remark}
\noindent{\bf Proof of \ref{step1}:} {\it there exists a solution of \eqref{P}}.
Let $(u_n)_n$ be a Palais-Smale sequence at the mountain pass level $c$.
It is easy to verify that $(u_n)_n$ is bounded so, up to a
subsequence, we can suppose it is weakly convergent.
Suppose by contradiction that $u_n\rightharpoonup 0$ in $H^1_0(B_R)$. Then $u_n\to 0$ in $L^2(B_R)$.\\
Since $I(u_n)\to c$ and $\langle I'(u_n),u_n\rangle\to 0$ we have
\begin{equation}\label{eq:one}
\frac 1 2 \|\n u_n\|_2^2 -\frac 1 {10} \|\n \phi_{n}\|_2^2=c + o_n(1)
\end{equation}
and
\begin{equation}\label{eq:two}
\|\n u_n\|_2^2 - \|\n \phi_{n}\|_2^2 = o_n(1)
\end{equation}
where we have set $\phi_n=\phi_{u_n}$.
Combining \eqref{eq:one} and \eqref{eq:two} we have
\begin{equation*}
\|\n u_n\|_2^2=\frac 5 2 c + o_n(1)
\end{equation*}
and
\[
\|\n \phi_n\|_2^2 = \frac 5 2 c + o_n(1).
\]
Then, since $(u_n,\phi_n)$ satisfies \eqref{eq:essi}, passing to the limit we get $\left(\frac 52 c\right)^{\frac 12}\le \frac q{S^3}\left(\frac 52 c\right)^{\frac 52}$, and hence
\begin{equation}\label{eq:cont}
c \ge \frac 25\sqrt{\frac {S^3} q}.
\end{equation}
Now consider a fixed smooth function
$\varphi=\varphi(r)$ such that $\varphi(0)=1,$ $\varphi'(0)=0$ and $\varphi(R)=0.$ Following \cite[Lemma 1.3]{BN}, we set $r=|x|$ and
\begin{equation*}
u_\varepsilon (r)=\frac {\varphi(r)}{(\varepsilon + r^2)^{\frac 1 2}}.
\end{equation*}
The following estimates can be found in \cite{BN}
\begin{align*}
\|\n u_\varepsilon\|_2^2&=S\frac{K}{\varepsilon^{\frac12}}+\o\int_0^R|\varphi'(r)|^2\,dr + O(\varepsilon^{\frac12}),\\
\|u_\varepsilon\|_6^2&=\frac{K}{\varepsilon^{\frac12}}+O(\varepsilon^{\frac12}),\\
\|u_\varepsilon\|_2^2&=\o\int_0^R\varphi^2(r)\,dr+O(\varepsilon^{\frac12}),
\end{align*}
where $K$ is a positive constant and $\o$ is the area of the unit sphere in ${\mathbb{R}^3}.$
We are going to give an estimate of the value $c$. Observe that,
multiplying the second equation of \eqref{P} by $|u|$ and
integrating, we have that
\begin{equation}\label{eq:ineq}
q\|u\|_6^6 = \int_{B_R} (\n\phi_u|\n |u|)\le \frac 1{2} \|\n\phi_u\|_2^2 + \frac 12\|\n |u|\|_2^2.
\end{equation}
So, if we introduce the new functional $J:H^1(B_R)\to{\rm I\! R}$ defined in the following way
$$J(u):= \frac {3}{5} \int_{B_R} |\n u|^2 - \frac \l2 \int_{B_R} u^2 - \frac q5\int_{B_R}|u|^6,$$
by \eqref{eq:ineq} we have that $I(u)\le J(u),$ for any $u\in
H^1_0(B_R),$ and $c\le \displaystyle\inf_{u\in H^1_0(B_R)
\setminus\{0\}}\sup_{t>0} J(tu).$
Now we compute $\sup_{t>0} J(tu_\varepsilon)=J(t_\varepsilon u_\varepsilon),$ where
$t_\varepsilon$ is the unique positive solution of the equation
$$\frac d {dt} J(tu_\varepsilon)=0.$$
Since
$$\frac d {dt} J(tu_\varepsilon)=\frac {6}5 t\int_{B_R} |\n u_\varepsilon|^2 - \l t \int_{B_R} u_\varepsilon^2 -
\frac 65t^5 q \int_{B_R}|u_\varepsilon|^6,$$
we have that
$$
t_\varepsilon= \frac 1 {\|u_\varepsilon\|_6}\sqrt[4]{\frac{\frac {6}5 \|\n
u_\varepsilon\|^2_2-\l\|u_\varepsilon\|_2^2}{\frac 65 q\|u_\varepsilon\|_6^2}}=\frac 1
{\|u_\varepsilon\|_6}\sqrt[4]{\frac S q+A(\varphi) \varepsilon^{\frac12} +
O(\varepsilon)},
$$
where we have set
$$A(\varphi)=\frac{\o}{q K} \int_0^R\left(|\varphi'(r)|^2-\frac 5
{6}\l\varphi^2(r)\right)\,dr.$$
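The asymptotic expansion of $t_\varepsilon^4$ can be double-checked numerically: once the $O(\sqrt{\varepsilon})$ corrections in the estimates for $\|\n u_\varepsilon\|_2^2$, $\|u_\varepsilon\|_6^2$ and $\|u_\varepsilon\|_2^2$ are dropped, the quotient defining $t_\varepsilon^4$ equals $\frac{S}{q}+A(\varphi)\varepsilon^{1/2}$ exactly. The following sketch (ours, with arbitrary positive sample values, writing $P=\int_0^R|\varphi'(r)|^2\,dr$ and $Q=\int_0^R\varphi^2(r)\,dr$) verifies this identity.

```python
import math

# Arbitrary positive sample values (any choice works for this algebraic check)
S, q, K, omega, lam = 2.0, 3.0, 1.5, 4.0 * math.pi, 0.7
P, Q = 1.2, 0.9          # P = int |phi'|^2 dr,  Q = int phi^2 dr
eps = 1e-4

# Leading terms of the Brezis--Nirenberg estimates, O(sqrt(eps)) dropped
grad_u2 = S * K / math.sqrt(eps) + omega * P   # ||grad u_eps||_2^2
u6_2 = K / math.sqrt(eps)                      # ||u_eps||_6^2
u2_2 = omega * Q                               # ||u_eps||_2^2

t_eps4 = ((6.0 / 5.0) * grad_u2 - lam * u2_2) / ((6.0 / 5.0) * q * u6_2)

A = omega * (P - (5.0 / 6.0) * lam * Q) / (q * K)
assert abs(t_eps4 - (S / q + A * math.sqrt(eps))) < 1e-12
```

In particular, $A(\varphi)<0$ precisely when $P<\frac{5}{6}\l Q$, which for $\varphi(r)=\cos(\frac{\pi r}{2R})$ becomes the condition $\l>\frac{3}{10}\l_1$ used below.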
Then
\begin{align}\label{eq:est}
\sup_{t>0} J(tu_\varepsilon)&=J(t_\varepsilon u_\varepsilon)\nonumber\\
&=\frac {3}{5} t_\varepsilon^2 \int_{B_R} |\n u_\varepsilon|^2 -
\frac \l2 t_\varepsilon^2 \int_{B_R} u_\varepsilon^2 - \frac q5 t^6_\varepsilon\int_{B_R}|u_\varepsilon|^6\nonumber\\
&=\frac 25 q \sqrt{\left(\frac S q+A(\varphi) \varepsilon^{\frac12}+
O(\varepsilon)\right)^3}.
\end{align}
Now, if we take $\varphi(r)=\cos(\frac{\pi r}{2R})$ as in \cite{BN}, we have that
$$\int_0^R|\varphi'(r)|^2\,dr=\frac{\pi^2}{4R^2}\int_0^R\varphi^2(r)\,dr$$
and then, if $\l \in ]\frac 3{10} \l_1,\l_1[$, we deduce that
$A(\varphi)<0$. Taking $\varepsilon$ sufficiently small, from
\eqref{eq:est} we conclude that $c<\frac 25\sqrt{\frac {S^3} q},$
which contradicts \eqref{eq:cont}.
Then we have that $u_n\rightharpoonup u$ with
$u\in H^1_0(B_R) \setminus\{0\}$. We are going to prove that $u$ is a weak
solution of \eqref{P}.\\
As in \cite{ADL} it can be showed that $\phi_n\rightharpoonup\phi_u$
in $H^1_0(B_R).$ Now, set $\varphi$ a test function. Since
$I'(u_n)\to 0,$ we have that
\begin{equation*}
\langle I'(u_n),\varphi\rangle\to 0.
\end{equation*}
On the other hand,
\begin{multline*
\langle I'(u_n),\varphi\rangle= \int_{B_R} (\n u_n|\n\varphi) -\l\int_{B_R} u_n\varphi\\
- q\int_{B_R}\phi_{n}|u_n|^3u_n\varphi\to \int_{B_R} (\n u|\n\varphi)-\l\int_{B_R} u\varphi- q\int_{B_R}\phi_{u}|u|^3u\varphi
\end{multline*}
so we conclude that $(u,\phi_u)$ is a weak solution of \eqref{P}.
\medskip
\noindent{\bf Proof of \ref{step2}:} {\it The solution found is a ground state}.\\
As in \ref{step1}, we consider a Palais-Smale sequence $(u_n)_n$ at level $c$. We have that $(u_n)_n$ weakly converges to a critical point $u$ of $I$.\\
To prove that such a critical point is a ground state we proceed as follows.\\
First of all we prove that
\[
I(u)\le c.
\]
Since $I(u_n)\to c$ and $\langle I'(u_n), u_n\rangle\to 0$, then
\[
I(u_n)= \frac 2 5 \int_{B_R} |\n u_n|^2 - \frac 2 5 \l \int_{B_R} u_n^2 + o_n(1) \to c.
\]
Moreover, since $(u,\phi_u)$ is a solution, we have
\[
\int_{B_R} |\n u|^2 - \l \int_{B_R} u^2 - q \int_{B_R} \phi_{u}|u|^5 = 0.
\]
Hence, by the lower semi-continuity of the $H^1_0$-norm and since $u_n \to u$ in $L^2(B_R)$,
\begin{align*}
I(u)
= & \frac 2 5\left(\int_{B_R} |\n u|^2 - \l \int_{B_R} u^2\right)\\
\le & \frac 2 5 \left(\liminf_n \int_{B_R} |\n u_n|^2 - \l \lim_n \int_{B_R} u_n^2\right)\\
= & \frac 2 5 \liminf_n \left( \int_{B_R} |\n u_n|^2 - \l \int_{B_R} u_n^2 \right)\\
= & c
\end{align*}
Finally, let $v$ be a nontrivial critical point of $I$.
Since the maximum of $I(tv)$ is achieved for $t=1$, then
\[
I(v)=\sup_{t>0} I(tv) \ge c \ge I(u).
\]
\section{Introduction}
Relative Steinberg groups \(\stlin(R, I)\) were defined in the stable linear case by F. Keune and J.-L. Loday in \cite{Keune, Loday}. Namely, this group is just
\[\frac{\Ker\bigl(p_{2*} \colon \stlin(I \rtimes R) \to \stlin(R)\bigr)}{\bigl[\Ker(p_{1*}), \Ker(p_{2*})\bigr]},\]
where \(R\) is a unital associative ring, \(I \leqt R\), \(p_1 \colon I \rtimes R \to R, a \rtimes p \mapsto a + p\) and \(p_2 \colon I \rtimes R \to R, a \rtimes p \mapsto p\) are ring homomorphisms. Such a group is a crossed module over \(\stlin(R)\) generated by \(x_{ij}(a)\) for \(a \in I\) with the ``obvious'' relations that are satisfied by the generators \(t_{ij}(a)\) of the normal subgroup \(\mathrm E(R, I) \leqt \mathrm E(R)\). It is classically known that \(\stlin(R, I)\) is generated by \(z_{ij}(a, p) = \up{x_{ji}(p)}{x_{ij}(a)}\) for \(a \in I\) and \(p \in R\) as an abstract group, the same holds for the relative elementary groups.
Such relative Steinberg groups and their generalizations for unstable linear groups and Chevalley groups are used in, e.g., proving centrality of \(\mathrm K_2\) \cite{CentralityD, CentralityE}, a suitable local-global principle for Steinberg groups \cite{LocalGlobalC, CentralityD, Tulenbaev}, early stability of \(\mathrm K_2\) \cite{Tulenbaev}, and \(\mathbb A^1\)-invariance of \(\mathrm K_2\) \cite{Horrocks, AInvariance}. In \cite[theorem 9]{CentralityE} S. Sinchuk proved that all relations between the generators \(z_\alpha(a, p)\) of \(\stlin(\Phi; R, I)\), where \(\Phi\) is a root system of type \(\mathsf{ADE}\) and \(R\) is commutative, come from various \(\stlin(\Psi; R, I)\) for root subsystems \(\Psi \subseteq \Phi\) of type \(\mathsf A_3\), i.e. \(\stlin(\Phi; R, I)\) is the amalgam of \(\stlin(\Psi; R, I)\) with identifying generators \(z_\alpha(a, p)\).
There exist explicit presentations (in the sense of abstract groups) of relative unstable linear and symplectic Steinberg groups in terms of van der Kallen's generators, i.e. analogues of arbitrary transvections in \(\glin(n, R)\) or \(\symp(2n, R)\), see \cite{RelativeC, Tulenbaev} and \cite[proposition 8]{CentralityE}.
In \cite{RelStLin} we determined the relations between the generators \(z_\alpha(a, p)\) in the following two cases:
\begin{itemize}
\item for relative unstable linear Steinberg groups \(\stlin(n; R, I)\) with \(n \geq 4\),
\item for relative simply laced Steinberg groups \(\stlin(\Phi; R, I)\) with \(\Phi\) of rank \(\geq 3\).
\end{itemize}
It turns out that all the relations between \(z_\alpha(a, p)\) come from \(\stlin(\Psi; R, I)\) for \(\Psi\) of types \(\mathsf A_2\) and \(\mathsf A_1 \times \mathsf A_1\), thus Sinchuk's result may be strengthened a bit.
The relations for the simply laced Steinberg groups are easily obtained from the linear case and Sinchuk's result. In the linear case we actually considered Steinberg groups associated with a generalized matrix ring \(T\) instead of \(\mat(n, R)\), i.e. if \(T\) is a ring with a complete family of \(n\) full idempotents. Such a generality is convenient for applying ``root elimination'', i.e. for replacing the generators of a Steinberg group parametrized by a root system \(\Phi\) by some new generators parametrized by a system of relative roots \(\Phi / \alpha\). Moreover, instead of an ideal \(I \leqt R\) we considered an arbitrary crossed module \(\delta \colon A \to T\) in the sense of associative rings, since this is necessary for, e.g., applying the method of Steinberg pro-groups \cite{CentralityBC, LinK2}.
In this paper we find the relations between the generators \(z_\alpha(a, p)\) for
\begin{itemize}
\item relative odd unitary Steinberg groups \(\stunit(R, \Delta; S, \Theta)\), where \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a crossed module of odd form rings and \((R, \Delta)\) has an orthogonal hyperbolic family of sufficiently large rank in the sense of \cite{CentralityBC} (this is a unitary analogue of generalized matrix rings and crossed modules of associative rings),
\item relative doubly laced Steinberg groups \(\stlin(\Phi; K, \mathfrak a)\) with \(\Phi\) of rank \(\geq 3\), where \(K\) is a unital commutative ring and \(\delta \colon \mathfrak a \to K\) is a crossed module of commutative rings.
\end{itemize}
The odd unitary case already gives a presentation of relative Steinberg groups associated with classical sufficiently isotropic reductive groups by \cite[theorem 4]{ClassicOFA}, so the second case is non-trivial only for the root system of type \(\mathsf F_4\).
Actually, relative elementary subgroups of \(\stlin(\Phi; K)\) for doubly laced \(\Phi\) may be defined not only for ordinary ideals \(\mathfrak a \leqt K\), but also for E. Abe's \cite{Abe} admissible pairs \((\mathfrak a, \mathfrak b)\), where \(\mathfrak a \leqt K\) and \(\mathfrak a_2 \leq \mathfrak b \leq \mathfrak a\) is a subgroup such that \(\mathfrak b k^2 \leq \mathfrak b\) for all \(k \in K\) if \(\Phi\) is of type \(\mathsf C_\ell\) and \(\mathfrak b \leqt K\) if \(\Phi\) is of type \(\mathsf B_\ell\) or \(\mathsf F_4\). Such a pair naturally gives a subgroup \(\mathrm E(\Phi; \mathfrak a, \mathfrak b) \leq \mathrm G^{\mathrm{sc}}(\Phi, K)\) generated by \(x_\alpha(a)\) for short roots \(\alpha\) and \(a \in \mathfrak a\) and by \(x_\beta(b)\) for long roots \(\beta\) and \(b \in \mathfrak b\).
In order to study relative Steinberg groups associated with admissible pairs, we consider new families of Steinberg groups \(\stlin(\Phi; K, L)\), where \(\Phi\) is a doubly laced root system and \((K, L)\) is a \textit{pair of type} \(\mathsf B\), \(\mathsf C\), or \(\mathsf F\) respectively (the precise definition is given in section \ref{pairs-type}). Then admissible pairs are just crossed submodules of \((K, K)\) for a commutative unital ring \(K\). We also find the relations between the generators \(z_\alpha(a, p)\) of
\begin{itemize}
\item relative doubly laced Steinberg groups \(\stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\), where \(\Phi\) is a doubly laced root system of rank \(\geq 3\) and \((\mathfrak a, \mathfrak b) \to (K, L)\) is a crossed module of pairs of types \(\mathsf B\), \(\mathsf C\), or \(\mathsf F\) respectively.
\end{itemize}
All relations between the generators \(z_\alpha(a, p)\) involve only the roots from root subsystems of rank \(2\), i.e. \(\mathsf A_2\), \(\mathsf{BC}_2\), \(\mathsf A_1 \times \mathsf A_1\), \(\mathsf A_1 \times \mathsf{BC}_1\) in the odd unitary case and \(\mathsf A_2\), \(\mathsf B_2\), \(\mathsf A_1 \times \mathsf A_1\) in the doubly laced case.
\section{Relative unitary Steinberg groups}
We use the group-theoretical notation \(\up gh = g h g^{-1}\) and \([g, h] = ghg^{-1}h^{-1}\). If a group \(G\) acts on a set \(X\), then we usually denote the action by \((g, x) \mapsto \up gx\). If \(X\) is itself a group, then \([g, x] = \up gx x^{-1}\) and \([x, g] = x (\up gx)^{-1}\) are the commutators in \(X \rtimes G\). A \textit{group-theoretical crossed module} is a homomorphism \(\delta \colon N \to G\) of groups such that there is a fixed action of \(G\) on \(N\), \(\delta\) is \(G\)-equivariant, and \(\up nn' = \up{\delta(n)}{n'}\) for \(n, n' \in N\).
The group operation in a \(2\)-step nilpotent group \(G\) is usually denoted by \(\dotplus\). If \(X_1\), \ldots, \(X_n\) are subsets of \(G\) containing \(\dot 0\) and \(\prod_i X_i \to G, (x_1, \ldots, x_n) \mapsto x_1 \dotplus \ldots \dotplus x_n\) is a bijection, then we write \(G = \bigoplus_i^\cdot X_i\).
Let \(A\) be an associative unital ring and \(\lambda \in A^*\). A map \(\inv{(-)} \colon A \to A\) is called a \(\lambda\)-\textit{involution} if it is an anti-automorphism, \(\inv{\inv x} = \lambda x \lambda^{-1}\), and \(\inv \lambda = \lambda^{-1}\). For a fixed \(\lambda\)-involution a map \(B \colon M \times M \to A\) for a module \(M_A\) is called a \textit{hermitian form} if it is biadditive, \(B(m, m'a) = B(m, m')a\), and \(B(m', m) = \inv{B(m, m')} \lambda\).
Now let \(A\) be an associative unital ring with a \(\lambda\)-involution and \(M_A\) be a module with a hermitian form \(B\). The \textit{Heisenberg group} of \(B\) is the set \(\Heis(B) = M \times A\) with the group operation \((m, x) \dotplus (m', x') = (m + m', x - B(m, m') + x')\). The multiplicative monoid \(A^\bullet\) acts on \(\Heis(B)\) from the right by \((m, x) \cdot y = (my, \inv y x y)\). An \(A^\bullet\)-invariant subgroup \(\mathcal L \leq \Heis(B)\) is called an \textit{odd form parameter} if
\[\{(0, x - \inv x \lambda) \mid x \in A\} \leq \mathcal L \leq \{(m, x) \mid x + B(m, m) + \inv x \lambda = 0\}.\]
The corresponding \textit{quadratic form} is the map \(q \colon M \to \Heis(B) / \mathcal L, m \mapsto (m, 0) \dotplus \mathcal L\). Finally, the unitary group is
\[\unit(M, B, \mathcal L) = \{g \in \Aut(M_A) \mid B(gm, gm') = B(m, m'),\, q(gm) = q(m) \text{ for all } m, m' \in M\}.\]
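For example (a routine special case, included for orientation): let \(A\) be a commutative ring with \(2 \in A^*\), \(\lambda = 1\), and the identity involution. Then a hermitian form is just a symmetric bilinear form, the lower bound for odd form parameters is \(\{(0, 0)\}\), and the largest odd form parameter is
\[\mathcal L_{\max} = \{(m, x) \mid 2x + B(m, m) = 0\} = \bigl\{\bigl(m, -\tfrac12 B(m, m)\bigr) \mid m \in M\bigr\}.\]
For \(\mathcal L = \mathcal L_{\max}\) the condition \(q(gm) = q(m)\) already follows from \(B(gm, gm') = B(m, m')\), so \(\unit(M, B, \mathcal L_{\max}) = \mathrm O(M, B)\) is the ordinary orthogonal group.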
Recall definitions from \cite{CentralityBC, ClassicOFA}. An \textit{odd form ring} is a pair \((R, \Delta)\), where \(R\) is an associative non-unital ring, \(\Delta\) is a group with the group operation \(\dotplus\), the multiplicative semigroup \(R^\bullet\) acts on \(\Delta\) from the right by \((u, a) \mapsto u \cdot a\), and there are maps \(\phi \colon R \to \Delta\), \(\pi \colon \Delta \to R\), \(\rho \colon \Delta \to R\) such that
\begin{itemize}
\item \(\phi\) is a group homomorphism, \(\phi(\inv aba) = \phi(b) \cdot a\);
\item \(\pi\) is a group homomorphism, \(\pi(u \cdot a) = \pi(u) a\);
\item \([u, v] = \phi(-\inv{\pi(u)} \pi(v))\);
\item \(\rho(u \dotplus v) = \rho(u) - \inv{\pi(u)} \pi(v) + \rho(v)\), \(\inv{\rho(u)} + \inv{\pi(u)} \pi(u) + \rho(u) = 0\), \(\rho(u \cdot a) = \inv a \rho(u) a\);
\item \(\pi(\phi(a)) = 0\), \(\rho(\phi(a)) = a - \inv a\);
\item \(\phi(a + \inv a) = \phi(\inv aa) = 0\) (in \cite{CentralityBC, ClassicOFA} we used the stronger axiom \(\phi(a) = \dot 0\) for all \(a = \inv a\));
\item \(u \cdot (a + b) = u \cdot a \dotplus \phi(\inv{\,b\,} \rho(u) a) \dotplus u \cdot b\).
\end{itemize}
Let \((R, \Delta)\) be an odd form ring. Its \textit{unitary group} is the set
\[\unit(R, \Delta) = \{g \in \Delta \mid \pi(g) = \inv{\rho(g)}, \pi(g) \inv{\pi(g)} = \inv{\pi(g)} \pi(g)\}\]
with the identity element \(1_{\unit} = \dot 0\), the group operation \(gh = g \cdot \pi(h) \dotplus h \dotplus g\), and the inverse \(g^{-1} = \dotminus g \cdot \inv{\pi(g)} \dotminus g\). The unitary groups of odd form rings in \cite{CentralityBC, ClassicOFA} are precisely the graphs of \(\pi \colon \unit(R, \Delta) \to R\) as subsets of \(R \times \Delta\).
An odd form ring \((R, \Delta)\) is called \textit{special} if the homomorphism \((\pi, \rho) \colon \Delta \to \Heis(R)\) is injective, where \(\Heis(R) = R \times R\) with the operation \((x, y) \dotplus (z, w) = (x + z, y - \inv xz + w)\). It is called \textit{unital} if \(R\) is unital and \(u \cdot 1 = u\) for \(u \in \Delta\). In other words, \((R, \Delta)\) is a special unital odd form ring if and only if \(R\) is a unital associative ring with \(1\)-involution and \(\Delta\) is an odd form parameter with respect to the \(R\)-module \(R\) and the hermitian form \(R \times R \to R, (x, y) \mapsto \inv xy\), where we identify \(\Delta\) with its image in \(\Heis(R)\).
If \(M_A\) is a module over a unital ring with a \(\lambda\)-involution, \(B\) is a hermitian form on \(M\), and \(\mathcal L\) is an odd form parameter, then there is a special unital odd form ring \((R, \Delta)\) such that \(\unit(M, B, \mathcal L) \cong \unit(R, \Delta)\), see \cite[section 2]{CentralityBC} or \cite[section 3]{ClassicOFA} for details.
We say that an odd form ring \((R, \Delta)\) \textit{acts} on an odd form ring \((S, \Theta)\) if there are multiplication operations \(R \times S \to S\), \(S \times R \to S\), \(\Theta \times R \to \Theta\), \(\Delta \times S \to \Delta\) such that \((S \rtimes R, \Theta \rtimes \Delta)\) is a well-defined odd form ring. There is an equivalent definition in terms of explicit axioms on the operations, see \cite[section 2]{CentralityBC}. For example, each odd form ring naturally acts on itself. Actions of \((R, \Delta)\) on \((S, \Theta)\) are in one-to-one correspondence with isomorphism classes of right split short exact sequences
\[(S, \Theta) \to (S \rtimes R, \Theta \rtimes \Delta) \leftrightarrows (R, \Delta),\]
since the category of odd form rings and their homomorphisms is algebraically coherent semi-abelian in the sense of \cite{AlgCoh}.
Let us call \(\delta \colon (S, \Theta) \to (R, \Delta)\) a \textit{precrossed module} of odd form rings if \((R, \Delta)\) acts on \((S, \Theta)\) and \(\delta\) is a homomorphism preserving the action of \((R, \Delta)\). Such objects are in one-to-one correspondence with \textit{reflexive graphs} in the category of odd form rings, i.e. tuples \(((R, \Delta), (T, \Xi), p_1, p_2, d)\), where \(p_1, p_2 \colon (T, \Xi) \to (R, \Delta)\) are homomorphisms with a common section \(d\). Namely, \((S, \Theta)\) corresponds to the kernel of \(p_2\) and \(\delta\) is induced by \(p_1\).
A precrossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a \textit{crossed module} of odd form rings if \textit{Peiffer identities}
\begin{itemize}
\item \(ab = \delta(a) b = a \delta(b)\) for \(a, b \in S\);
\item \(u \cdot a = \delta(u) \cdot a = u \cdot \delta(a)\) for \(u \in \Theta\), \(a \in S\)
\end{itemize}
hold. Equivalently, the corresponding reflexive graph is an \textit{internal category} (and even an \textit{internal groupoid}, necessarily in the unique way), i.e. there is a homomorphism
\[m \colon \lim\bigl( (T, \Xi) \xrightarrow{p_2} (R, \Delta) \xleftarrow{p_1} (T, \Xi) \bigr) \to (T, \Xi)\]
such that the homomorphisms from any \((I, \Gamma)\) to \((R, \Delta)\) and \((T, \Xi)\) form a set-theoretic category. See \cite{XMod} for details.
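A basic example of a crossed module (a routine observation): if \((S, \Theta)\) is the kernel of a homomorphism of odd form rings from \((R, \Delta)\), then \((R, \Delta)\) acts on \((S, \Theta)\) by the operations of \((R, \Delta)\) itself and the inclusion \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a crossed module: the Peiffer identities hold tautologically, since \(\delta(a)\, b\), \(a\, \delta(b)\), and \(ab\) are the same products computed in \(R\), and similarly for \(u \cdot a\).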
The unitary group \(\unit(R, \Delta)\) acts on \((R, \Delta)\) by automorphisms via
\[\up g a = \alpha(g)\, a\, \inv{\alpha(g)} \text{ for } a \in R, \enskip \up g u = (g \cdot \pi(u) \dotplus u) \cdot \inv{\alpha(g)} \text{ for } u \in \Delta,\]
where \(\alpha(g) = \pi(g) + 1 \in R \rtimes \mathbb Z\). The second formula also gives the conjugacy action of \(\unit(R, \Delta)\) on itself.
If \((R, \Delta)\) acts on \((S, \Theta)\), then \(\unit(R, \Delta)\) acts on \(\unit(S, \Theta)\) in the sense of groups and \(\unit(S \rtimes R, \Theta \rtimes \Delta) = \unit(S, \Theta) \rtimes \unit(R, \Delta)\). For any crossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) the induced homomorphism \(\delta \colon \unit(S, \Theta) \to \unit(R, \Delta)\) is a crossed module of groups.
Recall from \cite{CentralityBC} that a \textit{hyperbolic pair} in an odd form ring \((R, \Delta)\) is a tuple \(\eta = (e_-, e_+, q_-, q_+)\), where \(e_-\) and \(e_+\) are orthogonal idempotents in \(R\), \(\inv{e_-} = e_+\), \(q_\pm\) are elements of \(\Delta\), \(\pi(q_\pm) = e_\pm\), \(\rho(q_\pm) = 0\), and \(q_\pm = q_\pm \cdot e_\pm\). A sequence \(H = (\eta_1, \ldots, \eta_\ell)\) is called an \textit{orthogonal hyperbolic family} of rank \(\ell\), if \(\eta_i = (e_{-i}, e_i, q_{-i}, q_i)\) are hyperbolic pairs, the idempotents \(e_{|i|} = e_i + e_{-i}\) are orthogonal, and \(e_{|i|} \in R e_{|j|} R\) for all \(1 \leq i, j \leq \ell\). We also say that the orthogonal hyperbolic family \(\eta_1, \ldots, \eta_\ell\) is \textit{strong} if \(e_i \in R e_j R\) for all \(1 \leq |i|, |j| \leq \ell\).
From now on and until the end of section \ref{sec-pres}, we fix a crossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) of odd form rings and an orthogonal hyperbolic family \(H = (\eta_1, \ldots, \eta_\ell)\) in \((R, \Delta)\). We also use the notation
\[S_{ij} = e_i S e_j, \quad
\Theta^0_j = \{u \in \Theta \cdot e_j \mid e_k \pi(u) = 0 \text{ for all } 1 \leq |k| \leq \ell\}\]
for \(1 \leq |i|, |j| \leq \ell\), and similarly for the corresponding subgroups of \(R\) and \(\Delta\). Clearly,
\[S_{ij} R_{jk} + S_{i, -j} R_{-j, k} = S_{ik}, \quad \Theta^0_i \cdot R_{ij} \dotplus \Theta^0_{-i} \cdot R_{-i, j} \dotplus \phi(S_{-j, j}) = \Theta^0_j\]
for all \(i, j, k \neq 0\). If \(H\) is strong, then actually
\[S_{ij} R_{jk} = S_{ik}, \quad \Theta^0_i \cdot R_{ij} \dotplus \phi(S_{-j, j}) = \Theta^0_j.\]
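For instance, the equality \(S_{ij} R_{jk} + S_{i, -j} R_{-j, k} = S_{ik}\) may be checked as follows (a routine verification): write \(e_{|k|} = \sum_t a_t e_{|j|} b_t\) with \(a_t, b_t \in R\), which is possible since \(e_{|k|} \in R e_{|j|} R\). Then for \(s \in S_{ik}\)
\[s = s e_{|k|} e_k = \sum_t (s a_t e_j)(e_j b_t e_k) + \sum_t (s a_t e_{-j})(e_{-j} b_t e_k) \in S_{ij} R_{jk} + S_{i, -j} R_{-j, k},\]
and the reverse inclusion is clear.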
An \textit{unrelativized Steinberg group} \(\stunit(S, \Theta)\) is the abstract group with the generators \(X_{ij}(a)\), \(X_j(u)\) for \(1 \leq |i|, |j| \leq \ell\), \(i \neq \pm j\), \(a \in S_{ij}\), \(u \in \Theta^0_j\), and the relations
\begin{align*}
X_{ij}(a) &= X_{-j, -i}(-\inv a); \\
X_{ij}(a)\, X_{ij}(b) &= X_{ij}(a + b); \\
X_j(u)\, X_j(v) &= X_j(u \dotplus v); \\
[X_{ij}(a), X_{kl}(b)] &= 1 \text{ for } j \neq k \neq -i \neq -l \neq j; \\
[X_{ij}(a), X_l(u)] &= 1 \text{ for } i \neq l \neq -j; \\
[X_{-i, j}(a), X_{ji}(b)] &= X_i(\phi(ab)); \\
[X_i(u), X_i(v)] &= X_i(\phi(-\inv{\pi(u)} \pi(v))); \\
[X_i(u), X_j(v)] &= X_{-i, j}(-\inv{\pi(u)} \pi(v)) \text{ for } i \neq \pm j; \\
[X_{ij}(a), X_{jk}(b)] &= X_{ik}(ab) \text{ for } i \neq \pm k; \\
[X_i(u), X_{ij}(a)] &= X_{-i, j}(\rho(u) a)\, X_j(\dotminus u \cdot (-a)).
\end{align*}
Of course, the group \(\stunit(S, \Theta)\) is functorial on \((S, \Theta)\). In particular, the homomorphism
\[\delta \colon \stunit(S, \Theta) \to \stunit(R, \Delta), X_{ij}(a) \mapsto X_{ij}(\delta(a)), X_j(u) \mapsto X_j(\delta(u))\]
is well-defined. There is also a canonical homomorphism
\begin{align*}
\stmap \colon \stunit(S, \Theta) &\to \unit(S, \Theta), \\
X_{ij}(a) &\mapsto T_{ij}(a) = q_i \cdot a \dotminus q_{-j} \cdot \inv a \dotminus \phi(a), \\
X_j(u) &\mapsto T_j(u) = u \dotminus \phi(\rho(u) + \pi(u)) \dotplus q_{-j} \cdot (\rho(u) - \inv{\pi(u)}).
\end{align*}
Let \(p_{i*} \colon \stunit(S \rtimes R, \Theta \rtimes \Delta) \to \stunit(R, \Delta)\) be the induced homomorphisms. The \textit{relative Steinberg group} is
\[\stunit(R, \Delta; S, \Theta) = \Ker(p_{2*}) / [\Ker(p_{1*}), \Ker(p_{2*})].\]
It is easy to see that it is a crossed module over \(\stunit(R, \Delta)\). The \textit{diagonal group} is
\[\diag(R, \Delta) = \{g \in \unit(R, \Delta) \mid g \cdot e_i \inv{\pi(g)} \dotplus q_i \cdot \inv{\pi(g)} \dotplus g \cdot e_i = \dot 0 \text{ for } 1 \leq |i| \leq \ell\},\]
it acts on \(\stunit(S, \Theta)\) by
\[\up g{T_{ij}(a)} = T_{ij}(\up ga), \quad \up g{T_j(u)} = T_j(\up gu).\]
Hence it also acts on the commutative diagram of groups
\[\xymatrix@R=30pt@C=90pt@!0{
\stunit(S, \Theta) \ar[r] \ar[dr]_{\delta} & \stunit(R, \Delta; S, \Theta) \ar[r]^{\stmap} \ar[d]_{\delta} & \unit(S, \Theta) \ar[r] \ar[d]_{\delta} & \mathrm{Aut}(S, \Theta) \\
& \stunit(R, \Delta) \ar[r]^{\stmap} & \unit(R, \Delta) \ar[ur] \ar[r] & \mathrm{Aut}(R, \Delta).
}\]
\section{Root systems of type \(\mathsf{BC}\)}
Let
\[\Phi = \{\pm \mathrm e_i \pm \mathrm e_j \mid 1 \leq i < j \leq \ell\} \cup \{\pm \mathrm e_i \mid 1 \leq i \leq \ell\} \cup \{\pm 2 \mathrm e_i \mid 1 \leq i \leq \ell\} \subseteq \mathbb R^\ell\]
be a \textit{root system} of type \(\mathsf{BC}_\ell\). For simplicity let also \(\mathrm e_{-i} = -\mathrm e_i\) for \(1 \leq i \leq \ell\). The \textit{roots} of \(\Phi\) are in one-to-one correspondence with the \textit{root subgroups} of \(\stunit(S, \Theta)\) as follows:
\begin{align*}
X_{\mathrm e_j - \mathrm e_i}(a) &= X_{ij}(a) \text{ for } a \in S_{ij}, i + j > 0,\\
X_{\mathrm e_i}(u) &= X_i(u) \text{ for } u \in \Theta^0_i,\\
X_{2 \mathrm e_i}(u) &= X_i(u) \text{ for } u \in \phi(S_{-i, i}).
\end{align*}
The image of \(X_\alpha\) is denoted by \(X_\alpha(S, \Theta)\). The \textit{Chevalley commutator formulas} from the definition of \(\stunit(S, \Theta)\) may be written as
\[[X_\alpha(\mu), X_\beta(\nu)] = \prod_{\substack{i \alpha + j \beta \in \Phi\\ i, j > 0}} X_{i\alpha + j\beta}(f_{\alpha \beta i j}(\mu, \nu))\]
for all non-antiparallel \(\alpha, \beta \in \Phi\) and some universal expressions \(f_{\alpha \beta i j}\). It is also useful to set \(f_{\alpha \beta 0 1}(\mu, \nu) = \nu\) and \(f_{\alpha \beta 0 2}(\mu, \nu) = \dot 0\) if \(\beta\) is \textit{ultrashort} (i.e. of length \(1\)), the root \(\beta\) is considered as the largest with respect to a linear order in the product.
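To illustrate the indexing (a direct specialization of the relations above): for \(\alpha = \mathrm e_j - \mathrm e_i\) and \(\beta = \mathrm e_k - \mathrm e_j\) with \(i \neq \pm k\), the only root of the form \(i' \alpha + j' \beta\) with \(i', j' > 0\) is \(\alpha + \beta = \mathrm e_k - \mathrm e_i\), and the relation \([X_{ij}(a), X_{jk}(b)] = X_{ik}(ab)\) says that \(f_{\alpha \beta 1 1}(a, b) = ab\). For \(\alpha = \mathrm e_i\) and \(\beta = \mathrm e_j - \mathrm e_i\) the roots \(\alpha + \beta = \mathrm e_j\) and \(2\alpha + \beta = \mathrm e_i + \mathrm e_j\) occur, and the relation \([X_i(u), X_{ij}(a)] = X_{-i, j}(\rho(u) a)\, X_j(\dotminus u \cdot (-a))\) says that \(f_{\alpha \beta 2 1}(u, a) = \rho(u) a\) (note that \(X_{-i, j}\) is the root subgroup of \(\mathrm e_j - \mathrm e_{-i} = \mathrm e_i + \mathrm e_j\)) and \(f_{\alpha \beta 1 1}(u, a) = \dotminus u \cdot (-a)\).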
It is easy to see that \(\Ker(p_{2*}) \leq \stunit(S \rtimes R, \Theta \rtimes \Delta)\) is the group with the action of \(\stunit(R, \Delta)\) generated by \(\stunit(S, \Theta)\) with the additional relations
\[\up{X_\alpha(\mu)}{X_\beta(\nu)} = \prod_{\substack{i \alpha + j \beta \in \Phi\\ i \geq 0, j > 0}} X_{i\alpha + j\beta}(f_{\alpha \beta i j}(\mu, \nu))\]
for non-antiparallel \(\alpha, \beta \in \Phi\), \(\mu \in R \cup \Delta\), \(\nu \in S \cup \Theta\). Hence the relative Steinberg group \(\stunit(R, \Delta; S, \Theta)\) is the crossed module over \(\stunit(R, \Delta)\) generated by \(\delta \colon \stunit(S, \Theta) \to \stunit(R, \Delta)\) with the same additional relations.
The \textit{Weyl group} \(\mathrm W(\mathsf{BC}_\ell) = (\mathbb Z / 2 \mathbb Z)^\ell \rtimes \mathrm S_\ell\) acts on the orthogonal hyperbolic family \(\eta_1, \ldots, \eta_\ell\) by permutations and sign changes (i.e. \((e_{-i}, e_i, q_{-i}, q_i) \mapsto (e_i, e_{-i}, q_i, q_{-i})\)), so the correspondence between roots and root subgroups is \(\mathrm W(\mathsf{BC}_\ell)\)-equivariant. Also, the hyperbolic pairs from the orthogonal hyperbolic family and the opposite ones are in one-to-one correspondence with the set of ultrashort roots, \(\eta_i\) corresponds to \(\mathrm e_i\), and \(\eta_{-i} = (e_i, e_{-i}, q_i, q_{-i})\) corresponds to \(\mathrm e_{-i}\) for \(1 \leq i \leq \ell\).
Recall that a subset \(\Sigma \subseteq \Phi\) is called \textit{closed} if \(\alpha, \beta \in \Sigma\) and \(\alpha + \beta \in \Phi\) imply \(\alpha + \beta \in \Sigma\). We say that a closed subset \(\Sigma \subseteq \Phi\) is \textit{saturated}, if \(\alpha \in \Sigma\) together with \(\frac 12 \alpha \in \Phi\) imply \(\frac 12 \alpha \in \Sigma\). If \(X \subseteq \Phi\), then \(\langle X \rangle\) is the smallest saturated closed subset of \(\Phi\) containing \(X\), \(\mathbb R X\) is the linear span of \(X\), and \(\mathbb R_+ X\) is the smallest convex cone containing \(X\). A saturated root subsystem \(\Psi \subseteq \Phi\) is a saturated closed subset such that \(\Psi = -\Psi\).
A closed subset \(\Sigma \subseteq \Phi\) is called \textit{special} if \(\Sigma \cap -\Sigma = \varnothing\). It is well-known that a closed subset of \(\Phi\) is special if and only if it is a subset of some system of positive roots. Hence the smallest saturated closed subset containing a special closed subset is also special. A root \(\alpha\) in a saturated special closed set \(\Sigma\) is called \textit{extreme} if it is indecomposable into a sum of two roots of \(\Sigma\) and, in the case \(\alpha \in \Phi_{\mathrm{us}}\), the root \(2\alpha\) is not a sum of two distinct roots of \(\Sigma\). Every non-empty saturated special closed set contains an extreme root, and if \(\alpha \in \Sigma\) is extreme, then \(\Sigma \setminus \langle \alpha \rangle\) is also a saturated special closed set. Notice that if \(\Sigma\) is a saturated special closed subset and \(u\) is an extreme ray of \(\mathbb R_+ \Sigma\), then \(u\) contains an extreme root of \(\Sigma\).
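For example, let \(\ell = 2\) and let \(\Sigma = \{\mathrm e_1 - \mathrm e_2, \mathrm e_2, 2\mathrm e_2, \mathrm e_1, 2\mathrm e_1, \mathrm e_1 + \mathrm e_2\}\) be the standard system of positive roots of \(\mathsf{BC}_2\); it is saturated, closed, and special. The decompositions \(\mathrm e_1 = (\mathrm e_1 - \mathrm e_2) + \mathrm e_2\), \(2\mathrm e_1 = (\mathrm e_1 - \mathrm e_2) + (\mathrm e_1 + \mathrm e_2)\), and \(\mathrm e_1 + \mathrm e_2 = (\mathrm e_1 - \mathrm e_2) + 2\mathrm e_2\) show that these three roots are not extreme, while \(\mathrm e_1 - \mathrm e_2\) and \(\mathrm e_2\) are extreme (\(2\mathrm e_2\) is not a sum of two distinct roots of \(\Sigma\)). Removing the extreme root \(\mathrm e_2\) gives \(\Sigma \setminus \langle \mathrm e_2 \rangle = \Sigma \setminus \{\mathrm e_2, 2\mathrm e_2\}\), which is again a saturated special closed set.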
If \(\Sigma \subseteq \Phi\) is a special closed subset, then the map \(\prod_{\alpha \in \Sigma \setminus 2\Sigma} X_\alpha(S, \Theta) \to \stunit(S, \Theta)\) is injective for any linear order on \(\Sigma \setminus 2\Sigma\) and its image \(\stunit(S, \Theta; \Sigma)\) is a subgroup of \(\stunit(S, \Theta)\) independent of the order. Moreover, the homomorphism \(\stunit(S, \Theta; \Sigma) \to \unit(S, \Theta)\) is injective. This follows from the results of \cite[section 4]{CentralityBC}.
Let \(\Psi \subseteq \Phi\) be a saturated root subsystem. Consider the following binary relation on \(\Phi_{\mathrm{us}}\): \(\mathrm e_i \sim_\Psi \mathrm e_j\) if \(\mathrm e_i - \mathrm e_j \in \Psi \cup \{0\}\) and \(\mathrm e_j \notin \Psi\). Actually, this is a partial equivalence relation (i.e. symmetric and transitive), \(\mathrm e_i \sim_\Psi \mathrm e_j\) if and only if \(\mathrm e_{-i} \sim_\Psi \mathrm e_{-j}\), and \(\mathrm e_i \not\sim_\Psi \mathrm e_{-i}\). Conversely, every partial equivalence relation on \(\Phi_{\mathrm{us}}\) with these properties arises from a unique saturated root subsystem.
The image of \(\Phi \setminus \Psi\) in \(\mathbb R^\ell / \mathbb R \Psi\) is denoted by \(\Phi / \Psi\), in the case \(\Psi = \langle \alpha \rangle\) we write just \(\Phi / \alpha\). We associate with \(\Psi\) a new orthogonal hyperbolic family \(H / \Psi\) as follows. If \(E \subseteq \Phi_{\mathrm{us}}\) is an equivalence class with respect to \(\sim_\Psi\), then \(\eta_E\) is the sum of all hyperbolic pairs corresponding to the elements of \(E\), where a sum of two hyperbolic pairs is given by
\[(e_-, e_+, q_-, q_+) \oplus (e'_-, e'_+, q'_-, q'_+) = (e_- + e'_-, e_+ + e'_+, q_- + q'_-, q_+ + q'_+)\]
if \((e_- + e_+) (e'_- + e'_+) = 0\). The family \(H / \Psi\) consists of all \(\eta_E\), where we take only one equivalence class \(E\) from each pair of opposite equivalence classes (so \(H / \Psi\) does not contain opposite hyperbolic pairs). The Steinberg groups constructed by \(H / \Psi\) are denoted by \(\stunit(R, \Delta; \Phi / \Psi)\), \(\stunit(S, \Theta; \Phi / \Psi)\), and \(\stunit(R, \Delta; S, \Theta; \Phi / \Psi)\). In the case \(\Psi = \varnothing\) we obtain the original Steinberg groups, with \(\Phi / \varnothing = \Phi\) omitted from the notation. Now it is easy to see that \(\Phi / \Psi\) is a root system of type \(\mathsf{BC}_{\ell - \dim(\mathbb R \Psi)}\), and it parametrizes the root subgroups of the corresponding Steinberg groups. Note that \(H / \Psi\) is well-defined only up to the action of \(\mathrm W(\Phi / \Psi)\).
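For example (with the standing notation): let \(\ell = 3\) and \(\Psi = \langle \mathrm e_2 - \mathrm e_1 \rangle = \{\pm(\mathrm e_2 - \mathrm e_1)\}\). The equivalence classes of \(\sim_\Psi\) are \(\{\mathrm e_1, \mathrm e_2\}\), \(\{\mathrm e_{-1}, \mathrm e_{-2}\}\), \(\{\mathrm e_3\}\), and \(\{\mathrm e_{-3}\}\), so \(H / \Psi\) consists of the hyperbolic pairs
\[\eta_{\{\mathrm e_1, \mathrm e_2\}} = \eta_1 \oplus \eta_2 = (e_{-1} + e_{-2},\, e_1 + e_2,\, q_{-1} \dotplus q_{-2},\, q_1 \dotplus q_2)\]
and \(\eta_3\), and \(\Phi / \Psi\) is a root system of type \(\mathsf{BC}_2\).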
Let us denote the map \(\Phi \setminus \Psi \to \Phi / \Psi\) by \(\pi_\Psi\). The preimage of a special closed subset of \(\Phi / \Psi\) is a special closed subset of \(\Phi\). There is a canonical group homomorphism \(F_\Psi \colon \stunit(S, \Theta; \Phi / \Psi) \to \stunit(S, \Theta; \Phi)\), it maps every root subgroup \(X_\alpha(S, \Theta) \leq \stunit(S, \Theta; \Phi / \Psi)\) to \(\stunit(S, \Theta; \pi_\Psi^{-1}(\{\alpha, 2\alpha\} \cap \Phi / \Psi)) \leq \stunit(S, \Theta; \Phi)\) in such a way that
\[\stmap \circ F_\Psi = \stmap \colon \stunit(S, \Theta; \Phi / \Psi) \to \unit(S, \Theta).\]
Of course, \(\{\alpha, 2\alpha\} \cap \Phi / \Psi = \{\alpha, 2 \alpha\}\) if \(\alpha\) is ultrashort and it equals \(\{\alpha\}\) otherwise.
There are similarly defined natural homomorphisms \(F_\Psi \colon \stunit(R, \Delta; \Phi / \Psi) \to \stunit(R, \Delta; \Phi)\) and \(F_\Psi \colon \stunit(R, \Delta; S, \Theta; \Phi / \Psi) \to \stunit(R, \Delta; S, \Theta; \Phi)\). By \cite[propositions 1 and 2]{CentralityBC}, \(F_\Psi \colon \stunit(R, \Delta; \Phi / \alpha) \to \stunit(R, \Delta; \Phi)\) is an isomorphism for every root \(\alpha\) if \(\ell \geq 4\), or if \(\ell \geq 3\) and the orthogonal hyperbolic family is strong. The diagonal group \(\diag(R, \Delta; \Phi / \Psi)\) constructed by \(H / \Psi\) contains the root elements \(T_\alpha(\mu)\) for all \(\alpha \in \Psi\) and
\[F_\Psi\bigl(\up{T_\alpha(\mu)}{g}\bigr) = \up{X_\alpha(\mu)}{F_\Psi(g)} \in \stunit(R, \Delta; S, \Theta; \Phi)\]
for \(g \in \stunit(R, \Delta; S, \Theta; \Phi / \Psi)\), \(\alpha \in \Psi\).
Note that there is a one-to-one correspondence between the saturated root subsystems of \(\Phi\) containing a saturated root subsystem \(\Psi\) and the saturated root subsystems of \(\Phi / \Psi\). If \(\Psi \subseteq \Psi' \subseteq \Phi\) are two saturated root subsystems, then
\[F_{\Psi} \circ F_{\Psi' / \Psi} = F_{\Psi'} \colon \stunit(S, \Theta; \Phi / \Psi') \to \stunit(S, \Theta; \Phi).\]
Let \(e_{i \oplus j} = e_i + e_j\), \(q_{i \oplus j} = q_i \dotplus q_j\), \(e_{\ominus i} = e_{-i} + e_0 + e_i\). There are new root homomorphisms
\begin{align*}
X_{i, \pm (l \oplus m)} \colon S_{i, \pm (l \oplus m)} = S_{i, \pm l} \oplus S_{i, \pm m} &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)) \text{ for } i \notin \{0, \pm l, \pm m\}; \\
X_{\pm (l \oplus m), j} \colon S_{\pm (l \oplus m), j} = S_{\pm l, j} \oplus S_{\pm m, j} &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)) \text{ for } j \notin \{0, \pm l, \pm m\}; \\
X_{\pm(l \oplus m)} = X^0_{\pm(l \oplus m)} \colon \Delta^0_{\pm(l \oplus m)} = \Theta^0_{\pm l} \dotoplus \phi(S_{\mp l, \pm m}) \dotoplus \Theta^0_{\pm m} &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)); \\
X^{\ominus m}_j \colon \Theta^{\ominus m}_j = q_{-m} \cdot S_{-m, j} \dotoplus \Theta^0_j \dotoplus q_m \cdot S_{mj} &\to \stunit(S, \Theta; \Phi / \mathrm e_m) \text{ for } j \notin \{0, \pm m\}.
\end{align*}
The remaining root morphisms of \(\stunit(S, \Theta; \Phi / \mathrm e_m)\) and \(\stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l))\) are denoted by the usual \(X_{ij}\) and \(X_j = X^0_j\).
\iffalse
\begin{lemma}
Let \(\Sigma \subseteq \Phi\) be a closed subset and \(C\) be the convex cone spanned by \(\Sigma\). Then every root in \(C\) is a positive linear combination of at most two roots from \(\Sigma\). If \(\Sigma\) is saturated, then \(\Sigma = C \cap \Phi\).
\end{lemma}
\begin{proof}
To prove the first claim, take \(\alpha \in \Phi \cap C\) and choose a decomposition \(\alpha = \sum_{i = 1}^n x_i \beta_i\) for \(\beta_i \in \Sigma\) and \(x_i > 0\) such that \(n\) is the smallest possible. Without loss of generality, \(\Sigma\) is spanned by \(\beta_i\) as a closed subset, \(\beta_i\) are linearly independent, and \(n \geq 3\). Then the image of \(\Sigma \setminus \mathbb R \alpha\) in \(\Phi / \alpha\) is not special, hence there are two roots \(\gamma_1, \gamma_2 \in \Sigma\) such that their images in \(\Phi / \alpha\) are antiparallel. Since \(\Sigma\) itself is special, it follows that \(\alpha\) is a positive linear combination of \(\gamma_1, \gamma_2\).
The second claim now easily reduces to the case \(\dim \mathbb R \Sigma \leq 2\) and may be checked case by case.
\end{proof}
We finish this section with a technical lemma needed in section \(\).
\begin{lemma}\label{three-decompose}
Suppose that \(\ell \geq 3\). Up to the action of \(\mathrm W(\mathsf{BC}_\ell)\), the only linearly independent triples \(\{\alpha, \beta, \gamma\} \subseteq \Phi \setminus 2 \Phi\) such that \(\mathrm e_1 = x\alpha + y\beta + z\gamma\) for some \(x, y, z > 0\) are
\(\{\mathrm e_2 + \mathrm e_3, \mathrm e_1 - \mathrm e_2, \mathrm e_2 - \mathrm e_3\}\),
\(\{\mathrm e_2 + \mathrm e_3, \mathrm e_1 - \mathrm e_2, \mathrm e_1 - \mathrm e_3\}\),
and \(\{\mathrm e_2 + \mathrm e_3, -\mathrm e_2, \mathrm e_1 - \mathrm e_3\}\).
Up to the same action, the only linearly independent triples \(\{\alpha, \beta, \gamma\} \subseteq \Phi \setminus 2 \Phi\) such that \(\mathrm e_1 - \mathrm e_2 = x\alpha + y\beta + z\gamma\) for some \(x, y, z > 0\) are
\(\{\mathrm e_1 + \mathrm e_3, -\mathrm e_3, -\mathrm e_2\}\),
\(\{\mathrm e_1 + \mathrm e_3, -\mathrm e_3, -\mathrm e_1 - \mathrm e_2\}\),
\(\{\mathrm e_1 + \mathrm e_3, -\mathrm e_3, \mathrm e_3 - \mathrm e_2\}\),
\(\{\mathrm e_1 + \mathrm e_3, \mathrm e_2 - \mathrm e_3, -\mathrm e_2\}\),
\(\{\mathrm e_1 + \mathrm e_3, \mathrm e_1 - \mathrm e_3, -\mathrm e_2\}\),
\(\{\mathrm e_1 + \mathrm e_3, \mathrm e_1 - \mathrm e_3, -\mathrm e_1 - \mathrm e_2\}\),
and \(\{\mathrm e_3 - \mathrm e_4, \mathrm e_1 - \mathrm e_3, \mathrm e_4 - \mathrm e_2\}\). The last one is possible only for \(\ell \geq 4\).
\end{lemma}
\begin{proof}
Let \(\Psi = (\mathbb R \alpha + \mathbb R \beta + \mathbb R \gamma) \cap \Phi\), it is an indecomposable saturated root subsystem of rank \(3\). In the first claim necessarily \(\Psi\) is of type \(\mathsf{BC}_3\), so without loss of generality \(\Psi = \Phi\) and \(\ell = 3\). The stabilizer \(G\) of \(\mathrm e_1\) in \(\mathrm W(\mathsf{BC}_3)\) is isomorphic to the dihedral group \((\mathbb Z / 2 \mathbb Z)^2 \rtimes (\mathbb Z / 2 \mathbb Z)\). The images of \(\alpha, \beta, \gamma\) lie in the root system \(\Phi / \mathrm e_1\) of type \(\mathsf{BC}_2\) and \(0\) lies in the interior of the triangle spanned by these images, hence up to the action of \(G\) and up to permutations of \(\alpha, \beta, \gamma\) we have \(\alpha = \mathrm e_2 + \mathrm e_3\), \(\beta \in \{\mathrm e_1 - \mathrm e_2, -\mathrm e_2, -\mathrm e_1 - \mathrm e_2\}\), \(\gamma \in \{\mathrm e_2 - \mathrm e_3, \mathrm e_1 - \mathrm e_3, -\mathrm e_3, -\mathrm e_1 - \mathrm e_3\}\). The resulting list easily follows.
In the second claim consider the case where \(\Psi\) is of type \(\mathsf{BC}_3\), so again we may assume that \(\Psi = \Phi\) and \(\ell = 3\). The stabilizer \(G\) of \(\mathrm e_1 - \mathrm e_2\) in \(\mathrm W(\mathsf{BC}_3)\) is isomorphic to \(\mathbb Z / 2 \mathbb Z \times \mathbb Z / 2 \mathbb Z\). The images of \(\alpha, \beta, \gamma\) lie in the root system \(\Phi / (\mathrm e_1 - \mathrm e_2)\) of type \(\mathsf{BC}_2\) and \(0\) lies in the interior of the triangle spanned by these images, hence up to the action of \(G\) and up to permutations of \(\alpha, \beta, \gamma\) we have \(\alpha \in \{\mathrm e_1 + \mathrm e_3, \mathrm e_2 + \mathrm e_3\}\), \(\beta \in \{\mathrm e_1 - \mathrm e_3, \mathrm e_2 - \mathrm e_3, -\mathrm e_3\}\), \(\gamma \in \{\mathrm e_3 - \mathrm e_2, \mathrm e_3 - \mathrm e_1, -\mathrm e_2, -\mathrm e_1 - \mathrm e_2, -\mathrm e_1\}\). So we have the second list without the last triple.
Finally, suppose that \(\Psi\) is of type \(\mathsf A_3\). Without loss of generality, \(\ell = 4\) and \(\Psi = \{\mathrm e_i - \mathrm e_j \mid 1 \leq i \neq j \leq 4\}\). The stabilizer \(G\) of \(\mathrm e_1 - \mathrm e_2\) in \(\mathrm W(\mathsf A_3)\) is \(\mathbb Z / 2 \mathbb Z\). By the same argument as above, up to all symmetries \(\alpha = \mathrm e_3 - \mathrm e_4\), \(\beta \in \{\mathrm e_1 - \mathrm e_3, \mathrm e_2 - \mathrm e_3\}\), and \(\gamma \in \{\mathrm e_4 - \mathrm e_1, \mathrm e_4 - \mathrm e_2\}\). This gives only one possible triple.
\end{proof}
\fi
\section{Conjugacy calculus}
Let us say that a group \(G\) has a \textit{conjugacy calculus} with respect to the orthogonal hyperbolic family \(H\) if there is a family of maps
\[\stunit(R, \Delta; \Sigma) \times \stunit(S, \Theta; \Phi) \to G, (g, h) \mapsto \up g{\{h\}_\Sigma}\]
parametrized by a saturated special closed subset \(\Sigma \subseteq \Phi\) such that
\begin{itemize}
\item[(Hom)] \(\up g{\{h_1 h_2\}_\Sigma} = \up g{\{h_1\}_\Sigma}\, \up g{\{h_2\}_\Sigma}\);
\item[(Sub)] \(\up g{\{h\}_{\Sigma'}} = \up g{\{h\}_\Sigma}\) if \(\Sigma' \subseteq \Sigma\);
\item[(Chev)] \(\up{g X_{\alpha}(\mu)}{\{X_{\beta}(\nu)\}_\Sigma} = \up g{\bigl\{\prod_{\substack{i \alpha + j \beta \in \Phi\\ i \geq 0, j > 0}} X_{i \alpha + j \beta}(f_{\alpha \beta i j}(\mu, \nu))\bigr\}_\Sigma}\) if \(\alpha, \beta\) are non-antiparallel and \(\alpha \in \Sigma\);
\item[(XMod)] \(\up{X_\alpha(\delta(\mu))\, g}{\{h\}_\Sigma} = \up 1{\{X_\alpha(\mu)\}_\Sigma}\, \up g{\{h\}_\Sigma}\, \up 1{\{X_\alpha(\mu)\}^{-1}_\Sigma}\) if \(\alpha \in \Sigma\);
\item[(Conj)] \(\up{g_1}{\{h_1\}_{\Sigma'}}\, \up{F_\Psi(g_2)}{\{F_\Psi(h_2)\}_{\pi_\Psi^{-1}(\Sigma)}}\, \up{g_1}{\{h_1\}_{\Sigma'}}^{-1} = \up{F_\Psi(\up f {g_2})}{\{F_\Psi(\up f {h_2})\}_{\pi_\Psi^{-1}(\Sigma)}}\) if \(\Psi \subseteq \Phi\) is a saturated root subsystem, \(\Sigma \subseteq \Phi / \Psi\) is a saturated special closed subset, \(\alpha \in \Psi\), \(\Sigma' \subseteq \Psi\), \(f = \up{\stmap(g_1)}{\stmap(h_1)} \in \unit(S, \Theta)\).
\end{itemize}
The axiom (Sub) implies that we may omit the subscript \(\Sigma\) in the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\).
If \(G\) has a conjugacy calculus with respect to \(H\), then we define the elements
\begin{align*}
Z_{ij}(a, p) &= \up{X_{ji}(p)}{\{X_{ij}(a)\}};
& X_{ij}(a) &= Z_{ij}(a, 0); \\
Z_j(u, s) &= \up{X_{-j}(s)}{\{X_j(u)\}};
& X_j(u) &= Z_j(u, \dot 0); \\
Z_{i, j \oplus k}(a, p) &= \up{F_{\mathrm e_k - \mathrm e_j}(X_{j \oplus k, i}(p))}{\{F_{\mathrm e_k - \mathrm e_j}(X_{i, j \oplus k}(a))\}};
& X_{i, j \oplus k}(a) &= Z_{i, j \oplus k}(a, 0); \\
Z_{i \oplus j, k}(a, p) &= \up{F_{\mathrm e_j - \mathrm e_i}(X_{k, i \oplus j}(p))}{\{F_{\mathrm e_j - \mathrm e_i}(X_{i \oplus j, k}(a))\}};
& X_{i \oplus j, k}(a) &= Z_{i \oplus j, k}(a, 0); \\
Z_{i \oplus j}(u, s) &= \up{F_{\mathrm e_j - \mathrm e_i}(X_{-(i \oplus j)}(s))}{\{F_{\mathrm e_j - \mathrm e_i}(X_{i \oplus j}(u))\}};
& X_{i \oplus j}(u) &= Z_{i \oplus j}(u, \dot 0);\\
Z^{\ominus i}_j(u, s) &= \up{F_{\mathrm e_i}(X^{\ominus i}_{-j}(s))}{\{F_{\mathrm e_i}(X^{\ominus i}_j(u))\}};
& X^{\ominus i}_j(u) &= Z^{\ominus i}_j(u, \dot 0)
\end{align*}
of \(G\). Since \(\stunit(R, \Delta; S, \Theta)\) and \(\unit(S, \Theta)\) have natural conjugacy calculi with respect to \(H\), we use the notation \(Z_{ij}(a, p)\) and \(Z_j(u, s)\) for the corresponding elements in these groups.
\begin{prop} \label{identities}
Suppose that \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a crossed module of odd form rings, \(H\) is an orthogonal hyperbolic family in \((R, \Delta)\), and a group \(G\) has a conjugacy calculus with respect to \(H\). Then the following identities hold:
\begin{itemize}
\item[(Sym)] \(Z_{ij}(a, p) = Z_{-j, -i}(-\inv a, -\inv p)\) and \(Z_{i \oplus j, k}(a, p) = Z_{-k, (-i) \oplus (-j)}(-\inv a, -\inv p)\);
\item[(Add)] The maps \(Z_{ij}\), \(Z_j\), \(Z_{i \oplus j, k}\), \(Z_{i, j \oplus k}\), \(Z_{i \oplus j}\), \(Z^{\ominus i}_j\) are homomorphisms on the first variables.
\item[(Comm)]
\begin{enumerate}
\item \([Z_{ij}(a, p), Z_{kl}(b, q)] = 1\) if \(\pm i, \pm j, \pm k, \pm l\) are distinct,
\item \([Z_{ij}(a, p), Z_l(b, q)] = 1\) if \(\pm i, \pm j, \pm l\) are distinct,
\item \(\up{Z_{ij}(a, p)}{Z_{i \oplus j, k}(b, q)} = Z_{i \oplus j, k}\bigl(\up{Z_{ij}(a, p)} b, \up{Z_{ij}(a, p)} q\bigr)\),
\item \(\up{Z_{ij}(a, p)}{Z_{i \oplus j}(u, s)} = Z_{i \oplus j}\bigl(\up{Z_{ij}(a, p)} u, \up{Z_{ij}(a, p)} s\bigr)\),
\item \(\up{Z_i(u, s)}{Z^{\ominus i}_j(v, t)} = Z^{\ominus i}_j\bigl(\up{Z_i(u, s)} v, \up{Z_i(u, s)} t\bigr)\);
\end{enumerate}
\item[(Simp)]
\begin{enumerate}
\item \(Z_{ik}(a, p) = Z_{i \oplus j, k}(a, p)\);
\item \(Z_{ij}(a, p) = Z_{(-i) \oplus j}(\phi(a), \phi(p))\);
\item \(Z_j(u, s) = Z^{\ominus i}_j(u, s)\);
\end{enumerate}
\item[(HW)]
\begin{enumerate}
\item \(Z_{j \oplus k, i}\bigl(\up{T_{jk}(r)} a, p + q\bigr) = Z_{k, i \oplus j}\bigl(\up{T_{ij}(p)} a, \up{T_{ij}(p)}{(q + r)}\bigr)\) for \(a \in S_{ki}\), \(p \in R_{ij}\), \(q \in R_{ik}\), \(r \in R_{jk}\),
\item \(Z_{-j \oplus -i}\bigl(\up{T_{ij}(q)}{(u \dotplus \phi(a))}, s \dotplus \phi(p) \dotplus t\bigr) = Z^{\ominus i}_{-j}\bigl(\up{T_i(s)}{(u \dotplus q_i \cdot a)}, \up{T_i(s)}{(q_{-i} \cdot p \dotplus t \dotplus q_i \cdot q)}\bigr)\) for \(u \in \Theta^0_{-j}\), \(a \in S_{i, -j}\), \(s \in \Delta^0_i\), \(p \in R_{-i, j}\), \(t \in \Delta^0_j\), \(q \in R_{ij}\);
\end{enumerate}
\item[(Delta)]
\begin{enumerate}
\item \(Z_{ij}(a, \delta(b) + p) = Z_{ji}(b, 0)\, Z_{ij}(a, p)\, Z_{ji}(-b, 0)\),
\item \(Z_j(u, \delta(v) \dotplus s) = Z_{-j}(v, \dot 0)\, Z_j(u, s)\, Z_{-j}(\dotminus v, \dot 0)\).
\end{enumerate}
\end{itemize}
Conversely, suppose that \(G\) is a group with the elements \(Z_{ij}(a, p)\), \(Z_j(u, s)\), \(Z_{i \oplus j, k}(a, p)\), \(Z_{i, j \oplus k}(a, p)\), \(Z_{i \oplus j}(u, s)\), \(Z^{\ominus i}_j(u, s)\) satisfying the identities above. Then \(G\) has a unique conjugacy calculus with respect to \(H\) such that the distinguished elements coincide with the corresponding expressions from the conjugacy calculus.
\end{prop}
\begin{proof}
If \(G\) has a conjugacy calculus, then all the identities may be proved by direct calculations. In particular, (Comm) follows from (Conj), (Delta) follows from (XMod), and the remaining ones follow from (Hom), (Sub), (Chev).
To prove the converse, notice that (Add) and (Sym) imply
\begin{align*}
Z_{ij}(0, p) &= 1, &
Z_{i \oplus j, k}(0, p) &= 1, &
Z_{i, j \oplus k}(0, p) &= 1, \\
Z_j(\dot 0, s) &= 1, &
Z_{i \oplus j}(\dot 0, s) &= 1, &
Z_{\ominus i, j}(\dot 0, s) &= 1.
\end{align*}
Together with (Sym), (Add), (Simp), and (HW), these identities imply that \(X_{i \oplus j, k}(a)\), \(X_{i, j \oplus k}(a)\), \(X_{i \oplus j}(u)\), and \(X_{\ominus i, j}(u)\) may be expressed in terms of the root elements \(X_{ij}(a)\) and \(X_j(u)\) in the natural way (these elements are defined as various \(Z_{(-)}(=, \dot 0)\)). In particular, (Comm) implies that the root elements satisfy the Chevalley commutator formulas. It is also easy to see that (Sym), (Add), (Simp), and (HW) give some canonical expressions of all the distinguished elements in terms of \(Z_{ij}(a, p)\) and \(Z_j(u, s)\).
We explicitly construct the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\) by induction on \(|\Sigma|\); the case \(\Sigma = \varnothing\) is trivial. Simultaneously we show that \(\up g{\{h\}_\Sigma}\) evaluates to a distinguished element if \(\Sigma\) is strictly contained in a system of positive roots of a rank \(2\) saturated root subsystem and \(h \in \unit(S, \Theta; -\Sigma)\). Hence from now on we assume that \(\Sigma \subseteq \Phi\) is a saturated special closed subset and there are unique maps \(\up g{\{h\}_{\Sigma'}}\) for all saturated \(\Sigma'\) with \(|\Sigma'| < |\Sigma|\) satisfying the axioms (Hom), (Sub), and (Chev).
Firstly, we construct \(\up g{\{X_\alpha(\mu)\}_\Sigma}\), where \(\alpha \in \Phi \setminus 2 \Phi\) is a fixed root. If there is an extreme root \(\beta \in \Sigma \setminus 2 \Sigma\) such that \(\beta \neq -\alpha\), then we define
\[\up{g X_\beta(\nu)}{\{X_\alpha(\mu)\}_\Sigma} = \up g{\bigl\{\up{X_\beta(\nu)}{X_\alpha(\mu)}\bigr\}_{\Sigma \setminus \langle \beta \rangle}}\]
for any \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \beta \rangle)\), where \(\up{X_\beta(\nu)}{X_\alpha(\mu)} = \prod_{\substack{i\alpha + j\beta \in \Phi\\ i \geq 0, j > 0}} X_{i \alpha + j \beta}(f_{\alpha \beta i j}(\mu, \nu)) \in \stunit(S, \Theta)\). By (HW), this definition gives the distinguished element for appropriate \(\Sigma\).
Let us check that the definition is correct, i.e. if \(\beta, \gamma \in \Sigma \setminus 2 \Sigma\) are two extreme roots, \(\beta, \gamma, -\alpha\) are distinct, and \(g \in \stunit(R, \Delta; \Sigma \setminus (\langle \beta \rangle \cup \langle \gamma \rangle))\), then
\[\up{g X_\beta(\nu)}{\{\up{X_\gamma(\lambda)}{X_\alpha(\mu)}\}_{\Sigma \setminus \langle \gamma \rangle}} = \up{g [X_\beta(\nu), X_\gamma(\lambda)]\, X_\gamma(\lambda)}{\{\up{X_\beta(\nu)}{X_\alpha(\mu)}\}_{\Sigma \setminus \langle \beta \rangle}}.\]
If \(\langle \alpha, \beta, \gamma \rangle\) is special, then this claim easily follows. Otherwise these roots lie in a common saturated root subsystem \(\Phi_0\) of type \(\mathsf A_2\) or \(\mathsf{BC}_2\). We may assume that \(\Sigma = \langle \beta, \gamma \rangle\), since otherwise there is an extreme root in \(\Sigma\) but not in \(\Phi_0\). If \(\Phi_0\) is of type \(\mathsf{BC}_2\) and \(\langle \beta, \gamma \rangle\) is not its subsystem of positive roots, then the required identity follows from (Sym). Otherwise this is a simple corollary of (HW).
The above definition cannot be used if \(\Sigma = \langle -\alpha \rangle\). In this case we just define
\[\up{X_{ji}(p)}{\{X_{ij}(a)\}_{\langle \mathrm e_j - \mathrm e_i \rangle}} = Z_{ij}(a, p), \quad \up{X_{-i}(s)}{\{X_i(u)\}_{\langle \mathrm e_i \rangle}} = Z_i(u, s).\]
Now let us check that the map \((g, h) \mapsto \up g{\{h\}_\Sigma}\) is well-defined, i.e. factors through the Steinberg relations on \(h\). By construction, it factors through the homomorphism property of the root elements. Let us check that the Chevalley commutator formula for \([X_\alpha(*), X_\beta(*)]\) is also preserved, where \(\alpha, \beta\) are linearly independent roots. If there is an extreme root \(\gamma \in \Sigma\) such that \(\langle \alpha, \beta, \gamma \rangle\) is special, then we may apply the construction of \(\up g{\{h\}_\Sigma}\) via \(\gamma\). Otherwise let \(\Phi_0 \subseteq \Phi\) be the rank \(2\) saturated root subsystem containing \(\alpha\) and \(\beta\). If \(\Phi_0\) is of type \(\mathsf A_1 \times \mathsf A_1\) or \(\mathsf A_1 \times \mathsf{BC}_1\), then \(\Sigma = \langle -\alpha, -\beta \rangle\) and we may apply the corresponding case of (Comm). If \(\Phi_0\) is of type \(\mathsf A_2\) or \(\mathsf{BC}_2\) and \(\Sigma\) is not its subsystem of positive roots, then we just apply (Add).
Consider the case where \(\Phi_0\) is of type \(\mathsf A_2\), \(\alpha, \beta, \alpha + \beta \in \Phi_0\), and \(\Sigma = \langle -\alpha, -\beta \rangle\). Without loss of generality, \(\alpha = \mathrm e_j - \mathrm e_i\) and \(\beta = \mathrm e_k - \mathrm e_j\). Then
\begin{align*}
\up{X_{ji}(p)\, X_{ki}(q)\, X_{kj}(r)}{\{X_{ij}(a)\, X_{jk}(b)\}_{\Sigma}} &= X_{k, i \oplus j}\bigl(\up{T_{ji}(p)}{(qa)}\bigr)\, Z_{ij}(a, p)\, Z_{i \oplus j, k}\bigl(b, \up{T_{ji}(p)}{(q + r)}\bigr) \\
&= Z_{i \oplus j, k}\bigl(\up{T_{ji}(p)\, T_{ij}(a)}{b}, \up{T_{ji}(p)}{(q + r)}\bigr)\, X_{k, i \oplus j}\bigl(\up{T_{ji}(p)}{(qa)}\bigr)\, Z_{ij}(a, p) \\
&= \up{X_{ji}(p)\, X_{ki}(q)\, X_{kj}(r)}{\{X_{jk}(b)\, X_{ik}(ab)\, X_{ij}(a)\}_\Sigma}.
\end{align*}
The remaining case is where \(\Phi_0\) is of type \(\mathsf{BC}_2\), \(\alpha, \beta, \alpha + \beta, 2\alpha + \beta \in \Phi_0\), and \(\Sigma = \langle -\alpha, -\beta \rangle\). Without loss of generality, \(\alpha = \mathrm e_i\) and \(\beta = \mathrm e_j - \mathrm e_i\). We have
\begin{align*}
\up{X_{-i}(s)\, X_{i, -j}(p)\, X_{-j}(t)\, X_{ji}(q)}{\{X_i(u)\, X_{ij}(a)\}_\Sigma} &= Z_{i \oplus j}(u, s \dotplus \phi(p) \dotplus t)\, Z_{\ominus i, j}\bigl(\up{T_{-i}(s)}{(q_i \cdot a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \dotminus q_{-i} \cdot \inv q)}\bigr)\\
&= X_{\ominus i, -j}\bigl(\up{T_{-i}(s)}{[t \dotplus q_i \cdot p, T_i(u)]}\bigr)\, Z_i(u, s)\\
&\cdot Z_{\ominus i, j}\bigl(\up{T_{-i}(s)}{(q_i \cdot a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \dotminus q_{-i} \cdot \inv q)}\bigr) \\
&= Z_{\ominus i, j}\bigl(\up{T_{-i}(s)}{(q_i \cdot a \dotplus u \cdot a \dotplus q_{-i} \cdot \rho(u) a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \dotminus q_{-i} \cdot \inv q)}\bigr) \\
&\cdot Z_{i \oplus j}(u, s \dotplus \phi(p) \dotplus t) \\
&= \up{X_{-i}(s)\, X_{i, -j}(p)\, X_{-j}(t)\, X_{ji}(q)}{\{X_{ij}(a)\, X_j(u \cdot a)\, X_{-i, j}(\rho(u) a)\, X_i(u)\}_\Sigma}.
\end{align*}
Clearly, our map \((g, h) \mapsto \up g{\{h\}_\Sigma}\) satisfies the required properties and is unique. The axiom (XMod) follows from the Steinberg relations on the root elements, (Comm), and (Delta) if \(\Sigma\) is one-dimensional, the general case follows from the construction of \(\up g{\{h\}_\Sigma}\).
To prove the axiom (Conj), without loss of generality \(\Sigma'\) is one-dimensional and \(\Psi = \mathbb R \Sigma' \cap \Phi\). Applying (Chev) multiple times to the term \(\up{F_\Psi(g_2)}{\{F_\Psi(h_2)\}_{\pi^{-1}_\Psi(\Sigma)}}\), we reduce to the case where \(\Sigma\) is also one-dimensional. This is precisely (Comm).
\end{proof}
From now on let \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) be the universal group with a conjugacy calculus with respect to \(H\). It is the abstract group with the presentation given by proposition \ref{identities}. Clearly, for a saturated root subsystem \(\Psi \subseteq \Phi\) there is a homomorphism \(F_\Psi \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\), i.e. every group with a conjugacy calculus with respect to \(H\) also has a canonical conjugacy calculus with respect to \(H_\Psi\). We have a sequence of groups
\[\stunit(S, \Theta; \Phi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi) \to \stunit(R, \Delta; S, \Theta; \Phi)\]
with the action of \(\diag(R, \Delta; \Phi)\). Our goal for the next several sections is to prove that the right arrow is an isomorphism.
\section{Lemmas about odd form rings}
The difficult part of the proof that \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi) \to \stunit(R, \Delta; S, \Theta; \Phi)\) is an isomorphism is the construction of an action of \(\stunit(R, \Delta; \Phi)\) on the left hand side. In order to do this, we prove that \(F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) is an isomorphism for all roots \(\alpha\); then the automorphisms of the left hand side induced by \(T_\alpha(\mu)\) give certain automorphisms of the right hand side. In this section we prove the surjectivity of \(F_\alpha\) and several preparatory results.
\begin{lemma} \label{ring-pres}
If \(\pm i\), \(\pm j\), \(\pm k\) are distinct non-zero indices, then the multiplication map
\[(S_{ik} \oplus S_{i, -k}) \otimes_{e_{|k|} R e_{|k|}} (R_{kj} \oplus R_{-k, j}) \to S_{ij}\]
is an isomorphism. If, in addition, \(H\) is strong, then
\[S_{ik} \otimes_{R_{kk}} R_{kj} \to S_{ij}\]
is also an isomorphism.
\end{lemma}
\begin{proof}
This is an easy consequence of Morita theory. Let \(e_j = \sum_{l = \pm k} \sum_m x_{lm} y_{lm}\) for some \(x_{lm} \in R_{jl}\) and \(y_{lm} \in R_{lj}\); such elements exist since \(e_j \in R e_{|k|} R\). Then a direct calculation shows that
\[S_{ij} \to (S_{ik} \oplus S_{i, -k}) \otimes_{e_{|k|} R e_{|k|}} (R_{kj} \oplus R_{-k, j}), a \mapsto \sum_{l = \pm k} \sum_m a x_{lm} \otimes y_{lm}\]
is the inverse to the map from the statement. The second claim follows by the same argument if we take \(l = k\).
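Explicitly (a sketch of the direct calculation, using only \(\sum_{l, m} x_{lm} y_{lm} = e_j\)): composing the candidate inverse with the multiplication map gives back the identity of \(S_{ij}\), since
\[\sum_{l = \pm k} \sum_m a x_{lm} y_{lm} = a e_j = a \quad \text{for } a \in S_{ij},\]
while the other composite sends \(b \otimes q\) to \(\sum_{l = \pm k} \sum_m b q x_{lm} \otimes y_{lm} = b \otimes q \sum_{l, m} x_{lm} y_{lm} = b \otimes q\) by middle linearity over \(e_{|k|} R e_{|k|}\), since \(q x_{lm} \in e_{|k|} R e_{|k|}\).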
\end{proof}
\begin{lemma} \label{form-pres}
For distinct non-zero indices \(\pm j\), \(\pm k\) consider the group \(F\) with generators \(u \boxtimes p\) for \(u \in \Theta^0_{\pm k}\), \(p \in R_{\pm k, j}\) and \(\phi(a)\) for \(a \in S_{-j, j}\). The relations are
\begin{itemize}
\item \(\phi(a + b) = \phi(a) \dotplus \phi(b)\), \(\phi(a) = \phi(-\inv a)\);
\item \(u \boxtimes a \dotplus \phi(b) = \phi(b) \dotplus u \boxtimes a\), \([u \boxtimes a, v \boxtimes b] = \phi(-\inv a \inv{\pi(u)} \pi(v) b)\);
\item \((u \dotplus v) \boxtimes a = u \boxtimes a \dotplus v \boxtimes a\), \(u \boxtimes (a + b) = u \boxtimes a \dotplus \phi(\inv{\,b\,} \rho(u) a) \dotplus u \boxtimes b\);
\item \(u \boxtimes ab = (u \cdot a) \boxtimes b\) for \(u \in \Theta^0_l\), \(a \in R_{lm}\), \(b \in R_{mj}\), \(l, m \in \{-k, k\}\);
\item \(\phi(a) \boxtimes b = \phi(\inv{\,b\,} a b)\).
\end{itemize}
Then the homomorphism
\[f \colon F \to \Theta^0_j, u \boxtimes p \mapsto u \cdot p, \phi(a) \mapsto \phi(a)\]
is an isomorphism. If the orthogonal hyperbolic family is strong, then it suffices to take only the generators \(u \boxtimes p\) for \(u \in \Theta^0_k\), \(p \in R_{kj}\) and \(\phi(a)\) for \(a \in S_{-j, j}\).
\end{lemma}
\begin{proof}
Let \(e_j = \sum_{l = \pm k} \sum_m x_{lm} y_{lm}\) for some \(x_{lm} \in R_{jl}\) and \(y_{lm} \in R_{lj}\). Consider the map
\[g \colon \Theta^0_j \to F, u \mapsto \sum_{l = \pm k}^\cdot \sum_m^\cdot (u \cdot x_{lm}) \boxtimes y_{lm} \dotplus \phi\bigl(\sum_{(l, m) < (l', m')} \inv{y_{l'm'}} \inv{x_{l'm'}} \rho(u) x_{lm} y_{lm}\bigr),\]
it is a section of \(f\). The relations of \(F\) easily imply that \(g\) is a homomorphism. Finally,
\[g(f(\phi(a))) = \sum_{l = \pm k}^\cdot \sum_m^\cdot (\phi(a) \cdot x_{lm}) \boxtimes y_{lm} \dotplus \phi\bigl(\sum_{(l, m) < (l', m')} \inv{y_{l'm'}} \inv{x_{l'm'}} (a - \inv a) x_{lm} y_{lm}\bigr) = \phi(a)\]
and
\[g(f(u \boxtimes a)) = \sum_{l = \pm k}^\cdot \sum_m^\cdot (u \cdot a x_{lm}) \boxtimes y_{lm} \dotplus \phi\bigl(\sum_{(l, m) < (l', m')} \inv{y_{l'm'}} \inv{x_{l'm'}} \inv a \rho(u) a x_{lm} y_{lm}\bigr) = u \boxtimes a.\]
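For completeness, here is a one-line sketch of the section property \(f(g(u)) = u\); it uses only the counterpart in \(\Theta^0_j\) of the expansion relation \(u \boxtimes (a + b) = u \boxtimes a \dotplus \phi(\inv{\,b\,} \rho(u) a) \dotplus u \boxtimes b\), applied iteratively to the ordered sum \(\sum_{l, m} x_{lm} y_{lm} = e_j\):
\[f(g(u)) = \sum_{l = \pm k}^\cdot \sum_m^\cdot u \cdot (x_{lm} y_{lm}) \dotplus \phi\bigl(\sum_{(l, m) < (l', m')} \inv{y_{l'm'}} \inv{x_{l'm'}} \rho(u) x_{lm} y_{lm}\bigr) = u \cdot e_j = u.\]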
The second claim may be proved in the same way.
\end{proof}
\begin{prop} \label{elim-sur}
The homomorphism
\[F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\]
is surjective if \(\alpha\) is a short root and \(\ell \geq 3\) or if \(\alpha\) is an ultrashort root and \(\ell \geq 2\). The homomorphism
\[F_{\Psi / \alpha} \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\]
is also surjective if \(\Psi \subseteq \Phi\) is a root subsystem of type \(\mathsf A_2\) containing \(\alpha\), \(\ell \geq 3\), and \(H\) is strong.
\end{prop}
\begin{proof}
By proposition \ref{identities}, \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) is generated by \(Z_{ij}(a, p)\) and \(Z_j(u, s)\). It suffices to show that they lie in the images of the homomorphisms. This is clear for the generators with the roots not in \(\{\alpha, -\alpha\}\) or \(\Psi\) respectively. For the remaining roots \(\beta\) it suffices to show that \(X_\beta(S, \Theta)\) lie in the images. This easily follows from the Chevalley commutator formula and lemmas \ref{ring-pres}, \ref{form-pres}.
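To illustrate the last step, here is a sketch in the short root case: let \(\alpha = \mathrm e_m - \mathrm e_l\) and \(\beta = \alpha\) (the case \(\beta = -\alpha\) is symmetric). Choose an index \(k\) with \(\pm k \notin \{\pm l, \pm m\}\), which is possible since \(\ell \geq 3\). By lemma \ref{ring-pres} every \(a \in S_{lm}\) is a finite sum of products \(bq\) with \(b \in S_{lk} \oplus S_{l, -k}\) and \(q \in R_{km} \oplus R_{-k, m}\), and the Chevalley commutator formula, e.g.
\[X_{lm}(bq) = [X_{lk}(b), X_{km}(q)] \quad \text{for } b \in S_{lk},\ q \in R_{km},\]
expresses \(X_{lm}(a)\) through root elements whose roots are linearly independent from \(\alpha\), hence lying in the image. Ultrashort roots are treated in the same way via lemma \ref{form-pres}.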
\end{proof}
Of course, the proposition also implies that
\[F_{\Psi / \alpha} \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\]
is surjective if either \(\ell \geq 3\) and \(\Psi\) is of type \(\mathsf{BC}_2\), or \(\ell \geq 4\) and \(\Psi\) is of type \(\mathsf A_2\).
The final technical lemma is needed in the next section.
\begin{lemma}\label{associator}
Suppose that \(\ell = 3\) and \(H\) is strong. Let \(A\) be an abelian group and
\[\{-\}_{ij} \colon R_{1i} \otimes_{\mathbb Z} R_{ij} \otimes_{\mathbb Z} S_{j3} \to A\]
be homomorphisms for \(i, j \in \{-2, 2\}\). Suppose also that
\begin{itemize}
\item[(A1)] \(\{p \otimes qr \otimes a\}_{ik} = \{pq \otimes r \otimes a\}_{jk} + \{p \otimes q \otimes ra\}_{ij}\);
\item[(A2)] \(\{p \otimes q \otimes \inv pa\}_{-i, i} = 0\);
\item[(A3)] \(\{p \otimes qr \otimes a\}_{ij} = \{\inv q \otimes \inv pr \otimes a\}_{-i, j}\);
\item[(A4)] \(\{p \otimes q \otimes ra\}_{ij} = -\{\inv r \otimes \inv q \otimes \inv pa\}_{-j, -i}\).
\end{itemize}
Then \(\{x\}_{ij} = 0\) for all \(i, j, x\).
\end{lemma}
\begin{proof}
Let \(R_{|2|, |2|} = \sMat{R_{-2, -2}}{R_{-2, 2}}{R_{2, -2}}{R_{22}}\) and \(S_{|2|, 3} = \sCol{S_{-2, 3}}{S_{23}}\). For convenience we prove the claim for an arbitrary left \(R_{|2|, |2|}\)-module \(S_{|2|, 3}\) instead of a part of a crossed module, where \(S_{\pm 1, 3} = R_{\pm 1, 2} \otimes_{R_{22}} S_{23}\) in (A2) and (A4). From the last two identities we get
\begin{align*}
\{px \otimes \inv yq \otimes a\}_{ij} &= \{py \otimes \inv xq \otimes a\}_{-i, j}; \tag{A5} \\
\{p \otimes qx \otimes \inv ya\}_{ji} &= \{p \otimes qy \otimes \inv xa\}_{j, -i} \tag{A6}
\end{align*}
for \(x \in R_{1i}\) and \(y \in R_{1, -i}\). This implies that
\[\{pxyq \otimes r \otimes a\}_{ij} = \{pyxq \otimes r \otimes a\}_{ij} \tag{A7}\]
for \(x, y \in R_{\pm 1, \pm 1}\). Let \((I, \Gamma) \leqt (R, \Delta)\) be the odd form ideal generated by \(xy - yx\) for \(x, y \in R_{11}\). From (A5)--(A7) we obtain that \(\{-\}_{ij}\) factor through \(R / I\) and \(S_{|2|, 3} / (I \cap R_{|2|, |2|}) S_{|2|, 3}\), so we may assume that \(R_{11}\) is commutative. It is easy to see that
\[\{rx \otimes y e_1 z \otimes w e_1 a\}_{ij} = \{x \otimes yrz \otimes w e_1 a\}_{ij} = \{x \otimes y e_1 z \otimes wra\}_{ij} \tag{A8}\]
for \(r \in R_{11}\). By (A5) and (A6), it suffices to prove that \(\{x\}_{22} = 0\).
From (A2), (A4), (A5), and (A6) we get
\begin{align*}
\{px \otimes qx \otimes a\}_{22} &= 0; \tag{A9} \\
\{p \otimes yq \otimes ya\}_{22} &= 0 \tag{A10}
\end{align*}
for \(x \in R_{12}\) and \(y \in R_{21}\). Using (A1), (A8), (A9), (A10), and the linearizations of (A9) and (A10), we get
\begin{align*}
\{x \otimes y (pq)^3 z \otimes wa\}_{22} &= \{xyp \otimes q (pq)^2 z \otimes wa\}_{22} + \{(pq)^2 x \otimes yp \otimes qzwa\}_{22} \\
&= \{xyp \otimes qz' \otimes wpqa\}_{22} + \{x' \otimes ypqp \otimes qzwa\}_{22} \\
&= -\{xyz' \otimes qp \otimes w't\}_{22} - \{x' \otimes qp \otimes y'zwt\}_{22} = 0
\end{align*}
for \(x, p, z \in R_{12}\), \(y, q, w \in R_{21}\), \(a \in S_{13}\), where \(x' = pqx - xqp\), \(y' = ypq - qpy\), \(z' = pqz - zqp\), \(w' = wpq - qpw\). The last equality follows from \(x'q = py' = z'q = pw' = 0\). It remains to notice that the elements \((pq)^3\) generate the unit ideal in \(R_{11}\).
\end{proof}
\section{Construction of root subgroups}
In this section and the next, \(\alpha = \mathrm e_m\) or \(\alpha = \mathrm e_m - \mathrm e_l\) for \(m \neq \pm l\) is a fixed root. We also assume that \(\ell \geq 4\), or that \(\ell = 3\) and \(H\) is strong. We are going to prove that
\[F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\]
is an isomorphism, i.e. that \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) has a natural conjugacy calculus with respect to \(H\).
In this section we construct root elements \(\widetilde X_\beta(\mu) \in \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) for \(\beta \in \Phi\) and prove the Steinberg relations for them. Let \(I\) be the set of non-trivial indices, namely \(I = \{m, -m\}\) if \(\alpha = \mathrm e_m\) and \(I = \{m, -m, l, -l\}\) if \(\alpha = \mathrm e_m - \mathrm e_l\). If \(\beta \notin \mathbb R \alpha\), then there is a canonical choice for such elements, i.e.
\begin{align*}
\widetilde X_{ij}(a) &= X_{ij / \alpha}(a) \text{ for } i, j \notin I; \\
\widetilde X_{ij}(a) &= \widetilde X_{-j, -i}(-\inv a) = X_{i, \pm \infty / \alpha}(a) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, i \notin I, j \in \{\pm l, \pm m\}; \\
\widetilde X_{ij}(a) &= X_{\pm \infty / \alpha}(\phi(a)) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, i = \mp l, j = \pm m \text{ or } i = \mp m, j = \pm l; \\
\widetilde X_{ij}(a) &= \widetilde X_{-j, -i}(-\inv a) = X_{j / \alpha}(q_i \cdot a) \text{ for } \alpha = \mathrm e_m, i = \pm m; \\
\widetilde X_j(u) &= X_{j / \alpha}(u) \text{ for } j \notin I; \\
\widetilde X_j(u) &= X_{\pm \infty / \alpha}(u) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, j = \pm m \text{ or } j = \pm l.
\end{align*}
These elements satisfy all the Steinberg relations involving only roots from saturated special closed subsets \(\Sigma \subseteq \Phi\) disjoint from \(\mathbb R \alpha\). Similar elements may also be defined in \(\stunit(R, \Delta; \Phi / \alpha)\) and \(\stunit(S, \Theta; \Phi / \alpha)\). The conjugacy calculus with respect to \(H_\alpha\) gives a way to evaluate the elements \(\up{\widetilde X_\beta(\mu)}{\{\widetilde X_\gamma(\nu)\}}\) in terms of \(\widetilde X_{i \beta + j \gamma}(\lambda)\) if \(\pm \alpha \notin \langle \beta, \gamma \rangle\) and \(\langle \beta, \gamma \rangle\) is special. Up to symmetry, it remains to construct \(\widetilde X_{lm}(a) \in \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) for \(\alpha = \mathrm e_m - \mathrm e_l\) and \(\widetilde X_m(u) \in \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) for \(\alpha = \mathrm e_m\), as well as to prove the Steinberg relations involving \(\alpha\) and \(2\alpha\).
Consider the expressions \(\up{\widetilde X_\beta(\mu)}{\{\widetilde X_\gamma(\nu)\}}\), where \(\alpha \in \langle \beta, \gamma \rangle\) but \(\alpha\) is linearly independent with \(\beta\) and \(\gamma\) separately. We expand them in terms of \(\widetilde X_{i \beta + j \gamma}(\lambda)\) adding the new terms as follows. If \(\alpha = \mathrm e_m - \mathrm e_l\) let
\begin{align*}
\up{\widetilde X_{li}(p)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{lm}^i(p, a)\, \widetilde X_{im}(a); \\
\up{\widetilde X_{im}(p)}{\{\widetilde X_{li}(a)\}} &= \up i {\widetilde X}_{lm}(-a, p)\, \widetilde X_{li}(a); \\
\up{\widetilde X_{-l}(s)}{\{\widetilde X_m(u)\}} &= \widetilde X_{lm}^\pi(s, \dotminus u)\, \widetilde X_m(u); \\
\up{\widetilde X_m(s)}{\{\widetilde X_{-l}(u)\}} &= \up \pi{\widetilde X}_{lm}(u, s)\, \widetilde X_{-l}(u); \\
\up{\widetilde X_{-l}(s)}{\{\widetilde X_{-l, m}(a)\}} &= \widetilde X^{-l}_{lm}(s, a)\, \widetilde X_m(\dotminus s \cdot (-a))\, \widetilde X_{-l, m}(a); \\
\up{\widetilde X_m(s)}{\{\widetilde X_{l, -m}(a)\}} &= \up{-m}{\widetilde X}_{lm}(a, \dotminus s)\, \widetilde X_{-l}(\dotminus s \cdot \inv a)\, \widetilde X_{l, -m}(a); \\
\up{\widetilde X_{l, -m}(p)}{\{\widetilde X_m(u)\}} &= \widetilde X_{-l}(u \cdot \inv p)\, \widetilde X^{-m}_{lm}(-p, \dotminus u)\, \widetilde X_m(u); \\
\up{\widetilde X_{-l, m}(p)}{\{\widetilde X_{-l}(u)\}} &= \widetilde X_m(u \cdot (-p))\, \up{-l}{\widetilde X}_{lm}(u, -p)\, \widetilde X_{-l}(u); \\
\widetilde X_\alpha(S, \Theta) &= \langle \widetilde X^i_{lm}(p, a), \up i{\widetilde X}_{lm}(a, p), \widetilde X^\pi_{lm}(s, u), \up \pi {\widetilde X}_{lm}(u, s), \\
&\quad \widetilde X^{-l}_{lm}(s, a), \up{-m}{\widetilde X}_{lm}(a, s), \widetilde X^{-m}_{lm}(p, u), \up{-l}{\widetilde X}_{lm}(u, p) \rangle.
\end{align*}
In the case \(\alpha = \mathrm e_m\) let
\begin{align*}
\up{\widetilde X_{-m, i}(p)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{-m, m}^i(p, a)\, \widetilde X_{im}(a); \\
\up{\widetilde X_i(s)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{-i, m}(\rho(s) a)\, \widetilde X^i_m(\dotminus s, -a)\, \widetilde X_{im}(a); \\
\up{\widetilde X_{im}(p)}{\{\widetilde X_i(u)\}} &= \up i {\widetilde X}_m(u, -p)\, \widetilde X_{-i, m}(-\rho(u) p)\, \widetilde X_i(u); \\
\widetilde X_\alpha(S, \Theta) &= \langle \widetilde X^i_{-m, m}(p, a), \widetilde X^i_m(s, a), \up i{\widetilde X}_m(u, p) \rangle.
\end{align*}
\begin{lemma} \label{elim-diag}
If \(g \in \widetilde X_\alpha(S, \Theta)\) and \(\beta \in \Phi / \alpha\), then
\[\up g{Z_\beta(\mu, \nu)} = Z_\beta(\up{\delta(\stmap(g))} \mu, \up{\delta(\stmap(g))} \nu).\]
\end{lemma}
\begin{proof}
Without loss of generality, \(g\) is a generator of \(\widetilde X_\alpha(S, \Theta)\). Let \(\Psi \subseteq \Phi\) be the saturated irreducible root subsystem of rank \(2\) involved in the definition of \(g\) (of type \(\mathsf A_2\) or \(\mathsf{BC}_2\)). If \(\beta \notin \Psi / \alpha\), then the claim follows from (Conj). Otherwise we apply proposition \ref{elim-sur} to
\[F_\beta \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\]
in order to decompose \(Z_\beta(\mu, \nu)\) into the generators with roots not in \(\Psi / \alpha\).
\end{proof}
Let \(\mathrm{eval} \colon \widetilde X_\alpha(S, \Theta) \to S_{lm}\) or \(\mathrm{eval} \colon \widetilde X_\alpha(S, \Theta) \to \Theta^0_m\) be such that \(\stmap(g) = T_{lm}(\mathrm{eval}(g))\) or \(\stmap(g) = T_m(\mathrm{eval}(g))\) depending on the choice of \(\alpha\). Lemma \ref{elim-diag} implies that
\[[\widetilde X_\beta(\mu), g] = \prod_{\substack{i \beta + j \alpha \in \Phi\\ i, j > 0}} X_{i \beta + j \alpha}(f_{\beta \alpha i j}(\mu, \mathrm{eval}(g)))\]
for all \(g \in \widetilde X_\alpha(*)\) if \(\beta\) and \(\alpha\) are linearly independent.
We still have to prove the relations between the generators of \(\widetilde X_\alpha(S, \Theta)\). In order to do so, we consider expressions
\[\up{\prod_{\beta \in \Sigma} \widetilde X_\beta(\mu_\beta)}{\{\widetilde X_\gamma(\nu)\}_\Sigma},\]
where \(\Sigma \subseteq \Phi\) is an at most two-dimensional saturated special closed subset, \(\alpha\) is linearly independent with \(\Sigma\), \(\alpha \notin \langle \gamma \rangle\), and \(\langle \Sigma, \gamma, \alpha \rangle\) is special. Such an expression may be decomposed into a product of root elements \(\widetilde X_\beta(\nu)\) and the generators of \(\widetilde X_\alpha(S, \Theta)\) in two ways, depending on which of the extreme roots of \(\Sigma\) we take at the first step when applying (Chev). We say that the expression \textit{gives} an identity \(h_1 = h_2\) between products of such root elements. Similarly, expressions \(\up{\widetilde X_\beta(\mu)}{\{\widetilde X_\gamma(\nu_1 \dotplus \nu_2)\}_{\langle \beta \rangle}}\) \textit{give} an identity \(h_1 = h_2\) if \(\alpha \in \langle \beta, \gamma \rangle\) and at the first step we either apply (Hom) and (Chev), or replace \(\widetilde X_\gamma(\nu_1 \dotplus \nu_2)\) by \(\widetilde X_\gamma(\nu_1)\, \widetilde X_\gamma(\nu_2)\) and apply (Chev) to the result.
Note also that any generator of \(\widetilde X_\alpha(S, \Theta)\) is trivial if either of its arguments vanishes.
\begin{lemma} \label{ush-new-root}
Suppose that \(\alpha = \mathrm e_m\). Then there is a unique homomorphism \(\widetilde X_m \colon \Theta^0_m \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) such that
\[g = \widetilde X_m(\mathrm{eval}(g))\]
for all \(g \in \widetilde X_\alpha(S, \Theta)\).
\end{lemma}
\begin{proof}
Lemma \ref{elim-diag} implies that the elements \(\widetilde X^i_{-m, m}(p, a)\) lie in the center of \(\widetilde X_\alpha(S, \Theta)\). It is easy to check that
\begin{align*}
\up{\widetilde X_{-m, i}(p)}{\{\widetilde X_{im}(a + b)\}}
&\gives
\widetilde X^i_{-m, m}(p, a + b)
=
\widetilde X^i_{-m, m}(p, a)\, \widetilde X^i_{-m, m}(p, b)
; \\
\up{\widetilde X_{-m, j}(p)\, \widetilde X_{-m, i}(q)\, \widetilde X_{ji}(r)}{\{\widetilde X_{im}(a)\}}
&\gives
\widetilde X^i_{-m, m}(q + pr, a)
=
\widetilde X^i_{-m, m}(q, a)\, \widetilde X^j_{-m, m}(p, ra)
\text{ for } i \neq \pm j; \\
\up{\widetilde X_{-m, i}(p)\, \widetilde X_{jm}(-q)}{\{\widetilde X_{ij}(a)\}}
&\gives
\widetilde X^i_{-m, m}(p, aq)
=
\widetilde X^{-j}_{-m, m}(\inv q, -\inv a \inv p)
\text{ for } i \neq \pm j.
\end{align*}
From the second identity we easily get \(\widetilde X^i_{-m, m}(p + q, a) = \widetilde X^i_{-m, m}(p, a)\, \widetilde X^i_{-m, m}(q, a)\) and \(\widetilde X^i_{-m, m}(pq, a) = \widetilde X^j_{-m, m}(p, qa)\) for all \(i, j \neq \pm m\). Hence by lemma \ref{ring-pres} there is a unique homomorphism
\[\widetilde X_{-m, m} \colon S_{-m, m} \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\]
such that \(\widetilde X^k_{-m, m}(p, a) = \widetilde X_{-m, m}(pa)\) and \(\widetilde X_{-m, m}(a) = \widetilde X_{-m, m}(-\inv a)\).
It turns out that for \(i \neq \pm j\)
\begin{align*}
\up{\widetilde X_{im}(-p)}{\{\widetilde X_i(u \dotplus v)\}}
&\gives
\up i{\widetilde X}_m(u \dotplus v, p)
=
\up i{\widetilde X}_m(u, p)\, \up i{\widetilde X}_m(v, p)
; \\
\up{\widetilde X_{jm}(-r)\, \widetilde X_{-j, i}(p)}{\{\widetilde X_{ij}(a)\}}
&\gives
\up j{\widetilde X_m}(\phi(pa), r)
=
\widetilde X_{-m, m}(\inv r par)
; \\
\up{\widetilde X_{im}(-p)\, \widetilde X_{jm}(-q)\, \widetilde X_{ji}(-r)}{\{\widetilde X_j(u)\}}
&\gives
\up j{\widetilde X}_m(u, rp + q)
=
\up i{\widetilde X}_m(u \cdot r, p)\, \widetilde X_{-m, m}(\inv q \rho(u) rp)\, \up j{\widetilde X}_m(u, q).
\end{align*}
The second identity may be generalized to \(\up i{\widetilde X}_m(\phi(a), p) = \widetilde X_{-m, m}(\inv pap)\). The last identity is equivalent to \(\up j{\widetilde X}_m(u, rp) = \up i{\widetilde X}_m(u \cdot r, p)\) and \(\up j{\widetilde X}_m(u, rp + q) = \up j{\widetilde X}_m(u, rp)\, \widetilde X_{-m, m}(\inv q \rho(u) rp)\, \up j{\widetilde X}_m(u, q)\) for \(i \neq \pm j\). Hence \(\up i{\widetilde X}_m(u, p + q) = \up i{\widetilde X}_m(u, p)\, \widetilde X_{-m, m}(\inv q \rho(u) p)\, \up i{\widetilde X}_m(u, q)\) and \(\up j{\widetilde X}_m(u, pq) = \up i{\widetilde X}_m(u \cdot p, q)\) for all \(i, j \neq \pm m\). Moreover, lemma \ref{elim-diag} implies that \([g, \up i{\widetilde X}_m(u, p)] = \widetilde X_{-m, m}\bigl(\inv p \inv{\pi(u)} \pi(\mathrm{eval}(g))\bigr)\) for all \(g \in \widetilde X_\alpha(S, \Theta)\). Now lemma \ref{form-pres} gives the unique homomorphism
\[\widetilde X_m \colon \Theta^0_m \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\]
such that \(\widetilde X_m(\phi(a)) = \widetilde X_{-m, m}(a)\) and \(\up i{\widetilde X}_m(u, p) = \widetilde X_m(u \cdot p)\).
It remains to prove that \(\widetilde X_m^i(s, a) = \widetilde X_m(s \cdot a)\). This easily follows from
\begin{align*}
\up{\widetilde X_i(\dotminus s)}{\{\widetilde X_{im}(-a - b)\}}
&\gives
\widetilde X^i_m(s, a + b)
=
\widetilde X^i_m(s, a)\, \widetilde X_{-m, m}(\inv{\,b\,} \rho(s) a)\, \widetilde X^i_m(s, b)
; \\
\up{\widetilde X_j(\dotminus s)\, \widetilde X_{im}(-p)}{\{\widetilde X_{ji}(-a)\}}
&\gives
\widetilde X^j_m(s, ap)
=
\widetilde X_m(s \cdot ap)
\text{ for } i \neq \pm j. \qedhere
\end{align*}
\end{proof}
\begin{lemma} \label{sh-new-root}
Suppose that \(\alpha = \mathrm e_m - \mathrm e_l\). Then there is a unique homomorphism \(\widetilde X_{lm} \colon S_{lm} \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) such that
\[g = \widetilde X_{lm}(\mathrm{eval}(g))\]
for all \(g \in \widetilde X_\alpha(S, \Theta)\).
\end{lemma}
\begin{proof}
First of all, there is a nontrivial element of the Weyl group \(\mathrm W(\Phi)\) stabilizing \(\alpha\) and \(\Phi / \alpha\); it exchanges \(\pm \mathrm e_l\) with \(\mp \mathrm e_m\). It gives a duality between the generators of \(\widetilde X_\alpha(S, \Theta)\) as follows:
\begin{align*}
\widetilde X^i_{lm}(p, a) &\leftrightarrow \up{-i}{\widetilde X_{lm}}(\inv a, -\inv p), &
\widetilde X^{-l}_{lm}(s, a) &\leftrightarrow \up{-m}{\widetilde X_{lm}}(-\inv a, \dotminus s), \\
\widetilde X^\pi_{lm}(s, u) &\leftrightarrow \up \pi{\widetilde X_{lm}}(\dotminus u, s), &
\widetilde X^{-m}_{lm}(p, u) &\leftrightarrow \up{-l}{\widetilde X_{lm}}(\dotminus u, -\inv p).
\end{align*}
So it suffices to prove only one half of the identities between the generators. Note that
\begin{align*}
\up{\widetilde X_{i, -l}(q)\, \widetilde X_{-l}(s)\, \widetilde X_{li}(-p)}{\{\widetilde X_m(\dotminus u)\}}
&\gives
\widetilde X^\pi_{lm}(s \dotplus \phi(pq), u)
=
\widetilde X^\pi_{lm}(s, u)
; \\
\up{\widetilde X_{-m, i}(-p)\, \widetilde X_{-l}(s)}{\{\widetilde X_{im}(a)\}}
&\gives
\widetilde X^\pi_{lm}(s, \phi(pa))
=
1;
\end{align*}
in particular, \(\widetilde X^\pi_{lm}(s, \phi(a)) = \widetilde X^\pi_{lm}(\phi(p), u) = 1\). From this and lemma \ref{elim-diag} we easily obtain that \(\widetilde X_\alpha(S, \Theta)\) is abelian. Next,
\begin{align*}
\up{\widetilde X_{li}(p)\, \widetilde X_{lj}(r)\, \widetilde X_{ij}(q)}{\{\widetilde X_{jm}(a)\}}
&\gives
\widetilde X_{lm}^i(p, qa)\,
\widetilde X_{lm}^j(r, a)
=
\widetilde X_{lm}^j(pq + r, a)
\text{ for } i \neq \pm j
; \\
\up{\widetilde X_{li}(p)}{\{\widetilde X_{im}(a + b)\}}
&\gives
\widetilde X^i_{lm}(p, a + b)
=
\widetilde X^i_{lm}(p, a)\, \widetilde X^i_{lm}(p, b)
; \\
\up{\widetilde X_{-m, i}(-q)\, \widetilde X_{li}(r)\, \widetilde X_{l, -m}(p)}{\{\widetilde X_{im}(a)\}}
&\gives
\widetilde X^{-m}_{lm}(-p, \phi(qa))\,
\widetilde X^i_{lm}(pq + r, a)
=
\up{-i}{\widetilde X}_{lm}(p \inv a, \inv q)\,
\widetilde X^i_{lm}(r, a).
\end{align*}
The first identity is equivalent to
\begin{align*}
\widetilde X_{lm}^i(p, qa)
&=
\widetilde X_{lm}^j(pq, a)
; \tag{B1} \\
\widetilde X_{lm}^j(pq + r, a)
&=
\widetilde X_{lm}^j(pq, a)\,
\widetilde X_{lm}^j(r, a)
\end{align*}
for \(i \neq \pm j\) and the third one is equivalent to
\begin{align*}
\widetilde X^{-m}_{lm}(-p, \phi(qa))\,
\widetilde X^i_{lm}(pq, a)
&=
\up{-i}{\widetilde X}_{lm}(p \inv a, \inv q); \tag{B2} \\
\widetilde X^i_{lm}(pq + r, a)
&=
\widetilde X^i_{lm}(pq, a)\,
\widetilde X^i_{lm}(r, a).
\end{align*}
It follows that the maps \(\widetilde X^i_{lm}(-, =)\) are biadditive.
Now we have
\begin{align*}
\up{\widetilde X_{-l, i}(p)\, \widetilde X_{-l}(s)}{\{\widetilde X_{im}(a)\}}
&\gives
\widetilde X^{-l}_{lm}(s, pa)
=
\widetilde X^i_{lm}(\rho(s) p, a)
; \tag{B3} \\
\up{\widetilde X_{li}(p)\, \widetilde X_{-i}(s)}{\{\widetilde X_{-i, m}(a)\}}
&\gives
\widetilde X^i_{lm}(p, \rho(s) a)
=
\widetilde X^{-i}_{lm}(p \rho(s), a)
; \tag{B4} \\
\up{\widetilde X_{i, -l}(q)\, \widetilde X_{li}(-p)}{\{\widetilde X_{-l, m}(a)\}}
&\gives
\widetilde X^{-l}_{lm}(\phi(pq), a)
=
\widetilde X^i_{lm}(p, qa)\,
\widetilde X^{-i}_{lm}(-\inv q, \inv pa)
; \tag{B5} \\
\up{\widetilde X_{-l}(s)\, \widetilde X_i(t)}{\{\widetilde X_{im}(-a)\}}
&\gives
\widetilde X^\pi_{lm}(s, t \cdot a)
=
\widetilde X^i_{lm}(\inv{\pi(s)} \pi(t), a)
. \tag{B6}
\end{align*}
Using (B6), we easily obtain
\[
\up{\widetilde X_{i, -l}(-p)\, \widetilde X_i(t)}{\{\widetilde X_{-l, m}(a)\}}
\gives
\widetilde X^{-l}_{lm}(t \cdot p, a)
=
\widetilde X^i_{lm}(\inv p \rho(t), pa)
. \tag{B7}
\]
We are ready to construct a homomorphism \(\widetilde X_{lm}\) such that \(\widetilde X_{lm}^i(p, a) = \widetilde X_{lm}(pa)\) using lemma \ref{ring-pres}. If \(\ell \geq 4\), then it exists by (B1). Otherwise \(H\) is firm and we may apply lemma \ref{associator} to
\[\{p \otimes q \otimes a\}_{ij} = \widetilde X_{lm}^j(pq, a) \widetilde X_{lm}^i(p, qa)^{-1}.\]
Namely, (B5) and (B7) imply (A2); (B3) and (B5) imply (A3); and (B5) implies (A4).
It remains to express the remaining generators via \(\widetilde X_{lm}\). For \(\widetilde X^\pi_{lm}\), \(\widetilde X^{-l}_{lm}\), and \(\widetilde X^{-m}_{lm}\) this follows from (B3), (B6), and
\begin{align*}
\up{\widetilde X_{-l}(s)}{\{\widetilde X_m(\dotminus u \dotminus v)\}}
&\gives
\widetilde X^\pi_{lm}(s, v \dotplus u)
=
\widetilde X^\pi_{lm}(s, u)\, \widetilde X^\pi_{lm}(s, v)
; \\
\up{\widetilde X_{-l}(s)}{\{\widetilde X_{-l, m}(a + b)\}}
&\gives
\widetilde X^{-l}_{lm}(s, a + b)
=
\widetilde X^{-l}_{lm}(s, a)\, \widetilde X^{-l}_{lm}(s, b)
; \\
\up{\widetilde X_{li}(p)\, \widetilde X_{l, -m}(-r)\, \widetilde X_{i, -m}(-q)}{\{\widetilde X_m(\dotminus u)\}}
&\gives
\widetilde X^{-m}_{lm}(pq + r, u)
=
\widetilde X^i_{lm}(p, q \rho(u))\,
\widetilde X^{-m}_{lm}(r, u).
\end{align*}
The dual generators may be expressed via \(\widetilde X_{lm}\) using (B2).
\end{proof}
The elements \(\widetilde X_\alpha(\mu)\) constructed by lemmas \ref{ush-new-root} and \ref{sh-new-root} satisfy all the missing Steinberg relations in \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). Also,
\[\widetilde X_\alpha(\mu)\, g\, \widetilde X_\alpha(\mu)^{-1} = \up{T_\alpha(\mu)}g \tag{*}\]
for any \(g\); this is easy to check for \(g = Z_\beta(\lambda, \nu)\) by expressing \(\widetilde X_\alpha(\mu)\) via \(Z_\gamma(\mu_1, \mu_2)\), where \(\gamma\) and \(\beta\) are linearly independent.
\section{Presentation of relative odd unitary Steinberg groups} \label{sec-pres}
We are ready to construct a conjugacy calculus with respect to \(H\) on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). For a special closed subset \(\Sigma \subseteq \Phi / \alpha\) and \(g \in \stunit(R, \Delta; \Sigma)\) let
\begin{align*}
\up g{\{\widetilde X_\beta(\mu)\}_{\pi_\alpha^{-1}(\Sigma)}} &= \up g{\{\widetilde X_{\pi_\alpha(\beta)}(\mu)\}} \text{ for } \beta \notin \mathbb R \alpha; \\
\up g{\{\widetilde X_\beta(\mu)\}_{\pi_\alpha^{-1}(\Sigma)}} &= [g, T_\beta(\mu)]\, \widetilde X_\beta(\mu) \text{ for } \beta \in \mathbb R \alpha
\end{align*}
be the elements of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\), where \(T_\beta(\mu)\) naturally acts on \(\stunit(S \rtimes R, \Theta \rtimes \Delta; \Sigma)\).
\begin{lemma} \label{conj-comm}
Let \(\beta\) and \(\gamma\) be linearly independent roots of \(\Phi\) such that \(\alpha\) lies in the relative interior of the angle spanned by \(\beta\) and \(\gamma\). Then
\[\up{g X_\beta(\mu)}{\{X_\gamma(\nu)\}_{\pi_\alpha(\beta)}} = \prod_{\substack{i \beta + j \gamma \in \Phi; \\ i \geq 0, j > 0}} \up g {\{X_{i \beta + j \gamma}(f_{\beta \gamma ij}(\mu, \nu))\}_{\pi_\alpha(\beta)}}\]
for all \(g \in \stunit(R, \Delta; \pi_\alpha(\beta))\) and some order of the factors on the right-hand side.
\end{lemma}
\begin{proof}
We evaluate the expressions
\begin{align*}
\up{X_{lj}(p)\, X_{li}(q)\, X_{mi}(q')\, X_{ji}(r)}{\{\widetilde X_{im}(a)\}} &\text{ for } \alpha = \mathrm e_m - \mathrm e_l; \\
\up{X_{i, -l}(p)\, X_{-m}(s)\, X_{-l, m}(q)\, X_{-l}(s')\, X_{mi}(r)\, X_{li}(r')\, X_i(t)}{\{\widetilde X_m(u)\}} &\text{ for } \alpha = \mathrm e_m - \mathrm e_l; \\
\up{X_{-m, j}(p)\, X_j(s)\, X_{-i, j}(q)\, X_{-m, i}(r)\, X_i(t)\, X_{mi}(r')\, X_{ji}(r'')}{\{\widetilde X_{im}(u)\}} &\text{ for } \alpha = \mathrm e_m; \\
\up{X_{li}(p)\, X_{-m}(s)\, X_{-l, m}(q)\, X_{-l}(s')\, X_{i, -l}(r)}{\{\widetilde X_{-l, m}(a)\}} &\text{ for } \alpha = \mathrm e_m - \mathrm e_l;
\end{align*}
in two ways as products of the elements \(\widetilde X_\beta\) and the terms from the statement assuming that the indices and their opposites are distinct and non-zero. During the calculations we put the factors \(\widetilde X_{im}(a)\), \(\widetilde X_m(u)\), \(\widetilde X_{-l, m}(a)\) inside the curly brackets in the rightmost position. In this way we obtain the cases of the required identity modulo an element from \(\stunit(S, \Theta; \Sigma')\) for a special closed subset \(\Sigma'\), but such an element must be trivial by considering the images in \(\unit(S, \Theta)\).
The remaining two cases follow from
\begin{align*}
\up{X_l(u)\, X_{-m, l}(q)\, X_m(v)}{\{}&\widetilde X_{l, -m}(a)\} \equiv
\up{X_{im}(p)\, X_l(u)\, X_{-m, l}(q)\, X_m(v)\, X_l(w)}{\{\widetilde X_{l, -m}(a)\}} \\
&= \up{X_l(u)\, X_{-m, l}(q)\, X_m(v \dotplus w \cdot (-p))\, X_{-i, m}(-\rho(w) p)\, X_i(w)}{\{\widetilde X_{l, -i}(a \inv p)\, \widetilde X_{l, -m}(a)\}} \\
&= \up{X_{-i, m}(-\rho(w) p)\, X_l(u)\, X_{-m, l}(q)\, X_m(v \dotplus w \cdot (-p))}{\{\widetilde X_{l, -i}(a \inv p)\, \widetilde X_{-l}(w \cdot (-p \inv a))\, \widetilde X_{li}(-a \inv p \inv{\rho(w)})\}} \\
&\cdot \up{X_l(u)\, X_{-m, l}(q)\, X_m(v \dotplus w \cdot (-p))}{\{\widetilde X_{li}(-a \inv p \inv{\rho(w)})\, \widetilde X_{l, -m}(a)\}} \\
&\equiv \up{X_l(u)\, X_{-m, l}(q)\, X_m(v \dotplus w \cdot (-p))}{\{\widetilde X_{-l}(w \cdot (-p \inv a))\, \widetilde X_{l, -m}(a)\}}
\end{align*}
in \(\stunit(S, \Theta; \langle \mathrm e_l - \mathrm e_i, -\mathrm e_i - \mathrm e_l, \mathrm e_m - \mathrm e_l, \mathrm e_m + \mathrm e_l \rangle) \backslash \overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\) for \(\alpha = \mathrm e_m - \mathrm e_l\) and
\begin{align*}
\up{X_{mi}(p)\, X_i(u)\, X_{-m, i}(q)}{\{\widetilde X_{-i}(v)\}} &\equiv
\up{X_{mi}(p)\, X_i(u)\, X_{-m, i}(q)\, X_{ji}(s)\, X_{-m, j}(r)}{\{\widetilde X_{-i}(v)\}} \\
&= \up{X_{mi}(p)\, X_i(u)\, X_{-m, i}(q - rs)\, X_{-m, j}(r)}{\{\widetilde X_{-i}(v)\, \widetilde X_{i, -j}(-\rho(v) \inv s)\, \widetilde X_{-j}(v \cdot \inv s)\}} \\
&\equiv \up{X_{mi}(p)\, X_i(u)\, X_{-m, i}(q - rs)}{\{\widetilde X_{-i}(v)\, \widetilde X_{im}(-\rho(v) \inv s \inv r)\}}
\end{align*}
in \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha) / \stunit(S, \Theta; \langle -\mathrm e_j - \mathrm e_i, -\mathrm e_m - \mathrm e_j, \mathrm e_i, \mathrm e_m \rangle)\) for \(\alpha = \mathrm e_m\).
\end{proof}
\begin{theorem} \label{root-elim}
Let \(\delta \colon (S, \Theta) \to (R, \Delta)\) be a crossed module of odd form rings, where \((R, \Delta)\) has an orthogonal hyperbolic family of rank \(\ell\). Suppose that
\begin{itemize}
\item either \(\ell \geq 4\),
\item or \(\ell = 3\) and the orthogonal hyperbolic family is strong.
\end{itemize}
Then \(F_\alpha \colon \overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline \stunit(R, \Delta; S, \Theta; \Phi)\) is an isomorphism for any \(\alpha \in \Phi\).
\end{theorem}
\begin{proof}
We construct the inverse homomorphism \(G_\alpha\) by providing a conjugacy calculus with respect to \(H\) on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). Let us construct the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\) by induction on \(\Sigma\) ordered by inclusion. If \(\Sigma = \varnothing\), then the required homomorphism \(\stunit(S, \Theta; \Phi) \to \overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\) exists by lemmas \ref{ush-new-root} and \ref{sh-new-root}. Below we use the conjugacy calculus on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\) and the already known properties of \(\widetilde X_\alpha(\mu)\) without explicit references.
If \(\Sigma\) does not intersect \(\mathbb R \alpha\), then we already have the map if \(h\) is a root element. In the subcase where \(\langle \alpha, \Sigma \rangle\) is two-dimensional, the maps \(\up g{\{-\}_\Sigma}\) are homomorphisms by lemma \ref{conj-comm} and the easy observation
\begin{align*}
\up g{\{\widetilde X_\alpha(\mu)\}_{\pi_\alpha^{-1}(\langle -\beta \rangle)}}\, \up g{\{F_\alpha(\widetilde X_\beta(\nu))\}_{\pi_\alpha^{-1}(\langle -\beta \rangle)}}\, \up g{\{\widetilde X_\alpha(\mu)\}^{-1}_{\pi_\alpha^{-1}(\langle -\beta \rangle)}} &= \up{[g, X_\alpha(\delta(\mu))]\, \up{T_\alpha(\mu)}g}{\{F_\alpha(\widetilde X_\beta(\up{T_\alpha(\mu)} \nu))\}_{\pi_\alpha^{-1}(\langle -\beta \rangle)}} \\
&= \up g{\{F_\alpha(\widetilde X_\beta(\up{T_\alpha(\mu)}\nu))\}_{\pi_\alpha^{-1}(\langle -\beta \rangle)}}
\end{align*}
for \(\beta \in \Phi / \alpha\). They also satisfy (Chev) by lemma \ref{conj-comm}. In general we prove (Hom) and (Chev) by an easy induction, since we may always apply (Chev) to some extreme root of \(\Sigma\).
Now suppose that \(\alpha\) is an extreme root of \(\Sigma\). We define the map \(\up{(-)}{\{=\}_\Sigma}\) by
\[\up{X_\alpha(\mu) g}{\{h\}_\Sigma} = \up{T_\alpha(\mu)}{(\up g{\{h\}_{\Sigma \setminus \langle \alpha \rangle}})}\]
for \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \alpha \rangle)\). It clearly satisfies (Hom) and (Chev).
Finally, we show that there is a unique \(\up{(-)}{\{=\}_\Sigma}\) by induction on the smallest face \(\Gamma\) of \(\mathbb R_+ \Sigma\) containing \(\alpha\). The cases where such a face has dimension \(0\) or \(1\) are already known. In order to construct \(\up{(-)}{\{\widetilde X_\beta(\lambda)\}_\Sigma}\) we take an extreme root \(\gamma \in \Gamma \cap \Sigma\) not antiparallel to \(\beta\) and let
\[\up{g\, X_\gamma(\mu)}{\{\widetilde X_\beta(\lambda)\}_\Sigma} = \up g{\bigl\{\prod_{\substack{i \gamma + j \beta \in \Phi\\ i > 0, j \geq 0}} \widetilde X_{i \gamma + j \beta}(f_{\gamma \beta i j}(\mu,
\lambda))\bigr\}_{\Sigma \setminus \langle \gamma \rangle}}\]
for \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \gamma \rangle)\). It is clearly independent of \(\gamma\) unless \(\Gamma\) is two-dimensional and both \(\alpha\) and \(-\beta\) lie in its relative interior. In this case let \(\gamma_1\) and \(\gamma_2\) be the extreme roots of \(\Sigma\) and \(\widetilde X_\beta(\lambda) = \prod_t \up{X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}}\) be a decomposition for some root \(\delta\) not in \(\mathbb R \Sigma\). We may choose such a decomposition with the additional property that \(\langle \Sigma, \delta \rangle\) is special with the face \(\Gamma\). Abusing notation, for any \(g \in \stunit(R, \Delta; \Sigma \setminus (\langle \gamma_1 \rangle \cup \langle \gamma_2 \rangle))\) we have
\begin{align*}
\up{g\, X_{\gamma_1}(\mu)}{\{\up{\widetilde X_{\gamma_2}(\nu)}{\widetilde X_\beta(\lambda)}\}_{\Sigma \setminus \langle \gamma_2 \rangle}} &= \prod_t \up{g\, X_{\gamma_1}(\mu)\, \up{X_{\gamma_2}(\nu)}{X_\delta(\kappa_t)}}{\{\up{\widetilde X_{\gamma_2}(\nu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_2 \rangle}} \\
&= \prod_t \up{g\, \up{X_{\gamma_1}(\mu)\, X_{\gamma_2}(\nu)}{X_\delta(\kappa_t)}}{\{\up{\widetilde X_{\gamma_1}(\mu)\, \widetilde X_{\gamma_2}(\nu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_1, \gamma_2 \rangle}} \\
&= \prod_t \up{g\, [X_{\gamma_1}(\mu), X_{\gamma_2}(\nu)]\, \widetilde X_{\gamma_2}(\nu)\, \up{X_{\gamma_1}(\mu)}{X_\delta(\kappa_t)}}{\{\up{\widetilde X_{\gamma_1}(\mu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_1 \rangle}} \\
&= \up{g\, [X_{\gamma_1}(\mu), X_{\gamma_2}(\nu)]\, \widetilde X_{\gamma_2}(\nu)}{\{\up{\widetilde X_{\gamma_1}(\mu)}{\widetilde X_\beta(\lambda)}\}_{\Sigma \setminus \langle \gamma_1 \rangle}},
\end{align*}
so the maps \(\up{(-)}{\{\widetilde X_\beta(\lambda)\}_\Sigma}\) are well-defined.
If the dimension of \(\Gamma\) is at least \(3\), then it is easy to see that \(\up{(-)}{\{=\}_\Sigma}\) satisfy (Hom) and (Chev). Now suppose that \(\Gamma\) is two-dimensional. Clearly, \(\up g{\{-\}_\Sigma}\) preserve the Steinberg relations for \([\widetilde X_\beta(\mu), \widetilde X_\gamma(\nu)]\) if at least one of \(\beta\) and \(\gamma\) does not lie in \(\mathbb R \Gamma\). It follows that (Chev) holds in the form
\[\up{g X_\beta(\mu)}{\{X_\gamma(\nu)\}_\Sigma} = \prod_{\substack{i \beta + j \gamma \in \Phi\\ i > 0, j \geq 0}} \up g{\{X_{i \beta + j \gamma}(f_{\beta \gamma i j}(\mu, \nu))\}_\Sigma}\]
for \(\beta \in \Sigma \setminus \mathbb R \Gamma\).
We prove that \(\up g{\{-\}_\Sigma}\) preserves the remaining Steinberg relations for \([\widetilde X_\beta(\mu), \widetilde X_\gamma(\nu)]\) with \(\beta, \gamma \in \mathbb R \Gamma\) by induction on the angle between \(\beta\) and \(\gamma\); the case of parallel roots is trivial. Again, let \(\widetilde X_\gamma(\nu) = \prod_t \up{X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}}\) be a decomposition for a root \(\delta\) not in \(\mathbb R \Sigma\). Applying (Chev), we may also assume that \(\Sigma \subseteq \Gamma\). Then
\begin{align*}
\up{\up g{\{\widetilde X_\beta(\mu)\}_\Sigma}}{(\up g{\{\widetilde X_\gamma(\nu)\}_\Sigma})} &= \prod_t \up{\up g{\{\widetilde X_\beta(\mu)\}_\Sigma}}{(\up{g\, X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}_{\langle \Sigma, \delta \rangle}})} \\
&= \prod_t \up{g\, X_\delta(\kappa_t)}{\{\up{\widetilde X_\delta(\kappa_t)^{-1}\, \widetilde X_\beta(\mu)\, \widetilde X_\delta(\kappa_t)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\}_{\langle \Sigma, \delta \rangle}} \\
&= \prod_t \up g{\{\up{\widetilde X_\beta(\mu)\, \widetilde X_\delta(\kappa_t)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\}_{\langle \Sigma, \delta \rangle}} = \up g{\{\up{\widetilde X_\beta(\mu)}{\widetilde X_\gamma(\nu)}\}_\Sigma}.
\end{align*}
Now it is easy to see that \(\up{(-)}{\{=\}_\Sigma}\) satisfy all the cases of (Hom), (Sub), and (Chev). Let us show that they satisfy (XMod) in the form
\[\widetilde X_\gamma(\mu)\, \up g{\{\widetilde X_\beta(\nu)\}_\Sigma}\, \widetilde X_\gamma(\mu)^{-1} = \up{X_\gamma(\delta(\mu))\, g}{\{\widetilde X_\beta(\nu)\}_\Sigma}\]
by induction on \(\Sigma\). If there is an extreme root \(\delta \in \Sigma\), \(\delta \neq \gamma\), not antiparallel to \(\beta\), then we use (Chev) and the induction hypothesis. If \(\Sigma = \langle \gamma \rangle\), then the identity follows from the definitions. Finally, suppose that \(\Sigma = \langle \gamma, -\beta \rangle\) is two-dimensional. Then
\[\widetilde X_\gamma(\mu)\, \up g {\{\widetilde X_\beta(\nu)\}_\Sigma}\, \widetilde X_\gamma(\mu)^{-1} = \up{\widetilde X_\gamma(\mu)\, \up g{\{\widetilde X_\gamma(\mu)\}_\Sigma^{-1}}}{(\up g{\{\up{\widetilde X_\gamma(\mu)}{\widetilde X_\beta(\nu)}\}_\Sigma})} = \up{X_\gamma(\delta(\mu))\, g}{\{\widetilde X_\beta(\nu)\}_\Sigma}\]
for any \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \gamma \rangle)\) by (Chev) and the induction hypothesis.
It remains to check (Conj) in the form
\[\widetilde Z_\beta(\mu, \nu)\, \up{F_\beta(g)}{\{F_\beta(h)\}_{\pi_\beta^{-1}(\Sigma)}}\, \widetilde Z_\beta(\mu, \nu)^{-1} = \up{F_\beta(\up fg)}{\{F_\beta(\up fh)\}_{\pi_\beta^{-1}(\Sigma)}}\]
for \(f = Z_\beta(\mu, \nu) \in \unit(S, \Theta)\). This is easy for \(\beta \in \mathbb R \alpha\), so we may assume that \(\beta\) and \(\alpha\) are linearly independent. Then
\[\widetilde Z_\beta(\mu, \nu)\, \up{F_{\langle \alpha, \beta \rangle}(g')}{\{F_{\langle \alpha, \beta \rangle}(h')\}}\, \widetilde Z_\beta(\mu, \nu)^{-1} = \up{F_{\langle \alpha, \beta \rangle}(\up f{g'})}{\{F_{\langle \alpha, \beta \rangle}(\up f{h'})\}}\]
by (Conj) applied to the conjugacy calculus with respect to \(H / \alpha\), and any \(\up{F_\beta(g)}{\{F_\beta(h)\}}\) may be expressed in terms of \(\up{F_{\langle \alpha, \beta \rangle}(g')}{\{F_{\langle \alpha, \beta \rangle}(h')\}}\).
Now we have group homomorphisms \(F_\alpha\) and \(G_\alpha\). By proposition \ref{elim-sur} the map \(F_\alpha\) is surjective and by construction \(G_\alpha \circ F_\alpha\) is the identity, so they are mutually inverse.
\end{proof}
\begin{theorem} \label{pres-stu}
Let \(\delta \colon (S, \Theta) \to (R, \Delta)\) be a crossed module of odd form rings, where \((R, \Delta)\) has an orthogonal hyperbolic family of rank \(\ell\). Suppose that
\begin{itemize}
\item either \(\ell \geq 4\),
\item or \(\ell = 3\) and the orthogonal hyperbolic family is strong.
\end{itemize}
Then \(\overline \stunit(R, \Delta; S, \Theta) \to \stunit(R, \Delta; S, \Theta)\) is an isomorphism. In particular, the relative Steinberg group has the explicit presentation from proposition \ref{identities}. Moreover, under the assumption \(\ell \geq 3\) this homomorphism is surjective.
\end{theorem}
\begin{proof}
First of all we prove the surjectivity of
\[u \colon \overline \stunit(R, \Delta; S, \Theta) \to \stunit(R, \Delta; S, \Theta)\]
for \(\ell \geq 3\) and of
\[\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha) \to \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\]
for any root \(\alpha\) and \(\ell \geq 3\) under the additional assumption that the orthogonal hyperbolic family is strong or \(\alpha\) is ultrashort. It suffices to check that the image is invariant under the action of the corresponding absolute Steinberg group. But it is invariant under the action of any fixed generator by proposition \ref{elim-sur}.
Now we assume that \(\ell \geq 4\) or \(\ell = 3\) and the orthogonal hyperbolic family is strong. Let us construct an action of \(\stunit(R, \Delta)\) on \(G = \overline \stunit(R, \Delta; S, \Theta)\). For any \(\alpha \in \Phi\) an element \(X_\alpha(\mu)\) gives the canonical automorphism of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\), so by theorem \ref{root-elim} it gives an automorphism of \(G\). We have to check that these automorphisms satisfy the Steinberg relations. Clearly, \(X_\alpha(\mu \dotplus \nu)\) gives the composition of the automorphisms associated with \(X_\alpha(\mu)\) and \(X_\alpha(\nu)\). If \(\alpha\) and \(\beta\) are linearly independent roots, then the automorphisms induced by the formal products \([X_\alpha(\mu), X_\beta(\nu)]\) and \(\prod_{\substack{i \alpha + j \beta \in \Phi \\ i, j > 0}} X_{i \alpha + j \beta}(f_{\alpha \beta i j}(\mu, \nu))\) coincide on the image of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \langle \alpha, \beta \rangle)\), so it remains to apply proposition \ref{elim-sur}.
Now it is easy to construct an \(\stunit(R, \Delta)\)-equivariant homomorphism
\[v \colon \stunit(R, \Delta; S, \Theta) \to \overline \stunit(R, \Delta; S, \Theta),\, X_\alpha(\mu) \mapsto X_\alpha(\mu).\]
We already know that \(u\) is surjective and clearly \(v \circ u\) is the identity, so \(u\) is an isomorphism.
\end{proof}
\section{Doubly laced Steinberg groups} \label{pairs-type}
In this and the next section \(\Phi\) is one of the root systems \(\mathsf B_\ell\), \(\mathsf C_\ell\), \(\mathsf F_4\). In order to define relative Steinberg groups of type \(\Phi\) over commutative rings with respect to Abe's admissible pairs, it is useful to consider Steinberg groups of type \(\Phi\) over pairs \((K, L)\), where \(K\) parametrizes the short root elements and \(L\) parametrizes the long root ones.
Let us say that \((K, L)\) is a \textit{pair of type} \(\mathsf B\) if
\begin{itemize}
\item \(L\) is a unital commutative ring, \(K\) is an \(L\)-module;
\item there is a classical quadratic form \(s \colon K \to L\), i.e. \(s(kl) = s(k) l^2\) and the expression \(s(k \mid k') = s(k + k') - s(k) - s(k')\) is \(L\)-bilinear.
\end{itemize}
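To make these axioms concrete, here is a small illustrative example of our own (not used in the sequel): take \(L = \mathbb Z\) and \(K = \mathbb Z[i]\) regarded as an \(L\)-module, with \(s\) the norm form \(s(a + bi) = a^2 + b^2\). The following Python sketch checks that \(s(kl) = s(k) l^2\) and that the polarization \(s(k \mid k')\) is \(L\)-bilinear on a range of inputs.

```python
import itertools

def s(k):
    # norm form on Z[i]; k = (a, b) encodes a + bi
    a, b = k
    return a * a + b * b

def scale(k, l):
    # the Z-module structure on K = Z[i]
    return (k[0] * l, k[1] * l)

def add(k, kp):
    return (k[0] + kp[0], k[1] + kp[1])

def polar(k, kp):
    # the polarization s(k | k') = s(k + k') - s(k) - s(k')
    return s(add(k, kp)) - s(k) - s(kp)

ks = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
for k, kp, kpp in itertools.product(ks, repeat=3):
    # additivity of the polarization in the first argument
    assert polar(add(k, kp), kpp) == polar(k, kpp) + polar(kp, kpp)
    for l in range(-2, 3):
        # s is quadratic over L and the polarization is L-linear
        assert s(scale(k, l)) == s(k) * l * l
        assert polar(scale(k, l), kp) == polar(k, kp) * l
print("type B axioms verified for (Z[i], Z)")
```

Since \(s(k \mid k')\) is symmetric, additivity and \(L\)-linearity in the first argument already give bilinearity.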
Next, \((K, L)\) is a \textit{pair of type} \(\mathsf C\) if
\begin{itemize}
\item \(K\) is a unital commutative ring;
\item \(L\) is an abelian group;
\item there are additive maps \(d \colon K \to L\) and \(u \colon L \to K\);
\item there is a map \(L \times K \to L,\, (l, k) \mapsto l \cdot k\);
\item \((l + l') \cdot k = l \cdot k + l' \cdot k\), \(l \cdot (k + k') = l \cdot k + d(kk' u(l)) + l \cdot k'\);
\item \(u(d(k)) = 2k\), \(u(l \cdot k) = u(l) k^2\), \(d(u(l)) = 2l\), \(d(k) \cdot k' = d(k{k'}^2)\);
\item \(l \cdot 1 = l\), \((l \cdot k) \cdot k' = l \cdot kk'\).
\end{itemize}
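As a sanity check of these axioms, consider the choice \((K, L) = (\mathbb Z, \mathbb Z)\) with \(d(k) = 2k\), \(u(l) = l\), and \(l \cdot k = l k^2\). The Python sketch below, an addition of ours, verifies all the listed identities on a range of integers.

```python
import itertools

# the pair (K, L) = (Z, Z): d(k) = 2k, u(l) = l, l . k = l k^2
def d(k):
    return 2 * k

def u(l):
    return l

def dot(l, k):
    return l * k * k

R = range(-3, 4)
for l, lp, k, kp in itertools.product(R, repeat=4):
    # additivity in l and the twisted additivity in k
    assert dot(l + lp, k) == dot(l, k) + dot(lp, k)
    assert dot(l, k + kp) == dot(l, k) + d(k * kp * u(l)) + dot(l, kp)
    # compatibility of d and u with the action
    assert u(d(k)) == 2 * k and d(u(l)) == 2 * l
    assert u(dot(l, k)) == u(l) * k * k
    assert dot(d(k), kp) == d(k * kp * kp)
    # unitality and multiplicativity of the action
    assert dot(l, 1) == l and dot(dot(l, k), kp) == dot(l, k * kp)
print("type C axioms verified for (Z, Z)")
```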
Finally, \((K, L)\) is a \textit{pair of type} \(\mathsf F\) if
\begin{itemize}
\item \(K\) and \(L\) are unital commutative rings;
\item there is a unital ring homomorphism \(u \colon L \to K\);
\item there are maps \(d \colon K \to L\) and \(s \colon K \to L\);
\item \(d(k + k') = d(k) + d(k')\), \(d(u(l)) = 2l\), \(u(d(k)) = 2k\), \(d(k u(l)) = d(k) l\);
\item \(s(k + k') = s(k) + d(kk') + s(k')\), \(s(kk') = s(k) s(k')\), \(s(u(l)) = l^2\), \(u(s(k)) = k^2\).
\end{itemize}
If \((K, L)\) is a pair of type \(\mathsf C\) or \(\mathsf F\), we have a map \(K \times L \to K,\, (k, l) \mapsto kl = k u(l)\). If \((K, L)\) is a pair of type \(\mathsf B\) or \(\mathsf F\), then there is a map \(L \times K \to L,\, (l, k) \mapsto l \cdot k = l s(k)\). Using these additional operations, any pair \((K, L)\) of type \(\mathsf F\) is also a pair of both types \(\mathsf B\) and \(\mathsf C\). For any unital commutative ring \(K\) the pair \((K, K)\) with \(u(k) = k\), \(d(k) = 2k\), \(s(k) = k^2\) is of type \(\mathsf F\).
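Verifying the last claim is a direct computation; for instance, the axioms of a pair of type \(\mathsf F\) unfold for \((K, K)\) to elementary identities in \(K\):

```latex
\begin{align*}
s(k + k') &= (k + k')^2 = k^2 + 2kk' + k'^2 = s(k) + d(kk') + s(k'); \\
s(kk') &= (kk')^2 = s(k)\, s(k'); \qquad
s(u(l)) = l^2; \qquad u(s(k)) = k^2; \\
d(k u(l)) &= 2kl = d(k)\, l; \qquad d(u(l)) = 2l; \qquad u(d(k)) = 2k.
\end{align*}
```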
The \textit{Steinberg group} of type \(\Phi\) over a pair \((K, L)\) of the corresponding type is the abstract group \(\stlin(\Phi; K, L)\) with the generators \(x_\alpha(k)\) for short \(\alpha \in \Phi\), \(k \in K\); \(x_\beta(l)\) for long \(\beta \in \Phi\), \(l \in L\); and the relations
\begin{align*}
x_\alpha(p)\, x_\alpha(q) &= x_\alpha(p + q); \\
[x_\alpha(p), x_\beta(q)] &= 1 \text{ if } \alpha + \beta \notin \Phi \cup \{0\}; \\
[x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} pq) \text{ if } \alpha + \beta \in \Phi \text{ and } |\alpha| = |\beta| = |\alpha + \beta|; \\
[x_\alpha(p), x_\beta(q)] &= \textstyle x_{\alpha + \beta}(\frac{N_{\alpha \beta}}2 s(p \mid q)) \text{ if } \alpha + \beta \in \Phi \text{ and } |\alpha| = |\beta| < |\alpha + \beta|; \\
[x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} pq)\, x_{2\alpha + \beta}(N_{\alpha \beta}^{21} q \cdot p) \text{ if } \alpha + \beta, 2\alpha + \beta \in \Phi;\\
[x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} qp)\, x_{\alpha + 2\beta}(N_{\alpha \beta}^{12} p \cdot q) \text{ if } \alpha + \beta, \alpha + 2\beta \in \Phi.
\end{align*}
Here \(N_{\alpha \beta}\), \(N_{\alpha \beta}^{21}\), \(N_{\alpha \beta}^{12}\) are the structure constants. In the case of the pair \((K, K)\) there is a canonical homomorphism \(\stmap \colon \stlin(\Phi, K) = \stlin(\Phi; K, K) \to \group^{\mathrm{sc}}(\Phi, K)\) to the simply connected Chevalley group over \(K\) of type \(\Phi\).
In order to apply the results on odd unitary groups, we need a construction of odd form rings from pairs of types \(\mathsf B\) and \(\mathsf C\). If \((K, L)\) is a pair of type \(\mathsf B\) and \(\ell \geq 0\), then we consider the special odd form ring \((R, \Delta) = \ofaorth(2\ell + 1; K, L)\), where
\begin{itemize}
\item \(R = (K \otimes_L K) e_{00} \oplus \bigoplus_{1 \leq |i| \leq \ell} (K e_{i0} \oplus K e_{0i}) \oplus \bigoplus_{1 \leq |i|, |j| \leq \ell} L e_{ij}\);
\item \(\inv{x e_{ij}} = x e_{-j, -i}\) for \(i \neq 0\) or \(j \neq 0\), \(\inv{(x \otimes y) e_{00}} = (y \otimes x) e_{00}\);
\item \((x e_{ij}) (y e_{kl}) = 0\) for \(j \neq k\);
\item \((x e_{ij}) (y e_{jk}) = xy e_{ik}\) for \(j \neq 0\) if at least one of \(i\) and \(k\) is non-zero;
\item \((x e_{0j}) (y e_{j0}) = (x \otimes y) e_{00}\) for \(j \neq 0\);
\item \((x e_{i0}) (y e_{0j}) = s(x \mid y) e_{ij}\) for \(i, j \neq 0\);
\item \((x e_{i0}) ((y \otimes z) e_{00}) = s(x \mid y) z e_{i0}\) for \(i \neq 0\);
\item \(((x \otimes y) e_{00}) ((z \otimes w) e_{00}) = (x \otimes s(y \mid z) w) e_{00}\);
\item \(\Delta\) is the subgroup of \(\Heis(R)\) generated by \(\phi(R)\), \(((x \otimes y) e_{00}, -(s(x) y \otimes y) e_{00})\), \((x e_{0i}, -s(x) e_{-i, i})\), \((x e_{i0}, 0)\), and \((x e_{ij}, 0)\).
\end{itemize}
Clearly, \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell\) and the corresponding odd unitary Steinberg group is naturally isomorphic to \(\stlin(\mathsf B_\ell; K, L)\). The Steinberg relations in \(\stlin(\mathsf B_\ell; K, L)\) and \(\stunit(R, \Delta)\) are the same since \(\ofaorth(2\ell + 1; K, K)\) has the unitary group \(\sorth(2\ell + 1, K) \times (\mathbb Z / 2 \mathbb Z)(K)\) by \cite{ClassicOFA} for every unital commutative ring \(K\). Of course, \(\stlin(\mathsf B_\ell; K, L)\) may also be constructed by the module \(L^\ell \oplus K \oplus L^\ell\) with the quadratic form \(q(x_{-\ell}, \ldots, x_{-1}, k, x_1, \ldots, x_\ell) = \sum_{i = 1}^\ell x_{-i} x_i + s(k)\), but the corresponding odd form ring and the orthogonal group are not functorial in \((K, L)\).
If \((K, L)\) is a pair of type \(\mathsf C\), then let \((R, \Delta) = \ofasymp(2\ell; K, L)\), where
\begin{itemize}
\item \(R = \bigoplus_{1 \leq |i|, |j| \leq \ell} K e_{ij}\);
\item \(\inv{x e_{ij}} = \varepsilon_i \varepsilon_j x e_{-j, -i}\), \((x e_{ij}) (y e_{kl}) = 0\) for \(j \neq k\), \((x e_{ij}) (y e_{jl}) = xy e_{il}\);
\item \(\Delta = \bigoplus^\cdot_{1 \leq |i|, |j| \leq \ell; i + j > 0} \phi(K e_{ij}) \dotoplus \bigoplus_{1 \leq |i| \leq \ell}^\cdot L v_i \dotoplus \bigoplus_{1 \leq |i|, |j| \leq \ell}^\cdot q_i \cdot K e_{ij}\);
\item \(x v_i \dotplus y v_i = (x + y) v_i\), \(q_i \cdot x e_{ij} \dotplus q_i \cdot y e_{ij} = q_i \cdot (x + y) e_{ij}\);
\item \(\phi(x e_{-i, i}) = d(x) v_i\), \(\pi(x v_i) = 0\), \(\rho(x v_i) = u(x) e_{-i, i}\);
\item \((x v_i) \cdot (y e_{jk}) = \dot 0\) for \(i \neq j\), \((x v_i) \cdot (y e_{ik}) = \varepsilon_i \varepsilon_k (x \cdot y) v_k\);
\item \(\pi(q_i \cdot x e_{ij}) = x e_{ij}\), \(\rho(q_i \cdot x e_{ij}) = 0\);
\item \((q_i \cdot x e_{ij}) \cdot (y e_{kl}) = \dot 0\) for \(j \neq k\), \((q_i \cdot x e_{ij}) \cdot (y e_{jk}) = q_i \cdot xy e_{ik}\).
\end{itemize}
Again, \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell\) and its odd unitary Steinberg group is naturally isomorphic to \(\stlin(\mathsf C_\ell; K, L)\). The Steinberg relations in these two Steinberg groups coincide since \(\ofasymp(2\ell; K, K)\) is the odd form ring constructed by the split symplectic module over \(K\) for any unital commutative ring \(K\), so its unitary group is \(\symp(2\ell, K)\).
We do not construct an analogue of \(\mathrm G(\mathsf F_4, K)\) for pairs of type \(\mathsf F\) and do not prove that the product map
\[\prod_{\text{short } \alpha \in \Pi} K \times \prod_{\text{long } \beta \in \Pi} L \to \stlin(\mathsf F_4; K, L),\, (p_\alpha)_{\alpha \in \Pi} \mapsto \prod_{\alpha \in \Pi} X_\alpha(p_\alpha)\]
is injective for a system of positive roots \(\Pi \subseteq \Phi\). Such claims are not required in the proof of our main result.
\section{Relative doubly laced Steinberg groups}
In the simply-laced case \cite{RelStLin} relative Steinberg groups are parametrized by the root system and a \textit{crossed module of commutative rings} \(\delta \colon \mathfrak a \to K\), where \(K\) is a unital commutative ring, \(\mathfrak a\) is a \(K\)-module, \(\delta\) is a homomorphism of \(K\)-modules, and \(a \delta(a') = \delta(a) a'\) for all \(a, a' \in \mathfrak a\). In the doubly-laced case we may construct semi-abelian categories of pairs of all three types by omitting the unitality conditions in the definitions, but in this approach we have to add the condition that the action in the definition of a crossed module is unital.
Instead we say that \((\mathfrak a, \mathfrak b)\) is a \textit{precrossed module} over a pair \((K, L)\) of a given type if there is a reflexive graph \(((K, L), (K', L'), p_1, p_2, d)\) in the category of pairs of the given type, where \((\mathfrak a, \mathfrak b) = \Ker(p_2)\). This may be written as an explicit family of operations between the sets \(\mathfrak a\), \(\mathfrak b\), \(K\), \(L\) satisfying certain axioms; in particular, \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is a pair of homomorphisms of abelian groups induced by \(p_1\). A precrossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is called a \textit{crossed module} if the corresponding reflexive graph has a structure of an internal category (necessarily unique); this may be described as additional axioms on the operations (an analogue of the Peiffer identities). It is easy to see that crossed submodules of \(\id \colon (K, L) \to (K, L)\) are precisely Abe's admissible pairs.
For a crossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) of pairs of a given type the \textit{relative Steinberg group} is
\[\stlin(\Phi; K, L; \mathfrak a, \mathfrak b) = \Ker(p_{2*}) / [\Ker(p_{1*}), \Ker(p_{2*})],\]
where \(p_{i*} \colon \stlin(\Phi; \mathfrak a \rtimes K, \mathfrak b \rtimes L) \to \stlin(\Phi; K, L)\) are the induced homomorphisms. As in the odd unitary case and the simply laced case, this is the crossed module over \(\stlin(\Phi; K, L)\) with the generators \(x_\alpha(a)\) for short \(\alpha \in \Phi\), \(a \in \mathfrak a\) and \(x_\beta(b)\) for long \(\beta \in \Phi\), \(b \in \mathfrak b\) satisfying the Steinberg relations, \(\delta(x_\alpha(a)) = x_\alpha(\delta(a))\) for any root \(\alpha\) and \(a \in \mathfrak a \cup \mathfrak b\), and
\[\up{x_\alpha(p)}{x_\beta(a)} = \prod_{\substack{i \alpha + j \beta \in \Phi \\ i, j > 0}} x_{i \alpha + j \beta}(f_{\alpha \beta i j}(p, a))\]
for \(\alpha \neq -\beta\), \(p \in K \cup L\), \(a \in \mathfrak a \cup \mathfrak b\).
If \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is a crossed module of pairs of type \(\mathsf C\) and \(\ell \geq 0\), then \(\delta \colon \ofasymp(2\ell; \mathfrak a, \mathfrak b) \to \ofasymp(2\ell; K, L)\) is a crossed module of odd form rings, where
\[\ofasymp(2\ell; \mathfrak a, \mathfrak b) = \Ker\bigl(p_2 \colon \ofasymp(2\ell; \mathfrak a \rtimes K, \mathfrak b \rtimes L) \to \ofasymp(2\ell; K, L)\bigr).\]
Clearly,
\[\ofasymp(2\ell; \mathfrak a, \mathfrak b) = \Bigl( \bigoplus_{1 \leq |i|, |j| \leq \ell} \mathfrak a e_{ij},
\bigoplus^\cdot_{\substack{1 \leq |i|, |j| \leq \ell \\ i + j > 0}} \phi(\mathfrak a e_{ij}) \dotoplus \bigoplus^\cdot_{1 \leq |i| \leq \ell} \mathfrak b v_i \dotoplus \bigoplus^\cdot_{1 \leq |i|, |j| \leq \ell} q_i \cdot \mathfrak a e_{ij} \Bigr),\]
so we may apply theorem \ref{pres-stu} for Chevalley groups of type \(\mathsf C_\ell\).
For pairs of type \(\mathsf B\) the construction \(\ofaorth(2\ell + 1; -, =)\) does not preserve fiber products, so we have to modify it a bit. Take a crossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) of pairs of type \(\mathsf B\) and consider the odd form rings \((T, \Xi) = \ofaorth(2\ell + 1; \mathfrak a \rtimes K, \mathfrak b \rtimes L)\), \((R, \Delta) = \ofaorth(2\ell + 1; K, L)\) forming a reflexive graph. Let \((\widetilde T, \widetilde \Xi)\) be the special odd form factor ring of \((T, \Xi)\) by the odd form ideal \((I, \Gamma)\), where \(I \leq T\) is the subgroup generated by \((a \otimes a' - a \otimes \delta(a')) e_{00}\) and \((a \otimes a' - \delta(a) \otimes a') e_{00}\) for \(a, a' \in \mathfrak a\). It is easy to check that \(I\) is an involution invariant ideal of \(T\), so \((\widetilde T, \widetilde \Xi)\) is well-defined, and the homomorphisms \(p_i \colon (T, \Xi) \to (R, \Delta)\) factor through \((\widetilde T, \widetilde \Xi)\). Moreover, the precrossed module \((S, \Theta) = \Ker\bigl(p_2 \colon (\widetilde T, \widetilde \Xi) \to (R, \Delta)\bigr)\) over \((R, \Delta)\) satisfies the Peiffer relations and
\[S = X e_{00} \oplus \bigoplus_{1 \leq |i| \leq \ell} (\mathfrak a e_{i0} \oplus \mathfrak a e_{0i}) \oplus \bigoplus_{1 \leq |i|, |j| \leq \ell} \mathfrak b e_{ij}\]
for some group \(X\), so we may also apply theorem \ref{pres-stu} for Chevalley groups of type \(\mathsf B_\ell\).
If \(\alpha\), \(\beta\), \(\alpha - \beta\) are short roots, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) denotes the abelian group \(\mathfrak a \mathrm e_\alpha \oplus \mathfrak a \mathrm e_\beta\). The groups \(X_{\alpha - \beta}(K)\) and \(X_{\beta - \alpha}(K)\) naturally act on \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) in such a way that the homomorphism
\[V_{\alpha \beta}(\mathfrak a, \mathfrak b) \to \stlin(\Phi; \mathfrak a, \mathfrak b), x \mathrm e_\alpha \oplus y \mathrm e_\beta \mapsto X_\alpha(x)\, X_\beta(y)\]
is equivariant (in the case of \(\mathsf F_4\) we consider \(X_{\pm(\alpha - \beta)}(K)\) as abstract groups, not as their images in the Steinberg group). Similarly, if \(\alpha\), \(\beta\), \(\alpha - \beta\) are long roots, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak b \mathrm e_\alpha \oplus \mathfrak b \mathrm e_\beta\) is a representation of \(X_{\alpha - \beta}(L)\) and \(X_{\beta - \alpha}(L)\). If \(\alpha\) and \(\beta\) are long and \((\alpha + \beta)/2\) is short, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak b \mathrm e_\alpha \oplus \mathfrak a \mathrm e_{(\alpha + \beta)/2} \oplus \mathfrak b \mathrm e_\beta\) is a representation of \(X_{(\alpha - \beta)/2}(K)\) and \(X_{(\beta - \alpha)/2}(K)\). Finally, if \(\alpha\) and \(\beta\) are short and \(\alpha + \beta\) is long, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak a \mathrm e_\alpha \dotoplus \mathfrak b \mathrm e_{\alpha + \beta} \dotoplus \mathfrak a \mathrm e_\beta\) is a \(2\)-step nilpotent group with the group operation
\[\textstyle (x \mathrm e_\alpha \dotoplus y \mathrm e_{\alpha + \beta} \dotoplus z \mathrm e_\beta) \dotplus (x' \mathrm e_\alpha \dotoplus y' \mathrm e_{\alpha + \beta} \dotoplus z' \mathrm e_\beta) = (x + x') \mathrm e_\alpha \dotoplus \bigl(y - \frac{N_{\alpha \beta}}2 s(z \mid x') + z'\bigr) \mathrm e_{\alpha + \beta} \dotoplus (z + z') \mathrm e_\beta\]
and the action of \(X_{\alpha - \beta}(L)\) and \(X_{\beta - \alpha}(L)\) such that the homomorphism
\[V_{\alpha \beta}(\mathfrak a, \mathfrak b) \to \stlin(\Phi; \mathfrak a, \mathfrak b), x \mathrm e_\alpha \dotoplus y \mathrm e_{\alpha + \beta} \dotoplus z \mathrm e_\beta \mapsto X_\alpha(x)\, X_{\alpha + \beta}(y)\, X_\beta(z)\]
is equivariant.
We are ready to construct a presentation of \(G = \stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\). Let \(Z_\alpha(x, p) = \up{X_{-\alpha}(p)}{X_\alpha(x)} \in G\) for \(x \in \mathfrak a\), \(p \in K\) if \(\alpha\) is short and \(x \in \mathfrak b\), \(p \in L\) if \(\alpha\) is long. If \(V_{\alpha \beta}(K, L)\) is defined, then there are also natural elements \(Z_{\alpha \beta}(u, s) \in G\) for \(u \in V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) and \(s \in V_{-\alpha, -\beta}(K, L)\).
\begin{theorem}\label{pres-stphi}
Let \(\Phi\) be one of the root systems \(\mathsf B_\ell\), \(\mathsf C_\ell\), or \(\mathsf F_4\) for \(\ell \geq 3\); \((K, L)\) be a pair of the corresponding type; \((\mathfrak a, \mathfrak b)\) be a crossed module over \((K, L)\). Then \(\stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\) as an abstract group is generated by \(Z_\alpha(x, p)\) and \(Z_{\alpha \beta}(u, s)\) with the relations
\begin{itemize}
\item[(Sym)]
\(Z_{\alpha \beta}(u, s) = Z_{\beta \alpha}(u, s)\) if we identify \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) with \(V_{\beta \alpha}(\mathfrak a, \mathfrak b)\);
\item[(Add)]
\begin{enumerate}
\item \(Z_\alpha(x, p)\, Z_\alpha(y, p) = Z_\alpha(x + y, p)\),
\item \(Z_{\alpha \beta}(u, s)\, Z_{\alpha \beta}(v, s) = Z_{\alpha \beta}(u \dotplus v, s)\);
\end{enumerate}
\item[(Comm)]
\begin{enumerate}
\item \([Z_\alpha(x, p), Z_\beta(y, q)] = 1\) for \(\alpha \perp \beta\) and \(\alpha + \beta \notin \Phi\),
\item \(\up{Z_\gamma(x, p)}{Z_{\alpha \beta}(u, s)} = Z_{\alpha \beta}\bigl(Z_\gamma(\delta(x), p)\, u, Z_\gamma(\delta(x), p)\, s\bigr)\) for \(\gamma \in \mathbb R (\alpha - \beta)\);
\end{enumerate}
\item[(Simp)] \(Z_\alpha(x, p) = Z_{\alpha \beta}(x \mathrm e_\alpha, p \mathrm e_{-\alpha})\);
\item[(HW)]
\begin{enumerate}
\item \(Z_{\alpha, \alpha + \beta}\bigl(X_{-\beta}(r)\, x \mathrm e_{\alpha + \beta}, p \mathrm e_{-\alpha} \oplus q \mathrm e_{-\alpha - \beta}\bigr) = Z_{\alpha + \beta, \beta}\bigl(X_{-\alpha}(p)\, x \mathrm e_{\alpha + \beta}, X_{-\alpha}(p)\, (q \mathrm e_{-\alpha - \beta} \oplus r \mathrm e_{-\beta})\bigr)\), where \(\alpha\), \(\beta\) is a basis of a root subsystem of type \(\mathsf A_2\),
\item \(Z_{\alpha, \alpha + \beta}\bigl(X_{-\beta}(s)\, (x \mathrm e_{2\alpha + \beta} \dotoplus y \mathrm e_{\alpha + \beta}), p \mathrm e_{-\alpha} \dotoplus q \mathrm e_{-2 \alpha - \beta} \dotoplus r \mathrm e_{-\alpha - \beta}\bigr) = Z_{2 \alpha + \beta, \beta}\bigl(X_{-\alpha}(p)\, (x \mathrm e_{2 \alpha + \beta} \oplus y \mathrm e_{\alpha + \beta}), X_{-\alpha}(p)\, (q \mathrm e_{-2\alpha - \beta} \oplus r \mathrm e_{-\alpha - \beta} \oplus s \mathrm e_{-\beta})\bigr)\), where \(\alpha\), \(\beta\) is a basis of a root subsystem of type \(\mathsf B_2\) and \(\alpha\) is short;
\end{enumerate}
\item[(Delta)] \(Z_\alpha(x, \delta(y) + p) = Z_{-\alpha}(y, 0)\, Z_\alpha(x, p)\, Z_{-\alpha}(-y, 0)\).
\end{itemize}
\end{theorem}
\begin{proof}
If \(\Phi\) is of type \(\mathsf B_\ell\) or \(\mathsf C_\ell\), then the claim directly follows from theorem \ref{pres-stu}, so we may assume that \(\Phi\) is of type \(\mathsf F_4\). Let \(G\) be the group with the presentation from the statement; it is generated by the elements \(Z_\alpha(a, p)\) satisfying only the relations involving roots from root subsystems of rank \(2\). First of all, we construct a natural action of \(\stlin(\Phi; K, L)\) on \(G\). Notice that any three-dimensional root subsystem \(\Psi \subseteq \Phi\) such that \(\Psi = \mathbb R \Psi \cap \Phi\) is of type \(\mathsf B_3\), \(\mathsf C_3\), or \(\mathsf A_1 \times \mathsf A_2\).
Let \(g = X_\alpha(a)\) be a root element in \(\stlin(\Phi; K, L)\) and \(h = Z_\beta(b, p)\) be a generator of \(G\). If \(\alpha\) and \(\beta\) are linearly independent, then \(\up gh\) may be defined directly as \(h\) itself or an appropriate \(Z_{\gamma_1 \gamma_2}(u, s)\). Otherwise we take a root subsystem \(\alpha \in \Psi \subseteq \Phi\) of rank \(2\) of type \(\mathsf A_2\) or \(\mathsf B_2\), express \(h\) in terms of \(Z_\gamma(c, q)\) for \(\gamma \in \Psi \setminus \mathbb R \alpha\), and apply the above construction for \(\up g{Z_\gamma(c, q)}\). The resulting element of \(G\) is independent of the choices of \(\Psi\) and the decomposition of \(h\) since we already know the theorem in the cases \(\mathsf B_3\) and \(\mathsf C_3\).
We have to check that \(\up g{(-)}\) preserves the relations between the generators. Let \(\Psi\) be the intersection of \(\Phi\) with the span of the roots in a relation, \(\Psi' = (\mathbb R \Psi + \mathbb R \alpha) \cap \Phi\). Consider the possible cases:
\begin{itemize}
\item If \(\Psi'\) is of rank \(< 3\) or has one of the types \(\mathsf B_3\), \(\mathsf C_3\), then the result follows from the cases \(\mathsf B_3\) and \(\mathsf C_3\) of the theorem.
\item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\) and \(\alpha\) lies in the first factor (so \(\Psi\) is the second factor), then \(g\) acts trivially on the generators with the roots from \(\Psi\) and there is nothing to prove.
\item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\), \(\alpha\) lies in the second factor, and \(\Psi\) is of type \(\mathsf A_1 \times \mathsf A_1\), then we have to check that \([\up g{Z_\beta(b, p)}, \up g{Z_\gamma(c, q)}] = 1\), where \(\beta\) lies in the first factor and \(\gamma\) lies in the second factor. But \(\up g{Z_\beta(b, p)} = Z_\beta(b, p)\) commutes with \(\up g{Z_\gamma(c, q)}\), since the latter is a product of various \(Z_{\gamma'}(c', q')\) with \(\gamma'\) from the second factor of \(\Psi'\).
\end{itemize}
Now let us check that the resulting automorphisms of \(G\) corresponding to root elements satisfy the Steinberg relations when applied to a fixed \(Z_\beta(b, p)\). Let \(\Psi\) be the intersection of \(\Phi\) with the span of the roots from such a relation and \(\Psi' = (\mathbb R \Psi + \mathbb R \beta) \cap \Phi\). There are the following cases:
\begin{itemize}
\item If \(\Psi'\) is of rank \(< 3\) or has one of the types \(\mathsf B_3\), \(\mathsf C_3\), then the result follows from the cases \(\mathsf B_3\) and \(\mathsf C_3\) of the theorem.
\item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\) and \(\beta\) lies in the first factor (so \(\Psi\) is the second factor), then both sides of the relation trivially act on \(Z_\beta(b, p)\) and there is nothing to prove.
\item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\), \(\beta\) lies in the second factor, and \(\Psi\) is of type \(\mathsf A_1 \times \mathsf A_1\), then we have to check that \(\up{X_\alpha(a)\, X_\gamma(c)}{Z_\beta(b, p)} = \up{X_\gamma(c)\, X_\alpha(a)}{Z_\beta(b, p)}\), where \(\alpha\) lies in the first factor and \(\gamma\) lies in the second factor. But both sides coincide with \(\up{X_\gamma(c)}{Z_\beta(b, p)}\), which is a product of various \(Z_{\beta'}(b', p')\) with \(\beta'\) from the second factor of \(\Psi'\).
\end{itemize}
Now consider the homomorphism
\[u \colon G \to \stlin(\Phi; K, L; \mathfrak a, \mathfrak b), Z_\alpha(a, p) \mapsto \up{X_{-\alpha}(p)}{X_\alpha(a)}.\]
By construction, it is \(\stlin(\Phi; K, L)\)-equivariant, so it is surjective. The \(\stlin(\Phi; K, L)\)-equivariant homomorphism
\[v \colon \stlin(\Phi; K, L; \mathfrak a, \mathfrak b) \to G, X_\alpha(a) \mapsto X_\alpha(a)\]
is clearly well-defined and surjective. It remains to notice that \(v \circ u\) is the identity.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Video has become a highly significant form of visual data, and the amount of video content uploaded to various online platforms has increased dramatically in recent years. In this regard, efficient ways of handling video have become increasingly important. One popular solution is to summarize videos into shorter ones without missing semantically important frames. Over the past few decades, many studies~\cite{song2015tvsum,ngo2003automatic,lu2013story,kim2014reconstructing,khosla2013large} have attempted to solve this problem. Recently, Zhang~\textit{et al.}~showed promising results using deep neural networks, and a lot of follow-up work has been conducted in areas of supervised~\cite{zhang2016summary,zhang2016video,zhao2017hierarchical,zhao2018hsa,wei2018video} and unsupervised learning~\cite{Mahasseni2017VAEGAN,zhou2017deep}.
Supervised learning methods~\cite{zhang2016summary,zhang2016video,zhao2017hierarchical,zhao2018hsa,wei2018video} utilize ground truth labels that represent importance scores of each frame to train deep neural networks. Since human-annotated data is used, semantic features are faithfully learned. However, labeling many video frames is expensive, and overfitting problems frequently occur when there is insufficient labeled data. These limitations can be mitigated by using unsupervised learning methods as in~\cite{Mahasseni2017VAEGAN,zhou2017deep}. However, since there is no human labeling in this setting, a method for supervising the network needs to be appropriately designed.
Our baseline method~\cite{Mahasseni2017VAEGAN} uses a variational autoencoder (VAE)~\cite{kingma2013auto} and generative adversarial networks (GANs)~\cite{goodfellow2014generative} to learn video summarization without human labels. The key idea is that a good summary should reconstruct the original video seamlessly. The features of each input frame, obtained by a convolutional neural network (CNN), are multiplied with the predicted importance scores. Then, these features are passed to a generator to restore the original features. The discriminator is trained to distinguish between the generated (restored) features and the original ones.
Although it is fair to say that a good summary should represent and restore the original video well, the original features can also be restored well with uniformly distributed frame-level importance scores. This trivial solution makes it difficult to learn discriminative features for finding key-shots. Our approach overcomes this problem. As the output scores flatten, their variance drops sharply. Based on this mathematically obvious fact, we propose a simple yet powerful way to increase the variance of the scores: the variance loss is defined as the reciprocal of the variance of the predicted scores.
In addition, to learn more discriminative features, we propose the Chunk and Stride Network (CSNet), which simultaneously utilizes local (chunk) and global (stride) temporal views of the video. CSNet splits the input features of a video into two streams (chunk and stride), passes both through a bidirectional long short-term memory (LSTM), and merges them back to estimate the final scores. The chunk and stride streams alleviate the difficulty of feature learning for long-length videos.
Finally, we develop an attention mechanism to capture dynamic scene transitions, which are highly related to key-shots. In order to implement this module, we use temporal difference between frame-level CNN features. If a scene changes only slightly, the CNN features of the adjacent frames will have similar values. In contrast, at scene transitions in videos, CNN features in the adjacent frames will differ a lot. The attention module is used in conjunction with CSNet as shown in~\figref{fig:overview}, and helps to learn discriminative features by considering information about dynamic scene transitions.
We evaluate our network by conducting extensive experiments on SumMe~\cite{gygli2014creating} and TVSum~\cite{song2015tvsum} datasets. YouTube and OVP~\cite{de2011vsumm} datasets are used for the training process in augmented and transfer settings. We also conducted an ablation study to analyze the contribution of each component of our design.
Qualitative results show the selected key-shots and demonstrate the validity of difference attention. Similar to previous methods, we randomly split the test set and the training set five times. To make the comparison fair, we ensure that no video in the test sets is duplicated or skipped.
Our overall contributions are as follows. (i) We propose variance loss, which effectively solves the flat output problem experienced by some of the previous methods. This approach significantly improves performance, especially in unsupervised learning. (ii) We construct the CSNet architecture to detect highlights in local (chunk) and global (stride) temporal views of the video. We also introduce a difference attention approach to capture dynamic scene transitions, which are highly related to key-shots. (iii) We analyze our methods with ablation studies and achieve state-of-the-art performance on the SumMe and TVSum datasets.
\section{Related Work}
Given an input video, video summarization aims to produce a shortened version
that highlights the representative video frames. Various
prior work has proposed solutions to this problem, including video time-lapse~\cite{joshi2015real,kopf2014first,poleg2015egosampling}, synopsis~\cite{pritch2008nonchronological}, montage~\cite{kang2006space,sun2014salient} and storyboards~\cite{gong2014diverse,gygli2014creating,gygli2015video,lee2012discovering,liu2010hierarchical,yang2015unsupervised,gong2014diverse}. Our work is most closely related to storyboards, selecting some important pieces of information to summarize key events present in the entire video.
Early work on video summarization heavily relied on hand-crafted features and unsupervised learning. Such work defined various heuristics to represent the importance of the frames~\cite{song2015tvsum,ngo2003automatic,lu2013story,kim2014reconstructing,khosla2013large} and used the scores to select representative frames to build the summary video. Recent work has explored supervised learning approaches for this problem, using training data consisting of videos and their ground-truth summaries generated by humans. These supervised learning methods outperform early unsupervised approaches, since they can better learn the high-level semantic knowledge that humans use to generate summaries.
Recently, deep learning based methods~\cite{zhang2016video,Mahasseni2017VAEGAN,sharghi2017query} have gained attention for video summarization tasks. The most recent studies adopt recurrent models such as LSTMs, based on the intuition that using LSTM enables the capture of long-range temporal dependencies among video frames which are critical for effective summary generation.
Zhang~\textit{et al.}~\cite{zhang2016video} introduced two LSTMs to model the variable range dependency in video summarization. One LSTM was used for video frame sequences in the forward direction, while the other LSTM was used for the backward direction. In addition, a determinantal point process model~\cite{gong2014diverse,zhang2016summary} was adopted for further improvement of diversity in the subset selection. Mahasseni~\textit{et al.}~\cite{Mahasseni2017VAEGAN} proposed an unsupervised method based on a generative adversarial framework. The model consists of a summarizer and a discriminator. The summarizer was a variational autoencoder LSTM, which first summarized the video and then reconstructed the output. The discriminator was another LSTM that learned to distinguish between the reconstruction and the input video.
In this work, we focus on unsupervised video summarization, and adopt LSTM following previous work. However, we empirically found that these LSTM-based models have inherent limitations for unsupervised video summarization. In particular, two main issues exist: first, ineffective feature learning due to the flat distribution of output importance scores, and second, training difficulty with long-length video inputs. To address these problems, we propose a simple yet effective regularization loss term called variance loss, and design a novel two-stream network named the Chunk and Stride Network. We experimentally verify that our final model considerably outperforms state-of-the-art unsupervised video summarization methods. The following section gives a detailed description of our method.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{./architecture_v3.pdf}
\caption{The overall architecture of our network. (a) The chunk and stride network (CSNet) splits input features $x_t$ into $c_t$ and $s_t$ by chunk and stride methods. The orange, yellow, green, and blue colors represent how the chunk and stride streams divide the input features $x_t$. The divided features are combined in the original order after going through the LSTM and FC separately. (b) Difference attention is an approach for modeling dynamic scene transitions at different temporal strides. $d^1_t$, $d^2_t$, $d^4_t$ are differences of the input features $x_t$ with temporal strides of 1, 2, and 4. The difference features are summed after the FC, which is denoted as the difference attention $d_t$, and summed again with $c'_t$ and $s'_t$, respectively.}
\label{fig:overview}
\end{figure*}
\section{Proposed Approach}
In this section, we introduce our methods for unsupervised video summarization. Our methods are based on a variational autoencoder (VAE) and generative adversarial networks (GANs) as in~\cite{Mahasseni2017VAEGAN}. We first deal with discriminative feature learning under a VAE-GAN framework by using the variance loss. Then, a chunk and stride network (CSNet) is proposed to overcome the limitation of most existing methods, namely the difficulty of learning from long-length videos. CSNet resolves this problem by taking a local (chunk) and a global (stride) view of the input features. Finally, to consider which parts of the video are important, we use the difference in CNN features between adjacent or more widely spaced video frames as attention, assuming that dynamics play a large role in selecting key-shots. \figref{fig:overview} shows the overall structure of our proposed approach.
\subsection{Baseline Architecture}
We adopt~\cite{Mahasseni2017VAEGAN} as our baseline, using a variational autoencoder (VAE) and generative adversarial networks (GANs) to perform unsupervised video summarization. The key idea is that a good summary should reconstruct the original video seamlessly, and a GAN framework is adopted to reconstruct the original video from summarized key-shots.
In the model, an input video is first forwarded through the backbone CNN (i.e., GoogLeNet), Bi-LSTM, and FC layers (encoder LSTM) to output the importance scores of each frame. The scores are multiplied with the input features to select key-frames. The original features are then reconstructed from those frames using the decoder LSTM. Finally, a discriminator distinguishes whether the features come from an original input video or from reconstructed ones. By following Mahasseni~\textit{et al.}{}'s overall VAE-GAN concept, we inherit its advantages while developing our own ideas that significantly overcome the existing limitations.
\subsection{Variance Loss}
The main assumption of our baseline~\cite{Mahasseni2017VAEGAN} is ``well-picked key-shots can reconstruct the original image well". However, for reconstructing the original image, it is better to keep all frames instead of selecting only a few key-shots. In other words, mode collapse occurs when the encoder LSTM attempts to keep all frames, which is a trivial solution. This results in flat importance output scores for each frame, which is undesirable. To prevent the output scores from being a flat distribution, we propose a variance loss as follows:
\begin{eqnarray}
\pazocal{L}_{V}(\textbf{\textit{p}}) = \frac{1}{\hat{V}(\textbf{\textit{p}}) + \textit{eps}},
\label{equ:var_loss}
\end{eqnarray}
where $\textbf{\textit{p}}= \left \{ p_{t} : t = 1,..., T \right \}$, \textit{eps} is a small constant for numerical stability, and $\hat{V}(\cdot)$ is the variance operator. $p_t$ is an output importance score at time $t$, and $T$ is the number of frames. By enforcing~\eqnref{equ:var_loss}, the network makes the differences in the output scores across frames larger, thus avoiding a trivial solution (flat distribution).
In addition, in order to deal with outliers, we extend the variance loss in~\eqnref{equ:var_loss} by utilizing the median value of the scores. The variance is computed as follows:
\begin{eqnarray}
\hat{V}_{median}(\textbf{\textit{p}}) = \frac{\sum \limits_{t=1}^{T} {|p_t - med(\textbf{\textit{p}})|^2}}{T},
\label{equ:var_loss_med}
\end{eqnarray}
where $med(\cdot)$ is the median operator. As has been reported for many years~\cite{Pratt1975medianfilter,Huang1979median,Zhang2014Wmedian}, the median value is usually more robust to outliers than the mean value. We call this modified function variance loss for the rest of the paper, and use it for all experiments.
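As a rough illustration, the median-based variance loss can be sketched in NumPy as follows (the sequence length, the \textit{eps} value of $10^{-4}$, and the test score sequences are our own assumptions for illustration, not values from the paper):

```python
import numpy as np

def variance_loss(scores, eps=1e-4):
    """Reciprocal of the median-centered variance of frame-level scores p_t."""
    med = np.median(scores)
    var_median = np.mean(np.abs(scores - med) ** 2)
    return 1.0 / (var_median + eps)

flat = np.full(100, 0.5)              # trivial "keep every frame" solution
varied = np.linspace(0.0, 1.0, 100)   # discriminative, spread-out scores
# The flat distribution is penalized far more heavily than the varied one.
assert variance_loss(flat) > variance_loss(varied)
```

Because the loss is the reciprocal of the variance, a nearly flat score distribution yields a very large loss early in training, which quickly pushes the network away from the trivial solution.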
\subsection{Chunk and Stride Network}
To handle long-length videos, which are difficult for LSTM-based methods, we propose a chunk and stride network (CSNet) as a way of jointly considering a local and a global view of the input features. For each frame of the input video $\textbf{\textit{v}}= \left \{ v_{t} : t = 1,..., T \right \}$, we obtain the deep features $\textbf{\textit{x}}= \left \{ x_{t} : t = 1,..., T \right \}$ from the CNN, namely the GoogLeNet pool-5 layer.
As shown in~\figref{fig:overview} (a), CSNet takes a long video feature $\textbf{\textit{x}}$ as an input and divides it into smaller sequences in two ways. The first way divides $\textbf{\textit{x}}$ into chunks of successive frames, and the other samples it at a uniform interval. The streams are denoted as $\textbf{\textit{$c_{m}$}}$ and $\textbf{\textit{$s_{m}$}}$, where $\left \{m = 1,..., M \right \}$ and $M$ is the number of divisions. Specifically, $\textbf{\textit{$c_{m}$}}$ and $\textbf{\textit{$s_{m}$}}$ can be written as follows:
\begin{eqnarray}
c_{m} = \left \{ x_{i} : i = (m-1)\cdot(\frac{T}{M}) +1,..., m \cdot (\frac{T}{M}) \right \},\\
s_{m} = \left \{ x_{i} : i = m, m+k, m+2k, ...., m+T-M \right \},
\label{equ:csnet1}
\end{eqnarray}
where $k$ is the interval such that $k=M$. The two different sequences, $c_{m}$ and $s_{m}$, pass through the chunk and stride streams separately. Each stream consists of a bidirectional LSTM (Bi-LSTM) and a fully connected (FC) layer, which predicts importance scores at the end. Then, the outputs are reshaped into $\textbf{\textit{$c_{m}'$}}$ and $\textbf{\textit{$s_{m}'$}}$, restoring the original frame order. Then, $\textbf{\textit{$c_{m}'$}}$ and $\textbf{\textit{$s_{m}'$}}$ are added with the difference attention $d_t$. Details of the attention process are described in the next section. The combined features are then passed through a sigmoid function to predict the final scores $p_t$ as follows:
\begin{eqnarray}
p^1_t = \textit{sigmoid}\Big(c'_{t} + d_t\Big),\\
p^2_t = \textit{sigmoid}\Big(s'_{t} + d_t\Big),\\
p_t = W[p^1_t + p^2_t].
\label{equ:csnet3}
\end{eqnarray}
where $W$ denotes learnable parameters for the weighted sum of $p^1_t$ and $p^2_t$, which allows for a flexible fusion of the local (chunk) and global (stride) views of the input features.
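To make the chunk and stride partitioning concrete, the following sketch illustrates how the two streams divide the frame indices and how the per-frame outputs are scattered back into the original order. The values $T=8$, $M=4$ and the NumPy placeholder standing in for the shared Bi-LSTM + FC are illustrative assumptions:

```python
import numpy as np

def chunk_indices(T, M):
    """Chunk stream: M blocks of T/M consecutive frames (T divisible by M assumed)."""
    step = T // M
    return [list(range(m * step, (m + 1) * step)) for m in range(M)]

def stride_indices(T, M):
    """Stride stream: M interleaved subsequences sampled at interval k = M."""
    return [list(range(m, T, M)) for m in range(M)]

T, M = 8, 4
print(chunk_indices(T, M))   # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(stride_indices(T, M))  # [[0, 4], [1, 5], [2, 6], [3, 7]]

# After each subsequence is processed by the shared Bi-LSTM + FC, scattering the
# per-frame outputs back by their indices restores the original temporal order.
x = np.arange(T, dtype=float)        # stand-in for per-frame features
out = np.empty(T)
for sub in stride_indices(T, M):
    out[sub] = x[sub]                # placeholder for the per-subsequence network
assert (out == x).all()
```

Each chunk sees a short local window, while each stride subsequence spans the entire video at a coarse temporal resolution, which is what lets the two streams capture local and global views with the same LSTM weights.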
\subsection{Difference Attention}
In this section, we introduce the attention module, exploiting dynamic information as guidance for the video summarization. In practice, we use the differences in CNN features of adjacent frames. The feature difference softly encodes temporally different dynamic information which can be used as a signal for deciding whether a certain frame is relatively meaningful or not.
As shown in~\figref{fig:overview} (b), the differences $d^1_t$, $d^2_t$, $d^4_t$ between $x_{t+k}$ and $x_t$ pass through FC layers ($d'^1_t$, $d'^2_t$, $d'^4_t$) and are merged to become $d_t$, which is then added to both $c'_{t}$ and $s'_{t}$. The proposed attention module is represented as follows:
\begin{eqnarray}
d^1_t = |x_{t+1} - x_t|,\\
d^2_t = |x_{t+2} - x_t|,\\
d^4_t = |x_{t+4} - x_t|,\\
d_t = d'^1_t + d'^2_t + d'^4_t.
\label{equ:diff}
\end{eqnarray}
While the difference between the features of adjacent frames models the simplest dynamics, a wider temporal stride captures relatively global dynamics between scenes.
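A minimal sketch of the difference attention on a toy sequence with a single abrupt scene change; the FC projections are replaced by identity maps, and the zero-padding at the sequence end is our own assumption, since the boundary handling is not specified above:

```python
import numpy as np

def frame_differences(x, stride):
    """|x_{t+stride} - x_t| per frame; the last `stride` frames are zero-padded (assumed)."""
    T = x.shape[0]
    d = np.zeros_like(x)
    d[:T - stride] = np.abs(x[stride:] - x[:T - stride])
    return d

def difference_attention(x):
    # Each d^k would pass through its own FC layer before the summation;
    # the projections are omitted here (identity) for illustration.
    return sum(frame_differences(x, k) for k in (1, 2, 4))

# Toy sequence of 10 frames with 3-dim features and a scene change at t = 5.
x = np.concatenate([np.zeros((5, 3)), np.ones((5, 3))])
att = difference_attention(x).sum(axis=1)
# Attention peaks around the transition (t = 4) and vanishes on static segments.
assert att[4] > att[0] and att[4] > att[9]
```

On static segments all three differences are zero, while frames near the transition receive large values, matching the intuition that dynamic scene transitions are highly related to key-shots.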
\begin{figure*}[t]
\begin{center}
\def0.9{1.1}
\begin{tabular}{@{}c@{\hskip 0.01\linewidth}c@{}}
\includegraphics[width=0.48\linewidth]{./q11_v2.jpg}&
\includegraphics[width=0.48\linewidth]{./q12_v2.jpg}\\
{(a) Video 1} & {(b) Video 15}\\
\includegraphics[width=0.48\linewidth]{./q13_v2.jpg}&
\includegraphics[width=0.48\linewidth]{./q14_v2.jpg}\\
{(c) Video 18} & {(d) Video 41}
\end{tabular}
\end{center}
\caption{Visualization of the key-shots selected in various videos of the TVSum dataset. The light blue bars represent the labeled scores. Our key-shots are painted in red, green, blue, and yellow in (a)--(d), respectively.}
\label{fig:visual}
\end{figure*}
\section{Experiments}
\subsection{Datasets}
We evaluate our approach on two benchmark datasets, SumMe~\cite{gygli2014creating} and TVSum~\cite{song2015tvsum}. SumMe contains 25 user videos covering various events, including scenes that change both quickly and slowly. The lengths of the videos range from 1 minute to 6.5 minutes. Each video is annotated by 15 to 18 users. TVSum contains 50 videos with lengths ranging from 1.5 to 11 minutes. Each video in TVSum is annotated by 20 users. The annotations of SumMe and TVSum are frame-level importance scores, and we follow the evaluation method of~\cite{zhang2016video}. The OVP~\cite{de2011vsumm} and YouTube~\cite{de2011vsumm} datasets consist of 50 and 39 videos, respectively; we use them for the transfer and augmented settings.
\subsection{Evaluation Metric}
Similar to other methods, we use the F-score from~\cite{zhang2016video} as the evaluation metric. For all datasets, user annotations and predictions are converted from frame-level scores to key-shots using the KTS method in~\cite{zhang2016video}. The precision, recall, and F-score are computed as a measure of how much the key-shots overlap. In the following equations, let ``predicted'' be the length of the predicted key-shots, ``user annotated'' the length of the user-annotated key-shots, and ``overlap'' the length of the overlapping key-shots.
\begin{eqnarray}
P=\frac{\text{overlap}}{\text{predicted}},
R=\frac{\text{overlap}}{\text{user annotated}},\\
\text{F-score}=\frac{2PR}{P+R} * 100\%.
\label{equ:metric}
\end{eqnarray}
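The metric can be sketched directly from~\eqnref{equ:metric}; the key-shot lengths in the example are hypothetical:

```python
def summary_fscore(overlap, predicted, user_annotated):
    """Key-shot F-score (%) following the evaluation protocol above.

    Lengths are in frames; the frame-score-to-key-shot conversion (KTS)
    is assumed to have been done beforehand.
    """
    if predicted == 0 or user_annotated == 0:
        return 0.0
    p = overlap / predicted
    r = overlap / user_annotated
    if p + r == 0:
        return 0.0
    return 2 * p * r / (p + r) * 100.0

# Example: a 60-frame overlap between a 100-frame summary and a
# 120-frame user annotation gives P = 0.6, R = 0.5, F ≈ 54.55.
print(summary_fscore(60, 100, 120))
```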
\begin{table}
\centering
\resizebox{1.0\linewidth}{!}{%
\begin{tabular}{ c | c | c }
\hline
Setting & Training set & Test set \\
\hline
Canonical & 80\% SumMe & 20\% SumMe\\
Augmented & OVP + YouTube + TVSum + 80\% SumMe & 20\% SumMe\\
Transfer & OVP + YouTube + TVSum & SumMe\\
\hline
\end{tabular}
}
\caption{Evaluation setting for SumMe. In the case of TVSum, we switch between SumMe and TVSum in the above table.}
\label{tab:setting}
\end{table}
\subsection{Evaluation Settings}
Our approach is evaluated using the Canonical (C), Augmented (A), and Transfer (T) settings shown in~\tabref{tab:setting}, following~\cite{zhang2016video}. To divide the test and training sets, we randomly extract a test set five times, each containing 20\% of the videos; the remaining 80\% is used for training. We report the final F-score as the average of the F-scores of the five tests. However, if the test sets are selected completely at random, some videos may never appear in a test set while others appear multiple times, making a fair evaluation difficult. To avoid this problem, we evaluate all the videos in the datasets without duplication or omission.
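One way to realize such a duplication-free split is to shuffle the videos once and partition them into five disjoint folds; the exact fold assignment, seed, and construction below are illustrative assumptions, not the paper's actual splits:

```python
import random

def five_fold_splits(video_ids, seed=0):
    """Shuffle once, then partition into 5 disjoint folds so that every video
    appears in exactly one test set (no duplicates, no omissions)."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::5] for i in range(5)]
    return [(sorted(set(ids) - set(f)), sorted(f)) for f in folds]

splits = five_fold_splits(range(25))  # e.g., the 25 SumMe videos
test_union = sorted(v for _, test in splits for v in test)
assert test_union == list(range(25))  # each video is tested exactly once
```

With 25 SumMe videos, each fold holds exactly 5 test videos (20\%), and the union of the five test sets covers the whole dataset.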
\subsection{Implementation Details}
For the input features, we extract frames at 2 fps as in~\cite{zhang2016video}, and then obtain 1024-dimensional features from the GoogLeNet pool-5 layer~\cite{szegedy2015going} trained on ImageNet~\cite{russakovsky2015imagenet}. The LSTM input and hidden sizes are 256; the input features are reduced by an FC layer (1024 to 256) for fast convergence, and the weights are shared between the chunk and stride inputs. The maximum number of epochs is 20, and the learning rate is 1e-4, decayed by a factor of 0.1 after 10 epochs. The weights of the network are randomly initialized. $M$ in CSNet is experimentally set to 4. We implement our method using PyTorch.
\paragraph{Baseline}
Our baseline~\cite{Mahasseni2017VAEGAN} uses the VAE and GAN from the model of Mahasseni~\textit{et al.}{} We use their adversarial framework, which enables unsupervised learning. Specifically, the basic sparsity loss, reconstruction loss, and GAN loss are adopted. For supervised learning, we add a binary cross entropy (BCE) loss between the ground truth scores and the predicted scores. We also provide a fake input with a uniform distribution.
\subsection{Quantitative Results}
In this section, we first present the experimental results of our proposed approaches in an ablation study. Then, we compare our methods with existing unsupervised and supervised methods, and finally show the experimental results in the canonical, augmented, and transfer settings. For a fair comparison, we quote the performances of previous research as recorded in~\cite{zhou2017deep}.
\begin{table}
\centering
\resizebox{1.0\linewidth}{!}{%
\begin{tabular}{ c | c | c | c | c}
\hline
Exp. & CSNet & Difference & Variance Loss & F-score (\%) \\
\hline
1 & & & & 40.8\\
\hline
2 &\checkmark & & & 42.0\\
3 & &\checkmark & & 42.0\\
4 & & &\checkmark & 44.9\\
\hline
5 &\checkmark &\checkmark & & 43.5\\
6 &\checkmark & &\checkmark & 49.1\\
7 & &\checkmark &\checkmark & 46.9\\
\hline\hline
8 &\checkmark &\checkmark &\checkmark & \textbf{51.3}\\
\hline
\end{tabular}
}
\caption{F-score (\%) for all combinations of the proposed methods. When CSNet is not applied, an LSTM without chunk and stride is used. Variance loss and difference attention can simply be toggled on/off. This experiment uses the SumMe dataset, unsupervised learning, and the canonical setting.}
\label{tab:ablation}
\end{table}
\begin{figure*}[t]
\begin{center}
\def\arraystretch{0.9}
\begin{tabular}{@{}c@{\hskip 0.01\linewidth}c@{}}
\includegraphics[width=0.48\linewidth]{./q11_v2.jpg}&
\includegraphics[width=0.48\linewidth]{./q3_2.jpg}\\
{(a) CSNet 8} & {(b) CSNet 2}\\
\includegraphics[width=0.48\linewidth]{./q3_3.jpg}&
\includegraphics[width=0.48\linewidth]{./q3_4.jpg}\\
{(c) CSNet 3} & {(d) CSNet 4}
\end{tabular}
\end{center}
\caption{Similar to \figref{fig:visual}, key-shots selected by the variants of CSNet defined in the ablation study. Video 1 in TVSum is used.}
\label{fig:visual-2}
\end{figure*}
\paragraph{Ablation study.}
We propose three approaches: CSNet, difference attention, and variance loss. The highest performance is obtained when all three are applied. The ablation study in \tabref{tab:ablation} shows the contribution of each proposed method by evaluating every combination in which the methods can be applied. We denote the configurations in exp. 1 to exp. 8 as CSNet\textsubscript{1} through CSNet\textsubscript{8}, respectively. When none of our proposed methods is applied, we experiment with a version of the baseline that we reproduce, with some layers and hyperparameters modified. This case yields the lowest F-score, and performance clearly increases as each method is added.
Analyzing the contribution of each method, the performance improvement due to variance loss is the largest, which indicates that it directly addresses the flat-output problem of our baseline. CSNet\textsubscript{4} is higher than CSNet\textsubscript{1} by 4.1\%, and CSNet\textsubscript{8} is better than CSNet\textsubscript{5} by 7.8\%. The variance of the output scores is less than 0.001 without variance loss, but with it the variance increases to around 0.1. Since we use the reciprocal of the variance to increase the variance, the loss takes an extremely large value in the early stages of learning. Immediately after, the loss increases the variance at a faster rate, giving the output a much wider variety of values than before.
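As a concrete illustration of the reciprocal-of-variance penalty, a minimal sketch follows (the normalization and the stabilizing constant are our assumptions):

```python
def variance_loss(scores, eps=1e-4):
    """Penalize flat score outputs: minimizing the reciprocal of the
    variance pushes the variance of the predicted scores upward."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return 1.0 / (var + eps)
```

A flat prediction yields a loss near $1/\epsilon$, while a spread prediction yields a small one, matching the behavior described above.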
By comparing the performance with and without difference attention, we can see that it effectively models the relationship between static or dynamic scene changes and frame-level importance scores. Comparing CSNet\textsubscript{1} to CSNet\textsubscript{3}, the F-score increases by 1.2\%. Similarly, CSNet\textsubscript{5} and CSNet\textsubscript{7} are higher than CSNet\textsubscript{2} and CSNet\textsubscript{4} by 1.5\% and 2.0\%, and CSNet\textsubscript{8} is greater than CSNet\textsubscript{6} by 2.2\%. These comparisons show that difference attention contributes in all four cases.
\tabref{tab:ablation} also shows that CSNet contributes to performance: designing local and global features with chunk and stride, while reducing the temporal input size of the LSTM, is effective. The comparisons in which CSNet can be removed are as follows. CSNet\textsubscript{2} is better than CSNet\textsubscript{1} by 1.2\%, CSNet\textsubscript{5} and CSNet\textsubscript{6} outperform CSNet\textsubscript{3} and CSNet\textsubscript{4} by 1.5\% and 4.2\%, respectively, and CSNet\textsubscript{8} exceeds CSNet\textsubscript{7} by 4.4\%.
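The chunk and stride views can be illustrated with simple index arithmetic: for $T$ frame features and $M$ divisions, the local view takes $M$ consecutive chunks and the global view takes $M$ interleaved strides, so each subsequence fed to the shared LSTM is only $T/M$ long. A minimal sketch (helper names are ours):

```python
def chunk_indices(T, M):
    """Local view: M consecutive chunks of the T frame indices."""
    size = T // M
    return [list(range(m * size, (m + 1) * size)) for m in range(M)]

def stride_indices(T, M):
    """Global view: every M-th frame, with M different offsets."""
    return [list(range(m, T, M)) for m in range(M)]
```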
Since each method improves performance as it is added, the three proposed approaches contribute individually to performance. With the combination of the proposed methods, CSNet\textsubscript{8} achieves a higher performance improvement than the sum of each F-score increased by CSNet\textsubscript{2}, CSNet\textsubscript{3} and CSNet\textsubscript{4}. In the rest of this section, we use CSNet\textsubscript{8}.
\begin{table}
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{ l | c | c }
\hline
Method & SumMe & TVSum \\
\hline
K-medoids & 33.4 & 28.8 \\
Vsumm & 33.7 & - \\
Web image & - & 36.0 \\
Dictionary selection & 37.8 & 42.0 \\
Online sparse coding & - & 46.0\\
Co-archetypal & - & 50.0\\
GAN\textsubscript{dpp} & 39.1 & 51.7 \\
DR-DSN & 41.4 & 57.6 \\
\hline
\hline
CSNet & \textbf{51.3} & \textbf{58.8} \\
\hline
\end{tabular}
}
\caption{F-score (\%) of unsupervised methods in canonical setting on SumMe and TVSum datasets. Our approach outperforms other existing methods. Dramatic performance improvement is shown on the SumMe dataset.}
\label{tab:unsupervised}
\end{table}
\paragraph{Comparison with unsupervised approaches.}
\tabref{tab:unsupervised} shows the experimental results on the SumMe and TVSum datasets using unsupervised learning in the canonical setting. Since our approach mainly targets unsupervised learning, CSNet outperforms the existing methods~\cite{elhamifar2012see,khosla2013large,de2011vsumm,zhao2014quasi,song2015tvsum,zhou2017deep,Mahasseni2017VAEGAN} on both SumMe and TVSum. Most notably, on the SumMe dataset, \tabref{tab:unsupervised} shows an F-score improvement of 9.9\% over the best of the existing methods~\cite{zhou2017deep}.
To the best of our knowledge, all existing methods score below 50\% F-score on the SumMe dataset; evaluation on SumMe is more challenging than on TVSum in terms of performance. DR-DSN has already made substantial progress on the TVSum dataset, but we are the first to achieve a comparable advance on the SumMe dataset, which narrows the gap between SumMe and TVSum.
An interesting observation for supervised learning in video summarization is that the ground truth scores are not optimal. The users who annotated the videos differ across datasets, and no user evaluates consistently. In such cases, there may be a better summary than the ground truth, which is the mean of multiple user annotations. Surprisingly, during our experiments we observe that the predictions for some videos receive better F-scores than the ground truth itself. Unsupervised approaches do not use the ground truth, which can bring them a step closer to the individual user annotations.
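For reference, the F-score quoted throughout is the standard key-shot overlap measure: the harmonic mean of precision and recall computed over per-frame selections. A minimal sketch over binary masks (helper name is ours):

```python
def keyshot_fscore(pred, gt):
    """F-score between binary per-frame selections (1 = in summary)."""
    overlap = sum(1 for p, g in zip(pred, gt) if p and g)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred)
    recall = overlap / sum(gt)
    return 2 * precision * recall / (precision + recall)
```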
\begin{table}
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{ l | c | c }
\hline
Method & SumMe & TVSum \\
\hline
Interestingness & 39.4 & - \\
Submodularity & 39.7 & - \\
Summary transfer & 40.9 & - \\
Bi-LSTM & 37.6 & 54.2 \\
DPP-LSTM & 38.6 & 54.7\\
GAN\textsubscript{sup} & 41.7 & 56.3 \\
DR-DSN\textsubscript{sup} & 42.1 & 58.1\\
\hline
\hline
CSNet\textsubscript{sup} & \textbf{48.6} & \textbf{58.5} \\
\hline
\end{tabular}
}
\caption{F-score (\%) of supervised methods in canonical setting on SumMe and TVSum datasets. We achieve the state-of-the-art performance.}
\label{tab:supervised}
\end{table}
\paragraph{Comparison with supervised approaches.}
We implement CSNet\textsubscript{sup} for supervised learning by simply adding a binary cross entropy loss between the predictions and the ground truth to the existing CSNet loss. In \tabref{tab:supervised}, CSNet\textsubscript{sup} obtains state-of-the-art results compared to existing methods~\cite{gygli2014creating,gygli2015video,zhang2016summary,zhang2016video,zhou2017deep}, but does not outperform CSNet. In general, supervision improves performance, but in our case, as discussed for the unsupervised approaches, using the ground truth directly may be suboptimal.
\begin{table}
\centering
\resizebox{1.0\linewidth}{!}{%
\begin{tabular}{ l | c c c | c c c }
\hline
& \multicolumn{3}{c|}{SumMe} & \multicolumn{3}{c}{TVSum} \\
\hline
Method &C & A & T & C & A & T \\
\hline
Bi-LSTM & 37.6 & 41.6 & 40.7 & 54.2 & 57.9 & 56.9 \\
DPP-LSTM & 38.6 & 42.9 & 41.8 & 54.7 & 59.6 & 58.7 \\
GAN\textsubscript{dpp} & 39.1 & 43.4 & - & 51.7 & 59.5 & - \\
GAN\textsubscript{sup} & 41.7 & 43.6 & - & 56.3 & \textbf{61.2} & - \\
DR-DSN & 41.4 & 42.8 & 42.4 & 57.6 & 58.4 & 57.8 \\
DR-DSN\textsubscript{sup} & 42.1 & 43.9 & 42.6 & 58.1 & 59.8 & 58.9 \\
HSA-RNN & - & 44.1 & - & - & 59.8 & - \\
\hline
\hline
CSNet & \textbf{51.3} & \textbf{52.1} & \textbf{45.1} & \textbf{58.8} & 59.0 & \textbf{59.2} \\
CSNet\textsubscript{sup} & 48.6 & 48.7 & 44.1 & 58.5 & 57.1 & 57.4 \\
\hline
\end{tabular}
}
\caption{F-score (\%) of both unsupervised and supervised methods in canonical, augmented and transfer settings on SumMe and TVSum datasets.}
\label{tab:CAT}
\end{table}
\paragraph{Comparison in augmented and transfer settings.}
We compare CSNet with other state-of-the-art methods in the augmented and transfer settings in \tabref{tab:CAT}. For a fair comparison, we use the same LSTM hidden size of 256 as DR-DSN~\cite{zhou2017deep}, the previous state-of-the-art method. CSNet performs better than CSNet\textsubscript{sup}, and our unsupervised CSNet outperforms the supervised variants of all other approaches except GAN\textsubscript{sup}, which uses a hidden size of 1024, on the TVSum dataset in the augmented setting.
\subsection{Qualitative Results}
\paragraph{Selected key-shots.}
In this section, we visualize selected key-shots in two ways.
First, in \figref{fig:visual}, the selected key-shots are visualized as bar graphs for videos of various genres. Panels (a)--(d) show that many of our key-shots coincide with peaks of the labeled scores. In terms of video content, the scenes selected by CSNet are mostly meaningful, as can be seen by comparing the colored bars with the images in \figref{fig:visual}. Then, in \figref{fig:visual-2}, we compare variants of our approach on video 1 of TVSum. Although minor differences exist, each approach selects the peak points well.
\paragraph{Difference attention.}
For a deeper analysis of difference attention, we visualize it on the TVSum dataset. Its motivation is to capture dynamic information between video frames, and this experiment verifies our assumption that dynamic scenes are more important than static ones. As shown in \figref{fig:diff}, the plotted blue curve is in line with the selected key-shots, highlighting portions with high scores. The selected key-shots show a motorcycle jump, a dynamic scene in the video. As a result, difference attention can effectively predict key-shots using dynamic information.
\begin{figure}[t]
\centering\resizebox{0.85\linewidth}{!}{
\includegraphics[width=0.88\textwidth]{./q2_v2.pdf}
}
\caption{Experiment with video 41 in the TVSum dataset. In addition to the visualization in \figref{fig:visual}, the difference attention is plotted in blue, normalized to the same range as the ground truth scores. The pictures show the video frames of the mainly predicted key-shots.}
\label{fig:diff}
\end{figure}
\section{Conclusion}
In this paper, we propose discriminative feature learning for unsupervised video summarization. Variance loss tackles the temporal dependency problem, which causes flat outputs in LSTMs. CSNet introduces a local and global scheme through chunks and strides, which reduces the temporal input size of the LSTM. Difference attention highlights dynamic information, which is highly related to key-shots in a video. Extensive experiments on two benchmark datasets, including an ablation study, show that our state-of-the-art unsupervised approach outperforms most of the supervised methods.
\paragraph{Acknowledgements}
This research is supported by the Study on Deep Visual Understanding project funded by Samsung Electronics Co., Ltd. (Samsung Research).
Extremely high energy (EHE) cosmic rays beyond $100~{\rm EeV}$
have been observed over the past couple of decades \citep{takeda98,abbasi04a},
but their origin still remains enigmatic.
In regard to the generation of the EHE particles, there are two
alternative schemes: the ``top-down'' scenario that hypothesizes
topological defects, Z-bursts, and so on, and the traditional
``bottom-up'' \citep[see, e.g.,][for a review]{olinto00}.
In the latter approach, we explore the candidate
celestial objects operating as a cosmic-ray ``Zevatron''
\citep{blandford00}, namely, an accelerator boosting
particle kinetic energy to ZeV ($10^{21}~{\rm eV}$) ranges.
By simply relating the celestial size to the gyroradius
for the typical magnetic field strength, one finds that the
candidates are restricted to only a few objects; these
include pulsars, active galactic nuclei (AGNs), radio galaxy
lobes, and clusters of galaxies \citep{hillas84,olinto00}.
In addition, gamma-ray bursters (GRBs) are known as
possible sources \citep{waxman95}.
As for the transport of EHE particles from the extragalactic sources,
within the GZK horizon \citep{greisen66,zatsepin66} the trajectory of
the particles (particularly protons) ought to suffer no significant
deflection by the cosmological magnetic field, presuming
a strength of the order of $0.1~{\rm nG}$
\citep[see, e.g.,][for a review]{vallee04}.
According to a cross-correlation study \citep{farrar98}, some
super-GZK events seem to be well aligned with compact, radio-loud quasars.
Complementarily, self-correlation studies are in progress,
showing small-scale anisotropy in the distribution of
the arrival directions of EHE primaries \citep{teshima03}.
More recently, however, the claimed strong clustering has been
tested and found to be consistent with the null hypothesis of
isotropically distributed arrival directions \citep{abbasi04b}.
At the moment, the interpretation of these results is
under active debate.
In the bottom-up scenario, the most promising mechanism for achieving EHE is
considered to be that of diffusive shock acceleration \citep[DSA;][]
{lagage83a,lagage83b,drury83}, which has been substantially studied
for solving the problems of particle acceleration in heliosphere
and supernova remnant (SNR) shocks
\citep[see, e.g.,][for a review]{blandford87}.
In general, it calls for the shock to be accompanied by
some kinds of turbulence that serve as the particle scatterers
\citep{krymskii77,bell78,blandford78}.
Concerning the theoretical modeling and its application to
extragalactic sources such as AGN jets, GRBs, and so forth,
it is still very important to know the actual magnetic field strength,
configuration, and turbulent state around the shock front.
At this juncture, modern polarization measurements by using very long
baseline interferometry began to reveal the detailed configuration of
magnetic fields in extragalactic jets, for example, the quite smooth
fields transverse to the jet axis \citep[1803+784:][]{gabuzda99}.
Another noticeable result is that within the current resolution,
a jet is envisaged as a bundle of {\it at least} a few filaments
(e.g., 3C\,84: \citealt{asada00}; 3C\,273: \citealt{lobanov01}),
as were previously confirmed in the radio arcs near the Galactic center
\citep[GC;][]{yusefzadeh84,yusefzadeh87}, as well as in the well-known
extragalactic jets
(e.g., Cyg\,A: \citealt{perley84}, \citealt{carilli96}; M87: \citealt{owen89}).
The morphology of filaments can be self-organized via the nonlinear
development of the electromagnetic current filamentation instability
\citep[CFI;][and references therein]{honda04} that breaks up a uniform beam
into many filaments, each carrying about net one unit current \citep{honda00}.
Fully kinetic simulations indicated that this subsequently led to the
coalescence of the filaments, self-generating significant toroidal
(transverse) components of magnetic fields \citep{honda00a,honda00b}.
As could be accommodated with this result, large-scale toroidal magnetic
fields have recently been discovered in the GC region \citep{novak03}.
Accordingly, we conjecture that a similar configuration appears in
extragalactic objects, particularly AGN jets \citep{hh04a}.
It is also pointed out that the toroidal fields could play a remarkable
role in collimating plasma flows \citep{honda02}.
Relating to this point, the AGN jets have narrow opening angles
of $\phi_{\rm oa}<10\degr$ in the long scales, although
in close proximity to the central engine the angles tend to spread
\citep[e.g., $\phi_{\rm oa}\approx 60\degr$ for the M87 jet;][]{junor99}.
Moreover, there is observational evidence that the internal pressures are
higher than the pressures in the external medium (e.g., 4C\,32.69:
\citealt{potash80}; Cyg\,A: \citealt{perley84}; M87: \citealt{owen89}).
These imply that the jets must be {\it self}-collimating and stably
propagating, as could be explained by the kinetic theory \citep{honda02}.
In the nonlinear stage of the CFI, the magnetized filaments
can often be regarded as strong turbulence that
more strongly deflects the charged particles.
When the shock propagation is allowed, hence, the particles
are expected to be quite efficiently accelerated for the
DSA scenario \citep{drury83,gaisser90}.
Indeed, such a favorable environment seems to be well established
in the AGN jets.
For example, in the filamentary M87 jet, some knots moving toward
the radio lobe exhibit the characteristics of shock discontinuity
\citep{biretta83,capetti97}, involving circumstantial evidence of
in situ electron acceleration \citep{meisenheimer96}.
As long as the shock accelerator operates for electrons, arbitrary
ions will be co-accelerated, providing that the ion abundance
in the jet is finite \citep[e.g.,][]{rawlings91,kotani96}.
It is, therefore, quite significant to study the feasibility
of EHE particle production in the filamentary jets with shocks:
this is just the original motivation for the current work.
This paper has been prepared to show a full derivation
of the diffusion coefficient for cosmic-ray particles
scattered by the magnetized filaments.
The present theory relies on a consensus that the kinetic energy
density (ram pressure) of the bulk plasma carrying currents is larger
than the energy density of the magnetic fields self-generated via
the CFI, likely comparable to the thermal pressure of the bulk.
That is, the flowing plasma as a reservoir of free energy
is considered to be in a high-$\beta$ state.
In a new regime in which the cosmic-ray particles interact off-resonantly
with the magnetic turbulence having no regular field, the quasi-linear
approximation of the kinetic transport equation is found to be
consistent with the condition that the accelerated particles
must be rather free from magnetic traps; namely, the
particles experience meandering motion.
It follows that the diffusion anisotropy becomes small.
Apparently, these are in contrast with the conventional quasi-linear
theory (QLT) for small-angle resonant scattering, according to which
one sets the resonance of the gyrating particles bound to a
mean magnetic field with the weak turbulence superimposed on
the mean field \citep{drury83,biermann87,longair92,hh04b}.
It is found that there is a wide parameter range in which
the resulting diffusion coefficient is smaller than that
from a simplistic QLT in the low-$\beta$ regime.
We compare a specified configuration of the filaments to an astrophysical
jet including AGN jets and discuss the correct treatment of what the
particle injection threshold in the present context could be.
We then apply the derived coefficient for calculations of the DSA
timescale and the achievable highest energy of accelerated particles
in that environment.
As a matter of convenience, we also show some generic scalings
of the highest energy, taking account of the conceivable energy
restrictions for both ions and electrons.
In order to systematically spell out the theoretical scenario, this paper is
divided into two major parts, consisting of the derivation of the diffusion
coefficient (\S~2) and its installation to the DSA model (\S~3).
We begin, in \S~2.1, with a discussion on the turbulent
excitation mechanism due to the CFI, so as to specify
a model configuration of the magnetized current filaments.
Then in \S~2.2, we explicitly formulate the equation that describes
particle transport in the random magnetic fluctuations.
In \S~2.3, the power-law spectral index of the magnetic
fluctuations is suggested for a specific case.
In \S~2.4, we write down the diffusion coefficients
derived from the transport equation.
In \S~3.1, we deal with the subject of particle injection, and
in \S~3.2, we estimate the DSA timescale, which is used to evaluate the
maximum energy of an accelerated ion (\S~3.3) and electron (\S~3.4).
Finally, \S~4 is devoted to a discussion of the feasibility and a summary.
\section[]{THEORY OF PARTICLE DIFFUSION IN MAGNETIC TURBULENCE
SUSTAINED BY ANISOTROPIC CURRENT FILAMENTS}
In what follows, given the spatial configuration of the magnetized
filaments of a bulk plasma jet, we derive the evolution equation
for the momentum distribution function of test particles, which is
linear to the turbulent spectral intensity, and then extract
an effective frequency for collisionless scattering and the
corresponding diffusion coefficient from the derived equation.
\subsection[]{\it Model Configuration of Magnetized Current Filaments}
Respecting the macroscopic transport of energetic particles in active galaxies,
there is direct/indirect observational evidence that they are
ejected from the central core of the galaxies and subsequently
transferred, through bipolar jets, to large-scale radio lobes in which the
kinematic energy considerably dissipates \cite[e.g.,][]{biretta95,tashiro04}.
In this picture, it is expected that the directional plasma flows
will favorably induce huge currents in various aspects of the
above transport process (e.g., \citealt{appl92,conway93}; an
analogous situation also seems to appear in GRB jets:
e.g., \citealt{lyutikov03}).
Because of perfect conductivity in fully ionized plasmas, hot
currents driven in, e.g., the central engine prefer being quickly
compensated by plasma return currents.
This creates a pattern of the counterstreaming currents
that is unstable for the electromagnetic CFI.
As is well known, the pattern is also unstable to electrostatic
disturbances with the propagation vectors parallel to the
streaming direction, but in the present work, we eliminate
the longitudinal modes, so as to isolate the transverse CFI.
Considering a simple case in which the two uniform currents are
carried by electrons, the mechanism of magnetic field
amplification due to the CFI is explained as follows.
When the compensation of the counterpropagating electron currents
is disturbed in the transverse direction, magnetic repulsion
between the two currents reinforces the initial disturbance.
As a consequence, a larger and larger magnetic field is
produced as time increases.
For the Weibel instability as an example, the unstable mode is the
purely growing mode without oscillations \citep{honda04}, so the
temporal variation of magnetic fields is expected to be markedly
slow in the saturation regime (more on these is given
in \S\S~2.2 and 2.3).
Note that a similar pattern of quasi-static magnetic fields
can be also established during the collision of electron-positron
plasmas \citep{kazimura98,silva03} and in a shock front
propagating through an ambient plasma with/without initial
magnetic fields \citep{nishikawa03}.
These dynamics might be involved in the organization of the knotlike
features in the Fanaroff-Riley (FR) type~I radio jets, which appear to
be shocks caused by high-velocity material overtaking slower material
\citep[e.g.,][]{biretta83}.
Similarly, the cumulative impingement could also take place
around the hot spots of the FR type~II sources, which arguably
reflect the termination shocks.
In fact, the filamentary structure has been observed in
the hot spot region of an FR~II source \citep{perley84,carilli96}.
Furthermore, estimating the energy budget in many radio lobes
implies that the ram pressure of such current-carrying jets
is much larger than the energy density of the magnetic fields
\citep{tashiro04}; namely, the jet bulk can be regarded as a huge
reservoir of free energy.
In this regime, the ballistic motion is unlikely to be
affected by the self-generated magnetic fields,
as is actually seen in the linear feature of jets.
As a matter of fact, the GC region is known to arrange numerous
linear filaments, including nonthermal filaments \citep{yusefzadeh04}.
Taking these into consideration, we give a simple model
of the corresponding current--magnetic field system and
attempt to unambiguously distinguish the present system from the one
that appears in the low-$\beta$ plasmas hitherto well studied.
In Figure~1, for a given coordinate, we depict the configuration
of the linear current filaments and turbulent magnetic
fields of the bulk plasma.
Recalling that magnetic field perturbations develop in the direction
transverse to the initial currents \citep[e.g.,][]{honda04}, one
supposes the magnetic fields developed in the nonlinear phase to
be ${\bf B}=(B_{x},B_{y},0)$ \citep{montgomery79,medvedev99},
such that the vectors of zeroth-order current density
point in the directions parallel and antiparallel to the $z$-direction,
i.e., ${\bf J}\sim J{\hat{\bf z}}$, where the scalar $J$ ($\gtrless 0$)
is nonuniformly distributed on the transverse $x$-$y$ plane, while
uniformly distributed in the $z$-direction.
Note that for the fluctuating magnetic field vectors, we have used
the simple character (${\bf B}$) without any additional symbol such as
``$\delta$,'' since the establishment of no significant regular component
is expected, and simultaneously, ${\bf J}$ ($\sim\nabla\times{\bf B}$)
well embodies the quasi-static current filaments in the zeroth order.
For convenience, hereafter, the notations ``parallel''
($\parallel$) and ``transverse'' ($\perp$) are referred to
as the directions with respect to the linear current filaments
aligned in the $z$-axis, as they are well defined reasonably
(n.b. in the review of \S~2.4, $\parallel_{b}$ and $\perp_{b}$
with the subscript ``$b$'' refer to a mean magnetic field line).
It is mentioned that the greatly fluctuating transverse fields could be
reproduced by some numerical simulations \citep[e.g.,][]{lee73,nishikawa03}.
In an actual filamentary jet, a significant reduction of polarization
has been found in the center, which could be ascribed to the cancellation
of the small-scale structure of magnetic fields \citep{capetti97},
compatible with the present model configuration.
In addition, there is strong evidence that random fields accompany GRB jets
\citep[e.g.,][]{greiner03}.
The arguments expanded below highlight the transport properties
of test particles in such a bulk environment, that is,
in a forest of magnetized current filaments.
\subsection[]{\it The Quasi-linear Type Equation for Cosmic-Ray Transport}
We are particularly concerned with the stochastic diffusion of the
energetic test particles injected into the magnetized current filaments
(for the injection problem, see the discussion in \S~3.1).
As a rule, the Vlasov equation is appropriate for describing the
collisionless transport of the relativistic particles in the turbulent
magnetic field, ${\bf B}({\bf r},t)$, where ${\bf r}=(x,y)$ and
the slow temporal variation has been taken into consideration.
Transverse electrostatic fields are ignored, since
they preferentially attenuate over long timescales,
e.g., in the propagation time of jets (see \S~2.3).
The temporal evolution of the momentum distribution function
for the test particles, $f_{\bf p}$, can then be described as
\begin{equation}
{{\rm D}f_{\bf p}\over{{\rm D}t}}={\partial\over{\partial t}}f_{\bf p}
+\left({\bf v}\cdot{\partial\over{\partial{\bf r}}}\right)f_{\bf p}
+{q\over c}\left[\left({\bf v}\times{\bf B}\right)
\cdot{\partial\over{\partial{\bf p}}}\right]f_{\bf p}=0
\label{eqn:1}
\end{equation}
\noindent
for arbitrary particles. Here $q$ is the particle charge,\footnote {For
example, $q=-|e|$ for electrons, $q=|e|$ for positrons,
and $q=Z|e|$ for ions or nuclei, where $e$ and $Z$ are the
elementary charge and the charge number, respectively.}
$c$ is the speed of light, and the other notations are standard.
We decompose the total distribution function
into the averaged and fluctuating part,
$f_{\bf p}=\left<f_{\bf p}\right>+\delta f_{\bf p}$, and consider
the specific case in which from a macroscopic point of view, the vector
${\bf B}=(B_{x},B_{y},0)$ is randomly distributed on the
transverse $x$-$y$ plane \citep{montgomery79,medvedev99}.
Taking the ensemble average of equation~(\ref{eqn:1}),
$\left<{\rm D}f_{\bf p}/{{\rm D} t}\right>=0$, then yields
\begin{equation}
{\partial\over{\partial t}}\left<f_{\bf p}\right>
+\left({\bf v}\cdot{\partial\over{\partial{\bf r}}}\right)
\left<f_{\bf p}\right>=-{q\over c}
\left<\left[\left({\bf v}\times{\bf B}\right)
\cdot{\partial\over{\partial{\bf p}}}\right]\delta f_{\bf p}\right>,
\label{eqn:2}
\end{equation}
\noindent
where we have used $\left<{\bf B}\right>\simeq {\bf 0}$.
Taking account of no mean field implies that we do not
invoke the gyration and guiding center motion of the particles.
Subtracting equation~(\ref{eqn:2}) from equation (\ref{eqn:1})
and picking up the term linear in fluctuations, viz., employing
the conventional quasi-linear approximation, we obtain
\begin{equation}
{\partial\over{\partial t}}\delta f_{\bf p}
+\left({\bf v}\cdot{\partial\over{\partial{\bf r}}}\right)\delta f_{\bf p}=
-{q\over c}\left[\left({\bf v}\times{\bf B}\right)
\cdot{\partial\over{\partial{\bf p}}}\right]\left<f_{\bf p}\right>.
\label{eqn:3}
\end{equation}
\noindent
As usual, equation~(\ref{eqn:3}) is valid for
$\left<f_{\bf p}\right>\gg|\delta f_{\bf p}|$ \citep{landau81}.
As shown in \S~3.1, this condition turns out to be consistent
with the aforementioned implication that the injected test particles
must be free from the small-scale magnetic traps embedded in the bulk.
Relating to this, note that, to remove terminological ambiguity,
the injected energetic test particles obeying $f_{\bf p}$
are identified with the cosmic rays that are shown below to be
diffusively accelerated in the present scenario.
Within the framework of the test particle approximation,
the back-reaction of the slow spatiotemporal change of
$\left<f_{\bf p}\right>$ to the modulation of ${\bf B}$
(sustained by the bulk) is ignored, in contrast to the case for
SNR environments, where such effects often become nonnegligible
\citep[e.g.,][]{bell04}.
In general, the vector potential conforms to ${\bf B}=\nabla\times{\bf A}$
and $\nabla\cdot{\bf A}=0$.
For the standard, plane wave approximation, we carry out
the Fourier transformation of the fluctuating components
for time and the transverse plane:
\begin{equation}
\delta f_{\bf p}({\bf r},t)=\int\delta f_{{\bf p},{\cal K}}
e^{i\left[\left({\bf k}\cdot{\bf r}\right)-\omega t\right]}{\rm d}^3{\cal K},
\label{eqn:4}
\end{equation}
\begin{equation}
{\bf A}({\bf r},t)=\int{\bf A}_{\cal K}
e^{i\left[\left({\bf k}\cdot{\bf r}\right)-\omega t\right]}{\rm d}^3{\cal K},
\label{eqn:5}
\end{equation}
\begin{equation}
{\bf B}({\bf r},t)=\int{\bf B}_{\cal K}
e^{i\left[\left({\bf k}\cdot{\bf r}\right)-\omega t\right]}{\rm d}^3{\cal K},
\label{eqn:6}
\end{equation}
\noindent
and ${\bf B}_{\cal K}=i{\bf k}\times{\bf A}_{\cal K}$,
where $i=\sqrt{-1}$, ${\cal K}=\left\{{\bf k},\omega\right\}$,
and ${\rm d}^3{\cal K}={\rm d}^2{\bf k}{\rm d\omega}$.
The given magnetic field configuration follows
${\bf A}=A{\hat{\bf z}}$ (${\bf A}_{\cal K}=A_{\cal K}{\hat{\bf z}}$)
and ${\bf k}\perp{\hat{\bf z}}$.
As illustrated in Figure~1, the scalar quantity $A$ ($\gtrless 0$)
is also random on the transverse plane, with no mean value.
Making use of equations~(\ref{eqn:4})--(\ref{eqn:6}),
equation~(\ref{eqn:3}) can be transformed into
\begin{equation}
\delta f_{{\bf p},{\cal K}}={q\over c}
\left[\omega -\left({\bf k}\cdot{\bf v}\right)\right]^{-1}
\left[{\bf v}\times\left({\bf k}\times{\bf A}_{\cal K}\right)\right]
\cdot{{\partial\left< f_{\bf p}\right>}\over{\partial{\bf p}}}.
\label{eqn:7}
\end{equation}
\noindent
On the other hand, the right-hand side (RHS) of
equation~(\ref{eqn:2}) can be written as
\begin{equation}
{\rm RHS}=-i{q\over c}\left<\int{\rm d}^3{\cal K}^{\prime}
e^{i\left[\left({\bf k}^{\prime}\cdot{\bf r}\right)-\omega^{\prime} t\right]}
\left\{\left[{\bf v}\times\left({\bf k}^{\prime}\times
{\bf A}_{{\cal K}^{\prime}}\right)\right]
\cdot{\partial\over{\partial{\bf p}}}\right\}\delta f_{\bf p}\right>.
\label{eqn:8}
\end{equation}
\noindent
Substituting equation~(\ref{eqn:4}) (involving eq.~[\ref{eqn:7}]) into
equation~(\ref{eqn:8}), equation~(\ref{eqn:2}) can be expressed as
\begin{eqnarray}
{{{\rm d}\left<f_{\bf p}\right>}\over{{\rm d}t}}=-i{q^2\over c^2}
\left<\int{\rm d}^3{\cal K}{\rm d}^3{\cal K}^{\prime}
e^{i\left\{\left[\left({\bf k}+{\bf k}^{\prime}\right)\cdot{\bf r}\right]
-\left(\omega+\omega^{\prime}\right)t\right\}}\right. \nonumber \\
\left. \left[{\bf v}\times\left({\bf k}^{\prime}\times
{\bf A}_{{\cal K}^{\prime}}\right)\right]
\cdot{\partial\over{\partial{\bf p}}}
\left\{{{\left[{\bf v}\times\left({\bf k}\times
{\bf A}_{\cal K}\right)\right]}\over
{\omega-\left({\bf k}\cdot{\bf v}\right)}}
\cdot{\partial\over{\partial{\bf p}}}
\right\}\left< f_{\bf p}\right>\right>,
\label{eqn:9}
\end{eqnarray}
\noindent
where the definition of the total derivative,
${\rm d}/{\rm d}t\equiv\partial/{\partial t}+{\bf v}\cdot
\left({\partial/{\partial{\bf r}}}\right)$, has been introduced.
As for the integrand of equation~(\ref{eqn:9}),
it may be instructive to write down the vector identity of
\begin{equation}
{\bf v}\times\left({\bf k}^{(\prime)}\times{\bf A}_{{\cal K}^{(\prime)}}\right)
=\left({\bf v}\cdot{\bf A}_{{\cal K}^{(\prime)}}\right){\bf k}^{(\prime)}
-\left({\bf k}^{(\prime)}\cdot{\bf v}\right){\bf A}_{{\cal K}^{(\prime)}}.
\label{eqn:10}
\end{equation}
\noindent
From the general expression of equation~(\ref{eqn:9}),
we derive an effective collision frequency that stems from
fluctuating field-particle interaction, as shown below.
For convenience, we decompose the collision integral
(RHS of eq.~[\ref{eqn:9}]) including the scalar products,
$\cdot\left(\partial/\partial{\bf p}\right)$, into the four parts:
\begin{equation}
{{{\rm d}\left<f_{\bf p}\right>}\over{{\rm d}t}}=\sum_{i,j}I_{ij},
\label{eqn:11}
\end{equation}
\noindent
where $i,j=1,2$ and
\begin{equation}
I_{ij}
\equiv -i{q^2\over c^2}\left<\int{\rm d}^3{\cal K}{\rm d}^3{\cal K}^{\prime}
\cdots{\partial\over{\partial p_{i}}}
\cdots{\partial\over{\partial p_{j}}}\left< f_{\bf p}\right>\right>.
\label{eqn:12}
\end{equation}
\noindent
In the following notations, the subscripts ``1'' and ``2''
indicate the parallel ($\parallel$) and perpendicular ($\perp$)
direction to the current filaments, respectively.
Below, as an example, we investigate the contribution from the integral
$I_{11}$ (see Appendix for calculation of the other components).
For the purely parallel diffusion involving the partial
derivative of only $\partial/\partial p_{\parallel}$,
the first term of the RHS of equation~(\ref{eqn:10})
does not make a contribution to equation~(\ref{eqn:9}).
In the ordinary case in which the random fluctuations are
stationary and homogeneous, the correlation function
has its sharp peak at $\omega=-\omega^{\prime}$ and
${\bf k}=-{\bf k}^{\prime}$ \citep{tsytovich95}, that is,
\begin{equation}
\left< A_{\cal K}A_{{\cal K}^{\prime}}\right> =|A|_{{\bf k},\omega}^{2}
\delta({\bf k}+{\bf k}^{\prime})\delta(\omega+\omega^{\prime}),
\label{eqn:13}
\end{equation}
\noindent
where the Dirac $\delta$-function has been used.
Here note the relation of
$|A|_{{\bf k},\omega}^{2}=|A|_{{\bf -k},-\omega}^{2}$ because we have
$A_{{\bf -k},-\omega}=A_{{\bf k},\omega}^{*}$, where the superscript
asterisk indicates the complex conjugate; this is valid as far as
${\bf A}({\bf r},t)$ is real, i.e., ${\bf B}({\bf r},t)$ is observable.
By using equation~(\ref{eqn:13}), the integral component
$I_{11}$ can be expressed as
\begin{equation}
I_{11}=i{q^{2}\over c^{2}}
\int{\rm d}^{2}{\bf k}{\rm d}\omega|A|_{{\bf k},\omega}^{2}
\left({\bf k}\cdot{\bf v}\right)
{\partial\over{\partial p_{\parallel}}}
\left[{{{\bf k}\cdot{\bf v}}
\over{\omega-\left({\bf k}\cdot{\bf v}\right)}}
{\partial\over{\partial p_{\parallel}}}\left<f_{\bf p}\right>\right],
\label{eqn:14}
\end{equation}
\noindent
where the relation of
${\bf A}_{\cal K}\cdot\left(\partial/\partial{\bf p}\right)
=A_{\cal K}\left(\partial/\partial p_{\parallel}\right)$
has been used.
In order to handle the resonant denominator of equation~(\ref{eqn:14}),
we introduce the causality principle of
$\lim_{\epsilon\rightarrow +0}\left[\omega-
\left({\bf k}\cdot{\bf v}\right)+i\epsilon\right]^{-1}
\rightarrow{\cal P}\left[\omega-\left({\bf k}\cdot{\bf v}\right)\right]^{-1}
-i\pi\delta\left[\omega-\left({\bf k}\cdot{\bf v}\right)\right]$,
where ${\cal P}$ indicates the principal value \citep{landau81}.
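The imaginary part of this decomposition can be sketched numerically: integrating the Lorentzian $\epsilon/(x^{2}+\epsilon^{2})$ against a smooth test function tends to $\pi$ times its value at $x=0$ as $\epsilon\rightarrow +0$. A minimal check (the Gaussian test function, grid, and value of $\epsilon$ are illustrative choices, not taken from the text):

```python
import numpy as np

# Plemelj check: eps/(x^2 + eps^2) integrated against a smooth g(x)
# tends to pi*g(0) as eps -> +0 (here g is an arbitrary Gaussian, g(0)=1).
eps = 1e-3
x = np.linspace(-10.0, 10.0, 2_000_001)   # spacing 1e-5 resolves the narrow peak
dx = x[1] - x[0]
g = np.exp(-x**2)
integral = np.sum(g * eps / (x**2 + eps**2)) * dx
print(integral)                            # close to pi = 3.14159...
```

The residual deviation from $\pi$ is of order $\epsilon$, consistent with the limiting procedure.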
One can readily confirm that the real part of the resonant
denominator does not contribute to the integration.
Thus, we have
\begin{equation}
I_{11}={{\pi q^{2}}\over c^{2}}
\int{\rm d}^{2}{\bf k}{\rm d}\omega|A|_{{\bf k},\omega}^{2}
\left({\bf k}\cdot{\bf v}\right)
{\partial\over{\partial p_{\parallel}}}
\left\{\delta\left[\omega-\left({\bf k}\cdot{\bf v}\right)\right]
\left({\bf k}\cdot{\bf v}\right)
{\partial\over{\partial p_{\parallel}}}
\left<f_{\bf p}\right>\right\}.
\label{eqn:15}
\end{equation}
\noindent
Equation~(\ref{eqn:15}) shows the generalized form of the
quasi-linear equation, allowing $|A|_{{\bf k},\omega}^{2}$
to be arbitrary functions of ${\bf k}$ and $\omega$.\footnote{For the
case in which the unstable mode is a wave mode with
$\omega_{\bf k}\neq0$, the frequency dependence of the correlation function
can be summarized in the form of $|{\cal F}|_{{\bf k},\omega}^{2}=
|{\cal F}|_{\bf k}^{2}\delta\left(\omega-\omega_{\bf k}\right)+
|{\cal F}|_{\bf -k}^{2}\delta\left(\omega+\omega_{\bf k}\right)$,
which is valid for weak turbulence
concomitant with a scalar or vector potential ${\cal F}$.
However, this is not the case considered here.
The free-energy source that drives instability is now
current flows; thereby, unstable modes without oscillation
(or with quite slow oscillation) can be excited.}
In the present circumstances, a typical unstable mode of the
CFI is the purely growing Weibel mode with $\omega=0$ in
collisionless regimes, although in a dissipative regime
the dephasing modes with a finite but small value of
$\omega=\pm\Delta\omega_{\bf k}$ are possibly excited \citep{honda04}.
In the latter case, the spectral lines will be broadened
in the nonlinear phase.
Nevertheless, it is assumed that the spectrum still retains
the peaks around $\omega\approx\pm\Delta\omega$,
accompanied by their small broadening of the same order,
where $|\Delta\omega|\ll\gamma_{\bf k}\sim\omega_{\rm p}$,
and $\gamma_{\bf k}$ and $\omega_{\rm p}/(2\pi)$ are the
growth rate and the plasma frequency, respectively.
In the special case reflecting the purely growing mode,
the spectrum retains a narrow peak at $\omega =0$ with
$|\Delta\omega|\sim 0$ \citep{montgomery79}.
Apparently, the assumed quasi-static properties are in
accordance with the results of the fully kinetic simulations
\citep{kazimura98,honda00a}, except for a peculiar temporal
property of the rapid coalescence of filaments.
Accordingly, here we employ an approximate expression of
\begin{equation}
|A|_{{\bf k},\omega}^{2}\sim
|A|_{\bf k}^{2}\delta\left(\omega-\Delta\omega\right)+
|A|_{\bf -k}^{2}\delta\left(\omega+\Delta\omega\right),
\label{eqn:16}
\end{equation}
\noindent
where $|A|_{\bf k}^{2}=|A|_{\bf -k}^{2}$.
Note that when taking the limit of $|\Delta\omega|\rightarrow 0$,
equation~(\ref{eqn:16}) degenerates into
$|A|_{{\bf k},\omega}^{2}\sim 2|A|_{\bf k}^{2}\delta\left(\omega\right)$.
Substituting equation~(\ref{eqn:16}) into equation~(\ref{eqn:15}) yields
\begin{equation}
I_{11}\sim{{2\pi q^{2}}\over c^{2}}\int{\rm d}^{2}{\bf k}|A|_{\bf k}^{2}
\left({\bf k}\cdot{\bf v}\right)
{\partial\over{\partial p_{\parallel}}}
\left\{\delta\left[\Delta\omega -\left({\bf k}\cdot{\bf v}\right)\right]
\left({\bf k}\cdot{\bf v}\right)
{\partial\over{\partial p_{\parallel}}}
\left<f_{\bf p}\right>\right\}.
\label{eqn:17}
\end{equation}
\noindent
Furthermore, we postulate that the turbulence is isotropic
on the transverse plane, though still, of course, allowing
anisotropy of the vectors ${\bf A}$ parallel to the $z$-axis.
Equation~(\ref{eqn:17}) can be then cast to
\begin{equation}
I_{11}\sim{{2\pi q^{2}}\over c^{2}}
v_{\perp}{\partial\over{\partial p_{\parallel}}}
\int {{\rm d}\theta\over{2\pi}}\cos^{2}\theta
\int{\rm d}k 2\pi k
\delta\left(\Delta\omega -kv_{\perp}\cos\theta\right)
{k^{2}|A|_{k}^{2}}v_{\perp}{\partial\over{\partial p_{\parallel}}}
\left<f_{\bf p}\right>,
\label{eqn:18}
\end{equation}
\noindent
where $k=|{\bf k}|$, $v_{\perp}=|{\bf v}_{\perp}|$, and
${\bf k}\cdot{\bf v}={\bf k}\cdot\left({\bf v}_{\perp}+v_{\parallel}
{\hat{\bf z}}\right)=kv_{\perp}\cos\theta$.
As concerns the integration for $\theta$, we see that the
contribution from the marginal region of the smaller $|\cos\theta|$,
reflecting narrower pitch angle, is negligible.
In astrophysical jets, the pitch angle distribution for
energetic particles still remains unresolved, although
the distribution itself is presumably unimportant.
Hence, at the moment it may be adequate to simply take an angular average,
considering, for heuristic purposes, the contribution from the range of
$|\cos\theta|\sim O(1)\gg\epsilon$ for a small value of $\epsilon$.
If one can choose
$\epsilon\gtrsim|\Delta\omega|/(k_{\rm min}v_{\perp})$, the above relation,
$\epsilon\ll |\cos\theta|$, reflects the off-resonant interaction,
i.e., $|\Delta\omega|\ll |{\bf k}\cdot{\bf v}|$.
The minimum wavenumber, $k_{\rm min}$, is typically of the order
of the reciprocal of the finite system size, which is, in the present
circumstances, larger than the skin depth $c/\omega_{\rm p}$.
These ensure the aforementioned relation of
$|\Delta\omega|\ll \omega_{\rm p}$ (or $|\Delta\omega|\sim 0$).
In addition, the off-resonance condition provides an approximate
expression of $\delta\left(\Delta\omega -kv_{\perp}\cos\theta\right)
\sim\left(kv_{\perp}|\cos\theta|\right)^{-1}$.
Using this expression, the integral for the angular average
can be approximated by
$\sim\int_{0}^{2\pi}\cos^{2}\theta/(2\pi|\cos\theta|){\rm d}\theta=2/\pi$.
This is feasible on account of the negligible contribution from angles with
$|\cos\theta|\lesssim\epsilon$.
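The angular average above reduces to $\int_{0}^{2\pi}|\cos\theta|\,{\rm d}\theta/(2\pi)=2/\pi$, since $\cos^{2}\theta/|\cos\theta|=|\cos\theta|$ is bounded. A one-line numerical sketch:

```python
import numpy as np

# Angular average: int_0^{2pi} cos^2(t)/(2*pi*|cos t|) dt = 2/pi ~ 0.6366,
# using cos^2(t)/|cos t| = |cos t| (bounded, so plain quadrature works).
theta = np.linspace(0.0, 2.0 * np.pi, 1_000_001)
avg = np.mean(np.abs(np.cos(theta)) / (2.0 * np.pi)) * 2.0 * np.pi
print(avg, 2.0 / np.pi)
```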
Then equation~(\ref{eqn:18}) reduces to
\begin{equation}
I_{11}\sim{{16\pi q^{2}}\over c^{2}}v_{\perp}
{\partial^2 \over{\partial p_{\parallel}^{2}}}\left<f_{\bf p}\right>
\int_{k_{\rm min}}^{k_{\rm max}}{{{\rm d}k}\over k}I_{k},
\label{eqn:19}
\end{equation}
\noindent
where we have defined the modal energy density
(spectral intensity) of the quasi-static turbulence by
$I_{k}\equiv2\pi k\left(k^{2}|A|_{k}^{2}/4\pi\right)$, such that
the magnetic energy density in the plasma medium can be evaluated by
$u_{\rm m}\simeq\left<|{\bf B}|^{2}\right>/8\pi
=\int_{k_{\rm min}}^{k_{\rm max}}I_{k}{\rm d}k$.
\subsection[]{\it Spectral Intensity of the Transverse Magnetic Fields}
The energy density of the quasi-static magnetic fields, $u_{\rm m}$,
likely becomes comparable to the thermal pressure of the filaments
\citep{honda00a,honda00b,honda02}.
When exhibiting such a higher $u_{\rm m}$ level, the bulk plasma
state may be regarded as the strong turbulence; but recall that
in the nonlinear CFI, the frequency spectrum with a sharp peak
at $\omega =0$ is scarcely smoothed out, since significant
mode-mode energy exchanges are unexpected.
This feature is in contrast to the ordinary magnetohydrodynamic (MHD)
and electrostatic turbulence, in which a larger energy density
of fluctuating fields would involve modal energy transfer.
One of the most remarkable points is that as long as
the validity condition of the quasi-linear approximation,
$\left<f_{\bf p}\right>\gg |\delta f_{\bf p}|$, is satisfied
(for details, see \S~3.1), the present off-resonant scattering
theory covers even the strong turbulence regime.
That is, the theory, which might be classified into an extended
version of the QLT, does not explicitly restrict the magnetic
turbulence to be weak (for reference, Tsytovich \& ter~Haar [1995]
have considered a generalization of the quasi-linear equation in
regard to its application to strong electrostatic turbulence).
Apparently, this is also in contrast to the conventional
QLT for small-angle resonant scattering, which invokes
a mean magnetic field (well defined only for the case
in which the turbulence is weak) in ordinary low-$\beta$ plasmas.
In any case, in equation~(\ref{eqn:19}) we specify the spectral
intensity of the random magnetic fields, which are established
via the aforementioned mechanism of the electromagnetic CFI.
The closely related analysis in the nonlinear regime was first performed by
\citet{montgomery79}, for a simple case in which two counterstreaming
electron currents compensate for a uniform, immobile ion background.
In the static limit of $\omega\rightarrow 0$, they have derived
the modal energy densities of fluctuating electrostatic and
magnetic fields, by using statistical mechanical techniques.
They predicted the accumulation of magnetic energy at long wavelengths,
consistent with the corresponding numerical simulation \citep{lee73}.
It was also shown that at long wavelengths, the energy density of a
transverse electrostatic field was comparable to the thermal energy density.
However, when allowing ion motions, such an electrostatic field is
found to attenuate significantly, resulting in equipartition of the energy
into magnetic and thermal components \citep{honda00a,honda00b}.
That is why we have neglected the electrostatic field in
equation~(\ref{eqn:1}).
When the spectral intensity of the magnetic fluctuations
can be represented by a power-law distribution of the form
\begin{equation}
I_{k}\propto k^{-\alpha},
\label{eqn:20}
\end{equation}
\noindent
we refer to $\alpha$ as the spectral index.
\citet{montgomery79} found that for the transverse magnetic
fields accompanying anisotropic current filaments,
the spectral index could be approximated by
$\alpha\approx 2$ in a wide range of $k$, that is,
\begin{equation}
I_{k}\propto k^{-2}.
\label{eqn:21}
\end{equation}
\noindent
Note that the spectral index is somewhat larger than
$\alpha_{\rm MHD}\simeq 1-5/3$ for the classical
MHD context \citep{kolmogorov41,bohm49,kraichnan65}.
The larger index is rather consistent with the observed trends of
softening of filamentary turbulent spectra in extragalactic jets
\citep[e.g., $\alpha\simeq 2.6$ in Cyg~A;][and references therein]{carilli96}.
Although the turbulent dissipation actually involves the truncation
of $I_{k}$ in the short-wavelength regions, we simply take
$k_{\rm max}\rightarrow\infty$, excluding the complication.
Using equation~(\ref{eqn:20}) and the expression of the
magnetic energy density $u_{\rm m}$, we find the relation of
\begin{equation}
\int_{k_{\rm min}}^{\infty}{{{\rm d}k}\over k}I_{k}=
{1\over k_{\rm min}}{{\alpha-1}\over \alpha}
{{\left<|{\bf B}|^{2}\right>}\over{8\pi}}
\label{eqn:22}
\end{equation}
\noindent
for $\alpha >1$.
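For a pure power law $I_{k}=Ck^{-\alpha}$, both sides of equation~(\ref{eqn:22}) follow from elementary integration, with $u_{\rm m}=\left<|{\bf B}|^{2}\right>/8\pi$. A numerical sketch with illustrative values ($\alpha=2$, $k_{\rm min}=1$, $C=1$, and a large upper cutoff standing in for infinity):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a (possibly nonuniform) grid."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

alpha, kmin, C = 2.0, 1.0, 1.0
k = np.logspace(np.log10(kmin), 8.0, 2_000_000)   # k_max = 1e8 stands in for infinity
Ik = C * k**(-alpha)

u_m = trap(Ik, k)                # magnetic energy density <|B|^2>/8pi (here u_m ~ 1)
lhs = trap(Ik / k, k)            # int_{kmin}^{inf} (dk/k) I_k
rhs = (1.0 / kmin) * (alpha - 1.0) / alpha * u_m
print(lhs, rhs)                  # both ~ 0.5
```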
The spectral details for individual jets \citep[such as the
bend-over scales of $I_{k}$, correlation length, and so on; e.g.,
for the heliosphere, see][]{zank98,zank04} will render the
integration of equation~(\ref{eqn:22}) more precise, but the
related observational information remains sparse thus far.
For the present purpose, we simply use equation~(\ref{eqn:22}),
setting $k_{\rm min}=\pi/R$, where $R$ stands for the radius of
the jet, which is actually associated with the radius of a bundle of
filaments of various smaller radial sizes \citep[e.g.,][]{owen89}.
This ensures that the coherence length of the fluctuating force,
$\sim k^{-1}$, is small compared with a characteristic system size,
i.e., the transverse size, as is analogous to the restriction
for use of the conventional QLT.
\subsection[]{\it The Diffusion Coefficients}
In order to evaluate the diffusion coefficients of test particles,
one needs to specify the momentum distribution function,
$\left<f_{\bf p}\right>$, in equation~(\ref{eqn:19}).
As is theoretically known, the Fermi acceleration mechanisms lead to
the differential spectrum of ${\rm d}n/{\rm d}E\propto E^{-\beta}$
[or $n(>E)\propto E^{-\beta+1}$; \citealt{gaisser90}],
where ${\rm d}n(E)$ defines the density of particles
with kinetic energy between $E$ and $E+{\rm d}E$.
For the first-order Fermi mechanism involving nonrelativistic
shock with its compression ratio of $r\leq 4$, the power-law index
reads $\beta=\left( r+2\right) /\left( r-1\right)\geq 2$,
consistent with the observational results.
With reference to these, we have the momentum distribution function
of $\left<f_{\bf p}\right>\propto |{\bf p}|^{-(\beta +2)}$
for the ultrarelativistic particles having $E=|{\bf p}|c$,
such that in the isotropic case, the differential quantity
$\left<f_{\bf p}\right>|{\bf p}|^{2}{\rm d}|{\bf p}|/(2\pi^{2})$
corresponds to ${\rm d}n(E)$ defined above \citep[e.g.,][]{blandford78}.
Then, in equation~(\ref{eqn:19}) the partial derivative of
the distribution function can be estimated as
$\partial^{2}\left< f_{\bf p}\right>/\partial p_{\parallel}^{2}\sim
(\beta +2)[\left(\beta +3\right)(p_{\parallel}/|{\bf p}|)^{2}
-(p_{\perp}/|{\bf p}|)^{2}](c^{2}/E^{2})\left< f_{\bf p}\right>$,
where we have used $|{\bf p}|^{2}=p_{\parallel}^{2}+p_{\perp}^{2}$.
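The second-derivative estimate above can be cross-checked by finite differences; a sketch with $c=1$ (so $E=|{\bf p}|$) and illustrative values $\beta=2$, $(p_{\parallel},p_{\perp})=(0.6,0.8)$:

```python
import numpy as np

beta = 2.0
p_par, p_perp = 0.6, 0.8                  # |p| = 1 for these illustrative values
f = lambda pp: (pp**2 + p_perp**2) ** (-(beta + 2.0) / 2.0)   # <f_p> ~ |p|^-(beta+2)

# Central second difference with respect to p_parallel
h = 1e-4
num = (f(p_par + h) - 2.0 * f(p_par) + f(p_par - h)) / h**2

# Estimate quoted in the text: (beta+2)*[(beta+3)*psi1^2 - psi2^2]*|p|^-(beta+4)
p = np.hypot(p_par, p_perp)
psi1, psi2 = p_par / p, p_perp / p
ana = (beta + 2.0) * ((beta + 3.0) * psi1**2 - psi2**2) * p ** (-(beta + 4.0))
print(num, ana)                           # both 4.64 to high accuracy
```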
Making use of this expression and equation~(\ref{eqn:22}),
equation~(\ref{eqn:19}) can be arranged in the form of
$I_{11}\sim\nu_{11}\left< f_{\bf p}\right>$.
Here $\nu_{11}$ reflects an effective collision frequency
related to the purely parallel diffusion in momentum space, to give
\begin{equation}
\nu_{11}={{2\left(\alpha -1\right)\left(\beta +2\right)
\left[\left(\beta +3\right)
\psi_{1}^{2}-\psi_{2}^{2}\right]\psi_{2}}\over
{\pi\alpha}}{{cq^{2}B^{2}R}\over{E^{2}}},
\label{eqn:23}
\end{equation}
\noindent
where we have used the definitions of
$B^{2}\equiv\left<|{\bf B}|^{2}\right>$, and
$\psi_{1}\equiv p_{\parallel}/|{\bf p}|\gtrless 0$ and
$\psi_{2}\equiv p_{\perp}/|{\bf p}|>0$, whereby $\sum_{i}\psi_{i}^{2}=1$.
Similarly, one can calculate the other components of
the integral $I_{ij}$ as outlined in the Appendix
and arrange them in the form of
$I_{ij}\sim\nu_{ij}\left< f_{\bf p}\right>$.
As a result, we obtain
\begin{equation}
\nu_{22}=
{{2\left(\alpha -1\right)\left(\beta +2\right)\left(\beta +4\right)
\psi_{1}^{2}\psi_{2}}\over{\pi\alpha}}{{cq^{2}B^{2}R}\over{E^{2}}},
\label{eqn:24}
\end{equation}
\noindent
and
\begin{eqnarray}
\nu_{12}&=&-\nu_{11}, \nonumber \\
\nu_{21}&=&-\nu_{22}.
\label{eqn:25}
\end{eqnarray}
\noindent
As would be expected, we confirm a trivial relation of
${{\rm d}\left<f_{\bf p}\right>}/{{\rm d}t}=\sum_{i,j}I_{ij}
\sim\sum_{i,j}\nu_{ij}\left<f_{\bf p}\right>=0$,
stemming from the orthogonality in the RHS of equation~(\ref{eqn:2}).
Now we estimate the spatial diffusion coefficients in an ad hoc
manner: $\kappa_{ij}\sim c^{2}\psi_{i}\psi_{j}/(2\nu_{ij})$.
It is then found that the off-diagonal components,
$\kappa_{12}$ and $\kappa_{21}$, include the factor of
${\rm sgn}(\psi_{1}\gtrless 0)=\pm 1$, implying that
these components vanish for an average.
For $\psi_{1}^{2}={1\over 3}$ and $\psi_{2}^{2}={2\over 3}$
reflecting the momentum isotropy, the diffusion coefficients
can be summarized in the following tensor form:
\begin{eqnarray}
\left(
\begin{array}{cc}
\kappa_{\parallel} & 0 \\
0 & \kappa_{\perp} \\
\end{array}
\right)
&\sim&{\sqrt{6}\pi\alpha\over{8\left(\alpha -1\right)}}
{{cE^{2}}\over{q^{2}B^{2}R}}\left[
\begin{array}{cc}
{1\over{\left(\beta +1\right)\left(\beta +2\right)}} &
0 \\
0 &
{2\over {\left(\beta +2\right)\left(\beta +4\right)}} \\
\end{array}
\right],
\label{eqn:26}
\end{eqnarray}
\noindent
where $\kappa_{\parallel}\equiv\kappa_{11}$ and
$\kappa_{\perp}\equiv\kappa_{22}$.
The perpendicular component can be expressed as
$\kappa_{\perp}={\tilde\kappa}\kappa_{\parallel}$, where
${\tilde\kappa}\equiv 2(\beta +1)/(\beta +4)$.
Note the allowable range of $1\leq{\tilde\kappa}<2$
for $\beta\geq 2$; particularly, ${\tilde\kappa}\approx 1$
for the expected range of $\beta\approx 2-3$.
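The bound on $\tilde\kappa$ follows from the monotonic increase of $2(\beta+1)/(\beta+4)$ with $\beta$; a trivial check:

```python
# kappa_tilde(beta) = 2*(beta+1)/(beta+4): equals 1 at beta = 2,
# increases monotonically, and approaches 2 as beta -> infinity.
kt = lambda beta: 2.0 * (beta + 1.0) / (beta + 4.0)
print(kt(2.0), kt(3.0), kt(1e9))   # 1.0, 8/7 ~ 1.14, -> 2
```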
It may be instructive to compare the diffusion coefficient
of equation~(\ref{eqn:26}) with that derived from the
previously suggested theories including the QLT.
In weakly turbulent low-$\beta$ plasmas, the mean magnetic field
with its strength ${\bar B}$, which can bind charged particles
and assign the gyroradius of $r_{\rm g}=E/(|q|{\bar B})$,
provides a well-defined direction along the field line; therefore,
in the following discussion, we refer, for convenience, to
${\parallel}_{b}$ and ${\perp}_{b}$ as the parallel and perpendicular
directions to the mean magnetic field, respectively.
For a simplistic QLT, one sets an ideal environment in which the
turbulent Alfv\'en waves propagating along the mean field
line resonantly scatter the bound particles, when
$k_{{\parallel}_{b}}^{-1}\sim r_{\rm g}$, where $k_{{\parallel}_{b}}$
is the parallel wavenumber \citep{drury83,longair92}.
Assuming that the particles interact with the waves in the
inertial range of the turbulent spectrum with its index $\alpha_{b}$,
the parallel diffusion coefficient could be estimated as
\citep{biermann87,muecke01}
\begin{equation}
\kappa_{{\parallel}_{b}}\sim
{1\over{3\left(\alpha_{b}-1\right)\eta_{b}}}
{{cr_{\rm g}}\over
{\left(k_{{\parallel}_{b},{\rm min}}r_{\rm g}\right)^{\alpha_{b}-1}}}
\label{eqn:27}
\end{equation}
\noindent
for $\alpha_{b}\neq 1$ and
$r_{\rm g}\leq k_{{\parallel}_{b},{\rm min}}^{-1}$,
where $k_{{\parallel}_{b},{\rm min}}^{-1}$ reflects the
correlation length of the turbulence and $\eta_{b}$ ($\leq 1$)
defines the energy density ratio of the turbulent/mean field.
In the special case of $\alpha_{b}=1$, referred to as
the Bohm diffusion limit \citep{bohm49}, one gets the ordering
$\kappa_{{\parallel}_{b}}\sim\kappa_{\rm B}/\eta_{b}$,
where $\kappa_{\rm B}=cr_{\rm g}/3$ denotes the
Bohm diffusion coefficient for ultrarelativistic particles.
Considering the energy accumulation range of smaller $k_{{\parallel}_{b}}$
for the Kolmogorov turbulence with $\alpha_{b}=5/3$, \citet{zank98}
derived a modified coefficient that recovered the
scaling of equation~(\ref{eqn:27}) in the region of
$r_{\rm g}\ll k_{{\parallel}_{b},{\rm min}}^{-1}$.
As for the more complicated perpendicular diffusion, a phenomenological
hard-sphere scattering form of the coefficient is
$\kappa_{{\perp}_{b}}=\eta_{b}^{2}\kappa_{{\parallel}_{b}}$ in the
Bohm diffusion limit; and \citet{jokipii87} suggested a somewhat
extended version, $\kappa_{{\perp}_{b}}=\kappa_{{\parallel}_{b}}
/[1+(\lambda_{{\parallel}_{b}}/r_{\rm g})^{2}]$
(referred to as $\kappa_{\rm J}$ below), where
$\lambda_{{\parallel}_{b}}$ is the parallel mean free path (mfp).
A significantly improved theory of perpendicular diffusion
has recently been proposed by \citet{matthaeus03}, including
nonlinearity incorporated with the two-dimensional wavevector
$k_{{\perp}_{b}}$, whereupon for $\alpha_{b}=5/3$, \citet{zank04}
have derived an approximate expression of the corresponding
diffusion coefficient, although it still exhibits a somewhat
complicated form (referred to as $\kappa_{\rm Z}$).
On the other hand, within the present framework the gyroradius
of the injected energetic particles cannot be well defined,
because of $|\left<{\bf B}\right>|\simeq 0$ (\S\S~2.1 and 2.2).
Nonetheless, in order to make a fair comparison with the order
of the components of equation~(\ref{eqn:26}), the variable
${\bar B}$ is formally equated with $B=\left<|{\bf B}|^2\right>^{1/2}$.
In addition, the correlation length is chosen as
$k_{{\parallel}_{b},{\rm min}}\sim R^{-1}$,
corresponding to the setting in \S~2.3.
Then the ratio of $\kappa_{ii}$ for ${\tilde\kappa}=1$ to
equation~(\ref{eqn:27}) is found to take a value in the range of
\begin{equation}
{\kappa\over{\kappa_{{\parallel}_{b}}}}<\left(\alpha_{b}-1\right)
\left({1\over Z}{E\over {100~{\rm EeV}}}{{1~{\rm mG}}\over B}
{{100~{\rm pc}}\over{R}}\right)^{\alpha_{b}}
\label{eqn:28}
\end{equation}
\noindent
for the expected values of $\alpha$, $\beta\approx 2-3$.
Here $\kappa\equiv\kappa_{ii}$ and $q=Z|e|$ have been introduced.
Similarly, we get the scaling of
$\kappa/\kappa_{\rm B}\sim 10^{-1}(E/ZeBR)$, and
$\kappa/\kappa_{\rm J}\sim (E/ZeBR)^{1/3}$
for $\alpha_{b}=5/3$ and $\eta_{b}\sim 10^{-1}$
followed by $\lambda_{{\parallel}_{b}}\gg r_{\rm g}$.
Furthermore, considering the parameters given in
\citet{zank04}, which can be accommodated
with the above $\eta_{b}\sim 10^{-1}$, we also have
$\kappa/\kappa_{\rm Z}\sim (E/ZeBR)^{17/9}$ in the leading
order of $\kappa_{\rm Z}$, proportional to $r_{\rm g}^{1/9}$.
These scalings are valid for arbitrary species of
charged particles; for instance, setting $Z=1$ reflects
electrons, positrons, or protons (see footnote~1).
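The dimensionless factor $E/(ZeBR)$ in these scalings can be evaluated for the fiducial values appearing in equation~(\ref{eqn:28}); a Gaussian-unit conversion sketch (constants rounded to four digits):

```python
# E/(Z e B R) for E = 100 EeV, Z = 1, B = 1 mG, R = 100 pc (Gaussian units).
e_esu = 4.803e-10               # elementary charge [esu]
erg_per_eV = 1.602e-12
B = 1.0e-3                      # 1 mG in gauss
R = 100.0 * 3.086e18            # 100 pc in cm
E = 100.0e18 * erg_per_eV       # 100 EeV in erg

eBR_EeV = e_esu * B * R / erg_per_eV / 1.0e18
print(eBR_EeV)                  # ~ 93 EeV
print(E / (e_esu * B * R))      # E/(Z e B R) ~ 1.1 for the fiducial values
```

That the factor is of order unity for these fiducial values is consistent with the ratio in equation~(\ref{eqn:28}) being neither vanishing nor large there.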
Particularly, for $\kappa <\kappa_{{\parallel}_{b}}$ in
equation~(\ref{eqn:28}), the efficiency of the present
DSA is expected to be higher than that of the conventional one
based on the simplistic QLT invoking parallel diffusion
\citep{biermann87}.
This can likely be accomplished for high-$Z$ particles,
as well as electrons with lower maximum energies.
Here note that $\kappa$ cannot become arbitrarily small
with decreasing $E$, since the effects of cold particle trapping in
the local magnetic fields degrade the no-guide-field approximation
(eq.~[\ref{eqn:2}]); the lower limit of $E$ is relevant
to the injection condition called for by the present DSA.
More on these points is given in \S~3.
To apply the DSA model, one needs the effective
diffusion coefficient for the direction normal to the shock front,
referred to as the shock-normal direction.
For convenience, here we write down the coefficient for the
general case in which the current filaments are inclined by an
angle of $\phi$ with respect to the shock-normal direction.
In the tensor transformation of
$\kappa_{\mu\nu}^{\prime}=\Lambda_{\mu}^{\delta}\Lambda_{\nu}^{\epsilon}
\kappa_{\delta\epsilon}$, where
\begin{equation}
\kappa^{\prime}=\left(
\begin{array}{cc}
\kappa_{11}^{\prime} & \kappa_{12}^{\prime} \\
\kappa_{21}^{\prime} & \kappa_{22}^{\prime} \\
\end{array}
\right),
\label{eqn:29}
\end{equation}
\begin{equation}
\Lambda=\left(
\begin{array}{cc}
\cos\phi & -\sin\phi \\
\sin\phi & \cos\phi \\
\end{array}
\right),
\label{eqn:30}
\end{equation}
\noindent
we identify the shock-normal component $\kappa_{\rm n}$ with
$\kappa_{11}^{\prime}$.
It can be expressed as
\begin{equation}
\kappa_{{\rm n},\zeta}=\kappa_{\parallel ,\zeta}\left(
\cos^{2}\phi_{\zeta}+{\tilde\kappa}_{\zeta}\sin^{2}\phi_{\zeta}\right),
\label{eqn:31}
\end{equation}
\noindent
or simply as $\kappa_{{\rm n},\zeta}\approx\kappa_{\zeta}$ for
$\tilde\kappa_{\zeta}\approx 1$,
where the subscripts $\zeta={\rm I}$, ${\rm II}$ indicate
the upstream and downstream regions, respectively.
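The rotation in equations~(\ref{eqn:29}) and (\ref{eqn:30}) is a similarity transformation of the diagonal tensor of equation~(\ref{eqn:26}); a short check that the $(1,1)$ component reproduces equation~(\ref{eqn:31}) (the numbers are illustrative):

```python
import numpy as np

kpar, kperp, phi = 3.0, 2.0, 0.7           # illustrative kappa_par, kappa_perp, tilt angle
Lam = np.array([[np.cos(phi), -np.sin(phi)],
                [np.sin(phi),  np.cos(phi)]])
kappa = np.diag([kpar, kperp])

kappa_prime = Lam @ kappa @ Lam.T          # kappa'_{mu nu} = Lam kappa Lam^T
kappa_n = kpar * (np.cos(phi)**2 + (kperp / kpar) * np.sin(phi)**2)
print(kappa_prime[0, 0], kappa_n)          # equal: eq. (31) with kt = kperp/kpar
```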
The expression of equation~(\ref{eqn:31}) appears to be the same
as equation~(4) in \citet{jokipii87}.
However, note again that here $\parallel$ and $\perp$ refer
to the directions parallel and perpendicular to the linear
current filaments in an astrophysical jet (\S~2.1 and Fig.~1).
\section[]{PARTICLE ACCELERATION BY SHOCK IN MAGNETIZED\\* CURRENT FILAMENTS}
We consider the particle injection mechanism
that makes the present DSA scenario feasible, retaining
the validity of the quasi-linear approximation.
Then, using the diffusion coefficient (eq.~[\ref{eqn:26}]), we estimate
the DSA timescale for arbitrary species of charged particles and,
by taking the competing energy loss processes into account, calculate
the highest achievable energies of the particles in astrophysical filaments.
\subsection[]{\it The Conception of Energy Hierarchy, Transition, and
Injection of\\* Cosmic-Ray Particles}
In the usual DSA context, equation~(\ref{eqn:31}), which
invokes equation~(\ref{eqn:26}), determines the cycle time
for one back-and-forth of cosmic-ray particles across
the shock front, which is used below for evaluation of
the mean acceleration time \citep[\S~3.2;][]{gaisser90}.
Here we note that equation~(\ref{eqn:26}) is valid for
a high-energy regime in which the test particles with $E$ are unbound
to the local magnetic fields, so as to experience the nongyrating motion.
As shown below, this limitation can be deduced from the validity condition
of the quasi-linear approximation that has been employed in \S~2.2.
Using equations~(\ref{eqn:4}) and (\ref{eqn:7}), the validity condition
$\left< f_{\bf p}\right>\gg|\delta f_{\bf p}|$ can be rewritten as
\begin{equation}
\left< f_{\bf p}\right>\gg
\left|{q\over c}\int{\rm d}^2{\bf k}e^{i{\bf k}\cdot{\bf r}}
\left\{
\left({\bf k}\cdot{\bf v}\right)^{-1}
\left[{\bf v}\times\left({\bf k}\times{\bf A}_{\bf k}\right)\right]
\cdot{{\partial\left< f_{\bf p}\right>}\over{\partial{\bf p}}}
\right\}\right|,
\label{eqn:32}
\end{equation}
\noindent
where the off-resonance interaction with the quasi-static
fluctuations has been considered (\S~2.2).
For the momentum distribution function of
$\left<f_{\bf p}\right>\propto |{\bf p}|^{-\beta^{\prime}}$ for
the statistically accelerated particles with $E=|{\bf p}|c$
(\S~2.4), the RHS of equation~(\ref{eqn:32}) is of the order of
$\sim[|q{\bf A}({\bf r})|/(c|{\bf p}|)]\left<f_{\bf p}\right>$
for $\beta^{\prime}\sim O(1)$.
Therefore, we see that within the present framework, the
quasi-linear approximation is valid for the test particles
with an energy of $E\gg |qA({\bf r})|$, in a confinement region.
Note that this relation ensures the condition that
the gyroradius for the local field strength of $|{\bf B}({\bf r})|$
greatly exceeds the filament size (coherence length) of order
$\sim k^{-1}$, namely, $E/|q{\bf B}({\bf r})|\gg k^{-1}$
[equivalently, $E\gg |qA({\bf r})|$], except for a
marginal region of $k\sim R^{-1}$.
Obviously, this means that in the high-energy regime
of $E\gg |qA|$, the test particles are not strongly
deflected by a local magnetic field accompanying a fine
filament with its transverse scale of $\sim k^{-1}$.
On the other hand, in the cold regime of $E\ll |qA|$, the test
particles are tightly bound to a local magnetic field having the
(locally defined) mean strength, violating equation~(\ref{eqn:2}).
Here it is expected that the bound particles can diffuse along the
local field line, and hence, diffusion theories for a low-$\beta$ plasma
are likely to be more appropriate, rather than the present theory.
Summarizing the above discussions, there seem to exist
two distinct energy regimes for the test particles confined
in the system comprising numerous magnetized filaments: the
higher energy regime of $E\gg |qA|$, in which the particles
are free from the local magnetic traps, and the lower
energy regime of $E\ll |qA|$, in which the particles are bound
to the local fields, as compared to a low-$\beta$ state.
The hierarchy is illustrated in Figure~2, indicating
the characteristic trajectories of those particles.
When shock propagation is allowed, as seen in actual AGN jets,
the shock accelerator can energize the particles in each energy level.
At the moment, we are particularly concerned with EHE particle production
by a feasible scenario according to which energetic free particles,
unbound to small-scale
structure of the magnetized filaments, are further energized by the shock.
In this aspect, the particle escape from magnetically
bound states, due to another energization mechanism,
can be regarded as the injection of preaccelerated
particles into the concerned diffusive shock accelerator.
If the preaccelerator, as well, is of DSA, relying on the gyromotion
of bound particles \citep{drury83,biermann87,zank04}, the preaccelerator
also calls for the injection (in a conventional sense) in a far lower
energy level, owing to, e.g., the Maxwellian tail, or the energization
of particles up to the energies where the pre-DSA turns on
\citep[for a review, see][]{berezinskii90}.
The energy required for this injection, the so-called injection energy,
could be determined by, e.g., the competition with collisional resistance.
In order to distinguish from this commonly used definition of
``injection,'' we refer to the corresponding one, owing to the particle escape
from the magnetic traps, as the ``transition injection,'' in
analogy to the bound-free transition in atomic excitation.
The energy required to accomplish the transition
injection is formally denoted as $E_{\rm inj}\sim |qA|_{\rm th}$,
where $|qA|_{\rm th}$ represents a threshold potential energy.
That is, the particles with charge $q$ and energy exceeding
$E_{\rm inj}$ are considered to meander over large scales,
experiencing successive small deflections by the fields of
many filaments (Fig.~2), such that the present theory
is adequate for describing their diffusion.
This scattering property can be compared to that for the
conventional QLT in low-$\beta$ regimes: an unperturbed (zeroth order)
guiding center trajectory of gyrating particles bound to
a mean magnetic field must be a good approximation for
many coherence lengths of particle scatterer.
If both the injection and transition injection work,
the multistep DSA can be realized.
In the stage of $E\ll |qA({\bf r})|$, many acceleration scenarios
that have been proposed thus far (DSA: e.g., \citealt{drury83,biermann87};
shock drift acceleration: e.g., \citealt{webb83}; some versions of the
combined theories: e.g., \citealt{jokipii87,ostrowski88}; for a review,
see, e.g., \citealt{jones91}) can be candidates for the mechanism
of the preacceleration up to the energy range of
$E\sim |qA({\bf r})|$, although before this energy level is reached,
the acceleration, especially for electrons, might in some cases be
quenched by energy losses, such as
synchrotron cooling, collisions with photons, and so on.
The relevant issues for individual specific situations
are somewhat beyond the scope of this paper
(observability is discussed in \S~4).
Here we just briefly mention that in the termination regions of large-scale
jets where the bulk kinetic energy is significantly converted into the
magnetic and particle energies, a conventional DSA mechanism involving
large-scale MHD turbulence might work up to EHE ranges
(see Honda \& Honda [2004b] for an updated scenario of oblique
DSA of protons).
\subsection[]{\it Timescale of the Diffusive Shock Acceleration}
In the following, we focus on the DSA of energetic
free particles after the transition injection.
Let us consider a typical case of $\phi_{\rm I}=\phi_{\rm II}=0^{\circ}$
in equation~(\ref{eqn:31}), reflecting a reasonable situation that
a shock wave propagates along the jet comprising linear filaments.
Since the vectors of the random magnetic fields are on
the plane transverse to the current filaments, this
plane is perpendicular to the shock-normal direction.
That is, the shock across the perpendicular magnetic fields is considered.
In this case, no irregularity of magnetic surfaces in the
shock-normal direction exists, because of $k_{\parallel}=0$.
However, this does {\it not} mean that the particle flux across
the shock surface is free-streaming rather than diffusive;
note that the particles crossing the local fields with
nonsmall pitch angles suffer orthogonal deflections.
In any case, the injected particles are off-resonantly scattered by
the filamentary turbulence, to diffuse, migrating back and forth
many times between the upstream and downstream regions of the shock.
As a consequence, a small fraction of them can be
stochastically accelerated to very high energy.
This scenario is feasible, as long as the filamentary
structure can exist around the discontinuity, as seen in
a kinetic simulation for shock propagation \citep{nishikawa03}
and an actual filamentary jet \citep{owen89}.
The timescale of this type of DSA is of the order of the
cycle time for one back-and-forth divided by the energy
gain per encounter with the shock \citep{gaisser90}.
Here the cycle time is related to the mean residence time of
particles (in regions I and II), which is determined by the
diffusive particle flux across the shock, dependent on
$\kappa_{{\rm n},\zeta}$ (eq.~[\ref{eqn:31}]).
For the moment, the shock speed is assumed to be nonrelativistic.
Actually, this approximation is reasonable, since the discrete knots
(for FR~I) and hot spots (for FR~II),
which are associated with shocks \citep[e.g.,][]{biretta83,carilli96},
preferentially move at a nonrelativistic speed, slower than the speed of
the relativistic jets \citep[e.g.,][]{meisenheimer89,biretta95}.
When taking the first-order Fermi mechanism into consideration
for calculation of the energy gain, the mean acceleration
time can be expressed as \citep{lagage83a,lagage83b,drury83}
\begin{equation}
t_{\rm acc}\simeq{3\over{U_{\rm I}-U_{\rm II}}}
\left({\kappa_{\rm n,I}\over U_{\rm I}}+
{\kappa_{\rm n,II}\over U_{\rm II}}\right),
\label{eqn:33}
\end{equation}
\noindent
where $U_{\rm I}$ and $U_{\rm II}$ are the flow speed of the
upstream and downstream regions in the shock rest frame, respectively.
The present case of $\phi_{\zeta}=0$ (in eq.~[\ref{eqn:31}])
provides $\kappa_{{\rm n},\zeta}=\kappa_{\parallel,\zeta}$, where
$\kappa_{\parallel,\zeta}$ is given in equation~(\ref{eqn:26}).
Here note the relation of $B_{\rm I}=B_{\rm II}$,
derived from the condition that the current density,
${\bf J}_{\zeta}\sim\nabla\times {\bf B}_{\zeta}$,
must be continuous across the shock front.
When assuming $\alpha_{\rm I}=\alpha_{\rm II}$ and
$\beta_{\rm I}=\beta_{\rm II}$, we arrive at the result
\begin{equation}
t_{a,{\rm acc}}\simeq{{3\sqrt{6}\pi\alpha r\left( r+1\right)}\over
{8\left(\alpha -1\right)\left(\beta_{a}+1\right)\left(\beta_{a}+2\right)
\left( r-1\right)}}{cE_{a}^{2}\over{q_{a}^{2}B^{2}RU^{2}}},
\label{eqn:34}
\end{equation}
\noindent
where the definitions of $\alpha\equiv\alpha_{\zeta}$,
$\beta_{a}\equiv\beta_{\zeta}$, $B\equiv B_{\zeta}$, and
$U\equiv U_{\rm I}=rU_{\rm II}$ have been introduced.
Equation~(\ref{eqn:34}) is valid for arbitrary species of particles ``$a$''
having energy $E_{a}$, spectral index $\beta_{a}$, and charge $q_{a}$.
Note that for the plausible ranges of the values of $\alpha$, $\beta_{a}$,
and $r$, the value of equation~(\ref{eqn:34}) does not significantly change.
The $\phi_{\zeta}$ dependence is also small, because of
${\tilde\kappa}\approx 1$, reflecting three-dimensional rms deflection of
unbound particles (\S~3.1 and Fig.~2).
In the scaling laws shown below, for convenience we use the
typical values of $\alpha=2$ \citep{montgomery79} and $r=4$
(for the strong shock limit), although we indicate, in equation~(\ref{eqn:39}),
the parameter dependence of the numerical factor.
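As a numerical sketch (ours, not part of the original derivation), the dimensionless prefactor of equation~(\ref{eqn:34}) can be evaluated over plausible parameter ranges, confirming the statement above that its value does not change significantly; the ranges sampled below are our own illustrative choices.

```python
# Evaluate the dimensionless prefactor of eq. (34),
# f = 3*sqrt(6)*pi*alpha*r*(r+1) / [8*(alpha-1)*(beta+1)*(beta+2)*(r-1)],
# at the fiducial values and over plausible parameter ranges.
from math import sqrt, pi

def prefactor(alpha, beta, r):
    """Dimensionless factor multiplying c*E^2/(q^2 B^2 R U^2) in eq. (34)."""
    return (3 * sqrt(6) * pi * alpha * r * (r + 1)) / (
        8 * (alpha - 1) * (beta + 1) * (beta + 2) * (r - 1))

f_fiducial = prefactor(2.0, 3.0, 4.0)   # alpha = 2, beta_i = 3, strong shock r = 4

values = [prefactor(a, b, r)
          for a in (1.5, 2.0, 3.0)      # turbulence spectral index alpha
          for b in (2.0, 3.0)           # particle spectral index beta_a
          for r in (2.5, 4.0)]          # shock compression ratio
spread = max(values) / min(values)      # total variation over the ranges
```

The fiducial factor is about 1.9, and over these ranges it varies only by a factor of a few, consistent with the weak parameter dependence claimed in the text.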
\subsection[]{\it The Highest Energy of an Accelerated Ion}
In equation~(\ref{eqn:34}) for ions ($a=$``i''), we set $q_{\rm i}=Z|e|$
(see footnote~1) and $\beta_{\rm i}=3$ \citep[e.g.,][]{stecker99,demarco03}.
By balancing equation~(\ref{eqn:34}) with the timescale of the
most severe energy loss process, we derive the maximum possible
energy defined as $E_{\rm i,max}\equiv E_{\rm i}$.
In the environment of astrophysical filaments including extragalactic jets,
the phenomenological time balance equation can be expressed as
\begin{equation}
t_{{\rm i},{\rm acc}}={\rm min}\left(t_{\rm sh},t_{\rm i, syn},
t_{\rm n\gamma},t_{\rm nn^{\prime}}\right),
\label{eqn:35}
\end{equation}
\noindent
where $t_{\rm sh}$, $t_{\rm i,syn}$, $t_{\rm n\gamma}$,
and $t_{\rm nn^{\prime}}$ stand for the timescales of the shock propagation
(\S~3.3.1; eq.~[\ref{eqn:36}]), the synchrotron loss for ions
(\S~3.3.2; eq.~[\ref{eqn:40}]), the photodissociation of the nucleus
(\S~3.3.3; e.g., eq.~[\ref{eqn:43}]), and the collision of nucleus
``${\rm n}$'' with target nucleus ``${\rm n}^{\prime}$''
(\S~3.3.4; e.g., eq.~[\ref{eqn:46}]), respectively.
In addition, the energy constraint ascribed to the
spatial scale, i.e., the quench caused by the particle
escape, should also be taken into account (\S~3.3.5).
The individual cases are investigated below.
\subsubsection[]{\it The Case Limited by the Shock Propagation Time}
In the actual circumstances of astrophysical jets,
the propagation time of a shock through the jet, $t_{\rm sh}$,
restricts the maximum possible energy of accelerated particles.
The shock propagation time may be interpreted as the age of knots or
hot spots \citep{hh04b}, which can be crudely estimated as
$\sim L/U_{\rm prop}$, where $L$ represents a distance from
the central engine to the knot or hot spot being considered
and $U_{\rm prop}$ an average speed of their proper motion.
When assuming $U\sim U_{\rm prop}$, we get the scaling
\begin{equation}
t_{\rm sh}\sim 1\times 10^{12}{L\over{1~{\rm kpc}}}{{0.1c}\over{U}}~~{\rm s}.
\label{eqn:36}
\end{equation}
\noindent
For the case in which the shock is currently alive, as is observed in AGN
jets, $t_{\rm sh}$ should not be identified with the ``lifetime'' of the
accelerator that is considered in SNR shocks \citep[e.g.,][]{gaisser90}.
It is mentioned that in AGN jets, the timescale of
adiabatic expansion loss might be estimated as
$t_{\rm ad}\approx 3L/\left( 2\Gamma U_{r}\right)$,
where $\Gamma$ and $U_{r}$ represent the
Lorentz factor of jet bulk flows and the speed of
radial expansion, respectively \citep{muecke03}.
The fact that the jets are collimating well with an
opening angle of $\phi_{\rm oa}\lesssim 10\degr$ means
$U_{\rm prop}/U_{r}\gtrsim {\cal O}(10)$; thereby,
$t_{\rm sh}\lesssim t_{\rm ad}$ for $\Gamma\lesssim{\cal O}(10)$.
Thus, it is sufficient to pay attention to the limit
due solely to the shock propagation time.
These circumstances are also in contrast with those in the SNRs,
where the flows are radially expanding without collimation,
and the shock propagation time (or lifetime) just reflects
the timescale of adiabatic expansion loss \citep[e.g.,][]{longair92}.
In equation~(\ref{eqn:35}), let us first consider
the case of $t_{\rm i, acc}=t_{\rm sh}$.
By equating (\ref{eqn:34}) with (\ref{eqn:36}), we obtain the following
expression for the maximum possible energy of an accelerated ion:
\begin{equation}
E_{{\rm i},{\rm max}}\sim 70~Z{B\over{1~{\rm mG}}}
\left({L\over{1~{\rm kpc}}}\right)^{1/2}
\left({R\over{100~{\rm pc}}}\right)^{1/2}
\left({U\over{0.1c}}\right)^{1/2}~~{\rm EeV}.
\label{eqn:37}
\end{equation}
\noindent
Note the ratio of $L/R\sim 360/(\pi\phi_{\rm oa}\sin\phi_{\rm va})\sim 10-100$
for the narrow opening angle of AGN jets of
$\phi_{\rm oa}\sim 1\degr-10\degr$ and not-so-small viewing angle
$\phi_{\rm va}$ (e.g., for the M87 jet, $L\simeq 23R-33R$ for
$\phi_{\rm oa}\simeq 6\fdg 9$ [\citealt{reid89}] and
$\phi_{\rm va}=42\fdg 5\pm 4\fdg 5$ [\citealt{biretta95}]
or $30\degr-35\degr$ [\citealt{bicknell96}]).
Equation~(\ref{eqn:37}) (and eq.~[\ref{eqn:48}] shown below)
corresponds to the modified version of the simple scaling
originally proposed by \citet{hillas84}.
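As a rough consistency check (our own, in Gaussian cgs units with assumed fiducial parameters), equating the acceleration time of equation~(\ref{eqn:34}) with $t_{\rm sh}\sim L/U$ and solving for the energy reproduces the $\sim 70\,Z$~EeV prefactor of equation~(\ref{eqn:37}):

```python
# Solve t_acc = f c E^2 / (q^2 B^2 R U^2) = L/U for E,
# i.e. E = q B sqrt(R L U / (f c)), with alpha = 2, beta_i = 3, r = 4.
from math import sqrt, pi

c = 2.998e10          # speed of light, cm/s
e = 4.803e-10         # elementary charge, esu
pc = 3.086e18         # parsec in cm
erg_to_eV = 1.0 / 1.602e-12

# numerical factor of eq. (34) for alpha = 2, beta = 3, r = 4 (~1.92):
f = 3 * sqrt(6) * pi * 2 * 4 * 5 / (8 * 1 * 4 * 5 * 3)

Z, B, R, L, U = 1, 1e-3, 100 * pc, 1000 * pc, 0.1 * c   # fiducial values

E_max_erg = Z * e * B * sqrt(R * L * U / (f * c))
E_max_EeV = E_max_erg * erg_to_eV / 1e18                # ~70 EeV for Z = 1
```

The result lands near 70~EeV, in agreement with the quoted scaling.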
Concerning the abundance of high-$Z$ elements and their acceleration
to EHE regimes, the following points 1--4 may be worth noting:
\begin{enumerate}
\item Radial metallicity gradients are expected to be enhanced in
elliptical galaxies \citep[e.g.,][]{kobayashi04}.
Along with this, a significant increase of heavy elements has been
discovered in the central region of the nearby giant elliptical galaxy
M87 \citep{gastaldello02}, which contains a confirmed jet.
\item A variety of heavy ions including iron have been detected
in a microquasar jet \citep[SS\,433;][]{kotani96}.
\item The Haverah Park data favor proton primaries below
an energy of $\sim 50~{\rm EeV}$, whereas they appear to favor
a heavier composition above it \citep{ave00}.
\item The recent Fly's Eye data of $\sim 320~{\rm EeV}$
are compatible with the assumption of a hadron primary between
proton and iron nuclei \citep{risse04}.
\end{enumerate}
With reference to this observational evidence, we take the
possibility of acceleration of (or deceleration by) heavy particles
into consideration and indicate the charge ($Z$) and/or atomic number
($A$) dependence of the maximum possible energies and loss timescales.
\subsubsection[]{\it The Case Limited by the Synchrotron Cooling Loss}
The particles deflected by the random magnetic fields tend to emit
unpolarized synchrotron photons, which can be a dominant cooling process.
For relativistic ions, the timescale can be written as
$t_{\rm i, syn}\simeq 36\pi^{2}(A/Z)^{4}
[m_{\rm p}^{4}c^{7}/(e^{4}B^{2}E_{\rm i})]$,
where $m_{\rm p}$ denotes the proton rest mass \citep{gaisser90}.
In this expression, the energy of an accelerated ion, $E_{\rm i}$,
can be evaluated by equating $t_{\rm i,acc}$ with $t_{\rm i,syn}$.
That is, we have
\begin{equation}
{E_{\rm i}\over {Am_{\rm p}c^{2}}}=\xi(\alpha,\beta_{\rm i},r)
\left[{A\over Z^{2}}{{m_{\rm p}}\over{m_{\rm e}}}
{R\over r_{0}}\left({U\over c}\right)^{2}\right]^{1/3}.
\label{eqn:38}
\end{equation}
\noindent
Here the dimensionless factor $\xi$ is given by
\begin{equation}
\xi(\alpha,\beta,r)=\left[{{4\sqrt{6}\left(\alpha -1\right)
\left(\beta +1\right)\left(\beta +2\right)\left( r-1\right)}
\over{\alpha r\left( r+1\right)}}\right]^{1/3},
\label{eqn:39}
\end{equation}
\noindent
and $r_{0}=e^{2}/(4\pi m_{\rm e}c^{2})$ stands for the classical
radius of the electron, where $m_{\rm e}$ is the electron rest mass.
Substituting equation~(\ref{eqn:38}) into the expression of
$t_{\rm i, syn}$, the cooling timescale can be expressed as
a function of the physical parameters of the target object.
As a result, we find
\begin{equation}
t_{\rm i, syn}\sim 3\times 10^{15}
{1\over Z^{2/3}}\left({A\over{2Z}}\right)^{8/3}
\left({{1~{\rm mG}}\over B}\right)^{2}
\left({{100~{\rm pc}}\over{R}}\right)^{1/3}
\left({{0.1c}\over{U}}\right)^{2/3}~~{\rm s}.
\label{eqn:40}
\end{equation}
\noindent
Practically, this expression can be used in equation~(\ref{eqn:35})
for making a direct comparison with the other loss timescales.
For example, in the FR sources with $B\lesssim 1~{\rm mG}$
\citep{owen89,meisenheimer89,meisenheimer96,rachen93},
we have $t_{\rm i,syn}\gg t_{\rm sh}$,
so that the synchrotron cooling loss is ineffective.
It should, however, be noted that in blazars with $B\gtrsim 0.1~{\rm G}$
\citep{kataoka99,muecke01,aharonian02}, equation~(\ref{eqn:40}) becomes,
in some cases, comparable to equation~(\ref{eqn:36}).
When the equality of $t_{\rm i,acc}=t_{\rm i,syn}$ is fulfilled
in equation~(\ref{eqn:35}), equation~(\ref{eqn:38}) just provides
the maximum possible energy of the accelerated ion, which scales as
\begin{equation}
E_{\rm i,max}\sim 2A^{2/3}\left({A\over{2Z}}\right)^{2/3}
\left({R\over{100~{\rm pc}}}\right)^{1/3}
\left({U\over{0.1c}}\right)^{2/3}~~{\rm ZeV}.
\label{eqn:41}
\end{equation}
\noindent
The important point is that $t_{\rm i, acc}$ and $t_{\rm i, syn}$
are both proportional to $B^{-2}$, so that the
$B$ dependence of $E_{\rm i, max}$ is canceled out.
This property also appears in the case of electron acceleration
attenuated by the synchrotron cooling (\S~3.4.1).
In equation~(\ref{eqn:41}), it appears that for heavier ions,
$E_{\rm i, max}$ takes a larger value.
In the actual situation, however, the extremely energetic ions possess
a long mfp, and therefore, acceleration may be
quenched by the particle escape, as discussed in \S~3.3.5.
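To illustrate the synchrotron-limited scaling numerically (our own evaluation, adopting the paper's definition $r_{0}=e^{2}/(4\pi m_{\rm e}c^{2})$ and standard constants), equation~(\ref{eqn:38}) can be evaluated for a proton and for iron at the fiducial $R=100$~pc and $U=0.1c$; the proton value should land near the $\sim 2$~ZeV prefactor of equation~(\ref{eqn:41}):

```python
# Eq. (38): E_i / (A m_p c^2) = xi * [ (A/Z^2) (m_p/m_e) (R/r_0) (U/c)^2 ]^(1/3)
from math import sqrt, pi

mp_me = 1836.15                   # proton-to-electron mass ratio
m_p_c2_eV = 938.27e6              # proton rest energy, eV
r0 = 2.818e-13 / (4 * pi)         # e^2/(4 pi m_e c^2), cm (paper's convention)
pc = 3.086e18

def xi(alpha, beta, r):
    """Dimensionless factor of eq. (39)."""
    return (4 * sqrt(6) * (alpha - 1) * (beta + 1) * (beta + 2) * (r - 1)
            / (alpha * r * (r + 1))) ** (1.0 / 3.0)

def E_syn_eV(A, Z, R_cm, U_over_c, alpha=2.0, beta=3.0, r=4.0):
    """Synchrotron-limited ion energy of eq. (38), in eV."""
    bracket = (A / Z**2) * mp_me * (R_cm / r0) * U_over_c**2
    return xi(alpha, beta, r) * bracket ** (1.0 / 3.0) * A * m_p_c2_eV

E_p_ZeV = E_syn_eV(1, 1, 100 * pc, 0.1) / 1e21     # proton: ~1.5 ZeV
E_Fe_ZeV = E_syn_eV(56, 26, 100 * pc, 0.1) / 1e21  # iron: a few tens of ZeV
```

Both values agree with equation~(\ref{eqn:41}) to within the quoted rounding, and confirm that heavier ions formally reach higher energies.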
\subsubsection[]{\it The Case Limited by the Collision with Photons}
Here we focus on the proton-photon collision
that engenders a pion-producing cascade.
The characteristic time of the collision depends on the target
photon spectrum $n(\epsilon_{\rm ph})$ in the acceleration site, where
$n(\epsilon_{\rm ph})$ is the number density of photons per unit energy
interval for photon energy $\epsilon_{\rm ph}$.
For $n(\epsilon_{\rm ph})\propto \epsilon_{\rm ph}^{-2}$
\citep[e.g.,][]{bezler84}, typical for the FR sources \citep{rachen93},
the timescale can be expressed as
$t_{{\rm p}\gamma}\sim[u_{\rm m}/(\chi u_{\rm ph})]t_{\rm p,syn}$,
where $\chi\sim 200$ for the average cross section of
$\sigma_{\gamma {\rm p}}\sim 900~{\rm\mu barns}$ \citep{biermann87},
$u_{\rm ph}$ denotes the average energy density of target photons, and
$t_{\rm p,syn}=t_{\rm i,syn}|_{A=Z=1}$ (the subscript ``p'' indicates proton).
Thus, the expression of $t_{{\rm p}\gamma}$ includes
$E_{\rm p}$, i.e., the energy of the accelerated proton.
This can be evaluated by equating $t_{\rm p,acc}$
with $t_{{\rm p}\gamma}$, to have the form of
\begin{equation}
{E_{\rm p}\over {m_{\rm p}c^{2}}}={{\xi(\alpha,\beta_{\rm i},r)}\over{
\left(\chi\eta_{u}\right)^{1/3}}}
\left[{{m_{\rm p}}\over{m_{\rm e}}}{R\over r_{0}}
\left({U\over c}\right)^{2}\right]^{1/3},
\label{eqn:42}
\end{equation}
\noindent
where the definition $\eta_{u}\equiv u_{\rm ph}/u_{\rm m}$
has been introduced.
Substituting equation~(\ref{eqn:42}) into the expression of
$t_{{\rm p}\gamma}$, we obtain the following scaling of
the photomeson cooling time:
\begin{equation}
t_{{\rm p}\gamma}\sim 6\times 10^{15}
\left({200\over\chi}\right)^{2/3}\eta_{u}^{1/3}
{{10^{-10}~{\rm erg}~{\rm cm}^{-3}}
\over{u_{\rm ph}}}\left({{100~{\rm pc}}\over{R}}\right)^{1/3}
\left({{0.1c}\over{U}}\right)^{2/3}~~{\rm s}.
\label{eqn:43}
\end{equation}
\noindent
Note that for $\eta_{u}=\chi^{-1}\sim 5\times 10^{-3}$,
we have $t_{{\rm p}\gamma}=t_{\rm p, syn}$.
If the equality of $t_{\rm p,acc}=t_{{\rm p}\gamma}$ is
satisfied in equation~(\ref{eqn:35}), then equation~(\ref{eqn:42})
gives the maximum possible energy, which scales as
\begin{equation}
E_{\rm p, max}\sim 200
\left({200\over\chi}\right)^{1/3}
\left(1\over\eta_{u}\right)^{1/3}
\left({R\over{100~{\rm pc}}}\right)^{1/3}
\left({U\over{0.1c}}\right)^{2/3}~~{\rm EeV}.
\label{eqn:44}
\end{equation}
\noindent
For $\eta_{u}=\chi^{-1}$, equation~(\ref{eqn:44}) is
identical with equation~(\ref{eqn:41}) for $A=Z=1$.
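The stated coincidence can be checked by simple arithmetic (our own, using the quoted prefactors of eqs.~[\ref{eqn:41}] and [\ref{eqn:44}] at the fiducial $R$ and $U$):

```python
# For eta_u = 1/chi, eq. (44) should coincide with eq. (41) at A = Z = 1.
chi = 200.0
eta_u = 1.0 / chi

E44_EeV = 200.0 * (200.0 / chi) ** (1 / 3) * (1.0 / eta_u) ** (1 / 3)  # eq. (44)
E41_EeV = 2000.0 * 0.5 ** (2 / 3)                                      # eq. (41), A = Z = 1
ratio = E44_EeV / E41_EeV   # should be ~1 up to rounding of the prefactors
```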
\subsubsection[]{\it The Case Limited by the Collision with Particles}
The nucleus-nucleus collisions involving spallation reactions
can also be a competitive process in high-density regions.
For proton-proton collision, the timescale can be simply evaluated by
$t_{\rm pp^{\prime}}=(n_{\rm p^{\prime}}\sigma_{\rm pp^{\prime}}c)^{-1}$,
where $n_{\rm p^{\prime}}$ is the number density of target protons, and
$\sigma_{\rm pp^{\prime}}\approx 40~{\rm mbarns}$ denotes the cross section
in high-energy regimes.
The timescale can be rewritten as
\begin{equation}
t_{\rm pp^{\prime}}\simeq 8.3\times 10^{14}
{{1~{\rm cm}^{-3}}\over{n_{\rm p^{\prime}}}}~~{\rm s}.
\label{eqn:45}
\end{equation}
\noindent
It is found that for tenuous jets with $n_{\rm p^{\prime}}\ll 1~{\rm cm}^{-3}$,
the value of equation~(\ref{eqn:45}) is larger than the conceivable value
of equation~(\ref{eqn:36}); that is, the collisional loss is ineffective.
For the collision of an accelerated proton with a nonproton nucleus,
the timescale can be evaluated by the analogous notation,
$t_{\rm p{\rm N}^{\prime}}=(n_{A^{\prime}}\sigma_{{\rm p}A^{\prime}}c)^{-1}$,
where $n_{A^{\prime}}$ is the fractional number density of the
target nuclei having atomic number $A^{\prime}>1$.
Here we use an empirical scaling of the cross section,
$\sigma_{{\rm p}A^{\prime}}\approx\pi r_{0}^{2}A^{\prime 2/3}$,
where $r_{0}\simeq 1.4\times 10^{-13}~{\rm cm}$,
although the value of $r_{0}$ may be an overestimate for
very high energy collisions \citep[e.g.,][]{burbidge56}.
Combining $t_{\rm p{\rm N}^{\prime}}$ with $t_{\rm pp^{\prime}}$,
in general the timescale for collision of a proton with a nucleus
of an arbitrary composition can be expressed as
\begin{equation}
t_{\rm pn^{\prime}}\simeq 5.4\times 10^{14}
{1\over{0.65n_{\rm p^{\prime}}+
\sum_{A^{\prime}>1}n_{A^{\prime}}A^{\prime 2/3}}}~~{\rm s},
\label{eqn:46}
\end{equation}
\noindent
where $n_{\rm p^{\prime}}$ and $n_{A^{\prime}}$ are
both in units of ${\rm cm}^{-3}$.
In equation~(\ref{eqn:35}), we consider the case of
$t_{\rm p,acc}=t_{\rm pn^{\prime}}$.
By equating (\ref{eqn:34}) with (\ref{eqn:46}), we obtain the following
expression for the maximum possible energy of an accelerated proton:
\begin{equation}
E_{\rm p,max}\sim 200
\left({{100~{\rm cm}^{-3}}\over{n_{\rm p^{\prime}}+1.5
\sum_{A^{\prime}>1}n_{\rm A^{\prime}}A^{\prime 2/3}}}\right)^{1/2}
{B\over{1~{\rm mG}}}\left({R\over{100~{\rm pc}}}\right)^{1/2}
{U\over{0.1c}}~~{\rm EeV}.
\label{eqn:47}
\end{equation}
\noindent
As for the collision of an arbitrary accelerated nucleus with a target nucleus,
we can analogously estimate $t_{\rm nn^{\prime}}$ and $E_{\rm i,max}$.
In particular, the heavier nucleus--proton collision is more important,
since its timescale $t_{\rm np^{\prime}}$ is of the order of
$t_{\rm pp^{\prime}}/A^{2/3}$: for larger $A$ and $n_{\rm p^{\prime}}$,
it can be comparable to the other loss timescales.
For example, the parameters of $A=56$ (iron) and
$n_{\rm p^{\prime}}\sim 100~{\rm cm}^{-3}$ lead to
$t_{\rm np^{\prime}}\sim 4\times 10^{11}~{\rm s}$.
For the case of $t_{\rm i,acc}=t_{\rm np^{\prime}}$
in equation~(\ref{eqn:35}), we have the scaling of
$E_{\rm i,max}\sim 0.6Z^{2/3}(2Z/A)^{1/3}E_{\rm p,max}$,
where $E_{\rm p,max}$ is of equation~(\ref{eqn:47}) for
$n_{\rm p^{\prime}}\gg \sum_{A^{\prime}>1}n_{A^{\prime}}A^{\prime 2/3}$.
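These collision timescales follow from elementary $t=1/(n\sigma c)$ estimates; the short check below (ours, with the cross sections quoted in the text) reproduces equation~(\ref{eqn:45}) and the $t_{\rm np^{\prime}}\sim 4\times 10^{11}$~s value quoted for iron in a dense medium:

```python
# t = 1/(n sigma c) with sigma_pp' = 40 mbarns and
# sigma for a nucleus on protons ~ pi r_0^2 A^(2/3), r_0 = 1.4e-13 cm.
from math import pi

c = 2.998e10                 # cm/s
sigma_pp = 40e-27            # cm^2
r0 = 1.4e-13                 # cm, empirical radius in the cross-section scaling

t_pp = 1.0 / (1.0 * sigma_pp * c)   # n_p' = 1 cm^-3; eq. (45), ~8.3e14 s

# iron (A = 56) colliding with protons of density n_p' = 100 cm^-3:
t_Fe_p = 1.0 / (100.0 * pi * r0**2 * 56 ** (2 / 3) * c)   # ~4e11 s
```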
\subsubsection[]{\it Quenching by Particle Escape}
The particle escape also limits the acceleration; that is, the
spatial scale of the system imposes another energy constraint.
Relating to this point, in \S~2.4 we found the relation of
${\tilde\kappa}\approx 1$, meaning that the anisotropy
of the spatial diffusion coefficient is small.
It follows that the radial size of the jet (rather than $L$)
affects the particle confinement.
Recall here that in the interior of a jet the magnetic
field vectors tend to be canceled out, whereas around the envelope the
uncanceled, large-scale ordered field can appear \citep{hh04a}.
From the projected view of the jet, on both sides of the
envelope the magnetic polarities are reversed.
The spatially decaying properties of such an envelope field in the
external tenuous medium or vacuum might influence the transverse
diffusion of particles.
The key property that should be recalled is that
for $r\gg k^{-1}$ distant from a filament, the magnetic
field strength is likely to slowly decay, being proportional
to $\sim (kr)^{-1}$ \citep{honda00,honda02}.
It is, therefore, expected that as long as the radial size of the
largest filament, i.e., correlation length, is comparable to the
radius of the jet (\S~2.3), the long-range field pervades the
exterior of the jet, establishing the ``magnetotail'' with
the decay property of $\sim (k_{\rm min}r)^{-1}$ for
$r\gg k_{\rm min}^{-1}(\sim R)$.
In fact, in a nearby radio galaxy, the central kiloparsec-scale
``hole'' of the inner radio lobe containing a jet is filled with an
ordered, not-so-weak (rather strong) magnetic field of the order of
$10-100~{\rm \mu G}$ \citep{owen90}, whose magnitude is comparable
to (or $\sim 10~\%$ of) that in the jet \citep{owen89,heinz97}.
Presumably, the exuding magnetic field plays an additional role in
confining the leaky energetic particles with their long mfp of
$\lambda_{\perp}(\sim c\psi_{2}/\nu_{22})\sim R$.
In this aspect, let us express an effective confinement radius as
$R_{\rm c}={\tilde \rho}R$, where ${\tilde \rho}\gtrsim 1$,
and impose the condition that the accelerator operates for the particles
with the transverse mfp of $\lambda_{\perp}\leq R_{\rm c}$.
Then the equality gives the maximum possible energy in the form of
\begin{equation}
E_{\rm i,max}\sim 200~Z{\tilde\rho}^{1/2}
{B\over{1~{\rm mG}}}{R\over{100~{\rm pc}}}~~{\rm EeV}.
\label{eqn:48}
\end{equation}
\noindent
Values of $E_{\rm i,max}$ (and $E_{\rm p,max}$)
derived from the time balance equation~(\ref{eqn:35})
cannot exceed that of equation~(\ref{eqn:48}).
It appears that equation~(\ref{eqn:48}) can be compared to
the energy scaling derived from, in the simplest model,
the maximum gyroradius in a uniform magnetic field.
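For reference, the simplest uniform-field scaling mentioned here, $E\sim ZeBR$ (gyroradius equal to the confinement radius), can be evaluated at the fiducial parameters (our own estimate, Gaussian cgs); it agrees with the $\sim 200\,Z$~EeV prefactor of equation~(\ref{eqn:48}) to within a factor of $\sim 2$:

```python
# E ~ Z e B R for Z = 1, rho_tilde = 1, B = 1 mG, R = 100 pc.
e = 4.803e-10                 # esu
pc = 3.086e18                 # cm
erg_to_eV = 1.0 / 1.602e-12

B, R = 1e-3, 100 * pc
E_gyro_EeV = e * B * R * erg_to_eV / 1e18   # ~90 EeV
```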
\subsection[]{\it The Highest Energy of an Accelerated Electron}
In a manner similar to that explained in \S~3.3, we find the generic
scaling for the achievable highest energy of electrons.
In equation~(\ref{eqn:34}) for electrons ($a=$``e''),
we set $q_{\rm e}=-|e|$ and $\beta_{\rm e}=2$
\citep[e.g.,][]{meisenheimer89,rachen93,wilson02}.
By balancing equation~(\ref{eqn:34}) with the timescale of
a competitive energy loss process, we derive the maximum
possible energy defined as $E_{\rm e,max}\equiv E_{\rm e}$.
The time balance equation can be written as
\begin{equation}
t_{{\rm e},{\rm acc}}={\rm min}\left(t_{\rm e, syn},
t_{\rm ic}, t_{\rm br}\right),
\label{eqn:49}
\end{equation}
\noindent
where $t_{\rm e, syn}$, $t_{\rm ic}$, and $t_{\rm br}$ stand for
the timescales of the synchrotron loss for electrons
(\S~3.4.1; eq.~[\ref{eqn:51}]), the inverse Compton scattering
(\S~3.4.2; eq.~[\ref{eqn:54}]), and the bremsstrahlung emission loss
(\S~3.4.3; eq.~[\ref{eqn:57}]), respectively.
For positrons the treatment is entirely analogous, so we omit the explanation.
\subsubsection[]{\it The Case Limited by the Synchrotron Cooling Loss}
For electrons, the synchrotron cooling is a familiar loss process,
and the timescale can be expressed as
$t_{\rm e,syn}\simeq 36\pi^{2}m_{\rm e}^{4}c^{7}/(e^{4}B^{2}E_{\rm e})$.
In this expression, the energy of an accelerated electron, $E_{\rm e}$,
can be evaluated by equating $t_{\rm e,acc}$ with $t_{\rm e,syn}$, to give
\begin{equation}
{E_{\rm e}\over{m_{\rm e}c^{2}}}=\xi(\alpha,\beta_{\rm e},r)
\left[{R\over r_{0}}\left({U\over c}\right)^{2}\right]^{1/3}.
\label{eqn:50}
\end{equation}
\noindent
Substituting equation~(\ref{eqn:50}) into the aforementioned expression
of $t_{\rm e,syn}$, the cooling timescale can be expressed
as a function of the physical parameters of the target object:
\begin{equation}
t_{\rm e,syn}\sim 1\times 10^{6}\left({{1~{\rm mG}}\over B}\right)^{2}
\left({{100~{\rm pc}}\over{R}}\right)^{1/3}
\left({{0.1c}\over{U}}\right)^{2/3}~~{\rm s}.
\label{eqn:51}
\end{equation}
\noindent
This can be used in equation~(\ref{eqn:49}) for
comparison with the other loss timescales.
When the equality of $t_{\rm e,acc}=t_{\rm e,syn}$ is
satisfied in equation~(\ref{eqn:49}), equation~(\ref{eqn:50})
gives the maximum possible energy, which scales as
\begin{equation}
E_{\rm e,max}\sim 50\left({R\over{100~{\rm pc}}}\right)^{1/3}
\left({U\over{0.1c}}\right)^{2/3}~~{\rm PeV}.
\label{eqn:52}
\end{equation}
\noindent
According to the explanation given in \S~3.3.2, equation~(\ref{eqn:52})
is independent of $B$ (see also eq.~[\ref{eqn:41}]).
The striking thing is that for plausible parameters,
the value of $E_{\rm e,max}$ is significantly larger
than that obtained in the context of the simplistic QLT
invoking the Alfv\'en waves \citep{biermann87}.
This enhancement is, as seen in equation~(\ref{eqn:28}), attributed
to the smaller value of the diffusion coefficient for electrons, which
leads to a shorter acceleration time, i.e., a smaller value of
equation~(\ref{eqn:33}), and thereby to a higher acceleration efficiency.
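The electron scaling can be verified in the same way as for ions (our own check, again with the paper's $r_{0}=e^{2}/(4\pi m_{\rm e}c^{2})$); equation~(\ref{eqn:50}) with $\beta_{\rm e}=2$ reproduces the $\sim 50$~PeV prefactor of equation~(\ref{eqn:52}):

```python
# Eq. (50): E_e / (m_e c^2) = xi(2, 2, 4) * [ (R/r_0) (U/c)^2 ]^(1/3)
from math import sqrt, pi

m_e_c2_eV = 0.511e6
r0 = 2.818e-13 / (4 * pi)        # cm, paper's convention
pc = 3.086e18

xi_e = (4 * sqrt(6) * 1 * 3 * 4 * 3 / (2 * 4 * 5)) ** (1 / 3)   # xi(2, 2, 4)
R, U_over_c = 100 * pc, 0.1
E_e_PeV = xi_e * ((R / r0) * U_over_c**2) ** (1 / 3) * m_e_c2_eV / 1e15
```

The result, $\sim 50$~PeV, is indeed independent of $B$ by construction.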
\subsubsection[]{\it The Case Limited by the Inverse Compton Scattering}
For the case of $\eta_{u}>1$, the inverse Compton scattering of accelerated
electrons with target photons can be a dominant loss process.
Actually, the environments of AGN jets often allow the synchrotron
self-Compton (SSC) and/or external Compton (EC) processes.
The characteristic time of the inverse Comptonization can be estimated as
$t_{\rm ic}\sim (t_{\rm e,syn}/\eta_{u})(\sigma_{\rm T}/\sigma_{\rm KN})$,
where $\sigma_{\rm T}=8\pi r_{0}^{2}/3$ and
$\sigma_{\rm KN}(\epsilon_{\rm ph}E_{\rm e})$
denote the total cross sections in the Thomson limit of
$\epsilon_{\rm ph}E_{\rm e}\ll m_{\rm e}^{2}c^{4}$ and the Klein-Nishina
regime of $\epsilon_{\rm ph}E_{\rm e}\gtrsim m_{\rm e}^{2}c^{4}$,
respectively \citep[e.g.,][]{longair92}.
The expression of $t_{\rm ic}$ includes $E_{\rm e}$, which is determined by
numerically solving the balance equation of $t_{\rm e,acc}=t_{\rm ic}$.
Because of $\sigma_{\rm KN}\leq\sigma_{\rm T}$,
the value of $E_{\rm e}$ is found to be in the region of
\begin{equation}
{E_{\rm e}\over {m_{\rm e}c^{2}}}\geq
{{\xi(\alpha,\beta_{\rm e},r)}\over{\eta_{u}^{1/3}}}
\left[{R\over r_{0}}\left({U\over c}\right)^{2}\right]^{1/3},
\label{eqn:53}
\end{equation}
\noindent
in the whole range of $\epsilon_{\rm ph}$.
Note that the equality in equation~(\ref{eqn:53}) reflects
the Thomson limit of $\sigma_{\rm KN}/\sigma_{\rm T}=1$.
Substituting the value of $E_{\rm e}$ into the expression of $t_{\rm ic}$,
we can evaluate the scattering time, which takes a value in the range of
\begin{equation}
t_{\rm ic}\geq 5\times 10^{8}\eta_{u}^{1/3}
{{10^{-10}~{\rm erg}~{\rm cm}^{-3}}
\over{u_{\rm ph}}}\left({{100~{\rm pc}}\over{R}}\right)^{1/3}
\left({{0.1c}\over{U}}\right)^{2/3}~~{\rm s}.
\label{eqn:54}
\end{equation}
\noindent
For a given parameter $\eta_{u}$, a larger value of
$u_{\rm ph}$ lowers the lower bound of $t_{\rm ic}$,
though the Klein-Nishina effects prolong the timescale.
It should be noted that the evaluation of $t_{\rm ic}$ along
equation~(\ref{eqn:54}) is, in equation~(\ref{eqn:49}),
meaningful only for $\eta_{u}\geq 1$; that is, the
relation of $\eta_{u}<1$ ensures $t_{\rm ic}>t_{\rm e,syn}$.
For the case of $t_{\rm e,acc}=t_{\rm ic}$ in equation~(\ref{eqn:49}),
$E_{\rm e}$, conforming to equation~(\ref{eqn:53}), gives the
maximum possible energy, which takes the value of
\begin{equation}
E_{\rm e,max}\geq 50\left({1\over{\eta_{u}}}\right)^{1/3}
\left({R\over{100~{\rm pc}}}\right)^{1/3}
\left({U\over{0.1c}}\right)^{2/3}~~{\rm PeV}
\label{eqn:55}
\end{equation}
\noindent
for $\eta_{u}\geq 1$.
Again, note that the Thomson limit sets the lower bound of $E_{\rm e,max}$.
It is found that the Klein-Nishina effects enhance the value of
$E_{\rm e,max}$ in the regime of
$\epsilon_{\rm ph}\gtrsim m_{\rm e}^{2}c^{4}/E_{\rm e,max}$.
Note here that $E_{\rm e,max}$ cannot increase without limit in actual
circumstances but tends to be limited by the synchrotron cooling.
Combining equation~(\ref{eqn:52}) with equation~(\ref{eqn:55}), therefore,
we can express the allowed domain of the variables as follows:
\begin{equation}
1\geq{E_{\rm e,max}\over{50~{\rm PeV}}}
\left({{100~{\rm pc}}\over R}\right)^{1/3}
\left({{0.1c}\over U}\right)^{2/3}
\geq{1\over{\eta_{u}^{1/3}}}.
\label{eqn:56}
\end{equation}
\noindent
Note that the upper bound reflects the synchrotron limit.
In the critical case of $\eta_{u}=1$ reflecting the energy equipartition,
the generic equation~(\ref{eqn:56}) degenerates into equation~(\ref{eqn:52}).
\subsubsection[]{\it The Bremsstrahlung Loss}
The bremsstrahlung emission of electrons in the Coulomb field of
nuclei whose charge is incompletely screened also affects the acceleration.
The timescale can be evaluated by the notation
$t_{\rm br}=(n_{Z^{\prime}}\sigma_{{\rm rad,e}Z^{\prime}}c)^{-1}$,
where $n_{Z^{\prime}}$ is the fractional number density of the target
nuclei having charge number $Z^{\prime}$
and $\sigma_{{\rm rad,e}Z^{\prime}}$ describes the radiation
cross section \citep[e.g.,][]{heitler54}.
When the screening effects are small,
for interaction with a heavy composite we have
\begin{equation}
t_{\rm br}\simeq 1.4\times 10^{16}
\left\{\left[22+{\rm ln}
\left(E_{\rm e}/1~{\rm PeV}\right)\right]
\sum_{Z^{\prime}}n_{Z^{\prime}}Z^{\prime 2}
\right\}^{-1}~~{\rm s},
\label{eqn:57}
\end{equation}
\noindent
where $n_{Z^{\prime}}$ is in units of ${\rm cm}^{-3}$.
In the peculiar environments of high density, enhanced metallicity, and
lower magnetic and photon energy densities, equation~(\ref{eqn:57})
may be comparable with equation~(\ref{eqn:51}) or (\ref{eqn:54}).
In ordinary AGN jets, however, the corresponding
physical parameters seem to be marginal.
Note also that the bremsstrahlung timescale for
ion-ion interactions is larger, by the order of
$(A^{2}/Z^{4})(m_{\rm p}/m_{\rm e})^{2}\sim 10^{7}/Z^{2}$,
than the value of equation~(\ref{eqn:57}), and found to
largely exceed the value of equation~(\ref{eqn:36}),
namely, the age of the accelerator.
That is why the ion bremsstrahlung has been
excluded in equation~(\ref{eqn:35}).
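As a brief illustration (ours), equation~(\ref{eqn:57}) can be evaluated for a $1$~PeV electron in a pure-hydrogen medium of unit density, where the logarithmic term reduces to 22:

```python
# Eq. (57): t_br = 1.4e16 / { [22 + ln(E_e/1 PeV)] * sum n_Z' Z'^2 } seconds.
from math import log

def t_br(E_e_PeV, n_Z, Zp):
    """Bremsstrahlung timescale of eq. (57), in s (n_Z in cm^-3)."""
    return 1.4e16 / ((22.0 + log(E_e_PeV)) * n_Z * Zp**2)

t_H = t_br(1.0, 1.0, 1)   # ~6e14 s for Z' = 1, n = 1 cm^-3
```

For ordinary jet densities this greatly exceeds the timescales of equations~(\ref{eqn:51}) and (\ref{eqn:54}), consistent with the marginal role noted above.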
\section[]{DISCUSSION AND SUMMARY}
The feasibility of the present model could be verified by
the measurement of energetic photons emanating from a
source, typically, bright knots in nearby AGN jets.
In any case, the electrons with energy $E_{\rm e,max}$,
given in equation~(\ref{eqn:56}), emit the most energetic
synchrotron photons, whose frequency may be estimated as
$\nu^{\ast}\sim (E_{\rm e,max}/m_{\rm e}c^{2})^{2}(eB/m_{\rm e}c)$,
where the mean field strength ${\bar B}$ has been identified with
the rms strength $B$.
For $E_{\rm e,max}\sim 10~{\rm PeV}$ as an example,
the frequencies of $\nu^{\ast}\gtrsim 10^{22}~{\rm Hz}$
are found to be achieved for $B\gtrsim 10~{\rm\mu G}$.
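The order-of-magnitude estimate quoted above can be reproduced directly; the sketch below works in CGS units and uses only the stated example values.

```python
# CGS constants
E_ESU = 4.803e-10     # electron charge [esu]
M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]
ERG_PER_EV = 1.602e-12

def nu_star_hz(E_e_pev, B_gauss):
    # nu* ~ (E_e / m_e c^2)^2 * (e B / m_e c)
    gamma = E_e_pev * 1.0e15 * ERG_PER_EV / (M_E * C ** 2)
    return gamma ** 2 * E_ESU * B_gauss / (M_E * C)

# E_e,max ~ 10 PeV and B ~ 10 microgauss indeed give nu* above 1e22 Hz
print(nu_star_hz(10.0, 1.0e-5))  # ~7e22 Hz
```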
In the gamma-ray bands, however, the energy flux of photons
from the synchrotron component is often predicted to be exceeded
by that produced by the inverse Comptonization of target photons.
In this case, as far as the condition of
$\epsilon_{\rm ph}E_{\rm e,max}\gg (m_{\rm e}c^{2})^{2}$
is satisfied, the boosted photon energy is given by
$\epsilon_{\rm ph}^{\prime}\sim E_{\rm e,max}$, independent of
the target photon energy $\epsilon_{\rm ph}$ and thereby
irrespective of whether the seed photons come from SSC or EC processes.
This is in contrast to another case of
$\epsilon_{\rm ph}E_{\rm e,max}\ll (m_{\rm e}c^{2})^{2}$,
in which one has $\epsilon_{\rm ph}^{\prime}\sim\epsilon_{\rm ph}
(E_{\rm e,max}/m_{\rm e}c^{2})^{2}$ dependent on $\epsilon_{\rm ph}$.
Apparently, for the extremely high energy ranges of $E_{\rm e,max}$
achieved in the present scheme, the former condition is more likely
satisfied for a wide range of $\epsilon_{\rm ph}$.
Therefore, in the circumstances that the source is nearby such that
collision with the cosmic infrared background, involving
photon-photon pair creation, is insignificant,
$\epsilon_{\rm ph}^{\prime}$ ($\sim E_{\rm e,max}$) just gives the
theoretical maximum of gamma-ray energy, although the Klein-Nishina
effects also take part in lowering the flux level.
This means, in turn, that a comparison of the $\epsilon_{\rm ph}^{\prime}$
value (multiplied by an appropriate Doppler factor) with the
observed highest energy of the Compton emissions might constitute
a method to verify the present DSA for electrons.
The case for this method is certainly solidified when the operation
of the transition injection (\S~3.1) is confirmed.
Making use of the inherent property that the synchrotron photons emitted by
electrons having an energy above $|eA({\bf r})|$ reduce their polarization,
the energy hierarchy can be revealed by the polarization measurements,
particularly with wide frequency coverage and high spatial resolution.
According to the reasoning that the critical frequency above which the
measured polarization decreases, $\nu_{\rm c}({\bf r})$, ought to be of
the order of
$\sim [|eA({\bf r})|/m_{\rm e}c^{2}]^{2}[|e{\bf B}({\bf r})|/m_{\rm e}c]$,
the related coherence length can be estimated as
$k_{\rm c}^{-1}\sim c\{\nu_{\rm c}({\bf r})
[m_{\rm e}c/|e{\bf B}({\bf r})|]^{3}\}^{1/2}$.
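For concreteness, the coherence-length estimate can be evaluated numerically; in the sketch below the values of $\nu_{\rm c}$ and $B$ are purely illustrative assumptions.

```python
# CGS constants
E_ESU = 4.803e-10   # electron charge [esu]
M_E = 9.109e-28     # electron mass [g]
C = 2.998e10        # speed of light [cm/s]

def coherence_length_cm(nu_c_hz, B_gauss):
    # k_c^{-1} ~ c * { nu_c * (m_e c / |e B|)^3 }^(1/2)
    t_gyro = M_E * C / (E_ESU * B_gauss)  # inverse gyrofrequency [s]
    return C * (nu_c_hz * t_gyro ** 3) ** 0.5

# Illustrative inputs: nu_c = 1e14 Hz, B = 10 microgauss
print(coherence_length_cm(1.0e14, 1.0e-5))  # ~1.3e14 cm, a tiny fraction of a parsec
```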
Note that when the locally defined gyroradius reaches this critical scale,
the bound electrons are released.
In actual circumstances, $\nu_{\rm c}$ and the polarization for a fixed
frequency band are, if anything, likely to increase near the jet surface,
where the large-scale coherency could appear (\S~3.3.5).
This may be responsible for the results of the polarization measurement
of a nearby filamentary jet, which indicate a similar transverse
dependence \citep{capetti97}.
In the sense of
$E_{\rm e,max}/E_{\rm inj}|_{q=-|e|}\ll E_{\rm i,max}/E_{\rm inj}|_{q=Z|e|}$,
the transition injection condition for electrons is more
restrictive than that for ions.
Thus, observational evidence of the present DSA scenario for
energetic electrons will, if it is obtained, strongly suggest
that the same scenario operates for ion acceleration,
provided that the ions are present in finite abundance.
To summarize, we have accomplished the modeling of the
diffusive shock accelerator accompanied by the quasi-static,
magnetized filamentary turbulence that could be self-organized
via the current filamentation instability.
The new theory of particle diffusion relies on the following
conditions analogous to those for the conventional QLT:
(1) the test particles must not be strongly deflected by a fine filament
but suffer the cumulative small deflection by many filaments, and
(2) the transverse filament size, i.e., the coherence length of
the scatterer, is limited by the system size transverse to the filaments;
whereas, more importantly, it is dependent on neither the gyration,
the resonant scattering, nor the explicit limit of the weak turbulence.
We have derived the diffusion coefficient from the
quasi-linear type equation and installed it in a DSA model
that involves particle injection associated with the
bound-free transition in the fluctuating vector potential.
By systematically taking the conceivable energy
restrictions into account, some generic scalings of
the maximum energy of particles have been presented.
The results indicate that the shock in kiloparsec-scale jets
could accelerate a proton and heavy nucleus to
$10-100~{\rm EeV}$ and ${\rm ZeV}$ ranges, respectively.
In particular, for high-$Z$ particles, and electrons as well,
the acceleration efficiency is significantly higher than that
derived from a simplistic QLT-based DSA, as is
deduced from equation~(\ref{eqn:28}).
Consequently, the powerful electron acceleration to ${\rm PeV}$
ranges becomes possible for the plausible parameters.
We expect that the present theory can be, mutatis
mutandis, applied for solving the problem of particle transport
and acceleration in GRBs \citep{nishikawa03,silva03}.
The topic belongs to a cross-disciplinary field closely relevant
to astrophysics, high-energy physics, and plasma physics involving
fusion science; particularly, the magnetoelectrodynamics of filamentary
turbulence is subject to the complexity of ``flowing plasma.''
In perspective, further theoretical details might be
resolved, in part, by a fully kinetic approach allowing
multiple dimensions, which goes far beyond the MHD context.\\
\section{The Bi-criteria Case}
In this section we present a lower bound for the expected number of Pareto optimal solutions in bi-criteria optimization problems that shows that the upper bound of Beier, R\"{o}glin, and V\"{o}cking \cite{DBLP:conf/ipco/BeierRV07} cannot be significantly improved. To prove this lower bound, we consider a class of instances for a variant of the knapsack problem, in which subsets of items can form groups such that either all items in a group have to be put into the knapsack or none of them.
\begin{theorem}
\label{bi.mainthm}
There is a class of instances for the bi-criteria knapsack problem with groups for which the expected number of Pareto-optimal solutions is lower bounded by
\[ \OMEGA{ \MIN{ n^2 \phi^{1-\THETA{1/\phi}}, 2^{\THETA{n}} } } \KOMMA \]
where~$n$ is the number of objects and~$\phi$ is the maximum density of the profits' probability distributions.
\end{theorem}
Note that Beier, R\"{o}glin, and V\"{o}cking \cite{DBLP:conf/ipco/BeierRV07} proved an upper bound of $O(\MIN{ n^2 \phi, 2^n })$. That is, the exponents of~$n$ and~$\phi$ in the lower and the upper bound are asymptotically the same.
For our construction we use the following lower bound from Beier and V\"{o}cking.
\begin{theorem}[\cite{DBLP:journals/jcss/BeierV04}]
\label{bi.thm.n.squared}
Let $\LIST{a}{n}$ be objects with weights $\SLIST{2^}{n}$ and profits $\LIST{p}{n}$ that are independently and uniformly distributed in~$[0, 1]$. Then, the expected number of Pareto optimal solutions of $K(\{ \LIST{a}{n} \})$ is $\OMEGA{n^2}$.
\end{theorem}
Note that scaling all profits does not change the Pareto set and hence Theorem~\ref{bi.thm.n.squared} remains true if the profits are chosen uniformly from~$[0, a]$ for an arbitrary~$a > 0$. We will exploit this observation later in our construction.
The idea how to create a large Pareto set is what we call the copy step. Let us assume we have an additional object~$b$ with weight~$2^{n+1}$ and fixed profit~$q$. The solutions from $K(\{ \LIST{a}{n}, b \})$ can be considered as solutions from $K(\{ \LIST{a}{n} \})$ that do not use object~$b$ or as solutions from $K(\{ \LIST{a}{n} \})$ that additionally use object~$b$. By the choice of the weight of~$b$, a Pareto optimal solution from $K(\{ \LIST{a}{n} \})$ is also a Pareto optimal solution from $K(\{ \LIST{a}{n}, b\})$ as object~$b$ alone is heavier than all objects $\LIST{a}{n}$ together. The crucial observation is that a solution that uses object~$b$ is Pareto optimal if and only if its profit is larger than the largest profit of any Pareto optimal solution from $K(\{ \LIST{a}{n} \})$ and if it is Pareto optimal for $K(\{ \LIST{a}{n} \})$ when not using~$b$. The first condition is always fulfilled if we choose the profit~$q$ large enough. In this case we can view the Pareto optimal solutions using object~$b$ as copies of the Pareto optimal solutions that do not use~$b$.
\begin{lemma}
\label{bi.lemma.copy}
Let $\LIST{a}{n}$ be objects with weights $\SLIST{2^}{n}$ and profits $\LIST{p}{n} \geq 0$ and let~$b$ be an object with weight~$2^{n+1}$ and profit~$q > \sum_{i=1}^n p_i$. Furthermore, let~$\P$ denote the Pareto set of $K(\{ \LIST{a}{n} \})$ and let~$\P'$ denote the Pareto set of $K(\{ \LIST{a}{n}, b\})$. Then,~$\P'$ is the disjoint union of $\P'_0 := \SET{ (s, 0) \WHERE s \in \P }$ and $\P'_1 := \SET{ (s, 1) \WHERE s \in \P }$ and thus $|\P'| = 2\cdot|\P|$.
\end{lemma}
Figure~\ref{bi.fig.copy.step} visualizes the proof idea. If we represent all solutions by a weight-profit pair in the weight-profit space, then the set of solutions using object~$b$ is the set of solutions that do not use object~$b$, but shifted by~$(2^{n+1}, q)$. As both components of this vector are chosen sufficiently large, there is no domination between solutions from different copies and hence the Pareto optimal solutions of $K(\{ \LIST{a}{n}, b \})$ are just the copies of the Pareto optimal solutions of $K(\{ \LIST{a}{n} \})$.
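Lemma~\ref{bi.lemma.copy} is easy to verify by brute-force enumeration for small~$n$. The following sketch computes both Pareto sets and checks the doubling; the uniform random profits are an assumption made only for illustration.

```python
import itertools
import random

def pareto_set(weights, profits):
    # Enumerate all 0-1 vectors; keep the solutions that are not dominated
    # (total weight is minimized, total profit is maximized).
    sols = [(sum(w * x for w, x in zip(weights, s)),
             sum(p * x for p, x in zip(profits, s)), s)
            for s in itertools.product([0, 1], repeat=len(weights))]
    return [s for (w, p, s) in sols
            if not any(w2 <= w and p2 >= p and (w2 < w or p2 > p)
                       for (w2, p2, _) in sols)]

random.seed(0)
n = 6
weights = [2 ** i for i in range(1, n + 1)]   # weights 2^1, ..., 2^n
profits = [random.random() for _ in range(n)]
P = pareto_set(weights, profits)

# Copy step: add object b with weight 2^(n+1) and profit q > p_1 + ... + p_n.
q = sum(profits) + 1.0
P2 = pareto_set(weights + [2 ** (n + 1)], profits + [q])
print(len(P2) == 2 * len(P))  # True: the Pareto set doubles
```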
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{copystep.pdf}
\end{center}
\caption{The copy step. The Pareto set $\P'$ consists of two copies of the Pareto set $\P$.}
\label{bi.fig.copy.step}
\end{figure}
Now we use the copy idea to construct a large Pareto set. Let $\LIST{a}{n_p}$ be objects with weights $\SLIST{2^}{n_p}$ and with profits $\LIST{p}{n_p} \in P := [0, \frac{1}{\phi}]$ where~$\phi > 1$, and let $\LIST{b}{n_q}$ be objects with weights $\SLIST[n_p+1]{2^}{n_p+n_q}$ and with profits
\[ q_i \in Q_i := \left( m_i - \frac{\ceil{m_i}}{\phi}, m_i \right] \KOMMA \mbox{ where } m_i = \frac{n_p+1}{\phi-1} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{i-1} \DOT \]
To apply Lemma~\ref{bi.lemma.copy}, we first have to show that we chose the intervals~$Q_i$ appropriately. Additionally, we implicitly show that the lower boundaries of the intervals~$Q_i$ are non-negative.
\begin{lemma}
\label{bi.lemma.appr.Qi}
Let $\LIST{p}{n_p} \in P$ and let $q_i \in Q_i$. Then, $q_i > \sum_{j=1}^{n_p} p_j + \sum_{j=1}^{i-1} q_j$ for all $i = \NLIST{n_q}$.
\end{lemma}
\begin{proof}
Using the definition of~$m_i$, we get
\begin{align*}
q_i &> m_i - \frac{\ceil{m_i}}{\phi} \geq m_i - \frac{m_i+1}{\phi} = \frac{\phi-1}{\phi} \cdot m_i - \frac{1}{\phi} = \frac{n_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{i-1} - \frac{1}{\phi}
\end{align*}
and
\begin{align*}
\sum \limits_{j=1}^{n_p} p_j + \sum \limits_{j=1}^{i-1} q_j &\leq \sum \limits_{j=1}^{n_p} \frac{1}{\phi} + \sum \limits_{j=1}^{i-1} m_j = \frac{n_p}{\phi} + \sum \limits_{j=1}^{i-1} \frac{n_p+1}{\phi-1} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{j-1} \cr
&= \frac{n_p}{\phi} + \frac{n_p+1}{\phi-1} \cdot \frac{\left( \frac{2\phi-1}{\phi-1} \right)^{i-1} - 1}{\frac{2\phi-1}{\phi-1} - 1} = \frac{n_p}{\phi} + \frac{n_p+1}{\phi} \cdot \left( \left( \frac{2\phi-1}{\phi-1} \right)^{i-1} - 1 \right) \cr
&= \frac{n_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{i-1} - \frac{1}{\phi} \DOT \qedhere
\end{align*}
\end{proof}
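The inequality chain can also be checked numerically. The sketch below uses assumed sample values $\phi = 4$ and $n_p = 5$ and verifies that the infimum of each~$Q_i$ is at least the largest possible value of $\sum_{j} p_j + \sum_{j<i} q_j$.

```python
import math

phi, n_p, n_q = 4.0, 5, 8   # assumed sample parameters (phi > 1)

def m(i):
    return (n_p + 1) / (phi - 1) * ((2 * phi - 1) / (phi - 1)) ** (i - 1)

for i in range(1, n_q + 1):
    q_inf = m(i) - math.ceil(m(i)) / phi                 # infimum of Q_i
    worst = n_p / phi + sum(m(j) for j in range(1, i))   # max of sum p_j + sum q_j
    assert q_inf >= worst - 1e-9, (i, q_inf, worst)
print("q_i exceeds the largest competing sum for i = 1, ...,", n_q)
```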
Combining Theorem~\ref{bi.thm.n.squared}, Lemma~\ref{bi.lemma.copy} and Lemma~\ref{bi.lemma.appr.Qi}, we immediately get a lower bound for the knapsack problem using the objects $\LIST{a}{n_p}$ and $\LIST{b}{n_q}$ with profits chosen from~$P$ and~$Q_i$, respectively.
\begin{corollary}
\label{bi.corol.pareto}
Let $\LIST{a}{n_p}$ and $\LIST{b}{n_q}$ be as above, but the profits~$p_i$ are chosen uniformly from~$P$ and the profits~$q_i$ are arbitrarily chosen from~$Q_i$. Then, the expected number of Pareto optimal solutions of $K(\{ \LIST{a}{n_p}, \LIST{b}{n_q} \})$ is $\OMEGA{n_p^2 \cdot 2^{n_q}}$.
\end{corollary}
\begin{proof}
Because of Lemma~\ref{bi.lemma.appr.Qi}, we can apply Lemma~\ref{bi.lemma.copy} for each realization of the profits $\LIST{p}{n_p}$ and $\LIST{q}{n_q}$. This implies that the expected number of Pareto optimal solutions is~$2^{n_q}$ times the expected size of the Pareto set of $K(\{ \LIST{a}{n_p} \})$ which is $\OMEGA{n_p^2}$ according to Theorem~\ref{bi.thm.n.squared}.
\end{proof}
The profits of the objects~$b_i$ grow exponentially and leave the interval~$[0, 1]$. We resolve this problem by splitting each object~$b_i$ into~$k_i := \ceil{m_i}$ objects $b_i^{(1)}, \ldots, b_i^{(k_i)}$ with the same total weight and the same total profit, i.e.\ with weight~$2^{n_p+i}/k_i$ and profit
\[ q_i^{(l)} \in Q_i/k_i := \left( \frac{m_i}{k_i} - \frac{1}{\phi}, \frac{m_i}{k_i} \right] \DOT \]
As the intervals~$Q_i$ are subsets of~$\R_+$, the intervals~$Q_i/k_i$ are subsets of~$[0, 1]$. It remains to ensure that for any fixed~$i$ all objects~$b_i^{(l)}$ are treated as a group. This can be done by restricting the set~$\S$ of solutions. Let $\S_i = \SET{ (0, \ldots, 0), (1, \ldots, 1) } \subseteq \SET{ 0, 1 }^{k_i}$. Then, the set~$\S$ of solutions is defined as
\[ \S := \SET{ 0, 1 }^{n_p} \times \prod \limits_{i=1}^{n_q} \S_i \DOT \]
By choosing the set of solutions that way, the objects $b_i^{(1)}, \ldots, b_i^{(k_i)}$ can be viewed as substitute for object~$b_i$. Thus, a direct consequence of Corollary~\ref{bi.corol.pareto} is the following.
\begin{corollary}
\label{bi.corol.final.pareto}
Let~$\S$, $\LIST{a}{n_p}$ and $b_i^{(l)}$ be as above, and let the profits $\LIST{p}{n_p}$ be chosen uniformly from~$P$ and let the profits $q_i^{(1)}, \ldots, q_i^{(k_i)}$ be chosen uniformly from~$Q_i / k_i$. Then, the expected number of Pareto optimal solutions of $K_\S (\{ \LIST{a}{n_p} \} \cup \{ b_i^{(l)} \WHERE i = \NLIST{n_q}, \ l = \NLIST{k_i} \})$ is $\OMEGA{n_p^2 \cdot 2^{n_q}}$.
\end{corollary}
The remainder contains just some technical details. First, we give an upper bound for the number of objects~$b_i^{(l)}$.
\begin{lemma}
\label{bi.lemma.count.objects}
The number of objects~$b_i^{(l)}$ is upper bounded by $n_q + \frac{n_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{n_q}$.
\end{lemma}
\begin{proof}
The number of objects~$b_i^{(l)}$ is $\sum_{i=1}^{n_q} k_i = \sum_{i=1}^{n_q} \ceil{m_i} \leq n_q + \sum_{i=1}^{n_q} m_i$, and
\[ \sum \limits_{i=1}^{n_q} m_i = \frac{n_p+1}{\phi-1} \cdot \sum \limits_{i=1}^{n_q} \left( \frac{2\phi-1}{\phi-1} \right)^{i-1} \leq \frac{n_p+1}{\phi-1} \cdot \frac{\left( \frac{2\phi-1}{\phi-1} \right)^{n_q}}{\frac{2\phi-1}{\phi-1} - 1} = \frac{n_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{n_q} \DOT \qedhere \]
\end{proof}
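A quick numerical check of this bound, with assumed sample values $\phi = 4$, $n_p = 5$:

```python
import math

phi, n_p, n_q = 4.0, 5, 8   # assumed sample parameters

def m(i):
    return (n_p + 1) / (phi - 1) * ((2 * phi - 1) / (phi - 1)) ** (i - 1)

count = sum(math.ceil(m(i)) for i in range(1, n_q + 1))   # number of objects b_i^(l)
bound = n_q + (n_p + 1) / phi * ((2 * phi - 1) / (phi - 1)) ** n_q
print(count, "<=", bound)   # the stated upper bound holds
```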
Now we are able to prove Theorem~\ref{bi.mainthm}.
\begin{proof}[Proof of Theorem~\ref{bi.mainthm}]
Without loss of generality let~$n \geq 4$ and $\phi \geq \frac{3+\sqrt{5}}{2} \approx 2.62$. For the moment let us assume $\phi \leq (\frac{2\phi-1}{\phi-1})^\frac{n-1}{3}$. This is the interesting case leading to the first term in the minimum in Theorem~\ref{bi.mainthm}. We set $\hat{n}_q := \frac{\LOG{\phi}}{\LOG{ \frac{2\phi-1}{\phi-1} }} \in [ 1, \frac{n-1}{3}]$ and $\hat{n}_p := \frac{n-1-\hat{n}_q}{2} \geq \frac{n-1}{3} \geq 1$. All inequalities hold because of the bounds on~$n$ and~$\phi$. We obtain the numbers~$n_p$ and~$n_q$ by rounding, i.e.\ $n_p := \floor{\hat{n}_p} \geq 1$ and $n_q := \floor{\hat{n}_q} \geq 1$. Now we consider objects $\LIST{a}{n_p}$ with weights~$2^i$ and profits chosen uniformly from~$P$, and objects~$b_i^{(l)}$, $i = \NLIST{n_q}$, $l = \NLIST{k_i}$, with weights~$2^{n_p+i}/k_i$ and profits chosen uniformly from~$Q_i/k_i$. Observe that~$P$ and all~$Q_i/k_i$ have length~$\frac{1}{\phi}$ and thus the densities of all profits are bounded by~$\phi$. Let~$N$ be the number of all these objects. By Lemma~\ref{bi.lemma.count.objects}, this number is bounded by
\begin{align*}
N &\leq n_p + n_q + \frac{n_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{n_q} \leq \hat{n}_p + \hat{n}_q + \frac{\hat{n}_p+1}{\phi} \cdot \left( \frac{2\phi-1}{\phi-1} \right)^{\hat{n}_q} \cr
&= \hat{n}_p + \hat{n}_q + \frac{\hat{n}_p+1}{\phi} \cdot \phi = 2\hat{n}_p + \hat{n}_q + 1 = n \DOT
\end{align*}
Hence, the number~$N$ of binary variables we actually use is at most~$n$, as required. As set of solutions we consider $\S := \SET{ 0, 1 }^{n_p} \times \prod \limits_{i=1}^{n_q} \S_i$. Due to Corollary~\ref{bi.corol.final.pareto}, the expected size of the Pareto set of $K_\S(\{ \LIST{a}{n_p}\} \cup \{ b_i^{(l)} \WHERE i = \NLIST{n_q}, \ l = \NLIST{k_i} \})$ is
\begin{align*}
\OMEGA{n_p^2 \cdot 2^{n_q}} &= \OMEGA{\hat{n}_p^2 \cdot 2^{\hat{n}_q}} = \OMEGA{ \hat{n}_p^2 \cdot 2^{\frac{\LOG{\phi}}{\LOG{ \frac{2\phi-1}{\phi-1} }}} } = \OMEGA{ n^2 \cdot \phi^{\frac{\LOG{2}}{\LOG{ \frac{2\phi-1}{\phi-1} }}} } \cr
&= \OMEGA{ n^2 \cdot \phi^{1-\THETA{1/\phi}} } \KOMMA
\end{align*}
where the last step holds because
\[ \frac{1}{\LOG[2]{ 2+\frac{c_1}{\phi-c_2} }} = 1 - \frac{\LOG{ 1+\frac{c_1}{2\phi-2c_2} }}{\LOG{ 2 + \frac{c_1}{\phi-c_2} }} = 1 - \frac{\THETA{ \frac{c_1}{2\phi-2c_2} }}{\THETA{1}} = 1 - \THETA{ \frac{1}{\phi} } \]
for any constants $c_1, c_2 > 0$. We formulated this argument slightly more generally than necessary, as we will use it again in the multi-criteria case.
In the case $\phi > (\frac{2\phi-1}{\phi-1})^\frac{n-1}{3}$ we construct the same instance as above, but for maximum density~$\phi' > 1$ where $\phi' = (\frac{2\phi'-1}{\phi'-1})^{\frac{n-1}{3}}$. Since~$n \geq 4$, the value~$\phi'$ exists, is unique and $\phi' \in \left[ \frac{3+\sqrt{5}}{2}, \phi \right)$. As above, the expected size of the Pareto set is
\begin{align*}
\OMEGA{ n^2 \cdot 2^{\frac{\LOG{\phi'}}{\LOG{ \frac{2\phi'-1}{\phi'-1} }}} } &= \OMEGA{ n^2 \cdot 2^{\frac{n-1}{3}} } = \OMEGA{ n^2 \cdot 2^{\THETA{n}} } = \OMEGA{ 2^{\THETA{n}} } \DOT \qedhere
\end{align*}
\end{proof}
\section{Introduction}
In multi-criteria optimization problems we are given several objectives and aim at finding a solution that is simultaneously optimal in all of them. In most cases the objectives are conflicting and no such solution exists. The most popular way to deal with this problem is based on the following simple observation. If a solution is \emph{dominated} by another solution, i.e.\ it is worse than the other solution in at least one objective and not better in the others, then this solution does not have to be considered for our optimization problem. All solutions that are not dominated are called \emph{Pareto optimal}, and the set of these solutions is called \emph{Pareto set}.
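For illustration, dominance and the Pareto set can be written down in a few lines; the sketch below phrases all objectives as maximization, and the sample points are made up.

```python
def dominates(a, b):
    # a dominates b: at least as good in every objective (here: maximize all)
    # and strictly better in at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
print(pareto_set(pts))  # (2, 2) is dominated; the other four points are Pareto optimal
```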
\paragraph{Knapsack Problem with Groups}
Let us consider a variant of the \emph{knapsack problem} which we call \emph{restricted multi-profit knapsack problem}. Here, we have~$n$ objects $\LIST{a}{n}$, each with a weight~$w_i$ and a profit vector~$p_i \in \R^d$ for a positive integer~$d$. By a vector~$s \in \SET{ 0, 1 }^n$ we can describe which object to put into the knapsack. In this variant of the knapsack problem we are additionally given a set~$\S \subseteq \SET{ 0, 1 }^n$ of \emph{solutions} describing all combinations of objects that are allowed. We want to simultaneously minimize the total weight and maximize all total profits of a solution~$s$. Thus, our optimization problem, denoted by $K_\S(\{ \LIST{a}{n} \})$, can be written as
\begin{itemize}
\item[] \textbf{minimize} $\sum \limits_{i=1}^n w_i \cdot s_i$, \text{\,\, and \,\,} \textbf{maximize} $\sum \limits_{i=1}^n (p_i)_j \cdot s_i$ for all~$j = \NLIST{d}$
\item[] \textbf{subject to}~$s$ in the feasible region~$\S$.
\end{itemize}
For~$\S = \SET{ 0, 1 }^n$ we just write $K(\{ \LIST{a}{n} \})$ instead of $K_\S(\{ \LIST{a}{n} \})$.
In this paper we will consider a special case of the optimization problem above where we partition the objects into groups. For each group a set of allowed subgroups is given. Independently of the choice of the objects outside this group, we have to decide on one of these subgroups. Hence, the set~$\S$ of solutions is of the form $\S = \prod_{i=1}^k \S_i$ where~$\S_i \subseteq \SET{ 0, 1 }^{n_i}$ and $\sum_{i=1}^k n_i = n$. We refer to this problem as the \emph{multi-profit knapsack problem with groups}.
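The grouped solution set $\S = \prod_{i=1}^k \S_i$ is just a Cartesian product and can be enumerated directly; the groups in the sketch below are made-up examples.

```python
import itertools

# Each group lists its allowed 0-1 patterns for the objects it contains.
S1 = [(0, 0), (1, 1)]   # an all-or-nothing group of two objects
S2 = [(0,), (1,)]       # an unrestricted single object
solutions = [sum(parts, ()) for parts in itertools.product(S1, S2)]
print(solutions)  # [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
```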
\paragraph{Smoothed Analysis}
For many multi-criteria optimization problems the worst-case size of the Pareto
set is exponential in the number of variables. However, worst-case analysis is
often too pessimistic, whereas average-case analysis assumes a certain
distribution on the input universe, which is usually unknown. \emph{Smoothed
analysis}, introduced by Spielman and Teng~\cite{DBLP:journals/jacm/SpielmanT04}
to explain the efficiency of the simplex algorithm in practice despite its
exponential worst-case running time, is a combination of both approaches.
Like in a worst-case analysis the model of smoothed analysis still considers adversarial
instances. In contrast to the worst-case model, however, these instances are
subsequently slightly perturbed at random, for example by Gaussian noise.
This assumption is made to model that often the input an algorithm gets is subject to imprecise
measurements, rounding errors, or numerical imprecision.
In a more general model of smoothed analysis, introduced by Beier and
V\"{o}cking~\cite{DBLP:journals/jcss/BeierV04}, the adversary is even allowed to
specify the probability distribution of the random noise. The influence he can
exert is described by a parameter~$\phi$ denoting the maximum density of the
noise.
For the restricted multi-profit knapsack problem we use the following smoothing
model which has also been used by Beier and
V\"{o}cking~\cite{DBLP:journals/jcss/BeierV04}, by Beier, R\"{o}glin, and
V\"{o}cking~\cite{DBLP:conf/ipco/BeierRV07}, by R\"{o}glin and
Teng~\cite{DBLP:conf/focs/RoglinT09}, and by Moitra and
O'Donnell~\cite{MoitraO10}. Given positive integers~$n$ and~$d$ and a real~$\phi
\geq 1$ the adversary can specify a set~$\S \subseteq \SET{ 0, 1 }^n$ of
solutions, arbitrary object weights $\LIST{w}{n}$ and density functions $f_{i,j}
\colon [-1, 1] \ra \R$ such that $f_{i,j} \leq \phi$, $i = \NLIST{n},\ j =
\NLIST{d}$. Now the profits~$(p_i)_j$ are drawn independently according to the
density functions~$f_{i,j}$. The \emph{smoothed number of Pareto optimal
solutions} is the largest expected size of the Pareto set of $K_\S(\{ \LIST{a}{n} \})$
that the adversary can achieve by choosing the set $\S$, the weights $w_i$, and the
probability densities $f_{i,j}\leq\phi$ for the profits~$(p_i)_j$.
\paragraph{Previous Work}
Beier and V\"{o}cking~\cite{DBLP:journals/jcss/BeierV04} showed that for $d = 1$
the expected size of the Pareto set is $O(n^4 \phi)$. Furthermore, they showed a
lower bound of $\OMEGA{n^2}$ if all profits are uniformly drawn from $[0, 1]$.
Later, Beier, R\"{o}glin, and V\"{o}cking~\cite{DBLP:conf/ipco/BeierRV07}
improved the upper bound to $O(n^2 \phi)$ by analyzing the so-called \emph{loser
gap}. R\"{o}glin and Teng~\cite{DBLP:conf/focs/RoglinT09} generalized the notion
of this gap to higher dimensions, i.e.\ $d \geq 2$, and gave the first
polynomial bound in $n$ and $\phi$ for the smoothed number of Pareto optimal
solutions. Furthermore, they were able to bound higher moments. The degree of
the polynomial, however, was $d^{\THETA{d}}$. Recently, Moitra and
O'Donnell~\cite{MoitraO10} showed a bound of $O(n^{2d} \phi^{d(d+1)/2})$,
which is the first polynomial bound for the expected size of the Pareto
set with degree polynomial in $d$.
An ``intriguing problem'' with which Moitra and O'Donnell conclude their paper is
whether their upper bound could be significantly improved, for example to $f(d, \phi) n^2$.
Moitra and O'Donnell suspect that for constant $\phi$ there should be a lower
bound of $\OMEGA{n^d}$. In this paper we resolve this question almost completely.
\paragraph{Our Contribution}
For $d = 1$ we prove a lower bound $\OMEGA{\MIN{n^2 \phi^{1-\THETA{1/\phi}},
2^{\THETA{n}}}}$. This is the first bound with dependence on $n$ and $\phi$ and
it nearly matches the upper bound $O(\MIN{n^2 \phi, 2^n})$. For $d \geq 2$ we
prove a lower bound $\OMEGA{ \MIN{ (n\phi)^{(d-\LOG[2]{d}) \cdot (1 -
\THETA{1/\phi})}, 2^{\THETA{n}} } }$. This is the first bound for the general
multi-criteria case. Still, there is a significant gap between this lower bound
and the upper bound of $O(\MIN{n^{2d} \phi^{d(d+1)/2},2^n})$ shown by Moitra and O'Donnell, but the exponent of $n$ is
nearly $d - \LOG[2]{d}$. Hence our lower bound is close to the lower bound of $\OMEGA{n^d}$
conjectured by Moitra and O'Donnell.
\section{The Multi-criteria Case}
In this section we present a lower bound for the expected number of Pareto optimal solutions in multi-criteria optimization problems. For this, we construct a class of instances for a variant of the knapsack problem where each object has one weight and~$d$ profits and where objects can form groups. We restrict our attention to~$d \geq 2$ as we discussed the case~$d = 1$ in the previous section.
\begin{theorem}
\label{multi.mainthm}
For any fixed integer~$d \geq 2$ there is a class of instances for the $(d+1)$-dimensional knapsack problem with groups for which the expected number of Pareto-optimal solutions is lower bounded by
\[ \OMEGA{ \MIN{ (n\phi)^{(d-\LOG{d}) \cdot (1 - \THETA{1/\phi})}, 2^{\THETA{n}} } } \KOMMA \]
where~$n$ is the number of objects and~$\phi$ is the maximum density of the profits' probability distributions.
\end{theorem}
Unfortunately, Theorem~\ref{multi.mainthm} does not generalize
Theorem~\ref{bi.mainthm}. This is due to the fact that, though we know an
explicit formula for the expected number of Pareto optimal solutions if all
profits are uniformly chosen from~$[0, 1]$, we were not able to find a simple
non-trivial lower bound for it. Hence, in the general multi-criteria case, we
concentrate on analyzing the copy and split steps.
In the bi-criteria case we used an additional object~$b$ to copy the Pareto set
(see Figure~\ref{bi.fig.copy.step}). For that we had to ensure that every
solution using this object has higher weight than all solutions without~$b$. The
opposite had to hold for the profit. Since all profits are in~$[0,1]$, the
profit of every solution must be in~$[0,n]$. As the Pareto set of the first $n_p \leq n/2$
objects has profits in $[0,n/(2\phi)]$, we could fit $n_q = \THETA{\LOG{\phi}}$ copies of
this initial Pareto set into the interval~$[0,n]$.
In the multi-criteria case, every solution has a profit in~$[0,n]^d$.
In our construction, the initial Pareto set consists only of a single solution, but
we benefit from the fact that the number of mutually non-dominating copies of the
initial Pareto set that we can fit into the hypercube~$[0,n]^d$ grows quickly with~$d$.
Let us consider the case that we have some Pareto set~$\P$ whose profits lie in some hypercube~$[0,a]^d$.
We will create $\binom{d}{\dt}$ copies of this Pareto set; one for every vector~$x \in \SET{ 0, 1 }^d$
with exactly $\dt = \ceil{d/2}$ ones. Let $x \in \SET{ 0, 1 }^d$ be such a vector. Then we generate the
corresponding copy~$C_x$ of the Pareto set~$\P$ by shifting it by $a + \varepsilon$ in every dimension~$i$ with~$x_i=1$.
If all solutions in these copies have higher weights than the solutions in the initial Pareto set~$\P$,
then the initial Pareto set stays Pareto optimal. Furthermore, for each pair of copies~$C_x$ and~$C_y$,
there is one index~$i$ with~$x_i=1$ and~$y_i=0$. Hence, solutions from~$C_y$ cannot dominate solutions from~$C_x$.
Similarly, one can argue that no solution in the initial copy can dominate any solution from~$C_x$.
This shows that all solutions in copy~$C_x$ are Pareto optimal. All the copies (including the initial one)
have profits in $[0,2a + \varepsilon]^d$ and together $|\P|\cdot \big( 1+\binom{d}{\dt} \big) \geq |\P| \cdot 2^d/d$ solutions.
We start with an initial Pareto set of a single solution with profit in~$[0,1/\phi]^d$, and hence we can
make $\THETA{\LOG{n\phi}}$ copy steps before the hypercube~$[0,n]^d$ is filled. In each of these steps
the number of Pareto optimal solutions increases by a factor of at least~$2^d/d$, yielding a total number
of at least
\[ \left( \frac{2^d}{d} \right)^{\THETA{\LOG{n \phi}}} = (n \phi)^{\THETA{d-\LOG{d}}} \]
Pareto optimal solutions.
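Two ingredients of this count can be checked mechanically: the copy factor $1 + \binom{d}{\lceil d/2 \rceil} \geq 2^d/d$, and the exponent identity behind the last equality, which is just $a^{\log_2 b} = b^{\log_2 a}$. A sketch:

```python
import math

# Copy factor: 1 + C(d, ceil(d/2)) >= 2^d / d for every small d >= 2
for d in range(2, 40):
    assert 1 + math.comb(d, math.ceil(d / 2)) >= 2 ** d / d

# Exponent identity: (2^d/d)^(log2(n*phi)) == (n*phi)^(d - log2(d))
d, n, phi = 4, 100, 10.0
lhs = (2 ** d / d) ** math.log2(n * phi)
rhs = (n * phi) ** (d - math.log2(d))
print(abs(lhs - rhs) <= 1e-9 * rhs)  # True
```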
In the following, we describe how these copy steps can be realized in the
restricted multi-profit knapsack problem. Again, we have to make a split step because the profit of every object must be in~$[0,1]^d$. Due to such technicalities, the actual bound we prove looks slightly different from the one above.
It turns out that we need (before splitting)~$d$ new objects
$\LIST{b}{d}$ for each copy step in contrast to the bi-criteria case, where (before splitting) a single
object~$b$ was enough.
Let~$n_q \geq 1$ be an arbitrary positive integer and let~$\phi \geq 2d$ be a
real. We consider objects~$b_{i,j}$ with weights~$2^i/\dt$ and profit vectors
\[ q_{i,j} \in Q_{i,j} := \prod \limits_{k=1}^{j-1} \left[ 0, \frac{\ceil{m_i}}{\phi} \right] \times \left( m_i - \frac{\ceil{m_i}}{\phi}, m_i \right] \times \prod \limits_{k=j+1}^d \left[ 0, \frac{\ceil{m_i}}{\phi} \right] \KOMMA \]
where~$m_i$ is recursively defined as
\begin{equation}
\label{eq.multi.recurrence}
m_0 := 0 \ \mbox{ and } \ m_i := \frac{1}{\phi - d} \cdot \left( \sum \limits_{l=0}^{i-1} \left( m_l \cdot \left( \phi + d \right) + d \right) \right),\ i = \NLIST{n_q} \DOT
\end{equation}
The explicit formula for this recurrence is
\begin{equation*}
\label{eq.multi.explicit}
m_i = \frac{d}{\phi+d} \cdot \left( \left( \frac{2\phi}{\phi-d} \right)^i - 1 \right), \ i = \NLIST{n_q} \DOT
\end{equation*}
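The explicit solution can be checked against the recurrence numerically; the sketch below uses assumed sample values satisfying $\phi \geq 2d$.

```python
phi, d, n_q = 8.0, 3, 10   # assumed sample values with phi >= 2d

# The recurrence: m_0 = 0, m_i = (1/(phi - d)) * sum_{l<i} (m_l * (phi + d) + d)
m = [0.0]
for i in range(1, n_q + 1):
    m.append(sum(ml * (phi + d) + d for ml in m) / (phi - d))

# The explicit formula: m_i = d/(phi + d) * ((2 phi/(phi - d))^i - 1)
for i in range(1, n_q + 1):
    explicit = d / (phi + d) * ((2 * phi / (phi - d)) ** i - 1)
    assert abs(m[i] - explicit) <= 1e-9 * max(1.0, explicit)
print("recurrence and explicit formula agree up to i =", n_q)
```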
The $d$-dimensional interval~$Q_{i,j}$ is chosen such that the $j^\text{th}$~profit of object~$b_{i,j}$ is large and all the other profits are small, as discussed in the motivation above.
Let~$H(x)$ be the \emph{Hamming weight} of a $0$-$1$-vector~$x$, i.e.\ the number of ones in~$x$, and let $\hat{\S} := \{ x \in \SET{ 0, 1 }^d \WHERE H(x) \in \SET{ 0, \dt } \}$ denote the set of all $0$-$1$-vectors of length~$d$ with~$0$ or~$\dt$ ones. As set~$\S$ of solutions we consider $\S := \hat{\S}^{n_q}$.
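The structure of $\hat{\S}$ is easy to enumerate; the sketch below uses a small assumed $d$.

```python
import itertools
import math

d = 5
dt = math.ceil(d / 2)
# All 0-1 vectors of length d with Hamming weight 0 or ceil(d/2)
S_hat = [x for x in itertools.product([0, 1], repeat=d) if sum(x) in (0, dt)]
print(len(S_hat))  # 1 + C(5, 3) = 11
```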
\begin{lemma}
\label{multi.lemma.copy}
Let the set~$\S$ of solutions and the objects~$b_{i,j}$ be as above. Then, each solution~$s \in \S$ is Pareto optimal for $K_\S(\{ b_{i,j} \WHERE i = \NLIST{n_q}, \ j = \NLIST{d} \})$.
\end{lemma}
\begin{proof}
We show the statement by induction over~$n_q$ and discuss the base case and the inductive step simultaneously because of similar arguments. Let $\S' := \hat{\S}^{n_q-1}$ and let $(s, s_{n_q}) \in \S' \times \hat{\S}$ be an arbitrary solution from~$\S$. Note that for~$n_q = 1$ the vector~$s$ is the empty vector, i.e.\ the $0$-$1$-vector of length~$0$. First we show that there is no domination within one copy, i.e.\ there is no solution of type $(s', s_{n_q}) \in \S$ that dominates $(s, s_{n_q})$. For~$n_q = 1$ this is obviously true. For~$n_q \geq 2$ the existence of such a solution would imply that~$s'$ dominates~$s$ in the knapsack problem $K_{\S'}(\{ b_{i,j} \WHERE i = \NLIST{n_q - 1}, \ j = \NLIST{d} \})$. This contradicts the inductive hypothesis.
Now we prove that there is no domination between solutions from different copies, i.e.\ there is no solution of type $(s', s'_{n_q}) \in \S$ with $s'_{n_q} \neq s_{n_q}$ that dominates $(s, s_{n_q})$. If $s_{n_q} = \vec{0}$, then the total weight of the solution $(s, s_{n_q})$ is at most $\sum_{i=1}^{n_q-1} 2^i < 2^{n_q}$. The right side of this inequality is a lower bound for the weight of solution $(s', s'_{n_q})$ because $s'_{n_q} \neq s_{n_q}$. Hence, $(s', s'_{n_q})$ does not dominate $(s, s_{n_q})$. Finally, let us consider the case $s_{n_q} \neq \vec{0}$. There must be an index~$j \in [d]$ where $(s_{n_q})_j = 1$ and $(s'_{n_q})_j = 0$. We show that the $j^\text{th}$~total profit of $(s, s_{n_q})$ is higher than the $j^\text{th}$~profit of $(s', s'_{n_q})$. The former one is strictly bounded from below by $m_{n_q} - \ceil{m_{n_q}}/\phi$, whereas the latter one is bounded from above by
\[ \sum \limits_{i=1}^{n_q-1} \left( (\dt-1) \cdot \frac{\ceil{m_i}}{\phi} + \max \SET{ \frac{\ceil{m_i}}{\phi}, m_i } \right) + \dt \cdot \frac{\ceil{m_{n_q}}}{\phi} \DOT \]
Solution $(s', s'_{n_q})$ can use at most~$\dt$ objects of each group $b_{i,1}, \ldots, b_{i,d}$. Each of them, except one, can contribute at most $\frac{\ceil{m_i}}{\phi}$ to the $j^\text{th}$~total profit. One can contribute either at most~$\frac{\ceil{m_i}}{\phi}$ or at most~$m_i$. This argument also holds for the $n_q^\text{th}$~group, but by the choice of index~$j$ we know that each object chosen by~$s'_{n_q}$ contributes at most~$\frac{\ceil{m_i}}{\phi}$ to the $j^\text{th}$~total profit. It is easy to see that $\ceil{m_i}/\phi \leq m_i$ because of $\phi > d \geq 1$. Hence, our bound simplifies to
\begin{align*}
\sum \limits_{i=1}^{n_q-1} &\left( (\dt-1) \cdot \frac{\ceil{m_i}}{\phi} + m_i \right) + \dt \cdot \frac{\ceil{m_{n_q}}}{\phi} \cr
&\leq \sum \limits_{i=1}^{n_q-1} \left( d \cdot \frac{m_i+1}{\phi} + m_i \right) + (d-1) \cdot \frac{m_{n_q}+1}{\phi} && \mbox{($d \geq 2$)} \cr
&= \frac{1}{\phi} \cdot \left( \sum \limits_{i=1}^{n_q-1} ( m_i \cdot (\phi+d) + d) + d \cdot (m_{n_q} + 1) \right) - \frac{m_{n_q}+1}{\phi} \cr
&= \frac{1}{\phi} \cdot \left( \sum \limits_{i=0}^{n_q-1} ( m_i \cdot (\phi+d) + d) + d \cdot m_{n_q} \right) - \frac{m_{n_q}+1}{\phi} && \mbox{($m_0 = 0$)} \cr
&= \frac{1}{\phi} \cdot ( (\phi-d) \cdot m_{n_q} + d \cdot m_{n_q}) - \frac{m_{n_q}+1}{\phi} && \mbox{(Equation~\eqref{eq.multi.recurrence})} \cr
&\leq m_{n_q} - \frac{\ceil{m_{n_q}}}{\phi} \DOT
\end{align*}
This implies that $(s', s'_{n_q})$ does not dominate $(s, s_{n_q})$.
\end{proof}
Immediately, we get a statement about the expected number of Pareto optimal solutions if we randomize.
\begin{corollary}
\label{multi.corol.pareto}
Let~$\S$ and~$b_{i,j}$ be as above, but the profit vectors~$q_{i,j}$ are arbitrarily drawn from~$Q_{i,j}$. Then, the expected number of Pareto optimal solutions for $K_\S(\{ b_{i,j} \WHERE i = \NLIST{n_q}, \ j = \NLIST{d} \})$ is at least $\left( \frac{2^d}{d} \right)^{n_q}$.
\end{corollary}
\begin{proof}
This result follows from Lemma~\ref{multi.lemma.copy} and the fact \[ |\hat{\S}| = 1 + \binom{d}{\dt} \geq 1 + \frac{\sum \limits_{i=1}^d \binom{d}{i}}{d} = 1 + \frac{2^d-1}{d} \geq \frac{2^d}{d} \DOT \qedhere \]
\end{proof}
As in the bi-criteria case we now split each object~$b_{i,j}$ into~$k_i := \ceil{m_i}$ objects $b_{i,j}^{(1)}, \ldots, b_{i,j}^{(k_i)}$ with weights~$2^i/(k_i \cdot \dt)$ and with profit vectors
\[ q_{i,j}^{(l)} \in Q_{i,j}/k_i := \prod \limits_{k=1}^{j-1} \left[ 0, \frac{1}{\phi} \right] \times \left( \frac{m_i}{k_i} - \frac{1}{\phi}, \frac{m_i}{k_i} \right] \times \prod \limits_{k=j+1}^d \left[ 0, \frac{1}{\phi} \right] \DOT \]
Then, we adapt our set~$\S$ of solutions such that for any fixed indices~$i$ and~$j$ either all objects $b_{i,j}^{(1)}, \ldots, b_{i,j}^{(k_i)}$ are put into the knapsack or none of them. Corollary~\ref{multi.corol.pareto} yields the following result.
\begin{corollary}
\label{multi.corol.final.pareto}
Let~$\S$ and~$b_{i,j}^{(l)}$ be as described above, but the profit vectors $q_{i,j}^{(1)}, \ldots, q_{i,j}^{(k_i)}$ are chosen uniformly from~$Q_{i,j}/k_i$. Then, the expected number of Pareto optimal solutions of $K_\S(\{ b_{i,j}^{(l)} \WHERE i = \NLIST{n_q}, \ j = \NLIST{d}, \ l = \NLIST{k_i} \})$ is at least $\left( \frac{2^d}{d} \right)^{n_q}$.
\end{corollary}
Still, the lower bound is expressed in~$n_q$ and not in the number of objects used. So the next step is to analyze the number of objects.
\begin{lemma}
\label{multi.lemma.count.objects}
The number of objects~$b_{i,j}^{(l)}$ is upper bounded by $d \cdot n_q + \frac{2d^2}{\phi-d} \cdot \left( \frac{2\phi}{\phi-d} \right)^{n_q}$.
\end{lemma}
\begin{proof}
The number of objects~$b_{i,j}^{(l)}$ is $\sum_{i=1}^{n_q} (d \cdot k_i) = d \cdot \sum_{i=1}^{n_q} \ceil{m_i} \leq d \cdot n_q + d \cdot \sum_{i=1}^{n_q} m_i$, and
\begin{align*}
\sum \limits_{i=1}^{n_q} m_i &\leq \frac{d}{\phi+d} \cdot \sum \limits_{i=1}^{n_q} \left( \frac{2\phi}{\phi-d} \right)^i \leq \frac{d}{\phi+d} \cdot \frac{\left( \frac{2\phi}{\phi-d} \right)^{n_q+1}}{\left( \frac{2\phi}{\phi-d} \right) - 1} \cr
&\leq \frac{d}{\phi} \cdot \left( \frac{2\phi}{\phi-d} \right) \cdot \left( \frac{2\phi}{\phi-d} \right)^{n_q} = \frac{2d}{\phi-d} \cdot \left( \frac{2\phi}{\phi-d} \right)^{n_q} \DOT \qedhere
\end{align*}
\end{proof}
Now we can prove Theorem~\ref{multi.mainthm}.
\begin{proof}[Proof of Theorem~\ref{multi.mainthm}]
Without loss of generality let~$n \geq 16d$ and~$\phi \geq 2d$. For the moment let us assume $\phi - d \leq \frac{4d^2}{n} \cdot \left( \frac{2\phi}{\phi-d} \right)^{\frac{n}{2d}}$. This is the interesting case leading to the first term in the minimum in Theorem~\ref{multi.mainthm}. We set $\hat{n}_q := \frac{\LOG{ (\phi-d) \cdot \frac{n}{4d^2} }}{\LOG{ \frac{2\phi}{\phi-d} }} \in \left[ 1, \frac{n}{2d} \right]$ and obtain $n_q := \floor{\hat{n}_q} \geq 1$ by rounding. All inequalities hold because of the bounds on~$n$ and~$\phi$. Now we consider objects~$b_{i,j}^{(l)}$, $i = \NLIST{n_q}$, $j = \NLIST{d}$, $l = \NLIST{k_i}$, with weights~$2^i/(k_i \cdot \dt)$ and profit vectors~$q_{i,j}^{(l)}$ chosen uniformly from~$Q_{i,j}/k_i$. All these intervals have length~$\frac{1}{\phi}$ and hence all densities are bounded by~$\phi$. Let~$N$ be the number of objects. By Lemma~\ref{multi.lemma.count.objects}, this number is bounded by
\begin{align*}
N &\leq d \cdot n_q + \frac{2d^2}{\phi-d} \cdot \left( \frac{2\phi}{\phi-d} \right)^{n_q} \leq d \cdot \hat{n}_q + \frac{2d^2}{\phi-d} \cdot \left( \frac{2\phi}{\phi-d} \right)^{\hat{n}_q} \cr
&\leq d \cdot \hat{n}_q + \frac{2d^2}{\phi-d} \cdot (\phi-d) \cdot \frac{n}{4d^2} \leq n \DOT
\end{align*}
Hence, the number~$N$ of binary variables we actually use is at most~$n$, as required. As set~$\S$ of solutions we use the set described above, encoding the copy step and the split step. Due to Corollary~\ref{multi.corol.final.pareto}, for fixed~$d \geq 2$ the expected number of Pareto optimal solutions of $K_\S(\{ b_{i,j}^{(l)} \WHERE i = \NLIST{n_q}, \ j = \NLIST{d}, \ l = \NLIST{k_i} \})$ is
\begin{align*}
\OMEGA{ \left( \frac{2^d}{d} \right)^{n_q} } &= \OMEGA{ \left( \frac{2^d}{d} \right)^{\hat{n}_q} } = \OMEGA{ \left( \frac{2^d}{d} \right)^{\frac{\LOG{ (\phi-d) \cdot \frac{n}{4d^2} }}{\LOG{ \frac{2\phi}{\phi-d} }}} } = \OMEGA{ \left( (\phi-d) \cdot \frac{n}{4d^2} \right)^{\frac{\LOG{ \frac{2^d}{d} }}{\LOG{ \frac{2\phi}{\phi-d} }}} } \cr
&= \OMEGA{ (\phi \cdot n)^{\frac{d - \LOG[2]{d}}{\LOG[2]{ \frac{2\phi}{\phi-d} }}} } = \OMEGA{ (\phi \cdot n)^{(d - \LOG[2]{d}) \cdot (1 - \THETA{1/\phi})} } \KOMMA
\end{align*}
where the last step holds because of the same reason as in the proof of Theorem~\ref{bi.mainthm}.
In the case $\phi - d > \frac{4d^2}{n} \cdot \left( \frac{2\phi}{\phi-d} \right)^{\frac{n}{2d}}$ we construct the same instance as above, but for a maximum density~$\phi' > d$ where $\phi' - d = \frac{4d^2}{n} \cdot \left( \frac{2\phi'}{\phi'-d} \right)^{\frac{n}{2d}}$. Since~$n \geq 16d$, the value~$\phi'$ exists, is unique, and $\phi' \in [65d, \phi)$. Furthermore, we get $\hat{n}_q = \frac{n}{2d}$. As above, the expected size of the Pareto set is
\[ \OMEGA{ \left( \frac{2^d}{d} \right)^{\hat{n}_q} } = \OMEGA{ \left( \frac{2^d}{d} \right)^{\frac{n}{2d}} } = \OMEGA{ 2^{\THETA{n}} } \DOT \qedhere \]
\end{proof} |
2,869,038,154,637 | arxiv | \section*{abstract}
Influenza A is a serious disease that causes significant morbidity and mortality, and vaccines against the seasonal influenza disease are of variable effectiveness. In this paper, we discuss use of the
$p_{\rm epitope}$
method to predict the dominant influenza strain and the expected vaccine effectiveness in the coming flu season. We illustrate how the effectiveness of the 2014/2015 A/Texas/50/2012 [clade 3C.1] vaccine against the A/California/02/2014 [clade 3C.3a] strain that emerged in the population can be estimated via $p_{\rm epitope}$. In addition, we show, by a multidimensional scaling analysis of data collected through 2014, the emergence of a new A/New Mexico/11/2014-like cluster [clade 3C.2a] that is immunologically distinct from the A/California/02/2014-like strains.
\section*{Author Summary}
We show that the $p_{\rm epitope}$ measure of antigenic distance
is correlated with influenza A H3N2 vaccine effectiveness in humans
with $R^2 = 0.75$ in the years 1971--2015.
As an example,
we use this measure to predict from sequence data prior to 2014 the effectiveness
of the 2014/2015 influenza vaccine against the
A/California/02/2014 strain that emerged in 2014/2015.
Additionally, we use this measure along with a reconstruction of the
probability density of the virus in sequence space from sequence
data prior to 2015 to predict that
a newly emerging A/New Mexico/11/2014 cluster will likely
be the dominant circulating strain in 2015/2016.
\section*{Introduction}
Influenza is a highly contagious virus, usually spread by
droplet or fomite transmission. The high mutation and reassortment
rates of this virus lead to significant viral diversity in the population
\cite{ferguson2003,Holmes}.
In most years, one type of influenza predominates among
infected people, typically A/H1N1, A/H3N2, or B.
In the 2014/2015 season, A/H3N2 was the most common \cite{Y14}.
While there are many strains of influenza A/H3N2,
typically there is a dominant cluster of strains that infect
most people during one winter season.
Global travel by infected individuals leads this cluster of sequences
to dominate in most affected countries in a single influenza season.
New clusters arise every 3--5 years by the combined effects of
mutation and selection \cite{smith,clustering}.
There is significant selection pressure upon the virus to evolve
due to prior vaccination or exposure \cite{Illingworth,Lassig}.
Due to evolution of the influenza virus, the strains selected
by the World Health Organization (WHO) for inclusion
in the seasonal vaccine are reviewed annually and often updated.
The selection is based on which strains are circulating,
the geographic spread of circulating strains, and
the expected effectiveness of the current vaccine strains
against newly identified strains \cite{CDC_flu}.
There are to date 143 national influenza centers
located in 113 countries that provide and study influenza
surveillance data.
Five WHO Collaborating Centers for Reference and Research on Influenza
(Centers for Disease Control and Prevention in Atlanta, Georgia, USA;
National Institute for Medical Research in London, United Kingdom;
Victorian Infectious Diseases Reference Laboratory in Melbourne, Australia;
National Institute of Infectious Diseases in Tokyo, Japan; and
Chinese Center for Disease Control and Prevention in Beijing, China)
are sent samples for additional analysis.
These surveillance data are used to make forecasts about which
strains are most likely to dominate in the human population.
These forecasts are used by the WHO to make specific recommendations
about
the strains to include in the annual vaccine,
in 2016 one each of an A/H1N1 strain, an A/H3N2 strain, and an influenza B strain of either the Yamagata or the Victoria lineage. Additionally, for each recommended strain there is often a list of 5--6 ``like'' strains that may be substituted by manufacturers for the recommended strain and which may grow more readily in the vaccine manufacturing process that uses hen's eggs.
We here focus on predicting the
expected effectiveness of the current vaccine strains
against newly identified strains and on predicting or detecting
the emergence of new influenza strains.
Predicting effectiveness or emergence without recourse to
animal models or human data is challenging.
The influenza vaccine protects against strains similar to the vaccine, but not
against strains sufficiently dissimilar. For example, the A/Texas/50/2012(H3N2)
2014/2015 Northern hemisphere
vaccine has been observed to not protect against the
A/California/02/2014(H3N2) virus.
Furthermore, there is no vaccine that provides long-lasting,
universal protection, although this is an active research
topic \cite{universal}.
Vaccine effectiveness is expected to be a function of
``antigenic distance.''
While antigenic distance is often estimated from
ferret animal model hemagglutination inhibition (HI)
studies, the concept is more general.
In particular, in the present study we are interested in
the antigenic distance that the human immune system detects.
A measurement of antigenic distance that is
predictive of vaccine effectiveness for H3N2 and H1N1 influenza A
in humans is $p_{\rm epitope}$ \cite{calculator,gupta,flu2,h1n1,entropy}.
The quantity $p_{\rm epitope}$ is the fraction of
amino acids in the dominant epitope region of hemagglutinin that
differ between the vaccine and virus
\cite{gupta}.
The structure of the H3N2 hemagglutinin is shown in Figure \ref{fig0},
and the five epitopes are highlighted in color.
The quantity $p_{\rm epitope}$
is an accurate estimate of
influenza antigenic distance in humans.
Previous work has shown that $p_{\rm epitope}$ correlates with influenza H3N2 vaccine
effectiveness in humans with $R^2 = 0.81$ for the years 1971--2004 \cite{gupta}.
While our focus here is H3N2, other work has shown that $p_{\rm epitope}$
also correlates with influenza H1N1 vaccine effectiveness in humans
\cite{h1n1,Huang2012}.
The $p_{\rm epitope}$ measure has been extended to
the highly pathogenic avian influenza H5N1 viruses \cite{Peng2014}.
The $p_{\rm epitope}$ measure has additionally been extended to veterinary applications,
for example equine H3N8 vaccines \cite{Daly2013}.
In order to determine the strains to be included in the
vaccine, the emergence of new strains likely to dominate
in the human population must be detected.
We here use the method of multidimensional scaling
to detect emerging strains. As an example, we apply
the approach to the 2014--2015 season.
Dominant, circulating strains of influenza H3N2 in the human population
typically have been present at low frequencies for
2--3 years before fixing in the population. While the frequencies of such
emerging strains are low, they are high enough that
samples are collected, sequenced, and
deposited in GenBank.
Multidimensional scaling, also known as
principal component analysis \cite{Gower},
has been used to identify clusters of influenza from
animal model data \cite{smith}.
Thus, this method can be used to detect
an incipient dominant strain for an upcoming flu season from
sequence data alone,
before the strain becomes dominant \cite{clustering}.
We here use this method to detect emerging strains in the 2014--2015 season.
Interestingly,
H3N2 evolves such that the reconstructed phylogenetic tree
has a distinct one-dimensional backbone \cite{Lassig2012,clustering}.
In this paper,
we show that the current A/Texas/50/2012
vaccine is predicted not to protect against the
A/California/02/2014 strain that has emerged in the population,
consistent with recent observations \cite{WHO}.
This A/California/02/2014 strain
can be detected and predicted as a transition from the A/Texas/50/2012 strain.
The proposed summer 2015 vaccine strain is A/Switzerland/9715293/2013, which is
identical in the expressed hemagglutinin (HA1) region to
the A/California/02/2014 strain \cite{note1}.
Furthermore, we find that there is in 2015/2016 a transition underway from the
A/California/02/2014 cluster
to an
A/New Mexico/11/2014
cluster.
The latter may be an appropriate vaccine component for next season,
because the new A/New Mexico/11/2014
cluster is emerging and appears, based upon representation in the sequence database,
to be displacing the A/California/02/2014 cluster.
\section*{Methods}
\subsection*{The $p_{\rm epitope}$ method}
We calculate $p_{\rm epitope}$, the fraction of amino acids in the dominant epitope region of hemagglutinin that differ between the vaccine and virus \cite{gupta}. We use epitope sites as in \cite{gupta}, illustrated in Fig.\ \ref{fig0}. For each of the five epitopes \cite{calculator,gupta}, we calculate the number of amino acid substitutions between the vaccine and virus and divide this quantity by the number of amino acids in the epitope. The value of $p_{\rm epitope}$ is defined to be the largest of these five values.
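For concreteness, this calculation can be sketched in a few lines; the function name and the toy epitope site lists below are illustrative placeholders, not the actual 130 H3N2 epitope sites of \cite{gupta}.

```python
def p_epitope(vaccine, virus, epitopes):
    """Largest per-epitope fraction of amino acids differing between
    the vaccine and virus HA1 sequences (assumed aligned, equal length)."""
    assert len(vaccine) == len(virus)
    best = 0.0
    for sites in epitopes.values():
        diffs = sum(1 for i in sites if vaccine[i] != virus[i])
        best = max(best, diffs / len(sites))
    # The epitope realizing the maximum is the dominant epitope.
    return best

# Toy example with two hypothetical 3-site epitopes (NOT the real H3N2 sites):
toy_epitopes = {"A": [0, 1, 2], "B": [3, 4, 5]}
print(p_epitope("MKTIIA", "MKTIVA", toy_epitopes))  # epitope B: 1 of 3 sites differ
```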
\subsection*{Identification of Vaccine Strains and Circulating Strains}
The dominant circulating influenza H3N2 strain and the vaccine strain
were determined from annual WHO reports
\cite{who04,who05,who06,who07,who2007,who08,who09,who10,Y102,Y112,who13,WHO,wer8941}.
These strains are listed in Table \ref{table0}.
In many years, the WHO report lists a preferred vaccine strain, while
the actual vaccine is a ``like'' strain.
Additionally, in some years, different vaccines
were used in different regions.
For each study listed in Table \ref{table0},
the vaccine strain used is listed.
\subsection*{Estimation of Vaccine Effectiveness}
Vaccine effectiveness can be quantified. It is defined as \cite{gupta}
\begin{equation}
E = \frac{u-v}{u}
\label{effectiveness}
\end{equation}
where $u$ is the rate at which unvaccinated people
are infected with influenza, and $v$ is the rate at which
vaccinated people are infected with influenza.
The vaccine effectiveness in Eq.\ \ref{effectiveness} was calculated
from rates of infection observed in epidemiological studies.
Influenza H3N2 vaccine effectiveness values for years 1971--2004
are from studies previously collected \cite{gupta}.
Laboratory-confirmed
data for the years 2004--2015 were collected from the studies cited in Table \ref{table0}.
Epidemiological data from healthy adults, aged approximately 18--65, were used.
For each study,
the total number of unvaccinated subjects, $N_u$,
the total number of vaccinated subjects, $N_v$,
the number of H3N2 influenza cases among the unvaccinated subjects, $n_u$,
and
the number of H3N2 influenza cases among the vaccinated subjects, $n_v$,
are known and listed in the table.
From these numbers, vaccine effectiveness was calculated
from Eq.\ \ref{effectiveness},
where $u = n_u / N_u$ and $v = n_v / N_v$.
Error bars, $\varepsilon$, on the calculated effectiveness values were obtained
assuming binomial statistics for each data set \cite{gupta}:
$\varepsilon^2 = [\sigma_v^2/u^2 / N_v + (v/u^2)^2 \sigma_u^2 / N_u ]$,
where
$\sigma_v^2 = v (1-v)$, and
$\sigma_u^2 = u (1-u)$.
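As a check, a minimal sketch of Eq.\ \ref{effectiveness} and the error bar above, applied to the 2004--2005 counts from Table \ref{table0}, reproduces the 9\% effectiveness reported there.

```python
import math

def effectiveness(n_u, N_u, n_v, N_v):
    """Vaccine effectiveness E = (u - v)/u with a binomial error bar."""
    u, v = n_u / N_u, n_v / N_v
    E = (u - v) / u
    # Error propagation for E = 1 - v/u under binomial sampling statistics.
    var_u, var_v = u * (1 - u), v * (1 - v)
    eps = math.sqrt(var_v / u**2 / N_v + (v / u**2)**2 * var_u / N_u)
    return E, eps

# 2004-2005 H3N2 season (Table 1): n_u = 6, N_u = 40, n_v = 50, N_v = 367.
E, eps = effectiveness(6, 40, 50, 367)
print(f"E = {E:.2f} +/- {eps:.2f}")  # E = 0.09, matching the 9% in the table
```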
\subsection*{Virus Sequence Data in 2013 and 2014}
The evolution of the HA1 region of the H3N2 virus in
the 2013/2014 and 2014/2015 seasons was analyzed in detail.
We downloaded from GenBank the 1006 human HA1 H3N2 sequences that were
collected in 2013 and the
179 human
HA1 H3N2 sequences that were collected in 2014.
\subsection*{Sequence Data Alignment}
All sequences were aligned
before further processing
by multialignment using Clustal Omega.
Only full length HA1 sequences of 327 amino acids were used,
as partial sequences
were excluded in the GenBank search criterion.
Default clustering parameters in Clustal Omega were used.
There were no gaps or deletions detected
by Clustal Omega in the 2013 and 2014 sequence data.
\subsection*{Multidimensional Scaling}
Multidimensional scaling finds a reduced number of dimensions, $n$,
that best reproduce the distances between all pairs of a set of points.
In the present application, the points are HA1 sequences of length 327
amino acids, and the data were reduced to $n=2$ dimensions.
Distances between two sequences were defined as the
Hamming distance, i.e.\ the number of differing amino acids, divided
by the total length of 327. In this way, multidimensional scaling places
the virus sequences in a reduced sequence space so that distances between
pairs of viral sequences are maintained as accurately as possible.
This low-dimensional clustering method enables one
to visualize the viruses, by finding the two best
dimensions to approximate the Hamming distances between all
clustered sequences.
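A minimal sketch of this reduction follows, using classical (Torgerson) multidimensional scaling; the eigendecomposition route below is one standard way to perform the scaling, not necessarily the implementation used for the figures, and the short toy sequences stand in for full 327-residue HA1 sequences.

```python
import numpy as np

def normalized_hamming(a, b):
    """Fraction of aligned positions at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def classical_mds(D, n_dims=2):
    """Classical (Torgerson) MDS: embed points so that the pairwise
    distances in D are reproduced as well as possible in n_dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_dims]    # keep the largest ones
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# Toy "sequences" standing in for 327-residue HA1 sequences:
seqs = ["MKTII", "MKTIV", "MQTVV"]
D = np.array([[normalized_hamming(a, b) for b in seqs] for a in seqs])
coords = classical_mds(D)   # one 2-D point per sequence
```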
\subsection*{Gaussian Kernel Density Estimation}
The method of Gaussian kernel density estimation was used
to predict the probability density of sequences in
the reduced sequence space identified by multidimensional scaling
\cite{clustering}.
Briefly, each sequence was represented by a Gaussian distribution
centered at the position where the sequence lies in the
reduced space. The total estimated viral probability density was the
sum of all of these Gaussians for each virus sequence.
The weight of the Gaussian for each sequence was constant.
The standard deviation of the Gaussian for each sequence was specified as either
one-half, one, or three substitutions in the dominant epitope of the
virus, as discussed later.
In other words, the reconstructed probability density of the
viruses in the reduced $(x,y)$ space, as estimated by the
sequences from GenBank, was given by
$p(x,y) \propto \sum_i \exp\{-[(x-x_i)^2 + (y-y_i)^2] / (2 \sigma^2) \} $,
where the location of virus $i$ in the reduced space is $(x_i, y_i)$,
and $\sigma$ is the standard deviation.
In this way, a smooth estimation of the underlying
distribution of virus sequences from which the sequences deposited in
GenBank are collected is generated.
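A sketch of this reconstruction on a grid is given below; the points and bandwidth are illustrative, not actual embedded sequence coordinates.

```python
import numpy as np

def density_estimate(points, grid_x, grid_y, sigma):
    """p(x, y) proportional to a sum of equal-weight Gaussians,
    one centered at each sequence's position in the reduced space."""
    X, Y = np.meshgrid(grid_x, grid_y)
    p = np.zeros_like(X, dtype=float)
    for (xi, yi) in points:
        p += np.exp(-((X - xi) ** 2 + (Y - yi) ** 2) / (2 * sigma ** 2))
    return p / p.sum()   # normalize over the grid

# Two nearby points form one cluster; a lone point forms another:
pts = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)]
gx = gy = np.linspace(-0.5, 1.5, 41)
p = density_estimate(pts, gx, gy, sigma=0.1)
# The densest grid cell lies in the two-point cluster near the origin.
```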
There are three criteria by which a new cluster can be judged to
determine if it will dominate in the human population in a
future season. First, the cluster must be evident in a
density estimation. Second, the cluster must be growing. That is,
there must be evident selection pressure on the cluster.
Third, the cluster must be sufficiently far from the
current vaccine strain, as judged by
$p_{\rm epitope}$, for the vaccine to provide
little or no protection against the new strains.
From prior work \cite{gupta} and from the results discussed
below,
peaks separated by more than roughly $p_{\rm epitope} = 0.19$
are sufficiently separated that protection against the virus
at one peak is expected to provide little protection against the
viruses at the other.
\section*{Results and Discussion}
\subsection*{Vaccine Effectiveness Correlates with Antigenic Distance}
Figure \ref{fig1} shows how vaccine effectiveness decreases with
antigenic distance.
The equation for the average effectiveness (the solid line in
Figure \ref{fig1}) is
$E = -2.417 p_{\rm epitope} + 0.466$.
Vaccine effectiveness declines to zero
at approximately $p_{\rm epitope} > 0.19$, on average.
When the dominant epitope is A or B, in which there are 19 or 21
amino acids respectively,
this means that vaccine effectiveness declines to zero
after roughly 4 substitutions.
When the dominant epitope is C, in which there are 27 amino acids,
the vaccine effectiveness declines to zero after roughly 5 substitutions.
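The fitted line can be used directly as a rough predictor; a minimal sketch, where clipping negative predictions to zero is our convention rather than part of the fit:

```python
def predicted_effectiveness(p_ep):
    """Average expected H3N2 vaccine effectiveness from the linear fit
    E = -2.417 * p_epitope + 0.466, clipped at zero."""
    return max(0.0, -2.417 * p_ep + 0.466)

print(round(0.466 / 2.417, 3))                   # zero crossing near 0.19
print(round(predicted_effectiveness(0.053), 3))  # modest expected protection
print(predicted_effectiveness(0.24))             # 0.0: no expected protection
```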
Figure \ref{fig1} shows that H3N2 vaccine effectiveness in humans
correlates well with the $p_{\rm epitope}$ measure of
antigenic distance.
In particular, the Pearson correlation coefficient
of $p_{\rm epitope}$ with H3N2 vaccine effectiveness in
humans is $R^2 = 0.75$.
Interestingly, this correlation is nearly
the same as that previously reported for the 1971--2004 subset of years
\cite{gupta}, despite the addition of 50\% more data.
Also of significance is that these
correlations with $p_{\rm epitope}$
are significantly larger than those of ferret-derived distances with
vaccine effectiveness in humans, which as we will show
are $R^2 = 0.39$ or $R^2 = 0.37$ for the two most common measures.
\subsection*{Consistency of Epitopic Sites}
Analysis of HA1 sites under diversifying selection \cite{entropy} shows
that there are only 10 that by this measure should be added
to the 130 known epitope sites \cite{gupta}. Alternatively, of the sites
under diversifying selection, 81\% are within the known epitope regions
\cite{entropy}. The 130 epitope sites that we have used nearly cover the
surface of the head region of the HA1 protein, and this is why they are
nearly complete. Another recent study \cite{ref47} identified epitopes
somewhat different from those that we use and further suggested that
proximity to receptor binding site is a significant determinant of H3
evolution. This result is known to be true because the sialic acid receptor
binding site is in epitope B, which is adjacent to epitope A, and epitopes
A and B are the most commonly dominant epitopes over the years
(Table \ref{table0},
and Table 1 of \cite{gupta}). We note, however, that upon computing
the correlation of the four epitopes defined in \cite{ref47} with the vaccine
effectiveness in the human data considered here, one finds $R^2=0.53$.
This result is to be compared to the $R^2=0.75$ illustrated in Figure \ref{fig1}.
\subsection*{The Influenza A/H3N2 2014/2015 Season}
The
2014/2015 influenza vaccine contains an A/Texas/50/2012(H3N2)-like
virus to protect against A/H3N2 viruses \cite{WHO}.
Novel viral strains detected in the human population
this year include A/Washington/18/2013, A/California/02/2014,
A/Nebraska/4/2014, and A/Switzerland/9715293/2013 \cite{wer8941}.
It should be noted that
A/California/02/2014 and A/Switzerland/9715293/2013
are completely identical
in the HA1 sequence that contains the HA epitopes \cite{note1}.
Table \ref{table1} shows the
$p_{\rm epitope}$ values between the vaccine strain
and these newly-emerged strains.
The values indicate, along with Figure \ref{fig1}, that the
vaccine is unlikely to provide much protection against
these strains, since $p_{\rm epitope} > 0.19$.
\subsection*{Dynamics of Influenza Evolution}
The strains detected in 2013 and
2014 cluster in sequence space.
While the strains are sparse in the full, high-dimensional
sequence space, this clustering is detected
by multidimensional scaling to the two most informative dimensions,
as shown in Figure \ref{fig2}.
The novel strain A/Washington/18/2013
emerged in 2013, followed by A/California/02/2014
and A/Nebraska/4/2014 in 2014, as shown in Figure \ref{fig2}.
The latter two are sufficiently distinct from previous vaccine strains that
expected vaccine effectiveness is limited.
Figure \ref{fig3}
is an estimate of the density distribution of the
influenza H3N2 HA1 sequences in years 2013 and 2014
in the low-dimensional space
provided by the multidimensional scaling.
Dimensional reduction was applied to the subset of sequences
in each of subfigures \ref{fig3}a, b, and c. Then, Gaussian kernel
density estimation was applied to estimate the distribution of sequences
in the reduced two dimensions.
Each sequence is represented by a Gaussian function with
a standard deviation of one-half substitution in the dominant epitope.
By the criteria above, A/California/02/2014(H3N2)
represented the dominant strain circulating in the human population in 2014/2015.
The time evolution in
Figure \ref{fig2}, or a comparison of Figure \ref{fig3}a with
Figure \ref{fig3}b, shows that the
A/California/02/2014 cluster emerged in 2014.
Table \ref{table1} shows that the distance of this new
cluster from the A/Texas/50/2012(egg) strain is
$p_{\rm epitope} > 0.19$, and so from Figure \ref{fig1}
the expected effectiveness of
A/Texas/50/2012(egg) against these novel A/California/02/2014-like
strains is zero.
Conversely,
an effective vaccine for this cluster in the 2014/2015 flu season
could be
A/California/02/2014, or the A/Switzerland/9715293/2013 that is identical
in the HA1 region.
\subsection*{Early detection of new dominant strains}
Surprisingly, when we enlarge the region of sequence
space considered, going from Figure \ref{fig3}b to
Figure \ref{fig2} or Figure \ref{fig3}c, we find another large and growing peak
at a distance $p_{\rm epitope} = 0.24$ from
the A/Texas/50/2012 sequence.
This new cluster contains the
A/Nebraska/4/2014
sequence.
The A/Nebraska/4/2014 sequence is $p_{\rm epitope} = 0.16$
from the
A/California/02/2014 sequence.
The A/Nebraska/4/2014
sequence appears to be dominating the
A/California/02/2014 sequence in the 2015/2016 season.
The consensus strain of this cluster to which A/Nebraska/4/2014 belongs is
A/New Mexico/11/2014.
The consensus strain minimizes the distance from all strains
in the cluster, thus maximizing expected vaccine effectiveness.
Thus, A/New Mexico/11/2014 might be a more effective
choice of vaccine for the majority of the population
in comparison to
A/Switzerland/9715293/2013 or A/California/02/2014.
\subsection*{Phylogenetic Analysis}
A systematic phylogenetic analysis of recent A/H3N2 virus HA
nucleotide sequences has been carried out \cite{clade1,clade2}.
Briefly, phylogenetic trees were reconstructed from three reference
sequence datasets using the maximum likelihood method \cite{clade1},
with bootstrap analyses of 500 replicates.
Dominant branches of the tree were identified with
distinct clade labels.
Analysis of the HA protein sequences showed that there were relatively
few residue changes across all HA clades. The 2014 vaccine
strain A/Texas/50/2012 falls into clade 3C.1, while the new
emerging A/California/02/2014 strain falls into subclade 3C.3a.
The A/Nebraska/4/2014 and the consensus A/New Mexico/11/2014 strains
fall into subclade 3C.2a.
The phylogenetic analysis indicates a closer relationship
of A/Nebraska/4/2014 or A/New Mexico/11/2014 to A/California/02/2014 than
to A/Texas/50/2012.
Note that phylogenetic methods make a number of assumptions. For example, substitution rates at different sites are assumed to be the same and constant in time. Due to selection, however, substitution rates are dramatically higher, at least 100-fold, in dominant epitope regions than in non-dominant epitope or stalk regions. Multi-gene phylogenetic methods are inconsistent in the presence of reassortment, and single-gene phylogenetic methods are inconsistent in the presence of recombination, with the former being perhaps more significant than the latter in the case of influenza. Multidimensional scaling, on the other hand, does not make either of these assumptions. MDS also naturally filters out neutral substitutions, which are random, as the dominant dimensions are identified. Thus, MDS provides a complementary approach to the traditional phylogenetic analysis.
\subsection*{Ferret HI Analysis}
Since an analysis showed that the correlations of the two standard methods
of analyzing ferret hemagglutinin inhibition antisera assays with vaccine
effectiveness in humans in the years 1968--2004 were $R^2=0.47$
and $R^2=0.57$ \cite{gupta}, a number of studies have appeared supporting these
low correlations. For example, Table 3 of \cite{ref45} shows that the correlation of
various immunogenicity parameters is higher with genetic distance than with
HI measures of antigenic distance. The study by Xie et al.\ further
illustrated the limitations of relying on ferret HI data alone \cite{ref46}.
We have updated our calculation of the correlations between the two standard methods
of analyzing ferret hemagglutinin inhibition antisera assays with vaccine
effectiveness in humans to the years 1968--2015, see \cite{gupta} and the last
two columns of Table \ref{table0}. The correlations with $d_1$ and $d_2$
are now $R^2 = 0.39$ and $R^2 = 0.37$, respectively, showing that ferret
HI studies have become even less correlated with human
vaccine effectiveness in recent years.
\section*{Conclusion}
In conclusion, we have shown how vaccine effectiveness can be predicted
using $p_{\rm epitope}$ values.
This method requires only sequence data, unlike
traditional methods that require animal model data, such as
ferret HI assay experiments or post-hoc observations in humans.
Interestingly, the correlation of
$p_{\rm epitope}$ with H3N2 vaccine effectiveness in
humans is $R^2 = 0.75$,
nearly
the same as that previously reported for the 1971--2004 subset of years
\cite{gupta}, despite the addition of 50\% more data.
Significantly, the correlation of H3N2 vaccine effectiveness in humans
with $p_{\rm epitope}$
is significantly larger than with ferret-derived distances,
which for the years 1968--2015 are $R^2 = 0.39$ or $R^2 = 0.37$ for
the two most common measures (Table \ref{table0}).
As an application, we estimated the
effectiveness of the H3N2 vaccine strain of A/Texas/50/2012
against the observed A/California/02/2014 strains.
Clustering of the 2013 and 2014 sequence data confirms the
significance of the $p_{\rm epitope}$ measure.
We showed from data through 2014 that there is
a transition underway from the A/California/02/2014 cluster
to an
A/New Mexico/11/2014
cluster.
The consensus sequence of A/New Mexico/11/2014
from this cluster could have been
considered in late Winter 2015
for inclusion among the H3N2 candidate
vaccine strains for the 2015/2016 flu season.
\clearpage
\begin{sidewaystable}
\begin{minipage}{\linewidth}
\caption{Historical vaccine strains, circulating strains, and vaccine effectivenesses.
\label{table0}}
{\centering
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
{\tiny Year}&
{\tiny Vaccine}&
{\tiny Circulating Strain}&
{\tiny Dominant Strain}&
{\tiny $p_{\rm epitope}$}&
{\tiny Vaccine}&
{\tiny $n_u$}&
{\tiny $N_u$}&
{\tiny $n_v$}&
{\tiny $N_v$}&
{\tiny $d_1$} &
{\tiny $d_2$}\\
&
{\tiny }&
{\tiny }&
{\tiny Epitope}&
{\tiny }&
{\tiny Effectiveness}&
&
&
&
&
&
\\
\hline
\hline
{\tiny 2004-2005}&
{\tiny A/Wyoming/3/2003 (AY531033)}&
{\tiny A/Fujian/411/2002 (AFG72823)}&
{\tiny B}&
{\tiny 0.095}&
{\tiny 9\% \cite{Y04}}&
{\tiny 6}&
{\tiny 40}&
{\tiny 50}&
{\tiny 367}&
{\tiny 2 \cite{d2004}} &
{\tiny 1 \cite{d2004}}
\\
{\tiny 2005-2006}&
{\tiny A/New York/55/2004 (AFM71868)}&
{\tiny A/Wisconsin/67/2005 (AFH00648)}&
{\tiny A}&
{\tiny 0.053}&
{\tiny 36\% \cite{Y05}}&
{\tiny 43}&
{\tiny 165}&
{\tiny 6}&
{\tiny 36}&
{\tiny 1 \cite{wer8109}} &
{\tiny 2 \cite{wer8109}}
\\
{\tiny 2006-2007}&
{\tiny A/Wisconsin/67/2005 (ACF54576)}&
{\tiny A/Hiroshima/52/2005 (ABX79354)}&
{\tiny A}&
{\tiny 0.105}&
{\tiny 5\% \cite{Y06}}&
{\tiny 130}&
{\tiny 406}&
{\tiny 20}&
{\tiny 66}&
{\tiny 1 \cite{d2006}}&
{\tiny 2 \cite{d2006}}
\\
{\tiny 2007}&
{\tiny A/Wisconsin/67/2005 (ACF54576)}&
{\tiny A/Wisconsin/67/2005 (AFH00648)}&
{\tiny B}&
{\tiny 0.048}&
{\tiny 54\% \cite{Y07}}&
{\tiny 74}&
{\tiny 234}&
{\tiny 8}&
{\tiny 55}&
\\
{\tiny 2008-2009}&
{\tiny A/Brisbane/10/2007 (ACI26318)}&
{\tiny A/Brisbane/10/2007 (AIU46080)}&
{\tiny }&
{\tiny 0}&
{\tiny 51\% \cite{Y08}}&
{\tiny 36}&
{\tiny 240}&
{\tiny 4}&
{\tiny 54}&
\\
{\tiny 2010-2011}&
{\tiny A/Perth/16/2009 (AHX37629)}&
{\tiny A/Victoria/208/2009 (AIU46085)}&
{\tiny A}&
{\tiny 0.053}&
{\tiny 39\% \cite{Y10,Y102}}&
{\tiny 100}&
{\tiny 991}&
{\tiny 35}&
{\tiny 569}&
{\tiny 0 \cite{d2010}}&
{\tiny 1.4 \cite{d2010}}
\\
{\tiny 2011-2012}&
{\tiny A/Perth/16/2009 (AHX37629)}&
{\tiny A/Victoria/361/2011 (AIU46088)}&
{\tiny C}&
{\tiny 0.111}&
{\tiny 23\% \cite{Y11,Y112}}&
{\tiny 335}&
{\tiny 616}&
{\tiny 47}&
{\tiny 112}&
{\tiny 1 \cite{d2011}}&
{\tiny 2.8 \cite{d2011}}
\\
{\tiny 2012-2013}&
{\tiny A/Victoria/361/2011 (AGB08328)}&
{\tiny A/Victoria/361/2011 (AIU46088)}&
{\tiny B}&
{\tiny 0.095}&
{\tiny 35\% \cite{Y12}}&
{\tiny 288}&
{\tiny 1257}&
{\tiny 15}&
{\tiny 100}&
{\tiny 5 \cite{wer8810} }&
{\tiny 4 \cite{wer8810}}
\\
{\tiny 2013-2014}&
{\tiny A/Victoria/361/2011 (AGL07159)}&
{\tiny A/Texas/50/2012 (AIE52525)}&
{\tiny B}&
{\tiny 0.190}&
{\tiny 12\% \cite{Y13}}&
{\tiny 145}&
{\tiny 476}&
{\tiny 16}&
{\tiny 60}&
{\tiny 5 \cite{wer8810} }&
{\tiny 4 \cite{wer8810}}
\\
{\tiny 2014-2015}&
{\tiny A/Texas/50/2012 (AIE52525)}&
{\tiny A/California/02/2014 (AIE09741)}&
{\tiny B}&
{\tiny 0.191}&
{\tiny 14\% \cite{Y14}}&
{\tiny 135}&
{\tiny 342}&
{\tiny 100}&
{\tiny 293}&
{\tiny 4 \cite{wer8941}} &
{\tiny 5.6 \cite{wer8941}}
\\
\end{tabular}
}
\par
\bigskip
{H3N2 influenza vaccine effectiveness in humans and
corresponding $p_{\rm epitope}$ antigenic distances
for the 2004 to 2015 seasons.
The vaccine and circulating strains
are shown for each of the years since 2004 that
the H3N2 virus has been the predominant influenza virus and for
which vaccine effectiveness data are available. Vaccine
effectiveness values are taken from the
literature.
Here $N_u$ is the total number of unvaccinated subjects, $N_v$ is the total number of
vaccinated subjects,
$n_u$ is the number of H3N2 influenza cases
among the unvaccinated subjects, and
$n_v$ is the number of H3N2 influenza cases
among the vaccinated subjects.
Also shown are the distances derived from ferret HI data by the two common
measures \cite{gupta}.
}
\end{minipage}
\end{sidewaystable}
\clearpage
\begin{table}
\begin{minipage}{\linewidth}
\caption{The $p_{\rm epitope}$ distances between
the vaccine strain A/Texas/50/2012(egg) and selected
novel strains.
\label{table1}
}
\vspace{3mm}
\renewcommand{\baselinestretch}{1.5} \normalsize
{\scriptsize
\begin{tabular}{lcccccccc}\hline
& & \multicolumn{5}{c}{$p_i$ for each epitope $i$} & & \\ \cline{3-7}
Strain name & Collection date & A & B & C & D & E & $p_{\rm epitope}$ &
predicted effectiveness \\
\hline
A/Texas/50/2012(cell) & 2012-04-15 & 0 & 0.0476 & 0 & 0.0244 & 0 & 0.0476 & 35\% \\
A/Washington/18/2013 & 2013-11-29 & 0.1053 & 0.1905 & 0 & 0.0244 & 0 & 0.1905 & 0\% \\
A/California/02/2014 & 2014-01-16 & 0.1579 & 0.1905 & 0 & 0.0244 & 0 & 0.1905 & 0\% \\
A/Nebraska/04/2014 & 2014-03-11 & 0.1053 & 0.2381 & 0.0370 & 0.0244 & 0.0455 & 0.2381 & 0\% \\
\hline
\end{tabular}
}
\par
\bigskip
The $p_{\rm epitope}$ distances between
the vaccine strain A/Texas/50/2012(egg) and reported
novel strains \cite{wer8941}
in 2013 and 2014.
The $p_i$ values for each epitope ($i = $ A--E), the number
of substitutions in epitope $i$ divided by the number of
amino acids in epitope $i$, are also shown.
The value of $p_{\rm epitope}$ is the largest of the $p_i$ values, and the
corresponding epitope $i$ is dominant. Zero values indicate
no substitutions in that epitope.
\end{minipage}
\end{table}
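The $p_{\rm epitope}$ calculation itself is elementary. Below is a sketch in Python using the epitope sizes from the caption of Fig.~\ref{fig0} (A: 19 aa, B: 21 aa, C: 27 aa, D: 41 aa, E: 22 aa); the substitution counts used as the example input are chosen to be consistent with the A/California/02/2014 row of Table \ref{table1} and are illustrative:

```python
# p_epitope = max over epitopes of (substitutions in epitope / epitope size).
# Epitope sizes (in amino acids) are taken from the Fig. 1 caption.
EPITOPE_SIZE = {"A": 19, "B": 21, "C": 27, "D": 41, "E": 22}

def p_epitope(substitutions):
    """substitutions: dict epitope -> number of amino-acid substitutions
    between the vaccine and circulating strains in that epitope."""
    p = {e: substitutions.get(e, 0) / n for e, n in EPITOPE_SIZE.items()}
    dominant = max(p, key=p.get)
    return dominant, p[dominant]

# Counts consistent with the A/Texas/50/2012 vs A/California/02/2014 row:
# p_A = 3/19 = 0.1579, p_B = 4/21 = 0.1905, p_D = 1/41 = 0.0244.
dominant, pe = p_epitope({"A": 3, "B": 4, "D": 1})
print(dominant, round(pe, 4))   # B 0.1905
```

Epitope B has the largest $p_i$ and is therefore dominant for this pair of strains.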
\clearpage
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.45\columnwidth,clip=]{Fig1subnew_small.pdf}
\caption{
Shown is the structure of
hemagglutinin in H3N2 (accession number 4O5N).
The five epitope regions \cite{gupta} are color coded:
epitope A is red (19 amino acids), B is yellow (21 aa), C is orange (27 aa), D is blue
(41 aa), and E is green (22 aa).
Note epitope B was dominant in 2013/2014 and 2014/2015.
\label{fig0}
}
\end{center}
\end{figure}
\clearpage
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.45\columnwidth,clip=]{Fig1a_small.pdf}
\caption{Vaccine effectiveness in humans
as a function of the $p_{\rm epitope}$ antigenic distance.
Vaccine effectiveness values from epidemiological
studies of healthy adults, aged approximately 18--65, are
shown (triangles).
Also shown is a linear fit to the data (solid, $R^2 = 0.75$).
Vaccine effectiveness declines to zero at $p_{\rm epitope} = 0.19$ on average.
The error bars show the standard error of the mean of each sample point,
as discussed in the text.
\label{fig1}
}
\end{center}
\end{figure}
\clearpage
\begin{figure}[tb!]
\begin{center}
\includegraphics [width=0.90\columnwidth,clip=] {NewFig2_small.pdf}
\end{center}
\caption{Dimensional reduction of all H3N2 influenza sequences
collected from humans in 2013 and 2014 and deposited in GenBank.
Distances are normalized by the length of the HA1 sequence, 327 aa.
Dimensional reduction identifies the principal
observed substitutions, i.e.\ those
correlated with fitness of the virus, which we expect to be in
the epitope regions. A value of $p_{\rm epitope} = 0.19$
corresponds to a distance of $0.012$ here.
Sequences from Table \ref{table1} are labeled.
While the A/Texas/50/2012 sequence
was collected in 2012, substantially
similar strains were collected in 2013 and downloaded from GenBank.
\label{fig2}
}
\end{figure}
\clearpage
\begin{figure}[tb!]
\begin{center}
a) \includegraphics [width=0.45\columnwidth,clip=] {Fig3a_small.pdf}
b) \includegraphics [width=0.45\columnwidth,clip=] {Fig3b_small.pdf}
c) \includegraphics [width=0.45\columnwidth,clip=] {Fig3c_small.pdf}
\end{center}
\caption{Gaussian density estimation
of sequences in reduced two dimensions
for a) all 2013 H3N2 influenza sequences in humans,
b) those 2014 H3N2 influenza sequences in humans near
the A/Texas/50/2012 sequence,
and c) all 2014 H3N2 influenza sequences in humans.
The consensus strain of the cluster to which A/Nebraska/4/2014
belongs is A/New Mexico/11/2014.
\label{fig3}
}
\end{figure}
\clearpage
\section{Introduction}
Black objects (holes, strings, rings, etc.) in higher dimensional
spacetimes have attracted a lot of attention recently. The existence
of more than four spacetime dimensions is a natural consequence
of the consistency requirements of string theory. Models with
large extra dimensions, originally proposed to solve such
longstanding fundamental `puzzles' as the hierarchy and cosmological
constant problems, have become very popular recently. In these models mini
black holes and other black objects play a special role, serving as
natural probes of extra dimensions. This is one of the reasons why
the questions of what kinds of black objects can exist in higher
dimensions and what their properties are, are now discussed so
intensively.
Higher dimensional generalizations of the Kerr metric for rotating
black holes were obtained quite a long time ago by Myers and
Perry~\cite{mp1}. In a $D$-dimensional spacetime the MP metrics,
besides the mass $M$, contain $[(D-1)/2]$ parameters connected
with the independent components of the angular momentum of the black
hole. (Here $[A]$ means the integer part of $A$.) The event horizon
of the MP black holes has the spherical topology $S^{D-2}$. This
makes them in many aspects similar to the 4D Kerr black hole.
According to the Hawking theorem~\cite{Hawk} any stationary black
hole in a 4D spacetime obeying the dominant energy condition has the
topology of the horizon $S^2$. Black hole surface topologies distinct
from $S^2$ are possible if the dominant energy condition is violated
\cite{GeHa:82}. Moreover, a vacuum stationary black hole is uniquely
specified by its mass and angular momentum. The recent discovery of black
ring solutions~\cite{br1}-\cite{br3} demonstrated that both the
restriction on the topology of the horizon and the uniqueness property
of black holes are violated in the 5D spacetime.
In this paper we discuss the geometry of the horizon surfaces of 5D
black rings and 5D black holes with one rotation parameter. A
similar problem for the 4D rotating black holes was studied in
detail by Smarr \cite{kerr1}. We generalize his approach to the 5D
case. After a brief summary of known properties of 3D round spheres and
tori in the flat 4D space (Section~2) we consider the geometry of a 3D
space which admits 2 orthogonal commuting Killing vectors (Section~3). In
particular we calculate its Gauss curvature. In Section~4
we apply these results to the horizon surface of a 5D rotating black
hole with one rotation parameter. The embedding of this 3D surface
into the flat spacetime is considered in Section~5. The horizon
surface geometry for a 5D rotating black ring is discussed in
Section~6. This section also considers a Kaluza-Klein reduction of
the black ring metric along the direction of its rotation, which maps
this solution onto a black hole solution of the 4D Einstein equations with
dilaton and `electromagnetic' fields. The geometry and embedding of
the horizon in ${\mathbb E}^3$ for this metric is obtained. Section~7
contains the discussion of the results.
\section{Sphere $S^3$ and torus $S^2\times S^1$ in ${\mathbb E}^4$}
\subsection{Sphere $S^3$}
In this section we briefly review some known properties of a 3D
sphere and a torus in a flat 4D space.
Consider 4-dimensional Euclidean space ${\mathbb E}^4$ and denote by $X_i$
($i=1,\ldots,4$) the Cartesian coordinates in it. A 3-sphere
consists of all points equidistant from a single point $X_i=0$ in
${\mathbb E}^4$. A unit round sphere $S^3$ is a surface defined by the
equation $\sum_{i=1}^4 X_i^2=1$. Using complex coordinates
$z_1=X_1+iX_2$ and $z_2=X_3+iX_4$ one can also equivalently define
the unit 3-sphere as a subset of ${\mathbb C}^2$
\begin{equation}
S^3=\left\{(z_1,z_2)\in{\mathbb C}^2|\;|z_1|^2+|z_2|^2=1\right\} \, .
\label{eq:s31}
\end{equation}
We use the embedding of $S^3$ in ${\mathbb C}^2$ to introduce the
{\em Hopf} co-ordinates ($\theta,\phi,\psi$) as,
\begin{equation}
z_1=\sin(\theta)e^{i\phi}\;\;;\;\;z_2=\cos(\theta)e^{i\psi} \, .
\label{eq:s32}
\end{equation}
Here $\ensuremath{\theta}$ runs over the range $[0,\pi/2]$, and $\phi$ and $\psi$
can take any values between $0$ and $2\pi$. In these co-ordinates
the metric on the 3-sphere is
\begin{equation}
ds^2=d\ensuremath{\theta}^2+\sin^2\ensuremath{\theta} d\phi^2+\cos^2\ensuremath{\theta} d\psi^2 \, .
\label{eq:s33}
\end{equation}
The volume of the unit 3-sphere is $2\pi^2$. Coordinate lines of
$\phi$ and $\psi$ are circles. The lengths of these circles take the
maximum value $2\pi$ at $\theta=\pi/2$ for the $\phi$-line and at
$\theta=0$ for the $\psi$-line, respectively. These largest circles are
geodesics. Similarly, the coordinate lines of $\theta$ are
geodesics. For fixed values of $\phi=\phi_0$ and $\psi=\psi_0$
and $\theta\in [0,\pi/2]$ such a line is a segment of length $\pi/2$
connecting the fixed points of the Killing vectors $\partial_{\phi}$ and
$\partial_{\psi}$. Four such segments, $\phi=\phi_0,\phi_0+\pi$,
$\psi=\psi_0,\psi_0+\pi$, form a largest circle of length
$2\pi$.
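The quoted 3-volume $2\pi^2$ follows from integrating the volume element of the Hopf metric, $\sqrt{\det g}=\sin\theta\cos\theta$, over the coordinate ranges; a quick numerical sanity check (a sketch using only the Python standard library):

```python
import math

# Numerically integrate sqrt(det g) = sin(theta)*cos(theta) for the Hopf
# metric ds^2 = dtheta^2 + sin^2(theta) dphi^2 + cos^2(theta) dpsi^2
# over theta in [0, pi/2]; the phi and psi integrals give a factor (2*pi)^2.
def sphere3_volume(n=20000):
    h = (math.pi / 2) / n
    s = sum(math.sin((i + 0.5) * h) * math.cos((i + 0.5) * h) for i in range(n))
    return (2 * math.pi) ** 2 * s * h   # midpoint rule in theta

vol = sphere3_volume()
print(vol, 2 * math.pi ** 2)   # both approximately 19.7392
```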
The surfaces of constant $\ensuremath{\theta}$ are flat {\em tori} $T^2$. For
instance, the torus $\ensuremath{\theta}=\ensuremath{\theta}_0$ can be cut apart to give a rectangle with
horizontal edge length $2\pi\cos\ensuremath{\theta}_0$ and vertical edge length
$2\pi\sin\ensuremath{\theta}_0$. These tori are called {\em Hopf tori} and they are
pairwise linked. The fixed points of the vectors $\partial_{\phi}$ and
$\partial_{\psi}$ ($\ensuremath{\theta}=0$ for $\partial_{\phi}$
and $\ensuremath{\theta}=\pi/2$ for $\partial_{\psi}$) form a pair of linked great
circles. Every other Hopf torus passes between these circles. The
equatorial Hopf torus is the one which can be made from a square.
The others are all rectangular. Also we can easily see that the
surfaces of constant $\phi$ or constant $\psi$ are {\em half}
2-spheres or topologically disks.
\subsection{Torus ${\cal T}^3=S^2\times S^1$}
The equation of a torus ${\cal T}^3=S^2\times S^1$ in ${\mathbb E}^4$ is
\begin{equation}
X_1^2+X_2^2+(\sqrt{X_3^2+X_4^2}-a)^2=b^2\, .
\end{equation}
The surface ${\cal T}^3$ is obtained by the rotation of a sphere $S^2$
of radius $b$ around a circle $S^1$ of radius $a$ ($a>b$).
Let us define toroidal coordinates as
\begin{eqnarray}
X_1&=&{\alpha\sin \hat{\theta}\over B}\cos\phi \, ,\hspace{0.2cm} X_2={\alpha\sin
\hat{\theta}\over B}\sin\phi\, ,\nonumber\\
X_3&=&{\alpha\sinh \eta\over B}\cos\psi \, ,\hspace{0.2cm} X_4={\alpha\sinh \eta\over
B}\sin\psi \, ,
\end{eqnarray}
where $B=\cosh \eta -\cos\hat{\theta}$. The toroidal coordinates
$(\eta,\hat{\theta},\phi,\psi)$ change in the following intervals
\begin{equation}
0<\eta<\infty\, ,\hspace{0.2cm}
0\le \hat{\theta}\le \pi\, ,\hspace{0.2cm}
0\le \phi, \psi \le 2\pi \, .
\end{equation}
The flat metric in these coordinates takes the form
\begin{equation}\n{mtor}
ds^2={\alpha^2\over B^2}\left( d\eta^2+\sinh^2\eta d\psi^2
+d\hat{\theta}^2+\sin^2\hat{\theta} d\phi^2\right)\, .
\end{equation}
In these coordinates the surface of constant $\eta=\eta_0$ is a torus
${\cal T}^3$ and one has
\begin{equation}
\alpha=\sqrt{a^2-b^2}\, ,\hspace{0.5cm}
\cosh \eta_0=a/b\, .
\end{equation}
Introducing new coordinates $y=\cosh \eta$ and $x=\cos \hat{\theta}$ one
can also write the metric
\eq{mtor} in the form \cite{br3}
\begin{eqnarray}
ds^2&=&{\alpha^2\over (y-x)^2}\left[ {dy^2\over y^2-1}+(y^2-1)d\psi^2 \right.\nonumber \\
&+&\left. {dx^2\over 1-x^2}+(1-x^2)d\phi^2\right]\, .
\end{eqnarray}
The points with $\eta<\eta_0$ lie in the exterior of ${\cal T}^3$.
The induced geometry on the 3-surface $\eta=\eta_0$ is
\begin{equation}\n{cantor}
ds^2={a^2-b^2\over (a-b\cos\hat{\theta})^2} [(a^2-b^2)
d\psi^2+b^2(d\hat{\theta}^2+\sin^2\hat{\theta} d\phi^2)]\, .
\end{equation}
This metric has 2 Killing vectors, $\partial_{\phi}$ and $\partial_{\psi}$. The
first one has 2 sets of fixed points, $\hat{\theta}=0$ and $\hat{\theta}=\pi$,
which are circles $S^1$. The second Killing vector, $\partial_{\psi}$, does
not have fixed points. The 3-volume of the torus ${\cal T}^3$ is
$8\pi^2 a b^2$.
Since the sections $\psi=$const are round spheres, instead of
$\hat{\theta}$ it is convenient to use another coordinate,
$\theta\in[0,\pi]$,
\begin{equation}
\sin\theta={\sqrt{a^2-b^2} \sin\hat{\theta} \over
a-b\cos\hat{\theta}}\, .
\end{equation}
Using this coordinate one can rewrite the metric \eq{cantor} in the
form
\begin{equation}\n{mt}
ds^2=(a+b\cos\theta)^2 d\psi^2+b^2(d{\theta}^2+\sin^2{\theta} d\phi^2)\, .
\end{equation}
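The quoted 3-volume $8\pi^2 a b^2$ can be checked directly from the metric \eq{mt}, whose volume element is $\sqrt{\det g}=(a+b\cos\theta)\,b^2\sin\theta$; a numerical sketch (the values $a=2$, $b=1$ are arbitrary illustrative radii with $a>b$):

```python
import math

# Volume of T^3 from the metric (a + b cos(theta))^2 dpsi^2
#                              + b^2 (dtheta^2 + sin^2(theta) dphi^2):
# sqrt(det g) = (a + b cos(theta)) * b^2 * sin(theta),
# integrated over theta in [0, pi], with phi and psi each giving 2*pi.
def torus3_volume(a, b, n=20000):
    h = math.pi / n
    s = sum((a + b * math.cos((i + 0.5) * h)) * math.sin((i + 0.5) * h)
            for i in range(n))
    return (2 * math.pi) ** 2 * b ** 2 * s * h

a, b = 2.0, 1.0   # arbitrary illustrative radii with a > b
print(torus3_volume(a, b), 8 * math.pi ** 2 * a * b ** 2)
```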
Once again we can easily see
that the surfaces of constant $\ensuremath{\theta}$ are flat {\em tori} $T^2$ except
for $\ensuremath{\theta}=0$ or $\ensuremath{\theta}=\pi$, which are circles. The surfaces of constant
$\psi$ are 2-spheres whereas the surfaces of constant $\phi$ are
2-tori.
Sometimes it is convenient to consider special foliations of ${\cal
T}^3$ \cite{fol}. Such a foliation is a kind of ``clothing'' worn on a
manifold, cut from a stripy fabric. These stripes are called plaques
of the foliation. On each sufficiently small piece of the manifold,
these stripes give the manifold a local product structure. This
product structure does not have to be consistent outside local
patches: a stripe followed around long enough might return to a
different, nearby stripe. As an example of a foliation let us
consider the manifold ${\mathbb R}^3$. The foliations are generated by two
dimensional leaves or plaques with one co-ordinate held constant. That
is, the surfaces $z=\mbox{const}$ would be the plaques of the
foliation, and in this case there is a global product structure.
Similarly one can consider foliations of $S^2\times S^1$. Fig.~\ref{f1}
shows the transverse {\em Reeb} foliations of the cylindrical section
of $S^2\times S^1$ \cite{fol}. We can see the stacking of
spherical shaped plaques giving rise to the cylindrical section.
\begin{figure}[t]
\begin{center}
\includegraphics[height=4.5cm,width=7cm]{reeb.eps}
\caption{This picture shows the transverse Reeb foliations of the
cylindrical section of $S^2{\times}S^1$. The two dimensional
spherical shaped stripes or `plaques' are stacked, giving rise to a
cylindrical section of the 3-torus. (courtesy:
http://kyokan.ms.u-tokyo.ac.jp)} \label{f1}
\end{center}
\end{figure}
\section{Geometry of 3-dimensional space with 2 orthogonal commuting
Killing vectors}
As we shall see, the metrics of the horizon surfaces of both a 5D black
ring and a 5D black hole with one rotation parameter can be written in
the form
\begin{equation}
ds_H^2=f(\ensuremath{\zeta})d\ensuremath{\zeta}^2+g(\ensuremath{\zeta})d\phi^2+h(\ensuremath{\zeta})d\psi^2\, .
\label{eq:genmetric}
\end{equation}
Here $f$, $g$ and $h$ are non-negative functions of the co-ordinate
$\ensuremath{\zeta}$. One can use the ambiguity in the choice of the coordinate $\ensuremath{\zeta}$
to put $f=1$. For this choice $\ensuremath{\zeta}$ has the meaning of the proper
distance along the $\ensuremath{\zeta}$-coordinate line. We call such a parametrization
canonical. The co-ordinates $\phi$ and $\psi$ have a period of $2\pi$
and $\ensuremath{\zeta}\in[\ensuremath{\zeta}_{0},\ensuremath{\zeta}_{1}]$. $\partial_{\phi}$ and $\partial_{\psi}$
are two mutually orthogonal Killing vectors. If $g(\ensuremath{\zeta})$ ($h(\ensuremath{\zeta})$)
vanishes at some point then the Killing vector $\partial_{\phi}$
($\partial_{\psi}$) has a fixed point there. The metric
\eq{eq:genmetric} does not have a cone-like singularity at a fixed
point of $\partial_{\phi}$ if at this point the following condition
is satisfied
\begin{equation}\n{cone}
{1\over 2\sqrt{gf}}{dg\over d\ensuremath{\zeta}}=1\, .
\end{equation}
A condition of regularity of a fixed point of $\partial_{\psi}$ can be
obtained from \eq{cone} by changing $g$ to $h$.
By comparing the metric \eq{eq:genmetric} with the metric for the
3-sphere \eq{eq:s33} one can conclude that \eq{eq:genmetric} describes
the geometry of a distorted 3D sphere if $g$ and $h$ are positive
inside some interval $(\ensuremath{\zeta}_1,\ensuremath{\zeta}_2)$, while $g$ vanishes at one of its end
points (say $\ensuremath{\zeta}_1$) and $h$ vanishes at the other (say $\ensuremath{\zeta}_2$).
Similarly, by comparing \eq{eq:genmetric} with \eq{mt} one concludes
that if, for example, $g$ is positive in the interval $(\ensuremath{\zeta}_1,\ensuremath{\zeta}_2)$ and
vanishes at its ends, while $h$ is positive everywhere on this interval,
including its ends, the metric \eq{eq:genmetric} describes a topological
torus.
For the metric (\ref{eq:genmetric}), the non-vanishing components of the
curvature tensor are,
\begin{eqnarray}
R_{\ensuremath{\zeta}\phi\ensuremath{\zeta}\phi}&=&\frac{g'(fg)'}{4fg}-\frac{1}{2}g''\, ,
\label{eq:riemann1}\\
R_{\ensuremath{\zeta}\psi\ensuremath{\zeta}\psi}&=&\frac{h'(fh)'}{4fh}-\frac{1}{2}h''\, ,
\label{eq:riemann2}\\
R_{\phi\psi\phi\psi}&=&-\frac{g'h'}{4f} \, .
\label{eq:riemann3}
\end{eqnarray}
Here a prime ($'$) denotes differentiation with respect to the co-ordinate $\ensuremath{\zeta}$.
Denote by $e^{i}_{{a}}$ ($i, {a}=1,2,3$) 3 orthonormal
vectors and introduce the Gauss curvature tensor as follows
\begin{equation}
K_{ab}=-R_{ijkl}e^{i}_{{a}}e^{j}_{{b}}e^{k}_{{a}}e^{l}_{{b}}\,
.
\end{equation}
The component $K_{ab}$ of this tensor coincides with the curvature in
the 2D direction for the 2D plane spanned by $e^{i}_{{a}}$ and
$e^{j}_{{b}}$. One has
\begin{equation}\n{KK}
\sum_{b=1}^3 K_{ab}=R_{ij}e^{i}_{{a}}e^{j}_{{a}}\, ,\hspace{0.5cm}
\sum_{a=1}^3\sum_{b=1}^3 K_{ab}=R\, .
\end{equation}
For the metric \eq{eq:genmetric} the directions of the coordinate lines
$\ensuremath{\zeta}$, $\phi$ and $\psi$ are eigen-vectors of $K_{ab}$, and the
corresponding eigen-values are $K_a$:
\begin{equation}
K_{\psi}=\frac{R_{\ensuremath{\zeta}\phi\ensuremath{\zeta}\phi}}{fg}\, ,\hspace{0.2cm}
K_{\phi}=\frac{R_{\ensuremath{\zeta}\psi\ensuremath{\zeta}\psi}}{fh}\, ,\hspace{0.2cm}
K_{\ensuremath{\zeta}}=\frac{R_{\phi\psi\phi\psi}}{gh}\, .
\label{eq:K}
\end{equation}
These quantities are the curvatures of the 2D sections orthogonal to
$\psi$, $\phi$, and $\zeta$ lines, respectively. For brevity, we call
these 2D surfaces $\psi$-, $\phi$- and $\zeta$-sections.
For the unit sphere $S^3$, from \eq{eq:s33} one can easily see that
\begin{equation}
K_{\psi}=K_{\phi}=K_{\theta}=1.
\end{equation}
However for the torus $S^2\times S^1$, from \eq{mt} we have
\begin{equation}
K_{\psi}=\frac{1}{b^2};\;\;\; K_{\phi}=K_{\theta}=\frac{\cos\theta}{b(a+b\cos\theta)}.
\label{eq:ktori}
\end{equation}
Thus we see that $K_{\psi}$ always remains positive, while $K_{\phi}$ and $K_{\theta}$
are positive in the interval ($0\le\theta<\pi/2$). Thus the {\it equatorial plane} ($\theta=\pi/2$)
divides the torus into two halves, one in which all the sectional curvatures are positive, while
the other has two of the sectional curvatures negative.
In fact the surface $\theta=\pi/2$ is topologically $S^1\times S^1$ with the metric,
\begin{equation}
ds^2=a^2d\psi^2+b^2d\phi^2\, .
\end{equation}
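These sign statements can be illustrated numerically; a minimal Python sketch of \eq{eq:ktori}, with arbitrary illustrative radii $a=2$, $b=1$ ($a>b$):

```python
import math

# Sectional curvatures of the torus S^2 x S^1, from eq. (ktori):
def k_psi(a, b):
    return 1.0 / b ** 2                 # constant and positive

def k_phi(a, b, theta):                 # equals K_theta
    return math.cos(theta) / (b * (a + b * math.cos(theta)))

a, b = 2.0, 1.0
print(k_psi(a, b))                      # positive everywhere
print(k_phi(a, b, 0.3))                 # positive on the outer half
print(k_phi(a, b, math.pi - 0.3))       # negative on the inner half
```

As expected, $K_\phi=K_\theta$ changes sign exactly at the equatorial plane $\theta=\pi/2$, where $\cos\theta=0$.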
The equations \eq{KK} imply
\begin{equation}
R^\ensuremath{\zeta}_\ensuremath{\zeta}=K_{\phi}+K_{\psi}\, ,\hspace{0.2cm}
R^\phi_\phi=K_{\psi}+K_{\ensuremath{\zeta}}\, ,\hspace{0.2cm}
R^\psi_\psi=K_{\ensuremath{\zeta}}+K_{\phi}\, .
\label{eq:Ricci}
\end{equation}
\begin{equation}
R=2(K_{\ensuremath{\zeta}}+K_{\phi}+K_{\psi})\, .
\label{eq:Ricciscalar}
\end{equation}
From the above expressions it is clear that $K_{\psi}<0$ if $g'$ and
$\ln[fg/(g')^2]'$ have opposite signs. Similarly, $K_{\phi}<0$
implies that $h'$ and $\ln[fh/(h')^2]'$ have opposite signs. For $K_{\ensuremath{\zeta}}<0$,
$g'$ and $h'$ must have the same sign.
Let us now consider the {\em Euler characteristics} of the two dimensional sections of the horizon
surface. We denote by $\chi_a$ the Euler characteristic of the 2-surface $x^a=$const.
By using the
{\em Gauss--Bonnet} theorem we have,
\begin{equation}
2\pi\chi_a=\int\int_{{\cal M}}K_adA +\int_{\partial{\cal M}}k_gds\, .
\label{eq:Euler}
\end{equation}
Here $dA$ is the element of area on the surface and $k_g$ is the
geodesic curvature on the boundary. If the surface has no boundary
or the boundary line is a geodesic, then the last term vanishes. For
the metric (\ref{eq:genmetric}) simple calculations give
\begin{eqnarray}
2\pi\chi_{\psi}&=&-\pi\left[\frac{g'}{\sqrt{fg}}\right]_{\ensuremath{\zeta}_{0}}^{\ensuremath{\zeta}_{1}}
+\int_{\partial{\cal M}}k_gds\, ,
\label{eq:Euler1}\\
2\pi\chi_{\phi}&=&-\pi\left[\frac{h'}{\sqrt{fh}}\right]_{\ensuremath{\zeta}_{0}}^{\ensuremath{\zeta}_{1}}
+\int_{\partial{\cal M}}k_gds\, ,
\label{eq:Euler2}\\
\chi_{\ensuremath{\zeta}}&=&0\, .
\label{eq:Euler3}
\end{eqnarray}
Thus we see that the Gaussian curvatures of the sections completely
describe the topology and geometry of the 3-horizons.
\section{A 5D Rotating Black Hole with One Rotation Parameter}
\subsection{Volume and shape of the horizon surface}
For the 5 dimensional MP black hole with a single parameter of
rotation, the induced metric on the horizon is ~\cite{mp1},
\begin{equation}
ds^2=r_0^2 ds_H^2\, ,
\end{equation}
\begin{equation}
ds_H^2=f(\theta)d\theta^2+\frac{\sin^2\theta}{f(\theta)}d\phi^2
+(1-\ensuremath{\alpha}^2)\cos^2\theta d\psi^2\, .
\label{eq:mpmetric}
\end{equation}
Here $f(\theta)=(1-\ensuremath{\alpha}^2\sin^2\theta)$ and $r_0$ is a length parameter
related to the mass $M$ of the black hole as
\begin{equation}
r_0^2=\frac{8\sqrt{\pi}GM}{3}\, .
\end{equation}
The metric \eq{eq:mpmetric} is written in the {\em Hopf coordinates} and
hence the co-ordinate $\theta$ varies from $0$ to $\pi/2$. The
rotation is along the $\phi$ direction. The quantity $\ensuremath{\alpha}=|a|/r_0$
characterizes the rapidity of the rotation. It vanishes for a
non-rotating black hole and takes the maximal value $\alpha=1$ for an
extremely rotating one. In what follows we put $r_0=1$, so that
$\alpha$ coincides with the rotation parameter. Different quantities
(such as lengths and curvature components) can be easily obtained
from the corresponding dimensionless expressions by using their
scaling properties.
\begin{figure}[htb!!]
\begin{center}
\includegraphics[height=6cm,width=5cm]{curves.eps}
\caption{Lengths $l_{\psi}$ (1), $l_{\theta}$ (2) and
$l_{\phi}$ (3) as functions of the rotation parameter $\alpha$.}
\label{f0}
\end{center}
\end{figure}
For $\alpha=0$ the horizon is a round sphere $S^3$ of the unit
radius. In the presence of rotation this sphere is distorted. Its
3-volume is $V_3=2\pi^2\sqrt{1-\alpha^2}$. In the limiting case of an
extremely rotating black hole, $\alpha=1$, $V_3$ vanishes.
The coordinate lines of $\phi$ and $\psi$ on this distorted sphere
remain closed circles. The length of the circle corresponding to
the $\phi$-coordinate changes from 0 (at $\theta=0$) to its largest
value (at $\theta=\pi/2$)
\begin{equation}
l_{\phi}={2\pi\over \sqrt{1-\alpha^2}}\, .
\end{equation}
Similarly, the length of the circles connected with $\psi$-coordinate
changes from its maximal value (at $\theta=0$)
\begin{equation}
l_{\psi}=2\pi \sqrt{1-\alpha^2}\,
\end{equation}
to 0 at $\theta=\pi/2$. A line $\phi,\psi=$const on the distorted
sphere is again a geodesic; four such segments form a closed geodesic of total length
\begin{equation}
l_{\theta}=4{\bf E}(\alpha)\, .
\end{equation}
The lengths $l_{\psi}$, $l_{\theta}$ and $l_{\phi}$ as functions
of the rotation parameter $\alpha$ are shown in Fig.~\ref{f0} by
the lines 1, 2, and 3, respectively. All these lines start at the
same point $(0,2\pi)$. In the limit of the extremely rotating black
hole ($\alpha=1$) the horizon volume vanishes, $l_{\psi}=0$,
$l_{\theta}=4$, and $l_{\phi}$ grows infinitely.
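These limiting values can be checked numerically. Below is a sketch (standard library only) that evaluates $l_\theta=4{\bf E}(\alpha)$ by midpoint quadrature of the complete elliptic integral ${\bf E}(\alpha)=\int_0^{\pi/2}\sqrt{1-\alpha^2\sin^2 t}\,dt$ (note that the modulus convention matters when comparing with library implementations):

```python
import math

def ellipE(alpha, n=20000):
    # Complete elliptic integral of the second kind,
    # E(alpha) = int_0^{pi/2} sqrt(1 - alpha^2 sin^2 t) dt (modulus convention).
    h = (math.pi / 2) / n
    return h * sum(math.sqrt(1 - (alpha * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

def l_theta(alpha):
    return 4 * ellipE(alpha)

def l_phi(alpha):
    return 2 * math.pi / math.sqrt(1 - alpha ** 2)

def l_psi(alpha):
    return 2 * math.pi * math.sqrt(1 - alpha ** 2)

# All three lengths start at 2*pi for a non-rotating hole ...
print(l_theta(0.0), l_phi(0.0), l_psi(0.0))
# ... and in the extremal limit l_theta -> 4 (l_psi -> 0, l_phi diverges).
print(l_theta(1.0))
```

Note also the identity $l_\phi\, l_\psi = 4\pi^2$ for all $\alpha$, which follows directly from the two formulas above.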
\subsection{Gaussian curvature}
Calculation of the eigen-values $K_a$ of the Gaussian curvature gives
\begin{equation}
K_{\psi}=\frac{[1-\ensuremath{\alpha}^2(1+3\cos^2\theta)]}{f(\theta)^3}\, ,\hspace{0.2cm} K_{\phi}=
K_{\theta}=\frac{1}{f(\theta)^2}\, .
\label{eq:Kmp}
\end{equation}
From these relations it follows that the quantity $K_\psi$ is
negative in the vicinity of the `pole' $\theta=0$ for $1/2<\ensuremath{\alpha}<1$,
while the other two quantities, $K_\theta$ and $K_\phi$ are always
positive. This is similar to the 4D Kerr black hole where the
Gaussian curvature of the two dimensional horizon becomes negative
near the pole for $\ensuremath{\alpha}>1/2$. This is not surprising since the 2D
section $\psi=$const of the metric is isometric to the geometry of
the horizon surface of the Kerr black hole.
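The location of this sign change is easy to confirm numerically from \eq{eq:Kmp}; a minimal Python sketch (with $r_0=1$):

```python
import math

# K_psi from eq. (Kmp) for the Myers-Perry horizon (r0 = 1):
def f(alpha, theta):
    return 1 - alpha ** 2 * math.sin(theta) ** 2

def k_psi(alpha, theta):
    return (1 - alpha ** 2 * (1 + 3 * math.cos(theta) ** 2)) / f(alpha, theta) ** 3

# At the pole theta = 0, K_psi = 1 - 4*alpha^2: positive for alpha < 1/2,
# negative for alpha > 1/2.
print(k_psi(0.4, 0.0))   # 0.36
print(k_psi(0.6, 0.0))   # -0.44
```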
The Ricci tensor and Ricci scalar for the metric on the surface of
the horizon of the 5D black hole are
\begin{eqnarray}
R_\ensuremath{\theta}^\ensuremath{\theta}&=&R^\phi_\phi=\frac{2[1-\ensuremath{\alpha}^2(1+\cos^2\theta)]}{f(\theta)^3}\, ,
\label{eq:Rmp}\\
R_\psi^\psi&=&\frac{2}{f(\theta)^2}\, ,\hspace{0.2cm}
R=\frac{2[3-\ensuremath{\alpha}^2(3+\cos^2\theta)]}{f(\theta)^3}\, .
\label{eq:Ricciscalarmp}
\end{eqnarray}
The components of the Ricci tensor $R_\ensuremath{\theta}^\ensuremath{\theta}$ and $R^\phi_\phi$ become
negative near the `pole' $\theta=0$ when
$\ensuremath{\alpha}>1/\sqrt{2}$, while the Ricci scalar is negative there when
$\ensuremath{\alpha}>\sqrt{3/4}$.
It is interesting to note that the surfaces of constant $\phi$ or
constant $\psi$ are topologically disks with Euler characteristic
equal to unity. The boundaries of these disks lie at $\ensuremath{\theta}=\pi/2$. It is
easy to check from equations (\ref{eq:Euler1}) and (\ref{eq:Euler2})
that the boundary term of the {\em Gauss--Bonnet} formula vanishes on this
boundary. This shows that the boundary, which is the equatorial line on
the deformed hemisphere, is a geodesic of the induced metric. Another
important point is that, while approaching the naked singularity limit
($\ensuremath{\alpha}=1$), the Gaussian curvatures of all three sections, as
well as the (negative) Ricci scalar, blow up along the `equator'
($\ensuremath{\theta}=\pi/2$). This shows the extreme flattening of the horizon along
the equatorial plane before the horizon shrinks to zero volume.
\section{Embedding}
\subsection{Embedding of the horizon in 5D pseudo-Euclidean space}
Let us now discuss the problem of the embedding of the horizon
surface of a rotating 5D black hole into a flat space. We start by
recalling that a similar problem for the 4D (Kerr) black hole was
considered a long time ago by Smarr \cite{kerr1}. He showed that if the
rotation parameter of the Kerr metric $\alpha<1/2$, then the 2D surface
of the horizon can be globally embedded in ${\mathbb E}^3$ as a rotation
surface. For $\alpha>1/2$ such an embedding is possible if the
signature of the 3D flat space is $(-,+,+)$. In a recent paper
\cite{kerr2} a global embedding of the horizon
of a rapidly rotating black hole into ${\mathbb E}^4$ was constructed.
Since the 3D horizon surface of a rotating 5D black hole has 2
commuting orthogonal Killing vectors, it is natural to consider
its embedding into a flat space which has at least two
independent orthogonal 2-planes of rotation. In this case
the minimal number of dimensions of the embedding space
is 5. We write the metric in the form
\begin{equation}
dS^2=\varepsilon dz^2+dx_1^2+dx_2^2+dy_1^2+dy_2^2\, ,
\end{equation}
where $\varepsilon=\pm 1$. By introducing polar coordinates
$(\rho,\phi)$ and $(r,\psi)$ in the 2-planes $(x_1,x_2)$ and
$(y_1,y_2)$, respectively, we obtain
\begin{equation}\n{dS}
dS^2=\varepsilon dz^2+d\rho^2+\rho^2 d\phi^2+dr^2+r^2 d\psi^2\, .
\end{equation}
Using $\mu=\cos\theta$ as a new coordinate one can rewrite the metric on
the horizon \eq{eq:mpmetric} in the form
\begin{eqnarray}
ds^2&=&f d\mu^2+\rho^2 d\phi^2+r^2 d\psi^2\, ,\n{ds}\\
f&=&{1-\alpha^2\mu^2\over 1-\mu^2}\, ,\hspace{0.5cm}
\rho={\mu\over \sqrt{1-\alpha^2\mu^2}}\, ,\\
r&=&\sqrt{(1-\alpha^2)(1-\mu^2)}\, .\n{rr}
\end{eqnarray}
Assuming that $z$ is a function of $\mu$, and identifying $\rho$ and
$r$ in \eq{dS} with \eq{rr} one obtains the metric \eq{ds} provided the
function $z(\mu)$ obeys the equation
\begin{equation}\n{dz}
\left({dz\over d\mu}\right)^2=\varepsilon \left[ f-\left({d\rho\over
d\mu}\right)^2-\left({dr\over d\mu}\right)^2\right]\, .
\end{equation}
By substituting \eq{rr} into \eq{dz} one obtains
\begin{equation}\n{dzz}
\left({dz\over d\mu}\right)^2=\varepsilon
{\alpha^2\mu^2(3\alpha^2\mu^2-\alpha^4\mu^4-3)\over (1-\alpha^2\mu^2)^3} \, .
\end{equation}
It is easy to check that for $|\alpha|\le 1$ and $0\le\mu\le 1$ the
expression on the right-hand side of \eq{dzz} always has the sign opposite
to that of $\varepsilon$. Thus one must choose $\varepsilon=-1$, and one has
\begin{equation}
z={1\over 2\alpha}\int_{\sqrt{1-\alpha^2\mu^2}}^1 {dy\over y^{3/2}}\sqrt{1+y+y^2}\, .
\end{equation}
Let us emphasize that this result is valid for both slowly and
rapidly rotating black holes.
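This representation is straightforward to check numerically: differentiating the integral for $z$ with respect to $\mu$ and squaring must reproduce the right-hand side of \eq{dzz} with $\varepsilon=-1$. A minimal sketch (the particular values of $\alpha$ and $\mu$ are arbitrary):

```python
import math

def integrand(y):
    return math.sqrt(1.0 + y + y * y) / y ** 1.5

def z_of_mu(mu, alpha, n=2000):
    # Simpson's rule for z(mu) = (1/(2 alpha)) * int_{1 - alpha^2 mu^2}^{1} integrand(y) dy
    a, b = 1.0 - (alpha * mu) ** 2, 1.0
    h = (b - a) / n
    s = integrand(a) + integrand(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(a + i * h)
    return s * h / 3.0 / (2.0 * alpha)

def rhs(mu, alpha):
    # (dz/dmu)^2 from the ODE, taken with epsilon = -1
    am2 = (alpha * mu) ** 2
    return am2 * (3.0 - 3.0 * am2 + am2 * am2) / (1.0 - am2) ** 3

alpha, mu, eps = 0.7, 0.5, 1e-5
dz = (z_of_mu(mu + eps, alpha) - z_of_mu(mu - eps, alpha)) / (2.0 * eps)
print(dz * dz, rhs(mu, alpha))  # the two values coincide to high accuracy
```

The agreement of the two printed numbers confirms that the integral indeed solves \eq{dzz}.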
\subsection{Global embedding into ${\mathbb E}^6$}
\subsubsection{Construction of an embedding}
It is possible, however, to find a global isometric embedding of the 3-horizon of a rotating
black hole in a flat space with positive signature if the number of dimensions is 6. This
embedding is analogous to the one discussed in \cite{kerr2} for the
rapidly rotating Kerr black hole.
Let us denote by $X_i$ $(i=1,\ldots,6)$ the Cartesian co-ordinates in
${\mathbb E}^6$. We write the embedding equations in the form
\begin{equation}
X_i=\frac{\eta(\theta)}{\rho_0}n^i(\ensuremath{\tilde{\phi}})\, ,
\quad (i=1,2,3)\ ,
\label{eq:emeq1}
\end{equation}
\begin{equation}
X_4=\nu(\theta)\cos\psi\, ,\hspace{0.2cm}
X_5=\nu(\theta)\sin\psi \, ,\hspace{0.2cm}
X_6=\chi(\theta)\, .
\label{eq:emeq2}
\end{equation}
Here the functions $n^i$ obey the condition
\begin{equation}
\sum_{i=1}^3 (n^i(\ensuremath{\tilde{\phi}}))^2=1\, .
\label{eq:cond}
\end{equation}
In other words, the 3D vector $n^i$ as a function of $\ensuremath{\tilde{\phi}}$ describes
a line on the unit round sphere $S^2$. We require this line to be a
smooth closed loop (${\bf n}(0)={\bf n}(2\pi)$) without self
intersections. We denote
\begin{equation}
\rho(\ensuremath{\tilde{\phi}})= \left[\sum_{i=1}^3 (n^i_{,\ensuremath{\tilde{\phi}}})^2\right]^{1/2}\, .
\label{eq:length1}
\end{equation}
Then $dl=\rho(\ensuremath{\tilde{\phi}})d\ensuremath{\tilde{\phi}}$ is the line element along the loop.
The total length of the loop is
\begin{equation}
l_0=2\pi\rho_0=\int_0^{2\pi}\rho(\ensuremath{\tilde{\phi}})d\ensuremath{\tilde{\phi}}\, .
\label{eq:length2}
\end{equation}
We define a new co-ordinate $\ensuremath{\phi}$ as
\begin{equation}
\ensuremath{\phi}=\frac{1}{\rho_0}\int_0^{\ensuremath{\tilde{\phi}}}\rho(\ensuremath{\tilde{\phi}})d\ensuremath{\tilde{\phi}}\, .
\end{equation}
It is a monotonic function of $\ensuremath{\tilde{\phi}}$ and has the same period $2\pi$
as $\ensuremath{\tilde{\phi}}$. The induced metric for the embedded 3D surface defined by
(\ref{eq:emeq1}) and (\ref{eq:emeq2}) becomes
\begin{equation}
ds^2=\left[\frac{{\eta_{,\theta}}^2}{\rho_0^2}+{\nu_{,\theta}}^2+{\chi_{,\theta}}^2\right]d\theta^2
+\eta^2d\phi^2+\nu^2d\psi^2\, .
\label{eq:embedmetric}
\end{equation}
Now comparing equations (\ref{eq:mpmetric}) and (\ref{eq:embedmetric})
we get
\begin{eqnarray}
\eta(\theta)&=&\frac{\sin\theta}{\sqrt{1-\ensuremath{\alpha}^2\sin^2\theta}}\, ,\hspace{0.2cm}
\nu(\theta)=\sqrt{1-\ensuremath{\alpha}^2}\cos\theta\, ,
\label{eq:embfunc1}\\
\chi(\theta)&=&\int_0^\theta\cos\theta\sqrt{\left[1-\frac{1}
{\rho_0^2(1-\ensuremath{\alpha}^2\sin^2\theta)^3}\right]}d\theta \, .
\label{eq:embfunc2}
\end{eqnarray}
We choose the functions $n^i$ in such a way that
\begin{equation}\n{cond}
\rho_0^2\ge\frac{1}{(1-\ensuremath{\alpha}^2)^3}\, ,
\end{equation}
so that the function $\chi(\theta)$ remains real valued for all
$\theta$ and hence we can globally embed the horizon in ${\mathbb E}^6$.
\subsubsection{A special example}
To give an explicit example of the above described embedding let us
put
\begin{eqnarray}
n^1&=&\frac{\cos\ensuremath{\tilde{\phi}}}{F}\, ,\hspace{0.2cm}
n^2=\frac{\sin\ensuremath{\tilde{\phi}}}{F}\, ,\hspace{0.2cm}
n^3=\frac{a\sin(N\ensuremath{\tilde{\phi}})}{F}\, ,
\label{eq:example}\\
F&=&\sqrt{1+a^2\sin^2(N\ensuremath{\tilde{\phi}})}\, .
\end{eqnarray}
Here $N\ge 1$ is a positive integer. For this choice the value of
$\rho_0$ is
\begin{equation}
\rho_0=\frac{1}{2\pi}\int_0^{2\pi}\frac{\sqrt{a^2\cos^2(\ensuremath{\tilde{\phi}})(N^2-1)+a^2+1}}
{1+a^2\sin^2(\ensuremath{\tilde{\phi}})}d\ensuremath{\tilde{\phi}}\, .
\label{eq:rho0}
\end{equation}
For $N=1$ one has $\rho_0=1$.
For $N>1$ the above integral can be exactly evaluated to give
\begin{eqnarray}
\rho_0&=&\frac{2}{\pi}k_1\left[N^2\Pi(a^2k_1^2,ik_2)-(N^2-1)K(ik_2)\right]\,
,
\label{eq:rho1}\\
k_1&=&\frac{1}{\sqrt{1+a^2}}\, ,\hspace{0.5cm}
k_2=a\sqrt{\frac{N^2-1}{1+a^2}}\, .
\end{eqnarray} Here $K$ and $\Pi$ are elliptic integrals of the first and third
kind, respectively. For a fixed value of $N$, $\rho_0$ is a
monotonically growing function of $a$ (see Fig.~\ref{ff1}). For a
fixed value of $a$, the value of $\rho_0$ increases monotonically with
$N$ (see Fig.~\ref{f2}). The asymptotic form of $\rho_0$ for large
values of $a$ can be easily obtained as follows. Notice that for
large $a$ the denominator in the integral \eq{eq:rho0} is large
unless $\ensuremath{\tilde{\phi}}$ is close to $0$, $\pi$, or $2\pi$. Near these points
$\cos\ensuremath{\tilde{\phi}}$ can be approximated by 1, and the expression for $\rho_0$
takes the form
\begin{equation}
\rho_0\approx {aN\over 2\pi} \int{d\ensuremath{\tilde{\phi}} \over 1+a^2\sin^2\ensuremath{\tilde{\phi}}}=
{aN\over \sqrt{1+a^2}}\, .
\end{equation}
Using these properties of $\rho_0$, one can show that for large
enough values of $N$ and $a$ the quantity $\rho_0$ can be made
arbitrarily large, so that the condition \eq{cond} is satisfied and one has
a global embedding of the horizon surface for any $\ensuremath{\alpha}<1$.
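These properties are easy to confirm by evaluating the integral \eq{eq:rho0} numerically, e.g.\ with a midpoint rule. A minimal sketch (the particular values of $a$ and $N$ are arbitrary):

```python
import math

def rho0(a, N, n=20000):
    # midpoint-rule evaluation of the integral (eq:rho0)
    acc = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * (i + 0.5) / n
        num = math.sqrt(a * a * math.cos(phi) ** 2 * (N * N - 1) + a * a + 1.0)
        acc += num / (1.0 + a * a * math.sin(phi) ** 2)
    return acc / n

print(rho0(3.0, 1))                          # equals 1 for N = 1, independently of a
print(rho0(10.0, 2), rho0(10.0, 5))          # grows with N at fixed a
print(10.0 * 5 / math.sqrt(1.0 + 10.0**2))   # large-a asymptote a N / sqrt(1 + a^2)
```

The midpoint rule converges very rapidly here because the integrand is smooth and periodic, so the printed values reproduce both the $N=1$ result and the large-$a$ asymptote.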
\begin{figure}[tb!!]
\begin{center}
\includegraphics[height=4cm,width=4cm]{plot_1.eps}
\caption{$\rho_0$ as a function of $a$ for values of $N$ from 2
(line 2) to 7 (line 7).}
\label{ff1}
\end{center}
\end{figure}
\begin{figure}[tb!!]
\begin{center}
\includegraphics[height=4cm,width=5cm]{plot_2.eps}
\caption{$\rho_0$ as a function of $N$ for $a=10$.}
\label{f2}
\end{center}
\end{figure}
\section{A 5D Rotating Black Ring}
\subsection{Horizon surface of a black ring}
Now we consider properties of the horizon surface of a stationary black
ring in an asymptotically flat 5D spacetime~\cite{br1}. In this paper
we consider only the {\em balanced} black ring, in the sense that
there is no angle deficit or
excess causing a conical singularity. The ring rotates along
the $S^1$, and this rotation balances the gravitational self-attraction.
The metric of the
rotating black ring is~\cite{br2,note}
\begin{eqnarray}
ds^2 &= -[F(x)/F(y)]\left[dt + r_0{
\displaystyle\frac{\sqrt{2}\nu}{\sqrt{1+\nu^2}}}(1+y)
d\tilde{\psi}\right]^2 \nonumber\\
&+ {\displaystyle \frac{r_0^2}{(x-y)^2}}\left[-F(x)\left(G(y)d\tilde{\psi}^2
+[F(y)/G(y)]dy^2\right)\right.
\nonumber\\
&+F(y)^2 \left. \left([dx^2/G(x)]+[G(x)/F(x)]d\phi^2\right)\right]\, ,
\label{eq:blackring}
\end{eqnarray}
where
\begin{equation}
F(\ensuremath{\zeta})=1-\frac{2\nu}{1+\nu^2}\ensuremath{\zeta}\, ,\hspace{0.2cm}
G(\ensuremath{\zeta})=(1-\ensuremath{\zeta}^2)(1-\nu\ensuremath{\zeta})\, .
\label{eq:fg}
\end{equation}
The quantity $r_0$ is the radius scale of the ring. The parameter
$\nu\in [0,1]$ determines the shape of the ring. The
coordinate $x$ changes in the interval $-1\le x\le 1$, while
$y^{-1}\in [-1,(2\nu)/(1+\nu^2)]$. The black ring is rotating in the
$\tilde{\psi}$-direction. The positive-`$y$' region is the ergosphere
of the rotating black ring, while the negative-`$y$' region lies
outside the ergosphere, with the spatial infinity at $x=y=-1$.
The metric \eq{eq:blackring} has a co-ordinate singularity at
$y=1/\nu$. However after the transformation
\begin{eqnarray}
d\psi&=&d\tilde{\psi}+J(y)dy\, , \nonumber \\
dv&=&dt-r_0\frac{\sqrt{2}\nu}{\sqrt{1+\nu^2}}(1+y)J(y)dy\, ,
\label{eq:tran}
\end{eqnarray}
with $J(y)=\sqrt{-F(y)}/G(y)$, the metric is regular at $y=1/\nu$.
In these regular coordinates one can show that the surface $y=1/\nu$
is the horizon.
The induced metric on
the horizon of a rotating black ring is given by
\begin{eqnarray}
ds^2&=&{r_0^2} ds_H^2\, ,\\
ds_H^2&=&\frac{p}{k(\ensuremath{\theta})}\left[\frac{d\ensuremath{\theta}^2}{k(\ensuremath{\theta})^2}+\frac{\sin^2\ensuremath{\theta} d\phi^2}
{l(\ensuremath{\theta})}\right]+ql(\ensuremath{\theta})d\psi^2\, ,
\label{eq:brmetric}\\
k(\ensuremath{\theta})&=&1+\nu\cos\ensuremath{\theta}\, ,\hspace{0.5cm} l(\ensuremath{\theta})=1+\nu^2+2\nu\cos\ensuremath{\theta} \, ,\\
p&=&{\nu^2(1-\nu^2)^2\over 1+\nu^2}\, ,\hspace{0.5cm} q=\frac{2(1+\nu)}{(1-\nu)(1+\nu^2)}\, .
\label{eq:AB}
\end{eqnarray}
In this metric the co-ordinates
$\phi$ and $\psi$ have a period of $2\pi$ and $\ensuremath{\theta}\in[0,\pi]$.
$\ensuremath{\theta}=0$ is the axis pointing outwards ({\em i.e.,} towards increasing $S^1$ radius),
while $\ensuremath{\theta}=\pi$ points inwards.
The volume of the horizon surface for the metric \eq{eq:brmetric} is
\begin{equation}
V=8\sqrt{2}\pi^2 \nu^2\sqrt{1-\nu}\left[\frac{\sqrt{1+\nu}}{\sqrt{1+\nu^2}}\right]^3\, .
\end{equation}
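This closed form can be cross-checked by integrating $\sqrt{\det g}$ for the metric \eq{eq:brmetric} directly; for that metric $\sqrt{\det g}=p\sqrt{q}\,\sin\ensuremath{\theta}/k(\ensuremath{\theta})^2$, with $p$ and $q$ written out explicitly in the sketch below (the value $\nu=2/3$ is arbitrary):

```python
import math

def volume_closed(nu):
    # the closed-form horizon volume quoted above
    return 8.0 * math.sqrt(2.0) * math.pi**2 * nu**2 * math.sqrt(1.0 - nu) \
        * (math.sqrt(1.0 + nu) / math.sqrt(1.0 + nu**2)) ** 3

def volume_numeric(nu, n=20000):
    # sqrt(det g) = p sqrt(q) sin(theta) / k(theta)^2 for the horizon metric;
    # integrate over theta in [0, pi] (midpoint rule) and over the two 2*pi angles
    p = nu**2 * (1.0 - nu**2) ** 2 / (1.0 + nu**2)
    q = 2.0 * (1.0 + nu) / ((1.0 - nu) * (1.0 + nu**2))
    acc = 0.0
    for i in range(n):
        th = math.pi * (i + 0.5) / n
        k = 1.0 + nu * math.cos(th)
        acc += math.sin(th) / k**2
    return (2.0 * math.pi) ** 2 * p * math.sqrt(q) * acc * math.pi / n

nu = 2.0 / 3.0
print(volume_numeric(nu), volume_closed(nu))  # both approximately 35.5
```

The two printed values agree to high accuracy, confirming the quoted expression for $V$.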
\subsection{Gaussian curvature}
The metric \eq{eq:brmetric} is of the form \eq{eq:genmetric}, so that one can
apply to it the results of Section~III. For example,
its Gaussian curvature has the following eigenvalues
($i=\psi,\phi,\theta$)
\begin{equation}
K_i=\frac{k(\ensuremath{\theta})^2 F_i(\cos\ensuremath{\theta})}{2\nu(1-\nu^2)^2(1+\nu^2)l(\ensuremath{\theta})^2}\,,
\end{equation}
where the functions $F_i=F_i(\ensuremath{\zeta})$ are defined as follows
\begin{eqnarray}
F_{\psi}&=&\nu(3+\nu^2)\ensuremath{\zeta}^2+2(\nu^4+\nu^2+2)\ensuremath{\zeta}+2\nu^{-1}-\nu+3\nu^3\, ,\nonumber
\label{eq:F1}\\
F_{\phi}&=&8\nu^2\ensuremath{\zeta}^3+\nu(5\nu^2+7)\ensuremath{\zeta}^2+2(1-\nu^2)\ensuremath{\zeta}-\nu(3\nu^2+1)\, ,\nonumber
\label{eq:F2}\\
F_{\theta}&=&\nu^3\ensuremath{\zeta}^2+(6\nu^2+3\nu+2)\ensuremath{\zeta}+\nu(3+\nu^2)\, .
\label{eq:F3}
\end{eqnarray}
From the above equations it is clear that for any value of $\nu$ and
$\theta$ the Gaussian curvature of the $\psi$-sections ({\em i.e.,}
$K_\psi$) always remains positive. This is similar to the flat torus case
described by \eq{eq:ktori}. The signs of the Gaussian curvatures
of the other two sections, $K_\phi$ and $K_\theta$, depend on
the values of $\nu$ and $\theta$. For example, for $\ensuremath{\theta}=0$ both of
these curvatures are positive for all values of $\nu$. As
$\theta$ increases, $K_\phi$ becomes negative for higher values of
$\nu$; at $\ensuremath{\theta}=\pi/2$ it becomes negative for all
$\nu$ and remains negative up to $\ensuremath{\theta}=\pi$. On the other hand, $K_\theta$
remains positive and grows with $\nu$
until $\ensuremath{\theta}=\pi/2$; it then becomes negative for higher values
of $\nu$ and ultimately becomes negative for all $\nu$ at $\ensuremath{\theta}=\pi$.
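Since the prefactor multiplying $F_i$ in the expression for $K_i$ is positive for $0<\nu<1$, these sign claims reduce to sign statements about the polynomials $F_i$. A small numerical spot-check over a grid of $\nu$ and $\theta$ (the grid resolution is arbitrary):

```python
import math

def F_psi(z, v):
    return v * (3 + v**2) * z**2 + 2 * (v**4 + v**2 + 2) * z + 2 / v - v + 3 * v**3

def F_phi(z, v):
    return 8 * v**2 * z**3 + v * (5 * v**2 + 7) * z**2 + 2 * (1 - v**2) * z - v * (3 * v**2 + 1)

def F_theta(z, v):
    return v**3 * z**2 + (6 * v**2 + 3 * v + 2) * z + v * (3 + v**2)

nus = [i / 100.0 for i in range(1, 100)]                 # 0 < nu < 1
zs = [math.cos(t * math.pi / 50.0) for t in range(51)]   # zeta = cos(theta) over [0, pi]
assert all(F_psi(z, v) > 0 for v in nus for z in zs)     # K_psi > 0 everywhere
assert all(F_phi(1.0, v) > 0 and F_theta(1.0, v) > 0 for v in nus)  # both > 0 at theta = 0
assert all(F_phi(0.0, v) < 0 for v in nus)               # K_phi < 0 at theta = pi/2
assert all(F_theta(-1.0, v) < 0 for v in nus)            # K_theta < 0 at theta = pi
print("sign pattern consistent with the discussion above")
```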
\begin{figure}[tb!!]
\begin{center}
\includegraphics[width=5cm]{invtheta.eps}
\caption{$\lambda_{\theta 1}$ and $\lambda_{\theta 2}$ as a function of $\nu$.}
\label{f8}
\end{center}
\end{figure}
\begin{figure}[tb!!]
\begin{center}
\includegraphics[width=5cm,height=4cm]{invphi.eps}
\caption{$\lambda_{\phi 1}$ and $\lambda_{\phi 2}$ as a function of $\nu$.}
\label{f9}
\end{center}
\end{figure}
Let us emphasize that, because of the distortions due to the rotation,
$K_\phi$ and $K_\theta$ do not become negative at the same value of $\theta$,
as was the case for the flat torus. To get an invariant measure of the distortion produced
by the rotation, let us define two invariant lengths in the following way.
Let $\theta=\theta_i$ ($i=\theta,\phi$) be the point where $F_i(\cos\theta)$
vanishes. Then the two invariant lengths are
\begin{equation}
\lambda_{i1}=2\sqrt{p}\int_0^{\theta_i}\frac{d\theta}{\sqrt{k(\theta)}^3},\;\;
\lambda_{i2}=2\sqrt{p}\int_{\theta_i}^\pi\frac{d\theta}{\sqrt{k(\theta)}^3}.
\label{eq:invlength}
\end{equation}
It is easy to check from \eq{mt} that in the case of flat tori we have
$\lambda_{i1}=\lambda_{i2}$. However, for rotating black rings they are
different functions of the parameter $\nu$. Figures~\ref{f8} and \ref{f9}
show the invariant lengths
for $i=\theta$ and $i=\phi$ as functions of $\nu$, respectively. We see that for $i=\theta$,
$\lambda_{i2}<\lambda_{i1}$ for small
$\nu$. However, as we increase $\nu$ the difference between them
decreases, and ultimately at
$\nu\approx0.615$, $\lambda_{i2}$ overtakes $\lambda_{i1}$.
For $i=\phi$, $\lambda_{i2}$ is
always greater than $\lambda_{i1}$.
It is evident that both the $\psi$- and $\phi$-sections are closed and do
not have a boundary. Calculating the {\em Euler numbers} for these
surfaces we get
\begin{equation}
\chi_{\psi}=2\, ,\hspace{0.5cm} \chi_\phi=0\, .
\end{equation}
This shows that the $\psi$-section is a deformed 2-sphere with
positive Gaussian curvature. Its rotation in the $\psi$ direction
generates the horizon surface of the rotating black ring.
\subsection{Kaluza-Klein Reduction of Rotating Black Ring}
The absence of the cone-like singularities in the black ring solution
\eq{eq:blackring} is a consequence of the exact balance between the
gravitational attraction and the centrifugal forces generated by the
ring's rotation. We discuss the effects connected with the ring
rotation from a slightly different point of view. Let us write the
metric \eq{eq:blackring} in the following Kaluza-Klein form (see e.g.
\cite{kk3})
\begin{equation}
ds_5^2=\Phi^{-\frac{1}{3}}\left[h_{\alpha\beta}dx^\alpha dx^\beta
+\Phi (A_t dt +d\phi)^2\right]\, .
\label{eq:kk5}
\end{equation}
The 4D reduced {\it Pauli} metric in this space is ($a,b=0,\ldots,3$)
\begin{equation}
ds_4^2= h_{ab}dx^a dx^b
=\Phi^{\frac{1}{3}}\left[g_{ab}dx^a dx^b
-\Phi A_t^2dt^2\right]\, .
\label{eq:kk4}
\end{equation}
Here $g_{ab}$ is the four dimensional metric on the $\ensuremath{\tilde{\psi}}$-section
of the 5D black ring. By the comparison of \eq{eq:kk5} and
\eq{eq:kk4} one has
\begin{eqnarray}
\Phi^{\frac{2}{3}}&=&\xi_{\ensuremath{\tilde{\psi}}}^2=-\frac{F(x)}{F(y)}L(x,y)\, ,\\
A_t&=&\frac{(\xi_t.\xi_{\ensuremath{\tilde{\psi}}})}{\xi_{\ensuremath{\tilde{\psi}}}^2}
=\frac{\sqrt{2}\nu}{\sqrt{1+\nu^2}}\frac{(1+y)}{L(x,y)}\, ,\\
L(x,y)&=&\left[\frac{2\nu^2}{1+\nu^2}(1+y)^2+\frac{F(y)G(y)}{(x-y)^2}\right]\,,
\label{eq:phi}
\end{eqnarray}
where $\xi_t=\partial_t$, $\xi_{{\ensuremath{\tilde{\psi}}}}=\partial_{\ensuremath{\tilde{\psi}}}$ and
$\xi_{\phi}=\partial_{\phi}$ are the Killing vectors of \eq{eq:kk5}.
The quantities $\ln(\Phi)$ and $A_t$ can be interpreted as a dilaton
field and an `electromagnetic potential' in the 4D spacetime.
The horizon for the 4D metric \eq{eq:kk4} is defined by the condition
\begin{equation}
h_{tt}=\xi_t^2-({\xi_{\ensuremath{\tilde{\psi}}}^2})^{-1}(\xi_t.\xi_{\ensuremath{\tilde{\psi}}})^2=0\, .
\end{equation}
It is easy to show that this condition is equivalent to the condition
defining the horizon of the 5D metric. Thus both horizons are
located at $y=1/\nu$.
To summarize, the 4D metric \eq{eq:kk4} obtained after the reduction
describes a static 4D black hole in the presence of an external
dilaton and `electromagnetic' field. The dilaton field $\ln \Phi$ (as
well as the metric \eq{eq:kk4}) has a singularity at the points where
$\xi_{{\ensuremath{\tilde{\psi}}}}^2$ either vanishes (at the axis of symmetry,
$x=1$, $y=-1$) or grows infinitely (at the spatial
infinity, $x=y=-1$) (see Fig.~\ref{f5}). Outside these regions the dilaton field is
regular everywhere, including the horizon, where it takes the value
\begin{equation}
\Phi_{H}=\left[\frac{2}{1+\nu^2}
\left(\frac{1+\nu}{1-\nu}\right)l(\theta)\right]^{\frac{3}{2}}\, .
\end{equation}
\begin{figure}[tb!!]
\begin{center}
\includegraphics[width=5cm, height=3cm]{phi2.eps}
\caption{$\ln(\Phi)$ as a function of $x$ and $z=1/y$ for
$\nu=2/3$.}
\label{f5}
\end{center}
\end{figure}
The `electromagnetic field strength', which has the non-zero components
\begin{equation}
F_{tx}=-A_{t,x}\, ,\hspace{0.5cm} F_{ty}=-A_{t,y}\, ,
\end{equation}
is regular everywhere and vanishes at spatial infinity. The
invariant $F^2=F_{ab}F^{ab}$ is well defined throughout the
spacetime but drops (towards negative infinity) at the axis of symmetry.
Figure~\ref{f6} illustrates the behavior of the invariant $F^2$ for
$\nu=2/3$.
\begin{figure}[tb!!]
\begin{center}
\includegraphics[width=5cm, height=3cm]{fsquire.eps}
\caption{The invariant $F^2$ as a function of $x$ and $z=1/y$ for $\nu=2/3$. It can be seen
that it is well defined everywhere but drops
(towards negative infinity) at the axis of symmetry, $x=1, y=-1$.}
\label{f6}
\end{center}
\end{figure}
The metric $ds_1^2$ on the 2D horizon surface of the reduced metric \eq{eq:kk4} is
conformal to the metric $ds_0^2$ of the 2D section $\ensuremath{\tilde{\psi}}=$const of the black
ring horizon \eq{eq:brmetric}. These metrics are of the form
($k=k(\theta)$, $l=l(\theta)$)
\begin{equation}
ds_{\epsilon}^2=\Phi_H^{\epsilon/3}\frac{p}{k}\left[\frac{d\ensuremath{\theta}^2}{k^2}
+\frac{\sin^2\ensuremath{\theta} d\phi^2}{l}\right]\, ,\hspace{0.5cm} \epsilon=0,1\, .
\end{equation}
Both metrics $ds_{\epsilon}^2$
can be embedded in ${\mathbb E}^3$ as rotation surfaces. The embedding
equations are
\begin{equation}
X_1=m\beta\cos\phi;\;
X_2=m\beta\sin\phi;\;
X_3=m\gamma\, ,
\label{eq:brembed1}
\end{equation}
where,
\begin{eqnarray}
\beta&=&k^{-1/2}l^{-1/2+\epsilon/4}\sin\theta \, ,\nonumber\\
\gamma&=&\int_0^\theta (k^{-3}l^{\epsilon/2}-\beta_{,\theta}^2)^{1/2}\,d\theta,\nonumber\\
m&=&\sqrt{p}\left[\frac{2}{1+\nu^2}({1+\nu\over 1-\nu})\right]^{\epsilon}\, .
\label{eq:brembed4}
\end{eqnarray}
\begin{figure}[tb!!]
\begin{center}
\includegraphics[width=3cm]{brhor.eps}\hspace{0.5cm}
\includegraphics[width=2.5cm]{kkhor.eps}
\caption{The embedding diagrams for the metrics $ds_0^2$ (to the left)
and $ds_1^2$ (to the right) for $\nu=2/3$.}
\label{f7}
\end{center}
\end{figure}
The embedding diagrams for the metrics $ds_0^2$ and $ds_1^2$ are
shown in Fig.~\ref{f7} by the left and right plots, respectively. Both
rotation surfaces are deformed spheres. The surface with the
geometry $ds_1^2$ is more flattened at the poles.
\section{Discussion}
In this paper we discussed and analyzed the surface geometry of
five-dimensional black holes and black rings with one rotation
parameter. We found that the sectional Gaussian curvature and the
Ricci scalar of the horizon surface of the 5D rotating black hole are
negative if the rotation parameter is greater than some critical
value, similarly to the case of the 4D Kerr black hole. However, there
is an important difference between the embeddings of the horizon
surfaces of 5D and 4D black holes in a flat space. As was shown
in~\cite{kerr1}, a rotating 2-horizon can be embedded as a surface of
rotation in a 3-dimensional Euclidean space only when the rotation
parameter is less than the critical value. For `super-critical'
rotation the global embedding is possible either in a 3D flat space
with the metric signature $(-,+,+)$ \cite{kerr1} or in ${\mathbb E}^4$
with positive signature \cite{kerr2}. For the 5D black hole, for
any value of its rotation parameter, the horizon surface cannot be
embedded in 5D Euclidean space as a surface of rotation. Such an
embedding requires that the signature of the flat 5D space be
$(-,+,+,+,+)$. However, we found a global embedding of this surface in
6D Euclidean space.
We calculated the surface invariants for the rotating black ring and
analyzed the effect of rotation on these invariants. Finally, we
considered the Kaluza-Klein reduction of the rotating black ring, which
maps its metric onto the metric of a 4D black hole in the presence of
external dilaton and `electromagnetic' fields. Under this map, the
horizon of the 5D black ring transforms into the horizon of the 4D black
hole. The `reduced' black hole is static and axisymmetric. Distorted
black holes in the Einstein-Maxwell-dilaton gravity were discussed in
\cite{Ya}. That paper generalizes the well known results of
\cite{GH,Ch} for vacuum distorted black holes. It would be
interesting to compare the `reduced' distorted black hole discussed
in this paper with the solutions presented in \cite{Ya}.
\noindent
\section*{Acknowledgments}
\noindent
The research was supported in part by the Natural
Sciences and Engineering Research Council of Canada and by the Killam
Trust.
Learning with ambiguity has become one of the most prevalent research topics. The traditional way to solve machine learning problems is based on single-label learning (SLL) and multi-label learning (MLL) \cite{tsoumakas2007multiMLL,xu2016multi}. In the SLL framework, an instance is always assigned one single label, whereas in MLL an instance may be associated with several labels. The existing learning paradigms of SLL and MLL are mostly based on so-called \textit{problem transformation}. However, neither SLL nor MLL addresses the question ``to what degree can a label describe its corresponding instance,'' i.e., the fact that the labels have different importance in the description of an instance. It is more appropriate for the importance of the candidate labels to differ rather than be exactly equal. Taking the above problem into account, a novel learning paradigm called label distribution learning (LDL) \cite{geng2013LDL} was proposed. Compared with SLL and MLL, LDL labels an instance with a real-valued vector that consists of the description degree of every possible label for the current instance. A detailed comparison is visualized in Fig.~\ref{fig:Comparison among SLL, MLL and LDL}. Actually, LDL can be regarded as a more comprehensive form of MLL and SLL. However, the tagged training sets required by LDL are extremely scarce owing to the heavy burden of manual annotation. Considering that it is difficult to directly obtain annotated label distributions, a process called label enhancement (LE) \cite{xu2018LE} has also been proposed to recover label distributions from logical labels. With an LE algorithm, the logical labels $l \in\{0,1\}^{c}$ of a conventional MLL dataset can be recovered into label distribution vectors by mining the topological information of the input space and the label correlations \cite{he2019joint}.
\begin{figure}[]
\centering
\includegraphics[width=1\columnwidth]{SLL_VS_MLL_VS_LDL.png}
\caption{Visualized comparison among SLL, MLL and LDL}
\label{fig:Comparison among SLL, MLL and LDL}
\end{figure}
Many LDL and LE algorithms have been proposed in recent years, progressively boosting the performance of specific tasks. For instance, LDL is widely applied to facial age estimation. Geng et al. \cite{geng2014facialadaptive} proposed a specialized LDL framework, namely IIS-LDL, that combines the maximum entropy model \cite{berger1996maximum} with IIS optimization. This approach not only achieves better performance than other traditional machine learning algorithms but has also become the foundation of the LDL framework. In other work \cite{fan2017labelreNetFacial}, taking both facial geometric and convolutional features into account remarkably improved efficiency and accuracy. As mentioned above, the difficulty of acquiring labeled datasets restricts the development of LDL algorithms. After presenting several LE algorithms, Xu et al. \cite{XuLv-14} adapted LDL to partial label learning (PLL) with label distributions recovered via LE. Although these methods achieve significant performance, one problem yet to be solved is that they suffer from discriminative information loss, which is caused by the dimensional gap between the input data matrix and the output one. Importantly, it is entirely possible that these existing methods miss essential information that should be inherited from the original input space, thereby degrading performance.
As discussed above, the critical point of previous works on LDL and LE is to establish a suitable loss function to fit label distribution data. In previous works, only a unidirectional projection $\mathcal{X} \mapsto \mathcal{Y}$ between the input and output spaces is learned. In this paper, we present a bi-directional loss function with a comprehensive reconstruction constraint. Such a function can be applied in both LDL and LE to maintain the latent information. Inspired by the auto-encoder paradigm \cite{kodirov2017SAE,cheng2019multi}, our proposed method builds the reconstruction projection $\mathcal{Y} \mapsto \mathcal{X}$ alongside the mapping projection to preserve the otherwise lost information. More precisely, optimizing the original loss is the \textit{mapping} step, while minimizing the \textit{reconstruction} error is the reconstruction step. In contrast to previous loss functions, the proposed loss function aims to reconstruct the input data from the output data. Therefore, it is expected to obtain more accurate results than other related loss functions for both LE and LDL problems. Extensive experiments on several well-known datasets demonstrate that the proposed loss function achieves superior performance.
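To make the idea concrete, the bi-directional principle can be sketched as a two-term objective: a mapping error $\|XW-D\|_F^2$ plus a reconstruction error $\|XWV-X\|_F^2$, where $V$ plays the role of the decoder. The toy example below (all matrices are made up) only illustrates the structure of such a loss, not the exact BD-LE/BD-LDL objectives derived in Section 3:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def frob2(A, B):
    # squared Frobenius distance between two equally-sized matrices
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def bidirectional_loss(X, D, W, V, lam=1.0):
    # mapping step:        X W     should approximate the label distributions D
    # reconstruction step: (X W) V should approximate the input X
    P = matmul(X, W)
    return frob2(P, D) + lam * frob2(matmul(P, V), X)

# toy data: each row of W sums to 1, so the rows of X W remain distributions
X = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
W = [[0.5, 0.5], [0.25, 0.75]]
V = [[3.0, -2.0], [-1.0, 2.0]]  # inverse of W: reconstruction is exact
D = matmul(X, W)
print(bidirectional_loss(X, D, W, V))  # 0.0: no information is lost
print(bidirectional_loss(X, D, W, [[1.0, 0.0], [0.0, 1.0]]))  # > 0 for a poor decoder
```

When the decoder $V$ inverts the encoder $W$, both error terms vanish; any decoder that cannot recover $X$ is penalised, which is exactly the reconstruction constraint advocated above.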
The main contributions of this work are as follows:
\begin{enumerate}[1)]
\item the reconstruction projection from label space to instance space is considered for the first time in the LDL and LE paradigms;
\item a bi-directional loss function that combines mapping error and reconstruction error is proposed;
\item the proposed method can be used not only in LDL but also for LE.
\end{enumerate}
We organize the rest of this paper as follows. First, related work on LDL and LE methods is reviewed in Section 2. Second, the formulations of LE and LDL and the proposed methods, i.e., BD-LE and BD-LDL, are introduced in Section 3. After that, the results of the comparison and ablation experiments, together with a discussion of the influence of the parameters, are presented in Section 4. Finally, conclusions and future work are summarized in Section 5.
\section{Related work}
In this section, we briefly review related work on LDL and LE methods.
\subsection{Label Distribution Learning}
The proposed LDL methods mainly focus on three aspects, namely the model assumption, the loss function, and the optimization algorithm. The maximum entropy model \cite{berger1996maximum} is widely used to represent the label distribution in the LDL paradigm \cite{xu2019latent,ren2019label}, since it naturally agrees with the character of the description degrees in the LDL model. However, such an exponential model is sometimes not expressive enough to accommodate a complex distribution. To overcome this issue, Geng et al. \cite{xing2016logisticBoosting} proposed an LDL family based on a boosting algorithm to extend the traditional LDL model. Inspired by the M-SVR algorithm, LDSVR \cite{geng2015preMovieOpnion} was designed for the movie
opinion prediction task. Furthermore, CPNN \cite{geng2013facialestimation} combines a neural network with the LDL paradigm to improve the effectiveness of facial age estimation. Moreover, recent work \cite{ren2019specific,xu2017incompleteLDL} has shown that a linear model is also able to achieve relatively strong representation ability and satisfying results. As reviewed above, most existing methods build the mapping from the feature space to the label space in a unidirectional way, so it is appropriate to take a bi-directional constraint into consideration.
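For reference, the maximum entropy model mentioned above expresses the description degree of label $y_j$ for an instance $x$ as a softmax over linear scores, $d_j=\exp(\theta_j\cdot x)/\sum_k \exp(\theta_k\cdot x)$. A minimal sketch (the numbers are arbitrary):

```python
import math

def max_entropy_distribution(x, theta):
    # d_j = exp(theta_j . x) / sum_k exp(theta_k . x), computed stably
    scores = [sum(t * v for t, v in zip(row, x)) for row in theta]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]

x = [0.2, -0.5, 1.0]
theta = [[0.1, 0.3, 0.8], [0.5, -0.2, 0.1], [-0.4, 0.0, 0.3]]
d = max_entropy_distribution(x, theta)
print(d, sum(d))  # non-negative description degrees summing to 1
```

By construction the output is a valid label distribution, which is why this parameterization is such a natural fit for LDL.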
Concerning the loss function, LDL aims at learning a model that predicts, for unseen instances, distributions similar to the true ones. A criterion measuring the distance between two distributions, such as the Kullback--Leibler (K-L) divergence, is usually chosen as the loss function \cite{jia2018labelCorrelation,geng2010facialestimation}. Owing to the asymmetry of the K-L divergence, Jeffreys' divergence is used in \cite{zhou2015emotion} to build an LDL model for facial emotion recognition. For the sake of easier computation, it is also reasonable to adopt the Euclidean distance in a variety of tasks, e.g., facial emotion recognition \cite{jia2019facialEDLLRL}.
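The asymmetry just mentioned is easy to see on a toy pair of label distributions; the sketch below contrasts K-L with the (symmetric) Euclidean distance and shows how Jeffreys' divergence symmetrises K-L (the distributions are arbitrary):

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) between two label distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def euclid(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
print(kl(p, q), kl(q, p))          # the two directions differ (asymmetry)
print(euclid(p, q), euclid(q, p))  # identical (symmetry)
print(kl(p, q) + kl(q, p))         # Jeffreys' divergence: a symmetrised K-L
```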
Regarding the optimization method, SA-IIS \cite{geng2016LDL} utilizes the improved iterative scaling (IIS) method, whose performance is often worse \cite{Maxinum_entropy} than that of other optimization methods. Fortunately, by leveraging the L-BFGS \cite{nocedal2006numericalOPtimization} optimization method, a balance between efficiency and accuracy can be maintained, as in SA-BFGS \cite{geng2016LDL} and EDL \cite{zhou2015emotion}.
As the complexity of the proposed models grows, more than one set of parameters needs to be optimized. Therefore, it is more appropriate to introduce the alternating direction method of multipliers (ADMM) \cite{boyd2011distributedADMM} when the loss function incorporates additional inequality and equality constraints.
In addition, exploiting the correlations among labels or samples can further boost the performance of an LDL model. Jia et al. \cite{jia2018labelCorrelation} proposed LDLLC to take the global label correlation into account by introducing the Pearson correlation between labels. It is pointed out in LDL-SCL \cite{zheng2018labelCorrelationSample} and EDL-LRL \cite{jia2019facialEDLLRL} that some correlations among labels (or samples) only exist within a subset of instances; this is the so-called local correlation. Intuitively, the instances in the same group after clustering share the same local correlation.
Moreover, it is common for labeled data to be incomplete and contaminated \cite{ma2017multi}. For the former condition, Xu et al. \cite{xu2017incompleteLDL} put forward IncomLDL-a and IncomLDL-p on the assumption that the recovered complete label distribution matrix is low-rank. Proximal gradient descent (PGD) and ADMM are used for the optimization of the two methods, respectively. The convergence rate of the former is $O\left(1 / T^{2}\right)$, while that of the latter is $O\left(1 / T\right)$, although it achieves better accuracy. Jia et al. \cite{jia2019weakly} proposed WSLDL-MCSC, which is based on matrix completion and a transductive exploration of the relevance among samples, for weakly-supervised data.
\subsection{Label Enhancement Learning}
To the best of our knowledge, only a few studies have focused on label enhancement \cite{xu2018LE}. Five effective strategies have been devised so far, four of which are adapted from existing algorithms. As discussed in \cite{geng2016LDL}, the concept of \textit{membership} used in fuzzy clustering \cite{jiang2006fuzzySVM} is similar to a label distribution: although the two carry different semantics, both are in numerical form. Thus, FCM \cite{el2006studyKNN} extends the membership calculation used in fuzzy C-means clustering \cite{melin2005hybrid} to recover the label distribution. The LE algorithm based on the kernel
method (KM) \cite{jiang2006fuzzySVM} utilizes a kernel function to project the instances from the original space into a high-dimensional one. For every candidate label, the instances are separated into two parts according to whether the corresponding logical label is 1 or not; the description degrees can then be calculated from the distances between the instances and the centers of the groups. The label propagation technique \cite{wang2007labelpropagation} is used in the LP method to update the label distribution matrix iteratively over a fully-connected graph. Since messages between samples are shared and passed along the connection graph, the logical labels can be enhanced into distribution-level labels. The LE method adapted from manifold learning (ML) \cite{hou2016manifold} takes the topological consistency between the feature space and the label space into consideration to obtain the recovered label distribution. The last, novel strategy, called GLLE \cite{xu2019labelTKDE}, leverages the topological information of the input space and the correlations among labels; the local label correlation is captured via clustering \cite{zhou2012multi}.
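As a concrete, heavily simplified illustration of what label enhancement does, the toy function below turns logical labels into description degrees using distances to per-label class centers, with a softmax normalisation per instance. It only conveys the flavour of distance-based LE; it is not the FCM, KM, LP, ML, or GLLE algorithm reviewed above, and the data are made up:

```python
import math

def toy_label_enhancement(X, L, beta=2.0):
    m, c = len(X[0]), len(L[0])
    # per-label class centers, averaged over the instances tagged with that label
    centers = []
    for j in range(c):
        members = [x for x, l in zip(X, L) if l[j] == 1]
        centers.append([sum(x[k] for x in members) / len(members) for k in range(m)])
    # description degrees: softmax of negative distances to the centers
    D = []
    for x in X:
        scores = [-beta * math.dist(x, centers[j]) for j in range(c)]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        Z = sum(exps)
        D.append([e / Z for e in exps])
    return D

X = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1], [0.5, 0.5]]
L = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
D = toy_label_enhancement(X, L)
print(D[0])  # the first label dominates, yet both degrees stay non-zero
```

The recovered rows are valid distributions, and the degree assigned to each label reflects how well it describes the instance, which is exactly the output format an LE method must produce.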
\section{Proposed Method}
Let $\mathcal{X}=\mathbb{R}^{m}$ denote the $m$-dimensional input space and $\mathcal{Y}=\left\{y_{1}, y_{2}, \cdots, y_{c}\right\}$ represent the complete set of labels, where $c$ is the number of all possible labels. For each instance $x_{i} \in \mathcal{X}$, a simple logical label vector $l_{i} \in\{0,1\}^{c}$ indicates which labels describe the instance correctly. Specifically, under the LDL paradigm, instance $x_{i}$ is instead assigned a distribution-level vector $d_{i}$. The main notations used throughout the paper are summarized in Table \ref{tab:Notations}.
\begin{table}[]
\centering
\caption{Summary of some notations}\smallskip
\resizebox{0.5\columnwidth}{!}{
\begin{tabular}{ll}
\toprule
Notations & Description \\
\midrule
$n$ & the number of instances \\
$c$ & number of labels \\
$m$ & dimension of samples \\
$X$ & instance feature matrix \\
$L$ & logical label matrix \\
$D$ & label distribution matrix \\
$\hat{W} $ & Mapping parameter of BD-LE\\
$\tilde{W}$ & Reconstruction parameter of BD-LE \\
$\theta $ & Mapping parameter of BD-LDL\\
$\tilde{\theta}$ & Reconstruction parameter of BD-LDL \\
\bottomrule
\end{tabular}
}
\label{tab:Notations}
\end{table}
\subsection{Bi-directional Label Enhancement}
Given a dataset $E=\left\{\left(x_{1}, l_{1}\right),\left(x_{2}, l_{2}\right), \cdots,\left(x_{n}, l_{n}\right)\right\}$, $X=\left[x _{1}, x_{2}, x_{3}, \ldots, x_{n}\right]$ and $L=\left[l_{1}, l_{2}, l_{3}, \dots, l_{n}\right]$ are defined as the input matrix and the logical label matrix, respectively. As discussed above, the goal of LE is to transform $L$ into the label distribution matrix $D=\left[d_{1}, d_{2}, d_{3}, \dots, d_{n}\right]$.
Firstly, a nonlinear function $\varphi(\cdot)$, i.e., a kernel function, is defined to transform each instance ${x_i}$ into a higher-dimensional feature $\varphi(x_{i})$, from which the augmented vector $\phi_{i}=\left[\varphi\left(x_{i}\right) ; 1\right]$ of the corresponding instance is constructed. An appropriate mapping parameter $\hat{W}$ is then required to transform the input feature $\phi_{i}$ into the label distribution $d_i$. As there is a large dimension gap between the input space and the output space, much information may be lost during the mapping process. To address this issue, it is reasonable to introduce a parameter $\tilde{W}$ for the reconstruction of the input data from the output data. Accordingly, the objective function of LE is formulated as follows:
\begin{equation}
\label{eq:obj_le_1}
\min _{\hat{W}, \tilde{W}} L(\hat{W})+\alpha R(\tilde{W})+\frac{1}{2} \lambda \Omega(\hat{W})+\frac{1}{2} \lambda \Omega(\tilde{W})
\end{equation}
where $L$ denotes the loss function of data mapping, $R$ indicates the loss function of data reconstruction, $\Omega$ is the regularization term, and $\lambda$ and $\alpha$ are two trade-off parameters. It should be noted that the LE algorithm is regarded as a pre-processing step for LDL methods and does not suffer from the over-fitting problem. Accordingly, it is not necessary to add the norms of the parameters $\hat{W}$ and $\tilde{W}$ as regularizers.
The first term $L$ is the mapping loss function, measuring the distance between the logical labels and the recovered label distributions. According to \cite{xu2018LE}, it is reasonable to select the least squares (LS) function:
\begin{equation}
\begin{aligned} L(\hat{W}) &=\sum_{i=1}^{n}\left\|\hat{W} \phi_{i}-\boldsymbol{l}_{i}\right\|^{2} \\ &=\operatorname{tr}\left[(\hat{W} \Phi-L)^{\top}(\hat{W} \Phi-L)\right] \end{aligned}
\end{equation}
where $\Phi=\left[\phi_{1}, \ldots, \phi_{n}\right]$ and $\operatorname{tr}(\cdot)$ is the trace of a matrix, defined as the sum of its diagonal elements. The second term $R$ is the reconstruction loss function, measuring the similarity between the input feature data and the data reconstructed from the output of LE. Similar to the mapping loss function, the reconstruction loss function is defined as follows:
\begin{equation}
\begin{aligned}
R(\tilde{W}) &=\sum_{i=1}^{n}\left\|\phi_{i}-\tilde{W} l_{i}\right\|^{2} \\ &=\operatorname{tr}\left[(\Phi-\tilde{W} L)^{T}(\Phi-\tilde{W} L)\right]
\end{aligned}
\end{equation}
To further simplify the model, it is reasonable to consider tied weights \cite{boureau2008sparseFeature} as follows:
\begin{equation}
\tilde{W}^{*}=\hat{W}^{T}
\end{equation}
where $\tilde{W}^{*}$ is the best reconstruction parameter to be obtained.
Then Eq. (\ref{eq:obj_le_1}) is rewritten as:
\begin{equation}
\label{eq:obj_le}
\min _{\hat{W}} L(\hat{W})+\alpha R(\hat{W})+\lambda \Omega(\hat{W})
\end{equation}
To obtain desired results, the manifold regularization $\Omega$ is designed to capture the topological consistency between feature space and label space, which can fully exploit the hidden label importance from the input instances. Before presenting this term, it is required to introduce the similarity matrix $A$, whose element is defined as:
\begin{equation}
a_{i j}=\left\{\begin{array}{cc}{\exp \left(-\frac{\left\|x_{i}-x_{j}\right\|^{2}}{2 \sigma^{2}}\right)} & {\text { if } x_{j} \in N(i)} \\ {0} & {\text { otherwise }}\end{array}\right.
\end{equation}
where $N(i)$ denotes the set of $K$-nearest neighbors of the instance $x_{i}$, and $\sigma>0$ is a hyper-parameter fixed to 1 in this paper. Inspired by the smoothness assumption \cite{zhu2005semiGraph}, the more correlated two instances are, the closer their recovered label distributions should be, and vice versa. Accordingly, it is reasonable to design the following manifold regularization:
\begin{equation}
\begin{aligned} \Omega(\hat{W}) &=\sum_{i, j} a_{i j}\left\|d_{i}-d_{j}\right\|^{2}=\operatorname{tr}\left(D G D^{\top}\right) \\ &=\operatorname{tr}\left(\hat{W} \Phi G \Phi^{\top} \hat{W}^{\top}\right) \end{aligned}
\end{equation}
where $d_{i}=\hat{W} \phi_{i}$ indicates the recovered label distribution, and $G=\hat{A}-A$ is the Laplacian matrix. Note that the similarity matrix $A$ is asymmetric, so the elements of the diagonal matrix $\hat{A}$ are defined as ${\hat a_{ii}} = \sum\nolimits_{j = 1}^n {{(a_{ij}+a_{ji})/2}}$.
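As a concrete illustration of Eqs. (6) and (7), the asymmetric similarity matrix $A$ and the Laplacian $G$ can be sketched in a few lines of NumPy. This is a minimal sketch with toy data; the helper name build_graph and the random instances are ours, not part of the proposed method:

```python
import numpy as np

def build_graph(X, K, sigma=1.0):
    """Build the asymmetric K-NN similarity matrix A of Eq. (6) and the
    Laplacian G = A_hat - A, with a_hat_ii = sum_j (a_ij + a_ji) / 2."""
    n = X.shape[0]
    # pairwise squared Euclidean distances between all instances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:K + 1]          # K nearest neighbours of x_i
        A[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    A_hat = np.diag((A + A.T).sum(axis=1) / 2.0)   # symmetrised degree matrix
    return A, A_hat - A

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                       # 20 toy instances, 5 features
A, G = build_graph(X, K=4)
```

The symmetrized part of $G$ is a standard graph Laplacian, so the quadratic form in Eq. (7) is non-negative.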
By substituting Eqs. (2), (3) and (7) into Eq. (5), the overall objective is defined on the parameter $\hat{W}$ as follows:
\begin{equation}
\label{eq:fianl_obj_le}
\begin{aligned}
& T(\hat{W})=\operatorname{tr}\left((\hat{W} \Phi-L)^{T}(\hat{W} \Phi-L)\right)\\
& +\alpha \operatorname{tr}\left(\left(\Phi-\hat{W}^{T} L\right)^{T}\left(\Phi-\hat{W}^{T} L\right)\right)\\
& +\lambda \operatorname{tr}\left(\hat{W} \Phi G \Phi^{T} \hat{W}^{T}\right)
\end{aligned}
\end{equation}
Actually, Eq. (\ref{eq:fianl_obj_le}) can be efficiently optimized by the well-known limited-memory quasi-Newton method (L-BFGS) \cite{yuan1991modifiedBFGS}, which only requires the first-order gradient of $T(\hat{W})$:
\begin{equation}
\begin{aligned}
\frac{\partial T}{\partial \hat{W}}=2 \hat{W} \Phi \Phi^{T}-2 L \Phi^{T}-2 \alpha L \Phi^{T}+2 \alpha L L^{T} \hat{W}\\
+\lambda \hat{W} \Phi G^{T} \Phi^{T}+\lambda \hat{W} \Phi G \Phi^{T}
\end{aligned}
\end{equation}
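For illustration, Eqs. (8) and (9) can be plugged directly into an off-the-shelf L-BFGS routine. The sketch below is our own, using random toy data and an identity matrix standing in for the Laplacian $G$; it minimizes $T(\hat{W})$ with SciPy's L-BFGS-B implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, f, c = 30, 8, 4                      # instances, feature dim, labels (toy sizes)
Phi = rng.normal(size=(f, n))           # column-wise features phi_i
L = (rng.random(size=(c, n)) > 0.5).astype(float)   # toy logical labels
G = np.eye(n)                           # identity stands in for the Laplacian here
alpha, lam = 1e-3, 1e-3

def objective(w):
    W = w.reshape(c, f)
    t = np.sum((W @ Phi - L) ** 2)                  # mapping loss, Eq. (2)
    t += alpha * np.sum((Phi - W.T @ L) ** 2)       # reconstruction loss, Eq. (3)
    t += lam * np.trace(W @ Phi @ G @ Phi.T @ W.T)  # manifold term, Eq. (7)
    return t

def gradient(w):                                    # analytic gradient, Eq. (9)
    W = w.reshape(c, f)
    g = 2 * W @ Phi @ Phi.T - 2 * L @ Phi.T
    g += -2 * alpha * L @ Phi.T + 2 * alpha * L @ L.T @ W
    g += lam * W @ Phi @ (G + G.T) @ Phi.T
    return g.ravel()

res = minimize(objective, np.zeros(c * f), jac=gradient, method="L-BFGS-B")
W_hat = res.x.reshape(c, f)
```

The analytic gradient can be checked against finite differences, which is a useful sanity test before running the full optimization.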
\subsection{Bi-directional Label Distribution Learning}
Given a dataset $S=\left\{\left(x_{1}, d_{1}\right),\left(x_{2}, d_{2}\right), \cdots,\left(x_{n}, d_{n}\right)\right\}$ whose labels are real-valued, LDL aims to build a mapping function $f : \mathcal{X} \rightarrow \mathcal{D}$ from the instances to the label distributions, where $x_{i} \in \mathcal{X}$ denotes the $i$-th instance and $d _ { i } = \left\{ d _ { x _ { i } } ^ { y _ { 1 } } , d _ { x _ { i } } ^ { y _ { 2 } } , \cdots , d _ { x _ { i } } ^ { y _ { c } } \right\} \in \mathcal{D} $ indicates its label distribution. Note that $d_{ x }^{ y }$ accounts for the description degree of $y$ to $x$ rather than the probability that $y$ labels $x$ correctly. Since all the labels together describe each instance completely, it holds that $d _ { x } ^ { y } \in [ 0,1 ]$ and $\sum _ { y } d _ { x } ^ { y } = 1$.
As mentioned before, most LDL methods suffer from mapping information loss due to the unidirectional projection in the loss function. Fortunately, bidirectional projections can largely preserve the information of the input matrix. Accordingly, the goal of our BD-LDL algorithm is to determine a mapping parameter $\theta$ and a reconstruction parameter $\tilde{\theta}$ from the training set so as to make the predicted label distributions and the true ones as similar as possible. Therefore, the new loss function integrates the mapping error with the reconstruction error $R(\tilde{\theta},S)$ as follows:
\begin{equation}
\min _{\theta, \tilde{\theta}} L(\theta, S)+\lambda_{1} R(\tilde{\theta}, S)+\frac{1}{2} \lambda_{2} \Omega(\theta, S)+\frac{1}{2} \lambda_{2} \Omega(\tilde{\theta}, S)
\end{equation}
where ${\theta}$ denotes the mapping parameter, $\tilde{\theta}$ indicates the reconstruction parameter, $\Omega$ is a regularization to control the complexity of the output model to avoid over-fitting, $\lambda_{1}$ and $\lambda_{2}$ are two parameters to balance these four terms.
There are various candidate functions to measure the difference between two distributions, such as the Euclidean distance, the Kullback-Leibler (K-L) divergence and the Clark distance. Here, we choose the Euclidean distance:
\begin{equation}
L(\theta, S)=\|X \theta-D\|_{F}^{2}
\end{equation}
where $\theta \in \mathbb{R}^{d \times c}$ is the mapping parameter to be optimized, and $\|\cdot\|_{F}$ denotes the Frobenius norm of a matrix.
For simplification, it is reasonable to consider tied weights \cite{boureau2008sparseFeature} as follows:
\begin{equation}
\tilde{\theta}^{*}=\theta^{T}
\end{equation}
Similarly, the objective function is simplified as follows:
\begin{equation}
\min _{\theta} L(\theta, S)+\lambda_{1} R(\theta, S)+\lambda_{2} \Omega(\theta, S)
\end{equation}
where the term $R(\theta, S)=\left\|X-D \theta^{T}\right\|_{F}^{2}$ denotes the simplified reconstruction error.
As for the second term in objective function, we adopt the F-norm to implement it:
\begin{equation}
\Omega(\theta, S)=\|\theta\|_{F}^{2}
\end{equation}
Substituting Eqs. (11) and (14) into Eq. (13) yields the objective function:
\begin{equation}
\min _{\theta}\|X \theta-D\|_{F}^{2}+\lambda_{1}\left\|X-D \theta^{T}\right\|_{F}^{2}+\lambda_{2}\|\theta\|_{F}^{2}
\end{equation}
Before optimization, the trace properties $\operatorname{tr}(X)=\operatorname{tr}\left(X^{T}\right)$ and $\operatorname{tr}\left(D \theta^{T}\right)=\operatorname{tr}\left(\theta D^{T}\right)$ are applied to re-organize the objective function:
\begin{equation}
\min _{\theta}\|X \theta-D\|_{F}^{2}+\lambda_{1}\left\|X^{T}-\theta D^{T}\right\|_{F}^{2}+\lambda_{2}\|\theta\|_{F}^{2}.
\label{eq:der0}
\end{equation}
Then, for optimization, we can simply take the derivative of Eq. (\ref{eq:der0}) with respect to the parameter $\theta$ and set it to zero:
\begin{equation}
{X^{T}(X \theta-D)-\lambda_{1}\left(X^{T}-\theta D^{T}\right) D+\lambda_{2} \theta=0}
\label{eq:der}
\end{equation}
Obviously, Eq. (\ref{eq:der}) can be transformed into the following equivalent formulation:
\begin{equation}
{\left(X^{T} X+\lambda_{2} I\right) \theta+\lambda_{1} \theta D^{T} D=X^{T} D+\lambda_{1} X^{T} D}
\label{eq:bd_ldl_final}
\end{equation}
Denoting $A=X^{T} X+\lambda_{2} I$, $B=\lambda_{1} D^{T} D$ and $C=\left(1+\lambda_{1}\right) X^{T} D$,
Eq. (\ref{eq:bd_ldl_final}) can be rewritten as:
\begin{equation}
\label{eq:AB=C}
A \theta+\theta B=C
\end{equation}
\begin{algorithm}[htb]
\caption{BD-LDL Algorithm}
\label{alg:BD-LDL}
\begin{algorithmic}[1]
\Require
$X$: $n \times d$ training feature matrix;
$D$: $n \times c$ labeled distribution matrix;
\Ensure
$\theta$: $d \times c$ projection parameter
\State Initialize $\theta^{0}$, $\lambda_{1}$, $\lambda_{2}$ and set $t=0$;
\Repeat
\State Compute $A$, $B$ and $C$ in Eq.(\ref{eq:AB=C})
\State Perform Cholesky factorization to gain $P$ and $Q$
\State Perform SVD on $P$ and $Q$
\State Update $\tilde{\theta}^{t+1}$ via Eqs.(\ref{eq:step3}) and (\ref{eq:element})
\State Update $\theta^{t+1}$ via Eqs.(\ref{eq:the_best_theta})
\Until{Stopping criterion is satisfied}
\end{algorithmic}
\end{algorithm}
Although Eq. (\ref{eq:AB=C}) is the well-known Sylvester equation, which can be solved by existing routines in MATLAB, the computational cost of such a solver is not ideal. Thus, following \cite{zhu2017a}, we solve Eq. (\ref{eq:AB=C}) efficiently with Cholesky factorization \cite{golub1996matrix} as well as the Singular Value Decomposition (SVD). Firstly, the two positive semi-definite matrices $A$ and $B$ can be factorized as:
\begin{equation}
\label{eq:Cholesky}
\begin{aligned}
&A=P^{T} \times P\\
&B=Q \times Q^{T}
\end{aligned}
\end{equation}
where $P$ and $Q$ are triangular matrices, which can be further decomposed via SVD as:
\begin{equation}
\label{eq:SVD}
\begin{aligned}
& P=U_{1} \Sigma_{1} V_{1}^{T} \\
& Q=U_{2} \Sigma_{2} V_{2}^{T}
\end{aligned}
\end{equation}
Substituting Eqs. (\ref{eq:Cholesky}) and (\ref{eq:SVD}) into Eq. (\ref{eq:AB=C}) yields:
\begin{equation}
\label{eq:step1}
V_{1} \Sigma_{1}^{T} U_{1}^{T} U_{1} \Sigma_{1} V_{1}^{T} \theta +\theta U_{2} \Sigma_{2} V_{2}^{T} V_{2} \Sigma_{2}^{T} U_{2}^{T}=C
\end{equation}
Since $U_{1}$, $U_{2}$, $V_{1}$ and $V_{2}$ are unitary matrices, Eq. (\ref{eq:step1}) can be rewritten as:
\begin{equation}
\label{eq:step2}
V_{1} \Sigma_{1}^{T} \Sigma_{1} V_{1}^{T} \theta +\theta U_{2} \Sigma_{2} \Sigma_{2}^{T} U_{2}^{T}=C
\end{equation}
Multiplying both sides of Eq. (\ref{eq:step2}) by $V_{1}^{T}$ on the left and $U_{2}$ on the right, we obtain the following equation:
\begin{equation}
\label{eq:step3}
\tilde{\Sigma}_{1} \tilde{\theta} + \tilde{\theta} \tilde{\Sigma}_{2} = E
\end{equation}
where $\tilde{\Sigma}_{1}=\Sigma_{1}^{T} \Sigma_{1}$, $\tilde{\Sigma}_{2}=\Sigma_{2} \Sigma_{2}^{T}$, $E=V_{1}^{T} C U_{2}$ and $\tilde{\theta}=V_{1}^{T} \theta U_{2}$.
Since both $\tilde{\Sigma}_{1}$ and $\tilde{\Sigma}_{2}$ are diagonal matrices, we can directly obtain $\tilde{\theta}$, whose elements are defined as:
\begin{equation}
\label{eq:element}
\tilde{\theta}_{i j}=\frac{e_{i j}}{\tilde{\sigma}_{i i}^{1}+\tilde{\sigma}_{j j}^{2}}
\end{equation}
where $\tilde{\sigma}_{i i}^{1}$ and $\tilde{\sigma}_{j j}^{2}$ are the squared singular values of $P$ and $Q$, respectively, and $e_{i j}$ is the \textit{(i,j)}-th element of matrix $E$. Accordingly, $\theta$ can be obtained by:
\begin{equation}
\label{eq:the_best_theta}
\theta=V_{1} \tilde{\theta} U_{2}^{T}
\end{equation}
We briefly summarize the procedure of the proposed BD-LDL in Algorithm 1.
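The closed-form solver summarized in Algorithm 1 is easy to reproduce. The sketch below is our own NumPy implementation of Eqs. (20)-(25) with random toy data; its output can be checked against the original Sylvester equation (19):

```python
import numpy as np

def bd_ldl_theta(X, D, lam1=1e-3, lam2=1e-2):
    """Solve A*theta + theta*B = C (Eq. 19) via Cholesky + SVD, Eqs. (20)-(25)."""
    A = X.T @ X + lam2 * np.eye(X.shape[1])   # A = X^T X + lam2 I
    B = lam1 * (D.T @ D)                      # B = lam1 D^T D
    C = (1.0 + lam1) * (X.T @ D)              # C = (1 + lam1) X^T D
    P = np.linalg.cholesky(A).T               # A = P^T P  (Eq. 20)
    Q = np.linalg.cholesky(B)                 # B = Q Q^T  (Eq. 20)
    U1, s1, V1t = np.linalg.svd(P)            # P = U1 diag(s1) V1^T  (Eq. 21)
    U2, s2, _ = np.linalg.svd(Q)              # Q = U2 diag(s2) V2^T  (Eq. 21)
    V1 = V1t.T
    E = V1.T @ C @ U2                         # Eq. (23)
    theta_tilde = E / (s1[:, None] ** 2 + s2[None, :] ** 2)  # Eq. (24)
    return V1 @ theta_tilde @ U2.T            # Eq. (25)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                  # toy feature matrix
D = rng.random(size=(50, 4))
D /= D.sum(axis=1, keepdims=True)             # toy label distributions (rows sum to 1)
theta = bd_ldl_theta(X, D)
```

Since $A = V_1 \tilde{\Sigma}_1 V_1^T$ and $B = U_2 \tilde{\Sigma}_2 U_2^T$, substituting the returned $\theta$ back into Eq. (19) should leave only a numerical residual.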
\begin{table}[]
\centering
\caption{Statistics of 13 datasets used in comparison experiment}\smallskip
\resizebox{0.55\columnwidth}{!}{
\begin{tabular}{lllll}
\toprule
Index & Data\ Set & \# Examples & \# Features & \# Labels \\
\midrule
1 & Yeast-alpha & 2,465 & 24 & 18 \\
2 & Yeast-cdc & 2,465 & 24 & 15 \\
3 & Yeast-cold & 2,465 & 24 & 4 \\
4 & Yeast-diau & 2,465 & 24 & 7 \\
5 & Yeast-dtt & 2,465 & 24 & 4 \\
6 & Yeast-elu & 2,465 & 24 & 14 \\
7 & Yeast-heat & 2,465 & 24 & 6 \\
8 & Yeast-spo & 2,465 & 24 & 6 \\
9 & Yeast-spo5 & 2,465 & 24 & 3 \\
10 & Yeast-spoem & 2,465 & 24 & 2 \\
11 & Natural\ Scene & 2,000 & 294 & 9 \\
12 & Movie & 7,755 & 1,869 & 5 \\
13 & SBU\_3DFE & 2,500 & 243 & 6 \\
\bottomrule
\end{tabular}
}
\label{tab:Data}
\end{table}
\section{Experiments}
\subsection{Datasets and Measurement}
We conducted extensive experiments on 13 real-world datasets collected from biological experiments \cite{eisen1998Genecluster}, facial expression images \cite{lyons1998coding-facial}, natural scene images, and movies.
The outputs of both LE and LDL are label distribution vectors. In contrast to the results of SLL and MLL, label distribution vectors should be evaluated with dedicated measurements. We select six criteria that are most commonly used, i.e., Chebyshev distance (Chebyshev), Clark distance (Clark), Canberra metric (Canberra), Kullback-Leibler divergence (K-L), Cosine coefficient (Cosine), and Intersection similarity (Intersec). The first four functions measure the distance between the ground-truth label distribution $D$ and the predicted one $\widehat{D}$, whereas the last two are similarity measurements. The specifications of the criteria and the used datasets can be found in Tables \ref{tab:Evaluation measurements} and \ref{tab:Data}.
\begin{table}[]
\large
\centering
\caption{Evaluation Measurements}\smallskip
\resizebox{0.55\columnwidth}{!}{
\begin{tabular}{l|l|l}
\toprule
\ & Name & Definition \\
\midrule
Distance & Chebyshev $\downarrow$ & $D i s_{1}(D, \widehat{D})=\max _{i}\left|d_{i}-\widehat{d}_{i}\right|$ \\
\ & Clark $\downarrow$ & $D i s_{2}(D, \widehat{D})=\sqrt{\sum_{i=1}^{c} \frac{\left(d_{i}-\widehat{d_{i}}\right)^{2}}{\left(d_{i}+\widehat{d_{i}}\right)^{2}}}$ \\
\ & Canberra $\downarrow$ & $D i s_{3}(D, \widehat{D})=\sum_{i=1}^{c} \frac{\left|d_{i}-\widehat{d}_{i}\right|}{d_{i}+\widehat{d}_{i}}$ \\
\midrule
Similarity & Intersection $\uparrow$ & $S i m _{1}(D, \widehat{D})=\sum_{i=1}^{c} \min \left(d_{i}, \widehat{d}_{i}\right)$ \\
\ & Cosine $\uparrow$ & $S i m_{2}(D, \widehat{D})=\frac{\sum_{i=1}^{c} d_{i} \widehat{d}_{i}}{\sqrt{\left(\sum_{i=1}^{c} d_{i}^{2}\right)\left(\sum_{i=1}^{c} \widehat{d}_{i}^{2}\right)}}$ \\
\bottomrule
\end{tabular}
}
\label{tab:Evaluation measurements}
\end{table}
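For reference, the distance and similarity measures in Table \ref{tab:Evaluation measurements} are straightforward to implement. The sketch below is our own NumPy version, operating row-wise on $n \times c$ distribution matrices and averaging over all instances:

```python
import numpy as np

def chebyshev(D, Dh):
    # Dis_1: maximum absolute difference per instance, averaged
    return np.abs(D - Dh).max(axis=1).mean()

def clark(D, Dh):
    # Dis_2: Clark distance
    return np.sqrt(((D - Dh) ** 2 / (D + Dh) ** 2).sum(axis=1)).mean()

def canberra(D, Dh):
    # Dis_3: Canberra metric
    return (np.abs(D - Dh) / (D + Dh)).sum(axis=1).mean()

def intersection(D, Dh):
    # Sim_1: intersection similarity
    return np.minimum(D, Dh).sum(axis=1).mean()

def cosine(D, Dh):
    # Sim_2: cosine coefficient
    num = (D * Dh).sum(axis=1)
    den = np.sqrt((D ** 2).sum(axis=1) * (Dh ** 2).sum(axis=1))
    return (num / den).mean()

rng = np.random.default_rng(0)
D = rng.random(size=(10, 4)); D /= D.sum(axis=1, keepdims=True)    # toy ground truth
Dh = rng.random(size=(10, 4)); Dh /= Dh.sum(axis=1, keepdims=True)  # toy prediction
```

When prediction equals ground truth, the distances are 0 and the similarities are 1 (intersection equals 1 because each row sums to 1), which gives a quick sanity check.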
\begin{table}[]
\label{tab:LE-CHEB}
\centering
\caption{Comparison Performance(rank) of Different LE Algorithms Measured by Chebyshev $\downarrow$}\smallskip
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{lllllll}
\toprule
Datasets & Ours & FCM & KM & LP & ML & GLLE\\[2pt]
\midrule
Yeast-alpha & \bf 0.0208(1) & 0.0426(4) & 0.0588(6) & 0.0401(3) & 0.0553(5) & 0.0310(2)\\
Yeast-cdc & \bf 0.0231(1) & 0.0513(4) & 0.0729(6) & 0.0421(3) & 0.0673(5) & 0.0325(2)\\
Yeast-cold & \bf 0.0690(1) & 0.1325(4) & 0.2522(6) & 0.1129(3) & 0.2480(5) & 0.0903(2)\\
Yeast-diau & \bf 0.0580(1) & 0.1248(4) & 0.2500(6) & 0.0904(3) & 0.1330(5) & 0.0789(2)\\
Yeast-dtt & \bf 0.0592(1) & 0.0932(3) & 0.2568(5) & 0.1184(4) & 0.2731(6) & 0.0651(2)\\
Yeast-elu & \bf 0.0256(1) & 0.0512(4) & 0.0788(6) & 0.0441(3) & 0.0701(5) & 0.0287(2)\\
Yeast-heat & \bf 0.0532(1) & 0.1603(4) & 0.1742(5) & 0.0803(3) & 0.1776(6) & 0.0563(2)\\
Yeast-spo & \bf 0.0641(1) & 0.1300(4) & 0.1753(6) & 0.0834(3) & 0.1722(5) & 0.0670(2)\\
Yeast-spo5 & 0.1017(2) & 0.1622(4) & 0.2773(6) & 0.1142(3) & 0.2730(5) & \bf 0.0980(1)\\
Yeast-spoem & \bf 0.0921(1) & 0.2333(4) & 0.4006(6) & 0.1632(3) & 0.3974(5) & 0.1071(2)\\
Natural\_Scene & 0.3355(5) & 0.3681(6) & 0.3060(3) & \bf 0.2753(1) & 0.2952(2) & 0.3349(4)\\
Movie & \bf 0.1254(1) & 0.2302(4) & 0.2340(6) & 0.1617(3) & 0.2335(5) & 0.1601(2)\\
SBU\_3DFE & \bf 0.1285(1) & 0.1356(3) & 0.2348(6) & 0.1293(2) & 0.2331(5) & 0.1412(4)\\
\midrule
Avg. Rank & 1.38 & 4.00 & 5.62 & 2.84 & 4.92 & 2.23 \\
\bottomrule
\end{tabular}
}\label{BD_LDL_RESULTS_1}
\end{table}
\begin{table}[]
\label{tab:LE-COSINE}
\centering
\caption{Comparison Performance(rank) of Different LE Algorithms Measured by Cosine $\uparrow$}\smallskip
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{lllllll}
\toprule
Datasets & Ours & FCM & KM & LP & ML & GLLE\\[2pt]
\midrule
Yeast-alpha & \bf 0.9852(1) & 0.9221(3) & 0.8115(5) & 0.9220(4) & 0.7519(6) & 0.9731(2)\\
Yeast-cdc & \bf 0.9857(1) & 0.9236(3) & 0.7541(6) & 0.9162(4) & 0.7591(5) & 0.9597(2)\\
Yeast-cold & \bf 0.9804(1) & 0.9220(4) & 0.7789(6) & 0.9251(3) & 0.7836(5) & 0.9690(2)\\
Yeast-diau & \bf 0.9710(1) & 0.8901(4) & 0.7990(6) & 0.9153(3) & 0.8032(5) & 0.9397(2)\\
Yeast-dtt & \bf 0.9847(1) & 0.9599(3) & 0.7602(6) & 0.9210(4) & 0.7631(5) & 0.9832(2)\\
Yeast-elu & \bf 0.9841(1) & 0.9502(3) & 0.7588(5) & 0.9110(4) & 0.7562(6) & 0.9813(2)\\
Yeast-heat & \bf 0.9803(1) & 0.8831(4) & 0.7805(6) & 0.9320(3) & 0.7845(5) & 0.9800(2)\\
Yeast-spo & \bf 0.9719(1) & 0.9092(4) & 0.8001(6) & 0.9390(3) & 0.8033(5) & 0.9681(2)\\
Yeast-spo5 & 0.9697(2) & 0.9216(4) & 0.8820(6) & 0.9694(3) & 0.8841(5) & \bf 0.9713(1)\\
Yeast-spoem & \bf 0.9761(1) & 0.8789(4) & 0.8122(6) & 0.9500(3) & 0.8149(5) & 0.9681(2)\\
Natural\_Scene & 0.7797(4) & 0.5966(6) & 0.7488(5) & 0.8602(2) & \bf 0.8231(1) & 0.7822(3)\\
Movie & \bf 0.9321(1) & 0.7732(6) & 0.8902(4) & 0.9215(2) & 0.8153(5) & 0.9000(3)\\
SBU\_3DFE & \bf 0.9233(1) & 0.9117(3) & 0.8126(6) & 0.9203(2) & 0.8150(5) & 0.9000(4)\\
\midrule
Avg. Rank & 1.31 & 3.92 & 5.62 & 3.08 & 4.85 & 2.23 \\
\bottomrule
\end{tabular}
}\label{BD_LDL_RESULTS_2}
\end{table}
\begin{table}
\label{tab:LDL-CHEB}
\centering
\caption{Comparison Results(mean$\pm$std.(rank)) of Different LDL Algorithms Measured by Clark $\downarrow$}\smallskip
\resizebox{1\textwidth}{!}{
\begin{tabular}{lllllllll}
\toprule
Datasets & Ours & PT-Bayes & AA-BP & SA-IIS & SA-BFGS & LDL-SCL & EDL-LRL & LDLLC\\
\midrule
Yeast-alpha & \bf 0.2097$\pm$0.003(1) & 1.1541$\pm$0.034(8) & 0.7236$\pm$0.060(7) & 0.3053$\pm$0.006(6) & 0.2689$\pm$0.008(5) & 0.2098$\pm$0.002(2) & 0.2126$\pm$0.000(4) & 0.2098$\pm$0.006(3) \\
Yeast-cdc & \bf 0.2017$\pm$0.004(1) & 1.0601$\pm$0.066(8) & 0.5728$\pm$0.030(7) & 0.2932$\pm$0.004(6) & 0.2477$\pm$0.007(5) & 0.2137$\pm$0.004(3) & 0.2046$\pm$2.080(2) & 0.2163$\pm$0.004(4) \\
Yeast-cold & \bf 0.1355$\pm$0.004(1) & 0.5149$\pm$0.024(8) & 0.1552$\pm$0.005(6) & 0.1643$\pm$0.004(7) & 0.1471$\pm$0.004(5) & 0.1388$\pm$0.003(2) & 0.1442$\pm$2.100(4) & 0.1415$\pm$0.004(3) \\
Yeast-diau & \bf 0.1960$\pm$0.006(1) & 0.7487$\pm$0.042(8) & 0.2677$\pm$0.010(7) & 0.2409$\pm$0.006(6) & 0.2201$\pm$0.002(5) & 0.1986$\pm$0.002(2) & 0.2011$\pm$0.003(4) & 0.2010$\pm$0.006(3) \\
Yeast-dtt & 0.0964$\pm$0.004(2) & 0.4807$\pm$0.040(8) & 0.1206$\pm$0.008(6) & 0.1332$\pm$0.003(7) & 0.1084$\pm$0.003(5) & 0.0989$\pm$0.001(4) & 0.0980$\pm$1.600(3) & \bf 0.0962$\pm$0.006(1) \\
Yeast-elu & \bf 0.1964$\pm$0.004(1) & 1.0050$\pm$0.041(8) & 0.5246$\pm$0.028(7) & 0.2751$\pm$0.006(6) & 0.2438$\pm$0.008(5) & 0.2015$\pm$0.002(3) & 0.2029$\pm$0.023(4) & 0.1994$\pm$0.006(2) \\
Yeast-heat & \bf 0.1788$\pm$0.005(1) & 0.6829$\pm$0.026(8) & 0.2261$\pm$0.010(7) & 0.2260$\pm$0.005(6) & 0.1998$\pm$0.003(5) & 0.1826$\pm$0.003(2) & 0.1826$\pm$0.003(2) & 0.1854$\pm$0.004(4) \\
Yeast-spo & \bf 0.2456$\pm$0.008(1) & 0.6686$\pm$0.040(8) & 0.2950$\pm$0.010(7) & 0.2759$\pm$0.006(6) & 0.2639$\pm$0.003(5) & 0.2503$\pm$0.002(4) & 0.2480$\pm$0.685(2) & 0.2500$\pm$0.008(3) \\
Yeast-spo5 & \bf 0.1785$\pm$0.007(1) & 0.4220$\pm$0.020(8) & 0.1870$\pm$0.005(3) & 0.1944$\pm$0.009(6) & 0.1962$\pm$0.001(7) & 0.1881$\pm$0.004(4) & 0.1915$\pm$0.020(5) & 0.1837$\pm$0.007(2) \\
Yeast-spoem & \bf 0.1232$\pm$0.005(1) & 0.3065$\pm$0.030(8) & 0.1890$\pm$0.012(7) & 0.1367$\pm$0.007(6) & 0.1312$\pm$0.001(3) & 0.1316$\pm$0.005(4) & 0.1273$\pm$0.054(2) & 0.1320$\pm$0.008(5) \\
Natural\_Scene & \bf 2.3612$\pm$0.541(1) & 2.5259$\pm$0.015(8) & 2.4534$\pm$0.018(4) & 2.4703$\pm$0.019(6) & 2.4754$\pm$0.013(7) & 2.4580$\pm$0.012(5) & 2.4519$\pm$0.005(2) & 2.4456$\pm$0.019(3) \\
Movie & \bf 0.5211$\pm$0.606(1) & 0.8044$\pm$0.010(8) & 0.6533$\pm$0.010(6) & 0.5783$\pm$0.007(5) & 0.5750$\pm$0.011(4) & 0.5543$\pm$0.007(3) & 0.6956$\pm$0.041(7) & 0.5289$\pm$0.008(2) \\
SBU\_3DFE & 0.3540$\pm$0.010(2) & 0.4137$\pm$0.010(5) & 0.4454$\pm$0.020(8) & 0.4156$\pm$0.012(7) & \bf 0.3465$\pm$0.006(1) & 0.3546$\pm$0.002(3) & 0.3556$\pm$0.006(4) & 0.4145$\pm$0.006(6) \\
\midrule
Avg. Rank & 1.15 & 7.77 & 6.31 & 6.15 & 4.77 & 3.15 & 3.46 & 3.15 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\label{tab:LDL-COSINE}
\centering
\caption{Comparison Results(mean$\pm$std.(rank)) of Different LDL Algorithms Measured by Cosine $\uparrow$}\smallskip
\resizebox{1\textwidth}{!}{
\begin{tabular}{lllllllll}
\toprule
Datasets & Ours & PT-Bayes & AA-BP & SA-IIS & SA-BFGS & LDL-SCL & EDL-LRL & LDLLC\\
\midrule
Yeast-alpha & \bf 0.9947$\pm$0.000(1) & 0.8527$\pm$0.005(8) & 0.9482$\pm$0.007(7) & 0.9879$\pm$0.000(6) & 0.9914$\pm$0.000(5) & 0.9945$\pm$0.000(3) & 0.9945$\pm$0.000(4) & 0.9946$\pm$0.000(2)\\
Yeast-cdc & \bf 0.9955$\pm$0.000(1) & 0.8544$\pm$0.012(8) & 0.9590$\pm$0.003(7) & 0.9871$\pm$0.000(6) & 0.9913$\pm$0.000(4) & 0.9904$\pm$0.000(5) & 0.9939$\pm$8.070(2) & 0.9932$\pm$0.000(3)\\
Yeast-cold & \bf 0.9893$\pm$0.001(1) & 0.8884$\pm$0.008(8) & 0.9859$\pm$0.001(6) & 0.9838$\pm$0.000(7) & 0.9871$\pm$0.000(5) & 0.9886$\pm$0.000(3) & 0.9892$\pm$0.034(2) & 0.9883$\pm$0.001(4)\\
Yeast-diau & \bf 0.9884$\pm$0.001(1) & 0.8644$\pm$0.007(8) & 0.9860$\pm$0.000(5) & 0.9821$\pm$0.000(7) & 0.9853$\pm$0.000(6) & 0.9880$\pm$0.000(2) & 0.9876$\pm$0.063(4) & 0.9878$\pm$0.001(3)\\
Yeast-dtt & \bf 0.9943$\pm$0.000(1) & 0.8976$\pm$0.012(8) & 0.9909$\pm$0.001(6) & 0.9889$\pm$0.000(7) & 0.9928$\pm$0.000(5) & 0.9939$\pm$0.000(3) & 0.9940$\pm$0.021(2) & 0.9939$\pm$0.001(4)\\
Yeast-elu & \bf 0.9942$\pm$0.000(1) & 0.8600$\pm$0.008(8) & 0.9623$\pm$0.003(7) & 0.9876$\pm$0.000(6) & 0.9912$\pm$0.000(5) & 0.9939$\pm$0.000(3) & 0.9938$\pm$0.001(4) & 0.9940$\pm$0.000(2)\\
Yeast-heat & \bf 0.9884$\pm$0.001(1) & 0.8655$\pm$0.008(8) & 0.9814$\pm$0.001(6) & 0.9810$\pm$0.000(7) & 0.9857$\pm$0.000(5) & 0.9880$\pm$0.000(2) & 0.9880$\pm$0.029(3) & 0.9876$\pm$0.001(4)\\
Yeast-spo & \bf 0.9776$\pm$0.001(1) & 0.8672$\pm$0.010(8) & 0.9686$\pm$0.003(7) & 0.9718$\pm$0.001(6) & 0.9745$\pm$0.000(5) & 0.9768$\pm$0.000(4) & 0.9772$\pm$0.010(2) & 0.9770$\pm$0.001(3)\\
Yeast-spo5 & \bf 0.9753$\pm$0.002(1) & 0.8968$\pm$0.010(8) & 0.9731$\pm$0.001(4) & 0.9706$\pm$0.002(7) & 0.9710$\pm$0.000(6) & 0.9732$\pm$0.001(3) & 0.9723$\pm$0.007(5) & 0.9743$\pm$0.002(2)\\
Yeast-spoem & \bf 0.9803$\pm$0.001(1) & 0.9187$\pm$0.010(8) & 0.9728$\pm$0.003(7) & 0.9764$\pm$0.001(6) & 0.9786$\pm$0.000(3) & 0.9784$\pm$0.001(4) & 0.9796$\pm$0.008(2) & 0.9784$\pm$0.002(5)\\
Natural\_Scene & \bf 0.7637$\pm$0.015(1) & 0.5583$\pm$0.006(8) & 0.6954$\pm$0.014(7) & 0.6986$\pm$0.008(6) & 0.7144$\pm$0.008(5) & 0.7442$\pm$0.007(3) & 0.7624$\pm$0.003(2) & 0.7486$\pm$0.014(4)\\
Movie & \bf 0.9385$\pm$0.002(1) & 0.8495$\pm$0.003(8) & 0.8767$\pm$0.006(7) & 0.9089$\pm$0.002(4) & 0.8780$\pm$0.004(5) & 0.9205$\pm$0.002(3) & 0.8780$\pm$0.005(6) & 0.9381$\pm$0.003(2)\\
SBU\_3DFE & \bf 0.9644$\pm$0.004(1) & 0.9167$\pm$0.004(8) & 0.9181$\pm$0.005(7) & 0.9202$\pm$0.004(5) & 0.9482$\pm$0.001(3) & 0.9436$\pm$0.000(4) & 0.9636$\pm$0.002(2) & 0.9198$\pm$0.002(6)\\
\midrule
Avg. Rank & 1.00 & 8.00 & 6.38 & 6.15 & 4.77 & 3.54 & 3.08 & 3.38 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Methodology}
To show the effectiveness of the proposed methods, we conducted comprehensive experiments on the aforementioned datasets. For LE, the proposed BD-LE method is compared with five classical LE approaches presented in \cite{xu2018LE}, i.e., FCM, KM, LP, ML, and GLLE. The hyper-parameter in the FCM method is set to 2. We select the Gaussian Kernel as the kernel function in the KM algorithm. For GLLE, the parameter $\lambda$ is set to 0.01. Moreover, the number of neighbors $K$ is set to $c+1$ in both GLLE and ML.
For the LDL paradigm, the proposed BD-LDL method is compared with eight existing algorithms, including PT-Bayes \cite{geng2013LDL}, PT-SVM \cite{geng2014facialadaptive}, AA-KNN \cite{geng2010facialestimation}, AA-BP \cite{geng2013facialestimation}, SA-IIS \cite{geng2013facialestimation}, SA-BFGS \cite{geng2016LDL}, LDL-SCL \cite{zheng2018labelCorrelationSample}, and EDL-LRL \cite{jia2019facialEDLLRL}, to demonstrate its superiority. The first two algorithms are implemented by the strategy of problem transformation, the next two by means of algorithm adaptation, and the remaining four are specialized algorithms. In particular, LDL-SCL and EDL-LRL are recently proposed state-of-the-art methods. We utilized the ``C-SVC'' type in LIBSVM to implement PT-SVM using the RBF kernel with parameters $C=10^{-1}$ and $\gamma=10^{-2}$. We set the hyper-parameter $k$ in AA-KNN to 5. The number of hidden-layer neurons for AA-BP was set to 60. The parameters $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ in LDL-SCL were all set to $10^{-3}$. Regarding the EDL-LRL algorithm, we set the regularization parameters $\lambda_{1}$ and $\lambda_{2}$ to $10^{-3}$ and $10^{-2}$, respectively. For the intermediate K-means algorithm, the number of clusters was set to 5 according to Jia's suggestion \cite{jia2018labelCorrelation}. For the BFGS optimization used in SA-BFGS and BD-LDL, parameters $c_{1}$ and $c_{2}$ were set to $10^{-4}$ and 0.9, respectively.
Regarding the two bi-directional algorithms, the parameters are tuned over the range $10^{\{-4,-3,-2,-1,0,1,2,3\}}$ using grid search. The two parameters $\alpha$ and $\lambda$ in BD-LE are both set to $10^{-3}$. As for BD-LDL, the parameters $\lambda_{1}$ and $\lambda_{2}$ are set to $10^{-3}$ and $10^{-2}$, respectively. Finally, we train an LDL model with the recovered label distributions for further evaluation of BD-LE. The details of parameter selection are given in the parameter analysis section, and the experiments for every LDL algorithm on each dataset are conducted with ten-fold cross-validation.
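The tuning protocol above can be sketched as follows. This is our own illustration: it enumerates the grid of candidate values and builds a simple ten-fold index split, while the model fitting and scoring inside the loop are omitted:

```python
import itertools
import numpy as np

# Candidate values 10^{-4}, ..., 10^{3} for each trade-off parameter.
grid = [10.0 ** p for p in range(-4, 4)]

def ten_fold_indices(n, seed=0):
    """Shuffle the sample indices and split them into 10 disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 10)

folds = ten_fold_indices(2465)               # e.g. the Yeast datasets have 2,465 samples
pairs = list(itertools.product(grid, grid))  # all (lambda_1, lambda_2) candidates
for lam1, lam2 in pairs:
    pass  # fit the model with (lam1, lam2) on 9 folds, score on the held-out fold
```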
\subsection{Results}
\subsubsection{BD-LE performance}
Tables 1 and 2 present the results of the six LE methods on all the datasets. Constrained by the page limit, we only show two representative results, measured by Chebyshev and Cosine, in this paper.
For each dataset, the results of a specific algorithm are listed as a column in the tables according to the used metric. Note that in each row one entry is highlighted in boldface, indicating that the corresponding algorithm achieves the best performance under that measurement. The experimental results are presented in the form of ``score (rank)'': ``score'' denotes the difference between a predicted distribution and the real one under the corresponding metric, and ``rank'' directly compares the effectiveness of the algorithms. Moreover, the symbol ``$\downarrow$'' means ``the smaller the better,'' whereas ``$\uparrow$'' indicates ``the larger the better.''
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{CHEB_BD.png}
\caption{Comparison results of different LE algorithms against the ‘Ground-truth’ used in BD-LDL measured by Chebyshev $\downarrow$}
\label{fig:BD-LDL-CHEB}
\end{figure}
It is worth noting that, since the LE method is regarded as a pre-processing step, there is no need to run it several times and record the mean and standard deviation. The proposed BD-LE clearly outperforms the other LE algorithms in most cases and renders sub-optimal performance in only about 4.7\% of cases according to the statistics. In addition, BD-LE achieves better prediction results than GLLE in most cases, especially on the dataset Movie. From Table \ref{tab:Data} we can see that the largest dimensional gap between the input space and the output space occurs exactly in the dataset Movie, which indicates that the reconstruction projection is reasonably added to the LE algorithm. The two specialized algorithms, namely BD-LE and GLLE, rank first in 91.1\% of cases. By contrast, the label distributions are hardly recovered by the other four algorithms, which indicates the superiority of utilizing a direct similarity or distance as the loss function in LDL and LE problems. In summary, the performance of the six LE algorithms is ranked from best to worst as follows: BD-LE, GLLE, LP, FCM, ML and KM. This proves the effectiveness of our proposed bi-directional loss function for the LE method.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{CHEB_SA.png}
\caption{Comparison results of different LE algorithms against the ‘Ground-truth’ used in SA-BFGS measured by Chebyshev $\downarrow$}
\label{fig:SA-BFGS-CHEB}
\end{figure}
\subsubsection{BD-LDL performance}
As for the performance of the BD-LDL method, we show the numerical results on the 13 real-world datasets over the measurements Clark and Cosine in Tables 3 and 4, similarly in the format ``mean$\pm$std (rank)''; the item in bold in every row represents the best performance. One may observe that our BD-LDL algorithm outperforms the other classical LDL algorithms in most cases. When measured by Cosine, BD-LDL achieves the best performance on every dataset, which strongly demonstrates the effectiveness of the proposed method. Besides, it can be found from Table 3 that although LDLLC and SA-BFGS obtain the best results on the datasets Yeast-dtt and SBU\_3DFE, respectively, when measured with Clark, BD-LDL still ranks second. It can also be seen from the results that the PT and AA algorithms perform poorly in most cases, which verifies the superiority of utilizing the direct similarity or distance between the predicted label distribution and the true one as the loss function. Moreover, our proposed method gains superior performance over the existing specialized algorithms, which do not consider the reconstruction error. This indicates that such a bi-directional loss function can truly boost the performance of LDL algorithms.
\begin{table}[]
\label{tab:ablation}
\centering
\caption{ Ablation experiments results of UD-LDL and BD-LDL Algorithms Measured by Canberra $\downarrow$ and Intersection $\uparrow$ }\smallskip
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{l|ll|ll}
\toprule
\multirow{2}{*}{Dataset} & \multicolumn{2}{l|}{$\qquad$$\qquad$Canberra $\downarrow$} & \multicolumn{2}{l}{$\qquad$$\qquad$Intersection $\uparrow$} \\
\cline{2-5}
& UD-LDL & BD-LDL & UD-LDL & BD-LDL \\
\midrule
\centering
Yeast-alpha & 0.7980$\pm$0.013 & \bf 0.6013$\pm$0.011 & 0.8915 $\pm$ 0.001 & \bf 0.9624$\pm$0.001 \\
Yeast-cdc & 0.9542$\pm$0.012 & \bf 0.6078$\pm$0.012 & 0.8869 $\pm$ 0.001 & \bf 0.9580$\pm$0.001 \\
Yeast-cold & 0.4515$\pm$0.010 & \bf 0.2103$\pm$0.007 & 0.8779 $\pm$ 0.002 & \bf 0.9430$\pm$0.002 \\
Yeast-diau & 0.6751$\pm$0.010 & \bf 0.4220$\pm$0.013 & 0.8338 $\pm$ 0.001 & \bf 0.9414$\pm$0.002 \\
Yeast-dtt & 0.3724$\pm$0.010 & \bf 0.1659$\pm$0.006 & 0.8775 $\pm$ 0.002 & \bf 0.9590$\pm$0.001 \\
Yeast-elu & 0.8005$\pm$0.007 & \bf 0.5789$\pm$0.011 & 0.8776 $\pm$ 0.001 & \bf 0.9591$\pm$0.001 \\
Yeast-heat & 0.5706$\pm$0.010 & \bf 0.3577$\pm$0.010 & 0.8591 $\pm$ 0.002 & \bf 0.9412$\pm$0.002 \\
Yeast-spo & 0.7075$\pm$0.016 & \bf 0.5036$\pm$0.019 & 0.8301 $\pm$ 0.003 & \bf 0.9171$\pm$0.003 \\
Yeast-spo5 & 0.4834$\pm$0.010 & \bf 0.2745$\pm$0.011 & 0.8184 $\pm$ 0.003 & \bf 0.9112$\pm$0.003 \\
Yeast-spoem & 0.2998$\pm$0.010 & \bf 0.1716$\pm$0.007 & 0.8129 $\pm$ 0.005 & \bf 0.9169$\pm$0.003 \\
Natural\_Scene & 6.9653$\pm$0.095 & \bf 0.7319$\pm$0.040 & 0.3822 $\pm$ 0.010 & \bf 0.5395$\pm$0.011 \\
Movie & 1.4259$\pm$0.024 & \bf 0.0218$\pm$0.001 & 0.7429 $\pm$ 0.004 & \bf 0.8298$\pm$0.002 \\
SBU\_3DFE & 0.8562$\pm$0.021 & \bf 0.0119$\pm$0.001 & 0.8070 $\pm$ 0.004 & \bf 0.8590$\pm$0.004 \\
\midrule
\end{tabular}
}
\end{table}
\subsubsection{LDL algorithm Predictive Performance}
The reason to use the LE algorithm is that we need to recover the label distributions for LDL training. To verify the correctness and effectiveness of our proposed LE algorithm, we conducted an experiment comparing predictions based on the recovered label distributions with those made by the LDL model trained on the real label distributions. Moreover, for further evaluation of the proposed BD-LDL, we selected BD-LDL and SA-BFGS as the LDL models in this experiment. Owing to the page limit, we present only the experimental results measured with Chebyshev. The prediction results achieved by SA-BFGS and BD-LDL are visualized in Figs. 1 and 2 as histograms. Note that `Ground-truth' in the figures represents the results based on the real label distributions. We regard these results as a benchmark rather than taking them into consideration in the evaluation. Meanwhile, we use `FCM', `KM', `LP', `ML', `GLLE' and `BD-LE' to represent the performance of the corresponding LE algorithms in this experiment. As illustrated in Figs. 1-2, although the prediction on the datasets Movie and SBU\_3DFE is worse than on the other datasets, `BD-LE' is still relatively close to `Ground-truth' in most cases, especially on the first to eleventh datasets. We must mention that `BD-LE' combined with BD-LDL is closer to `Ground-truth' than with SA-BFGS in all cases. This indicates that such a reconstruction constraint is generalized enough to bring improvement to both the LE and LDL algorithms simultaneously.
\begin{figure}[]
\centering
\includegraphics[width=1\columnwidth]{fig3.png}
\caption{Influence of parameter $\lambda$ and $\alpha$ on dataset \textit{cold} in BD-LE}
\label{fig:PARAMETER_LDL}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=1\columnwidth]{ldl.png}
\caption{Influence of parameter $\lambda_{1}$ and $\lambda_{2}$ on dataset \textit{cold} in BD-LDL}
\label{fig:PARAMETER_LE}
\end{figure}
\subsection{Ablation Experiment}
It is clear that the bi-directional loss for either LE or LDL consists of two parts, i.e., the naive mapping loss and the reconstruction loss. To further demonstrate the effectiveness of the additional reconstruction term, we conduct an ablation experiment measured with Canberra and Intersection on all 13 datasets. We call the unidirectional algorithms without the reconstruction term \textit{UD-LE} and \textit{UD-LDL} respectively, which are formulated as:
\begin{equation}
\min _{W} L(\hat{W})+ \Omega(W)
\end{equation}
\begin{equation}
\min _{\theta} L(\theta, S)+\lambda \Omega(\theta, S)
\end{equation}
Since the objective function of UD-LE is identical to that of GLLE, the corresponding comparison can be found in Tables 2 and 3. The LDL prediction results in the metrics of Canberra and Intersection are tabulated in Table 6.
From Table 6 we can see that BD-LDL gains superior results on all benchmark datasets, i.e., introducing the reconstruction term can truly boost the performance of the LDL algorithm. As expected, the top-3 improvements are achieved on the datasets Natural Scene, Movie and SBU\_3DFE respectively, which are equipped with a relatively large dimensional gap between the feature and label spaces.
\subsection{Influence of Parameters}
To examine the robustness of the proposed algorithms, we also analyze the influence of the trade-off parameters in the experiments, including $\lambda_{1}$ and $\lambda_{2}$ in BD-LDL as well as $\alpha$ and $\lambda$ in BD-LE. We run BD-LE with $\alpha$ and $\lambda$ ranging over [$10^{-4}$,$10^{3}$], and the parameters $\lambda_{1}$ and $\lambda_{2}$ involved in BD-LDL use the same candidate set. Owing to the page limit, we only show the experimental results on the Yeast-cold dataset, measured with Chebyshev and Cosine. For further evaluation, the results are visualized with different colors in Figs. 5 and 6. When measured with Chebyshev, a smaller value means better performance and is closer to blue; by contrast, with Cosine, a larger value indicates better performance and is closer to red.
It is clear from Fig. 5 that when $\lambda$ falls in the range [$10^{-4}$,$10^{-1}$], we can achieve relatively good recovery results with any value of $\alpha$. After conducting several experiments, we draw a conclusion for BD-LE, namely that the best performance is obtained when both $\alpha$ and $\lambda$ are about $10^{-2}$. Concerning the parameters of BD-LDL, when the value of $\lambda_{1}$ is selected within the range [$10^{-1}$,$10^{3}$], the color varies in an extremely steady way, which means that the performance is not sensitive to this hyperparameter in that particular range. In addition, we can also see from Fig. 6 that $\lambda_{1}$ has a stronger influence on the performance than $\lambda_{2}$ when the value of $\lambda_{2}$ is within the range [$10^{-4}$,$10^{-1}$].
\section{Conclusion}
Previous studies have shown that the LDL method can effectively solve label ambiguity problems whereas the LE method is able to recover label distributions from logical labels. To improve the performance of LDL and LE methods, we propose a new loss function that combines the mapping error with the reconstruction error to leverage the missing information caused by the dimensional gap between the input space and the output one. Sufficient experiments have been conducted to show that the proposed loss function is sufficiently generalized for application in both LDL and LE with improvement. In the future, we will explore if there exists an end-to-end way to recover the label distributions with the supervision of LDL training process.
\bibliographystyle{cas-model2-names}
\section{Introduction}
Convolutional codes were introduced by Elias in 1955 \cite{Eli}. These codes became popular when in 1967 Viterbi invented his decoding algorithm \cite{Vit} and Forney \cite{For} drew a \emph{code trellis} which made understanding the Viterbi algorithm easy and its maximum-likelihood nature obvious.
Convolutional codes are widely used in telecommunications, e.g., in Turbo codes and in the WiFi IEEE 802.11 standard, in cryptography, etc.
The most common versions are \emph{binary} convolutional codes;
\emph{non-binary} convolutional codes are used for higher orders of modulation~\cite{Ouahada} or data streaming \cite{Holzbaur}.
It is known that periodic time-varying convolutional codes improve the free distance and weight distribution over fixed codes, see, e.g., Mooser \cite{Moo} and Lee \cite{Lee}. This is a motivation to introduce the new class of linear \emph{skew convolutional codes} that can be represented as ordinary periodic non-binary convolutional codes. The new class is defined as a \emph{left} module over a skew field $\Q$ that will be introduced later. A \emph{right} module over $\Q$ defines another interesting class of \emph{nonlinear trellis codes}.
Our goal is to define and to give a first encounter with the introduced skew codes. The proofs and additional results about the skew convolutional codes as well as more examples and references can be found in the journal version \cite{SLGK2020}.
\section{Skew convolutional codes}
\subsection{Skew polynomials and fractions}
Consider a field $\F$ and an automorphism
$\theta$ of the field.
Later on, we will use the finite field $\F=\F_{q^m}$ with the Frobenius
automorphism
\begin{equation}\label{theta}
\theta(a)=a^{q}
\end{equation}
for all $a\in \F$. Denote by $\R = \F[D;\theta]$ the noncommutative
ring of skew polynomials in $D$ over $\F$ (with zero derivation)
\begin{equation*}
\resizebox{1\hsize}{!}{$\F[D;\theta]=\{a(D)= a_0 +a_1D +\dots + a_{n}D^{n} \ |\ a_i\in \F \mbox{ and } n\in \mathbb{N}\}.$}
\end{equation*}
The addition in $\R$ is as usual.
The multiplication is defined by the basic rule
$$Da=\theta(a)D$$
and is extended to all elements of $\R$ by
associativity and distributivity.
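To make the multiplication rule concrete, the following Python sketch (an illustration added here, assuming the usual integer encoding $0,1,2,3$ of $\{0,1,\alpha,\alpha^2\}\subset\F_4$ with $\alpha^2=\alpha+1$) multiplies skew polynomials over $\F_4$ under the Frobenius twist $\theta(a)=a^2$:

```python
# GF(4) = {0, 1, a, a^2} encoded as integers 0, 1, 2, 3, with a^2 = a + 1,
# so addition is bitwise XOR and multiplication uses discrete logs base a.
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def gf4_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def frob(x):
    """theta(x) = x^q = x^2, the Frobenius automorphism of GF(4)."""
    return gf4_mul(x, x)

def skew_mul(a, b):
    """Product a(D) * b(D) in F[D; theta]: (a_i D^i)(b_j D^j) = a_i theta^i(b_j) D^(i+j)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            tb = bj
            for _ in range(i):
                tb = frob(tb)              # push D^i past b_j: theta^i(b_j)
            c[i + j] ^= gf4_mul(ai, tb)    # GF(4) addition is XOR
    return c

alpha = 2
# D * alpha = theta(alpha) * D = alpha^2 * D, so the ring is noncommutative:
assert skew_mul([0, 1], [alpha]) == [0, 3]   # alpha^2 * D
assert skew_mul([alpha], [0, 1]) == [0, 2]   # alpha   * D
```

The two assertions confirm that $D\alpha=\alpha^2 D\neq \alpha D$, i.e., the ring is noncommutative.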
The ring $\R$ has a unique left skew field of fractions $\Q$, from which it inherits its linear algebra properties; see, e.g., \cite{clark:2012} for more details and \cite{SLGK2020} for some examples.
\subsection{Definition of skew convolutional codes}
Much of linear algebra can be generalized from vector spaces over a field to (either left or right) modules over the skew field $\Q$. Indeed, it is shown in \cite[Theorem 1.4]{clark:2012} that any left $\Q$-module $\C$ is free, i.e., it has a basis,
and any two bases of $\C$ have the same cardinality, which is the dimension of $\C$.
\begin{definition}[Skew convolutional code]\label{def:SkewCode}
A skew convolutional $[n,k]$ code $\C$ over the field $\F$ is a left sub-module of dimension $k$ of the free module $\Q^n$.
\end{definition}
The elements of the code $\C$ are called its \emph{codewords}. A codeword is an $n$-tuple over $\Q$, where every component is a fraction of skew polynomials from $\R$. The (Hamming) weight of a fraction is the number of nonzero coefficients in its expansion as a left skew Laurent series $\F((D))$ in increasing powers of $D$. The code $\C$ is $\F=\F_{q^m}$-linear. The \emph{free distance} $d_f$ of a skew convolutional code is defined to be the minimum nonzero weight over all codewords.
\subsection{Relations with ordinary convolutional codes}
\begin{lemma}
The class of skew convolutional codes includes ordinary time-invariant (fixed) convolutional codes.
\end{lemma}
Indeed, when $\theta = id$, a skew convolutional code coincides with an ordinary convolutional code.
A \emph{generator matrix} of a skew convolutional $[n,k]$ code $\C$ is a $k\times n$ matrix $G(D)$ over the skew field $\Q$ whose rows form a basis for the code $\C$. If the matrix $G(D)$ is over the ring $\R$ of skew polynomials, then $G(D)$ is called a \emph{polynomial generator matrix} for $\C$. Every skew code $\C$ has a polynomial generator matrix. Indeed, given a generator matrix $G(D)$ over the skew field of fractions $\Q$, a polynomial generator matrix can be obtained by left multiplying each row of $G(D)$ by the left least common multiple of the denominators in that row.
\section{Encoding}
From Definition~\ref{def:SkewCode}, every codeword $v(D)$ of a skew code $\C$, which is an $n$-tuple over the skew field of fractions $\Q$,
\begin{equation}\label{eq:v(d)}
v(D)=\left( v\up[1](D), \dots, v\up[n](D) \right), \ v\up[j](D) \in \Q \ \ \ \forall j,
\end{equation}
can be written as
\begin{equation}\label{eq:enc1}
v(D) = u(D) G(D),
\end{equation}
where $u(D)$ is a $k$-tuple ($k$-word) over $\Q$:
\begin{equation}\label{eq:u(d)}
u(D)=\left( u\up[1](D), \dots, u\up[k](D) \right), \ u\up[i](D) \in \Q \ \ \ \forall i
\end{equation}
and is called an information word, and $G(D)$ is a $k \times n$ generator matrix of $\C$. Relation (\ref{eq:enc1}) already provides an encoder. This encoder is an encoder of a block code over $\Q$ and the skew code $\C$ can be considered as the set of $n$-tuples $v(D)$ over $\Q$ that satisfy (\ref{eq:enc1}), i.e., we have $\C =\{v(D)\}$.
We write the components of $u(D)$ and $v(D)$ as skew Laurent series
\begin{equation}\label{eq:u^i}
u\up[i](D) = u\up[i]_0 + u\up[i]_1 D + u\up[i]_2 D^2 + \dots, \ i= 1,\dots, k
\end{equation}
and
\begin{equation}\label{eq:v^j}
v\up[j](D) = v\up[j]_0 + v\up[j]_1 D + v\up[j]_2 D^2 + \dots, \ j= 1,\dots, n.
\end{equation}
Actually, in a Laurent series the lower (time) index of the coefficients can be a negative integer, but in practice the information sequence $u\up[i](D)$ should be causal for every component $i$, that is, the coefficients $u\up[i]_t$ are zero for time $t < 0$. Causal information sequences should be encoded into causal code sequences; otherwise an encoder cannot be implemented, since it would have to output code symbols before it receives an information symbol.
Denote the block of information symbols that enters an encoder at time $t=0,1,\dots$ by
\begin{equation}\label{eq:u_t}
u_t = \left(u_t\up[1],u_t\up[2],\dots u_t\up[k] \right) \in \F^k.
\end{equation}
The block of code symbols that leaves the encoder at time $t=0,1,\dots$ is denoted by
\begin{equation}\label{eq:v_t}
v_t = \left(v_t\up[1],v_t\up[2],\dots v_t\up[n] \right) \in \F^n.
\end{equation}
Combining (\ref{eq:u(d)}), (\ref{eq:u^i}), and (\ref{eq:u_t}) we obtain the following information series with vector coefficients
\begin{equation}\label{eq:u(D)ser}
u(D) = u_0 + u_1 D + \dots + u_t D^t + \dots, \ u(D)\in \F((D))^k.
\end{equation}
Using (\ref{eq:v(d)}), (\ref{eq:v^j}), and (\ref{eq:v_t}) we write a codeword as a series
\begin{equation}\label{eq:v(D)ser}
v(D) = v_0 + v_1 D + \dots + v_t D^t + \dots, \ v(D)\in \F((D))^n.
\end{equation}
We can write a skew polynomial generator matrix $G(D) = \left(g_{ij}(D)\right) \in \R^{k\times n}$ as a skew polynomial with matrix coefficients:
\begin{equation}\label{eq:G(D)Ser}
G(D) = G_0 + G_1 D + G_2 D^2 + \dots + G_\mu D^\mu,
\end{equation}
where $\mu$ is the maximum degree of polynomials $g_{ij}(D)$. Matrices $G_i$ are $k\times n$ matrices over the field $\F$ and $\mu$ is called the generator matrix \emph{memory}.
From (\ref{eq:enc1}), (\ref{eq:u(D)ser}) and (\ref{eq:v(D)ser}) we obtain that $v_t$ is a coefficient in the product of skew series $u(D)$ and skew polynomial $G(D)$, which is the following \emph{skew convolution} (see Fig.~\ref{fig:encoder1})
\begin{equation}\label{eq:enc_conv}
v_t = u_t \theta^t (G_0) + u_{t-1} \theta^{t-1} (G_1) + \dots + u_{t-\mu} \theta^{t-\mu} (G_\mu),
\end{equation}
where $u_t = 0$ for $t<0$.
This encoding rule explains the title \emph{skew convolutional code}, which can be also seen as the set $\C = \{v(D)\}$ of series $v(D)$ defined in (\ref{eq:v(D)ser}).
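As a sanity check of the convolution (\ref{eq:enc_conv}), the sketch below (our illustrative Python implementation; the sample matrices $G_0=(1,\alpha)$ and $G_1=(\alpha,\alpha^2)$ are only test values) encodes block-wise and verifies that, since $\theta^2=\mathrm{id}$ on $\F_4$, delaying the input by two blocks simply delays the output by two blocks:

```python
# GF(4) encoded as 0, 1, 2, 3 for 0, 1, alpha, alpha^2 (alpha^2 = alpha + 1).
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def gf4_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def theta_pow(x, e):
    """theta^e on GF(4): theta(x) = x^2 and theta^2 = id, so only e mod 2 matters."""
    return gf4_mul(x, x) if e % 2 else x

def skew_encode(u, G, n):
    """v_t = sum_i u_{t-i} theta^{t-i}(G_i), with u_t = 0 for t < 0.

    u: list of k-tuples over GF(4); G: list of k x n matrices G_0, ..., G_mu.
    """
    mu = len(G) - 1
    v = []
    for t in range(len(u) + mu):
        vt = [0] * n
        for i, Gi in enumerate(G):
            if 0 <= t - i < len(u):
                for r, ur in enumerate(u[t - i]):
                    for c in range(n):
                        vt[c] ^= gf4_mul(ur, theta_pow(Gi[r][c], t - i))
        v.append(tuple(vt))
    return v

# Sample [2,1] code with G0 = (1, alpha), G1 = (alpha, alpha^2):
G = [[(1, 2)], [(2, 3)]]
u = [(1,), (2,), (3,)]
shifted = [(0,), (0,)] + u
# theta has order 2, so a delay of 2 input blocks just delays the output:
assert skew_encode(shifted, G, 2) == [(0, 0), (0, 0)] + skew_encode(u, G, 2)
```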
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.65, every node/.style={scale=.8}]
\node(w0) [draw,circle,minimum size=1.5cm,inner sep=0pt] at (0,2) {$\theta^{t-\mu}(G_\mu)$};
\node(w3) [draw,circle,minimum size=1.5cm,inner sep=0pt] at (3,2) {$\theta^{t-2}(G_2)$};
\node(w5) [draw,circle,minimum size=1.5cm,inner sep=0pt] at (5,2) {$\theta^{t-1}(G_1)$};
\node(w7) [draw,circle,minimum size=1.5cm,inner sep=0pt] at (7,2) {$\theta^{t}(G_0)$};
\draw (0,4) node[minimum size=1cm,draw](s0) {$u_{t-\mu}$};
\draw (3,4) node[minimum size=1cm,draw](s3) {$u_{t-2}$};
\draw (5,4) node[minimum size=1cm,draw](s5) {$u_{t-1}$};
\foreach \x in {0,3,5} {
\draw (\x,0) node(c\x) [circ]{$+$};
\draw [->,-latex'] (s\x) edge (w\x) (w\x) edge (c\x);
}
\filldraw (7, 4) node(dot) [circle,fill,inner sep=1pt]{};
\node(u) at (9,4) {$u_t, u_{t+1}, \dots$};
\draw [->,-latex'] (u) edge (s5);
\node(v) at (-2,0) {$v_0, \dots, v_t$};
\draw [->,-latex'] (c0) edge (v);
\node(udots) at (1.5,4) {$\dots$};
\node(vdots) at (1.5,0) {$\dots$};
\draw [->,-latex'] (s5) edge (s3) (s3) edge (udots) (udots) edge (s0);
\draw [->,-latex'] (c5) edge (c3) (c3) edge (vdots) (vdots) edge (c0);
\draw [->,-latex'] (dot) edge (w7) (w7) edge [topath](c5);
\end{tikzpicture}
\caption{Encoder of a skew convolutional code.}\label{fig:encoder1}
\end{figure}
At time $t$, the encoder receives an information block $u_t$ of $k$ symbols from $\F$ and puts out the code block $v_t$ of $n$ code symbols from $\F$ using (\ref{eq:enc_conv}); hence, the \emph{code rate} is $R=k/n$. The encoder (\ref{eq:enc_conv}) uses $u_t$ and also the $\mu$ previous information blocks $u_{t-1}, u_{t-2}, \dots, u_{t-\mu}$, which should be stored in the encoder's memory. This is why $\mu$ is also called the encoder \emph{memory}.
The coefficients $\theta^{t-i} (G_i)$, $i=0,1,\dots,\mu$, in the encoder (\ref{eq:enc_conv}) depend on the time $t$. Hence, the \emph{skew convolutional code is a time-varying ordinary convolutional code}. Denote
\begin{equation}\label{eq:period}
\tau = \min\left\{i>0\ :\ \theta^i(G_j) = G_j \ \ \forall j=0,1,\dots, \mu \right\}.
\end{equation}
For the field $\F =\F_{q^m}$ we have $\theta^m = \theta$, hence, the coefficients in (\ref{eq:enc_conv}) are periodic with period $\tau\le m$, and
the \emph{skew convolutional code is periodic with period $\tau \le m$}. If $\tau < m$ then coefficients of polynomials $g_{ij}(D)$ in the matrix $G(D)$ belong to a subfield $\F_{q^\tau}\subset\F_{q^m}$, and hence $\tau|m$.
The input of the encoder can also be written as an information sequence $u$ of $k$-blocks \eqref{eq:u_t} over $\F$
\vskip -10pt
\begin{equation}\label{eq:u}
u = u_0,\: u_1, \: u_2 , \dots, u_t, \dots \ ,
\end{equation}
and the output as a code sequence $v$ of $n$-blocks \eqref{eq:v_t} over $\F$
\begin{equation}\label{eq:v}
v = v_0,\: v_1, \: v_2 , \dots,\: v_t ,\: \dots \ .
\end{equation}
Then, the encoding rule (\ref{eq:enc_conv}) can be written in a scalar form
\begin{equation}\label{eq:enc_scalar}
v = u G
\end{equation}
with semi-infinite scalar generator block matrix $G=$
\begin{equation}\label{eq:G}
\left(
\begin{array}{ccccccc}
G_0 & G_1 & G_2 & \dots & G_\mu & & \\
& \theta(G_0) & \theta(G_1) & \dots & & \theta(G_\mu) & \\
& & \theta^2(G_0)& \dots & & \theta^2(G_{\mu-1}) &\theta^2(G_{\mu}) \\
& & & \dots \\
\end{array}
\right)
\end{equation}
Thus, a skew convolutional code can be equivalently represented in scalar form as the set $\C=\{v\}$ of sequences $v$ defined in (\ref{eq:v}) that satisfy (\ref{eq:enc_scalar}). By changing variables $G_i = \theta^i(\widetilde G_i)$ for $i=1,2,\dots,\mu$ we obtain the following result.
\begin{lemma}\label{lem:G_equiv}
A scalar generator matrix (\ref{eq:G}) can be written in the following equivalent form
\begin{equation}\label{eq:G_equiv}
G=\left(
\begin{array}{ccccccc}
\widetilde G_0& \theta(\widetilde G_1)& &\theta^\mu(\widetilde G_\mu) & & \\
& \theta(\widetilde G_0)&\vdots&\theta^\mu(\widetilde G_{\mu-1})&\theta^{\mu+1}(\widetilde G_\mu) & \\
& & &\vdots &\theta^{\mu+1}(\widetilde G_{\mu-1})& \vdots \\
& & &\theta^\mu(\widetilde G_0) &\vdots \\
& & & &\theta^{\mu+1}(\widetilde G_{0}) \\
\end{array}
\right).
\end{equation}
\end{lemma}
In case of identity automorphism, i.e., $\theta = id$, the scalar generator matrix \eqref{eq:G} of the skew code becomes a generator matrix of a fixed convolutional code \cite{JZ}.
For fixed convolutional codes, polynomial generator matrices with $G_0$ of full rank $k$ are
of particular interest \cite[Chapter~3]{JZ}. Skew convolutional codes enjoy the following nice property: if $G_0$ has full rank, then $\theta^i(G_0)$ has full rank as well for all $i=1,2,\dots$.
Thus, above we proved the following theorem.
\begin{theorem}
Given a field $\F=\F_{q^m}$ with automorphism $\theta$ in (\ref{theta}), any skew convolutional $[n,k]$ code $\C$ over $\F$ is equivalent to a periodic time-varying (ordinary) convolutional $[n,k]$ code over $\F$, with period $\tau\le m$ (\ref{eq:period}). If $G(D)$ is a skew polynomial generator matrix (\ref{eq:G(D)Ser}) of the code $\C$, then the scalar generator matrix $G$ of the time-varying code is given by (\ref{eq:G}) or (\ref{eq:G_equiv}).
\end{theorem}
\section{An example}\label{sec:example}
As an example, consider the $[2,1]$ skew convolutional code $\C$ over the field $\F_Q = \F_{q^m}=\F_{2^2}$ with automorphism $\theta(a) = a^q=a^2$, $a\in \F_{2^2}$. The field $\F_{2^2}$ consists of the elements $\{0,1,\alpha,\alpha^2\}$, where a primitive element $\alpha$ satisfies $\alpha^2+\alpha+1=0$, and we have the following relations
\medskip
\begin{tabular}{lc}
$\alpha^2 = \alpha +1$, \\
$\alpha^3 = 1$,\\
$\alpha^4 = \alpha$, \\
\end{tabular}
\ and \
$\forall i\in \mathbb Z \quad \theta^i =
\left\{
\begin{array}{cc}
\theta & \mbox{if $i$ is odd,} \\
\theta^2 & \mbox{if $i$ is even.}
\end{array}
\right.$
Let the generator matrix in polynomial form be
\begin{equation}\label{eq:G(D)example}
G(D)=(1+\alpha D, \ \alpha +\alpha^2 D) = G_0 + G_1 D,
\end{equation}
where $G_0 = (1,\alpha)$ and $G_1 = (\alpha,\alpha^2)$.
The generator matrix in scalar form (\ref{eq:G}) is
\begin{equation}\label{eq:G_ex}
G=\left(
\begin{array}{ccccccc}
1\ \alpha & \alpha \ \alpha^2 \\
& 1\ \alpha^2 & \alpha^2 \ \alpha \\
& & 1\ \alpha & \alpha \ \alpha^2 \\
& & & 1\ \alpha^2 & \alpha^2 \ \alpha \\
& & \dots \\
\end{array}
\right).
\end{equation}
Here $\mu=1$, hence it is a \emph{unit memory} code. The encoding rule is $v= uG$, or from (\ref{eq:enc_conv}) it is
\begin{equation}\label{eq:enc_rule_ex}
v_t = u_t \theta^t (G_0) + u_{t-1} \theta^{t-1} (G_1),\ \mbox{for }t=0,1,\dots \ .
\end{equation}
From this example we can see that the class of skew convolutional codes \emph{extends} the class of fixed codes. Indeed, the codeword for the information sequence $u = 1,0,0,1$ is $v = (1,\alpha),(\alpha,\alpha^2), (0,0), (1,\alpha^2),(\alpha^2,\alpha)$, which cannot be obtained by any fixed $[2,1]$ memory $\mu=1$ code.
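This codeword can be reproduced mechanically from (\ref{eq:enc_rule_ex}); the Python sketch below (an illustration, with $\F_4$ encoded as integers $0,1,2,3$ for $0,1,\alpha,\alpha^2$) does so:

```python
# GF(4) encoded as 0, 1, 2, 3 for 0, 1, alpha, alpha^2 (alpha^2 = alpha + 1).
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def gf4_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def theta_pow(x, e):
    # theta(x) = x^2 and theta^2 = id on GF(4), so only e mod 2 matters
    return gf4_mul(x, x) if e % 2 else x

def encode(u):
    """v_t = u_t theta^t(G0) + u_{t-1} theta^{t-1}(G1) with u_{-1} = 0."""
    G0, G1 = (1, 2), (2, 3)              # (1, alpha) and (alpha, alpha^2)
    v = []
    for t in range(len(u) + 1):          # one extra block flushes the memory
        ut = u[t] if t < len(u) else 0
        prev = u[t - 1] if t >= 1 else 0
        v.append(tuple(
            gf4_mul(ut, theta_pow(g0, t)) ^ gf4_mul(prev, theta_pow(g1, t - 1))
            for g0, g1 in zip(G0, G1)))
    return v

# u = 1, 0, 0, 1  gives  (1,a), (a,a^2), (0,0), (1,a^2), (a^2,a) as in the text:
assert encode([1, 0, 0, 1]) == [(1, 2), (2, 3), (0, 0), (1, 3), (3, 2)]
```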
The encoder (in controller canonical form \cite{JZ}) with generator matrix (\ref{eq:G(D)example}) is shown in Fig.~\ref{fig:enc_even}(a) for even $t$ and in Fig.~\ref{fig:enc_odd}(b) for odd $t$. The encoder has one shift register, since $k=1$. There is one $Q$-ary memory element in the shift register, shown as a rectangle, where $Q= q^m =4$ is the order of the field. We need only one memory element since the maximum degree of the entries of $G(D)$, which consists of a single row in our example, is $1$. A large circle means multiplication by the coefficient shown inside.
\begin{figure}[htp]
\subfloat[][even $t$]
\begin{tikzpicture}[scale=0.8, every node/.style={scale=0.9}]
\draw (0,0) node[minimum size=1cm,draw](s) {$u_{t-1}$};
\draw (0,2.5) node(c1) [circ]{$+$};
\draw (0,-2.5) node(c2) [circ]{$+$};
\node(w1) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,1.5) {$\alpha^2$};
\node(w2) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,-1.5) {$\alpha$};
\node(w3) [draw,circle,minimum size=.7cm,inner sep=0pt] at (1.5,-1.5) {$\alpha$};
\filldraw (1.5, 0) node(dot) [circle,fill,inner sep=1pt]{};
\node(u) at (2.5,0) {$u_t$};
\node(v1) at (-2,2.5) {$v^{(1)}_t$};
\node(v2) at (-2,-2.5) {$v^{(2)}_t$};
\draw [->,-latex'] (u) edge (s) (dot) edge [topath](c1) (c1) edge (v1);
\draw [->,-latex'] (dot) edge (w3) (w3) edge [topath](c2) (c2) edge (v2);
\draw [->,-latex'] (s) edge (w1) (w1) edge (c1);
\draw [->,-latex'] (s) edge (w2) (w2) edge (c2);
\end{tikzpicture}
}
\hfill
\subfloat[][odd $t$]
\begin{tikzpicture}[scale=0.8, every node/.style={scale=0.9}]
\draw (0,0) node[minimum size=1cm,draw](s) {$u_{t-1}$};
\draw (0,2.5) node(c1) [circ]{$+$};
\draw (0,-2.5) node(c2) [circ]{$+$};
\node(w1) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,1.5) {$\alpha$};
\node(w2) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,-1.5) {$\alpha^2$};
\node(w3) [draw,circle,minimum size=.7cm,inner sep=0pt] at (1.5,-1.5) {$\alpha^2$};
\filldraw (1.5, 0) node(dot) [circle,fill,inner sep=1pt]{};
\node(u) at (2.5,0) {$u_t$};
\node(v1) at (-2,2.5) {$v^{(1)}_t$};
\node(v2) at (-2,-2.5) {$v^{(2)}_t$};
\draw [->,-latex'] (u) edge (s) (dot) edge [topath](c1) (c1) edge (v1);
\draw [->,-latex'] (dot) edge (w3) (w3) edge [topath](c2) (c2) edge (v2);
\draw [->,-latex'] (s) edge (w1) (w1) edge (c1);
\draw [->,-latex'] (s) edge (w2) (w2) edge (c2);
\end{tikzpicture}
}
\caption{Encoder of the skew code $\C$.}\label{fig:enc_even}\label{fig:enc_odd}
\end{figure}
In the general case of a $k\times n$ matrix $G(D)$, we define the degree $\nu_i$ of its $i$-th row as the
maximum degree of its components. The \emph{external degree} $\nu$ of $G(D)$ is the sum of its row degrees. The encoder (in controller canonical form) of $G(D)$ over $\F_Q$ has $k$ shift registers, the $i$-th register has $\nu_i$ memory elements, and total number of $Q$-ary memory elements in the encoder is $\nu$.
For our example, the minimal code trellis, which has the minimum number of states, is shown in Fig.~\ref{fig:code_trellis}. The trellis consists of sections periodically repeated with period $\tau=m=2$. The trellis has $Q^\nu =4^1=4$ states labeled by elements of the field $\F_Q$. In the $t$-th section, for time $t=0,1,\dots$, every edge connects the states $u_{t-1}$ and $u_t$ and is labeled by the code block $v_t$ computed according to the encoding rule (\ref{eq:enc_rule_ex}) as follows
\begin{equation}\label{eq:trellis_labels}
v_t =
\left\{
\begin{array}{ll}
u_{t-1} (\alpha,\alpha^2) + u_t (1,\alpha^2) & \mbox{ for odd } t, \\
u_{t-1} (\alpha^2,\alpha) + u_t (1,\alpha) & \mbox{ for even } t.
\end{array}
\right.
\end{equation}
We assume that $u_{-1}=0$, i.e., the initial state of the shift register is $0$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.1, every node/.style={scale=0.8}]
\node at (1,-.5){$t=0$};
\node at (3,-.5){$t=1$};
\node at (5,-.5){$t=2$};
\foreach \x in {0,...,3} {
\foreach \y in {0,...,3} {
\filldraw (2*\x,\y) node [circle,fill,inner sep=2pt]{};
}
}
\node[left] at (0,3){$0$};
\node[left] at (0,2){$1$};
\node[left] at (0,1){$\alpha$};
\node[left] at (0,0){$\alpha^2$};
\foreach \x in {0,2} {
\draw (2*\x,3) -- (2*\x+2,3) node [above,very near end] {$00$};
\draw (2*\x,3) -- (2*\x+2,2) node [above,sloped,very near end] {$1 \alpha$};
\draw (2*\x,3) -- (2*\x+2,1) node [above,sloped,very near end] {$\alpha \alpha^2$};
\draw (2*\x,3) -- (2*\x+2,0) node [above,sloped,very near end] {$\alpha^2 1$};
}
\foreach \x in {1} {
\draw (2*\x,3) -- (2*\x+2,3) node [above,very near end] {$00$};
\draw (2*\x,3) -- (2*\x+2,2) node [above,sloped,very near end] {$1 \alpha^2$};
\draw (2*\x,3) -- (2*\x+2,1) node [above,sloped,very near end] {$\alpha 1$};
\draw (2*\x,3) -- (2*\x+2,0) node [above,sloped,very near end] {$\alpha^2 \alpha$};
}
\foreach \x in {1,...,2} {
\foreach \y in {2,1,0} {
\draw (2*\x,\y) -- (2*\x+2,3) ;
\draw (2*\x,\y) -- (2*\x+2,2) ;
\draw (2*\x,\y) -- (2*\x+2,1) ;
\draw (2*\x,\y) -- (2*\x+2,0) ;
}
}
\draw (2*1,0) -- (2*1+2,0) node [below,very near end] {$\alpha 0$};
\draw (2*2,0) -- (2*2+2,0) node [below,very near end] {$1 0$};
\end{tikzpicture}
\caption{Time-varying minimal trellis of the skew code $\C$.}\label{fig:code_trellis}
\end{figure}
There are two important characteristics of a convolutional code: the free
distance $d_f$ and the slope $\sigma$ of increase of the active burst distance, defined below as in \cite{JZ}.
The weight of a branch labeled by a vector $v_t$ is defined to be the Hamming weight $w(v_t)$ of $v_t$. The weight of a path is the sum of its branch weights. A path in the trellis that diverges from the zero state, that does not use edges of weight $0$ from a zero state to another zero state, and that returns to the zero state after $\ell$ edges is called a loop of length $\ell$ or an \emph{$\ell$-loop}.
The \emph{$\ell$-th order active burst distance} $d_\ell^{\text{b}}$ is defined to be the minimum weight of $\ell$-loops in the minimal code trellis. The slope is defined as $\sigma=\lim_{\ell \rightarrow \infty} d_\ell^{\text{b}}/\ell$. The free distance is $d_f = \min_{\ell} d_\ell^{\text{b}}$.
\begin{lemma}
The skew convolutional code $\C$ defined by $G(D)$ in (\ref{eq:G(D)example}) has the active burst distance
$ d_\ell^{\text{b}} = \ell + 2$ for $\ell = 2,3,\dots$, the slope of the active distance is $\sigma = 1$, and free distance is $d_f = 4$.
\end{lemma}
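The lemma can be verified by a direct dynamic program over the period-$2$ trellis with the edge labels (\ref{eq:trellis_labels}); the Python sketch below (our own verification aid, not part of the construction, with $\F_4$ encoded as integers $0,1,2,3$) computes the minimum weight of $\ell$-loops:

```python
# GF(4) encoded as 0, 1, 2, 3 for 0, 1, alpha, alpha^2 (alpha^2 = alpha + 1).
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def gf4_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def edge_weight(t, s, u):
    """Hamming weight of the label v_t on the edge (state s = u_{t-1}) -> (state u = u_t)."""
    if t % 2:   # odd t:  v_t = s (alpha, alpha^2) + u (1, alpha^2)
        v = (gf4_mul(s, 2) ^ u, gf4_mul(s, 3) ^ gf4_mul(u, 3))
    else:       # even t: v_t = s (alpha^2, alpha) + u (1, alpha)
        v = (gf4_mul(s, 3) ^ u, gf4_mul(s, 2) ^ gf4_mul(u, 2))
    return (v[0] != 0) + (v[1] != 0)

def burst_distance(ell):
    """Minimum weight of an ell-loop: zero state -> nonzero states -> zero state."""
    best = float("inf")
    for p in (0, 1):                                        # loop may start at even or odd time
        w = {s: edge_weight(p, 0, s) for s in (1, 2, 3)}    # diverging first edge
        for step in range(1, ell - 1):                      # stay in nonzero states
            w = {u: min(w[s] + edge_weight(p + step, s, u) for s in (1, 2, 3))
                 for u in (1, 2, 3)}
        best = min(best, min(w[s] + edge_weight(p + ell - 1, s, 0) for s in (1, 2, 3)))
    return best

assert all(burst_distance(l) == l + 2 for l in range(2, 9))   # d_l^b = l + 2, slope 1
assert min(burst_distance(l) for l in range(2, 9)) == 4       # free distance d_f = 4
```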
General upper bounds on the free distance and the slope are given in \cite{PollaraAbdel}, which in our case of the unit-memory $[2,1]$ code $\C$ become
$ d_f \le 2n - k + 1 = 4, \mbox{ and } \sigma \le n-k =1.$
The skew code $\C$ defined by (\ref{eq:G(D)example}) attains the upper bounds on both $d_f$ and the slope $\sigma$; hence, the \emph{code is optimal}.
A generator matrix $G(D)$ of a skew convolutional code (and the corresponding encoder) is called \emph{catastrophic} if there exists an information sequence $u(D)$ of infinite weight such that the code sequence $v(D) = u(D)G(D)$ has finite weight. The generator matrix $G(D)$ in (\ref{eq:G(D)example}) of the skew convolutional code $\C$ with $\theta = (\cdot)^q$ is non-catastrophic, since for $G(D)$ the slope is $\sigma >0$. Note that in the case of an ordinary convolutional code $\C'$, i.e., for $\theta = id$, the generator matrix (\ref{eq:G(D)example}) is a catastrophic generator matrix of the repetition $[2,1]$ \emph{block} code with distance $d=2$.
A skew convolutional code, represented as a $\tau$-periodic $[n,k]$ code, can be considered as $[\tau n,\tau k]$ fixed code by \emph{$\tau$-blocking}, described in \cite{McE:1998}. The $[2,1]$ skew code $\C$ from our example has period $\tau=m=2$ and can be written as $[4,2]$ \emph{fixed} code with generator matrix
\begin{equation}\label{eq:G(D)_block}
G=\left(
\begin{array}{cccc}
1 & \alpha & \alpha & \alpha^2 \\
\alpha^2 D & \alpha D & 1 & \alpha \\
\end{array}
\right).
\end{equation}
In this way, known methods to analyze fixed convolutional codes can be applied to skew convolutional codes.
\section{Dual codes}
Duality for skew convolutional codes can be defined in different ways.
First, consider a skew convolutional code $\C$ over $\F$ in scalar form as a set of sequences as in ($\ref{eq:v}$). For two sequences $v$ and $v'$, where at least one of them is finite, define the scalar product $(v,v')$ as the sum of products of corresponding components, where missing components are assumed to be zero. We say that the sequences are orthogonal if $(v,v') = 0$.
\begin{definition}\label{def:Cperp}
The dual code $\C^\perp$ to a skew convolutional $[n,k]$ code $\C$ is an $[n,n-k]$ skew convolutional code $\C^\perp$ such that $(v,v^\perp)=0$ for all finite length words $v\in \C$ and $v^\perp \in \C^\perp$.
\end{definition}
Another way to define orthogonality is, for example, as follows. Consider two $n$-words $v(D)$ and $v^\perp(D)$ over $\Q^n$. We say that $v^\perp(D)$ is left-orthogonal to $v(D)$ if $v^\perp(D)v(D)=0$ and right-orthogonal if $v(D)v^\perp(D)=0$. A left dual code to a skew convolutional code $\C$ can be defined as
\begin{equation*}
\C^\perp_\text{left} = \{v^\perp \in \Q^n : v^\perp(D)v(D)=0 \mbox{ for all } v\in \C\}.
\end{equation*}
The dual code $\C^\perp_\text{left}$ is a left submodule of $\Q^n$, hence it is a skew convolutional code.
We consider below dual codes according to Definition~\ref{def:Cperp}, since it is more interesting for practical applications. Given a code $\C$ with generator matrix $G$, we show how to find a parity check matrix $H$ such that $G H^T=0$.
Let a skew $[n,k]$ code $\C$ of memory $\mu$ be defined by a polynomial generator matrix $G(D)$ in (\ref{eq:G(D)Ser}), which corresponds to the scalar generator matrix $G$ in (\ref{eq:G}). For the dual $[n,n-k]$ code $\C^\perp$ we write a transposed parity check matrix $H^T$ of memory $\mu^\perp$, similar to ordinary convolutional codes, as
\begin{equation}\label{eq:HT}
H^T=\left(
\begin{array}{clccccc}
H^T_0& H^T_1 &\dots & H^T_{\mu^\perp}& & \\
& \theta(H^T_0) &\dots & &\theta(H^T_{\mu^\perp}) & \\
& &\dots \\
\end{array}
\right),
\end{equation}
where $\text{rank} (H_0) = n-k$.
Similar to \cite{JZ}, we call the matrix $H^T$ the \emph{syndrome former} and write it in polynomial form as
\begin{equation}\label{eq:HT(D)}
H^T(D) = H^T_0 + H^T_1 D + \dots + H^T_{\mu^\perp} D^{\mu^\perp}.
\end{equation}
Then, we have the following \emph{parity check matrix} of the causal code $\C$ with the generator matrix (\ref{eq:G_equiv})
\begin{equation}\label{eq:H}
H=\left(
\begin{array}{clccccc}
H_0 \\
H_1 & \theta(H_0) \\
\vdots & \vdots & \ddots \\
H_{\mu^\perp} & \theta(H_{\mu^\perp-1}) & \ddots \\
& \theta(H_{\mu^\perp}) & \ddots \\
\end{array}
\right),
\end{equation}
which in the case of $\theta = id$ coincides with the check matrix of an ordinary fixed convolutional code.
From Definition~\ref{def:Cperp} we have that $vH^T=0$ for all sequences $v\in \C$ over $\F$. On the other hand, from (\ref{eq:enc1}) we have that every codeword $v(D)\in \C$ can be written as $v(D)=u(D)G(D)$. Hence, if we find an $n\times (n-k)$ matrix $H^T(D)$ over $\R$ of full rank such that $G(D)H^T(D) = 0$, then every codeword satisfies $v(D)H^T(D) = u(D)G(D)H^T(D)=0$ and vice versa, i.e., if $v(D)H^T(D)=0$ then $v(D)$ is a codeword of $\C$.
\begin{theorem}
With the above notations, $G(D)H^T(D)=0$ if and only if $GH^T = 0$.
\end{theorem}
We continue with the example given in Section~\ref{sec:example}. Let $H(D) = H_0 + H_1 D$. Using $G_0$ and $G_1$ from (\ref{eq:G(D)example}) and the condition $G(D)H^T(D)=0$, we obtain $H_0 = (\alpha, 1)$ and $H_1 = (1,\alpha)$. Hence, $H(D) = (\alpha+D, 1+\alpha D)$ and
\begin{equation*}\label{eq:HTex1}
H=\left(
\begin{array}{ccccccc}
\alpha \ 1 \\
1 \ \alpha & \alpha^2 \ 1 \\
& 1 \ \alpha^2 & \alpha \ 1 & \ddots \\
& & 1 \ \alpha \\
\end{array}
\right).
\end{equation*}
Using $H$ one can draw the minimal trellis of the dual code $\C^\perp$ and decode the original code symbol-wise with the method in, e.g., \cite{Berkmann}. For high-rate codes, this approach has a lower computational complexity than the BCJR algorithm.
\section{Skew trellis codes}
In this section, by $\Q$ we denote the skew field of \emph{right} fractions of the ring $\R$ and we consider \emph{right} $\Q$-modules $\C$. Every module $\C$ is free \cite[Theorem 1.4]{clark:2012}, i.e., it has a basis, and any two bases of $\C$ have the same cardinality, that is the dimension of $\C$. By $\F((D))$ we denote the skew field of \emph{right} skew Laurent series.
\begin{definition}[Skew trellis code]\label{def:SkewTrCode}
A skew trellis $[n,k]$ code $\C$ over the field $\F$ is a right sub-module of dimension $k$ of the free module $\Q^n$.
\end{definition}
Every codeword $v(D)$ given by \eqref{eq:v(d)} can be written as
\begin{equation}\label{eq:EncTr}
v^T(D) = G^T(D) u^T(D),
\end{equation}
where $u(D)$ is an information word defined in \eqref{eq:u(d)} and $G(D)\in \Q^{k \times n}$ is a generator matrix of $\C$. Equivalently, one can consider a polynomial matrix $G(D)\in \R^{k \times n}$. Using \eqref{eq:u(D)ser} - \eqref{eq:G(D)Ser}, we rewrite the encoding rule \eqref{eq:EncTr} for sequences $u$ \eqref{eq:u} and $v$ \eqref{eq:v} over the field $\F_{q^m}$ as
\begin{equation}\label{eq:EncTr2}
v_t = u_t G_0 + \theta(u_{t-1}) G_1 + \dots + \theta^\mu (u_{t-\mu}) G_\mu.
\end{equation}
The corresponding encoders are shown in Figs.~\ref{fig:encoderTr} and \ref{fig5} as finite state machines. This allows us to obtain a code trellis and apply known trellis-based decoding algorithms \cite{Vit}, \cite{BCJR}. The encoders can be obtained from those used for the ordinary convolutional code generated by $G(D)$ by replacing the ordinary shift registers with the skew shift registers introduced in \cite{SJB}.
The case $\theta = id$ gives ordinary convolutional codes. For $\theta \ne id$ it follows from \eqref{eq:EncTr2} that the skew trellis code $\C=\{v\}$ as a set of sequences $v$ over $\F_{q^m}$ is $\F_{q^m}$-nonlinear since so is the function $\theta(\cdot)$, but the code $\C$ is $\F_{q}$-linear.
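The encoding rule \eqref{eq:EncTr2} is a skew shift register: each delayed input is passed through $\theta$ once per delay step. A minimal sketch for $k = 1$ over $\mathrm{GF}(4)$ with $\theta$ the Frobenius map $a \mapsto a^2$; the taps $G_0$, $G_1$ used below are illustrative placeholders, not the generator matrix \eqref{eq:G(D)example}, and the 2-bit integer encoding of GF(4) elements ($2 = \alpha$, $3 = \alpha^2$) is our own convention.

```python
# GF(4) arithmetic, elements as 2-bit ints (0, 1, 2 = alpha, 3 = alpha^2)
def gf4_mul(a, b):
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:            # reduce mod x^2 + x + 1
        p ^= 0b111
    return p

def frob(a):
    # theta(a) = a^2, the Frobenius automorphism of GF(4) over GF(2)
    return gf4_mul(a, a)

def skew_encode(u, G, mu):
    """v_t = u_t G_0 + theta(u_{t-1}) G_1 + ... + theta^mu(u_{t-mu}) G_mu,
    for a k = 1 input sequence u and taps G = [G_0, ..., G_mu],
    each tap a length-n row over GF(4)."""
    n = len(G[0])
    v = []
    for t in range(len(u) + mu):
        block = [0] * n
        for i in range(mu + 1):
            s = t - i
            if 0 <= s < len(u):
                a = u[s]
                for _ in range(i):   # apply theta^i
                    a = frob(a)
                for j in range(n):
                    block[j] ^= gf4_mul(a, G[i][j])   # add in char. 2
        v.append(block)
    return v
```

Scaling the input by $\alpha$ does not scale the output by $\alpha$ (delayed terms pick up $\theta(\alpha) = \alpha^2$ instead), which is exactly the $\F_{q^m}$-nonlinearity noted above.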
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.75, every node/.style={scale=0.85}]
\node(w0) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,2.5) {$G_\mu$};
\node(w4) [draw,circle,minimum size=.7cm,inner sep=0pt] at (4,2.5) {$G_1$};
\node(w6) [draw,circle,minimum size=.7cm,inner sep=0pt] at (6.5,2.5) {$G_0$};
\draw (0,4) node[minimum size=1cm,draw](s0) {$\theta^\mu(u_{t-\mu})$};
\draw (4,4) node[minimum size=1cm,draw](s4) {$\theta(u_{t-1})$};
\node(th1) [draw,circle,minimum size=.5cm,inner sep=0pt] at (1.5,4) {$\theta$};
\node(th2) [draw,circle,minimum size=.5cm,inner sep=0pt] at (5.5,4) {$\theta$};
\foreach \x in {0,4} {
\draw (\x,1.4) node(c\x) [circ]{$+$};
\draw [->,-latex'] (s\x) edge (w\x) (w\x) edge (c\x);
}
\filldraw (6.5, 4) node(dot) [circle,fill,inner sep=1pt]{};
\node(u) at (8,4) {$u_t, \dots$};
\draw [->,-latex'] (u) edge (th2);
\node(v) at (-1.8,1.4) {$v_0, \dots, v_t$};
\draw [->,-latex'] (c0) edge (v);
\node(udots) at (2.5,4) {$\dots$};
\node(vdots) at (2.5,1.4) {$\dots$};
\draw [->,-latex'] (th2) edge (s4) (s4) edge (udots) (udots) edge (th1) (th1) edge (s0);
\draw [->,-latex'] (dot) edge (w6) (w6) edge [topath](c4) (c4) edge (vdots) (vdots) edge (c0);
\end{tikzpicture}
\caption{Encoder of a skew trellis code.}\label{fig:encoderTr}
\end{figure}
\begin{figure}[htp]
\centering
\begin{tikzpicture}[scale=0.8, every node/.style={scale=0.9}]
\draw (0,0) node[minimum size=1cm,draw](s) {$\theta(u_{t-1})$};
\node(theta) [draw,circle,minimum size=.7cm,inner sep=0pt] at (1.5,0) {$\theta$};
\draw (0,2.5) node(c1) [circ]{$+$};
\draw (0,-2.5) node(c2) [circ]{$+$};
\node(w1) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,1.5) {$\alpha$};
\node(w2) [draw,circle,minimum size=.7cm,inner sep=0pt] at (0,-1.5) {$\alpha^2$};
\node(w3) [draw,circle,minimum size=.7cm,inner sep=0pt] at (2.5,-1.5) {$\alpha$};
\filldraw (2.5, 0) node(dot) [circle,fill,inner sep=1pt]{};
\node(u) at (3.5,0) {$u_t$};
\node(v1) at (-2,2.5) {$v^{(1)}_t$};
\node(v2) at (-2,-2.5) {$v^{(2)}_t$};
\draw [->,-latex'] (u) edge (theta) (theta) edge (s) (dot) edge [topath](c1) (c1) edge (v1);
\draw [->,-latex'] (dot) edge (w3) (w3) edge [topath](c2) (c2) edge (v2);
\draw [->,-latex'] (s) edge (w1) (w1) edge (c1);
\draw [->,-latex'] (s) edge (w2) (w2) edge (c2);
\end{tikzpicture}
\caption{Encoder of the skew trellis code generated by \eqref{eq:G(D)example}.}\label{fig5}
\vspace*{-0.2cm}
\end{figure}
\section{Conclusion}
We defined two new classes of skew codes over a finite field. The first class consists of linear skew convolutional codes, which are equivalent to time-varying periodic convolutional codes but have as compact a description as fixed convolutional codes. The second class consists of nonlinear skew trellis codes.
\bibliographystyle{IEEEtran}
\section{Introduction}
Brown dwarfs are stellar-like objects with a mass too low for stable nuclear fusion. During the first Gyr of a brown dwarf's life, the luminosity decreases by a factor of $\sim$100, and 1 -- 73 Jupiter-mass brown dwarfs cool to
effective temperatures ($T_{\rm eff}$) of $\sim$200 -- 2000~K respectively (Baraffe et al. 2003, Saumon \& Marley 2008). As photometric sky surveys are executed at longer wavelengths and with larger mirrors, fainter and cooler brown dwarfs are identified. Most recently, the {\it Wide-field Infrared Survey Explorer} ({\it WISE}; Wright et al. 2010) revealed a population with $250 \lesssim T_{\rm eff}$~K $\lesssim 500$, and these have been classified as Y dwarfs (Cushing et al. 2011, Kirkpatrick et al. 2012).
The Y dwarfs are an extension of the T-type brown dwarfs which typically have $500 \lesssim T_{\rm eff}$~K $\lesssim 1300$ (e.g. Golimowski et al. 2004; Leggett et al. 2009, 2012).
Significantly, even for the predominantly isolated brown dwarfs in the solar neighborhood, the fundamental parameters mass and age can be estimated if models can be fit to observations and $T_{\rm eff}$ and surface gravity $g$ constrained. Evolutionary models show that $g$ constrains mass, because the radii of brown dwarfs do not change significantly after about 200 Myr and are within 25\% of a Jupiter radius (Burrows et al. 1997). Also, the cooling curves as a function of mass are well understood, so that $T_{\rm eff}$ combined with $g$ constrains age (Saumon \& Marley 2008).
Models of brown dwarf atmospheres have advanced greatly in recent years. Opacities have been updated for CH$_4$, H$_2$ and NH$_3$ (Yurchenko, Barber \& Tennyson 2011, Saumon et al. 2012, Yurchenko \& Tennyson 2014). Models which include non-equilibrium chemistry driven by vertical gas transport are available (Tremblin et al. 2015, hereafter T15), as are models which include sedimentation by various species i.e. clouds (Morley et al. 2012, 2014, hereafter M12 and M14).
The models are accurate enough that the physical parameters of the brown dwarf atmospheres can be constrained by comparing the observed output energy of the brown dwarf, in the form of a flux-calibrated spectral energy distribution (SED), to synthetic colors and spectra. This paper enhances the number and quality of Y dwarf SEDs in order to improve our understanding of this cold population. We do this by presenting new photometry and spectra, and improved trigonometric parallaxes.
We present new near-infrared spectra for three Y dwarfs obtained with the Gemini near-infrared spectrograph (GNIRS; Elias et al. 2006) and the Gemini imager and spectrometer FLAMINGOS-2 (Eikenberry et al. 2004).
We also present new infrared photometry of late-type T and Y dwarfs, obtained with the Gemini observatory near infrared imager (NIRI; Hodapp et al. 2003), and previously unpublished near- and mid-infrared photometry of late-type T and Y dwarfs taken from data archives. The near-infrared archive photometry is either on the
Mauna Kea Observatories (MKO) system (Tokunaga \& Vacca 2005) or on the {\em Hubble Space Telescope (HST)} WFC3 system.
We derive transformations between these two systems and use these to produce a sample of late-T and Y-type dwarfs with single-system photometry. We also present improved parallaxes for four Y dwarfs using {\em Spitzer} images.
Our new data set is large enough that trends and outliers can be identified. We compare color-color and color-magnitude plots, and near-infrared spectra, to available models. The Y dwarf atmospheric parameters $T_{\rm eff}$, $g$ and metallicity are constrained, and mass and age estimated. We also compare models to the photometric SED of the coolest known brown dwarf,
WISE J085510.83$-$071442.5 (Luhman 2014, hereafter W0855) and constrain the properties of this extreme example of the known Y class.
In \S 2 we set the context of this work by illustrating how the shape of synthetic Y dwarf SEDs
vary as model parameters are changed. We show the regions of the spectrum sampled by the filters used in this work, and demonstrate the connection between luminosity, temperature, mass and age as given by evolutionary models. In \S 3 we describe the model atmospheres used in this work. \S 4 presents the new GNIRS and FLAMINGOS-2 spectra and the new NIRI photometry, and \S 5 presents the previously unpublished photometry extracted from data archives; \S 6 gives transformations between the MKO and WFC3 photometric systems. New parallaxes and proper motions are given in \S 7. \S 8 compares models to the photometric data set, which allows us to estimate some of the Y dwarf properties, and also allows us to select a preferred model type for a comparison to near-infrared spectra, which we present in \S 9. \S 10 combines our results to give final estimates of atmospheric and evolutionary parameters for the sample. Our conclusions are given in \S 11.
\section{Spectral Energy Distributions and Filter Bandpasses}
This work focusses on observations and models of Y dwarfs. Figure 1 shows synthetic spectra for a Y dwarf with $T_{\rm eff} = 400$~K, generated from T15 models. Flux emerges through windows between strong absorption bands of primarily CH$_4$, H$_2$O and NH$_3$ (e.g. M14, their Figure 7). The four panels demonstrate the effect of varying the atmospheric parameters. Near- and mid-infrared filter bandpasses used in this work are also shown.
Figure 1 shows that for a Y dwarf with $T_{\rm eff} \sim 400$~K changes in $T_{\rm eff}$ of 25~K have large, factor of $\sim$2, effects on the absolute brightness of the near-infrared spectrum at all of $YJH$; the flux in the [4.5] bandpass changes by 15\%.
An increase in metallicity or a decrease in surface gravity $g$ changes the slope of the near-infrared spectrum,
brightening the $Y$ and $J$ flux while having only a small effect on $H$. An increase in metallicity [m/H] of 0.2 dex increases the $YJ$ flux by 20 -- 30\% and decreases the [4.5] flux by 40\%.
An increase in gravity $g$ cm$\,$s$^{-2}$ of 0.5 dex decreases the $YJ$ and [4.5] flux by about 15\%.
Finally, an increase in the eddy diffusion coefficient $K_{\rm zz}\,$cm$^2\,$s$^{-1}$ (the chemical mixing parameter, see \S 3) from $10^6$ to $10^8$ increases the $YJK$ flux by 15\% while decreasing the [4.5] flux by 50\%. The parameter $\gamma$ is discussed in \S 3.
The near-infrared spectra of Y dwarfs are therefore expected to be sensitive to all the atmospheric parameters, and especially sensitive to $T_{\rm eff}$. The shape and brightness of the near-infrared spectrum combined with the [4.5] flux can usefully constrain a Y dwarf's atmospheric parameters. We test this later, in \S 9.
The shape of the $Y$-band flux peak appears sensitive to gravity in Figure 1. Not shown in Figure 1 (but demonstrated later in the fits of synthetic to observed spectra), a decrease in metallicity has a similar effect.
Leggett et al. (2015, their Figure 5) show that the 1~$\mu$m flux from a 400~K Y dwarf emerges between H$_2$O and CH$_4$ absorption bands, in a region where NH$_3$ and pressure-induced H$_2$ opacity is important. H$_2$ opacity is sensitive to both gravity and metallicity (e.g. Liu, Leggett \& Chiu 2007) and the change in shape of the $Y$-band flux peak is likely due to changes in the H$_2$ opacity.
Figure 1 shows that much of the flux from a 400~K Y dwarf is emitted in the {\em Spitzer} [4.5] bandpass, which is similar to the {\it WISE} W2 bandpass. In fact the T15 models show that, for $300 \leq T_{\rm eff}$~K $\leq 500$ and $4.0 \leq \log g \leq 4.5$, 45--54\% of the total flux is emitted through this bandpass. The percentage of the total flux emitted at $\lambda < 2.5~\mu$m decreases from 20\% to $< 1$\% as $T_{\rm eff}$ decreases from 500~K to 300~K, with the remaining 30--50\% emitted at $\lambda > 5~\mu$m.
Because half the energy of Y dwarfs is emitted in the [4.5] bandpass, the value of [4.5] is an important constraint on bolometric luminosity and therefore $T_{\rm eff}$. Figure 2 shows $T_{\rm eff}$ as a function of $M_{[4.5]}$ in the left panel, and $T_{\rm eff}$ as a function of $\log g$ in the right panel. The sequences in the left panel are from the various atmospheric models used in the work, which are described below. The sequences in the right panel are taken from the evolutionary models of Saumon \& Marley (2008).
The cold Y dwarfs are intrinsically faint as they have radii similar to that of Jupiter. This low luminosity limits
detection to nearby sources only, and all the known Y dwarfs with measured parallaxes are within 20~pc of the Sun (see \S 7). We assume therefore that the ages of the Y dwarfs should be typical of the solar neighborhood and we limit the evolutionary sequences in Figure 2 to ages of 0.4 -- 10 Gyr. For reference, the Galactic thin disk is estimated to have an age of $4.3 \pm 2.6$ Gyr (e.g. Bensby et al. 2005). The right panel of Figure 2 shows that for our sample we expect a range in $\log g$ of 3.8--4.8, and a range in mass of 3 -- 23 Jupiter masses.
\section{Model Atmospheres}
In this work we use cloud-free model atmospheres from Saumon et al. (2012, hereafter S12) and T15. We also use models which include homogeneous layers of chloride and sulphide clouds from M12, and patchy water cloud models from M14. We do not use PHOENIX models which have not been validated for $T_{\rm eff} < 400$~K
\footnote{http://www.perso.ens-lyon.fr/france.allard/} or the Hubeny \& Burrows (2007) models which do not include the recent improvements to the CH$_4$, H$_2$ and NH$_3$ line lists.
Models for surface gravities given by $\log g = 4.0$, 4.5 and 4.8 were used, with a small number of $\log g = 3.8$ models for the lowest temperatures, as appropriate for this sample (see Figure 2). The T15 models include non-solar metallicities of [m/H]$ = -0.5$ and [m/H]$ = +0.2$, and a few models were also generated with [m/H]$ = -0.2$. This range in metallicities covers the expected range for stars in the Galactic thin disk (e.g. Bensby et al. 2005). The T15 models include an updated CH$_4$ line list (Yurchenko \& Tennyson 2014); however, they do not include opacities of PH$_3$ or rain-out processes for condensates, which the S12, M12 and M14 models do. For this work, a small number of T15 models with an adjusted adiabat were generated as described below.
The T15 models include non-equilibrium chemistry driven by vertical gas transport and parameterized with an eddy diffusion coefficient $K_{\rm zz}\,$cm$^2\,$s$^{-1}$. The S12, M12 and M14 models are in chemical equilibrium. Vertical gas transport brings long-lived molecular species such as CO and N$_2$ up into the brown dwarf photosphere. Mixing occurs in the convective zones of the atmosphere and may occur in the nominally quiescent radiative zone via processes such as gravity waves (Freytag et al. 2010) or fingering instability (T15). If mixing occurs faster than local chemical reactions can return the species to local equilibrium, then abundances can be different by orders of magnitude from those expected for a gas in equilibrium (e.g. Noll, Geballe \& Marley 1997, Saumon et al. 2000, Golimowski et al. 2004, Leggett et al. 2007, Visscher \& Moses 2011, Zahnle \& Marley 2014). The left panel of Figure 2 shows that, for a given $T_{\rm eff}$ and for $T_{\rm eff} \gtrsim 450$~K, the introduction of mixing increases $M_{[4.5]}$. This is due to the dredge up of CO which absorbs at 4.4--5.0~$\mu$m (e.g. M14, their Figure 7). For the coldest objects the CO lies very deep in the atmosphere and is not expected to significantly impact the 4.5~$\mu$m flux. While CO absorption is enhanced by mixing, NH$_3$ absorption is diminished because of the dredge up of N$_2$. In Figure 1 the black lines in the top and bottom panels are model spectra calculated for the same temperature, gravity and metallicity, but with different values of $K_{\rm zz}$. The increased mixing in the bottom panel results in stronger CO absorption at 4.5~$\mu$m and weaker NH$_3$ absorption in the near-infrared and at $\lambda \sim 10~\mu$m.
Various species condense in these cold atmospheres, forming cloud decks. For T dwarfs with $500 \lesssim T_{\rm eff}$~K $\lesssim 1300$ the condensates consist of chlorides and sulphides (e.g. Tsuji et al. 1996, Ackerman \& Marley 2001, Helling et al. 2001, Burrows et al. 2003, Knapp et al. 2004, Saumon \& Marley 2008, Stephens et al. 2009, Marley et al. 2012, M12, Radigan et al. 2012, Faherty et al. 2014). As
$T_{\rm eff}$ decreases further, the next species to condense are calculated to be H$_2$O for $T_{\rm eff} \approx$ 350~K and NH$_3$ for $T_{\rm eff} \approx$ 200~K (Burrows et al. 2003, M14). Comparison of the cloudy and cloud-free sequences in Figure 2 shows that the clouds are not expected to impact the 4.5~$\mu$m flux until temperatures are low enough for water clouds to form. These water clouds are expected to scatter light in the near-infrared and absorb at $\lambda \gtrsim 3~\mu$m (e.g. M14, their Figure 2). For the warmer Y dwarfs with $T_{\rm eff} \approx$ 400~K,
the chloride and sulphide clouds lie deep in the atmosphere but they may nevertheless impact light emitted in particularly clear opacity windows, such as the $Y$ and $J$ bands. Such clouds may be the cause of the (tentative) variability seen at $Y$ and $J$ for the Y0 WISEA J173835.52$+$273258.8 (hereafter W1738), which also exhibits low-level variability at [4.5] (Leggett et al. 2016b).
Tremblin et al. (2016) show that brown dwarf atmospheres can be subject to thermo-chemical instabilities which could induce turbulent energy transport. This can change the temperature gradient in the atmosphere which in turn can produce the observed brightening at $J$ across the L- to T-type spectral boundary, without the need for cloud disruption (e.g. Marley, Saumon \& Goldblatt 2010). Tremblin et al. model the L to T transition by increasing the adiabatic index
which leads to warmer temperatures in the deep atmosphere and cooler temperatures in the upper regions. We have similarly experimented with modified adiabats for this work, i.e. using pressure-temperature profiles not described by adiabatic cooling of an ideal gas. For an ideal gas, adiabatic cooling is described by $P^{(1-\gamma)}T^{\gamma} =$ constant. $\gamma$ is the ratio of specific heats at constant pressure and volume. For hydrogen gas $\gamma = 1.4$. Model spectra were generated with $\gamma =$ 1.2, 1.3 and 1.35. We found that $\gamma =$ 1.35 produced spectra indistinguishable from adiabatic cooling, and the observations presented later do not support a $\gamma$ value as low as 1.2. Hence we only explore models with $\gamma =$ 1.3 here.
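Along an ideal-gas adiabat $P^{(1-\gamma)}T^{\gamma} = \text{const}$, the temperature follows $T = T_0\,(P/P_0)^{(\gamma-1)/\gamma}$, so the logarithmic gradient $\mathrm{d}\ln T/\mathrm{d}\ln P = (\gamma-1)/\gamma$ drops from $\approx 0.286$ at $\gamma = 1.4$ to $\approx 0.231$ at $\gamma = 1.3$. A minimal numerical sketch; the anchor point $(P_0, T_0)$ is arbitrary and not from the paper:

```python
def adiabat_T(P, P0, T0, gamma):
    """Temperature at pressure P along the adiabat
    P^(1-gamma) T^gamma = const, anchored at (P0, T0)."""
    return T0 * (P / P0) ** ((gamma - 1.0) / gamma)

def log_gradient(gamma):
    """d ln T / d ln P along the adiabat."""
    return (gamma - 1.0) / gamma
```

Anchored at the same upper-atmosphere point, the $\gamma = 1.3$ profile is cooler at depth than the standard $\gamma = 1.4$ one; anchored at depth, it is warmer above.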
Brown dwarf atmospheres are turbulent. It is likely that vertical mixing, cloud formation, thermal variations and non-adiabatic energy transport are all important. Full three-dimensional hydrodynamic models are needed. In the meantime, we compare available models to new data we present in the next two sections. Although no model is perfect, we do find that the models which include vertical mixing can be used to estimate the properties of Y dwarfs.
\section{New Gemini Observations}
\subsection{GNIRS Near-Infrared Spectrum for WISEA J041022.75$+$150247.9}
WISEA J041022.75$+$150247.9 (hereafter W0410) is a Y0 brown dwarf that was discovered in the {\it WISE} database by Cushing et al. (2011).
Cushing et al. (2014) present a spectrum of W0410 which covers the wavelength range 1.07--1.70~$\mu$m at a resolution $R \approx 130$.
The shape of the spectrum at 0.98--1.07~$\mu$m is sensitive to gravity and metallicity (\S 2), and for this reason we obtained a
$0.95 \leq \lambda~\mu$m $\leq 2.5$ spectrum using GNIRS at Gemini North on 2016 December 24 and 25, via program GN-2016B-Q-46. GNIRS was used in cross-dispersed mode with the 32 l/mm grating, the short camera and the 0$\farcs$675 slit, giving $R \approx 700$. A central wavelength of 1.65~$\mu$m resulted in wavelength coverage for orders 3 to 7 of
1.87--2.53~$\mu$m, 1.40--1.90~$\mu$m, 1.12--1.52~$\mu$m, 0.94--1.27~$\mu$m, 0.80--1.08~$\mu$m.
Flatfield and arc images were obtained using lamps on the telescope, and pinhole images were obtained to trace the location of the cross-dispersed spectra.
A total of 18 300$\,$s frames were obtained on W0410 on December 24 and 10 300$\,$s frames on December 25. Both nights were clear, with seeing
around 0$\farcs$8 on the first night and around 1$\farcs$0 on the second. GNIRS suffered from electronic noise on the second night, and we used the data from December 24 only. An ``ABBA'' offset pattern was used with offsets of $3\arcsec$ along the slit. Bright stars were observed before and after W0410 on December 24 to remove telluric absorption features and produce an instrument response function;
the F2V HD 19208 was observed before and the F3V HD 33140 was observed after. Template spectra for these spectral types were obtained from the spectral library of Rayner et al. (2009).
The data were reduced in the standard way using routines supplied in the IRAF Gemini package.
The final flux calibration of the W0410 spectrum was achieved using the observed $YJHK$ photometry. Figure 3 shows the new spectrum, and the lower resolution Cushing et al. (2014) spectrum for reference. We compare the spectrum to models later, in \S 9.
\subsection{FLAMINGOS-2 Near-Infrared Spectra for \\ WISE J071322.55$-$291751.9 and WISEA J114156.67$-$332635.5}
WISE J071322.55$-$291751.9 (hereafter W0713) is a Y0 brown dwarf that was discovered in the {\it WISE} database by Kirkpatrick et al. (2012). Kirkpatrick et al. present a spectrum of W0713 which covers the $J$-band only. WISEA J114156.67$-$332635.5 (hereafter W1141) was discovered in the {\it WISE} database and presented by Tinney et al. (2014). No near-infrared spectrum has been published for this object, but a type of Y0 was estimated by Tinney et al. based on absolute magnitudes and colors. We obtained near-infrared spectra for these two objects using FLAMINGOS-2 at Gemini South on 2017 February 3 and 7, via program GS-2017A-FT-2. The $JH$ grism was used with the 4-pixel (0$\farcs$72) slit, giving $R \approx 600$. The wavelength coverage was 0.98--1.80~$\mu$m. We compare the spectra to models later, in \S 9.
W1141 was observed on 2017 February 3 in thin cirrus with seeing 0$\farcs$8. An ``ABBA'' offset pattern was used with offsets of $10\arcsec$ along the slit. A total of 18 300$\,$s frames were obtained, as well as flat field and arc images using lamps on the telescope. The F5V star HD 110285 was observed immediately following W1141. A template spectrum for F5V was obtained from the spectral library of Rayner et al. (2009). The bright star was used to remove telluric features and provide an instrument response function; the final flux calibration was achieved using the $J$ photometry given by Tinney et al. (2014). The data were reduced in the standard way using routines supplied in the IRAF Gemini package. Figure 3 shows our spectrum for this object, compared to the low-resolution Cushing et al. (2011) $JH$ spectrum of W1738, which currently defines the Y0 spectral standard. The spectral shapes are almost identical, and we confirm the Y0 spectral type estimated photometrically by Tinney et al. (2014).
W0713 was observed on 2017 February 7 in clear skies with seeing 0$\farcs$7. An ``ABBA'' offset pattern was used with offsets of $10\arcsec$ along the slit. 12 300$\,$s frames were obtained, as well as flat field and arc images using lamps on the telescope. The A3V star HD 43119 was observed immediately before W0713. A template spectrum for A3V was obtained from the Pickles (1998) spectral atlas. The bright star was used to remove telluric features and provide an instrument response function; the final flux calibration was achieved using the $J$ photometry given by Leggett et al. (2015). This was consistent with the more uncertain $Y$ magnitude, but inconsistent with $H$ by 5~$\sigma$. Although variability cannot be excluded, the model fit shown later does a reasonable job of reproducing the entire spectrum, and we believe the discrepancy is
due to the lower signal to noise in the $H$ spectral region. Figure 3 shows our W0713 spectrum together with the Kirkpatrick et al. (2012) $J$-band spectrum of this object, which has been flux-calibrated by the $J$ photometry. The Kirkpatrick et al. spectrum appears noisier, but is consistent with our data.
\subsection{NIRI $Y$, $CH_4$(short) and $M^{\prime}$ for WISE J085510.83$-$071442.5}
W0855 was discovered as a high proper motion object by Luhman (2014) in {\it WISE} images. W0855 is the intrinsically faintest and coolest object known outside of the solar system at the time of writing, with effective temperature $T_{\rm eff} \approx$ 250~K and $L/L_{\odot} \approx 5\times10^{-8}$ (based on Stefan's law and radii given by evolutionary models). W0855 is 2.2~pc away and has a high proper motion of $-8\farcs10$ yr$^{-1}$ in Right Ascension and $+0\farcs70$ yr$^{-1}$ in Declination (Luhman \& Esplin 2016, hereafter LE16).
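The quoted luminosity follows from Stefan's law, $L = 4\pi R^2 \sigma T_{\rm eff}^4$. A quick consistency check, assuming nominal constants and exactly one Jupiter radius (evolutionary-model radii for W0855 are somewhat larger):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_JUP = 6.99e7     # Jupiter radius, m (nominal)
L_SUN = 3.828e26   # solar luminosity, W

def luminosity_ratio(T_eff, R=R_JUP):
    """L / L_sun for a blackbody of radius R and temperature T_eff."""
    return 4.0 * math.pi * R * R * SIGMA * T_eff ** 4 / L_SUN
```

With $T_{\rm eff} = 250$~K this gives $L/L_{\odot} \approx 3.6\times10^{-8}$, consistent with the quoted $\approx 5\times10^{-8}$ once the slightly larger model radius is used.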
We obtained photometry for W0855 on Gemini North using NIRI at $Y$ and $CH_4$(short) via program GN-2016A-Q-50, and at $M^{\prime}$ via program GN-2016A-FT-10. The photometry is on the MKO system; however, there is some variation in the $Y$ filter bandpass between the cameras used on Maunakea, and $Y_{\rm NIRI} - Y_{\rm MKO} = 0.17 \pm 0.03$ magnitudes for late-type T and Y dwarfs (Liu et al. 2012). At the time of our observations (2015 December to 2016 March), the only published near-infrared detection of W0855 was a $J$-band measurement (Faherty et al. 2014). The $Y$ and $CH_4$(short) observations were obtained in order to provide a near-infrared SED for this source. The $M^{\prime}$ observation was obtained to probe the degree of mixing in the atmosphere, as described in \S 8.1.
All nights were photometric, and the seeing was $0\farcs5$ -- $0\farcs8$. Photometric standards FS 14, FS 19 and FS 126 were used for the $Y$ and $CH_4$(short) observations, and HD 77281 and LHS 292 were used for the $M^{\prime}$ observations (Leggett et al. 2003, 2006; UKIRT
online catalogs\footnote{http://www.ukirt.hawaii.edu/astronomy/calib/phot\_cal/}). The photometric standard FS 20 with a type of DA3 was also observed in the $CH_4$(short) filter. This standard has $J - H = -0.03$ magnitudes and $H - K = -0.05$ magnitudes, i.e. very close to a Vega energy distribution across the $H$ bandpass. FS 20 confirmed that NIRI $CH_4$(short) zeropoints could be determined by adopting $CH_4 = H$ for all the standards observed, and we found the zeropoint to be 22.95 $\pm$ 0.03 magnitudes. W0855 and the calibrators were offset slightly between exposures using a 5- or 9-position telescope dither pattern. Atmospheric extinction corrections between W0855 and the nearby calibrators were not applied as these are much smaller than the measurement uncertainty (Leggett et al. 2003, 2006). The measurement uncertainties were estimated from the sky variance and the variation in the aperture corrections.
The $Y$ and $CH_4$(short) data were obtained using the NIRI f/6 mode, with a pixel size of $0\farcs12$ and a field of view (FOV) of 120''. Individual exposures were 120~s at $Y$ and 60~s at $CH_4$(short). $Y$ data were obtained at airmasses ranging from 1.1 to 1.9 on 2016 February 16, 17, 18 and 23. $CH_4$(short) data were obtained at airmasses ranging from 1.1 to 1.5 on 2015 December 25 and 26, 2016 January 19, 2016 February 1, 2016 March 12 and 13. The total on-source integration time was 7.1 hours at $Y$ and 14.7 hours at $CH_4$(short). Calibration lamps on the telescope were used for flat fielding and the data were reduced in the standard way using routines supplied in the IRAF Gemini package. Images from different nights were combined after shifting the coordinates to allow for the high proper motion of the target. The shift per day was $-0.191$ pixels in $x$ and $+0.017$ pixels in $y$. Aperture photometry with annular skies was carried out, using an aperture diameter of $1\farcs2$ and using point sources in the image to determine the aperture corrections.
The $M^{\prime}$ data were obtained using the NIRI f/32 mode, with a pixel size of $0\farcs02$ and a FOV of 22''; individual exposures were 24~s composed of 40 coadded 0.6~s frames. Data were obtained
at an airmass of 1.1 to 1.4 on 2016 March 11. The total on-source integration time was 1.6 hours at $M^{\prime}$. Flat fields were generated from sky images created by masking sources in the science data. Although the exposure time was short, the background signal through this $5~\mu$m filter is high and can vary quickly. Because of this, after flat fielding the data we subtracted adjacent frames and then shifted the subtracted frames to align the calibrator or W0855 before combining the images. As the data were taken on one night no correction had to be made for W0855's proper motion. Aperture photometry with annular skies was carried out, using an aperture diameter of $0\farcs8$. Aperture corrections were determined from the photometric standards.
W0855 was not detected in the $Y$ filter, but was detected in $CH_4$(short) and $M^{\prime}$. Figure 4 shows two $CH_4$(short) images. One uses data taken in December 2015 and January 2016, and the other uses data taken in March 2016. The North-West motion of W0855 is apparent. Figure 4 also shows
the stacked $M^{\prime}$ and $Y$ image. The measured magnitudes or detection limits are given in Table 1.
Our measurement of $Y > 24.5$ magnitudes is consistent with the Beamin et al. (2014) measurement of $Y > 24.4$ magnitudes.
Our measurement of $CH_4$(short) $= 23.38 \pm 0.20$ magnitudes is consistent with the 23.2 $\pm$ 0.2 magnitudes measured by LE16 and the 23.22 $\pm$ 0.35 magnitudes determined by Zapatero Osorio et al. (2016) from analysis of the LE16 data.
\subsection{NIRI $M^{\prime}$ for CFBDS J005910.90$-$011401.3, 2MASSI J0415195$-$093506, UGPS J072227.51$-$054031.2, 2MASSI J0727182$+$171001 and \\ WISEPC J205628.90$+$145953.3}
$M^{\prime}$ data for a sample of T and Y dwarfs were obtained via program GN-2016B-Q-46 using NIRI on Gemini North in the same configuration as described in \S 4.3. The $M^{\prime}$ observations were obtained to probe the degree of mixing in brown dwarf atmospheres, as described in \S 8.1.
All nights were photometric with seeing varying night to night from $0\farcs4$ to $1\farcs1$. The data were reduced in the same way as the W0855 $M^{\prime}$ data. The results are given in Table 1.
CFBDS J005910.90$-$011401.3 is a T8.5 dwarf discovered by Delorme et al. (2008). The T dwarf was observed on 2016 July 18 and 2016 October 11. The second data set was taken at a lower airmass, and we used the data from October 11 only. 207 24-second dithered images were obtained for an on-source time of 1.4 hours. The airmass range was 1.07--1.26. Also observed on 2016 October 11 was UGPS J072227.51$-$054031.2, a T9 dwarf discovered by Lucas et al. (2010). Thirty-six 24-second dithered images were obtained for an on-source time of 14 minutes, at an airmass of 1.2. The photometric standards HD 1160, HD 22686 and HD 40335 were used as $M^{\prime}$ calibrators on 2016 October 11.
2MASSI J0415195$-$093506 is a T8 dwarf discovered by Burgasser et al. (2002). The T dwarf was observed on 2016 October 22. 181 24-second dithered images were obtained for an on-source time of 1.2 hours. The airmass range was 1.18--1.30. The photometric standard HD 22686 was used for calibration.
2MASSI J0727182$+$171001 is a T7 dwarf discovered by Burgasser et al. (2002). The T dwarf was observed on 2017 January 9. 153 24-second dithered images were obtained for an on-source time of 1.0 hours. The airmass range was 1.0--1.1. The photometric standards HD 40335 and HD 44612 were used for calibration.
WISEPC J205628.90$+$145953.3 (hereafter W2056) is a Y0 brown dwarf that was discovered in the {\it WISE} database by Cushing et al. (2011). W2056 was observed on
2016 July 13. 244 24-second dithered images were obtained for an on-source time of 1.6 hours. The airmass range was 1.0--1.5. The photometric standards G 22-18 and HD 201941 were used as calibrators. The offsets were such that one corner of the stacked image contained the 2MASS star 20562847$+$1500092. This star has 2MASS magnitudes $J=13.45\pm0.03$, $H=13.18\pm0.04$ and $K_s=13.20\pm0.03$ magnitudes. We measure $M^{\prime}=13.21\pm0.15$ magnitudes for this star. The near-infrared colors suggest a spectral type of G0 (Covey et al. 2007), and the measured $K_s - M^{\prime}=-0.01\pm0.15$ magnitudes is consistent with the color expected for the spectral type (e.g. Davenport et al. 2014).
\subsection{Revised FLAMINGOS-2 $H$ for WISEA J064723.24$-$623235.4}
We obtained $H$ data for the Y1 WISEA J064723.24$-$623235.4 (Kirkpatrick et al. 2013, hereafter W0647) using FLAMINGOS-2 on Gemini South, which were presented in Leggett et al. (2015). Leggett et al. (2015) give a lower limit for $H$ only. We have examined in more detail the reduced image at the location of the source (provided by the contemporaneous $J$ detection) and obtained a 3.5 $\sigma$ measurement, which is given in Table 1.
\section{Photometry from Image Archives}
We searched various archives for late-type T and Y dwarf images in order to determine transformations between photometric systems and complement our data set. The archived images were downloaded in calibrated form, and we carried out aperture photometry using annular sky regions. Aperture corrections were derived using bright sources in the field of the target. This section gives the resulting, previously unpublished, photometry.
We have also updated our near-infrared photometry for the T8 dwarf ULAS J123828.51$+$095351.3 using data release 10 of the UKIRT Infrared Deep Sky Survey (UKIDSS), processed by the Cambridge Astronomy Survey Unit (CASU) and available via the WFCAM Science Archive WSA\footnote{http://www.wsa.roe.ac.uk}. We added two late-type T dwarfs which have UKIDSS and WISE data and which were identified by Skrzypek, Warren \& Faherty (2016): J232035.29$+$144829.8 (T7) and J025409.58$+$022358.7 (T8).
\subsection{{\em HST} WFC3}
We used the {\em HST} Mikulski Archive for Space Telescopes (MAST) to search for Wide Field Camera 3 (WFC3) data for late-type T and Y dwarfs taken with the F105W, F125W, F127M or F160W filters. These filters were selected as they more closely map onto the ground-based $Y$, $J$ and $H$ bandpasses (Figure 1), compared to, for example, the F110W and F140W filters, which have also been used for brown dwarf studies. The ``drz'' files were used, which have been processed through the {\tt calwf3} pipeline and geometrically corrected using {\tt AstroDrizzle}. The photometric zeropoints for each filter were taken from the WFC3 handbook\footnote{http://www.stsci.edu/hst/wfc3/phot\_zp\_lbn}. Previously unpublished WFC3 photometry for five T dwarfs and one Y dwarf was obtained, and is presented in Table 2.
\subsection{ESO VLT HAWK-I}
Images obtained with the European Southern Observatory's (ESO) High Acuity Wide field K-band Imager (HAWK-I), are published as reduced data via the ESO science archive facility. The data were processed by CASU which produced astrometrically and photometrically calibrated stacked and tiled images. The integration time was obtained from the ``DIT'' and ``NDIT'' entries in the FITS headers. Previously unpublished $J$ and $H$ photometry on the MKO system was obtained for four T dwarfs and is presented in Table 2.
\subsection{{\em Spitzer}}
The NASA/IPAC Infrared Science Archive (IRSA) was used to search the mid-infrared {\em Spitzer}
[3.6], [4.5], [5.8] and [8.0] IRAC images. The post basic calibrated data (PBCD) were downloaded and photometry obtained using the Vega fluxes given in the IRAC instrument handbook\footnote{http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/17/}. Previously unpublished IRAC photometry for four T dwarfs, two confirmed Y dwarfs and one unconfirmed Y dwarf is presented in Table 3.
We also re-extracted [3.6] photometry for W2056 from six images taken in 2012, 2013 and 2014, in order to more accurately remove artefacts caused by a nearby bright star. This result is also given in Table 3.
\subsection{{\em WISE}}
IRSA was used to examine the ALLWISE calibrated images taken in the W1 (3.4~$\mu$m) and W3 (12~$\mu$m) filters where the photometry was not listed in the WISE catalog. Zeropoints were provided in the data FITS headers. Previously unpublished W1 photometry is given for two Y dwarfs, and W3 for
two T and three Y dwarfs, in Table 3.
In the process of examining the {\em WISE} image data for the known Y dwarfs we also determined that the W1 photometry for the Y dwarf WISE J154151.65$-$225024.9
was compromised by nearby sources and this measurement was removed from our photometric database.
\section{ Synthesized Photometry and Transformations between \\ {\em HST} F105W, F125W, F127M, F160W; $CH_4$(short); MKO $Y$, $J$, $H$ }
Schneider et al. (2015) present observed WFC3 F105W and/or F125W photometry for five late-T and eleven Y dwarfs. Beichman et al. (2014) present F105W and F125W photometry for an additional Y dwarf. Schneider et al. also present grism spectroscopy from which they calculate synthetic F105W and F125W photometry. In order to transform {\em HST} photometry and our $CH_4$(short) photometry on to the MKO system, we calculated the following colors (or a subset) from available near-infrared spectra: $Y -$ F105W, $J -$ F125W, $J -$ F127M, $ H -$ F160W and $H - CH_4$(short). Table 4 lists these newly synthesized colors for five T dwarfs and six Y dwarfs, using spectra from this work, Kirkpatrick et al. (2012), Knapp et al. (2004), Leggett et al. (2014, 2016a), Lucas et al. (2010), Schneider et al. (2015), and Warren et al. (2007).
Table 4 also gives synthetic MKO-system colors for three T dwarfs and two Y dwarfs using spectra from this work, Kirkpatrick et al. (2011) and Pinfield et al. (2012, 2014). These five objects were selected as additions to our data set because they are either very late-type, or have been classified as peculiar and so potentially sample unusual regions of color-color space.
We have used the synthesized colors and measured MKO and {\em HST} photometry where they both exist, to determine a set of transformations between the two systems as a function of type, for late-T and Y dwarfs. The photometry is taken from Leggett et al. (2015) and references therein, Schneider et al. (2015) and references therein, and this work. We have included colors from T15 spectra for $T_{\rm eff} = 400$, 300, 250 and 200~K, with $\log g = 4.5$ and $\log K_{\rm zz} = 6$, to constrain the transformations at very late spectral types. For the purposes of the fit, we adopt spectral types of Y0.5, Y1.5, Y2 and Y2.5 for the colors generated by models with $T_{\rm eff} = 400$, 300, 250 and 200~K, respectively.
We explored the sensitivity of the synthetic colors to the atmospheric parameters using $T_{\rm eff} = 300$~K models with $\log g = 4.0$ and 4.5, [m/H] $=$ 0.0 and $-0.5$ and $\log K_{\rm zz} = 6$ and 8. We found a dispersion in $Y -$ F105W, $J -$ F125W, $J -$ F127M, $ H -$ F160W and $H - CH_4$(short) of 0.01 -- 0.09 magnitudes. We adopt a
$\pm 0.1$ magnitude uncertainty in these model colors.
We performed weighted least-squares quadratic fits to the data. Figure 5 shows the data and the fits, and Table 5 gives the fit parameters for the transformations. Based on the scatter seen in Figure 5 we estimate the uncertainty in the transformations for the Y dwarfs to be $\pm$0.10 magnitudes.
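A weighted least-squares quadratic fit of this kind can be sketched as follows. The spectral types, colors and uncertainties below are hypothetical placeholders (not the Table 5 data), and the {\tt numpy.polyfit} weights are taken as $1/\sigma$ so that the fit minimizes the usual chi-squared.

```python
# Sketch of a weighted least-squares quadratic fit of color versus
# numeric spectral type (e.g. T8 = 8, Y0 = 10). All data values are
# illustrative placeholders, not measurements from this work.
import numpy as np

sp_type = np.array([8.0, 9.0, 10.0, 11.0, 12.0])    # numeric spectral type
color   = np.array([0.10, 0.18, 0.35, 0.62, 1.00])  # synthetic color, mag
sigma   = np.array([0.05, 0.05, 0.10, 0.10, 0.10])  # per-point uncertainty

# np.polyfit multiplies the residuals by w, so w = 1/sigma gives
# chi-squared weighting of the fit.
c2, c1, c0 = np.polyfit(sp_type, color, deg=2, w=1.0 / sigma)
transform = np.polyval([c2, c1, c0], sp_type)  # fitted transformation
```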
We have used the relationships given in Table 5 to estimate $Y$, $J$ and $H$ magnitudes (or a subset) on the MKO system for seven Y dwarfs with {\em HST} and $CH_4$(short) photometry. The results are given in Table 6. We have expanded wavelength coverage for five of these Y dwarfs by adopting the synthetic colors derived by Schneider et al. (2015) from their spectra. These values are also given in Table 6. Table 6 also gives MKO photometry for W1141, determined using the synthesized colors given in Table 4.
WD0806$-$661B and W0855 have estimates of $J$, and $J$ and $H$, respectively, determined in two ways (Table 6). The two values of $J$ agree within the quoted uncertainties for both Y dwarfs (although only marginally so for W0855). The two values of $H$ for W0855 differ by $1.8 \sigma$. We use a weighted average of the two measurements in later analysis, and estimate the uncertainty in the average to be the larger of the uncertainty in the mean, or half the difference between the two values.
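The combination rule described above can be sketched as follows; the input magnitudes are illustrative placeholders, not the WD0806$-$661B or W0855 values.

```python
# Sketch of the adopted combination rule: a weighted mean of two
# independent magnitude estimates, with the adopted uncertainty taken
# as the larger of the uncertainty in the mean and half the difference
# between the two values. Inputs are illustrative placeholders.

def combine(m1, e1, m2, e2):
    w1, w2 = 1.0 / e1**2, 1.0 / e2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    err_in_mean = (w1 + w2) ** -0.5
    err = max(err_in_mean, abs(m1 - m2) / 2.0)
    return mean, err

m, e = combine(25.00, 0.10, 25.30, 0.15)
```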
\section{New Astrometry, and the Luminosity of WISE J014656.66$+$423410.0AB}
LE16 refined the parallax and proper motion for W0855 using astrometry measured with multi-epoch
images from {\it Spitzer} and {\it HST}. They also presented new parallaxes for three Y dwarfs whose previous measurements had large uncertainties: WISE J035000.32$-$565830.2 (hereafter W0350), WISE J082507.35$+$280548.5 (hereafter W0825) and WISE J120604.38$+$840110.6 (hereafter W1206). Those measurements were based on the {\it Spitzer} IRAC images of these objects that were publicly available and the distortion corrections for IRAC
from Esplin \& Luhman (2016). We have measured new proper motions and parallaxes
in the same way for three additional Y dwarfs whose published measurements are uncertain: WISE J053516.80$-$750024.9 (hereafter W0535), WISEPC J121756.91$+$162640.2AB (hereafter W1217AB) and WISEPC J140518.40$+$553421.5 (hereafter W1405).
We have also determined an improved parallax for WISE J014656.66$+$423410.0AB (hereafter W0146AB) which was classified as a Y0 in the discovery paper (Kirkpatrick et al. 2012), and reclassified as a T9 when it was resolved into a binary with components of T9 and Y0 (Dupuy, Liu \& Leggett 2015).
In Table 7, we have compiled the parallaxes for W0350, W0825 and W1206 from
LE16, the proper motions for those objects that were derived by LE16 but
were not presented, and our new parallaxes and proper motions for W0146AB, W0535, W1217AB and
W1405.
The uncertainties in the new parallax measurements are significantly smaller than those of the previously published values --- 5 -- 12 mas compared to 14 -- 80 mas. The measurements for W1217AB and W1405 are consistent with previous measurements by Dupuy \& Kraus (2013), Marsh et al. (2013) and Tinney et al. (2014). The measurement for W0535 differs from the previous measurement by Marsh et al. by $2 \sigma$. The measurement for W0146AB differs from the previous measurement by Beichman et al. (2014) by $3 \sigma$. In the Appendix, Tables 11 -- 14 give the astrometric measurements for W0146AB, W0535, W1217AB and W1405.
We show in the next section that the revised parallax for W0146AB places the binary, and its components, in a region of the color-magnitude diagrams that is occupied by other T9/Y0 dwarfs. The previous parallax measurement implied an absolute magnitude 1.2 magnitudes fainter, suggesting an unusually low luminosity (Dupuy, Liu \& Leggett 2015). The upper panel of Figure 6 shows that the combined-light spectrum is very similar to what would be produced by a pair of Y0 dwarfs, i.e. the system is not unusual. We have deconvolved the spectrum using near-infrared spectra of late-T and early-Y dwarfs as templates (a larger number of spectra are available compared to when Dupuy, Liu \& Leggett deconvolved the spectrum).
The absolute brightness of each input spectrum has been ignored, but the relative brightness of each input pair has been made to match the $\delta J$ magnitudes measured for the resolved system. The lower panel of Figure 6 shows that T9 $+$ T9.5 and T9 $+$ Y0 composite spectra have slightly broader $J$ and $H$ flux peaks than observed for W0146AB, while a T9.5 primary with a Y0 secondary reproduces the spectrum quite well. We adopt a spectral type of T9.5 for W0146AB and W0146A, and a type of Y0 for W0146B.
\section{Photometry: The Sample and Comparison to Models}
Table 8 compiles the following observational data for the currently known sample of 24 Y dwarfs: parallax (in the form of a distance modulus), MKO-system $YJHK$, {\em Spitzer} [3.6] and [4.5], and {\em WISE} W1, W2 and W3 magnitudes. The data sources are given in the Table. In the Appendix, Table 15 gives an on-line data table with these values for the larger sample of late-T and Y dwarfs used in this work.
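The parallax enters Table 8 as a distance modulus; the standard conversion, sketched below with an illustrative parallax, is $m - M = 5\log_{10}(d/10\,{\rm pc})$ with $d = 1/\pi$.

```python
# Sketch: convert a trigonometric parallax (arcsec) to a distance
# modulus, m - M = 5 log10(d) - 5 with d = 1/parallax in parsecs.
import math

def distance_modulus(parallax_arcsec):
    d_pc = 1.0 / parallax_arcsec
    return 5.0 * math.log10(d_pc) - 5.0

# A parallax of 0.1 arcsec corresponds to 10 pc, i.e. modulus zero.
mu = distance_modulus(0.1)
```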
We have compared these data to calculations by the models described in \S 3 via a large number of color-color and color-magnitude plots.
\subsection{Constraining the Eddy Diffusion Coefficient $K_{\rm zz}\,$cm$^2\,$s$^{-1}$}
The $M^{\prime}$ observations allow a direct measurement of the strength of the CO absorption at $4.7~\mu$m, as shown in Figure 1. Figure 7 shows [4.5] $-M^{\prime}$ as a function of $M_{[4.5]}$. The reduction in $M^{\prime}$ flux for the T dwarfs is evident in the Figure. The CO absorption does not appear to be a strong function of gravity, as indicated by the similarity between the T15 $\log g = 4.0$ and $\log g = 4.5$ sequences. The absorption does appear to be a function of metallicity, and of the adiabat used for heat transport (see \S 3, note the $\gamma=1.3$ sequence in Figure 7 has $\log K_{\rm zz}=8$). We make the assumption that the majority of the dwarfs shown in Figure 7 do not have metallicities as low as $-0.5$ dex, and we show below that while the ad hoc change to the adiabat improves the model fits at some wavelengths, it is not preferred over the models with standard adiabatic cooling. With those assumptions Figure 7 indicates that $4 \lesssim \log K_{\rm zz} \lesssim 6$ for mid-T to early-Y type brown dwarfs. This is consistent with previous model fits to T6 -- T8 brown dwarfs, where the fits were well constrained by mid-infrared spectroscopy (Saumon et al. 2006, 2007; Geballe et al. 2009). For the latest-T and early-Y dwarfs we adopt $\log K_{\rm zz}=6$. Figure 7 suggests that the coolest object currently known, W0855, may have a larger diffusion coefficient and we explore this further in \S 10.5.
\subsection{Metallicity and Multiplicity}
The color-color plot best populated by the sample of Y dwarfs is $J -$ [4.5]:[3.6] $-$ [4.5]. This plot is shown in Figure 8. Figure 9 shows the color-magnitude plot $J -$ [4.5]:$M_{[4.5]}$. The plots are divided into three panels. The top panel is data only, with a linear fit to sources with $J-[4.5] > 3.0$ magnitudes, excluding sources that deviated by $> 2\sigma$ from the fit. The average deviation from the linear fit along the $y$ axes is 0.09 and 0.14 magnitudes in Figures 8 and 9 respectively. The fit parameters are given in the Figures. The middle panel of each Figure compares the data to non-equilibrium T15 models which differ in metallicity and adiabat gradient. The bottom panel compares the data to S12, M12 and M14 equilibrium models which differ in gravity and cloud cover.
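The top-panel fit can be sketched as a linear fit with a single pass of $2\sigma$ rejection; the data below are randomly generated placeholders, not the measured colors.

```python
# Sketch of the top-panel fitting procedure: a linear fit to points with
# x > 3.0, dropping points that deviate by more than 2 sigma from an
# initial fit and then refitting. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(3.0, 6.0, 30)             # e.g. J - [4.5] > 3.0 mag
y = 0.5 * x + 0.2 + rng.normal(0.0, 0.1, x.size)
y[5] += 2.0                               # inject one strong outlier

slope, intercept = np.polyfit(x, y, 1)    # initial fit
resid = y - (slope * x + intercept)
keep = np.abs(resid) < 2.0 * resid.std()  # 2-sigma rejection
slope, intercept = np.polyfit(x[keep], y[keep], 1)
```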
Figures 8 and 9 show that none of the models reproduce the observed [3.6] $-$ [4.5] color. The cloud-free non-equilibrium models are better than the cloud-free equilibrium models for the T dwarfs, which are more impacted by the dredge-up of CO than the Y dwarfs (Figure 7). The reduction in the adiabatic index and the introduction of clouds improves the fit for the T dwarfs because in both cases the $\lambda \sim 1~\mu$m light emerges from cooler regions of the atmosphere than in the adiabatic or cloud-free case (Morley et al. 2012 their Figure 5, Tremblin et al. 2016 their Figure 5). In the $J -$ [4.5]:$M_{[4.5]}$ plot (Figure 9), however, the chemical equilibrium chloride and sulphide cloud model does not reproduce the observations of the T dwarfs and the modified adiabat model does not reproduce the observations as well as the adiabatic model does. If we assume that the model trends in the colors with gravity, cloud and metallicity are nevertheless correct, we can extract important information from Figures 8 and 9 for the Y dwarfs.
Figure 8 suggests that the $J -$ [4.5]:[3.6] $-$ [4.5] colors of Y dwarfs are insensitive to gravity and clouds, but are sensitive to metallicity. The model trends imply that the following objects are metal-rich: W0350 (Y1) and WISE J041358.14$-$475039.3 (W0413, T9). Similarly the following are metal-poor: WISEPA J075108.79$-$763449.6 (W0751, T9), WISE J035934.06$-$540154.6 (W0359, Y0), WD0806$-$661B (Y1) and W1828 ($>$Y1).
Figure 9 suggests that $J -$ [4.5]:$M_{[4.5]}$ is insensitive to gravity, but is sensitive to clouds and metallicity. Figure 9 supports a sub-solar metallicity for W0359 and W1828, and a super-solar metallicity for W0350. Note that
SDSS J1416$+$1348B (S1416B, T7.5) is a known metal-poor and high-gravity T dwarf (e.g. Burgasser, Looper \& Rayner 2010). The Y dwarfs, including W0855 in this plot, appear to be essentially cloud-free.
In any color-magnitude plot multiplicity leads to over-luminosity. The Y
dwarf sample size is now large enough, and the data precise enough, that
we can identify W0535 and W1828 as likely multiple objects. We examined
the drizzled WFC3 images of these two Y dwarfs for signs of elongation
or ellipticity. Images of W0535 taken in 2013 September and December show
no significant elongation or ellipticity, implying that if this is a
binary system then the separation is $<$ 3 AU. A tighter limit was found by
Opitz et al. (2016)
who used Gemini Observatory's multi-conjugate adaptive optics system to determine
that any similarly-bright companion must be within $\sim 1$ AU.
For W1828 five WFC3
images taken in 2013 April, May, June and August show marginal
elongation and ellipticity of $16\pm8$\%. For this (nearer) source the
putative binary separation is $\lesssim$ 2 AU.
\subsection{Further Down-Selection of Models}
Figures 10 and 11 show near-infrared colors as a function of $J -$ [4.5], and absolute $J$ as a function of near-infrared colors. In both figures, the T15 non-equilibrium models reproduce the trends in $Y-J$ and $J-H$ quite well. The S12, M12 and M14 equilibrium models reproduce $Y-J$ but do poorly with $J-H$ except for the chloride and sulphide models which reproduce the T dwarfs' location. Non-equilibrium effects are important in the near-infrared as gas transport leads to an enhancement of N$_2$ at the expense of NH$_3$. This increases the flux in the near-infrared, especially at $H$ (e.g. Leggett et al. 2016a), hence the better fit to $J - H$ by the T15 models. The only model that reproduces the $J-K$ colors of the Y dwarfs is the T15 model with the change to the adiabatic index, because of the large reduction in the $J$ flux (Figure 1).
Comparison of the S12 and M14 sequences in Figures 10 and 11 suggest that the near-infrared colors of Y dwarfs are insensitive to clouds, although clouds may be important at the $\sim 10$\% level for Y dwarfs (see also \S 3). Gravity appears to be an important parameter for $J-H$ and $J-K$, and metallicity appears to be important for $Y-J$, $J-H$ and $J-K$. The interpretation of $Y-J$ is not straightforward however as both W0350 and W1828 appear bluer in $Y-J$ than the other Y dwarfs, while Figures 8 and 9 implied that W0350 is metal-rich and W1828 is metal-poor. Figures 10 and 11 suggest that the W0146AB system may be metal-rich, while WISE J033515.01+431045.1 (W0335, T9) may be metal-poor.
Figure 12 shows absolute [4.5] as a function of the mid-infrared colors [3.6]$-$[4.5] and [4.5]$-$W3 (see Figure 1 for filter bandpasses). In this mid-infrared color-color space the change of the adiabat in the T15 models does not significantly change the location of the model sequence. None of the models can reproduce the [3.6]$-$[4.5]:$M_{[4.5]}$ observations of the Y dwarfs, although the non-equilibrium models do a much better job of reproducing these colors for the T dwarfs. The models are mostly within $2\sigma$ of the observational error in the [4.5]$-$W3:$M_{[4.5]}$ plot, although there is a suggestion that for the Y dwarfs the model [4.5] fluxes are too high and/or the W3 fluxes are too low (see also \S 10.5).
Comparison of the S12 and M14 sequences in Figure 12 suggests that the mid-infrared colors of Y dwarfs with $M_{[4.5]}>16$ magnitudes, such as W0855, are sensitive to the presence of water clouds. Gravity does not appear to play a large role in these mid-infrared color-magnitude diagrams, but metallicity does. The discrepancy with observations however makes it difficult to constrain parameters, or determine whether or not W0855 is cloudy, from this Figure.
In the next section we compare near-infrared spectra of Y dwarfs to synthetic spectra. The photometric comparisons have demonstrated that the late-T and Y dwarfs are mostly cloud-free and that non-equilibrium chemistry is important for interpretation of their energy distributions (see also Leggett et al. 2016a). We therefore compare the spectra to T15 models only. We use a single diffusion coefficient of $\log K_{\rm zz} = 6$, as indicated by our $M^{\prime}$ measurements (\S 8.1). We also use only non-modified adiabats. Although the modified adiabat produces redder colors which in some cases agree better with observations, it does so by reducing the $YJH$ flux (Figure 1). Based on the mid-infrared color-magnitude plot (Figure 12), the problem appears to be a shortfall of flux in the models at [3.6] (and $K$). Note that the [3.6] (and W1) filter covers a region where the flux increases sharply to the red as the very strong absorption by CH$_4$ decreases (Figure 1, bottom panel). A relatively small change in this slope may resolve the observed discrepancy.
\section{Spectroscopy: The Sample and Comparison to Models}
In this section we compare near-infrared spectra of Y dwarfs to T15 non-equilibrium cloud-free models. We analyse only Y dwarfs that have trigonometric parallax measurements, so that the model fluxes can be scaled to the distance of the Y dwarf.
Given the problems at $K$ (\S 8.3), we only use the $YJH$ wavelength region (most of the observed spectra only cover this region).
Figures 13 and 14 are color-magnitude plots for the known Y dwarfs and latest T dwarfs. Sequences for the T15 solar, super-solar and sub-solar metallicity models are shown, as well as sequences for $\log g=$4.0, 4.5 and 4.8. For this low--temperature solar--neighborhood sample $\log g$ almost directly correlates with mass, and $T_{\rm eff}$ provides age once $\log g$ (mass) is known (Figure 2). Figures 2 and 14 suggest that $M_{[4.5]}$ is almost directly correlated with $T_{\rm eff}$ for the Y dwarfs. This is consistent with the radii of the Y dwarfs being approximately constant, and the model calculation that half the total flux is emitted through the [4.5] bandpass for this range of $T_{\rm eff}$ (\S 2). Figures 13 and 14 show that the T15 models
indicate effectively the same value of $T_{\rm eff}$ based on $M_J$ or $M_{[4.5]}$ for each Y dwarf. Excluding the very low-luminosity W0855, the Y dwarfs have $325 \lesssim T_{\rm eff}$~K $\lesssim 450$.
Near-infrared spectra of 20 Y dwarfs or Y dwarf systems with trigonometric parallaxes are available from this work, Cushing et al. (2011), Kirkpatrick et al. (2012), Tinney et al. (2012), Kirkpatrick et al. (2013), Leggett et al. (2014), Schneider et al. (2015), and Leggett et al. (2016a). We flux calibrated the spectra using the observed near-infrared photometry which has a typical uncertainty of 10 -- 20\% (Table 8). For W0535 the flux calibration of the $H$ region of the spectrum is inconsistent with the $Y$ and $J$ region by a factor of four, and our fit suggests that the $H$ region of the spectrum is too bright. For W1828 the flux calibration of the $J$ and $H$ spectral regions differ by a factor of two. We explored fits to the spectrum using both scaling factors, and the fits suggest there is a spurious flux contribution in the shorter wavelengths of the spectrum. The color-magnitude plots imply that W0535 and W1828 are multiple systems (\S 8.2), and contemporaneous near-infrared spectroscopy and photometry would be helpful in excluding variability in these sources, and enabling a more reliable spectral fit.
We compared the spectra to a set of T15 cloud-free, standard-adiabat models with $\log K_{\rm zz} = 6$. Solar metallicity models were computed with surface gravities given by $\log g =$ 4.0, 4.5 and 4.8, for $T_{\rm eff}$ values of 300~K to 500~K in steps of 25~K. Solar metallicity $\log g=$3.8 models were calculated for $T_{\rm eff} = 325$, 350 and 375~K also, which evolutionary models show are plausible for a solar neighborhood sample (Figure 2). A few metal-poor
([m/H]$=-0.2$ and $-0.5$) and metal-rich ([m/H]$=+0.2$, $+0.3$ and $+0.4$) models were calculated as needed, when exploring individual fits.
The model fluxes are converted from stellar surface flux to flux at the distance of the Y dwarf using the observed trigonometric parallax and the radius that corresponds to the $T_{\rm eff}$ and $\log g$ of the model as given by Saumon \& Marley (2008) evolutionary models. The typical uncertainty in the parallax-implied distance modulus is
10 -- 20\% (Table 8). No other scaling was done to the models to improve agreement with the observations.
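The surface-to-observed flux scaling amounts to a dilution by $(R/d)^2$; a minimal sketch, with illustrative radius and distance values:

```python
# Sketch: scale a model emergent (surface) flux to the flux received at
# Earth via the dilution factor (R/d)^2. Radius and distance values are
# illustrative only.

R_SUN_CM = 6.957e10    # solar radius in cm
PC_CM = 3.0857e18      # parsec in cm

def scale_to_earth(surface_flux, radius_rsun, distance_pc):
    r_cm = radius_rsun * R_SUN_CM
    d_cm = distance_pc * PC_CM
    return surface_flux * (r_cm / d_cm) ** 2

# A ~0.1 R_sun (roughly Jupiter-sized) dwarf at 10 pc:
dilution = scale_to_earth(1.0, 0.1, 10.0)
```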
Due to the coarse nature of our model grid and the poor signal to noise of most of the spectra (a consequence of the faintness of the sources), we fit the spectra by eye only. We determined the difference between the computed and observed [4.5] magnitude as a further check of the validity of the selected models. The models that are preferred for the near-infrared spectral fitting give [4.5] values that are within 0.35 magnitudes of the observed value, and on average they are within 0.15 magnitudes of the observed [4.5] magnitude. The spectroscopic fits and $\delta$[4.5] values support the photometrically identified
non-solar metallicity values for W0350 (metal-rich), W0359 (metal-poor) and W1828 (metal-poor) (\S 8.2).
Our selected fits for the sample of 20 Y dwarfs are shown in Figures 15 -- 18, where the spectra are grouped by $T_{\rm eff}$. For several of the Y dwarfs we show two fits which straddle the observations. Those multiple fits indicate that the uncertainty in the derived $T_{\rm eff}$, $\log g$ and [m/H] is approximately half the model grid spacing: $\pm 15$~K, $\pm 0.25$ dex and $\pm 0.15$ dex respectively. Better fits could be determined with a finer grid of models and a least-squares type of approach, but this would only be worthwhile when higher signal to noise spectra are available.
Overall, the fits to the warmer half of the sample, with $425 \leq T_{\rm eff}$~K $\leq 450$, are very good. For the cooler half of the sample, with $325 \leq T_{\rm eff}$~K $\leq 375$, the model spectra appear to be systematically too faint in the $Y$-band or, alternatively, too bright at $J$ and $H$. The discrepancy may be associated with the formation of water clouds, which are expected to become important at these temperatures, and which are not included in the T15 models. We discuss this further in \S 10.5.
\section{Properties of the Y Dwarfs}
Table 9 gives the estimated properties of the sample of 24 Y dwarfs, based on near-infrared spectra and photometry, or photometry only if there is no spectrum available, or in one case the photometry and the properties of its companion. Mass and age are estimated from $T_{\rm eff}$ and $\log g$ using the evolutionary models of Saumon \& Marley (2008, Figure 2), allowing for the uncertainty in the temperature and gravity determinations. Table 9 also lists the tangential velocities ($v_{\rm tan}$) for the Y dwarfs with parallax measurements. Dupuy \& Liu (2012, their Figure 31) use a Galaxy model to show that low-mass dwarfs with $v_{\rm tan} < 80$~kms$^{-1}$ are likely to be thin disk members, and those with $80 < v_{\rm tan}$~kms$^{-1} <100$ may be either thin or thick disk members.
Twenty-one of the twenty-two Y dwarfs with $v_{\rm tan}$ measurements have $v_{\rm tan} < 80$~kms$^{-1}$; the remaining Y dwarf, the very low-temperature W0855, has $v_{\rm tan} = 86$~kms$^{-1}$. There is significant overlap in the Galactic populations in kinematics, age and metallicity but generally the thin disk is considered to be younger than $\sim7$~Gyr and have a metallicity $-0.3 \lesssim$ [Fe/H] $\lesssim +0.3$, while the thick disk is older than $\sim9$~Gyr and has $-1.0 \lesssim$ [Fe/H] $\lesssim -0.3$ (e.g. Bensby, Feltzing \& Oey 2014).
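The tangential velocities follow from the proper motion and parallax as $v_{\rm tan} = 4.74\,\mu/\pi$ km~s$^{-1}$; a sketch with illustrative, W0855-like input values:

```python
# Sketch of the tangential-velocity calculation, v_tan = 4.74 mu / pi
# km/s, with the proper motion mu in arcsec/yr and the parallax pi in
# arcsec. The input values are illustrative, W0855-like numbers, not
# the adopted astrometry of this work.

def v_tan(mu_arcsec_per_yr, parallax_arcsec):
    # 4.74 km/s corresponds to 1 AU/yr; mu/parallax is the transverse
    # motion in AU/yr.
    return 4.74 * mu_arcsec_per_yr / parallax_arcsec

v = v_tan(8.1, 0.449)   # comparable to the ~86 km/s quoted above
```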
Leggett et al. (2016a) compare near-infrared spectra and photometry to T15 models for three Y dwarfs in common with this work: W0350, W1217B and W1738. The technique used is similar to that used here (although fewer models were available) and the derived $T_{\rm eff}$ and $\log g$ are in agreement, given our new determination for the distance to W0350. Schneider et al. (2015) compare {\it HST} near-infrared spectra and {\it Spitzer} mid-infrared photometry for a set of Y dwarfs to the S12, M12 and M14 solar-metallicity equilibrium-chemistry models. A goodness-of-fit parameter is used which incorporates the distance and evolutionary radius associated with the model parameters. As stated by Schneider et al., the fits to the data are poor in many cases. This is likely due to a combination of the omission of chemical non-equilibrium in the models and poorly constrained parallaxes for some of the Y dwarfs. The range in the Schneider et al. temperature and gravity values for each Y dwarf is about twice that derived here. There is generally good agreement between our values and those of Schneider et al. Of the sample of 16 objects in common, only five have $T_{\rm eff}$ or $\log g$ values that differ by more than the estimated uncertainty. For two of these we use different values for the parallax (W0535 and W0825); for another pair (W0647 and WISEA J163940.84$-$684739.4 (W1639)) the $T_{\rm eff}$ values are consistent but the Schneider et al. gravities are significantly higher; and for the remaining object (WISEA J220905.75$+$271143.6 (W2209)), Schneider et al. obtain a much higher temperature. For the last three Y dwarfs the higher gravities or temperature are unlikely, based on age and luminosity arguments.
We discuss our results in terms of populations, and also discuss individual Y dwarfs of particular interest, in the following sub-sections. Two Y dwarfs without trigonometric parallax measurements are not discussed further: W0304 and WISEA J235402.79$+$024014.1. Two T9 dwarfs appear to have significantly non-solar metallicity and should be followed up: WISE J041358.14$-$475039.3 (metal-rich) and WISEPA J075108.79$-$763449.6 (metal-poor).
\subsection{Likely Young, Metal-Rich, Y dwarfs}
Five Y dwarfs have low tangential velocities of $8 \leq v_{\rm tan}$~kms$^{-1} \leq 40$, appear to be metal-rich and also have an age $\lesssim 3$~Gyr as estimated from $T_{\rm eff}$ and $\log g$. These are W0350, W0825, W1141, W1206 and W1738. They also appear to be low-mass $\sim$8 Jupiter-mass objects.
\subsection{Likely Solar-Age and Solar-Metallicity Y dwarfs}
Fourteen Y dwarfs have kinematics, metallicities and age, as implied by $T_{\rm eff}$ and $\log g$, that suggest they are generally solar-like in age and chemistry. These have estimated ages of 3 -- 8 Gyr and masses of 10 -- 20 Jupiter-masses:
W0146B, W0359, W0410, W0535 (if an equal-mass binary), W0647, W0713, WISE J073444.02$-$715744.0 (W0734), W1217B, W1405, W1541, W1639, W2056, W2209 and W2220.
All these Y dwarfs have a tangential velocity and estimated age consistent with thin disk membership, although W0713 and W2209 have upper limits on their age of 12 and 15~Gyr respectively.
\subsection{WISEPA J182831.08$+$265037.8}
The super-luminosity of W1828 in the color-magnitude diagrams does not seem explainable by any other means than binarity.
The selected model in the case of W1828 being an equal-mass binary implies that this system is relatively young. It would be composed of two $\sim$6 Jupiter-mass objects and have an age $\sim$1.5 Gyr. The {\it HST} images imply that the binary separation is $\lesssim 2$~AU (\S 8.2). An age of 1.5~Gyr is notionally at odds with the apparently very metal-poor nature of the system.
A better near-infrared spectrum, and a mid-infrared spectrum when the {\it James Webb} telescope is on-line should improve our understanding of this Y dwarf. Exploration of a non-identical binary pair solution would also be worthwhile once a better spectrum is available.
\subsection{WD0806$-$661B}
The primary of this binary system, WD0806$-$661A, is a helium-rich DQ-class white dwarf separated from the brown dwarf by
2500~AU (Luhman, Burgasser \& Bochanski 2011). Hydrogen-deficient post-asymptotic-giant-branch (AGB) stars may evolve into DB (helium-rich white dwarfs) and then DQ white dwarfs (Althaus et al. 2005; Dufour, Bergeron \& Fontaine 2005). Calculations of the late stages of AGB evolution
can produce the less common non-DA (non-hydrogen-rich) white dwarfs in about the correct proportion although there are multiple paths that lead to hydrogen deficiency (Lawlor \& MacDonald 2006). One factor in these AGB evolution models is the metallicity of the star, and it is possible that the sub-solar metallicity we find for the Y dwarf WD0806$-$661B is related to the DQ (i.e. non-DA) nature of the primary.
Table 9 gives the properties of this Y dwarf, using the white dwarf primary to constrain the age of the system to 1.5 -- 2.7~Gyr (Rodriguez et al. 2011).
\subsection{WISE J085510.83$-$071442.5}
Figure 19 shows the photometric data points observed for W0855, as fluxes as a function of wavelength. Table 8 lists
$YJH$, [3.6], [4.5], W1, W2 and W3 magnitudes for W0855. $YJH$ were derived by us from {\it HST} photometry (\S 6, Table 6). The [3.6] and [4.5] are from LE16, the W1 and W2 magnitudes are from the {\it WISE} catalogue, and W3 was determined here from {\it WISE} images (\S 5.4, Table 3). Also shown are shorter wavelength data points from LE16: LP850 obtained using the {\it HST} Advanced Camera for Surveys (ACS) and the $i$-band upper limit obtained using GMOS at Gemini South.
Model spectra are shown for comparison, all with $T_{\rm eff}=250$~K. Models with $T_{\rm eff}=225$~K produce too little flux at 5~$\mu$m, and models with $T_{\rm eff}=275$~K produce too much flux at 5~$\mu$m, by a factor of 1.5.
The models show that $\sim40$\% of the total flux is emitted through the [4.5] bandpass and $\sim30$\% through the W3 bandpass. An additional $\sim20$\% is emitted at $19 < \lambda~\mu$m $<28$ (the W4 bandpass), and $\sim5$\% at $\lambda > 30~\mu$m. Less than 1\% of the total flux is emitted at $\lambda < 4~\mu$m.
The effective temperature (or luminosity) is tightly constrained by the mid-infrared flux, and we estimate that for W0855 $T_{\rm eff}=250 \pm 10$~K. This is consistent with previous studies (Luhman 2014, Beamin et
al. 2014, Leggett et al. 2015, Schneider et al. 2016, Zapatero Osorio et al. 2016).
We compared several models to the spectral energy distribution. T15 models were calculated with $\log g =$ 3.5, 3.8, 4.0, 4.3 and 4.5. T15 models with $T_{\rm eff}=250$~K and non-solar metallicities of [m/H] $= -0.2$ and $+0.2$ were also calculated. Finally, T15 models were calculated with $\log K_{\rm zz} = 6$, 8 and 9, as
Figure 7 suggests that W0855 may be undergoing more chemical mixing than the warmer Y dwarfs, and Jupiter's
atmosphere has been modelled with a vertical diffusion coefficient of $\log K_{\rm zz} = 8$
(Wang et al. 2015). The top panel of Figure 19 demonstrates the effect of varying $K_{\rm zz}$, and the central panel shows models with different gravities.
We also compared the observations to M14 cloud-free and partly-cloudy models that are in chemical equilibrium. These are shown in the bottom panel of Figure 19.
Cloud-free models with solar metallicity and $\log g=$ 3.5 and 4.0 were calculated, as well as a $\log g=$ 4.0 solar metallicity model with thin clouds decks (parameterized by $f_{\rm sed} = 7$) covering 50\% of the surface.
The models are updated versions of the models published in Morley et al. (2014).
The new models include updates to both chemistry and opacities, which will be described in detail in an upcoming paper (Marley et al. in prep.). Briefly, the opacities are as described in Freedman et al. (2014) with the exception of the CH$_4$ and alkali opacities; CH$_4$ line lists have been updated using Yurchenko \& Tennyson (2014) and the alkali line lists have been updated to use the results from Allard, Allard \& Kielkopf (2005). Chemical equilibrium calculations are based on previous thermochemical models (Lodders \& Fegley 2002; Visscher, Lodders \& Fegley 2006; Visscher 2012), and have been revised and extended to include a range of metallicities.
Up to this point in our analysis we have neglected the $\lambda < 0.9~\mu$m region. Given the importance of W0855, and the availability of shorter wavelength data, Figure 19 includes observations and model spectra and photometry at $0.8 \lesssim \lambda~\mu$m $\lesssim 0.9$. The models are about an order of magnitude too bright at $\lambda < 0.9~\mu$m, with the T15 models more discrepant than the modified M14 models. At the temperatures and pressures that are likely in the W0855 photosphere, H$_2$S is a significant opacity source at $\lambda \lesssim 0.9~\mu$m (M14, their Figures 7 and 8). Possibly the strength of this opacity is dependent on the treatment of the condensation of sulfides at warmer temperatures. This issue will be explored in future work.
All the models show the discrepancy with observations at the [3.6] bandpass noted previously in \S 8.2.
At this temperature all the models also appear too faint at $H$ by about a factor of two. Increasing the mixing coefficient from $\log K_{\rm zz} = 6$ to 9 improves the agreement at $YJH$ by 10 -- 15\%, and at [4.5] and W3 by 5\%. The addition of water clouds improves the agreement at $H$ but makes $Y$ and $J$ too bright by about an order of magnitude. While the addition of water clouds and associated brightening at $Y$ and $J$ may improve the model fits for the
$T_{\rm eff} \approx 350$~K Y dwarfs (Figures 17 and 18), at 250~K the addition of water clouds (as currently modelled) does not improve the fit in the near-infrared. We note that condensation of NH$_3$ is not expected until lower temperatures of $\sim$200~K are reached.
Esplin et al. (2016) have detected variability at [3.6] and [4.5] for W0855, at the $\sim4$\% (peak-to-peak) level. Similar variability is seen in two Y dwarfs that are too warm for water clouds although they may have low-lying sulphide clouds (Cushing et al. 2016, Leggett et al. 2016b). Skemer et al. (2016) present a $5~\mu$m spectrum for W0855 which suggests that water clouds are present. No analysis yet has robustly confirmed the presence of clouds in the W0855 atmosphere (Esplin et al. 2016) and new models are needed which better reproduce the SED of W0855 before their presence or absence can be confirmed.
About 70\% of the total flux from W0855 emerges through the [4.5] and W3 filters and it is important therefore that the models reproduce the observed [4.5] and W3 magnitudes. However, Figure 12 suggests that there is a systematic offset between modelled and observed values of [4.5] $-$ W3. The analysis presented here has shown good agreement between observations and models at [4.5] (Figures 7, 9, 14), which suggests that the calculated values of W3 may be $\sim 0.5$ magnitudes too faint. The uncertainties in the measured [4.5] and W3 magnitudes for W0855 are 0.04 and 0.30 magnitudes, respectively. We restrict models of the W0855 energy distribution to those where
the difference $\delta = M({\rm model}) - M({\rm observed})$ magnitude is such that $-0.25 \leq \delta([4.5]) \leq +0.25$ and $-0.3 \leq \delta(W3) \leq +0.6$. Table 10 lists the M14 and T15 models considered here which satisfy those criteria.
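The model-selection criteria above can be expressed as a simple filter. The sketch below encodes the stated $\delta$ cuts; the model names and $\delta$ values in the list are invented placeholders, not entries from Table 10:

```python
# Filter candidate atmosphere models by the photometric criteria used for W0855:
#   delta = M(model) - M(observed), with  -0.25 <= delta([4.5]) <= +0.25
#   and  -0.3 <= delta(W3) <= +0.6

def passes_w0855_cuts(delta_45: float, delta_w3: float) -> bool:
    return -0.25 <= delta_45 <= 0.25 and -0.3 <= delta_w3 <= 0.6

# Illustrative grid entries (not taken from Table 10)
models = [
    {"name": "T15 250K logg4.0 Kzz8", "d45": 0.10, "dW3": 0.40},
    {"name": "T15 275K logg4.5",      "d45": -0.40, "dW3": 0.10},
]
kept = [m["name"] for m in models if passes_w0855_cuts(m["d45"], m["dW3"])]
print(kept)  # only the first illustrative model survives the cuts
```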
We find that current models imply that the 250~K Y dwarf W0855 is undergoing vigorous mixing, has a metallicity within $\sim$0.2 dex of solar, has little or no cloud cover, and has a range in surface gravity of $3.5 \lesssim \log g \lesssim 4.3$. The Saumon \& Marley (2008) evolutionary models then give a mass range of 1.5 -- 8 Jupiter masses, and an age range of 0.3 -- 6 Gyr (Table 9). The relatively high tangential velocity of W0855 of $86 \pm 3$ km s$^{-1}$ suggests the higher age (and higher mass) may be more likely.
\section{Conclusion}
We present new Gemini near-infrared spectroscopy for three Y dwarfs,
near-infrared photometry for two Y dwarfs, and $5~\mu$m photometry for four late-T and two Y dwarfs. We also present new near- and mid-infrared photometry for 19 T6.5 and later-type brown dwarfs, including 8 Y dwarfs, using archived images. We have determined improved astrometry for four Y dwarfs, also by using archived images. Combining the new photometry with data taken from the literature allows us to transform $CH_4$(short) and WFC3 photometry on to MKO $YJH$. We give a newly homogenized photometric data set for the known Y dwarfs (Table 8) which enables better comparisons to models as well as the identification of trends and outliers.
Using MKO-system color-magnitude diagrams and the new
parallaxes, we find that two of the Y dwarfs are likely to be binaries
composed of similar-mass objects: W0535 and W1828 (Figures 9, 14). WFC3 and Gemini adaptive optics images of W0535 from Opitz et al. (2016) do not resolve W0535. WFC3 images of W1828 show marginal elongation and ellipticity of $16
\pm 8$\%. The separations of the putative binaries are $\lesssim 2$~AU
for W1828 and $< 1$~AU for W0535.
The models show that the $J -$ [4.5]:[3.6] $-$ [4.5]
and the $J -$ [4.5]:$M_{[4.5]}$ diagrams can be used to estimate
metallicity (Figures 8, 9).
We refine our atmospheric parameter estimates by comparing near-infrared spectra for 20 of the Y dwarfs
to synthetic spectra generated by cloud-free non-equilibrium chemistry models (Figures 15 -- 18).
We find that all the known Y dwarfs have metallicities within
0.3~dex of solar, except for W0350 which has
[m/H]$\sim +0.4$~dex and W1828 which has
[m/H]$\sim -0.5$~dex. All the known Y dwarfs with measured parallaxes are
within 20~pc of the Sun, and therefore solar-like metallicities are expected.
Assuming W1828 is an equal-mass binary, we derive a low gravity for the
pair, which translates into a relatively young age of $\approx 1.5$~Gyr.
Notionally, this is inconsistent with the degree of metal paucity that
we find, as the thin disk generally has $-0.3 \lesssim$ [Fe/H] $\lesssim +0.3$ (e.g. Bensby, Feltzing \& Oey 2014).
An improved near-infrared spectrum is needed for this source,
preferably taken close in time to photometry, as the current spectrum
and photometry are discrepant and variability needs to be excluded.
The atmospheric parameters determined by fitting the near-infrared spectra are consistent with the values estimated photometrically. The
synthetic spectra generated by T15 non-equilibrium chemistry cloud-free models reproduce observations well for the
warmer half of the sample with $425 \leq T_{\rm eff}$~K $\leq 450$.
For the cooler Y dwarfs with $325 \leq T_{\rm eff}$~K $\leq 375$ the models seem consistently faint at $Y$. A comparison of models to a pseudo-spectrum of the 250~K W0855 shows that models with patchy clouds are brighter at $Y$ than cloud-free models, and the discrepancy seen in the $1~\mu$m flux of Y dwarfs with $T_{\rm eff} \approx$ 350~K may be due to the onset of water clouds. However the cloudy models produce too much flux at $0.8 < \lambda~\mu$m $<1.3$ for the cooler W0855. It is unclear if there is missing opacity at lower temperatures, or if the atmosphere of this cold object is cloud-free. All the Y dwarf atmospheres appear to be turbulent, with vertical mixing leading to non-equilibrium chemistry.
We determine masses and ages for 22 Y dwarfs from evolutionary
models, based on our temperature and gravity estimates (Table 9). Approximately 90\% of the sample has an estimated age of 2 to 6~Gyr, i.e. thin-disk-like as would be expected for a local sample. W1141 appears younger, with age $\sim$0.6~Gyr, and W0713 and W2209 appear older, with ages 7 and 8.5~Gyr respectively. About 70\% of the sample has a mass of 10 -- 15 Jupiter-masses. W0350, WD0806$-$661B, W0825, W0855, W1141 and W1828 (if an equal-pair binary) have masses of 3 -- 8 Jupiter-masses. W0713 appears to be a 20 Jupiter-mass Y dwarf.
A larger sample is needed to constrain the shape of the mass function and the low-mass limit for star-like brown dwarf formation.
We may not find more Y dwarfs, however, unless or until a more sensitive version of {\it WISE} is flown.
\clearpage
\section{Introduction}
Interaction between the driver and the car is becoming an increasingly popular topic. Speech modality, which is commonly used in personal assistant systems in production vehicles, assists the driver by reducing touch-based interaction which can be a source of distraction \cite{tscharn2017stop, rumelin2013free}. However, using only speech as interacting modality can be cumbersome, especially in situations, where the users want to reference objects which are unknown to them. Therefore, for natural interaction involving deictic references, one needs other modalities as well, such as head pose, eye-gaze, or finger-pointing gestures. Integrating another modality for deictic referencing with speech, thereby increases usability as well as naturalness. Bolt's pioneering work, "Put that there" \cite{bolt1980put}, combined speech and gesture demonstrating the practicality of using multimodal input for natural user interaction. Multimodal interaction has since been rigorously studied and incorporated into the car \cite{ohn2014head, nesselrath2016combining, gomaa2020studying, pfleging2012multimodal, muller2011multimodal}.
In the context of driving experience, gesture recognition enables direct interaction with the vehicle surroundings while also allowing interaction with a wide range of in-vehicle functions. Deictic references, such as "what is \textit{that} landmark?" or "what does \textit{this} button do?," provide drivers with feedback for inquisitive commands if the referenced object can be identified. On the other hand, control commands, such as "stop over \textit{there}" or "close \textit{that} window," can assist the driver in ease of control. Though accurate detection of the driver's pointing direction is a technical challenge in itself, a more crucial problem in this task is the lack of sufficient precision in the user's pointing direction \cite{roider2018implementation, brand2016pointing}. Ray casting based finger pointing techniques are limited by the user's pointing accuracy \cite{mayer2018effect}. Therefore, gaze input has previously been used to obtain information about the driver's focus of attention.
In this paper, we use features from three modalities (head pose, eye-gaze and finger pointing) and fuse them to identify the driver's referencing, where the referenced object may be situated inside or outside the vehicle. The drivers' behavior while pointing to objects is also studied extensively. The contributions of this paper include: 1) a comparison between the driver's pointing behavior to objects inside the car and to objects outside the car, 2) a study of the precision of the three modalities, eye-gaze, head pose and finger pointing, and their fusion to identify the referenced object, and 3) an effective approach to differentiate the referencing of inside-vehicle control elements and outside-vehicle landmarks. Furthermore, we discuss the importance of multimodal fusion and its effectiveness as compared to single modalities in different cases.
\section{Related Work}
Pointing gestures and eye gaze tracking as input modalities for user interaction have been studied extensively. Selection made by using gaze and head pose has been done such as by Kang et al. \cite{kang2015you}. Using gaze, even if accompanied by head pose, presents challenges particularly in cases where objects are located in close proximity to each other \cite{hild2019suggesting}. This is because gaze lacks a natural trigger due to its always-on characteristic, and has the potential to be quite volatile \cite{ahmad2017intelligent}. Use of an additional modality such as speech in addition to gaze (in order to trigger the selection of objects) has often been made. \cite{maglio2000gaze}. EyePointing \cite{schweigert2019eyepointing} makes use of finger pointing to trigger the selection of objects on a screen using gaze direction. Misu et al.\cite{misu2014situated} and Kim et al. \cite{kim2014identification} make use of head pose using speech as a trigger for driver queries of outside-vehicle objects.
Gestures and free hand pointing add to the naturalness of the driver as they lessen the driver's cognitive demand as compared to touch based inputs, making finger pointing a useful input as well \cite{rumelin2013free, tscharn2017stop}. This was also demonstrated by Pfleging et al. in their work combining speech and gestures with minimal touch inputs on the steering wheel \cite{pfleging2012multimodal}. For selection of Points-of-Interest (POI) while driving a vehicle, Sauras-Perez et al. \cite{sauras2017voge} and Fujimura et al. \cite{fujimura2013driver} used finger pointing with speech and hand-constrained finger pointing, respectively. However, the use of finger pointing can be a difficult task especially when trying to identify objects that do not lie straight ahead \cite{akkil2016accuracy}. In fact, while studying driver behaviors in a driving simulator, Gomaa et al. found gaze accuracy to be higher than pointing accuracy \cite{gomaa2020studying}. To improve the accuracy of pointing for object selection inside the car, Roider et al. \cite{roider2018see} utilized a simple rule based approach that involves gaze tracking, and demonstrated that it improved finger pointing accuracy. However, the experiment was limited to four objects on the screen. Chatterjee et al. also combined gaze and gesture as inputs and showed better results with the integration of the two than if either input is used separately \cite{chatterjee2015gaze+}.
As research has shown that the use of multiple input modalities can surpass a single input modality in terms of performance \cite{esteban2005review, liu2018efficient, turk2014multimodal}, multimodal user interaction offers significant utility for in-vehicle applications. Mitrevska et al. demonstrate an adaptive control of in-vehicle functions using an individual modality (speech, gaze or gesture) or a combination of two or more \cite{mitrevska2015siam}. M\"uller and Weinberg discuss methods for multimodal interaction using gaze, touch and speech for in-vehicle tasks, presenting a few advantages and disadvantages of individual modalities \cite{muller2011multimodal}. Moniri et al. \cite{moniri2012multimodal} combined eye gaze, head pose, and pointing gestures for multimodal interaction for outside-vehicle referencing for object selection. Nesselrath et al. adopted a multimodal approach, combining three input modalities: gaze, gestures, and speech in a way that objects were first selected by gaze, e.g., windows or side mirrors, and then controlled using speech or gesture \cite{nesselrath2016combining}.
These approaches mostly use gaze information and increase the naturalness of the user interaction by including a secondary input such as speech or gesture. However, these multimodal approaches do not use the opportunity to enhance the preciseness of gaze tracking, although some use semantics from speech to narrow down the target. Our work achieves this enhancement in preciseness with the use of multimodal fusion of relevant deictic information from gaze, finger pointing, and head pose as input modalities \cite{akkil2016accuracy}. Instead of using finger pointing as a trigger for selection, we use it as an equal input modality, while utilizing speech modality as a trigger.
For multimodal fusion, the use of deep neural networks has been explored previously \cite{ngiam2011multimodal, wu2014survey, meng2020survey}.
Gomaa et al. study gaze and pointing modality for the driver's behavior while pointing to outside objects \cite{gomaa2020studying}. They further proposed various machine learning methods, including deep neural networks for a personalized fusion to enhance the predictions \cite{gomaa2021ml}. Aftab et al. demonstrate how multiple inputs, namely, head pose, gaze and finger pointing gesture, enhance the predictions of the driver's pointing direction for object selection inside the vehicle \cite{aftab2020you} and what limitations arise when pointing to objects outside the vehicle \cite{aftab2021multimodal}.
In our approach, we use a model-level fusion approach for selection of a wide range of objects that may be situated inside the vehicle or outside the vehicle. As head pose and gaze direction are directly related in identifying visual behavior \cite{ji2002real, mukherjee2015deep}, we use these two modalities along with finger pointing for two tasks: i) to identify whether the object to be selected lies inside or outside the vehicle, and ii) to precisely predict the pointing direction in either case. Each of the three modalities is processed as equal input, and the network learns from the training data.
In summary, object selection inside the vehicle and driver queries to outside-vehicle objects have been rigorously studied. However, to our knowledge, no study simultaneously deals with objects both inside and outside the vehicle. We merge concepts from previous research work to perform a comparison of the two above mentioned types of pointing. Our work differs from past work in that we deal with both in-vehicle objects as well as outside-vehicle objects. We compare and demonstrate modality specific limitations in both types of pointing and learn to distinguish between the two types. While most studies use simulators, we perform our experiments in a real car within authentic environment which both gives drivers a relatively realistic impression and helps us achieve more genuine and applicable results. Furthermore, unlike some of the related work, we use non contact sensors for tracking eye and gestures that allows users to behave more naturally.
\section{Experiment Design}
For the application of deictic referencing with finger pointing and gaze, we used a real vehicle for data collection. For simplicity and ease of data collection, the vehicle was kept stationary at different locations on the road. Consequently, during all the pointing events at various objects inside and outside the vehicle, the primary focus of the driver was not on driving the vehicle, as would be the case in future self-driving cars. Various non-contact and unobtrusive sensors were used to measure the drivers' gestural, head and eye movements.
\subsection{Apparatus} \label{sec:apparatus}
We set up the apparatus in the same way as Aftab et al. \cite{aftab2020you, aftab2021multimodal}. Two types of camera systems were used: 1) the Gesture Camera System (GCS) and 2) the Visual Camera System (VCS). The two camera systems were carefully chosen and consist of sensors placed in positions that are used in production vehicles; e.g., the BMW 7-series and BMW iX offer a camera fitted behind the steering wheel to analyze the driver's face as well as another camera on the car ceiling for gesture control.
The GCS, which was mounted on the car ceiling next to the rear-view mirror, consisted of a Time-of-Flight (ToF) 3D camera with a QVGA resolution (320 $\times$ 240). It tracked both of the driver's hands for gestures and detected "one finger pointing gesture" from either or both hands, providing the 3D position of the finger tip as well as the direction of the pointing gesture as a 3D normalized vector. The direction vector was calculated from the 3D position of the finger base to the 3D position of the finger tip, and normalized to have a unit norm.
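As described above, the GCS derives the pointing direction as the unit vector from the 3D finger-base position to the 3D finger-tip position. A sketch of that computation; the coordinate values are illustrative:

```python
import numpy as np

def pointing_direction(base: np.ndarray, tip: np.ndarray) -> np.ndarray:
    """Unit vector from the finger base to the finger tip."""
    v = tip - base
    return v / np.linalg.norm(v)

base = np.array([0.30, -0.40, 0.90])   # finger base (m, car coordinates)
tip  = np.array([0.38, -0.36, 0.92])   # finger tip
d = pointing_direction(base, tip)
print(np.round(d, 3))  # normalized direction vector, unit norm
```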
The VCS, installed behind the steering wheel, captured the driver's head and eye movements. It provided the 3D position of the head center and the eyes along with the head orientation (as euler angles) and gaze direction as a 3D vector with unit norm.
Apart from these two camera systems, four additional cameras were placed inside the car, two of which recorded the driver's actions while the other two recorded the environment. These four cameras were used to analyze the events visually.
For speech, the Wizard-of-Oz (WoZ) method was used to note the timestamp of the speech command used with the pointing gesture. A secondary person (acting as a wizard) noted the instant (hereafter called the WoZ timestamp) when the driver made the referencing gesture and said, "what is \textit{that}?" with the help of a push button. This timestamp was used to identify the approximate time when the gesture took place.
\subsection{Feature Extraction} \label{sec:features}
From the two camera systems, GCS and VCS, we extracted the finger pose, eye pose and head pose, where pose constitutes both position and direction of the modalities. In total, we have six features. These are explained as follows:
\begin{itemize}
\item Finger pose: the 3D position of the finger tip and (normalized) direction vector of the finger pointing gesture in the 3D vector space.
\item Eye pose: the 3D position of the point between the two eyes and (normalized) direction vector of the eye gaze in the 3D vector space.
\item Head pose: the 3D position of the center of the head and the Euler angles (as yaw, pitch and roll) of the head orientation.
\end{itemize}
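Concatenating these six features gives 18 values per frame (three position and three direction or orientation components per modality). A sketch of the assembly; the stacking order is an assumption of this example, not prescribed above:

```python
import numpy as np

def frame_features(finger_pos, finger_dir, eye_pos, gaze_dir,
                   head_pos, head_euler) -> np.ndarray:
    """Stack the six 3D features of one frame into an 18-dim vector."""
    return np.concatenate([finger_pos, finger_dir, eye_pos, gaze_dir,
                           head_pos, head_euler])

f = frame_features(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.zeros(3),
                   np.array([1.0, 0.0, 0.0]), np.zeros(3), np.zeros(3))
print(f.shape)  # (18,)
```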
\subsubsection{Pre-processing Data}
For each pointing event, we extracted a time interval of 0.8 seconds such that it included the WoZ timestamp (denoting the time of the speech command) within it. The duration of the time interval was based on the observation by R\"umelin et al. \cite{rumelin2013free} on comfortable pointing time. This interval amounted to 36 frames (at 45 frames per second), forming a short temporal sequence. We used the whole temporal sequence for the model training explained later in Section \ref{sec:fusion}.
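The windowing step can be sketched as follows. Centering the window on the WoZ timestamp is an assumption of this example; the text only requires that the timestamp fall within the 0.8~s interval:

```python
FPS = 45
WINDOW = int(0.8 * FPS)  # 36 frames per referencing event

def event_window(num_frames: int, woz_frame: int) -> range:
    """Return a 36-frame range containing woz_frame, clipped to the stream."""
    start = min(max(woz_frame - WINDOW // 2, 0), num_frames - WINDOW)
    return range(start, start + WINDOW)

w = event_window(num_frames=500, woz_frame=200)
print(len(w), w.start, w.stop)  # 36 182 218
```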
During the data collection, some of the referencing events contained occlusion in one or more modalities, which resulted in some frames with a few missing features. The occlusion in eye pose or head pose mainly occurred when the arm, that was used to point, was held in front of the face (as in Figure \ref{fig:teaser}) or when the head was turned to the far sides preventing the tracking of the eyes. In a few cases while pointing, the participants extended their arms beyond the field-of-view of the gesture camera, especially when pointing with the left hand to the left side, resulting in missing features of the finger modality. In order to fill in the missing features, we used linear interpolation from the two nearest neighbouring frames.
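The gap-filling step can be sketched with NumPy's one-dimensional interpolation, applied per feature channel; using NaN as the missing-value marker is an assumption of this example:

```python
import numpy as np

def fill_missing(track: np.ndarray) -> np.ndarray:
    """Linearly interpolate NaN entries from the nearest valid frames."""
    track = track.copy()
    idx = np.arange(len(track))
    valid = ~np.isnan(track)
    track[~valid] = np.interp(idx[~valid], idx[valid], track[valid])
    return track

# e.g. a head-yaw channel with two occluded frames
yaw = np.array([10.0, np.nan, np.nan, 16.0, 18.0])
print(fill_missing(yaw))  # [10. 12. 14. 16. 18.]
```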
\subsubsection{Axes Translation}
\begin{figure}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=0.478\textwidth]{images/origin.jpg}
\caption{Translation of origin from the center of the car's front axle to driver's seat}\label{fig:origin}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0 0 0 0,clip, width=0.478\textwidth]{images/coordinates_conversion.jpg}
\caption{Camera systems, AOIs, and map of POIs are converted to car coordinate system}\label{fig:car_coordinates}
\end{figure}
We followed the ISO 8855 \cite{iso2011road} standard for the car coordinate system with the exception of the origin's position (see Figure \ref{fig:origin}). We did not choose the origin at the center of the front axle because the inside objects lie behind the front axle of the car while the outside objects lie ahead of it. Consequently, with the origin at the center of the front axle, the ground truth vector, which is calculated from the origin to the center of the AOI or POI (defined in Section \ref{sec:gt}), points in nearly opposite directions for inside and outside objects. For example, consider an object inside the vehicle, AOI 7, and an object outside the vehicle, POI 1. Despite the two objects lying in almost the same horizontal direction from the driver's perspective (i.e., approximately similar yaw angles for both) as shown in Figure \ref{fig:origin}, the ground truth vectors differ by about 130$^{\circ}$. Therefore, for a fair comparison of pointing to inside and outside objects, and to have comparable ground truth directions, we translate the origin by $[x=2\text{m}, y=-0.4\text{m}, z=0\text{m}]$, such that the origin resides at the approximated center point behind the driver's seat. Throughout the paper, this is kept fixed for all experiments for consistency. With this translation the ground truth vectors have a similar yaw direction for both objects (see Figure \ref{fig:origin}, right). Consequently, all features from both cameras were transformed to the car coordinate system, with the origin behind the driver's seat (see Figure \ref{fig:car_coordinates}).
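The effect of the translation on the ground-truth yaw can be sketched as below. The object positions are illustrative (not the measured AOI 7 and POI 1 coordinates), and the sign convention, applying the stated shift additively to all coordinates so that the driver's seat lands at the origin, is an assumption of this example:

```python
import numpy as np

T = np.array([2.0, -0.4, 0.0])  # shift applied to all coordinates (m)

def to_seat_frame(p_axle: np.ndarray) -> np.ndarray:
    """Express a point, given in the front-axle frame, in the seat frame."""
    return p_axle + T

def yaw_deg(p_axle: np.ndarray) -> float:
    """Yaw of the ground-truth vector from the translated origin (degrees)."""
    v = to_seat_frame(p_axle)
    return float(np.degrees(np.arctan2(v[1], v[0])))

aoi = np.array([-0.8, 0.9, 0.3])    # illustrative in-car control element
poi = np.array([40.0, 25.0, 5.0])   # illustrative landmark ahead of the car
print(round(yaw_deg(aoi), 1), round(yaw_deg(poi), 1))  # similar yaw signs
```

With the origin behind the seat, both the in-car object and the landmark lie ahead of the origin, so their ground-truth yaws become directly comparable.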
\subsection{Experiment Types}
The experiments were divided into two types, the cockpit use case and the environment use case. For both of these types, the apparatus and the vehicle were kept the same, and vehicle was kept stationary. In both use cases, the participants were asked to point naturally to the pre-selected objects, and say "what is \textit{that}?". They were free to choose either hand for pointing. Some objects were larger than others and drivers could choose to point to any visible part of the surface area. The difference between the two use cases lay in the chosen objects. Consequently, the pointing directions differed as well as the angular width and angular height of objects. The objects in both use cases were chosen such that they were in front of the driver, including the far right as well as far left sides with respect to the driver, in order to have a sufficiently large variance of direction angles in both cases.
\subsubsection{Cockpit Use Case}
\begin{figure}[h]
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 40 0 10,clip, width=\textwidth]{images/AOIs.jpg}
\caption{AOIs shown with red highlighted areas}
\label{fig:AOIs_car}
\end{subfigure}
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/scatter_AOI.png}
\caption{Measured corner points of AOIs}
\label{fig:AOIs_scatter}
\end{subfigure}
\caption{The 12 selected AOIs in the cockpit of a car (left), and the scatter plot of the measured corner points of the AOIs w.r.t the car origin (right) \cite{aftab2020you}. }
\label{fig:AOI}
\end{figure}
In the first type of experiment, 12 distinct areas inside the vehicle were chosen, called the Areas-of-Interest (AOIs). These were different control elements of the car that the driver could potentially reference for touch-free control, shown highlighted in red in Figure \ref{fig:AOIs_car}. Figure \ref{fig:AOIs_scatter} shows the measured corner points of the AOIs with crosses 'x', and the mean point of each AOI with a circle 'o'. These define the areas the users should point to in this first type of experiment. Consequently, the AOIs have different (but fixed) sizes at chosen distances and locations, as shown in Figure \ref{fig:AOIs_scatter}.
Referencing of AOIs was independent of car position. All participants were asked to point to the given 12 AOIs multiple times, not necessarily in the same sequence. However, some samples could not be correctly recorded and had to be discarded due to technical issues with the setup. In total, we had 2514 samples that were used for training and testing in the cockpit use case.
\subsubsection{Environment Use Case}
\begin{figure*}[t]
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/map.PNG}
\caption{Map of the chosen 5 POIs along with the 4 car poses shown by black rectangles \cite{aftab2021multimodal}.}
\label{fig:map}
\end{figure*}
In the second type of experiment, 5 different landmarks situated in front of the vehicle were chosen. The landmarks were buildings and antennas. These are referred to as Points-of-Interest (POIs). Referencing of POIs was conducted in 4 different car poses\footnote{GPS coordinates 1: 48.220446N 11.724796E, 2: 48.220363N 11.724800E, 3: 48.220333N 11.724782E, 4: 48.221293N 11.724942E}, where pose constitutes both position and orientation, to add a large variety of pointing directions. The coordinates of the car poses and the POIs were manually measured with a laser sensor, Leica Multistation\footnote{{https://leica-geosystems.com/en-us/products/total-stations/multistation}}, and converted from geodetic coordinates to Cartesian coordinates with origin at the driver's seat. The POIs and car poses in the environment use case are shown in Figure \ref{fig:map}. It can be seen that the outside objects or POIs in this stationary use case also have different but fixed sizes, each with fixed distance and location w.r.t. the vehicle.
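The geodetic-to-Cartesian step mentioned above follows the standard WGS84 geodetic-to-ECEF conversion, which can be sketched as below. These are the standard textbook formulas, not the authors' exact pipeline; the subsequent affine transform from ECEF into the car coordinate system (using the measured car pose) is omitted here.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                 # semi-major axis [m]
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert WGS84 geodetic coordinates (degrees, meters) to
    Earth-Centered, Earth-Fixed Cartesian coordinates (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```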
In the fourth car pose, only 3 out of the 5 POIs were visible to the driver. Consequently, the 4 car poses and 5 POIs provided 18 different pointing directions in front of the driver as well as on the right and left sides of the driver (see car position '2' and '3' in Figure \ref{fig:map}).
Similar to the cockpit use case, the participants were asked to point to each POI from each of the four different car poses. The car pose and POI were repeatedly changed so that users did not get accustomed to the next POI to be referenced. We collected 6590 samples for the environment use case. The reason for the relatively larger data collection in the environment case is twofold: i) there were more pointing directions, and ii) we needed a larger variance in the data to get adequate results and to obtain a robust model for the environment use case. This was because of the relatively larger pointing errors by the users in the environment use case as compared to the cockpit use case, as can be seen in Section \ref{sec:analysis}, Figures \ref{fig:measurements_yaw} and \ref{fig:measurements_pitch}.
\subsection{Participants and Data Collection}
For our experiments, thirty participants took part in at least one of the two experiments. However, for the sake of a fair comparison, we only considered the 11 participants who took part in both experiments. The participants ranged from 20 to 40 years old, with a mean age of 28.7 and a standard deviation of 5.7. Two of the eleven participants were female. Three participants wore glasses and one wore contact lenses, while the rest wore neither. Only one participant was left-handed. However, the hand used for the gesture by the right-handed users was not always the right hand. In the cockpit use case, 23\% of the events were performed with the left hand; in the environment use case, about 12\% were carried out with the left hand. It is important to mention here that, due to a few administrative and technical reasons, the number of samples per driver is not perfectly balanced. Furthermore, we collected more samples for the environment use case than for the cockpit use case.
\subsection{Dataset Split} \label{sec:data_split}
We split the dataset into three sets: training set, validation set and test set. The division of the sets was participant-based. This means that no reference sample from participants in the training set appeared in either the validation or the test set, and vice versa, which ensures real-world validity. For generalization, we used leave-one-out cross validation to evaluate our models. The leave-one-out split resulted in 11 splits of the dataset that were used for testing, as we had 11 participants. A weighted average over the splits is used to calculate the final metrics; in this way, the entire dataset is eventually covered by the test sets. For each test split, a different participant (not present in the training set) was used for validation. Consequently, we had 11 splits of the training set as well, each with a different subset of the participants.
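The participant-based leave-one-out splitting can be sketched as follows. Holding out the "next" participant for validation is an illustrative choice; the text only states that a different participant, not present in the training set, is used for validation.

```python
import numpy as np

def leave_one_participant_out(participant_ids):
    """Yield (train, val, test) index arrays for participant-based
    leave-one-out splits: one participant is held out for testing and a
    different one for validation; all others are used for training."""
    ids = np.asarray(participant_ids)
    unique = np.unique(ids)
    for i, test_p in enumerate(unique):
        val_p = unique[(i + 1) % len(unique)]  # a different held-out participant
        test = np.where(ids == test_p)[0]
        val = np.where(ids == val_p)[0]
        train = np.where((ids != test_p) & (ids != val_p))[0]
        yield train, val, test
```

With 11 participants this produces 11 splits whose test sets together cover the whole dataset, matching the weighted-average evaluation described above.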
\subsection{Analysis of Modality Measurement} \label{sec:analysis}
\subsubsection{Preciseness of Measured Modalities}
\begin{figure}[t]
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/measurements_t0_yaw.png}
\caption{Mean and Std. in yaw (horizontal) angles}
\label{fig:measurements_yaw}
\end{subfigure}
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/measurements_t0_pitch.png}
\caption{Mean and Std. in pitch (vertical) angles}
\label{fig:measurements_pitch}
\end{subfigure}
\caption{Mean and standard deviation (Std.) of the angular distance between measured direction of the modalities and the ground truth direction at the WoZ timestamp instant.}
\label{fig:measurements}
\end{figure}
We started by analyzing the quality of the data collection. To this end, for each modality we calculated the mean and standard deviation of the angular distance between the measured direction and the ground truth direction at the instant when the WoZ button was pressed (i.e., at the WoZ timestamp). This instant lies in the middle of the pointing gesture in almost all events. To simplify the analysis of the 3D vectors, all coordinates were converted from Cartesian ($x,y,z$) to spherical coordinates ($r, \theta, \phi$). The mean and standard deviation of the modalities for the two use cases are illustrated in Figures \ref{fig:measurements_yaw} and \ref{fig:measurements_pitch} for the yaw and pitch directions, respectively.
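The two operations used in this analysis, the angular distance between a measured direction and the ground truth, and the Cartesian-to-spherical conversion into yaw and pitch, can be sketched as below. The pitch sign convention (arcsin of the z component over the norm) is an assumption for illustration.

```python
import numpy as np

def angular_distance_deg(u, v):
    """Angle (degrees) between corresponding 3D vectors in u and v."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.sum(u * v, axis=-1) / (
        np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def yaw_pitch_deg(v):
    """Yaw (horizontal) and pitch (vertical) angles of Cartesian vectors,
    in degrees; pitch taken as arcsin(z / r), a convention assumed here."""
    v = np.asarray(v, float)
    yaw = np.degrees(np.arctan2(v[..., 1], v[..., 0]))
    pitch = np.degrees(np.arcsin(v[..., 2] / np.linalg.norm(v, axis=-1)))
    return yaw, pitch
```

The per-modality means and standard deviations in Figures \ref{fig:measurements_yaw} and \ref{fig:measurements_pitch} are then simply `np.mean` and `np.std` of these angular distances over all events.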
It can be seen that in the environment use case, the angular distances of the finger modality w.r.t. the ground truth (30.3° in yaw and 26.3° in pitch) are significantly larger than those of the other two modalities in both yaw and pitch. Furthermore, the standard deviations of the finger direction w.r.t. the ground truth (33.0° in yaw and 17.0° in pitch) are also larger than the standard deviations of the eye and head directions. This might be caused by the relatively low availability of the finger modality in the environment use case (discussed in more detail in Sections \ref{sec:results_environment} and \ref{sec:cross_dataset}).
In the cockpit use case, we observe a relatively large angular distance in the pitch angles for all three modalities. The eye direction has the largest angular distance in the pitch angles (54.7°) in comparison with the other two modalities. This larger angular deviation of eye direction stems from the position of the origin lying below the eye position. As the ground truth vector is calculated from origin, the pitch direction of ground truth for all AOIs and POIs would be upwards, whereas the eye direction for the majority of the AOIs would be downwards. Therefore, relatively high values were measured for the eye direction as well as the head direction towards AOIs in the cockpit use case. However, the pitch angles of the head (20.7°) exhibit a much smaller offset as compared to the pitch of the eye direction. This indicates that, on average, the head direction was not entirely turned towards the AOI.
\subsubsection{Distribution of Direction Angles}
\begin{figure}[t]
\begin{subfigure}{.236\textwidth}
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/fngr_pos.png}
\caption{Finger position (pitch)}
\label{fig:fngr_pos}
\end{subfigure}
\begin{subfigure}{.236\textwidth}
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/fngr_dir.png}
\caption{Finger direction (pitch)}
\label{fig:fngr_dir}
\end{subfigure}
\begin{subfigure}{.236\textwidth}
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/eye_dir.png}
\caption{Eye direction (pitch)}
\label{fig:eye_dir}
\end{subfigure}
\begin{subfigure}{.236\textwidth}
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/head_dir.png}
\caption{Head direction (pitch)}
\label{fig:head_dir}
\end{subfigure}
\caption{Distribution of the pitch (vertical) angles of (a) finger position, (b) finger direction, (c) eye direction and (d) head direction.}
\label{fig:hists}
\end{figure}
The distribution of the direction of the eye, head and finger pointing is vital to understand the contrast between pointing inside the vehicle and outside. As the control elements mainly lie below the windscreen, the referencing direction for the AOIs is mostly downwards, while for the POIs it is slightly upwards or parallel to the road. This trend is observed in our data collection, as shown by the distributions of the modalities' (pitch) directions in Figure \ref{fig:hists}. From the distributions of the two cases, we see a clear separation in the eye direction in Figure \ref{fig:eye_dir}. The finger direction has overlapping distributions (see Figure \ref{fig:fngr_dir}), whereas the head direction shows no separation in pitch directions at all (see Figure \ref{fig:head_dir}). Interestingly, the finger position also plays a distinctive role in differentiating between the types of referenced objects (see Figure \ref{fig:fngr_pos}). For all drivers, the pitch angles of the finger tip position w.r.t. the origin at the center point behind the driver's seat have a mean and standard deviation of 29.0° and 5.4°, respectively, for the cockpit use case (see Figure \ref{fig:fngr_pos}), whereas for the environment use case, considering all drivers, the pitch of the finger tip position has a mean of 40.0° and a standard deviation of 6.8°.
\section{Multimodal Fusion Models}
We propose a two-step approach, illustrated in the overall architecture in Figure \ref{fig:methodology}, for the recognition of use case type and for the fusion of modalities.
\begin{figure*}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/model_architecture.png}
\centering
\caption{The overall architecture of our approach.}
\label{fig:methodology}
\end{figure*}
\subsection{Case Distinction Model}
First, taking the six features as input for early fusion, we use a shallow Convolutional Neural Network (CNN) model (consisting of 2 convolutional layers and 1 dense layer) with a binary cross-entropy loss to predict whether the driver referenced an object inside or outside the car. Each of the two convolutional layers contains 64 kernels with a size of $3 \times 3$. The shallow model for use case distinction is trained independently from the fusion model (which is discussed in Section \ref{sec:fusion}). The model consists of approximately 21,000 trainable parameters.
The output of the model is used to load the appropriate weights for the fusion model, which is trained separately for the two use cases. If the referenced object is determined to be an AOI, the weights of the cockpit use case are applied to the fusion model, and if the referenced object is determined to be a POI, the weights of the environment use case are applied to the fusion model.
\subsection{Fusion Model} \label{sec:fusion}
For each referencing event, we aim to predict the direction angle towards an AOI or POI w.r.t. the ground truth. Therefore, the problem we deal with is a regression problem, as the output is continuous. We adopt a model-level fusion method \cite{chen2016multi} for the integration of the set of pre-processed features mentioned in Section \ref{sec:features}. Model-level fusion also has the tendency to implicitly learn the temporal relations between modalities \cite{wu2014survey}. A deep CNN with a linearly regressed output is designed and trained on the collected data from the 11 participants. The motivation behind the choice of a deep CNN model is to cover the large number of behavioral cases exhibited by the participants, which a rule-based or a simple linear regression model may not be able to cover. As the input features form a temporal sequence, the convolution block operates in the temporal dimension as well as the feature dimension. The overall architecture is shown in Figure \ref{fig:methodology}.
\subsubsection{Model Description}
The input, $x$, to the CNN model is a batch of size $b$, such that $x \in \mathbb{R}^{b \times t \times f \times d}$, where $t=36$ is the number of temporal (consecutive) frames that form the sequence, $f=6$ is the number of features (2 for each modality) and $d=3$ is the number of dimensions of each feature (Cartesian coordinates). The CNN model consists of two convolutional layers, applied on each modality (eye, head and finger) separately, each with a kernel size of $2 \times 2$ and a Rectified Linear Unit (ReLU) activation function, followed by an average pooling layer of size $2 \times 1$.
The convolutional feature maps are then concatenated and two more convolutional layers of kernel size $3 \times 3$ with ReLU activation functions are applied. The convolutions in these layers share the information across the modalities. The number of kernels for each convolution layer is selected to be 128. A flatten layer is used to vectorize the feature maps before finally applying a fully connected layer, which uses linear activation to provide the linearly regressed fused 3D direction vector, $y \in \mathbb{R}^{b \times d}$. The CNN model has approximately 0.5M parameters when all three modalities are used as inputs, approximately 0.43M parameters when two modalities are used and about 0.36M parameters when only one modality is used.
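The stated parameter count of roughly 0.5M can be reproduced with simple back-of-the-envelope arithmetic. Note that the padding mode ('same'), the placement of the pooling layer and the channel bookkeeping below are assumptions chosen only to illustrate the order of magnitude, since these details are not fully specified above.

```python
def conv2d_params(kh, kw, c_in, c_out):
    """Number of weights plus biases in a 2D convolution layer."""
    return kh * kw * c_in * c_out + c_out

K = 128  # kernels per convolution layer, as stated above

# Per-modality branch (3 branches): input channels = 3 (x, y, z),
# two 2x2 convolutions ('same' padding assumed).
branch = conv2d_params(2, 2, 3, K) + conv2d_params(2, 2, K, K)

# Shared trunk after concatenation: two 3x3 convolutions with K channels.
trunk = 2 * conv2d_params(3, 3, K, K)

# Average pooling 2x1 halves 36 frames to 18; the 2 features per modality,
# concatenated over 3 modalities, give 6. Flatten 18*6*K, dense layer to 3D.
dense = 18 * 6 * K * 3 + 3

total = 3 * branch + trunk + dense  # roughly 0.54M, in line with the ~0.5M above
```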
\subsubsection{Ground Truth} \label{sec:gt}
In order to calculate the ground truth in the cockpit use case, the measured points of the AOIs (shown in Figure \ref{fig:AOIs_scatter}) were translated to the car coordinate system. In the environment use case, the GPS coordinates of the eight corners of each POI were measured in the WGS84 (World Geodetic System) standard. The geodetic coordinates of the POI corners were first converted to Cartesian Earth-Centered, Earth-Fixed (ECEF) coordinates and then to the car coordinate system using an affine transformation with the rotation and translation matrices calculated from the car pose.
We then defined the ground truth as the unit-norm 3D vector from the origin (i.e., behind the driver's seat) to the center of the AOI or POI, where the center is calculated by taking the mean of the measured corner points of the AOI or POI, respectively.
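This ground truth construction can be sketched in a few lines, assuming the corner points are already expressed in car coordinates:

```python
import numpy as np

def ground_truth_vector(corners):
    """Unit-norm vector from the car origin (behind the driver's seat) to
    the centroid of an object's measured corner points, given as an
    (n, 3) array already expressed in car coordinates."""
    center = np.mean(np.asarray(corners, dtype=float), axis=0)
    return center / np.linalg.norm(center)
```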
\subsubsection{Loss function}
For the training of the network, Mean Angular Distance (MAD) between the output vector and the ground truth vector was used as the loss function, $\mathcal{L}$. In other words, the angle between the two vectors is minimized. Mathematically:
\begin{align}
\mathcal{L} = \text{MAD} &= \frac{1}{N} \sum^N_{i=1} \theta_i \qquad \qquad \quad \qquad \in \ [0,\pi]
\label{eq:eq1}
\\
&= \frac{1}{N} \sum^N_{i=1} \text{arccos} \left( \frac{{\hat{\bm y}}_i \cdot \ {\bm y}_i}{\|{\hat{\bm y}}_i \| \ \| {\bm y} _i\|} \right)
\end{align}
where $ \hat{\bm y}_i$ is the $i$-th 3-dimensional predicted vector, ${\bm y}_i$ is the $i$-th 3-dimensional ground truth vector, $\theta_i$ is the angle between the two 3D vectors, and $N$ is the total number of samples.
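A NumPy version of this loss is shown below for evaluation purposes; during training it would be expressed with the differentiable operations of the deep learning framework. The clip guards against numerical round-off pushing the cosine slightly outside $[-1, 1]$.

```python
import numpy as np

def mean_angular_distance(y_pred, y_true):
    """Mean Angular Distance (radians) between batches of predicted and
    ground truth 3D vectors, following the definition above."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    cos = np.sum(y_pred * y_true, axis=1) / (
        np.linalg.norm(y_pred, axis=1) * np.linalg.norm(y_true, axis=1))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```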
\subsection{Model Training}
For each modality or combination of modalities in both use cases, the shallow case distinction model and the CNN fusion model were trained separately. Each training was performed with the same parameters: a batch size of 32, Adam optimizer with a variable learning rate starting from 0.001, and 50 epochs. The model which minimized the validation loss was chosen to evaluate the test set. Finally, the weighted average of the test sets provided the performance metrics.
\subsection{Performance Metrics}
\subsubsection{Classification Accuracy}
To measure the performance of the case distinction model, we use the binary classification accuracy, as there are only two cases, i.e., the percentage of correctly identified cases among all referencing events.
\subsubsection{Mean Angular Distance (MAD) and Standard deviation of Angular Distance (Std.AD)}
We use two metrics to measure the performance of the fusion model in terms of precision. The MAD is used to evaluate the precision of the regression output of the model. It is defined as the mean of the angular distances between the predicted and ground truth vectors and is the same as the loss function shown in Eq. \ref{eq:eq1}. The smaller the angular distance between the vectors, the more precise the prediction; therefore, a lower MAD means better precision. We also calculate the standard deviation of the angular distance (Std.AD) to analyze the variation in the angular distances.
\subsubsection{Hit Rate}
\begin{figure}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=0.478\textwidth]{images/pointing_angles.jpg}
\centering
\caption{Top-down view of user pointing to POI. Here, the direction vector of head is not counted as a hit as it is outside the angular range of the POI, while directions of gaze and finger are considered as hits.}
\label{fig:hits}
\end{figure}
For each pointing reference, a hit was counted if the predicted (output) direction vector was within the range of the angular width and angular height of the object (i.e., AOI or POI), with a tolerance of 2° and 1°, respectively. This is illustrated in Figure \ref{fig:hits}.
The hit rate was then calculated by dividing the sum of all hits by the total number of referencing events, shown in Eq. \ref{eq:eq2}.
\begin{align}
hit\_rate& = \frac{\sum hit}{\sum referencing\_events} \label{eq:eq2} \\
\text{and } \qquad hit &=
\begin{cases}
1 \quad \text{if } \; (\theta_{min} - 2 < \theta < \theta_{max} + 2) \\ \text{ and } \ (\phi_{min} - 1 < \phi < \phi_{max} + 1)\\
0 \quad \text{otherwise}
\end{cases}
\end{align}
where $\theta$ and $\phi$ are the horizontal (hereafter called yaw) and vertical (hereafter called pitch) angles of the vectors (when converted to spherical coordinates), respectively, and $\theta_{min}$, $\theta_{max}$, $\phi_{min}$ and $\phi_{max}$ are the minimum and maximum angles of the angular width in yaw and pitch for the referenced object with respect to the car.
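The hit test and hit rate defined above can be sketched as follows; angles are in degrees, and each object's extent is given as its (min, max) yaw and pitch ranges w.r.t. the car:

```python
def is_hit(yaw, pitch, yaw_range, pitch_range, tol_yaw=2.0, tol_pitch=1.0):
    """True if a predicted direction (degrees) falls within the object's
    angular extent, using the 2 deg yaw and 1 deg pitch tolerances above."""
    yaw_min, yaw_max = yaw_range
    pitch_min, pitch_max = pitch_range
    return (yaw_min - tol_yaw < yaw < yaw_max + tol_yaw
            and pitch_min - tol_pitch < pitch < pitch_max + tol_pitch)

def hit_rate(predictions, object_ranges):
    """Fraction of referencing events whose (yaw, pitch) prediction hits
    the referenced object's (yaw_range, pitch_range)."""
    hits = [is_hit(y, p, yr, pr)
            for (y, p), (yr, pr) in zip(predictions, object_ranges)]
    return sum(hits) / len(hits)
```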
It is important to note that the hit rate is not the accuracy of correct identification of the AOI or POI. The object (AOI or POI) can, in some cases, still be correctly identified even though it was not hit, based on the closest cosine proximity. In the context of this paper, we do not compare the object identification accuracy, as this depends upon the location and density of the objects, which differ between the cases we study.
\section{Experiments and Results} \label{exp}
In this section, we analyze and discuss the results obtained from our various experiments. For each experiment, we test our results using a weighted average from the 11-fold cross-validation as discussed before in Section \ref{sec:data_split}.
\subsection{Case Distinction Results}
\begin{figure*}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/case_distinction.png}
\centering
\caption{Classification accuracies for use case distinction for all modalities}
\label{fig:distinction_accuracy}
\end{figure*}
As there is a considerable difference in the distributions of the pitch angles of the modalities for the two use cases, especially for the finger direction and position (shown in Figures \ref{fig:fngr_pos} and \ref{fig:fngr_dir}), we expected a simple CNN model to handle the classification task. While using all six features from the three modalities, an 11-fold classification accuracy of 98.6\% was achieved (see Figure \ref{fig:distinction_accuracy}). We observed eye gaze as the dominant contributor with an accuracy of 96.3\%, which was further enhanced by the finger modality to 98.1\%, possibly because of the different finger positions for inside and outside pointing. When using only the finger pose as input to the model, a classification accuracy of 91.6\% was achieved.
\subsection{Ablation Study: Modality Specific and Fusion Results}
The ablation study helps to understand the effects of the different input modalities on the network. We use single modalities, combinations of two modalities and all three modalities simultaneously to train the fusion model, and analyze the effects of adding modalities in both the cockpit and environment cases. Figures \ref{fig:MAD} and \ref{fig:hit_rate} show a comparison of the results for all modality combinations in both cases. It is important to keep in mind that each result involved a new training of the CNN model based on the chosen subset of modalities.
\begin{figure*}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/MAD_and_STDAD.png}
\caption{MAD and Std.AD of the resultant for different combinations of modalities in the cockpit and environment use cases}
\label{fig:MAD}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/hit_rate.png}
\centering
\caption{Hit rates of the resultant for different combinations of modalities in the cockpit and environment use cases}
\label{fig:hit_rate}
\end{figure*}
\subsubsection{Cockpit Use Case}
In the cockpit use case, the model trained with only finger modality performs the best amongst the models trained with single modalities. It has an MAD of 6.1° which is lower than the ones obtained using only eye or only head, even though the standard deviation of finger is relatively high (see Figure \ref{fig:MAD}). By adding eye pose or head pose, the MAD is reduced to 3.7° or 2.7°, respectively, thereby enhancing the precision. Fusion of all three modalities has the best outcome with an MAD of 2.5°. The same trend can be seen in hit rates. Finger has a hit rate of 75\%, which is the highest hit rate amongst the three modalities. The combination of all three modalities increases the hit rate to about 87\% (see Figure \ref{fig:hit_rate}).
The finger modality exhibits the highest precision (i.e., an MAD of 6.1°). This is due to the small distance of the AOIs from the finger tip, as the AOIs are close to the driver's hands. Eye gaze being less precise is counterintuitive at first glance. However, a deeper analysis of the recorded videos of the drivers from the additional four cameras discussed in Section \ref{sec:apparatus} revealed that the drivers were mostly looking downwards, and in some cases the eyelids covered the pupils partially, which caused erroneous tracking of the gaze. This was especially true for AOIs 1, 2 and 3, which lie near the gearbox. Because of the erroneous tracking and the volatile nature of eye gaze, its MAD of 7.2° is the largest in the cockpit use case, with a Std.AD of 8.6°. It is interesting to see that despite the larger MAD of the gaze modality compared to head, the hit rate of the gaze modality at 56\% is slightly higher than that of the head modality. This indicates that with gaze, the predictions hit the target AOI slightly more often, but on the other hand, predictions using only gaze have more outliers compared to head, which causes the Std.AD to be relatively high.
Even though finger has the most hits amongst all three modalities, it has a relatively high Std.AD, indicating many outliers (i.e., predictions with very large error). Upon careful analysis of the sensor data and the recorded videos from the additional cameras, we were able to identify a few reasons for the outliers. The main reason in the cockpit use case was the driver's left hand pointing towards AOIs 10 and 11. Since these two lie below the steering wheel, far from the field-of-view of the GCS, the tracking of the left hand often produced erroneous measurements, or the left hand was not tracked at all. Another reason is the differing pointing direction for the same AOI: e.g., for AOI 9 the majority of the users pointed with the right hand, but there are a few samples with the left hand as well. Since AOI 9 is so close to the driver, the change of hand causes a large difference in the angle of the pointing direction, because the finger position and direction differ per event w.r.t. the ground truth, which is always kept the same for consistency.
\subsubsection{Environment Use Case}\label{sec:results_environment}
In this case, the finger modality has the lowest precision (i.e., the highest MAD at 19.9°), while eye has the best precision among the three modalities with an MAD of 9.3°. The finger modality also has a very high standard deviation, i.e., a Std.AD of 21.7° (see Figure \ref{fig:MAD}). It was observed that when using only the finger modality, the predictions had many outliers with very large errors, which cause the larger values of MAD as well as Std.AD. In addition to the outliers, due to the relatively larger size of POIs in comparison with the size of AOIs, the participants' pointing directions have a large variance, as they were free to choose any place on the POI to point to. This is one of the reasons why a relatively larger dataset was required for an adequate prediction precision in the environment use case as compared to the cockpit use case. However, the main reason for the finger modality being least precise was the use of the left hand for pointing to the left side, which resulted in the hand dropping out of the field of view of the gesture camera, causing partial unavailability of the finger pose. This happened in about 20\% of the referencing events. The majority of the outliers lie in car poses 2 and 3, each having about 49\% outliers, while car poses 1 and 4 only had about 1\% outliers. In car pose 3, about 30\% of the pointing events were performed with the left hand, while the other poses each have about 10\% with the left hand.
Looking at the hit rate in the environment use case in Figure \ref{fig:hit_rate}, the eye modality has the highest hit rate at 61\%, while the finger modality has the lowest hit rate at 38\%. These results are almost contrary to the cockpit use case. Amongst single modalities, eye has the highest hit rate in the environment use case and the lowest precision in the cockpit use case (see Figures \ref{fig:MAD} and \ref{fig:hit_rate}), while finger has the lowest hit rate as well as precision in the environment use case. In the cockpit use case, in contrast, finger has the highest hit rate and precision, while eye has the lowest precision. Therefore, to tackle both use cases, uni-modal input would not be an optimal choice, as the above-mentioned results reveal that two different modalities perform best in the two use cases.
Furthermore, the fusion of all modalities increases the hit rate to 68\% and reduces the MAD to 7°. However, the Std.AD of 8.6° obtained when fusing all three modalities is still relatively high in the environment use case. This is because, despite the relatively high availability of eye compared to finger, the eye gaze features are often missing, especially when looking to the far right or the far left, such as in car pose 2 or car pose 3, respectively. The face is also often occluded when the pointing arm appears in front of it, thereby hampering the head tracking. Nevertheless, the results obtained from fusing all three modalities show a clear improvement over uni-modal inputs in terms of hit rate as well as MAD.
\begin{figure*}[ht]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/drivers.png}
\centering
\caption{Drivers' specific results in terms of MAD and hit rate using the fusion of all three modalities (head, eye and finger)}
\label{fig:drivers}
\end{figure*}
\subsection{Driver Specific Results}
The results pertaining to specific driver behaviors are shown in Figure \ref{fig:drivers} for all 11 drivers. The precision and hit rate obtained for each driver in both cases have a large variance. An interesting outcome that we observe here is that some drivers have good pointing precision as well as hit rate in one use case, but not in the other. For example, driver '2' has a relatively high hit rate and low MAD, i.e., high precision, in the cockpit use case (shown with blue diamond markers), but has the lowest hit rate and a relatively low precision in the environment use case (shown with orange square markers). Similarly, driver '3' has the lowest hit rate in cockpit use case, but has slightly above average hit rate and precision in the environment case. Drivers '1' and '9' appear to have the best hit rate and precision in cockpit referencing, while drivers '11' and '8' appear to have the best hit rate and precision for referencing in the environment use case. With this, we can conclude that drivers behave differently in different cases. More specifically, drivers have different strengths and weaknesses as to how accurate they are in employing the various modalities, and an extensive study of both cases is indeed necessary.
\subsection{Cross-dataset Learning} \label{sec:cross_dataset}
\begin{figure}[t]
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/cross_model_MAD.png}
\caption{MAD and Std.AD of cross-dataset learning}
\label{fig:cross_dataset_mad}
\end{subfigure}
\begin{subfigure}{.478\textwidth}
\centering
\includegraphics[trim=0 0 0 0,clip, width=\textwidth]{images/cross_model_hit_rate.png}
\caption{Hit rate of cross-dataset learning}
\label{fig:cross_dataset_hit}
\end{subfigure}
\caption{Cross-dataset learning MAD, Std.AD and hit rate using the fusion of all three modalities (head, eye and finger)}
\label{fig:cross_dataset}
\end{figure}
In the previous sections, we either used only cockpit data for both training and testing, or only environment data for both. In this section, in order to show the differences between the two use cases, we conducted multiple tests with cross-dataset learning of the CNN model. This means that for testing on the environment use case, the model is trained on the cockpit dataset only and tested on the environment data only, and vice versa for testing on the cockpit use case. The results from the cross-dataset learning are shown in Figure \ref{fig:cross_dataset}.
The MAD and Std.AD of the model trained on the cockpit dataset and tested on the environment dataset are 30.8° and 14.1°, respectively (see Figure \ref{fig:cross_dataset_mad}), which are significantly higher than when the model was trained and tested on environment data only. A hit rate of only 2.7\% was achieved in this test (see Figure \ref{fig:cross_dataset_hit}). Similarly, using only the environment dataset for model training and testing on the cockpit dataset, an MAD and Std.AD of 28.3° and 12.6°, respectively, were achieved, while only 0.7\% of the events hit the target AOI. Despite the fact that the users simply point to either an AOI or a POI in both use cases, the results obtained from cross-dataset learning indicate that one model cannot be generalized for both use cases in an optimal manner. This might be caused by the unavailability of certain modalities in either use case. For example, we observed that the finger modality has the highest availability in the cockpit use case, with the highest hit rate as well, while the head modality had the highest availability in the environment use case and the eye modality had the highest hit rate there. Consequently, a model trained using only inside-vehicle data will rely more on the finger modality, which we have shown to be less precise than gaze in the outside-vehicle use case, resulting in decreased precision for POIs. Therefore, in order to build an application with deictic referencing, it is vital to have the appropriate variation of pointing directions in the dataset that will be used for training.
Furthermore, we conducted tests using all data (i.e., using both cockpit data and environment data simultaneously for training). The test MAD and test Std.AD for the cockpit use case were 3.9° and 8.1°, respectively, which is 1.4° larger in MAD and 4.4° larger in Std.AD than when we used only cockpit data for training. The hit rate decreased from 86.8\% to 76.9\%. When testing on the environment data with the model trained using both datasets, no significant changes were seen. This might be induced by the higher ratio of environment data compared to cockpit data.
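For concreteness, the evaluation metrics used throughout this section (MAD, Std.AD and hit rate) can be sketched as follows. The function below is an illustrative simplification of our own naming: it treats the AOI/POI hit test as a fixed angular threshold, whereas the actual evaluation uses the true target geometry.

```python
import numpy as np

def pointing_metrics(pred_deg, true_deg, aoi_radius_deg):
    """Mean/std of angular distance and hit rate for pointing estimates.

    pred_deg, true_deg: predicted and ground-truth directions in degrees,
    one entry per referencing event.  aoi_radius_deg: an event counts as a
    'hit' if its angular error is below this threshold (a stand-in for the
    true AOI/POI geometry)."""
    err = np.abs(np.asarray(pred_deg, dtype=float)
                 - np.asarray(true_deg, dtype=float))
    err = np.minimum(err, 360.0 - err)        # wrap differences to [0, 180]
    mad = float(err.mean())                   # mean angular distance
    std_ad = float(err.std())                 # std of angular distance
    hit_rate = float((err < aoi_radius_deg).mean())
    return mad, std_ad, hit_rate
```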
\section{Conclusion}
In this paper, we analyzed and studied features from three modalities, eye-gaze, head and finger, to determine the driver's referenced object, while using speech as a trigger. The experiments were divided into two types, pointing to objects inside the vehicle and pointing to objects outside the vehicle. For the objects inside the vehicle, finger pointing was observed to be the dominant modality, whereas, for the objects outside, gaze was the dominant modality amongst the three. This shows that there is not a single modality that would be optimal for both types of pointing. Rather, as the sensors do not offer 100\% tracking availability for single modalities because of multiple factors such as occlusion or movement out of view, there is a need for multimodal fusion for improved recognition of the referenced direction as well as for better generalization for different use cases.
Therefore, we propose a 2-stage CNN based multimodal fusion architecture to initially determine whether the driver's referenced object lies inside or outside the vehicle. In the second step, based on the recognized use case type, the appropriately trained model for fusion of the modalities is applied to better estimate the pointing direction. We successfully identified the placement of the referenced object to be inside or outside the vehicle with an accuracy of 98.6\%. The fusion of all three modalities has been shown to outperform individual modalities, in terms of both the mean angular distance as well as the hit rate. Referencing interior objects proves to be more precise than referencing exterior objects. The hit rate for the interior objects is also shown to be greater than the hit rate of exterior objects, mainly due to the shorter distance of the interior objects to the driver's hands.
Furthermore, we compared mean angular distance and hit rate of the drivers in both cases, and concluded that drivers' referencing behavior is different in the two cases.
In addition to this, we compared cross-dataset performances in the two use cases and illustrated that a model trained on one use case cannot produce sufficiently good results on the other, because the two use cases in fact exhibit different limitations and conditions. Simultaneous use of all data for a generalized approach may be one solution; however, this results in a slight reduction in precision (i.e., increase in mean angular distance) as well as hit rate for inside-vehicle objects.
In general, our paper provides a novel application of natural user interaction for driver assistance systems exploiting the inter-dependencies between multiple modalities. This paves new ways for further work in recognizing the driver's referencing intent which would include both the referenced object as well as the action to be taken, allowing a more natural user experience.
\begin{acks}
We are grateful to Ovidiu Bulzan and Stefan Schubert (BMW Group, Munich) for their contributions to the experiment design and apparatus setup. We would also like to thank Steven Rohrhirsch, Tobias Brosch and Benoit Diotte (BMW Car IT GmbH, Ulm) for their support in data extraction and pre-processing steps.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{\texorpdfstring
{Orbital $n$ and bond $\bar{n}$ notations}
{Orbital n and bond bar(n) notations}}
In the subsequent analysis, sites are labeled by $n$, and bonds by $\bar{n}$.
The labels $n$ are understood to take on a numerical value indicating the location of the orbital, in units of $\frac{2 \pi \ell_B^2}{L}$.
As will be discussed in Eq.~\eqref{eq:cyl_orb}, when a flux $\Phi_x$ threads through the cylinder, $n \in \mathbb{Z} + \frac{\Phi_x}{2 \pi}$.
We let $\bar{n}$ take on a numerical value which is the average of the site locations to the bond's left and right, i.e. $\bar{n} \in \mathbb{Z} + \frac{\Phi_x + \pi}{2\pi}$.
For example, the bond between sites $n=0$ and $n=1$ is labeled by $\bar{n} = \tfrac12$.
\section{\texorpdfstring
{Symmetry properties of Quantum Hall \lowercase{i}MPS}
{Symmetry properties of Quantum Hall iMPS}}
\label{sec:sym}
As the symmetry properties of MPS are a crucial part of the details that follow, we present the symmetries of quantum Hall systems on a cylinder, a review of $\mathrm{U}(1)$ symmetry conservation for MPS, and elaborate on the $\braket{\bondop{Q}}$, $\dxp{\bondop{Q}}$ notation.
\subsection{Conserved quantities of the infinite cylinder}
In addition to the conservation of number, it is essential to implement conservation of momentum (also called `center of mass'), both because of the tremendous reduction in numerical effort and for the ability to label the entanglement spectrum by momentum. At filling factor $\nu$, we define the charges $(C, K)$ to be
\begin{subequations}\begin{align}
\hat{C} &= \sum_n \hat{C}_n,
\quad\quad \hat{C}_n = \hat{N}_n - \nu
\quad\quad \textrm{(particle number)} , \\
\hat{K} &= \sum_n \hat{K}_n,
\quad\quad \hat{K}_n = n ( \hat{N}_n - \nu )
\quad\quad \textrm{(momentum)} ,
\label{eq:app_charges}
\end{align}\end{subequations}
where $\hat{N}_n$ is the on-site number operator. Unlike the charge $C$, the momentum $K$ is peculiar as it behaves nontrivially under a translation by $n$ sites, $\hat{T}_n$,
\begin{equation}
\label{TCM}
\hat{T}_{-n} \hat{K} \hat{T}_n = \hat{K} + n \hat{C}
\end{equation}
which must be accounted for when conserving the two charges in the DMRG algorithm.
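The action of translation on the eigenvalues in Eq.~\eqref{TCM} can be checked directly on occupation patterns: relabeling the orbitals $n \to n + m$ leaves $C$ unchanged and shifts $K$ by $m C$. A minimal sketch with exact rational arithmetic (the occupation pattern is an arbitrary example):

```python
from fractions import Fraction

def charges(occ, nu, offset=0):
    """(C, K) of an occupation pattern on sites n = offset, ..., offset+len-1,
    per Eq. (app_charges): C = sum (N_n - nu), K = sum n (N_n - nu)."""
    C = sum(Fraction(N) - nu for N in occ)
    K = sum((offset + n) * (Fraction(N) - nu) for n, N in enumerate(occ))
    return C, K

nu = Fraction(1, 3)
occ = [1, 0, 0, 1, 0, 0, 1]          # an arbitrary example pattern
C0, K0 = charges(occ, nu)
for m in (1, 2, 5):                  # relabel n -> n + m:  K -> K + m*C
    assert charges(occ, nu, offset=m) == (C0, K0 + m * C0)
```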
If the state has a unit cell of $M$, the \emph{matrices} of the MPS are periodic in $M$; however, due to Eq.~\eqref{TCM}, the \emph{charges} assigned to the Schmidt states are not.
Under translation, the charges of the Schmidt states on bonds $\bar{n}, \bar{n} + M$ are related by
\begin{align}
\label{auxtcm}
\big( \bondop{C}_{\overline{n}+M}, \bondop{K}_{\overline{n}+M} \big)
= \big( \bondop{C}_{\bar{n}}, \bondop{K}_{\bar{n}} + M \bondop{C}_{\bar{n}} - M \dxp{\bondop{C}} \big).
\end{align}
The value $\dxp{\bondop{C}}$ is a constant of the MPS defined in the next section [Eq.~\eqref{eq:dxp}].
\subsection{\texorpdfstring
{$\mathrm{U}(1)$ charge conservation for iMPS}
{U(1) charge conservation for iMPS}}
\label{sec:U1}
Let $\hat{Q}_T = \sum_n \hat{Q}_n$ be an Abelian charge given by a sum of all single site terms $\hat{Q}_n$.
Making a cut on bond $\bar{n}$, we can decompose $\hat{Q}_T = \sum_{n < \bar{n}} \hat{Q}_n + \sum_{n > \bar{n}} \hat{Q}_n = \hat{Q}_L + \hat{Q}_R$.
For a \emph{finite} size state $\ket{\psi}$ with a Schmidt decomposition $\ket{\psi} = \sum_\alpha \lambda_\alpha \ket{\alpha}_L \ket{\alpha}_R$ on bond $\bar{n}$, the Schmidt states must have definite charge, $\hat{Q}_L \ket{\alpha}_L = \bondop{Q}_{\bar{n}; \alpha } \ket{\alpha}_L$.
In the MPS representation, charge conservation is expressed through the corresponding constraint \cite{Schollwock2011,SinghVidal-2011}
\begin{align}
\left[\bondop{Q}_{\overline{r}; \beta} - \bondop{Q}_{\overline{l}; \alpha} - \hat{Q}_{n; j} \right] B^{[n] j}_{\alpha \beta} = 0
\label{eq:Qrep}
\end{align}
where $\overline{r}/\overline{l}$ denote the bonds to the right/left of site $n$. Pictorially, exponentiating the constraint implies
\begin{align}
\raisebox{5mm}{ \xymatrix @!0 @M=0.3mm @R=9mm @C=12mm{
\ar[r] & {\square} \ar[r] & \\ & \ar[u]|{\lhd}_{ \displaystyle\, e^{i\theta \hat{Q}} }
} }
\;\;=\;\;
\raisebox{5mm}{ \xymatrix @!0 @M=0.3mm @R=9mm @C=12mm{
\ar[r]|{\bigtriangledown}^{ \displaystyle e^{-i\theta\bondop{Q}} }
& {\square} \ar[r]|{\bigtriangleup}^>{ \displaystyle e^{i\theta \bondop{Q}} } & \\ & \ar[u]
} } .
\label{eq:Q_pic}
\end{align}
We can define a diagonal operator $\bondop{Q}_{\bar{n}}$ acting on the auxiliary bonds of the MPS with diagonal entries $\bondop{Q}_{\bar{n}; \alpha}$.
It is convenient to define the `bond' expectation value of $\bar{Q}_{\bar{n}}$ by
\begin{align}
\label{eq:bond_exp}
\langle \bar{Q}_{\bar{n}} \rangle \equiv \sum_{\alpha} \lambda^2_{\bar{n}; \alpha} \bondop{Q}_{\bar{n}; \alpha}
\end{align}
where $\lambda_{\bar{n}}$ are the Schmidt values on the bond $\bar{n}$. In the finite case, this gives the expected charge to the left of the cut, $\langle \bar{Q}_{\bar{n}} \rangle = \langle \hat{Q}_L \rangle$.
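As an illustrative sketch (the function name is ours), the bond expectation value of Eq.~\eqref{eq:bond_exp} is simply a Schmidt-weighted average of the bond charges:

```python
import numpy as np

def bond_expectation(schmidt_values, bond_charges):
    """<Q_bar> = sum_a lambda_a**2 * Q_a  [Eq. (bond_exp)]; assumes a
    normalized Schmidt spectrum, sum_a lambda_a**2 = 1."""
    lam2 = np.asarray(schmidt_values, dtype=float) ** 2
    assert abs(lam2.sum() - 1.0) < 1e-8
    return float(lam2 @ np.asarray(bond_charges, dtype=float))
```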
In the case of an iMPS, the necessary and sufficient condition for the iMPS to have definite charge is again that there exist bond operators $\bondop{Q}_{\bar{n}}$ such that Eq.~\eqref{eq:Qrep} is satisfied.
However, Eq.~\eqref{eq:Qrep} is clearly invariant under a uniform shift of the bond charges, $\bondop{Q}_{\bar{n}} \to \bondop{Q}_{\bar{n}} + c$, so we cannot directly interpret $\bondop{Q}_{\bar{n}}$ as the physical charges of the Schmidt states.
This ambiguity was absent in the finite case because the left-most `bond' at the boundary can canonically be assigned charge 0.
To resolve this ambiguity, we can explicitly calculate the charge of a Schmidt state using a regulator, $Q_{\alpha L} \ket{\alpha}_L = \lim_{\epsilon \to 0} \sum_{m < \bar{n}} e^{\epsilon m} \hat{Q}_{m} \ket{\alpha}_L$.
Using Eq.~\eqref{eq:Qrep} we can rewrite this charge using the auxiliary operators $\bondop{Q}_{\bar{m}}$.
Through an abuse of notation, we write $\bondop{Q}_{\bar{m}} \ket{\alpha}_L$ as an operation on the state, by which we mean we insert $\bondop{Q}_{\bar{m}}$ into the corresponding bond of the iMPS.
We find
\begin{align}
Q_{\alpha L} \ket{\alpha}_L &= \lim_{\epsilon \to 0} \sum_{m < \bar{n}} e^{\epsilon m} (\bondop{Q}_{m+1/2} - \bondop{Q}_{m-1/2}) \ket{\alpha}_L \quad \text{[Eq.~\eqref{eq:Qrep}]}
\notag\\
&= \left[ \bondop{Q}_{\bar{n}} - \lim_{\epsilon \to 0} \epsilon \sum_{\bar{m} < \bar{n}} e^{ \epsilon \bar{m} } \bondop{Q}_{\bar{m}} \right] \ket{\alpha}_L
\end{align}
In the limit $\epsilon \to 0$, the second contribution is insensitive to the charges on any finite number of bonds near the cut $\bar{n}$. Assuming the state has a finite correlation length, far from $\bar{n}$ the result becomes independent of the choice of Schmidt state $\alpha$. Making use of translation invariance (assuming a unit cell of length $M$) and taking the limit $\epsilon \to 0$, we find
\begin{align}
\lim_{\epsilon \to 0} \epsilon \sum_{\bar{m} < \bar{n}} e^{ \epsilon \bar{m} } \bondop{Q}_{\bar{m}} \ket{\alpha}_L = \frac{1}{M} \sum_{0 < \bar{m} < M } \langle \bondop{Q}_{\bar{m}} \rangle \ket{\alpha}_L.
\end{align}
where $\braket{ \bondop{Q}_{\bar{m}} }$ is again the bond expectation value of the \emph{infinite} MPS.
Hence the \emph{physical} charge of the Schmidt state is
\begin{align}
Q_{\alpha L} &= \bondop{Q}_{\bar{n}; \alpha} - \frac{1}{M} \sum_{0 < \bar{m} < M } \langle \bondop{Q}_{\bar{m}} \rangle \equiv \bondop{Q}_{\bar{n}; \alpha} - \dxp{\bondop{Q}},
\end{align}
where we define the $\dxp{}$ notation:
\begin{align}
\dxp{\bondop{Q}} \equiv& \frac{1}{M} \sum_{0 < \bar{m} < M } \braket{ \bondop{Q}_{\bar{m}} }.
\label{eq:dxp}
\end{align}
The second term $\dxp{\bondop{Q}}$ is an average of $\langle \bondop{Q}_{\bar{m}} \rangle$ across the $M$ sites of the unit cell.
Using the freedom to shift $\bar{Q} \to \bar{Q} + c$, we can always cancel the second term $\dxp{\bondop{Q}}$ so that $\bar{Q}$ has its `naive' interpretation as the charge of the left Schmidt state.
We will call this the `canonical' choice of bond charges.
The state has fractional charges if the required constant $c$ is a fraction of the elementary charge.
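A minimal sketch of this gauge fixing: averaging $\langle \bondop{Q}_{\bar{m}} \rangle$ over the unit cell gives $\dxp{\bondop{Q}}$, and shifting by $c = -\dxp{\bondop{Q}}$ yields the canonical bond charges; a fractional $c$ signals charge fractionalization. The data layout below is an assumption for illustration:

```python
import numpy as np

def canonical_shift(unit_cell):
    """unit_cell: one (schmidt_values, bond_charges) pair per bond of the
    M-site cell.  Returns c = -<<Q>>, the uniform shift Q_bar -> Q_bar + c
    that makes the bond charges equal the physical Schmidt-state charges."""
    avg = sum(float((np.asarray(lam, dtype=float) ** 2)
                    @ np.asarray(Q, dtype=float))
              for lam, Q in unit_cell) / len(unit_cell)
    return -avg

# toy cell with <Q> = 1/3 on each bond: the required shift is fractional,
# signalling a fractionalized charge sector
cell = [([1.0], [1.0 / 3.0]), ([1.0], [1.0 / 3.0])]
```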
\section{Numerical Methods}
In the following we explain in detail three algorithmic issues particular to FQH DMRG on an infinite cylinder: construction of the matrix product operator (MPO) for the Hamiltonian, $\mathrm{U}(1)$ charge conservation for the momentum $K$ around the cylinder, and ergodicity issues of the DMRG update.
The MPO formulation of iDMRG used here is explained in Refs.~\onlinecite{McCulloch-2008, Kjall-2013}.
\subsection{MPO representation of the FQH Hamiltonian}
\label{sec:numMPO}
An MPO is a direct generalization of an MPS to the space of operators \cite{Verstraete-2004,McCulloch-2007,Kjall-2013},
\begin{equation}
\hat{H} = \sum_{0 \leq \alpha_i < D} \cdots \otimes \hat{W}^{[1]}_{\alpha_1 \alpha_2} \otimes \hat{W}^{[2]}_{\alpha_2 \alpha_3} \otimes \cdots \, \, \, .
\end{equation}
Each $\hat{W}^{[n]}$ is a matrix of operators acting on site $n$. The dimension $D$ of the matrices depends on the Hamiltonian, increasing as longer range components are added. For the systems studied here, $D\sim 100 \mbox{-} 300$.
An arbitrary two-body interaction in the LLL takes the form
\begin{align}
\hat{H} = \sum_i \sum_{0 \leq m, n} U_{mn} \psi^{\vphantom{\dag}}_{i+ 2m + n} \psi^{\dag}_{i + m + n} \psi^\dagger_{i+m} \psi^{\vphantom{\dag}}_i + h.c.
\owns \sum_i \sum_{0 < m, n}^\Lambda U_{m n} \psi_{i + 2m + n}^{\vphantom{\dag}} \psi^\dagger_{i + m + n} \psi^\dagger_{i + m} \psi_i^{\vphantom{\dag}} .
\label{happ}
\end{align}
To represent the MPO exactly in the limit of an infinite cylinder would require taking $D \to \infty$, but as the terms in the Hamiltonian of Eq.~\eqref{happ} decay like Gaussians when insertions become far apart, it is reasonable to truncate the Hamiltonian at some fixed accuracy by keeping only the largest terms. For simplicity, we will assume here a cutoff $m, n < \Lambda \sim L$.
\begin{figure}[t]
\includegraphics[width=0.43\textwidth]{MPOeps}
\caption{%
Illustration of the finite state machine associated with the MPO representation of terms $U_{mn}$ in Eq.~\eqref{happ}.
The machine begins in the node on the far left.
When the first operator $\psi_i$ is placed the machine enters the square grid.
Each row of the grid corresponds to a value of $m$; each column, a value of $n$.
The machine proceeds vertically up the left-most column until placing $\psi^\dagger_{i + m}$, then proceeds through row $m$ until $\psi^\dagger_{i + m + n}$ is placed with weight $U_{mn}$, at which point it skips to the rightmost column.
After descending down the rightmost column the final insertion $\psi_{i + 2m + n}$ is placed, at which point the term is complete and the machine remains in the terminating node to the far right.
The set of all routes from the far left to far right nodes generates precisely the terms $U_{mn}$.
A copy, with $\psi$ and $\psi^\dag$ swapped, also exists to generate the Hermitian conjugate, as well as terms for special cases $m, n = 0$.
}
\label{fig:mpo}
\end{figure}
To illustrate how to construct the MPO for Eq.~\eqref{happ} we view the MPO as a `finite state machine' for constructing Hamiltonians \cite{CrosswhiteBacon2008}.
Assuming translation invariance, to each index $\alpha$ of the matrix $\hat{W}_{\alpha \beta}$ we associate a state `$\alpha$' in a finite state machine, illustrated by a node in a graph.
Each non-zero entry $\hat{W}_{\alpha \beta}$ is a transition probability in the finite state machine, illustrated with an edge.
At the $n$th step, if the machine makes the transition $\beta \to \alpha$ then the operator $\hat{W}_{\alpha \beta}$ is placed at site $n$.
The machine is non-deterministic; if there are two possible transitions out of the state $\beta$, then the paths are taken in superposition, which generates the sum over all terms in the Hamiltonian.
Assuming bosons and focusing on the particular contribution highlighted in Eq.~\eqref{happ}, at each step the MPO will place one of $\mathds{1}$, $\psi$ or $\psi^\dagger$. The MPO has a set of nodes which can be organized into a square grid, essentially in correspondence with the terms $U_{mn}$, as explained and illustrated in Fig.~\ref{fig:mpo}. The rectangular nature of the graph leads to the scaling $D \sim \Lambda^2$ of the MPO.
For fermions, the Jordan-Wigner string can be accounted for by replacing $\mathds{1}$ with the string operator $(-1)^F$ where appropriate.
As for an MPS, $\mathrm{U}(1)$ charge conservation for both number and momenta can be implemented by assigning the appropriate charges $(C, K)$ to the $D$ indices of the auxiliary bond.
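To make the counting concrete, the nodes of the finite state machine of Fig.~\ref{fig:mpo} can be enumerated schematically as below (the Hermitian-conjugate copy and the $m, n = 0$ special cases are omitted, and the node names are ours); the quadratic growth of the node count with the cutoff reproduces the scaling $D \sim \Lambda^2$:

```python
def mpo_states(cutoff):
    """Schematic node set of the finite state machine generating the U_mn
    terms with 0 < m, n < cutoff.

    'idle_l'/'idle_r': before the first / after the last insertion;
    ('left', m):   ascending the left column after placing psi_i;
    ('row', m, n): moving through row m after placing psi^dag_{i+m};
    ('right', k):  descending the right column, k steps before the final psi."""
    states = ['idle_l', 'idle_r']
    states += [('left', m) for m in range(1, cutoff)]
    states += [('row', m, n) for m in range(1, cutoff) for n in range(1, cutoff)]
    states += [('right', k) for k in range(1, cutoff)]
    return states

# the ('row', m, n) grid dominates: bond dimension D ~ cutoff**2
```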
\subsection{Momentum conservation on an infinite cylinder}
\label{sec:numK}
Knowing the charges transform as Eq.~\eqref{auxtcm}, the algorithm proceeds as follows.
We store the $M$ $B$-matrices of sites $n = 0, \dots, M - 1$, and keep track of the auxiliary quantum numbers on the bonds to their right,
$\bar{n} = \frac12, \frac32, \dots, M-\frac12$.
For updates within the unit cell, charge conservation is implemented as usual.
However, for an update acting on sites $0$ and $M-1$, we must `translate' the charge data associated with site-0 to site-$M$ using Eq.~\eqref{auxtcm}.
After updating $B$, the new charge data is translated back to site 0.
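A sketch of the charge translation step, directly implementing Eq.~\eqref{auxtcm} (the function signature is ours):

```python
def translate_charges(C, K, M, C_avg):
    """Translate per-Schmidt-state bond charges from bond nbar to nbar + M,
    Eq. (auxtcm): (C, K) -> (C, K + M*C - M*<<C>>), with C_avg = <<C>>."""
    return list(C), [k + M * c - M * C_avg for c, k in zip(C, K)]
```

In the iDMRG loop this map would be applied to the site-0 charge data before an update acting across the unit-cell boundary, and inverted afterwards.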
\subsection{\texorpdfstring
{Ergodicity of the \lowercase{i}DMRG algorithm}
{Ergodicity of the iDMRG algorithm}}
The final peculiarity of applying iDMRG to the QH effect concerns the `ergodicity' of the 2-site update.
The standard iDMRG algorithm optimizes two neighboring $B$-matrices per step in order to avoid getting stuck in local minima of the energy landscape \cite{McCulloch-2008, White-1992, Kjall-2013}.
This has the added advantage that, unlike a naive 1-site update, the bond dimension $\chi$ can grow during the simulation.
For most Hamiltonians the 2-site update is sufficient to find the optimal state, even if the initial state is taken to be a product state.
However, due to the additional constraint of momentum conservation for QH, starting from some particular state it is \emph{impossible} for the 2-site update to generate amplitude in all possible configurations.
The most naive explanation is that the smallest move available to the DMRG is a `squeeze' involving 4 sites, though this picture is not quite exact.
For the case of fermions, we have formalized and proven the following bound.
Recall that because $(C, K)$ are good quantum numbers, each Schmidt state $\alpha$ on bond $\bar{n}$ can be assigned a definite quantum number $(\bondop{C}_{\bar{n}; \alpha}, \bondop{K}_{\bar{n}; \alpha})$.
We define a combination $\bondop{P}$ of these charges by
\begin{align}
\bondop{P}_{\bar{n}; \alpha} \equiv \bondop{K}_{\bar{n}; \alpha} - \frac{1}{2 \nu} \bondop{C}_{\bar{n}; \alpha}^2 - \bar{n}\bondop{C}_{\bar{n}; \alpha}.
\quad\quad\quad\text{(assume $\dxp{\bondop{C}}=0$)}
\label{eq:defP}
\end{align}
$\bondop{P}$ has been defined so as to be invariant under translation, unlike $\bondop{K}$. For the Laughlin states, which have an entanglement spectrum in one-to-one correspondence with a chiral CFT, $\bondop{P}$ is precisely the total momentum of the CFT's oscillator modes \cite{Zaletel-2012}.
Let $\{P\}$ be the set of $\bondop{P}$'s present on all bonds in the MPS, and let $P_\text{min}, P_\text{max}$ be the minimum and maximum values they take before beginning of DMRG.
Using the standard 2-site DMRG update, $\{P\}$ always remains bounded by $P_\text{min}$ and $P_\text{max}$.
Hence the entanglement spectrum will appear to have a momentum cutoff set by the initial state, and the 2-site update will fail to find a variationally optimal state.
For example, if we use the exact Laughlin state as the seed for DMRG (which has $P_\text{min} = 0$, as the model state is purely `squeezed'), the 2-site update will not arrive at the ground state of the Coulomb Hamiltonian, which has $P_\text{min} < 0$.
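The translation invariance of $\bondop{P}$ can also be seen directly: under a translation by the $M$-site unit cell, $\bar{n} \to \bar{n} + M$ and $\bondop{K} \to \bondop{K} + M \bondop{C}$ [Eq.~\eqref{auxtcm} with $\dxp{\bondop{C}} = 0$], and the two shifts cancel in Eq.~\eqref{eq:defP}. A minimal numerical check with exact rational arithmetic (the sample charges are arbitrary):

```python
from fractions import Fraction

def P(nbar, C, K, nu):
    """P = K - C**2/(2*nu) - nbar*C  [Eq. (defP), assuming <<C>> = 0]."""
    return K - C * C / (2 * nu) - nbar * C

# the invariance is an exact algebraic identity, so any sample values work
nu, nbar, C, K = Fraction(1, 3), Fraction(1, 2), Fraction(2, 3), Fraction(5)
for M in (3, 6, 9):
    assert P(nbar + M, C, K + M * C, nu) == P(nbar, C, K, nu)
```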
Though not stated in these terms, to our knowledge this ergodicity problem was previously dealt with via two methods. One approach initialized the DMRG with a large spectrum of random initial fluctuations, and ensured by hand that the DMRG update preserves several states in each charge sector (e.g.~Ref.~\onlinecite{Zhao-2011}).
The algorithm was nevertheless observed to get stuck for several sweeps at a time, but did converge.
In this approach care is required to ensure the initial state supplies adequate fluctuations $P_\text{min}, P_\text{max}$, and there may be additional more subtle restrictions missed by this bound.
A second approach used `density matrix corrections' (e.g. Ref.~\cite{White2005, Feiguin-2008}).
In the current work we take a brute force (but fail-safe) approach by generalizing the DMRG to an $n$-site update, optimizing $n$ sites at a time.
In the MPS/MPO formulation of DMRG we implement a $2n$-site update by grouping $n$ sites together into a single site of dimension $d = 2^n$ and then perform the usual 2-site update on these grouped sites.
For $n=2$, for example, we simply contract 2 adjacent $B$-matrices of the MPS, taking $B^{[0]j_0} B^{[1]j_1} \to B^{[\tilde{0}] j_0 j_1}$, and likewise for the $W$-matrices of the MPO.
As the complexity of the DMRG update scales as $d^3$, this does come at a cost, though it is partially offset by the increased speed of convergence. We have not seen the algorithm get stuck while using a sufficiently expanded update.
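The site-grouping step admits a compact sketch: two neighbouring $B$-tensors are contracted over their shared bond and their physical legs fused, after which the standard 2-site update applies unchanged. The index convention below (physical, left, right) is an assumption for illustration:

```python
import numpy as np

def group_two_sites(B0, B1):
    """Contract neighbouring MPS tensors of shape (d, chi_l, chi_r) into one
    tensor with a fused physical leg of dimension d0*d1, as used to run the
    2-site update on grouped sites (a 4-site update on the original chain)."""
    d0, chiL, _ = B0.shape
    d1, _, chiR = B1.shape
    B = np.tensordot(B0, B1, axes=(2, 1))   # (d0, chiL, d1, chiR)
    B = B.transpose(0, 2, 1, 3)             # (d0, d1, chiL, chiR)
    return B.reshape(d0 * d1, chiL, chiR)
```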
Though we have not proved it, we believe that a $q+3$-site update will be ergodic for filling fractions $\frac{1}{q}, 1 - \frac{1}{q}$.
For more complicated fractions, such as $\frac{2}{5}$, we have checked ergodicity by trial and error; for instance, a 10-site update arrived at the same final state as a 6-site update, so the latter was used in the reported simulations.
The main advantage of previous approaches, which use a two site update, is the decreased memory required, which quickly becomes a limitation for the 6-site update when $\chi \gtrsim 4000$.
The density matrix correction approach is potentially the optimal way to proceed, once adapted to the infinite DMRG algorithm with a long range MPO, and is being developed for future work.
\subsection{Convergence with respect to truncation errors}
The FQH iDMRG algorithm introduces two truncation errors; an error due to the finite number of Schmidt states kept, and an error due to the finite number of terms kept in the Hamiltonian (which is necessary for the MPO to have finite bond dimension). We refer to these as the MPS and MPO truncation errors respectively, which we address in turn.
\subsubsection{MPS truncation error}
The MPS ansatz implies that only a finite number of states are kept in the Schmidt decomposition, which bounds the possible overlap between the MPS and the true ground state.
While the truncation relative to the exact ground state is not accessible, it is customary in iDMRG to define the `truncation error' $\epsilon_\text{MPS}$ to be the weight of the Schmidt states dropped when projecting the variationally optimal $2n$-site wave function back to the desired bond dimension.
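A sketch of this definition: given a normalized Schmidt spectrum, $\epsilon_\text{MPS}$ is the total weight of the discarded states when the $\chi$ largest are kept (the function name is ours):

```python
import numpy as np

def mps_truncation_error(schmidt_values, chi):
    """eps_MPS: weight of the Schmidt states dropped when keeping the chi
    largest (spectrum assumed normalized, sum_a lambda_a**2 = 1)."""
    lam2 = np.sort(np.asarray(schmidt_values, dtype=float) ** 2)[::-1]
    return float(lam2[chi:].sum())
```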
The truncation error was kept constant while the circumference $L$ was scaled.
To simulate the system at fixed truncation error, the bond dimension grows with the circumference $L$ as \cite{Zaletel-2012}
\begin{align}
\chi \sim b e^{v c L - d_a}
\end{align}
where $c$ is the central charge of the orbital entanglement spectrum, $d_a$ is the topological entanglement entropy, and `$v$' is a non-universal number expected to vary inversely with the correlation length.
$b$ is determined by the chosen error $\epsilon$.
For the data presented in the text, $\epsilon_\text{MPS} = 10^{-5}$.
To assess the effect of finite $\epsilon_\text{MPS}$ on the extracted entanglement entropy $\gamma$, we have calculated $S(L; \epsilon_\text{MPS})$ for $\epsilon_\text{MPS} = 10^{-4}, 10^{-5}, 10^{-5.5}$.
The scaling of $S$, and the resulting estimate of $\gamma$, are reported in Fig.~\ref{fig:t_trunc}.
While $\epsilon_\text{MPS} = 10^{-4}$ introduces spurious oscillations, the relative error in $S$ between $10^{-5}$ and $10^{-5.5}$ remains about 0.2 \% (while the memory requirement is multiplied by about 1.5).
A similar analysis must be made on a case by case basis in order to assess the tradeoff between the reduced accuracy in $S$ and the larger accessible $L$.
We note that in the most naive analysis, the truncation error contributes an error in the topological entanglement entropy, rather than the coefficient of the area law. This may be the source of the 3\% error in our estimate of $\gamma$ for the $\nu = \tfrac{2}{5}$ state. It would be worth investigating (both for FQH and other 2D DMRG studies) whether letting $\epsilon_\text{MPS}$ scale with the circumference might remove this error.
\begin{figure}[t]
\includegraphics[width=18cm]{25t_error}
\caption{%
Effect of finite $\epsilon_\text{MPS}$ on $S$ (data for $\epsilon_\text{MPS} = 10^{-5.5}$ does not extend to $L = 23.5$).
The relative error $\Delta S/ S$ between different $\epsilon_\text{MPS}$ remains bounded.
While large truncation error $\epsilon_\text{MPS} = 10^{-4}$ introduces spurious oscillations, $\epsilon_\text{MPS} = 10^{-5}$ and $\epsilon_\text{MPS} = 10^{-5.5}$ give the same $\gamma$ to within several percent.
}
\label{fig:t_trunc}
\end{figure}
\subsubsection{MPO truncation error}
Truncation of the Hamiltonian to a finite range smears out the interaction along the direction $y$ of the cylinder, but does preserve its locality.
To quantify the truncation error, recall that we keep only a finite number of the $U_{mn}$ (say, the set $mn \in A$), so we define the truncation error as $1 - \epsilon_\text{MPO} = ( \sum_{mn \in A} |U_{mn}| ) / ( \sum_{mn} |U_{mn}| )$.
We hold $\epsilon_\text{MPO} \sim 10^{-2} \mbox{-} 10^{-3}$ constant as we scale the circumference of the cylinder.
Because the spatial extent of this smearing is held constant as the circumference $L$ is increased, it is as if a fixed `cutoff' has been introduced to the Hamiltonian. When scaling $S(L) = \alpha L - \gamma$, the coefficient of the area law may be modified, but the topological entanglement entropy should not be.
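The definition of $\epsilon_\text{MPO}$ can be sketched as below; the Gaussian-decaying toy couplings stand in for the LLL matrix elements, chosen only to illustrate that the error is controlled by the cutoff:

```python
import math

def mpo_truncation_error(U, kept):
    """1 - eps_MPO = (sum over kept |U_mn|) / (sum over all |U_mn|),
    i.e. eps_MPO is the dropped fraction of interaction weight."""
    total = sum(abs(v) for v in U.values())
    retained = sum(abs(U[mn]) for mn in kept)
    return 1.0 - retained / total

# toy Gaussian-decaying couplings, standing in for the LLL matrix elements
U = {(m, n): math.exp(-0.5 * (m * m + n * n))
     for m in range(8) for n in range(8)}
A = [mn for mn in U if max(mn) < 4]        # keep only short-range terms
eps = mpo_truncation_error(U, A)           # small, set by the cutoff
```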
To illustrate the effect of the truncated Hamiltonian, we consider the model Hamiltonian for the $\nu = \frac{1}{3}$ Laughlin state.
The entanglement spectrum of the model wave function is known to have identical counting as the edge CFT; by truncating the model Hamiltonian, the entanglement spectrum is modified at large momenta.
As illustrated in Fig.~\ref{fig:mpo_trunc}, a finite `entanglement gap' is introduced with a magnitude that increases as $\epsilon_\text{MPO} \to 0$.
\begin{figure}[t]
\includegraphics[width=17cm]{MPOtrunc}
\caption{
Effect of finite $\epsilon_\text{MPO}$ on the ground state entanglement spectrum of the $\nu = \frac{1}{3}$ Laughlin state at $L = 16 \ell_B$.
The Hamiltonian approximates the `model' Hamiltonian $V_1$, but is truncated to a finite number of terms.
As illustrated in the inset, this induces a cutoff in the squeezing distance $m \leq m_{\Lambda}$.
It appears that the entanglement gap intersects the spectrum at $P_{CFT} = m_{\Lambda}$, as indicated by the arrows.
}
\label{fig:mpo_trunc}
\end{figure}
\subsection{\texorpdfstring
{Obtaining the full set of minimal entangled ground states $\{\ket{\Xi_a}\}$ from \lowercase{i}DMRG}
{Obtaining the full set of minimal entangled ground states from iDMRG}}
There are two issues when constructing the MES basis.
a) Does the DMRG converge to MESs, or is it favorable for it to converge to superpositions of them?
b) If it does produce MES states, how do you initialize the DMRG in order to obtain all of them?
As to a), it has been shown that the MES basis is in fact the eigenstate basis on an infinite cylinder, with energy densities that have an exponentially small splitting at finite circumference $L$ \cite{Cincio-2012}.
As there is no energetic reason for the DMRG to produce superpositions of the MES (and at finite $\chi$, there is in fact a finite entanglement bias to produce MES), we expect the DMRG to produce MES.
As to b), we first consider the role of the momentum $K$ per unit cell.
The MESs are eigenstates of the momentum $K$: if two infinite-cylinder ground states with different momenta per unit cell are added in superposition, the entanglement entropy must increase, as their Schmidt states are mutually orthogonal.
The DMRG preserves $K$; so if the DMRG is initialized using a state in a momentum sector $K$ that contains an MES, then the DMRG will produce an MES; if the sector does not contain an MES, the optimized energy will be observed to be higher than those of sectors that do.
The first step, then, is to choose an orbital configuration $\lambda$ of the desired $K$ (such as $\lambda = 010$), initialize the iDMRG with the corresponding $\chi = 1$ MPS $\ket{\lambda}_0$ by updating the iDMRG `environments' without further optimizing the state, and then run iDMRG to obtain an optimized state and energy $E_{\lambda}$. Repeating for orbital configurations $\lambda$ of different $K$, we compare the energies $E_\lambda$ to determine which $K$ sectors contain an MES, rejecting those sectors for which $E_{\lambda}$ is not minimal within some tolerance set by the exponentially small splitting due to finite $L$.
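As a sketch of the seeding step, the momentum per unit cell of a root configuration follows directly from Eq.~\eqref{eq:app_charges}; for instance, the three $\nu = \frac13$ unit cells fall into three distinct $K$ sectors:

```python
from fractions import Fraction

def K_per_cell(occ, nu):
    """Momentum K = sum_n n*(N_n - nu) of one unit cell of a root
    configuration [Eq. (app_charges)]."""
    return sum(n * (Fraction(N) - nu) for n, N in enumerate(occ))

# the three nu = 1/3 unit cells lie in three distinct K sectors, so each
# seeds the iDMRG in a different momentum sector
sectors = {occ: K_per_cell(occ, Fraction(1, 3))
           for occ in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]}
```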
For many of the expected phases (such as the Laughlin and hierarchy states), the ground states are uniquely distinguished by $K$, so all MES will be obtained by the procedure just outlined.
For certain cases, however, several of the MES have the same $K$, for instance those corresponding to root configurations 01110 and 10101 of the $k=3$ Read-Rezayi (RR) phase at $\nu=\frac35$~\cite{ReadRezayi99}.
If we initialize the DMRG with 10101, we might worry that it will tunnel into the 01110 MES during the iDMRG, either due to the exponentially small splitting of the physical energies, or because the latter state has $d_a > 1$ and hence higher entanglement.
We argue that the iDMRG, if initialized with an approximation of one MES $b$, will not tunnel into a different MES $a$.
Consider the following three energy scales:
(\textit{1}) the exponentially small, but physical, splitting between the ground-state energy per site due to the finite circumference, $E_{0; ab} = E_{0;a} - E_{0;b}$;
(\textit{2}) the difference in the DMRG truncation error per site, $E_{\chi; ab} = E_{\chi; a} - E_{\chi; b}$, which is inherent to the MPS representation. Each of $E_{\chi; a/b}$ can be made arbitrarily small for sufficient $\chi$, but, at fixed $\chi$, one of the truncation errors may be larger if $d_a \neq d_b$, as the state has higher entanglement (an effect observed for the MR state);
(\textit{3}) the gap $\Delta_{c}$ for inserting a quasiparticle of the type `$c$' that would arise at a domain wall between the two states $a, b$.
If $E_{0; ab} + E_{\chi; ab} < 0$, heuristically the iDMRG may prefer to find the $a$ state.
In this scenario, if we initialize the iDMRG with the $b$ state, it can `tunnel' into the $a$ state by inserting a $c, \bar{c}$ pair near the sites being updated.
As the iDMRG proceeds, the state `grows' by repeatedly inserting new sites at the center of the chain.
The $c, \bar{c}$ pair then get successively pushed out to the left/right of the chain, leaving the $a$ type GS in the central region, which the state eventually converges to.
Effectively, a $c/\bar{c}$ pair has been drawn out to the edges of the `infinite' cylinder, thus tunneling between ground-states, a problem the geometry is supposed to avoid.
However, we do not expect this to happen for energetic reasons. At a given step, the DMRG can only modify the state significantly within a correlation length $\xi$ of the bond.
Hence if a $c, \bar{c}$ pair is created, they can be drawn at most $\xi$ sites apart during the first step.
The cost to tunnel into the $a$ state during the update is
\begin{align}
E = \Delta_{c} - ( \xi E_{0; ab} + E_{\chi; ab})
\end{align}
$E_{0; ab}$ is exponentially small at large $L$, and $E_{\chi}$ should be very small if sufficient $\chi$ is used, while the quasiparticle gap $\Delta_c$ remains finite.
Hence the energy of the quasiparticle provides an energetic barrier for the iDMRG to tunnel between the MESs.
If the root configurations $\lambda$ are close enough to the desired MES for the purposes of the above argument, then by initializing the DMRG with different $\lambda$ of the same $K$, the DMRG should produce the corresponding orthogonal MES.
Testing successively more complicated orbitals, we can check whether we have obtained a full set by summing the quantum dimensions of the states accepted thus far \cite{Cincio-2012}. We have not verified whether this procedure succeeds for a non-trivial case such as the $k=3$ RR state.
If the root configurations $\lambda$ are \emph{not} sufficiently close to the MES for the purposes of the above argument, it is also possible to run iDMRG while including a bias against the MES obtained so far in order to find the additional MES.
\section{Quasiparticle charges}
\label{sec:qp_charges}
In the main text we claimed that the quasiparticle charges $Q_a$ are determined entirely by the entanglement spectrum of the iMPS $\ket{\Xi_a}$, which we demonstrate here in detail. As discussed, we suppose the MPS takes the form of $\ket{\Xi_{\mathds{1}}}$ for $y < 0$ and $\ket{\Xi_a}$ for $y > 0$. The most general form such a state can take is
\begin{equation}
\ket{a} = \sum_{\alpha \beta} \ket{\alpha}_L \otimes \ket{\alpha \beta}_C \ket{\beta}_R
\end{equation}
where $\ket{\alpha}_L$ are the left Schmidt states of $\ket{\Xi_{\mathds{1}}}$, $\ket{\beta}_R$ are the right Schmidt states of $\ket{\Xi_a}$, and $\ket{\alpha \beta}_C$ is an arbitrary set of states in the central `gluing' region. Without loss of generality, we suppose that the gluing region has a length which is a multiple of $q$, $n = \{0, 1, \cdots, q l - 1\}$ for $l \in \mathbb{Z}$. The boundary bonds $\bar{n}_L = \bar{\mathds{1}}$, $\bar{n}_R = \bar{a}$ are indexed by $\alpha/\beta$ respectively. Schematically, we should think of the low-lying Schmidt states on the left/right bond as being in the CFT sector $\mathcal{V}_{\mathds{1}}/\mathcal{V}_{a}$ respectively.
We exploit three basic facts. First, because the central region contains a multiple of $q$ sites, the charge $C$ of $\ket{\alpha \beta}_C$ must be a multiple of the electron charge (which has been chosen to be 1) for all $\alpha, \beta$. Second, the charges of the left Schmidt states are $\bondop{C}_{\bar{\mathds{1}}} - \dxp{\bondop{C}}_{\mathds{1}}$, which differ from each other only by multiples of the electron charge. Likewise, the charges of the right Schmidt states are $- (\bondop{C}_{\bar{a}} - \dxp{\bondop{C}}_{a})$, which differ from each other only by multiples of the electron charge. Hence the charge of the quasiparticle, $Q_a$, satisfies
\begin{align}
e^{2 \pi i Q_a } = e^{2 \pi i \left[ (\bondop{C}_{\bar{\mathds{1}}} - \dxp{\bondop{C}}_{\mathds{1}}) - (\bondop{C}_{\bar{a}} - \dxp{\bondop{C}}_{a})\right]}
\end{align}
Again, we have taken advantage of the fact that $\bondop{C}_{\bar{n}}$ is a constant modulo 1, the charge of an electron.
Finally, because charge conjugation $\mathcal{C}$ acts as spatial inversion on the LLL orbitals, we must have $e^{2 \pi i (\bondop{C}_{\bar{\mathds{1}}} - \dxp{\bondop{C}}_{\mathds{1}})} = 1$.
As claimed,
\begin{align}
e^{2 \pi i Q_a } = e^{2 \pi i \big(\dxp{\bondop{C}}_{a} - \bondop{C}_{\bar{a}}\big)} .
\label{eq:Q_a}
\end{align}
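As a concrete (toy) illustration of Eq.~\eqref{eq:Q_a}, the sketch below evaluates the charge formula on hypothetical bond-charge data of the kind a $\nu = 1/3$ Laughlin computation would produce; the numerical values of $\bondop{C}_{\bar{a}}$ and $\dxp{\bondop{C}}_a$ and the sector names are illustrative assumptions, not computed output:

```python
import numpy as np

def quasiparticle_charge(C_bar_a, C_avg_a):
    # Eq. (Q_a): exp(2 pi i Q_a) = exp(2 pi i (<C>_a - C_bar_a)); return Q_a mod 1.
    return (C_avg_a - C_bar_a) % 1.0

# Hypothetical bond charges C_bar for the three nu = 1/3 MES, in a
# convention where the mean bond charge <C>_a = 0 on every bond.
C_bar = {'1': 0.0, 'qh': -1.0 / 3.0, 'qh2': -2.0 / 3.0}
charges = {a: quasiparticle_charge(C_bar[a], 0.0) for a in C_bar}

# Charges are defined modulo the electron charge: 0, 1/3, 2/3.
assert np.allclose(sorted(charges.values()), [0.0, 1.0 / 3.0, 2.0 / 3.0])
```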
When cutting a MES on any bond, $\bondop{C} \bmod 1$ is single-valued, so the equation has no ambiguity.
Note that we have implicitly assumed the sector $\mathds{1}$ appears in the OES -- but this need not be the case, as for instance in the $\nu = \frac{1}{2}$ bosonic Laughlin state.
If we naively apply the above formula to $\nu = \frac{1}{2}$ Laughlin, we find particles of charge $\pm e/4$, rather than $0, e/2$.
This issue is clarified using an alternate derivation of the charge via \hyperref[sec:flux_mat]{flux matrices}.
\section{Matrix product states on a torus}
\newcommand{\phicyl}{\varphi^\text{cyl}}
Here we explain in detail the procedure to take an infinite cylinder iMPS to a torus MPS.
Our approach is closely related to that of Ref.~\onlinecite{Cincio-2012}, but with the additional complication of having twisted boundary conditions.
\subsection{From an infinite chain to a periodic one}
We first step back and show how to convert the MPS of any gapped, infinite chain to a periodic chain, both for the trivial case when there is no `twist,' and then in the presence of a twist generated by a symmetry.
The construction in the first case, which we explain for completeness, is obvious: we cut out a segment of the iMPS and reconnect the two dangling bonds to form a ring.
For simplicity in what follows we will assume bosonic chains, and introduce the correction for Fermionic chains due to the Jordan-Wigner string at a later point.
Consider two systems, an infinite chain with a unit cell of $N$, and a periodic chain of length $N$.
The sites of the chains are labeled by `$n$' (with $n \sim n + N$ in the periodic case).
Restricting to local Hamiltonians, we can decompose $\hat{H} = \sum_n \hat{H}^{[n]}$, where each $\hat{H}^{[n]}$ is localized around site $n$ over a length $\xi_H \ll N$ (for bosons, for example, these are the terms $\hat{H}^{[n]} = \hat{b}^\dagger_{n+1} \hat{b}_n + h.c.$, though they could extend over many sites).
In the infinite case, translation symmetry implies $\hat{T}^{N} \hat{H}^{[n]} \hat{T}^{-N} = \hat{H}^{[n+N]}$, $\hat{T}$ being the translation operator.
The energetics of the state are determined by the reduced density matrices of the system, $E = \sum_n \mbox{Tr}( \hat{\rho}^{[n]} \hat{H}^{[n]})$.
The $\hat{\rho}^{[n]}$ are reduced density matrices in a region around $n$ large enough to include all the sites affected by $\hat{H}^{[n]}$.
If the $\hat{H}^{[n]}$ of the finite and infinite chains are identical, then to find the ground state of the periodic system it will be sufficient to reproduce the local density matrices $\hat{\rho}^{[n]}$ of the infinite system.
To do so, we first cut out a segment of the iMPS with two dangling bonds, $\Psi_{\alpha\beta} = B^{[0]} B^{[1]} \cdots B^{[N-1]}$, or in the pictorial representation of MPS:
\begin{align}
\Psi_{\alpha\beta} \;=\;
\raisebox{5mm}{\xymatrix @M=0.3mm @R=8mm @C=6mm @!C{
{\alpha} \ar@{-}[r]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{--}[rr] \ar@{-}[d]
&
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\beta}
\\ & 0 & 1 & & N-2 & N-1 &
}}.
\end{align}
We then connect (trace over) the dangling bonds to form a ring MPS:
\begin{align}
\label{eq:ringtrick}
\operatorname{Tr}[\Psi] &\;=
\raisebox{10mm}{\xymatrix @M=0.3mm @R=7mm @C=6mm @!C{
&& \ar@{--}[r] &&&
\\ & {\square} \ar@{-}[r] \ar@{-}[d]
\ar@{-} `l[u] `u[u] [ur]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[d]
\ar@{-} `r[u] `u[u] [ul] &
\\ & N-2 & N-1 & 0 & 1 &
}},
\end{align}
There is no need to further optimize the MPS at the `seam,' because when the complement to the region of $\hat{\rho}^{[n]}$ is large compared to the correlation length, $N - \xi_H \gg \xi$, then up to corrections of order $e^{-N/\xi}$ the $\hat{\rho}^{[n]}$ of the periodic MPS are identical to those of the iMPS.
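The gluing construction can be sanity-checked numerically in a toy setting: for a ring MPS built by tracing a segment of tensors as in Eq.~\eqref{eq:ringtrick}, the norm equals the trace of the $N$-th power of the transfer matrix. The tensors below are random (they represent no particular Hamiltonian); the check only verifies the trace bookkeeping:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, chi, N = 2, 3, 4                       # physical dim, bond dim, ring length
B = rng.normal(size=(d, chi, chi)) + 1j * rng.normal(size=(d, chi, chi))

def amplitude(config):
    # Ring amplitude: Psi[j0 .. jN-1] = Tr(B^{j0} ... B^{jN-1})
    M = np.eye(chi, dtype=complex)
    for j in config:
        M = M @ B[j]
    return np.trace(M)

# Brute-force norm: sum |Psi|^2 over all d^N configurations.
norm_brute = sum(abs(amplitude(c)) ** 2
                 for c in itertools.product(range(d), repeat=N))

# Transfer matrix E = sum_j B^j (x) conj(B^j); then <Psi|Psi> = Tr(E^N).
E = sum(np.kron(B[j], B[j].conj()) for j in range(d))
norm_tm = np.trace(np.linalg.matrix_power(E, N)).real

assert np.allclose(norm_brute, norm_tm)
```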
\subsubsection{Twists}
We now introduce a twist generated by a local symmetry, $\hat{Q} = \sum_n \hat{Q}^{[n]}$, assuming each $\hat{H}^{[n]}$ is individually symmetric under $\hat{Q}$ and that $\hat{T}^{N} \hat{Q}^{[n]}\hat{T}^{-N} = \hat{Q}^{[n+N]}$.
We use a unitary `twist' operator $\hat{G}$ to introduce a twist every $N$ sites of infinite chain, which generates a twisted Hamiltonian:
\begin{align}
\hat{G}(\theta) = \prod_b e^{i b \theta \sum_{n = 0}^{N-1} \hat{Q}^{[n+b N]} }, \quad \quad \hat{H}_{\infty}(\theta) \equiv \hat{G}(\theta) \hat{H}_\infty \hat{G}(-\theta) = \sum_n \hat{H}^{[n]}(\theta).
\end{align}
The chain remains translation invariant, $\hat{T}^{N} \hat{H}^{[n]}(\theta) \hat{T}^{-N} = \hat{H}^{[n+N]}(\theta)$, because $\hat{Q}$ is a symmetry.
We define the twisted Hamiltonian of the periodic chain to be $\hat{H}_{\circ}(\theta) = \sum_{n=0}^{N-1} \hat{H}^{[n]}(\theta)$ with $\hat{H}^{[n]}(\theta)$ taken from the infinite chain.
In the bosonic case, for example, there is a single link with a twist, $b_{0}^\dag b_{-1} e^{i\theta} + h.c.$.
Note that $\hat{H}_{\circ}(\theta)$ is not unitarily related to $\hat{H}_{\circ}(0)$ except when $\theta$ is a multiple of a `flux quantum' $\Phi$, which we can take to be $\Phi = 2\pi$.
What is the ground state $\ket{\theta}_\circ$ of $\hat{H}_{\circ}(\theta)$?
By construction, the Hamiltonians $H_{\circ}, H_{\infty}$ remain locally identical, so given the iMPS for $\ket{0}_\infty$, we can utilize the gluing trick of Eq.~\eqref{eq:ringtrick} justified above.
As $\ket{\theta}_\infty = \hat{G}(\theta) \ket{0}_\infty$, with $\ket{0}_\infty$ the untwisted ground state, the desired iMPS is
\begin{align}
\ket{\theta}_\infty &=
\raisebox{5mm}{\xymatrix @M=0.3mm @R=8mm @C=6mm @!C{
{\cdots\;\;} \ar@{-}[r]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]|{\lhd}^{\displaystyle\;e^{i\theta\hat{Q}}}
& {\square} \ar@{-}[r] \ar@{-}[d]|{\lhd}^{\displaystyle\;e^{i\theta\hat{Q}}}
& {\square} \ar@{-}[r] \ar@{-}[d]|{\lhd}^{\displaystyle\;e^{i\theta\hat{Q}}}
& {\;\;\cdots}
\\ & -2 & -1 & 0 & 1 & 2 &
}}
\end{align}
and so on throughout the chain. Using the conservation rule of Eq.~\eqref{eq:Q_pic}, we can rewrite the above as
\begin{align}
\label{eq:sitetobond}
\ket{\theta}_\infty &=
\raisebox{5mm}{\xymatrix @M=0.3mm @R=8mm @C=6mm @!C{
{\cdots\;\;} \ar@{-}[r]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r]|{\bigtriangledown}^{ \displaystyle \; e^{-i\theta\bondop{Q}} } \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\;\;\cdots}
\\ & -2 & -1 & 0 & 1 & 2 &
}}
.
\end{align}
To obtain $\ket{\theta}_\circ$, we again cut out a segment and glue,
\begin{align}
\label{eq:torus_mps_def}
\ket{\theta}_\circ \;=
\raisebox{10mm}{\xymatrix @M=0.3mm @R=7mm @C=6mm @!C{
&& \ar@{--}[r] &&&
\\ & {\square} \ar@{-}[r] \ar@{-}[d]
\ar@{-} `l[u] `u[u] [ur]
& {\square} \ar@{-}[r]|{\displaystyle\bullet}^*+++{\displaystyle\drehen} \ar@{-}[d]
& {\square} \ar@{-}[r] \ar@{-}[d]
& {\square} \ar@{-}[d]
\ar@{-} `r[u] `u[u] [ul] &
\\ & N-2 & N-1 & 0 & 1 &
}} .
\end{align}
where $\bondop{G} = e^{-i\theta\bondop{Q}}$.
\subsubsection{Fermions}
One additional modification must be made for fermions, due to the Jordan Wigner string.
MPS for fermionic chains are always expressed through the occupation of bosonic operators $\sigma^+_n = (-1)^{\sum_{j < n} \hat{N}_j }\psi^\dagger_n$, where $\hat{N}_i$ is the occupation at site $i$.
In other words, the $B$ matrix at site $n$ generates the state according to $B^{j_n} (\sigma^+_n)^{j_n} \ket{0}$.
On a periodic chain, the fermionic operators $\psi_n \sim \psi_{n + N}$ are periodic, but because of the string the $\sigma^+_n$ are not. In terms of the bond charge operator $\bondop{C}$, the string can be accounted for via
\begin{align}
\bondop{G} = \eta^{(N^F - 1) \bondop{C}} e^{-i\theta\bondop{Q}}, \quad \eta = \pm 1 \textrm{ for bosons/fermions}
\label{eq:fermionsign}
\end{align}
where $N^F$ is the total fermion number and $\eta = \pm1 $ for bosons and fermions respectively.
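The sign structure of Eq.~\eqref{eq:fermionsign} can be checked in a small exact toy model: with the Jordan-Wigner convention above, the periodic boundary hopping $\psi^\dagger_0 \psi_{N-1}$ equals the naive bosonic boundary term dressed by the string factor $(-1)^{\hat{N}^F - 1}$. A minimal sketch for $N = 3$ sites:

```python
import numpy as np
from functools import reduce

N = 3
I2 = np.eye(2)
sp = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^+ = |1><0| in basis (|0>, |1>)
sm = sp.T                                  # sigma^-
Z = np.diag([1.0, -1.0])                   # (-1)^{N_j}

def site_op(op, n):
    # Embed a single-site operator at site n of the N-site chain.
    return reduce(np.kron, [op if j == n else I2 for j in range(N)])

def psi(n):
    # Jordan-Wigner: psi_n = (prod_{j<n} Z_j) sigma^-_n
    return reduce(np.kron, [Z] * n + [sm] + [I2] * (N - n - 1))

parity = reduce(np.kron, [Z] * N)          # (-1)^{N^F}

# Boundary fermion hopping vs. the bosonic term dressed by the string:
lhs = psi(0).conj().T @ psi(N - 1)                     # psi^dag_0 psi_{N-1}
rhs = site_op(sp, 0) @ site_op(sm, N - 1) @ (-parity)  # sigma^+_0 sigma^-_{N-1} (-1)^{N^F - 1}
assert np.allclose(lhs, rhs)
```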
In the following sections, we present the details for this construction pertaining to quantum Hall systems.
First we describe the single-particle basis on an infinite \hyperref[sec:orb_cyl]{cylinder} by solving the Hamiltonian $H_0 = \frac12 (\mathbf{p} + \mathbf{A})^2$, then adapt the basis to a \hyperref[sec:orb_torus]{torus}.
We then show that in the orbital basis, the cylinder and torus Hamiltonians $H_{\infty}, H_\circ$, satisfy the criteria just discussed, so we can obtain the \hyperref[sec:mps_torus]{torus MPS} from the cylinder MPS.
\subsection{The cylinder geometry}
\label{sec:orb_cyl}
Consider a cylinder finite in the $x$-direction with circumference $L_x$, infinite in the $y$-direction, with the following boundary condition and Hamiltonian:
\begin{align}
\psi(x, y) &= \psi(x - L_x, y) e^{i\Phi_x} ,
\label{eq:torus_bc_x}
\\
H_0 &= \frac12 (\mathbf{p} + \mathbf{A})^2 ,
\end{align}
where $\Phi_x$ is the flux threading the $x$-cycle and $\mathbf{p} = -i\nabla$.
In the Landau gauge $\mathbf{A} = \ell_B^{-2}(-y, 0)$, the $x$-momentum $k$ is a conserved quantity.
In that eigenbasis, the wave functions $\phicyl_n(x,y) = \braket{x,y | \phicyl_n}$ in the lowest Landau levels (LLL) are
\begin{align}
\label{eq:cyl_orb}
\phicyl_n(x,y) &= \frac{1}{\sqrt{L_x\ell_B{\sqrt{\pi}}}} e^{ik_nx} e^{-\frac{(y-y_n)^2}{2\ell_B^2}} ,
&&\textrm{with}\;
k_n = n \frac{2\pi}{L_x} ,\;\;
y_n = k_n \ell_B^2 ,\;\;
n \in \mathbb{Z} + \frac{\Phi_x}{2\pi} .
\end{align}
We see that the centers of the wave functions $y_n$ depend on $\Phi_x$; pumping a flux quantum through the $x$-loop shifts all the orbitals by the unit distance $\Delta y = 2\pi\ell_B^2/L_x$.
Although the phases of these orbitals are arbitrary, it is convenient to resolve the ambiguity via the translation operator:
\begin{align}
\hat{T}_y = \exp\left[ -i\tfrac{2\pi}{L_x} \big({-i} \ell_B^2 \partial_y - x\big) \right] ,
\end{align}
which shifts the wavefunctions by $\Delta y$.
We have chosen the phases of the orbitals such that $\ket{\phicyl_{n+1}} = \hat{T}_y \ket{\phicyl_n}$.
\textit{Note.}
We emphasize that the flux $\Phi_x$ is accounted for by labeling sites as $n \in \mathbb{Z} + \frac{\Phi_x}{2\pi}$.
This carries over to the definition of the momentum: $\hat{K}_n = n \hat{C}_n$ includes the \emph{fractional} part $\frac{\Phi_x}{2\pi}$.
This definition of $\hat{K}_n$ is an important detail for the formulas that follow.
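As a numerical sanity check of Eq.~\eqref{eq:cyl_orb} and of the phase convention $\ket{\phicyl_{n+1}} = \hat{T}_y \ket{\phicyl_n}$ (which for these Gaussians reduces to the pointwise identity $e^{i 2\pi x / L_x}\, \phicyl_n(x, y - \Delta y) = \phicyl_{n+1}(x, y)$), one can verify the normalization and the translation identity directly; the parameter values below are arbitrary:

```python
import numpy as np

lB, Lx, Phi_x = 1.0, 7.0, np.pi
dy = 2 * np.pi * lB ** 2 / Lx              # orbital spacing Delta y

def phi_cyl(n, x, y):
    # LLL orbital of Eq. (cyl_orb); n carries the fractional part Phi_x / 2 pi.
    k = 2 * np.pi * n / Lx
    norm = 1.0 / np.sqrt(Lx * lB * np.sqrt(np.pi))
    return norm * np.exp(1j * k * x - (y - k * lB ** 2) ** 2 / (2 * lB ** 2))

n = 2 + Phi_x / (2 * np.pi)
x = np.linspace(0.0, Lx, 200, endpoint=False)          # periodic direction
y = np.linspace(n * dy - 10.0, n * dy + 10.0, 2001)    # Gaussian tails negligible
X, Y = np.meshgrid(x, y, indexing='ij')

# Normalization: integral of |phi_n|^2 over the cylinder equals 1.
norm = (np.abs(phi_cyl(n, X, Y)) ** 2).sum() * (Lx / 200) * (y[1] - y[0])
assert abs(norm - 1.0) < 1e-6

# Translation identity: e^{i 2 pi x / Lx} phi_n(x, y - dy) = phi_{n+1}(x, y).
lhs = np.exp(1j * 2 * np.pi * X / Lx) * phi_cyl(n, X, Y - dy)
assert np.allclose(lhs, phi_cyl(n + 1, X, Y))
```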
\subsection{The torus geometry}
\label{sec:orb_torus}
To go from a cylinder to a torus, we identify points $(x,y)$ with $(x-\tau_xL_x, y-L_y)$ as follows
\begin{align}
\psi(x, y) = \psi(x-\tau_xL_x, y-L_y) e^{i\ell_B^{-2}L_yx + i\Phi_y} ,
\label{eq:torus_bc_y}
\end{align}
or equivalently $\psi(x, y) = \psi(x+\tau_xL_x, y+L_y) e^{-i\ell_B^{-2}L_y(x+\tau_xL_x) - i\Phi_y}$.
The phase $i\ell_B^{-2}L_yx$ is necessary since the gauge potential $\mathbf{A}$ is not periodic when taking $y \rightarrow y+L_y$,
the difference being $\mathbf{A}(x-\tau_xL_x,y-L_y) - \mathbf{A}(x,y) = \nabla(\ell_B^{-2}L_yx)$.
Note that for the above to be well-defined, we need an integer number of fluxes in the torus:
\begin{align}
\frac{L_x L_y}{2\pi\ell_B^2} = N_\Phi \in \mathbb{Z} .
\end{align}
The fluxes $(\Phi_x,\Phi_y)$ parameterize a family of Hamiltonians and the corresponding Hilbert spaces of ground states.
To obtain the eigenstates on the torus, we sum combinations of the cylinder orbitals $\phicyl_n$ such that they respect the boundary condition above.
The single-particle wave functions $\varphi_n(x,y) = \braket{x,y | \varphi_n}$ for the torus are
\begin{align}
\label{eq:phitor}
\varphi_n(x,y) &= \sum_b \phicyl_{n+bN_\Phi}(x,y) \,
e^{-2\pi iN_\Phi \frac{b(b-1)}{2} \tau_x - 2\pi ibn\tau_x + ib\Phi_y} .
\end{align}
We will use as our basis the orbitals $ 0 \leq n < N_\Phi$.
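The boundary condition Eq.~\eqref{eq:torus_bc_y} can be verified numerically for the orbitals of Eq.~\eqref{eq:phitor}; the $b$-sum converges rapidly thanks to the Gaussian envelopes, so a modest truncation suffices. The parameter values below are arbitrary:

```python
import numpy as np

lB, Lx, N_phi = 1.0, 6.0, 4
Ly = 2 * np.pi * lB ** 2 * N_phi / Lx      # integer-flux condition fixes Ly
tau_x, Phi_x, Phi_y = 0.3, np.pi, 0.7

def phi_cyl(n, x, y):
    # Cylinder LLL orbital, Eq. (cyl_orb).
    k = 2 * np.pi * n / Lx
    norm = 1.0 / np.sqrt(Lx * lB * np.sqrt(np.pi))
    return norm * np.exp(1j * k * x - (y - k * lB ** 2) ** 2 / (2 * lB ** 2))

def phi_torus(n, x, y, bmax=30):
    # Torus orbital of Eq. (phitor): b-summed cylinder orbitals with phases.
    total = 0.0 + 0.0j
    for b in range(-bmax, bmax + 1):
        phase = np.exp(-2j * np.pi * N_phi * (b * (b - 1) / 2) * tau_x
                       - 2j * np.pi * b * n * tau_x + 1j * b * Phi_y)
        total += phi_cyl(n + b * N_phi, x, y) * phase
    return total

n = 1 + Phi_x / (2 * np.pi)
x, y = 1.3, 0.4
lhs = phi_torus(n, x, y)
rhs = phi_torus(n, x - tau_x * Lx, y - Ly) * np.exp(1j * (Ly * x / lB ** 2 + Phi_y))
assert np.allclose(lhs, rhs)               # Eq. (torus_bc_y) is satisfied
```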
\subsection{MPS for twisted tori}
\label{sec:mps_torus}
Since the flux $\Phi_y$ corresponds to a twist generated by the charge $\hat{C}$, while the modular parameter $\tau_x$ corresponds to a twist generated by the momentum operator $\hat{K}$, it is natural to suppose the orbital MPS can be obtained using a twist generated by $2 \pi \tau_x \hat{K} - \Phi_y \hat{C}$.
We prove this intuition is correct, but the surprise is that the ground state remains exact (up to corrections $\mathcal{O}(e^{-L_y/\xi})$) even though the twist is performed in \emph{orbital} space, rather than real space.
We first determine the form of the orbital Hamiltonian of the torus, $\hat{H}_\circ(\theta)$, in the basis of Eq.~\eqref{eq:phitor}.
We assume the Hamiltonian arises from products of the electron density operators $\rho(\vec{x})$, is translation invariant in both $x$ and $y$, and that any interactions, such as $\rho(\vec{x})V(\vec{x} - \vec{y})\rho(\vec{x})$, are short ranged in comparison to the length $L_y$ of the torus.
As we will eventually take $L_y \to \infty$, this is not really a restriction.
We make use of the interaction overlap integrals for the \emph{cylinder} orbitals $\phicyl_{n_i}$, which we denote $V_{ \{n_i\}}$.
The two body term, for instance, is encoded in a term $V_{n_1 n_2 n_3 n_4}$.
$V_{\{n_i\}}$ is non-zero only if $\sum_i n_i = 0$, due to momentum conservation around the cylinder, and $V_{\{n_i + 1\}} = V_{\{n_i\}}$ due to magnetic translation invariance along the cylinder.
Furthermore, $V_{ \{n_i\} }$ decays when the separation between indices is such that $L_x/\ell_B \ll |n_i - n_j|$.
Since $L_x/\ell_B \ll N_\Phi$, to exponentially good accuracy we can assume $V_{ \{n_i\} } = 0 $ if any two $n_i$ differ by $N_\Phi$ sites.
For notational simplicity we will illustrate the calculation for the 1-body term, which generalizes in an obvious fashion.
The 1-body Hamiltonian is
\begin{align}
\hat{H} &= \sum_{ 0 \leq n_i < N_\Phi } \sum_{0 \leq b_i \leq 1} V_{n_1 + b_1 N_\Phi, n_2 + b_2 N_\Phi} e^{-f(n_1, b_1) + f(n_2, b_2)} c^\dagger_{n_1} c_{n_2} \\
f(n, b) &= -2\pi iN_\Phi \frac{b(b-1)}{2} \tau_x - 2\pi ibn\tau_x + ib\Phi_y
\end{align}
The summand is invariant under taking all $b_i \to b_i + 1$, so we can safely restrict to terms such that $b_i \in \{0, 1\}$ with at least \emph{one} of the $b_i = 0$ ($V$ vanishes if two indices are $N_\Phi$ apart).
This leads to two types of terms: the `bulk' terms, in which all $b_i = 0$, and the `seam' terms, in which some $b_i = 0$ and some $b_i = 1$.
The seam terms arise when sites at the beginning and end of the unit cell interact.
Since $f(n, 0) = 0$, in the bulk the local Hamiltonians $H^{[n]}(\theta) = H^{[n]}(0)$ are identical to the infinite cylinder Hamiltonians.
Along the seam the $H^{[n]}(\theta)$ acquire phases $f(n_i, 1) = -2 \pi i \tau_x n_i + i \Phi_y$ for each orbital with $b_i = 1$.
For example, if $N_\Phi = 10$, the interaction $V_{9, 11} c^\dagger_9 c_{11} \to V_{9, 11} c^\dagger_9 c_1 e^{f(1, 1)}$.
Consequently the torus $\hat{\rho}^{[n]}$ should be identical to the infinite cylinder $\hat{\rho}^{[n]}$ in the bulk, but differ by phases $f(n_i, 1)$ near the seam.
To account for the phase $f(n_i, 1) = -2 \pi i \tau_x n_i + i \Phi_y$, we use the same conserved quantity manipulations as we did in Eq.~\eqref{eq:sitetobond}, and find the correct twist operator $\bondop{G}$ is indeed
\begin{align}
\drehen &= \eta^{(N^e - 1) \bondop{C}}
\exp\left[ -2\pi i\tau_x \bondop{K} + i\Phi_y\bondop{C} \right] .
\label{eq:drehen}
\end{align}
$\eta = \pm1$ accounts for bosons and fermions respectively, as discussed for Eq.~\eqref{eq:fermionsign}.
There is one subtlety we must emphasize: recall that $\bondop{K}$ is not periodic, due to Eq.~\eqref{TCM}, so the quantum numbers $\bondop{K}_\alpha$ according to the right bond of the last site and the left bond of the first site differ by $N_\Phi \bondop{C}$.
In the above form we assume $\bondop{K}$ uses the quantum numbers according to the bond before the first site, at $\bar{n} = \frac{\Phi_x}{2 \pi}-\frac12$, \emph{not} after the last site.
The first factor $\eta^{(N^e-1)\bondop{C}}$ reflects the fermion statistics among the orbitals.
\section{Computation of flux and modular matrices via the entanglement spectrum}
\label{sec:berry_matrix_comp}
The flux matrices $\mathcal{F}_{x,y}$ give the action of threading a $2\pi$ flux through the $x$- or $y$-loop, while the modular $\mathcal{T}$-matrix gives the action of a Dehn twist.
To derive their expression, we compute the (non-Abelian) Berry phase from the adiabatic changes
$\Phi_{x,y} \rightarrow \Phi_{x,y} + 2\pi$ and $\tau_x \rightarrow \tau_x + 1$, respectively.
In each case a parameter $\kappa$ is varied and the Berry phase $U$ is a $\gsd \times \gsd$ matrix which is a product of two pieces:
\begin{align}
U &= W \left(\mathcal{P} e^{ i \int_{\kappa_i}^{\kappa_f}\!\!d\kappa \, A }\right) ,
\end{align}
with $A$ and $W$ being matrices defined as
\begin{align}
A_{ab}(\kappa) &= \Braket{ \Xi_b(\kappa) | -i\tfrac{\partial}{\partial\kappa} | \Xi_a(\kappa) } ,
& W_{ab} &= \Braket{ \Xi_b(\kappa_f) | \Xi_a(\kappa_i) } .
\end{align}
As $\kappa$ is varied, $\ket{\Xi_a(\kappa)}$ denotes the set of ground states for the boundary conditions specified by $\kappa$;
$A$ is the Berry connection and $W$ is the overlap between the initial and final set of MES.
While the result is independent of the choice of basis, in the MPS construction $ \ket{\Xi_a(\kappa_i) }$ remains the MES basis.
The action of $\mathcal{F}_{x/y}, \mathcal{T}$ are characteristic of the topological phase and robust to perturbations.
The results presented are summarized in Tab.~\ref{tab:BerryPhases}.
Although the flux matrices and topological spins are formalized for a state in the torus geometry, we will show that these quantities only depend on the OES of an infinite cylinder.
In particular, knowing the entanglement spectrum of each MES $\ket{\Xi_a}$ allows us to determine the charge $Q_a$ and topological spin $h_a$ of the quasiparticle $a$.
\textit{Note.}
There is a subtle point in the interpretation of the resulting equations.
The charge $Q_a$ and the spin $h_a$ we will subsequently derive are those of the sector $\mathcal{H}_a$ that would arise in the real space entanglement spectra \emph{if we were to cut the system at} $y=0$.
The entanglement spectra appearing at $y=0$ depend on $\Phi_x$.
It is most natural to choose $\Phi_x$ such that $a = \mathds{1}$ can appear on the cut at $y=0$: the resulting set of `$a$' are then those that would appear on the plane.
We find that for fermions, this occurs for $\Phi_x = \pi$, while for bosons, this occurs for $\Phi_x = 0$.
Hence it is most natural to set $\Phi_x = 0$ for bosons, and $\Phi_x = \pi$ for fermions in the following equations.
\begin{table}[h]
\begin{tabular}{cccc}
\hline\hline
$\kappa$ parameter & Berry phase matrix & Physical observable & Formula \\
\hline
$\Phi_x$ & $\mathcal{F}_x$ & translation structure & Eq.~\eqref{eq:Fx} \\
$\Phi_y$ & $\mathcal{F}_y$ & charge & Eq.~\eqref{eq:Fy} \\
$\tau_x$ & $\mathcal{T}$ & topological spin, Hall viscosity, central charge & Eq.~\eqref{eq:T} \\
\hline\hline
\end{tabular}
\caption{%
Summary of the non-Abelian phases from pumping $\Phi_x, \Phi_y$ and $\tau$.
}
\label{tab:BerryPhases}
\end{table}
\subsection{Computing the Berry phase}
We have already determined the many body state $\ket{\Xi_a(\kappa)} = \Psi^a_{j_0 j_1 \cdots}(\kappa)\ket{j_0, j_1 \cdots; \kappa}$, so it is in principle a mechanical matter to calculate the Berry connection.
The orbital wave function $\Psi^a$ is given by the periodic MPS with a twist [Eq.~\eqref{eq:torus_mps_def}], and $j_n$ is the occupation number of orbital $\varphi_n$ defined in Eq.~\eqref{eq:phitor}.
Our approach is as follows: we first derive the general structure of the Berry phase, by decomposing the result into the `wavefunction' and `orbital' parts.
We then report the results for the constituent terms for $\mathcal{F}_{x/y}, \mathcal{T}$.
\begin{itemize}
\item{$A$. }
The connection $A_{ab}(\kappa)$ can be separated into two components, one from the changing twist in the MPS $\Psi^a$, and one from the changing orbitals:
\begin{align}
\label{eq:A_masterEq}
A_{ab}(\kappa) &= \sum_{ \{j\} }\bar{\Psi}^a_{ \{j\} } (-i \partial_\kappa) \Psi^b_{\{j\}} + \sum_{ \{j\}, \{k\} }\bar{\Psi}^a_{\{j\}} \Psi^b_{\{k\}} \, \bra{\{j\}; \kappa} (-i \partial_\kappa) \ket{\{k\}; \kappa} \equiv A_{ab}^{(\drehen)}(\kappa) + A_{ab}^{(\varphi)}(\kappa)
\end{align}
The first term of Eq.~\eqref{eq:A_masterEq} involves the Berry connection for $\Psi^a$, which depends on $\kappa$ through the twist operator $\drehen$.
In the limit of $L_y\gg \xi$, the required overlap reduces to a `bond expectation value' as defined in Eq.~\eqref{eq:bond_exp},
\begin{align}
A_{ab}^{(\drehen)} = -i \delta_{ab}\Braket{\drehen^\ast \frac{\partial\drehen}{\partial \kappa}} .
\end{align}
The expectation value is taken on the bond where $\drehen$ sits, to the left site $n = \frac{\Phi_x}{2\pi}$.
The second term of Eq.~\eqref{eq:A_masterEq} involves the Berry connection between orbitals:
\begin{align}
A_{ab}^{(\varphi)}(\kappa) = -i \delta_{ab} \sum_n \braket{ \hat{N}_n } \braket{\varphi_n | \partial_\kappa | \varphi_n},
\quad \int_{\kappa_i}^{\kappa_f}\!\! A_{ab}^{(\varphi)}(\kappa) \, d\kappa \equiv \sum_n \theta^{(\varphi)}_n \braket{ \hat{N}_n }
\end{align}
where $\braket{ \hat{N}_n }$ is the average occupation at site $n$.
\item{$W$. }
Likewise, the overlap $W_{ab}$ also contains a twist and orbital component,
\begin{align}
W_{ab}&= \sum_{ \{j\}, \{k\} }\bar{\Psi}^a_{\{j\}}(\kappa_f) \braket{\{j\}; \kappa_f | \{k\}; \kappa_i} \Psi^b_{\{k\}}(\kappa_i).
\end{align}
We distinguish between two cases: for $\mathcal{F}_y$ and $\mathcal{T}$, the orbitals return to themselves, giving
\begin{align}
W_{ab}= \sum_{ \{j\}}\bar{\Psi}^a_{\{j\}}(\kappa_f) \Psi^b_{\{j\}}(\kappa_i) = \delta_{ab} \frac{\drehen|_{\kappa=\kappa_i}}{\drehen|_{\kappa=\kappa_f}} \quad ( \mbox{for }\, \mathcal{F}_y,\, \mathcal{T} ) .
\end{align}
For $\mathcal{F}_x$, the orbitals are translated by 1 site, and we find
\begin{align}
W_{ab}&=\braket{\Xi_b | \hat{T}_y^{-1} | \Xi_a} \frac{\drehen|_{\kappa=\kappa_i}}{\drehen|_{\kappa=\kappa_f}} \quad ( \mbox{for }\, \mathcal{F}_x).
\end{align}
\end{itemize}
Because of this decomposition, the total Berry phase takes the form
\begin{align}
U_{ab} &= W_{ab} \, \exp\left[ i\int\! A^{(\drehen)} \right]
\exp \Braket{ i \sum_\textrm{sites $n$}\! \theta^{(\varphi)}_n \hat{N}_n }
. \label{eq:U_masterEq}
\end{align}
Below we summarize the results for each case, giving an explicit formula for $\mathcal{F}_x, \mathcal{F}_y$ and $\mathcal{T}$.
\subsubsection{Orbital Berry Phase}
A technical difficulty arises when we try to compute the orbital Berry phase $\theta^{(\varphi)}_n$, due to the changing boundary conditions.
To remove all ambiguity, we follow the approach of Ref.~\cite{Keski-Vakkuri-1993} by working in a coordinate system $X, Y$ with \emph{fixed} boundary conditions (which determines the Hilbert space) and vary the metric and gauge potential, which determines the Hamiltonian. In this approach, the boundary conditions, metric, and vector potential read
\begin{subequations}\begin{align}
\tilde\psi(X,Y) &= \tilde\psi(X+L_x, Y) ,
\\ \tilde\psi(X,Y) &= \tilde\psi(X, Y+L_y) \, e^{-i\ell_B^{-2}L_yX} ,
\\ \tilde{g}_{\mu\nu} &= \begin{pmatrix}
1 & \dfrac{\tau_x}{\tau_y} \\ \dfrac{\tau_x}{\tau_y} & 1+\tau_x^2/\tau_y^2
\end{pmatrix} ,
\quad\textrm{with } \tau_y = L_y/L_x ,
\\ \tilde{A}_\mu &= \begin{pmatrix} \frac{\Phi_x}{L_x} - \ell_B^{-2}Y, & \frac{\Phi_y}{L_y} + \frac{\pi\tau_xN_\Phi}{L_y} \end{pmatrix} .
\end{align}\end{subequations}
The odd-looking term $\frac{\pi\tau_xN_\Phi}{L_y}$ in $\tilde{A}_Y$ exists to counteract the $N_\Phi/2$ flux quanta inserted as $\tau_x$ increases by 1.
The coordinate system $(x,y)$ and boundary conditions of Eq.~\eqref{eq:torus_bc_y} are unitarily related to $(X, Y)$ through a change of coordinates and gauge transformation. While we omit the details of the computation, all the orbital overlaps and Berry phases can be unambiguously defined by transforming the orbitals to the $(X, Y)$ coordinates.
We also note that under $\mathcal{T}$, examining the single particle orbitals discussed above shows that the fluxes transform as $(\Phi_x, \Phi_y) \to (\Phi_x, \Phi_y + \Phi_x)$, so $U_T$ does not take the system back to itself unless $\Phi_x = 0$. We return to this issue when discussing modular transformations.
\subsection{\texorpdfstring
{Flux matrices $\mathcal{F}_{x/y}$: quasiparticle charge and Hall conductance}
{Flux matrices: quasiparticle charge and Hall conductance}}
\label{sec:flux_mat}
\begin{itemize}
\item{$\mathcal{F}_y$. }
The flux matrix $\mathcal{F}_y$ describes how the MES $\ket{\Xi_a}$ transform as a flux quantum is threaded through the $y$-loop.
Letting $\kappa = \Phi_y$, the pieces in Eq.~\eqref{eq:U_masterEq} are as follows,
\begin{align}
\theta^{(\varphi)}_n &= -2\pi\frac{n}{N_\Phi} ,
& A^{(\drehen)} &= \braket{\bondop{C}} ,
& W_{ab} &= \delta_{ab} e^{-2\pi i\bondop{C}_{\bar{a}}} .
\end{align}
The number operator $\hat{N}$ may be rewritten in terms of bond operators,
\textit{e.g.} $\hat{N}_n = \hat{C}_n + \nu = \bondop{C}_{\overline{n+1/2}} - \bondop{C}_{\overline{n-1/2}} + \nu$.
With a bit of algebraic manipulation, the Berry phase can be written as
\begin{align}
\mathcal{F}_y &= \exp\left[ 2\pi i (\dxp{\bondop{C}}-\bondop{C} ) + i\nu(\Phi_x-\pi) \right]
\label{eq:Fy}
\end{align}
We see that our formula for $\mathcal{F}_y$ is similar to that for the charge $e^{2\pi i Q_a}$ in Eq.~\eqref{eq:Q_a}.
Note that $\mathcal{F}_y$ is diagonal in the basis of the MES $\ket{\Xi_a}$.
Physically, this is because each quasiparticle type has a well-defined charge, modulo the electron charge.
In Eq.~\eqref{eq:Fy} we explicitly include the $\Phi_x$ dependence, whereas in Eq.~\eqref{eq:Q_a} we implicitly assumed $\Phi_x = \pi$, which is most natural for fermions.
Since $\Phi_x$ translates the state (see below), the term $\nu \Phi_x$ encodes the Su-Schrieffer counting.
\item{$\mathcal{F}_x$. }
Inserting a flux quantum through the $x$-loop in effect translates the MPS by one unit cell.
The orbital momenta, in units of $\frac{2\pi}{L_x}$, are quantized as $n \in \mathbb{Z} + \frac{\Phi_x}{2\pi}$,
and hence as $\Phi_x$ adiabatically increases by $2\pi$, the orbitals $\varphi_n$ evolve into $\varphi_{n+1}$.
It follows that $\mathcal{F}_x$ must be proportional to the translation operator $\hat{T}_y$.
For $\kappa = \Phi_x$, the pieces of Eq.~\eqref{eq:U_masterEq} are
\begin{align}
\theta^{(\varphi)}_n &= 2\pi\tau_x\frac{n}{N_\Phi} ,
& A^{(\drehen)} &= -\tau_x\braket{\bondop{C}},
& W_{ab} &= \delta_{a,b+1} e^{2\pi i\bondop{C}_{\bar{a}}}
\end{align}
($b+1$ refers to the MES which is $b$ translated by one site).
The flux matrix takes the form
\begin{align}
\mathcal{F}_x &= \delta_{a,b+1} \mathcal{F}_y^{-\tau_x} \exp\left[ i\nu(-\Phi_y-\pi) \right]
. \label{eq:Fx}
\end{align}
\end{itemize}
In general, the flux matrices must satisfy the algebra
\begin{align}
\mathcal{F}_x \mathcal{F}_y = e^{2\pi i\nu} \mathcal{F}_y \mathcal{F}_x .
\end{align}
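This algebra is the familiar clock-and-shift algebra; a minimal numerical check for $\gsd = 3$ (e.g.\ a $\nu = 1/3$ Laughlin-like state), with $\mathcal{F}_y$ diagonal in the MES basis and $\mathcal{F}_x$ the cyclic translation, using illustrative eigenvalues $e^{-2\pi i \nu b}$:

```python
import numpy as np

nu, gsd = 1.0 / 3.0, 3
omega = np.exp(2j * np.pi * nu)

# F_y: diagonal in the MES basis (illustrative eigenvalues e^{-2 pi i nu b});
# F_x: translates the MES b -> b + 1 (cyclic shift matrix).
Fy = np.diag([omega ** (-b) for b in range(gsd)])
Fx = np.roll(np.eye(gsd), 1, axis=0)

# Magnetic-translation-like algebra: Fx Fy = e^{2 pi i nu} Fy Fx.
assert np.allclose(Fx @ Fy, np.exp(2j * np.pi * nu) * Fy @ Fx)
# Threading gsd flux quanta returns the MES basis to itself.
assert np.allclose(np.linalg.matrix_power(Fx, gsd), np.eye(gsd))
```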
\subsection{\texorpdfstring
{Dehn twist $\mathcal{T}$: topological spin, central charge, and Hall viscosity}
{Dehn twist: topological spin, central charge, and Hall viscosity}}
Varying $\kappa = \tau_x$ from 0 to 1, we get the Berry phase $U_T$. Note that under $\mathcal{T}$, the fluxes transform as $\mathcal{T}: (\Phi_x, \Phi_y) \to (\Phi_x, \Phi_y + \Phi_x)$.
Thus we expect that at $\Phi_x=0$, $U_T$ should be unambiguously defined, while at $\Phi_x = \pi$, only $U_T^2$ should be unambiguously defined, a point we will return to later.
For the $l$\textsuperscript{th} Landau level, the individual parts of Eq.~\eqref{eq:U_masterEq} are
\begin{align}
\theta^{(\varphi)}_n &= 2\pi\frac{n(n-N_\Phi)}{2N_\Phi} - (2l+1)\frac{L_x}{4L_y} ,
& A^{(\drehen)} &= -\braket{\bondop{K}} ,
& W_{ab} &= \delta_{ab} e^{2\pi i\bondop{K}_{\bar{a}}} .
\label{eq:T_terms}
\end{align}
(The lowest Landau level is labeled by $l=0$.) We remind the reader that for $\Phi_x \neq 0$, the site index $n$ and $\hat{K}_n$ include a fractional part $\frac{\Phi_x}{2 \pi}$; see Eq.~\eqref{eq:cyl_orb}.
We note that the first term of $\theta^{(\varphi)}_n$ has been derived by Wen and Wang in Ref.~\onlinecite{WenWang-2008}.
Combining them, we have
\begin{align}
U_{T;ab} &= \delta_{ab} \,
\exp\left\{2\pi i \left[
\bondop{K} - \dxp{\bondop{H}}
+ \frac{\nu}{2} \left( \frac{\Phi_x^2}{4\pi^2} - \frac{\Phi_x}{2\pi} + \frac{1}{6} \right)
- \frac{(2l+1)\nu}{16 \pi^2 \ell_B^2} L_x^2 \right] \right\}
, \label{eq:T}
\end{align}
where
\begin{align}
\bondop{H}_{\bar{n}} = \bondop{K}_{\bar{n}} - \bar{n}\bondop{C}_{\bar{n}} + \bar{n}\dxp{\bondop{C}} .
\end{align}
In light of Eq.~\eqref{eq:defP} (where $\dxp{\bondop{C}} = 0$), we can write $\bondop{H} = \bondop{P} + \frac{1}{2\nu}\bondop{C}^2$
and interpret $\bondop{H}$ as the ``energy operator'' for auxiliary states.
$\bondop{H}$ is convenient as it is invariant under translation by $q$ sites.
As mentioned, at $\Phi_x=0$, $U_T$ is unambiguously defined because $\bondop{K} \bmod 1$ is single-valued on any bond (for a MES) and so Eq.~\eqref{eq:T} is well-defined.
For $\Phi_x = \pi$, however, only $\bondop{K} \bmod \frac12$ is single-valued which implies that only $U_T^2$ is well-defined (cf.~section on \hyperref[sec:Modular]{modular transformations}).
Finally, we also note that a term proportional to $N^e N_\Phi$ has been dropped from the result.
If we use a convention where $\dxp{\bondop{C}} = 0$, set $\Phi_x = \pi$ (which is most natural for fermions) and restrict to the lowest Landau level,
then Eq.~\eqref{eq:T} simplifies to
\begin{align}
U_{T;ab} &=
\exp\left\{2\pi i \left[
\bondop{K} - \dxp{\bondop{K} - \bar{n}\bondop{C}_{\bar{n}}}
- \frac{\nu}{24}
- \frac{\nu}{16 \pi^2 \ell_B^2} L_x^2 \right] \right\}
.
&& \textrm{(When $l=0$, $\dxp{\bondop{C}} = 0$, and $\Phi_x=\pi$.)}
\label{eq:T_piflux}
\end{align}
This is the form shown in the main text.
\textit{Comments.}
In both Eq.~\eqref{eq:T} and \eqref{eq:T_piflux}, the quantity $\bondop{K} - \dxp{\bondop{H}}$ measures the average momentum to the left of a cut, which is what distinguishes the various ground states from one another (allowing $h_a$ to be extracted).
There are terms dependent on $\nu$ because the formulas are written in terms of the orbital entanglement spectra, as opposed to the real-space entanglement.
In addition, in the thermodynamic limit $\braket{\bondop{C}_{\bar{n}}}$ and $\dxp{\bondop{C}}$ would be equal as long as charge-density-wave order is absent.
(The same holds for $\bondop{H}$.)
Hence the formulas above are also useful if only the entanglement spectrum of a \emph{single cut} is known.
\textit{Discussion.}
In order to interpret $U_T$, we picture the MES as a torus with a topological flux $a$ winding around the $y$-cycle.
We cut the torus at $y=0$, shear the segment of cylinder evenly throughout the bulk so that $y=0$ remains fixed while $y = L_y$ rotates by $L_x$, then reglue the ends.
Since an anyonic flux $a$ terminates at the edge, there is a quasiparticle $a$ on the edge $y = L_y$ which moves once around the circumference under the shear.
The quasiparticle has momentum $\frac{2 \pi}{L_x} ( h_a - c/24)$ (if the edge is not chiral, this should be understood as $h_a - \bar{h}_a - \frac{c - \bar{c}}{24}$),
hence traversing a distance $L_x$ generates a phase $e^{2\pi i(h_a - c/24)} = \theta_a e^{-2\pi i c/24}$.
($\theta_a$ is known as the `topological spin' of the quasiparticle $a$.)
The bulk itself shears as well, with the strain changing as $\frac{L_x}{L_y} d\tau_x$.
The finite `Hall viscosity' $\HallVis$ results in a phase per unit area proportional to the changing strain \cite{Avron-QHViscosity95},
\begin{align}
\theta_\text{bulk}
= \hbar^{-1} \int_{\tau_x=0}^1\!\! (L_xL_y) \HallVis \frac{L_x}{L_y} d\tau_x
= \hbar^{-1} \HallVis L_x^2 .
\end{align}
Together, the expected result is
\begin{align}
U_{T;ab}
&= \delta_{ab} \exp\left[ 2 \pi i \left( h_a - \frac{c}{24} - \frac{\HallVis}{2\pi\hbar} L_x^2 \right) \right]
. \label{eq:UT_hc}
\end{align}
The Hall viscosity is known to be related to the `shift' $\mathscr{S}$ on a sphere via $\HallVis = \frac{\hbar}{4} \frac{\nu}{2 \pi \ell_B^2} \mathscr{S}$ \cite{Read-QHViscosity09}.
To extract the quantities independently, we can first fit the quadratic part to extract $\HallVis$ and isolate the constant part $h_a - c/24$.
Once the $\mathds{1}$ MES and bond are determined, we obtain $c$, and the remaining $h_a$ can be read off directly from the ratio $e^{2\pi i h_a} = U_{T;aa} / U_{T;\mathds{1}\mathds{1}}$.
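The fitting procedure can be illustrated with a minimal numerical sketch. The argument of $U_T$ is linear in $L^2$, so a degree-one polynomial fit in $L^2$ separates the Hall-viscosity slope from the constant $h_a - c/24$. The input values below (Moore-Read-like $\nu$, $\mathscr{S}$, $c$, $h$) are synthetic and chosen purely for illustration:

```python
import numpy as np

# Synthetic check of the fitting procedure: the Dehn-twist phase argument,
#   arg U_T = 2*pi*(h_a - c/24) - (eta_H / hbar) * L^2,   [cf. Eq. (UT_hc)]
# is linear in L^2, so a degree-1 fit in L^2 separates the Hall-viscosity
# slope from the constant h_a - c/24.  Units: hbar = l_B = 1.
# Illustrative (assumed) Moore-Read-like values:
nu, shift, c, h = 0.5, 3.0, 1.5, 0.0
eta_H = 0.25 * nu / (2 * np.pi) * shift      # eta_H = (hbar/4)(nu/2 pi l_B^2) S

L = np.linspace(10.0, 20.0, 6)               # circumferences in units of l_B
arg_UT = 2 * np.pi * (h - c / 24) - eta_H * L**2

slope, intercept = np.polyfit(L**2, arg_UT, 1)
eta_H_fit = -slope                           # recovered Hall viscosity
h_minus_c24 = intercept / (2 * np.pi)        # recovered h - c/24

print(eta_H_fit, h_minus_c24)
```

With real data the phase is only defined mod $2\pi$, so the argument must first be unwrapped along the $L^2$ axis; the sketch sidesteps this by working with the continuous argument directly.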
\begin{figure}[tb]
\includegraphics[width=170mm]{mr_dehn}
\caption{%
Various quantities characterizing the topological Moore-Read phase at $\nu = 1/2$.
The Berry phase $U_T$ arising from a Dehn twist is acquired from the model Moore-Read wave function via Eq.~\eqref{eq:T_piflux}
and $h, c, \HallVis$ are extracted from Eq.~\eqref{eq:UT_hc} at various circumference $L = L_x$.
(a) The argument of the phase $U_T$ plotted vs.\ $L^2/\ell_B^2$.
For large $L$, the argument becomes linear in $L^2$.
(b) $h$ of the $\sigma_\pm$ quasiparticle, extracted from the ratio of $U_T$ between the $\sigma_\pm$ and $\mathds{1}$ ground states,
$\exp (2 \pi i h_{\sigma_\pm}) = U_T(\sigma_\pm) / U_T(\mathds{1})$.
(c) The Hall viscosity $\HallVis$ extracted by fitting the identity sector to the form $U_{T;\mathds{1}\mathds{1}} = e^{-2\pi i c/24} \exp(-2\pi i \frac{\HallVis}{2\pi\hbar}L^2)$.
The data is presented as the `shift' $\mathscr{S} = (2\pi\ell_B^2 / \nu) (4\HallVis / \hbar)$.
(d) The chiral central charge extracted from $U_{T;\mathds{1}\mathds{1}}$, assuming $\mathscr{S} = 3$ for the Moore-Read state.
(b)-(d) In all cases $L/\ell_B$ must be sufficiently large for these topological quantities to be reliably extracted from the entanglement spectrum.
}
\label{fig:mr_dehn}
\end{figure}
In Ref.~\onlinecite{WenWang-2008}, the differences $h_a - h_b$ can be extracted from `pattern of zeros,' which can be understood as the $L\to0$ limit in which the state becomes a $\chi=1$ tensor product.
In this limit, only the term $\theta^{(\varphi)}_n$ from Eq.~\eqref{eq:T_terms} contributes to the Berry phase $U_T$, and hence
the differences $h_a - h_b$ inferred from Eq.~\eqref{eq:T} reproduce Wen and Wang's result [Eqs.~(32) and (33) of Ref.~\onlinecite{WenWang-2008}].
However, in the limit $L \to \infty$, the ratio $\theta_a/\theta_b = e^{2\pi i(h_a - h_b)}$ computed via Eq.~\eqref{eq:T} matches that of Ref.~\onlinecite{WenWang-2008} \emph{only} when the quasiparticles $a$, $b$ are related by attaching fractional fluxes (in the MES language, the states are related by translation).
Hence in the (single-Landau-level) Abelian case, where all MES are related by flux attachment, the spins $h_a$ can be recovered in the limit $L \to 0$ (using $h_{\mathds{1}} = 0$).
But for non-Abelian cases, where not all MES are related by translation, only the $L \to \infty$ limit gives the correct result,
so the result of Ref.~\onlinecite{WenWang-2008} [Eq.~(33)] appears to be valid only for Abelian phases.
As an example, consider the Moore-Read state at $\nu=\frac12$.
The MR state contains two non-Abelian quasiparticles `$\sigma_+$' and `$\sigma_-$',
with charges $\pm \frac{e}{4}$ and both with quantum dimension $d_\sigma = \sqrt{2}$.
Their topological spin is expected to be $h_{\sigma_\pm} = \frac{1}{2\nu}Q^2 + h_\sigma = \frac{1}{16} + \frac{1}{16} = \frac18$.
The edge of the MR phase consists of a free boson and Majorana mode with combined chiral central charge of $1 + \frac12 = \frac32$.
In Fig.~\ref{fig:mr_dehn}, we plot the values of $h$, $c$ and $\HallVis$ extracted by fitting $U_T$ of the model MR wavefunctions \cite{MooreRead1991} to Eq.~\eqref{eq:UT_hc} at various $L_x$.
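The quoted Moore-Read numbers follow from simple arithmetic, which the following sketch checks with exact rationals (the charge $Q = 1/4$ and the Ising weight $h_\sigma = 1/16$ are taken from the text above):

```python
from fractions import Fraction as F

# Arithmetic behind the quoted Moore-Read values at nu = 1/2:
nu = F(1, 2)
Q = F(1, 4)                        # charge of the sigma quasiparticle
h_ising = F(1, 16)                 # Ising (Majorana) twist-field weight
h_total = Q**2 / (2 * nu) + h_ising   # charge part + neutral part

c_edge = F(1) + F(1, 2)            # chiral boson + Majorana edge mode

print(h_total, c_edge)             # expected: 1/8 and 3/2
```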
\subsection{Modular transformations}
\label{sec:Modular}
The modular transformations are affine maps from the torus to itself, the set of which is the modular group
$\mathrm{PSL}(2, \mathbb{Z}) \cong \mathrm{SL}(2, \mathbb{Z}) / \mathbb{Z}_2$.
For example, the `$T$' transformation corresponds to a Dehn twist sending $\tau \rightarrow \tau+1$ (the same as $\tau_x \rightarrow \tau_x+1$).
The `$S$' transformation rotates the torus sending $\tau \rightarrow -1/\tau$.
(When $\tau_x = 0$, this corresponds to a $\pi/2$ rotation swapping $L_x$ with $L_y$.)
Since $T$ and $S$ generate the entire modular group, we focus only on these two transformations.
The $\mathcal{T}$- and $\mathcal{S}$-matrices describe how the set of ground states transforms under the respective modular transformations \cite{Keski-Vakkuri-1993}.
As discussed in the previous section, $\mathcal{T}$ is a diagonal matrix with entries $\theta_a$, known as the `topological spin': the phase acquired when a quasiparticle of type $a$ is rotated by $2\pi$.
$\mathcal{S}_{ab}$ gives the mutual statistics of braiding $a$ and $b$ around each other.
Generically $\mathcal{T}, \mathcal{S}$ are elements of a projective representation of $\mathrm{SL}(2, \mathbb{Z})$, the double cover of the modular group.
The double cover is necessary because $S^2 = 1 \in \mathrm{PSL}(2, \mathbb{Z})$, but $\mathcal{S}^2$ corresponds to charge conjugation and is not a multiple of the identity; rather $\mathcal{S}^4 \propto \mathds{1}$.
Note that under the modular transformations the fluxes transform as $\mathcal{T}: (\Phi_x, \Phi_y) \to (\Phi_x, \Phi_y + \Phi_x)$, $\mathcal{S}: (\Phi_x, \Phi_y) \to (\Phi_y, -\Phi_x)$.
As we have discussed, $\Phi_i = 0$ is most natural for bosons, but $\Phi_i = \pi$ is most natural for fermions, so we must return to this subtlety.
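As an illustration of the kind of consistency conditions a modular $\mathcal{S}$-matrix must satisfy, the sketch below checks the Ising-type sector $\{\mathds{1}, \sigma, \psi\}$. This is a simplified stand-in, used here purely for illustration, not the full Moore-Read anyon content with its charge sectors:

```python
import numpy as np

# Consistency checks on a candidate modular S-matrix, using the standard
# Ising-type sector {1, sigma, psi} (quantum dimensions 1, sqrt(2), 1; D = 2)
# as a toy example.
s2 = np.sqrt(2.0)
S = 0.5 * np.array([[1.0,  s2,  1.0],
                    [ s2, 0.0, -s2],
                    [1.0, -s2,  1.0]])

d = np.array([1.0, s2, 1.0])          # quantum dimensions
D = np.sqrt(np.sum(d**2))             # total quantum dimension

assert np.allclose(S, S.T)                        # S_{ab} = S_{ba}
assert np.allclose(S[0], d / D)                   # S_{1a} = d_a / D
assert np.allclose(S @ S.conj().T, np.eye(3))     # unitarity
# S^2 is charge conjugation; all Ising anyons are self-conjugate, so S^2 = 1
# and hence S^4 = 1:
assert np.allclose(np.linalg.matrix_power(S, 2), np.eye(3))
print("all constraints satisfied")
```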
\begin{itemize}
\item{Constraining (or determining) $\mathcal{S}$ from $\mathcal{F}$.}
When $\tau_x = 0$, we can use $\mathcal{S}$ to relate the two flux matrices \cite{WenWang-2008},
\begin{align}
\mathcal{F}_y &= \mathcal{S} \mathcal{F}_x \mathcal{S}^{-1} .
\label{eq:S_from_F}
\end{align}
While this alone cannot be used to solve for $\mathcal{S}$, there are additional constraints,
\begin{align}\label{eq:S_constraint}
\mathcal{S}_{\mathds{1}a} = \frac{d_a}{\mathcal{D}} = e^{-\gamma_a} , \quad \mathcal{S}_{ab} = \mathcal{S}_{ba} ,
\end{align}
where $d_a$ is the quantum dimension for the quasiparticle $a$, with $\mathcal{D} = \sqrt{\sum_a d_a^2}$ being the total quantum dimension.
$\gamma_a$ is the topological entanglement entropy for the MES $\ket{\Xi_a}$ defined in the main text.
For certain phases, such as the Moore-Read state and the $\nu=2/5$ Jain state, the modular $\mathcal{S}$-matrix may be determined from these constraints alone.
Solving for $\mathcal{S}$ in the MES basis essentially amounts to diagonalizing $\mathcal{F}_x$.
\item{Flux sectors and modular transformations.}
A subtlety in computing $\mathcal{T}$ and $\mathcal{S}$ is the interplay of modular transformations with the boundary conditions ($\Phi_x, \Phi_y$).
Since $T, S$ change the fluxes, we need to instead consider a larger Hilbert space for which the boundary conditions may take on four possible combinations:
$(\Phi_x,\Phi_y) \in \{ (0,0), (0,\pi), (\pi,0), (\pi,\pi)\}$, which as a shorthand we refer to as \textbf{PP}, \textbf{PA}, \textbf{AP}, \textbf{AA}, respectively.
Each of these sectors consists of $\gsd$ linearly independent ground states, giving a $4\gsd$-dimensional ground-state manifold in total.
The \textbf{PP} sector is closed under the action of $\mathcal{T}$ and $\mathcal{S}$, but the other three sectors mix under modular transformations \cite{Ginsparg}.
We write $\mathcal{T}$ and $\mathcal{S}$ as block matrices, where each block is a $\gsd\times\gsd$ matrix describing transitions between sectors.
The order of the four columns/rows are \textbf{PP}, \textbf{PA}, \textbf{AP}, \textbf{AA}.
\begin{align}
\mathcal{T} &=
\left[\!\begin{array}{c|c|c|c}
\mathcal{T}^\textbf{PP} &&& \\\hline
& \mathcal{T}^\textbf{PA} && \\\hline
&&& \mathcal{T}^+ \\\hline
&& \mathcal{T}^- &
\end{array}\!\right] ,
& \mathcal{S} &=
\left[\!\begin{array}{c|c|c|c}
\mathcal{S}^\textbf{PP} &&& \\\hline
&& \mathcal{S}^+ & \\\hline
& \mathcal{S}^- && \\\hline
&&& \mathcal{S}^\textbf{AA}
\end{array}\!\right] .
\end{align}
In the minimally entangled basis, each $\mathcal{T}$-submatrix is still diagonal.
In the \textbf{AP} and \textbf{AA} sectors ($\Phi_x=\pi$), the formula Eq.~\eqref{eq:T} \emph{squared} gives the product $\mathcal{T}^+\mathcal{T}^-$, as two Dehn twists are required to come back to the same wavefunction.
In other words, Eq.~\eqref{eq:T} will only give $h$ modulo $\frac12$.
\end{itemize}
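The bookkeeping of flux sectors under $T$ and $S$ can be verified mechanically. The sketch below (an illustrative check, not part of the derivation) represents the flux action as integer $2\times2$ matrices and confirms both the $\mathrm{SL}(2,\mathbb{Z})$ relations and the sector permutations implied by the block structure of $\mathcal{T}$ and $\mathcal{S}$:

```python
import numpy as np

# Flux action of the modular generators:
#   T: (Px, Py) -> (Px, Py + Px),   S: (Px, Py) -> (Py, -Px)
T = np.array([[1, 0], [1, 1]])
S = np.array([[0, 1], [-1, 0]])

# SL(2,Z) relations: (ST)^3 = S^2 = -1, hence S^4 = 1.
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), -np.eye(2, dtype=int))
assert np.array_equal(np.linalg.matrix_power(S, 2), -np.eye(2, dtype=int))

# Sector permutations: fluxes take values in {0, pi}, encoded as Z_2 (0=P, 1=A).
sectors = {(0, 0): 'PP', (0, 1): 'PA', (1, 0): 'AP', (1, 1): 'AA'}
def act(M, phi):
    return tuple((M @ np.array(phi)) % 2)

perm_T = {sectors[p]: sectors[act(T, p)] for p in sectors}
perm_S = {sectors[p]: sectors[act(S, p)] for p in sectors}
print(perm_T)   # PP, PA fixed; AP <-> AA  (matching the block form of T)
print(perm_S)   # PP, AA fixed; PA <-> AP  (matching the block form of S)
```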
\end{widetext}
\section{Introduction}
There are quite a few effects in quantum field theory due to
boundaries or non-trivial spacetime topology. Topological mass
generation and the Casimir effect are beautiful and simple
manifestations of them (see for example
\cite{top-mass}-\cite{Casimir}). Theories at non-zero temperature,
which are equivalent to those with the time dimension being curled up
to the circle with the radius (temperature)$ ^{-1}$, give another
physically interesting example of such effects (see
\cite{temperature}).
Scattering of particles in theories on the spacetime with non-trivial
topology was studied in ref. \cite{torus-scat}, \cite{sphere-scat} and
\cite{DIKT}. There a simple model of one scalar field $\phi$ with
quartic self-interaction on the spacetime of the type $M^{n} \times
K$, where $M^{n}$ is the $n$-dimensional Minkowski spacetime and $K$
is a two-dimensional compact space, was analyzed. As it is well known,
such a model can be re-written as an effective model on $M^{n}$ with
an infinite number of fields $\phi_{N}$, where $N$ is a multi-index
labelling the eigenfunctions of the Laplace operator $\Delta_{K}$ on
the manifold $K$. These fields have rising spectrum of masses given by
the formula of the type $M_{N}^{2} = m^{2} + N^{2}/L^{2}$, where $m$
is the mass of the original ($n+2$)-dimensional model and $L$ is a
characteristic size of the space $K$. Often the fields $\phi_{N}$ are
referred to as Kaluza-Klein modes and the infinite set of them is
called the Kaluza-Klein tower of modes. For problems of physical
interest $Lm \ll 1$, so the mode with mass $m$ is called the light
mode or zero mode, while the rest are referred to as heavy modes. It is worth
noting that in the case of models at non-zero temperature $T$ in
the Euclidean time formalism the spacetime is described by $M^3 \times
S^{1}$. Then the scale $L$ is the radius of $S^1$ equal to $T^{-1}$
and the modes $\phi_{N}$ are labelled just by one integer ranging from
$-\infty$ to $+\infty$ and are usually called Matsubara modes.
The object of our study here is the total cross section of the
scattering process (2 light particles) $\rightarrow$ (2 light
particles). As one can easily see, in this model the heavy modes do
not contribute at the tree level. Thus, there is no difference between
the cross section $\sigma ^{(\infty)}$, calculated in the model
described above, and the cross section $\sigma ^{(0)}$, calculated in
the model on the spacetime $M^{n}$ (with the same mass $m$ and
corresponding coupling constants). It is the one loop order where
effects due to the non-trivial topology of the spacetime come into
play. It appears that because of contributions of virtual heavy
particles propagating along the loop the cross section has a behaviour
which differs significantly from that of $\sigma^{(0)}$ even for
energies of scattering particles noticeably below the threshold of the
first heavy particle. We should note that in models of another type
heavy modes can contribute already at the tree level, which makes the
effect stronger. Examples of such models appear in the superstring
theory with certain orbifold compactifications. Physical predictions
for some realistic processes and estimates on the size of $L$ were
obtained in \cite{antoniadis}.
The cases $n=2$, $K=T^{2}$ and $n=4$, $K=T^{2}$, where $T^{2} = S^{1}
\times S^{1}$ is the two-dimensional equilateral torus with the radius
$L$, with periodic boundary conditions were considered in
ref.~\cite{DIKT} and ref.~\cite{torus-scat} respectively. A detailed
analysis of the case of the spherical compactification, namely $n=4$,
$K=S^{2}$, with $L$ being the radius of the sphere, was carried out in
ref.~\cite{sphere-scat}. There it was also shown, that in the interval
of the centre of mass energies $\sqrt{s}$ of the particles below the
threshold of the first heavy mode, the contribution of the
Kaluza-Klein tower to the total cross section of the scattering is
essentially proportional to $\zeta(2|K)\, (sL^2)$, where $\zeta(2|K)$
is the zeta function of the Laplace operator on the manifold $K$.
This is the way the topology of the spacetime enters into the
characteristic of the high energy process.
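For the equilateral torus with unit radii (an assumption made here purely for illustration), $\zeta(2|T^2)$ is an Epstein zeta function that can be evaluated by a brute-force lattice sum and compared with the known closed form $4\,\zeta(2)\,\beta(2)$, where $\beta(2)$ is Catalan's constant:

```python
import math

# Numerical estimate of the Epstein zeta function zeta(2 | T^2) for the
# equilateral torus with L_1 = L_2 = 1: the sum over nonzero modes
#   sum_{(n1,n2) != (0,0)} (n1^2 + n2^2)^(-2),
# which controls the Kaluza-Klein contribution ~ zeta(2|K) s L^2.
def epstein_zeta2(cutoff):
    total = 0.0
    for n1 in range(-cutoff, cutoff + 1):
        for n2 in range(-cutoff, cutoff + 1):
            if n1 == 0 and n2 == 0:
                continue
            total += (n1 * n1 + n2 * n2) ** -2
    return total

catalan = 0.9159655941772190
closed_form = 4 * (math.pi**2 / 6) * catalan   # known result 4 zeta(2) beta(2)
print(epstein_zeta2(100), closed_form)         # agree to ~1e-4
```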
The calculations of the total cross section for $n=4$ are of physical
interest because in that case the model, in spite of its simplicity,
mimics some essential features of Kaluza-Klein type extensions of
physical models, for example of the Standard Model. One might think of
this type of extensions as low energy effective theories emerging, for
example, from superstrings. Provided we are considering a realistic
model and the scale $L^{-1}$ is of the order of a few TeV (see below),
the effect can be (in principle) measured experimentally at future
colliders. This could yield evidence about the validity of the
Kaluza-Klein hypothesis within a given model.
Two important comments are in order here. The first one is, that the
scalar $\lambda \phi^{4}$ model in more than four dimensions, which is
considered in \cite{torus-scat} and \cite{sphere-scat} and which is
the object of the analysis in the present article, is
non-renormalizable. In the formalism used here this manifests itself as an
additional divergence of the sum over one-loop diagrams with heavy
modes propagating along the loop which contribute to the four-point
Green function or the scattering amplitude. Since the model is
regarded as a low energy effective theory coming from a more
fundamental theory, we do not consider the non-renormalizability to be
a fundamental obstacle. However, an additional prescription must be
imposed to treat this additional divergence. We choose it, as we
believe, in a physically acceptable way so as to make the difference
between the six dimensional model and the four dimensional one at the
scale of the four dimensional physics (i.e. at the energies of the
order of $m$ or smaller) as small as possible. If the prescription had
been imposed in some other way, then the difference between the models
would have been a low energy effect and could be observed immediately.
This would rule out the six-dimensional model from the very beginning.
It is worth mentioning that the six-dimensional models, considered
here, possess an additional property which makes their analysis more
interesting: the one-loop non-renormalizable contributions to the
total cross section of the two particle scattering process cancel out
on the mass shell when the $s$-, $t$- and $u$-channels are summed up.
Thus, we avoid the additional ambiguity due to non-renormalizability
in our calculations.
The second comment is related to the magnitude of the scale $L$,
characterizing new physics due to non-trivial topology of the original
model. In accordance with an analog of the decoupling theorem for this
class of theories \cite{decoupl} the effect naturally disappears if
the scale $L^{-1}$ is too large compared to the mass $m$. In many
approaches the scale of the compactification $L$ is assumed to be (or
appears to be) of the order of the inverse Planck mass $M_{Pl}^{-1}$
(see, for example, \cite{compact} and the reviews \cite{KK-review2}).
In this case additional dimensions could reveal themselves only as
peculiar gravitational effects or at an early stage of the evolution
of the Universe. On the other hand, there are some arguments in favour
of a larger compactification scale. One of them comes from
Kaluza-Klein cosmology and stems from the fact that the density of
heavy Kaluza-Klein particles cannot be too large, in order not to
exceed the critical density of the Universe. Estimates obtained in
ref. \cite{kolb} give the bound $L^{-1} < 10^{6}$ GeV. Other arguments
are related to results of papers \cite{kapl-88} and suggest that the
compactification scale should be of the order of the supersymmetry
breaking scale $M_{SUSY} \sim (1 \div 10)$~TeV. No natural mechanism
providing compactification of the space of extra dimensions with such
a scale is known so far. Having the above mentioned arguments in mind
we would like to study physical consequences in a multidimensional
model {\em assuming} that a compactification of this kind is indeed
possible.
In the present article we continue the analysis of the model described
above in the case of the spacetime $M^{4} \times T^{2}$. The aim is
twofold. First we carry out a more careful study of analytical
properties of the one-loop amplitude for positive and negative $p^{2}$
using the well developed machinery of zeta functions of the torus (see
for example \cite{eekk,kk}). The analysis will be extended to the case
of the non-equilateral torus and antiperiodic boundary conditions. The
second aim is to consider an extension of the model by coupling the
scalar field minimally to a constant Abelian gauge potential $A_{m}$
on the torus. Due to the non-trivial topology the constant components
$A_{m}$ cannot be removed by gauge transformations and are physical
parameters of the theory. We will show that the presence of such
classical gauge potential gives rise to an increase of the cross
section of the scattering of light particles for certain regions of
energies.
The paper is organized as follows. In Sect. 2 we describe the model,
choose the renormalization condition and discuss the general structure
of the 1-loop result for the four-point Green function and the total
cross section. In Sect. 3 detailed representations for the sum of
contributions of the Kaluza-Klein modes are derived. Behaviour of the
total cross section is analyzed in Sect. 4. Sect. 5 is devoted to the
analysis of the scattering cross section in the model with abelian
gauge potential. Conclusions and some discussion of the results are
presented in Sect. 6.
\section{Description of the model, mode expansion and renormalization}
We consider a one component scalar field on the $(4+2)$-dimensional
manifold $E=M^{4}\times T^{2}$, where $M^{4}$ is the Minkowski
space-time and $T^{2}$ is the two-dimensional torus of the radii
$L_{1}$ and $L_{2}$. In spite of its simplicity this model captures
some interesting features of both the classical and quantum properties
of multidimensional theories. The action is given by \begin{equation}
S= \int_{E} d^{4}x d^{2}y \left[\frac{1}{2}(\frac{\partial \phi
(x,y)}{\partial x^{\mu}})^2-\frac{1}{2} \frac{\partial \phi (x,y)}
{\partial y^{i} } \frac{\partial \phi (x,y) }{\partial y^{i} } -
\frac{1}{2} m_{0}^{2} \phi^{2}(x,y) - \frac{\hat{\lambda}}{4!} \phi
^{4} (x,y) \right], \label{eq:action0} \end{equation} where $x^{\mu},
\mu = 0,1,2,3,$ are the coordinates on $M^{4}$, $y^{1}$ and $y^{2}$
are the coordinates on $T^{2}$, $0 < y^{1} < 2\pi L_{1}$, $0 < y^{2} <
2 \pi L_{2}$ and the field $\phi (x,y)$ satisfies periodic boundary
conditions on the torus. To re-interpret this model in
four-dimensional terms we make an expansion of the field $\phi (x,y)$,
\begin{equation} \phi (x,y) = \sum_{N} \phi _{N} (x) Y_{N} (y),
\label{eq:expansion} \end{equation} where $N=(n_{1},n_{2})$, $-\infty
< n_{i} < \infty$ and $Y_{N}(y)$ are the eigenfunctions of the Laplace
operator on the internal space, \begin{equation} Y_{(n_{1},n_{2})}
= \frac{1}{2 \pi \sqrt{L_{1} L_{2}}} \exp \left[i \left(
\frac{n_{1}y^{1}}{L_{1}} + \frac{n_{2}y^{2}}{L_{2}} \right)
\right]. \label{eq:laplace} \end{equation} Substituting this expansion
into the action and integrating over $y$, one obtains \begin{eqnarray}
S & = & \int_{M^{4}} d^{4} x \left\{ \frac{1}{2} ( \frac{\partial
\phi_{0} (x)} {\partial x^{\mu}} )^{2} - \frac{1}{2} m_{0}^{2}
\phi_{0}^{2}(x) - \frac{\lambda_{1}}{4!} \phi _{0}^{4} (x) \right.
\nonumber \\ & & + \sum_{N>0} \left[ \frac{\partial
\phi_{N}^{*} (x)}{\partial x^{\mu}} \frac{\partial \phi_{N}
(x)}{\partial x_{\mu}} - M_{N}^{2} \phi _{N}^{*} (x) \phi _{N} (x)
\right] \nonumber \\ & & - \left. \frac{\lambda_{1}}{2} \phi
_{0}^{2} (x) \sum_{N > 0} \phi_{N}^{*}(x) \phi_{N}(x) \right\} -
S'_{int}, \label{eq:action1} \end{eqnarray} where the four-dimensional
coupling constant $\lambda_{1}$ is related to the multidimensional one
$\hat{\lambda}$ by $ \lambda_{1} = \hat{\lambda} / {\mbox volume}
(T^{2})$. In eq. (\ref{eq:action1}) the term $S_{int}'$ includes
vertices containing third and fourth powers of $\phi_{N}$ with $N^{2}
> 0$. We see that the model contains one real scalar field $\phi
\equiv \phi_{(0,0)}(x)$ describing a light particle of mass $m_{0}$,
and an infinite set (``tower") of massive complex fields $\phi_{N}(x)$
corresponding to heavy particles, or pyrgons, of masses given by
\begin{equation} M_{N}^{2} = m_{0}^{2} + \frac{n_{1}^{2}}{L_{1}^2} +
\frac{n_{2}^{2}}{L_{2}^2} . \label{eq:mass} \end{equation}
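The spectrum (\ref{eq:mass}) is easy to tabulate; a short sketch with purely illustrative values for $m_0$ and the compactification scale $L^{-1}$ (the numbers below are assumptions, not predictions of the model):

```python
import math

# Kaluza-Klein spectrum of Eq. (mass): M_N^2 = m0^2 + n1^2/L1^2 + n2^2/L2^2.
# Illustrative units: m0 = 0.1 TeV, 1/L1 = 1/L2 = 1 TeV.
m0, L1, L2 = 0.1, 1.0, 1.0

def kk_masses(nmax):
    """All tower masses with |n1|, |n2| <= nmax, sorted ascending."""
    masses = []
    for n1 in range(-nmax, nmax + 1):
        for n2 in range(-nmax, nmax + 1):
            masses.append(math.sqrt(m0**2 + (n1 / L1)**2 + (n2 / L2)**2))
    return sorted(masses)

tower = kk_masses(2)
# zero mode, then the (four-fold degenerate) first heavy level sqrt(m0^2 + 1/L1^2):
print(tower[0], tower[1])
```

The four-fold degeneracy of the first level reflects the modes $(\pm1,0)$ and $(0,\pm1)$ of the equilateral torus.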
Let us consider the 4-point Green function $\Gamma^{(\infty)}$ with
external legs corresponding to the light particles $\phi $. The index
$(\infty)$ indicates that the whole Kaluza-Klein tower of modes is
taken into account.
It is easy to see that the tree level contribution is the same as in
the dimensionally reduced theory whose action is given by the first
line in eq. (\ref{eq:action1}). At the one-loop level, owing to the
infinite sum of diagrams, the Green function is
quadratically divergent. This is certainly a reflection of the fact
that the original theory is actually six-dimensional and, therefore,
non-renormalizable. Thus, the divergencies cannot be removed by
renormalization of the coupling constant alone. We must also add a
counterterm $\lambda_{2B}\phi ^{2}(x,y) \Box _{(4+d)} \phi^{2}(x,y)$,
where $\Box _{(4+d)}$ is the D'Alembertian on $E$ and $\lambda_{2B}$
has mass dimension two. Of course, for the calculation of other Green
functions, or higher-order loop corrections, other types of
counterterms, which are not discussed here, are necessary. Hence, the
Lagrangian we will use for our investigation is \begin{eqnarray}
{\cal L} & = & \frac{1}{2} ( \frac{\partial \phi _{0}(x)} {\partial x}
)^{2} - \frac{m_{0}^{2}}{2} \phi ^{2}_{0}(x) \nonumber \\ & + &
\sum_{N > 0} \left[ \frac{\partial \phi_{N}^{*} (x)}{\partial
x^{\mu}} \frac{\partial \phi_{N} (x)}{\partial x_{\mu}} - M_{N}^{2}
\phi _{N}^{*} (x) \phi _{N} (x) \right] \nonumber \\ & - &
\frac{\lambda_{1B}}{4!} \phi _{0}^{4} (x) - \frac{\lambda_{1B}}{2}
\phi _{0}^{2} (x) \sum_{N > 0} \phi_{N}^{*}(x) \phi_{N}(x) -
\frac{\lambda_{2B}}{4!} \phi _{0}^{2}(x) \Box \phi _{0}^{2}(x),
\label{eq:action2} \end{eqnarray} where $\lambda _{1B}$ and
$\lambda_{2B}$ are bare coupling constants. To regularize the
four-dimensional integrals we will employ dimensional regularization,
which is carried out, as usual, by analytic continuation of the
integrals to $(4-2\epsilon)$ dimensions. $\kappa$ will be a mass scale
set up by the regularization procedure. The sum over the Kaluza-Klein
tower of modes will be regularized by means of the zeta function
technique \cite{hawk}.
The renormalization scheme is chosen in the same way as in ref.
\cite{torus-scat} and \cite{sphere-scat}. Let us refer the reader to
those articles for details and present here only the main formulas
which we will need for our calculations.
As subtraction point we choose the following point in the space of
invariant variables built up out of the external four-momenta $p_{i}$
$(i=1,2,3,4)$ of the scattering particles: \begin{eqnarray} & &
p_{1}^{2} = p_{2}^{2} = p_{3}^{2} = p_{4}^{2} = m_{0}^{2},
\nonumber \\ & & p_{12}^{2} = s = \mu_{s}^{2}, \; \; p_{13}^{2}
= t = \mu_{t}^{2}, \; \; p_{14}^{2} = u = \mu_{u}^{2},
\label{eq:point} \end{eqnarray} where $p_{1j}^{2} = (p_{1} +
p_{j})^{2}$, $(j=2,3,4)$, and $s$, $t$ and $u$ are the Mandelstam
variables. Since the subtraction point is located on the mass shell,
it satisfies the standard relation $\mu_{s}^{2} + \mu_{t}^{2} +
\mu_{u}^{2} = 4 m_{0}^{2}$. The renormalization prescriptions for the
4-point Green function are as follows \begin{equation} \left.
\Gamma^{(\infty)} (p_{1j}^{2}; m_{0}, L_{1},L_{2}, \lambda_{1B},
\lambda_{2B}, \epsilon) \right|_{s.p.} = \left. \Gamma ^{(0)}
(p_{1j}^{2}; m_{0}, \lambda_{1B}', \epsilon) \right|_{s.p.} = g
\kappa ^{2\epsilon}, \label{eq:renorm1} \end{equation}
\begin{equation} \left. \left[ \frac{\partial}{\partial p_{12}^{2}} +
\frac{\partial }{\partial p_{13}^{2}} + \frac{\partial }{\partial
p_{14}^{2}}\right] \Gamma^{(\infty)} \right|_{s.p.} = \left. \left[
\frac{\partial}{\partial p_{12}^{2}} + \frac{\partial }{\partial
p_{13}^{2}} + \frac{\partial }{\partial p_{14}^{2}}\right]
\Gamma^{(0)} \right|_{s.p.} + \frac{\lambda_{2}}{4} \kappa ^{-2 +
2\epsilon}. \label{eq:renorm2} \end{equation} Here $\Gamma ^{(0)}$
is the four-point Green function of the four-dimensional theory with
the zero mode field only (i.e., the dimensionally reduced theory),
$\lambda_{1B}'$ being its bare coupling constant. In the first line we
have written down the dependence of the Green functions on the
momentum arguments and parameters of the theory explicitly, and we
have taken into account that to one-loop order they depend on
$p_{12}^{2}$, $p_{13}^{2}$, and $p_{14}^{2}$ only. The label $s.p.$
means that the corresponding quantities are taken at the subtraction
point (\ref{eq:point}). $g$ and $\lambda_{2}$ are renormalized
coupling constants. The last one is included for the sake of
generality only, and we will see that our result does not depend on
it. More detailed discussion of the renormalization prescriptions
(\ref{eq:renorm1}), (\ref{eq:renorm2}) can be found in
\cite{sphere-scat}.
To one-loop order, the Green functions of the complete theory and of
the theory with only the zero mode are given by \begin{eqnarray}
\Gamma ^{(\infty)} (p_{1j}^{2}; m_{0}, L_{1},L_{2}, \lambda_{1B},
\lambda_{2B}, \epsilon) & = &\lambda_{1B} + \lambda_{2B}
\frac{p_{12}^{2}+p_{13}^{2}+p_{14}^{2}}{12} \nonumber \\ & + &
\lambda_{1B}^{2} [ K_{0}(p_{1j}^{2}; m_{0},\epsilon) + \Delta K
(p_{1j}^{2}; m_{0},L_1,L_2,\epsilon) ], \label{eq:1loop} \\ \Gamma
^{(0)} (p_{1j}^{2}; m_{0}, \lambda_{1B}',\epsilon) & = &
\lambda_{1B}' + \lambda_{1B}'^{2} K_{0}(p_{1j}^{2}; m_{0}, \epsilon).
\nonumber \end{eqnarray} Here \begin{equation}
K_{0}(p_{1j}^{2};m_{0},\epsilon) \equiv K_{00}(p_{1j}^{2};m_{0}^{2},
\epsilon), \; \; \; \Delta K (p_{1j}^{2};m_{0},L_{1},L_{2}
,\epsilon)= \sum_{N>0} K_{N}(p_{1j}^{2};M_{N}^{2},\epsilon)
\label{eq:Kexpansion} , \end{equation} and $K_{N}$ is the contribution
of the mode $\phi _{N}$ with the mass $M_{N}$ (see eq.
(\ref{eq:mass})) to the one-loop diagram of the scattering of two
light particles, \begin{equation} K_{N} (p_{1j}^{2};
M_{N}^{2},\epsilon) = \frac{-i}{32 \pi^{4}}
\frac{1}{M_{N}^{2\epsilon}} \left[ I(\frac{p_{12}^{2}}{M_{N}^{2}},
\epsilon) + I(\frac{p_{13}^{2}}{M_{N}^{2}}, \epsilon) +
I(\frac{p_{14}^{2}}{M_{N}^{2}}, \epsilon)\right].
\label{eq:Kdefinition} \end{equation} Here we assume that
$\lambda_{2B} \sim \lambda_{1B}^{2}$, so that the one-loop diagrams
proportional to $\lambda_{1B} \lambda_{2B}$ or $\lambda_{2B}^{2}$ can
be neglected. It can be shown that this hypothesis is consistent (see
ref. \cite{decoupl}). As it stands, the function $\Delta K$ is well
defined for $ \Re \epsilon > 1$, the sum being convergent for this
case. We will need, however, its value at $\epsilon = 0$ where it has
to be understood as the analytical continuation of
(\ref{eq:Kexpansion}). The same remark holds for all expressions of a
similar type appearing in the following. The function $I$ in the
formula above is the standard one-loop integral \begin{eqnarray}
I(\frac{p^{2}}{M^{2}},\epsilon ) & = & M^{2\epsilon} \int
d^{4-2\epsilon} q \ \frac{1}{(q^{2}+M^{2})((q-p)^{2}+M^{2})}
\nonumber \\ & = & i \pi^{2-\epsilon} \Gamma (\epsilon)
M^{2\epsilon} \int_{0}^{1} dx \ \frac{1} {[M^{2} - p^{2}
x(1-x)]^{\epsilon}}. \label{eq:integral} \end{eqnarray} Let us also
introduce the sum of the one-loop integrals over all Kaluza-Klein
modes \begin{equation} \Delta I
(p^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon) = \sum_{N>0}
(\frac{1}{L_{1}^{2}M_{N}^{2}})^{\epsilon}
I(\frac{p^{2}}{M_{N}^{2}},\epsilon ), \label{eq:int-sum}
\end{equation} so that \begin{eqnarray} \Delta K (p_{1j}^{2};
m_{0},L_{1},L_{2},\epsilon) & = & \frac{-i}{32 \pi ^{4}}
L_{1}^{2 \epsilon} \left[ \Delta I
(p_{12}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon)
\right. \nonumber \\ &+& \left. \Delta I
(p_{13}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon) + \Delta I
(p_{14}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon) \right].
\label{deltaK} \end{eqnarray} Here we denote $w = (L_{1}/L_{2})^{2}$.
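At $\epsilon = 0$ the finite, scheme-independent part of each term in (\ref{eq:integral}) for spacelike momenta $p^2 = -Q^2$ reduces to the Feynman-parameter integral $\int_0^1 dx\, \ln[1 + Q^2 x(1-x)/M^2]$. The sketch below evaluates it numerically and compares it with the standard closed form for the one-loop bubble; it is a numerical cross-check of a single mode's contribution, not part of the renormalization procedure:

```python
import math

# Finite Feynman-parameter integral for spacelike p^2 = -Q^2:
#   J(a) = int_0^1 dx ln(1 + a x(1-x)),   a = Q^2 / M^2,
# with closed form  J(a) = -2 + beta ln((beta+1)/(beta-1)),
# where beta = sqrt(1 + 4/a)  (the standard one-loop bubble result).
def J_numeric(a, n=20_000):
    h = 1.0 / n
    # midpoint rule; the integrand is smooth on (0, 1)
    return h * sum(math.log(1 + a * x * (1 - x))
                   for x in (h * (k + 0.5) for k in range(n)))

def J_closed(a):
    beta = math.sqrt(1 + 4 / a)
    return -2 + beta * math.log((beta + 1) / (beta - 1))

a = 2.5   # e.g. Q^2 = 2.5 M^2, an illustrative value
print(J_numeric(a), J_closed(a))
```

In the small-$a$ limit both expressions reduce to $a/6$, matching the leading term of the expansion of the integrand, which is a quick way to see why the naive sum over heavy modes behaves like $\sum_N M_N^{-2}$ and requires the subtraction discussed above.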
Performing the renormalization, we obtain the following expression for
the renormalized four-point Green function \begin{eqnarray} & \Gamma
^{(\infty)}_{R}& \left(\frac{p_{1j}^{2}}{\mu_{j}^{2}};
\mu_{j}^{2}L_{1}^{2};m_{0}L_{1},w,g, \lambda_{2}\right)
\nonumber \\ &=& \lim _{\epsilon
\rightarrow 0} \kappa ^{-2 \epsilon} \Gamma ^{(\infty)}
\left(p_{1j}^{2}; m_{0}, L_{1},L_{2}, \lambda_{1B} (g,\lambda_{2}),
\lambda_{2B}(g, \lambda_{2}), \epsilon \right) \nonumber \\ & = &
\lim _{\epsilon \rightarrow 0} \left\{ g + \lambda_{2}
\frac{p_{12}^{2}+p_{13}^{2}+p_{14}^{2}-\mu_{s}^{2}-\mu_{t}^{2}-
\mu_{u}^{2}}{12 \kappa^{2}} \right. \label{eq:gfren1} \\ & +
& g^{2} \kappa ^{2 \epsilon} \left[ K_{0}(p_{1j}^{2};
m_{0},\epsilon) - K_{0}(\mu_{j}^{2}; m_{0},\epsilon) \right.
\nonumber \\ & + & \Delta K (p_{1j}^{2}; m_{0},L_{1},L_{2},\epsilon)
- \Delta K (\mu_{j}^{2}; m_{0},L_{1},L_{2},\epsilon)
\nonumber \\ & - & \left. \left. \left. \frac{\sum_{j=2}^{4}
p_{1j}^{2}-\mu_{s}^{2}- \mu_{t}^{2}-\mu_{u}^{2}}{3} \sum_{j=2}^{4}
\frac{\partial }{\partial p_{1j}^{2}} \Delta K (p_{1j}^{2};
m_{0},L_{1},L_{2},\epsilon) \right|_{s.p.} \right] \right\},
\nonumber \end{eqnarray} where we denote $\mu_{2}^{2} = \mu_{s}^{2}$,
$\mu_{3}^{2} = \mu_{t}^{2}$ and $\mu_{4}^{2} = \mu_{u}^{2}$. The
r.h.s. of this expression is regular in $\epsilon$, and after
calculating the integrals and the sums over $N=(n_1,n_2)$ in $\Delta
K$ we take the limit $\epsilon \rightarrow 0$.
The above expression is rather general and valid for an arbitrary
subtraction point. For the four-point Green function of the complete
theory (i.e. with all the Kaluza-Klein modes) renormalized according
to the conditions (\ref{eq:renorm1}) and (\ref{eq:renorm2}) at the
subtraction point (\ref{eq:point}), and taken at a momentum point
which lies on the mass shell of the light particle, we obtain \begin{eqnarray} &&
\Gamma ^{(\infty)}_{R}
(\frac{s}{\mu_{s}^{2}},\frac{t}{\mu_{t}^{2}},\frac{u}{\mu_{u}^{2}};
\mu_{s}^{2}L_{1}^{2},\mu_{t}^{2}L_{1}^{2},
\mu_{u}^{2}L_{1}^{2};m_{0}L_{1},w, g) \nonumber \\ & & \ \ \
= g + g^{2} \lim _{\epsilon \rightarrow 0} \kappa ^{2 \epsilon} [
K_{0}(s,t,u;m_{0},\epsilon) -
K_{0}(\mu_{s}^{2},\mu_{t}^{2},\mu_{u}^{2}; m_{0},\epsilon) \nonumber
\\ & & \ \ \ + \Delta K (s,t,u; m_{0},L_{1},L_{2},\epsilon) -
\Delta K (\mu_{s}^{2},\mu_{t}^{2},\mu_{u}^{2};
m_{0},L_{1},L_{2},\epsilon) ]. \label{eq:gfren2} \end{eqnarray} The
variables $s$, $t$ and $u$ are not independent, since they satisfy the
well known Mandelstam relation $s+t+u=4m_{0}^{2}$.
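The Mandelstam constraint can be made concrete with a small numerical sketch (ours, not part of the paper): for elastic scattering of two equal-mass particles in the centre-of-mass frame, the variables indeed sum to $4m_{0}^{2}$.

```python
import math

def mandelstam(m0, p_cm, theta):
    """Mandelstam variables for elastic 2 -> 2 scattering of equal-mass
    particles in the centre-of-mass frame; p_cm is the CM three-momentum
    and theta the scattering angle (metric signature +,-,-,-)."""
    E = math.sqrt(m0**2 + p_cm**2)            # energy of each particle
    s = (2.0 * E)**2                          # total CM energy squared
    t = -2.0 * p_cm**2 * (1.0 - math.cos(theta))
    u = -2.0 * p_cm**2 * (1.0 + math.cos(theta))
    return s, t, u

s, t, u = mandelstam(m0=1.0, p_cm=0.7, theta=0.9)
assert abs(s + t + u - 4.0) < 1e-12           # s + t + u = 4 m0^2
```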
The formula above is rather remarkable. It turns out that on the mass
shell, due to cancellations between the $s$-, $t$- and $u$-channels,
the contribution proportional to $\lambda_{2}$ and the terms
containing derivatives of the one-loop integrals vanish. Thus, heavy
Kaluza-Klein modes contribute to the renormalized Green function on
the mass shell in exactly the same way as the light particle in the
dimensionally reduced theory does. Indeed, it can be easily checked
that the additional non-renormalized divergences, arising from the
infinite summation in $\Delta K$, cancel among themselves when the
three scattering channels are summed up together.
Next we calculate the total cross section $\sigma^{(\infty)}(s)$ of
the scattering process \ (2 {\em light particles}) \ $\longrightarrow
$ \ (2 {\em light particles}), \ in the case when the whole
Kaluza-Klein tower of heavy particles contribute. We compare it with
$\sigma ^{(0)} (s)$, which is the cross section in the dimensionally
reduced model, i.e. when only the light particle contributes. From the
discussion above it is clear that at low energies $\sigma
^{(\infty)}\approx \sigma^{(0)}$, so in what follows we take
$s>4m_0^2$.
The quantity which describes the deviation of the total cross section
from that in the four-dimensional theory due to the contributions of
the heavy particles is the following ratio: \begin{equation} R
\left(\frac{sL_{1}^2}{4};\mu_{s}^{2}L_{1}^{2},
\mu_{u}^{2}L_{1}^{2},\mu_{t}^{2}L_{1}^{2}; m_{0}L_{1},w \right)
\equiv 16 \pi^{2} \frac{\sigma^{(\infty)}(s) - \sigma^{(0)}(s)}{ g
\sigma^{(0)}(s)} . \label{eq:delta-def} \end{equation} Using the
expression for the 4-point Green function (\ref{eq:gfren2}),
renormalized according to (\ref{eq:renorm1}) and (\ref{eq:renorm2}),
we calculate the corresponding total cross sections and obtain that,
to leading order (i.e. 1-loop order) in the coupling constant $g$, the
function (\ref{eq:delta-def}) is equal to \begin{eqnarray} && R
\left(\frac{sL_{1}^2}{4};\mu_{s}^{2}L_{1}^{2},
\mu_{u}^{2}L_{1}^{2},\mu_{t}^{2}L_{1}^{2}; m_{0}L_{1},w \right)
\nonumber \\ \ \ \ \ & & = - \frac{i}{\pi^{2}}
\lim_{\epsilon \rightarrow 0} (L_{1} \kappa)^{2 \epsilon} \left\{
\Re \Delta I \left(sL_{1}^{2},m_{0}L_{1},w,\epsilon \right) - \Delta
I \left(\mu_{s}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon \right)
\right.\nonumber \\ \ \ \ \ & & + \frac{2}{s-4m_{0}^{2}} \int_{-(s-4
m_{0}^{2})}^{0} du \Delta I
\left(uL_{1}^{2},m_{0}L_{1},w,\epsilon \right) - \Delta I
\left(\mu_{u}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon \right)
\nonumber \\ \ \ \ \ && - \left. \Delta I
\left(\mu_{t}^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon \right) \right\}.
\label{eq:delta-exp} \end{eqnarray} Here we assume that
\begin{equation} \mu_{s}^{2},\mu_{t}^{2},\mu_{u}^{2} < 4 m_{0}^{2}.
\label{mu-condition} \end{equation}
\section{Calculation of the 1-loop contribution}
In this section we will analyze the 1-loop contribution of the heavy
Kaluza-Klein modes. The starting point is the expression
(\ref{eq:int-sum}). In more detail, the relevant expression is
\begin{equation} \zeta (\epsilon;
b^2,c^2,w)=\int\limits_0^1dx\,\,{\sum_{n_{1},n_{2}=
-\infty}^{\infty}}\!\!\!\!\!' \left[n_{1}^2+w n_{2}^2+c^{2} -
b^{2}x(1-x) \right]^{-\epsilon}, \label{k2}
\end{equation} which is related to $\Delta I$ by \begin{equation}
\Delta I (p^{2}L_{1}^{2},m_{0}L_{1},w,\epsilon) = i \pi^{2-\epsilon}
\Gamma (\epsilon) \zeta (\epsilon; p^{2}L_{1}^{2},m_{0}^2 L_{1}^2,w),
\label{deltaI-zeta} \end{equation} and
$w=(L_1/L_2)^2$ was already introduced before. A detailed knowledge of
the behaviour of $\zeta (\epsilon ;b^2,c^2,w)$ as a function of $b^2$
around $\epsilon = 0$ is necessary. Specifically for the calculation
of the cross section and the function (\ref{eq:delta-exp}) we need an
analytical expression of eq.~(\ref{k2}) for positive and negative
$b^2$. For these two cases different techniques are needed, and we will
present the two calculations one after the other.
Let us fix $b^2>0$ and start with $\zeta (\epsilon ;-b^2,c^2,w)$. Then
the effective mass term in eq.~(\ref{k2}), which is now $c^2 + b^2
x(1-x)$, is always greater than $0$ (we choose $c^2>0$). For that case
it is very useful to perform re-summations, employing for
$t\in\mbox{${\rm I\!R }$} _+$, $z\in \mbox{${\rm I\!\!\!C }$}$ the
identity \cite{hille} \begin{equation} \sum_{n=-\infty}^{\infty}
\exp\{-tn^2+2\pi inz\}=\left(\frac{\pi} t\right)^{\frac 1
2}\sum_{n=-\infty}^{\infty}\exp\left\{-\frac{\pi^2} t
(n-z)^2\right\},\label{k3} \end{equation} which is due to Jacobi's
relation between theta functions. Using this and an integral
representation of the McDonald functions \cite{grad} (for details see,
for example, \cite{eekk,kk}), one finds the representation
\begin{eqnarray} \lefteqn{ \zeta(\epsilon ;-b^2,c^2,w)
=\frac{\pi}{\sqrt{w}}\frac 1 {\epsilon -1}\int\limits_0^1dx\,\, \left[
c^2+b^2x(1-x)\right]^{1-\epsilon}-\int\limits_0^1dx\,\,\left[
c^2+b^2x(1-x)\right]^{-\epsilon} } \nonumber\\ &
&+\frac{\pi^{\epsilon}}{\sqrt{w}}\frac 2 {\Gamma (\epsilon )}
{\sum_{l,n=-\infty}^{\infty}\!\!\!^{\prime}}\int\limits_0^1dx\,\, \left[
c^2+b^2x(1-x)\right]^{\frac{1-\epsilon} 2}
\left[l^2+wn^2\right]^{\frac{\epsilon-1} 2}\times\nonumber\\ &
&\qquad\qquad K_{1-\epsilon}\left(2\pi \left[
c^2+b^2x(1-x)\right]^{\frac 1 2}[l^2+wn^2]^{\frac 1 2}\right)
\label{k4} \\ & & = - \frac{\pi}{\sqrt{w}} \left( c^2 +
\frac{b^{2}}{6} \right) - 1 + {\cal O} (\epsilon).
\label{k4prime} \end{eqnarray} The advantage of eq.~(\ref{k4}) is
that it separates contributions with completely
different behaviour for large values of $b^2$. It may be shown
that the contribution of the McDonald functions, even though it involves
a double sum, is negligible compared to the first terms, due to the
exponential fall-off of the McDonald functions at large argument.
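The resummation identity (\ref{k3}) is easy to check numerically. The sketch below (our illustration; both sums are truncated, and we restrict to real $z$) compares the two sides:

```python
import cmath, math

def theta_lhs(t, z, N=60):
    """Left-hand side of the Jacobi identity, truncated at |n| <= N."""
    return sum(cmath.exp(-t * n * n + 2j * math.pi * n * z)
               for n in range(-N, N + 1))

def theta_rhs(t, z, N=60):
    """Right-hand side: the Poisson-resummed form."""
    return math.sqrt(math.pi / t) * sum(
        cmath.exp(-math.pi**2 * (n - z)**2 / t) for n in range(-N, N + 1))

for t, z in [(0.5, 0.0), (2.0, 0.3), (0.1, 0.7)]:
    assert abs(theta_lhs(t, z) - theta_rhs(t, z)) < 1e-10
```

For small $t$ the left-hand sum converges slowly while the resummed one converges rapidly, which is precisely why the re-summation is useful here.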
To obtain the cross section and the ratio (\ref{eq:delta-exp}) we need
the integral of the function (\ref{k4}): \begin{equation} S(s;\epsilon
)=\frac{1}{s} \int\limits_{0}^{s}du\,\, \zeta (\epsilon
;-uL_1^2,c^2,w). \label{k5} \end{equation} Doing first the
$u$-integration and continuing like in the calculation of
eq.~(\ref{k4}), a similar representation for $S(s;\epsilon )$ can be
found. We need only the first two terms of its Taylor expansion in
$\epsilon$ at $\epsilon = 0$. They read \begin{eqnarray} S(s;
\epsilon) & = & S(s;0) + \epsilon S'(s;\epsilon = 0) +
{\cal O} (\epsilon ^2), \nonumber \\ S(s; 0) & = & -\frac{\pi}{2
\sqrt{w}} \left( 2 c^2 + \frac{s L_{1}^2} {6} \right)
-1 \label{k6prime} \end{eqnarray} and \begin{eqnarray} \lefteqn{
S'(s; 0)=\frac{\pi}{2s\sqrt{w}L_1^2}\int\limits_0^1\frac{dx}{x(1-x)}\left\{[
c^2+sL_1^2x(1-x) ]^2 \ln [ c^2+sL_1^2x(1-x) ]\right. }\nonumber\\ &
&\left.-c^4\ln c^2+\frac 3 2 c^4-\frac 3 2 [ c^2+sL_1^2x(1-x)
]^2\right\}
+\frac{1}{sL_1^2}\int\limits_0^1\frac{dx}{x(1-x)}\left\{-sL_1^2x(1-x)
\right. \label{k6} \\ & & \left. +[ c^2+sL_1^2x(1-x) ]\ln [
c^2+sL_1^2x(1-x) ]-c^2\ln c^2\right\} \nonumber\\ & &+\frac
{2} {\pi s
\sqrt{w}L_1^2}\int\limits_0^1\frac{dx}{x(1-x)}{\sum_{l,n=-\infty}^{\infty}\!\!\!
^{\prime}}
[l^2+wn^2] ^{-1}\left\{c^2K_2\left(2\pi c[l^2+wn^2]^{\frac 1 2}\right)
\right. \nonumber\\ & & \left. -[
c^2+sL_1^2x(1-x)]K_2\left(2\pi [ c^2+sL_1^2x(1-x) ]^{\frac 1 2}
[l^2+wn^2]^{\frac 1 2}\right)\right\}. \nonumber \end{eqnarray}
It may be seen that for finite values of $s$ this expression is well
defined. Apart from the contributions involving the McDonald
functions, all integrations are elementary \cite{grad}. However, the
result is even longer than the one presented, so we will not write it out
explicitly. Instead, let us only mention that the leading behaviour
for $s\to \infty$ is \begin{equation} S'(s\to \infty ; \epsilon =0)
=\frac{\pi}{6 \sqrt{w}}sL_1^2\left\{-\frac{11}{6} +\ln
(sL_1^2)\right\}+ {\cal O} (\ln (sL_1^2)). \label{k7}
\end{equation}
Up to now, the results were derived for $-b^2$, that is for
$p^2 < 0$, and are especially useful for large $b^2$. When one tries to use
the same techniques for $\zeta (\epsilon; b^2,c^2,w)$ with
$b^2>0$, one directly encounters infinities. That there must be
problems is seen in eq.~(\ref{k4}), because the argument of the
McDonald functions then lies on their cut.
Thus we have to proceed in a different way, namely to use the
binomial expansion method. This method yields suitable results for
small values of $b^2$, valid independently of its sign. The result is
given as a power series in $b^2$, the radius of convergence is
determined by the first mass in the Kaluza-Klein tower. Using this
method we get \begin{eqnarray} \zeta (\epsilon; b^2,c^2,w)&=&
\int\limits_0^1dx\,\,
{\sum_{n_{1},n_{2}=-\infty}^{\infty}\!\!\!\!\!\!^{\prime}}
[n_{1}^2+wn_{2}^2+c^2]^{-\epsilon}\left\{1-\frac {b^2x(1-x)}
{n_{1}^2+wn_{2}^2+c^2}\right\}^{-\epsilon}\nonumber\\
&=&\sum_{k=0}^{\infty}\frac{\Gamma(\epsilon +k)}{\Gamma(\epsilon
)}\frac{k!}{(2k+1)!}Z_2^{c^2}(\epsilon +k; 1,w)b^{2k}, \label{k8}
\end{eqnarray} where we introduced the Epstein-type zeta-function
\begin{equation} Z_2^{c^2}(\nu
;u_1,u_2)={\sum_{n_{1},n_{2}=-\infty}^{\infty}\!\!\!\!\!\!^{\prime}}
[u_1 n_{1}^2+u_2 n_{2}^2+c^2]^{-\nu}. \label{k9} \end{equation}
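For orientation, the Epstein-type zeta-function (\ref{k9}) can be evaluated by brute-force truncation (a sketch of ours, not from the paper); for $u_1=u_2=1$ and $c^2=0$ the classical lattice-sum identity $Z_2(\nu)=4\zeta(\nu)\beta(\nu)$, with $\beta$ the Dirichlet beta function, provides a check:

```python
def Z2(nu, u1=1.0, u2=1.0, c2=0.0, N=200):
    """Truncated primed double sum over (n1, n2) != (0, 0);
    converges for nu > 1 (and c2 >= 0)."""
    total = 0.0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            if n1 == 0 and n2 == 0:
                continue
            total += (u1 * n1 * n1 + u2 * n2 * n2 + c2) ** (-nu)
    return total

# 4 * zeta(2) * beta(2) = 4 * (pi^2 / 6) * Catalan = 6.0268...
assert abs(Z2(2.0) - 6.0268) < 2e-3
```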
It can be shown that this function has the following properties:
\begin{eqnarray} Z_{2}^{c^2}(\epsilon;1,w) & = & -
\left( \frac{\pi c^2}{\sqrt{w}} + 1 \right) + \epsilon
{Z'}^{c^2}_2(0;1,w)+{\cal O}(\epsilon ^2), \nonumber \\
Z_{2}^{c^2}(1+\epsilon;1,w) & = & \frac{\pi}{\sqrt{w}}
\frac{1}{\epsilon} + PP\,\,Z_2^{c^2}(1;1,w)+{\cal O}(\epsilon ).
\label{k10} \end{eqnarray} Using these formulas we obtain that the
expression (\ref{k8}) reads \begin{eqnarray} \zeta (\epsilon;
b^2,c^2,w)&=& \zeta(0;b^2,c^2,w) + \epsilon
\zeta'(0;b^2,c^2,w) + {\cal O} (\epsilon^2) \nonumber \\
&=& - \frac{\pi}{\sqrt{w}} \left(c^2 - \frac{b^2}{6} \right) - 1
\nonumber \\ &+& \epsilon \left[{Z'}_2^{c^2}(0;1,w)
+\frac{b^2} 6 PP\,\,Z_2^{c^2} (1;1,w)\right]\nonumber\\ & + &
\epsilon \sum_{k=2}^{\infty}\frac{\Gamma(k)k!} {(2k+1)!}Z_2^{c^2}(k;
1,w)b^{2k}. \label{k10prime} \end{eqnarray} Expressions for
${Z'}_2^{c^2}$ and the finite part $PP\,\,Z_2^{c^2}$ are rather
lengthy \cite{eekk}. We need not present them here explicitly,
because for the calculation of our main objective $R$,
eq.~(\ref{eq:delta-exp}), contributions $\sim p^2$ and constant in
$p^2$ cancel out as it was explained in Sect.~2.
We observe that in the limit $\epsilon \rightarrow 0$ (\ref{k10prime})
coincides with (\ref{k4prime}). This ensures that after substituting
eqs. (\ref{k4prime}), (\ref{k6prime}), (\ref{k6}) and (\ref{k10prime})
into (\ref{eq:delta-exp}) all terms singular in $\epsilon$ cancel so
that $R$ is regular at $\epsilon = 0$. For this cancellation it is
important that we choose the subtraction point to be on the mass
shell, i.e. $\mu_{s}^2+\mu_{u}^2+\mu_{t}^2 = 4m_{0}^2$, as it was
discussed in Sect. 2. The finite part of (\ref{eq:delta-exp}) is
calculated using eq. (\ref{k6}) for the integral term and eq.
(\ref{k10prime}) for the rest of the terms.
As mentioned, the representation (\ref{k8}) is valid up to the first
threshold. This is seen as follows. In eq.~(\ref{k10prime}) we need
the behaviour for $k\to \infty$. Without loss of generality let us
assume $w=(L_1/L_2)^2 \geq 1$. In the considered limit only the smallest
summation indices are important, and we find \begin{equation}
Z_2^{c^2}(k; 1,w) \sim m_{1}(w) [1+c^2]^{-k}, \label{k10bis}
\end{equation} where $m_{1}(w)$ is the multiplicity of the first heavy
mode in accordance with eq.~(\ref{eq:mass}). The condition of
convergence then reads \begin{equation} \frac{b^2}{4} < 1+c^2.
\label{k11} \end{equation} This means that for the first term on the
r.h.s. of (\ref{eq:delta-exp}) the representation (\ref{k8}) is valid
up to the threshold of the first heavy particle, i.e. up to $s < 4
M_{(1,0)}^2 = 4 (m_{0}^2 + 1/L_{1}^2)$. Here we assume that the
subtraction points are chosen to satisfy (\ref{mu-condition}), so that
the terms in (\ref{eq:delta-exp}) evaluated at the subtraction points
converge.
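The asymptotic behaviour (\ref{k10bis}), which fixes the radius of convergence, can be probed directly; in the sketch below the values of $w$ and $c^2$ are our own hypothetical choices:

```python
def Z2c(nu, w, c2, N=50):
    """Truncated primed Epstein-type sum of eq. (k9) with u1 = 1, u2 = w."""
    s = 0.0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            if (n1, n2) != (0, 0):
                s += (n1 * n1 + w * n2 * n2 + c2) ** (-nu)
    return s

w, c2 = 2.0, 0.25      # hypothetical parameters, w > 1
m1 = 2                 # multiplicity of the first heavy mode for w > 1
for k in (6, 10, 14):
    ratio = Z2c(k, w, c2) / (m1 * (1.0 + c2) ** (-k))
    # ratio -> 1: the next modes (0, +-1) are suppressed by ((w+c2)/(1+c2))^(-k)
    assert abs(ratio - 1.0) < 0.05
assert abs(Z2c(14, w, c2) / (m1 * (1.0 + c2) ** (-14)) - 1.0) < 1e-2
```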
Formally, any manifold of the kind $M^4\times
K$, with $K$ an arbitrary compact two-dimensional manifold, may also be
dealt with in this calculation. The only difference is that in the results (\ref{k10}) the
Epstein-type zeta-function has to be replaced by the corresponding one
of $K$.
Of course the question arises of how one may obtain a similar
representation for the cross section extended beyond the first
threshold. One needs to find an analytical continuation of
eq.~(\ref{k8}), or more precisely of $\zeta'(0;b^2,c^2,w)$, for values
$(|b^2|/4)\geq 1+c^2$. The behaviour of the sum near $(|b^2|/4) \sim
(1+c^2)$ in eq. (\ref{k10prime}) is determined by the behaviour of the
function \cite{grad} \[ \sqrt{1-x^2}\arctan\frac x
{\sqrt{1-x^2}}=x-\frac 1 4 \sum_{k=1}^{\infty}
\frac{\Gamma(k)k!}{(2k+1)!}(2x)^{2k+1} \] near $|x|=1$ (cf.
(\ref{k10prime}), (\ref{k10bis})). Subtracting it from and adding it
to eq.~(\ref{k10prime}) with $x^2=b^2/[4(1+c^2)]$, we get
\begin{eqnarray} \lefteqn{ \zeta'(0;b^2,c^2,w) =
\sum_{k=2}^{\infty}\frac{(k-1)!k!}{(2k+1)!}Z_2^{c^2}(k;1,w)b^{2k}
}\nonumber\\ &=&2 m_{1}(w)\left\{1-\frac{x^2}{3}
-\frac{\sqrt{1-x^2}} x \arctan \frac x
{\sqrt{1-x^2}}\right\}\nonumber\\ &
&+\sum_{k=2}^{\infty}\frac{(k-1)!k!}{(2k+1)!} \left[Z_2^{c^2}(k;1,w)
-\frac {m_{1}(w)}{(1+c^2)^k} \right]b^{2k}. \label{artur}
\end{eqnarray} The advantage of this representation is apparent. The
sum in eq.~(\ref{artur}) is convergent up to the second threshold. The
remaining terms contain explicitly the analytical behaviour of
$\zeta'(0;b^2,c^2,w)$ at the first threshold. This is seen very well
by means of the formula $\ln (ix+\sqrt{1-x^2})=i\arctan
x/\sqrt{1-x^2}$, which may be used in eq.~(\ref{artur}) as well and
provides the analytical continuation of $\zeta'$ in $x^2$ up to the
second threshold. It is clear how to continue the procedure in order
to obtain a representation in principle valid up to any given
threshold.
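The power-series identity quoted above can be confirmed numerically; the sketch below (our illustration) generates the coefficients $\Gamma(k)k!/(2k+1)!$ by the recursion $c_{k+1}/c_k = k(k+1)/[(2k+2)(2k+3)]$ to avoid large factorials:

```python
import math

def lhs(x):
    return math.sqrt(1.0 - x * x) * math.atan(x / math.sqrt(1.0 - x * x))

def rhs(x, K=120):
    total = x
    coeff = 1.0 / 6.0            # Gamma(1) * 1! / 3!, the k = 1 coefficient
    for k in range(1, K + 1):
        total -= 0.25 * coeff * (2.0 * x) ** (2 * k + 1)
        coeff *= k * (k + 1) / ((2 * k + 2) * (2 * k + 3))
    return total

for x in (0.1, 0.5, 0.9):
    assert abs(lhs(x) - rhs(x)) < 1e-9   # the series converges for |x| < 1
```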
Using representation (\ref{artur}) of $\zeta'$, the function
$S'(s;0)$, see eq.~(\ref{k5}), which is also important for the
calculation of the ratio $R$, see eq.~(\ref{eq:delta-exp}), may be
written in the form \begin{eqnarray} S'(s;0)&=&2 m_{1}(w)\left[1+\frac
1 6 \frac{sL_1^2}{4(1+c^2)}\right]-
m_{1}(w)F\left(\frac{sL_1^2}{4(1+c^2)}\right)\nonumber\\ &
&+\sum_{k=2}^{\infty}\frac{(k-1)!k!}{(2k+1)!} \left[Z_2^{c^2}(k;1,w)
-\frac {m_{1}(w)}{(1+c^2)^k} \right]\frac{(-sL_1^2)^{k}}{k+1},\label{neuartur}
\end{eqnarray} with \begin{equation} F(z) =-1 +2\sqrt{\frac{1+z} z}
\ln(\sqrt{1+z}+\sqrt{z})+\frac 1 z (\ln
(\sqrt{1+z}+\sqrt{z}))^2.\nonumber \end{equation} The presented
formulas (\ref{artur}) and (\ref{neuartur}) will appear to be quite
effective for the calculation of the ratio $R$ in the next section.
\section{Numerical results for the scattering cross section}
Using representations (\ref{k6prime}) and (\ref{k10prime}) we write
the ratio $R$ as \begin{eqnarray} \lefteqn{ R
\left(\frac{sL_{1}^2}{4};\mu_{s}^{2}L_{1}^{2},
\mu_{u}^{2}L_{1}^{2},\mu_{t}^{2}L_{1}^{2}; m_{0}L_{1},w \right) =
}\nonumber\\ & &+\Re \left\{ \zeta'(0;sL_{1}^2,m_{0}^2L_{1}^2,w) -
\zeta'(0;\mu_{s}^2 L_{1}^2,m_{0}^2L_{1}^2,w) \right.\nonumber \\
& & + \left. 2 S'(s-4m_{0}^2;0) - \zeta'(0;\mu_{u}^2
L_{1}^2,m_{0}^2L_{1}^2,w) - \zeta'(0;\mu_{t}^2
L_{1}^2,m_{0}^2L_{1}^2,w) \right\}, \label{n1} \end{eqnarray} where
$S'$ is given by eq. (\ref{k6}) or (\ref{neuartur}) and $\zeta'$ is
given by (\ref{k10prime}) or (\ref{artur}) depending on the range of
the energy of the colliding particles.
For the numerical computation we take the zero mode particle to be
much lighter than the first heavy mode and choose the subtraction
points to be at the low energy interval. We take \begin{equation}
m_{0}^2 L_{1}^2 = 10^{-4}, \; \; \; \; \mu_{s}^2 / m_{0}^2 = 10^{-2},
\; \; \; \; \mu_{u}^2 = \mu_{t}^2 = (4m_{0}^2 - \mu_{s}^2)/2.
\label{n2} \end{equation} By making such a choice of parameters we
were motivated by the arguments in favour of the possibility of having
the compactification scale to be of the order of the supersymmetry
breaking scale, $L_{1}^{-1} \sim M_{SUSY}$ (see discussion in the
Introduction). Then the values (\ref{n2}) could mimic a situation
with, for example, $m_{0} = 100$ GeV, $L_{1}^{-1} = 10$ TeV.
Now, we assume that $w=(L_{1}/L_{2})^2 \geq 1$ and compute $R$ as a
function of $z=s/(4M_{(1,0)}^2)$, where due to our choice of $w$ we
have $M_{(1,0)}^2 = 1/L_{1}^2 + m_{0}^2$ for the square of the mass of
the first heavy mode. An approximate expression for the ratio with the
parameter values $m_0^2L_1^2,\mu_s^2L_1^2,\mu
_u^2L_1^2,\mu_t^2L_1^2\approx 0$ is easily obtained to be
\begin{equation} R(z;w) = \frac{4}{9} Z_2^{c^2}(2;1,w) z^2 +
\frac{8}{105} Z_2^{c^2}(3;1,w) z^3 + {\cal O} (z^4) \label{ratio1}
\end{equation} for $0 \leq z < 1$. Here we have suppressed part of
the arguments of the function (\ref{eq:delta-def}): $R(z;w) \equiv
R(z;0,0,0,0,w)$. To have an expression for $R$ valid at the first
threshold and above up to the second threshold the formula
(\ref{artur}) should be used. Then we get \begin{eqnarray} R(z;w) &=&
m_{1}(w)\left[6-2\sqrt{\frac{1-z} z}\arctan \sqrt{\frac z {1-z}}
-2F(z)\right]\label{ratio}\\ &+&\frac 4 9
\left(Z_2^{c^2}(2;1,w)-m_{1}(w)\right)z^2+\frac 8 {105} \left(
Z_2^{c^2}(3;1,w)-m_{1}(w)\right) z^3 + {\cal O} (z^4).\nonumber
\end{eqnarray}
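Below the first threshold, the leading terms (\ref{ratio1}) lend themselves to a quick numerical cross-check. The sketch below (our illustration; $w=1$, $c^2=0$, brute-force truncation of the lattice sums) reproduces the value $R\approx 0.17$ at $z=0.25$ quoted in the text; at $z=0.5$ the omitted higher-order and threshold contributions still matter:

```python
def Z2(nu, w=1.0, c2=0.0, N=100):
    """Truncated primed Epstein-type sum, cf. eq. (k9)."""
    s = 0.0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            if (n1, n2) != (0, 0):
                s += (n1 * n1 + w * n2 * n2 + c2) ** (-nu)
    return s

def R_series(z, w=1.0):
    """Leading terms of eq. (ratio1), valid below the first threshold."""
    return (4.0 / 9.0) * Z2(2.0, w) * z**2 + (8.0 / 105.0) * Z2(3.0, w) * z**3

assert abs(R_series(0.25) - 0.17) < 0.01   # text: R ~ 0.17 at z = 0.25, w = 1
assert R_series(0.5, w=1.0) > 2 * R_series(0.5, w=4.0)   # m1(1)/m1(w>1) = 2
```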
Plots of $R$ for various values of $w$ are presented in Fig.~1. Let us
first consider the interval $0 < z \leq 1$, i.e.~below the first
threshold. Even in this range the deviation of the cross section of
the theory on $M^4\times T^2$ from that of the four dimensional one,
characterized by $R$, is quite noticeable. Thus, for example, for
$w=1$ we find $R\approx 0.76$ for $z=0.5$ and $R\approx 0.17$ for
$z=0.25$. We would like to mention that the case $w=1$ was first
studied in \cite{torus-scat}. The closer the space of extra
dimensions $T^2$ is to the equilateral torus with $w=1$, the stronger
the deviation of the cross section from the four-dimensional one. This
might be the most relevant case, because due to the high symmetry of
this compactification the vacuum energy of the spacetime probably
takes a minimum value (for example this result has been found for a
scalar field living on a torus \cite{ambjorn}).
{}From the first line in eq. (\ref{ratio}) we see that for $0 < z \leq
1$ the behaviour of $R$ is basically determined by the multiplicity
$m_{1}(w)$ times some universal function of $z$. Since $m_{1}(1)=4$
and $m_{1}(w>1)=2$, this explains that the quotient $R(z;1)/R(z;w)
\approx 2$, as can be seen from Fig. 1. The second line gives
corrections depending on $w$. With increasing $w$ these corrections
become smaller, and for values of $w\geq 10$ they are already
negligible. On the contrary, for higher values of $z$, $z\geq 1$, they
become more important, and as is seen in Fig.~1 the observations
valid for $0\leq z<1$ no longer hold. This indicates that the
contribution of the heavy modes to the total cross section of the (2
light particles) $\to $ (2 light particles) scattering process grows
with the centre of mass energy $\sqrt{s}$ keeping the radii of the
compactification fixed.
Fig.~2 shows the behaviour of $R$ as a function of the scale $L_1$
with $\sqrt{s}$ fixed. As implemented by our renormalization
conditions, eqs.~(\ref{eq:renorm1}), (\ref{eq:renorm2}), for small $L_1$
we have $R\to 0$. Here we see once more, as already mentioned, that
for bigger centre of mass energy the influence of the heavy modes is
increasing. The setting described in Fig.~2 would be more appropriate
for making possible predictions in high energy experiments at modern
colliders (of course in more realistic models). So, in case $\sigma
(s)$ is measured experimentally and a value of $R =(\sigma (s) -\sigma
^{(0)} (s) )/\sigma ^{(0)} (s) \neq 0$ is found, Fig.~2 could be used
to obtain bounds on the size $L_1$ of extra dimensions.
We see that there is a noticeable deviation of the cross section
$\sigma^{(\infty)}$ of the complete theory from the cross section
$\sigma^{(0)}$ of the four-dimensional model, characterized by the
function $R$, due to the presence of the heavy Kaluza-Klein modes. The
maximal ``amplitude'' of this deviation below or at the threshold of the
first heavy particle is basically determined by the multiplicity of
this mode. Another physical situation illustrating this property is
considered in the next section.
\section{Am\-pli\-fi\-ca\-tion of the cross sec\-tion by con\-stant
abe\-li\-an gauge field}
In generalization of the model described by the action
(\ref{eq:action0}), let us now consider the abelian scalar gauge
theory on $M^{4} \times T^{2}$. We will be interested in the case when
the only non-zero components of the abelian gauge potential are those
of the extra dimensions and, moreover, here we will consider them as
classical external fields. With these assumptions the action of the
theory we are going to study is \begin{eqnarray} S&=& \int_{E} d^{4}x
d^{2}y \left[\frac{1}{2}(\frac{\partial\phi (x,y)} {\partial
x^{\mu}})^2+\frac{1}{2}\left[ \left( \frac{\partial}{\partial y
^i}-A_i\right) \phi (x,y)\right]\left[ \left( \frac{\partial}{\partial y
^i}-A_i\right) \phi (x,y)\right]\right.\nonumber\\ &
&\qquad\qquad\left. - \frac{1}{2} m_{0}^{2} \phi ^{2}(x,y) -
\frac{\hat{\lambda}}{4!} \phi ^{4} (x,y) \right], \label{k13}
\end{eqnarray} with periodic boundary conditions for $\phi(x,y)$ in
the toroidal directions, $\phi(x^{\mu},y^{i}+2\pi L_{i}) =
\phi(x^{\mu},y^i)$, as before. For some previous studies of gauge
theories on the torus see \cite{gauge-torus}, \cite{actor}.
The model has some properties in common with theories with the abelian
gauge field at finite temperature $T$ (see, for example, \cite{actor},
\cite{smilga} for reviews and references therein). Thus, in such
theories
the gauge potential component $A_{0}$ along the compact Euclidean time
direction becomes an angular variable in the effective action, i.e.
locally $- \pi \leq A_{0}/T \leq \pi$. In our model, as we will see
shortly, the function $R$ is periodic in the variables $A_{i}L_{i}$,
$(i=1,2)$. Also, in our model, similar to the theories at finite $T$
\cite{angular}, due to the fact that $T^2$ is a multiply-connected
space, the $A_{i}$'s
{\em cannot} be gauged away and are physical parameters of the model.
This is similar to the appearance of non-integrable phases of Wilson
line integrals as physical degrees of freedom in non-abelian gauge
theories on multiply-connected spaces \cite{hosotani}, \cite{weiss}.
Our intention here is to study the change of the function $R$ due to
the change of the spectrum of the heavy masses, namely the values of
$M_{(n_{1},n_{2})}$ and their multiplicities, produced by the gauge
potential. The ideal configuration to use for this is $A_{\mu}=0$
(this choice is already done in eq. (\ref{k13})) and $A_{i}=$const.
This approximation is of the same type as that usually used for
studies in theories at non-zero $T$. A more general configuration
for the abelian gauge potential on $M^{4} \times T^{2}$ would lead to
the model with the abelian gauge field, massive vector fields and
additional massive scalar fields on $M^{4}$, which is beyond the scope
of our investigation.
Substituting the Fourier expansion (\ref{eq:laplace}) into the action
(\ref{k13}) and integrating over the toroidal components we obtain the
model with the action (\ref{eq:action1}) but now the masses of the
fields depend on the gauge field components and are given by
\begin{equation} M_{N}^2 (a) = m_{0}^2 +
\frac{(n_{1}-a_{1})^2}{L_{1}^2} + \frac{(n_{2}-a_{2})^2}{L_{2}^2},
\label{k14} \end{equation} where $a_{i}=A_{i} L_{i}$, $i=1,2$. Notice
that now the mass of the zero mode, given by \begin{equation}
M_{(0,0)}^2 (a) = m_{0}^2 + \frac{a_{1}^2}{L_{1}^2} +
\frac{a_{2}^2}{L_{2}^2}, \label{k15} \end{equation} also receives a
contribution from the gauge field components. This makes the
separation into the light masses and the heavy ones rather
problematic. We will return to this issue shortly.
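The shift of the spectrum (\ref{k14}) and the resulting mode multiplicities are easily tabulated. The following sketch (ours; units $L=1$, $m_0=0$) reproduces the multiplicities of the first heavy level used later in the text, $m_1(0)=4$, $m_1(a)=2$ for $0<a<1/2$ and $m_1(1/2)=3$:

```python
from collections import Counter

def kk_mass_sq(n1, n2, a1, a2, m0sq=0.0, L=1.0):
    """Squared Kaluza-Klein masses, eq. (k14), on the equilateral torus."""
    return m0sq + ((n1 - a1) ** 2 + (n2 - a2) ** 2) / L**2

def first_heavy_multiplicity(a, nmax=2):
    """Multiplicity of the lowest level above the (0,0) light mode, a1 = a2 = a."""
    levels = Counter(
        round(kk_mass_sq(n1, n2, a, a), 9)
        for n1 in range(-nmax, nmax + 1)
        for n2 in range(-nmax, nmax + 1)
        if (n1, n2) != (0, 0))
    return levels[min(levels)]

assert first_heavy_multiplicity(0.0) == 4    # (+-1, 0), (0, +-1)
assert first_heavy_multiplicity(0.25) == 2   # (1, 0), (0, 1)
assert first_heavy_multiplicity(0.5) == 3    # (1, 0), (0, 1), (1, 1)
```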
We calculate then the cross section of the scattering of two {\em
zero} modes. Imposing the same renormalization conditions as
eqs.~(\ref{eq:renorm1}), (\ref{eq:renorm2}) and repeating all steps of
the calculations of Sects.~2 and 3 we obtain the following analogous
expression for the total cross section: \begin{eqnarray} \sigma
^{(\infty)} (s,a) & = & g^2 + \frac{i g^3}{16 \pi^4} \lim _{\epsilon
\rightarrow 0} (L\kappa)^{2\epsilon}\times\nonumber\\ & & \left\{
\frac{32 \pi^4 i}{L^{2 \epsilon}} \left[
K_{0}(s,t,u;M_{(0,0)}(a),\epsilon) -
K_{0}(\mu_{s}^2,\mu_{t}^2,\mu_{u}^2;M_{(0,0)}(a),\epsilon) \right]
\right. \nonumber \\ & & +\Re \Delta I \left(
sL^2,m_{0}L,a_{i};\epsilon \right) - \Delta I \left( \mu_{s}^2
L^2,m_{0}L,a_{i};\epsilon \right) \label{k16} \\ & & +
\frac{2}{s-4m_{0}^2} \int_{-(s-4m_{0}^2)}^{0} du \Delta I \left(
uL^2,m_{0}L,a_{i};\epsilon \right)\nonumber\\ & & \left.- \Delta I
\left( \mu_{u}^2 L^2,m_{0}L,a_{i};\epsilon \right) - \Delta I \left(
\mu_{t}^2 L^2,m_{0}L,a_{i};\epsilon \right) \right\}, \nonumber
\end{eqnarray} where $K_{0}$ is defined by eq.~(\ref{eq:Kexpansion})
and $\Delta I$ is given by eq. (\ref{eq:int-sum}) with $M_{N}^2$ being
replaced by (\ref{k14}). Here we restrict ourselves to the case of the
equilateral torus with $L_{1}=L_{2}=L$, so we suppressed the dependence
on $w$ and instead indicated explicitly the dependence on the gauge
field parameters $a_{i}$. The analog of the function (\ref{k2}) is \[
\zeta ^A(\epsilon; b^2,c^2)=\int\limits_0^1dx\,\,{\sum_{n_{1},n_{2}
=-\infty}^{\infty}}\!\!\!\!\!' \left[(n_{1}-a_1)^2+(n_{2}-a_2)^2+
c^2+b^2x(1-x)\right]^{-\epsilon}. \] The total cross section
(\ref{k16}) includes summation over {\em all} modes and is periodic in
$a_{i}$: \begin{eqnarray} \sigma
^{(\infty)}(s,a_{1}+k_{1},a_{2}+k_{2}) =
\sigma^{(\infty)}(s,a_{1},a_{2}),\label{symmetry} \end{eqnarray} where
$k_{1}$ and $k_{2}$ are integers.
Also the effective potential of $A_i$ in abelian and non-abelian gauge
theories possesses the same periodicity and reaches its minima
(in our notation) at $a_i=n_i$ \cite{hosotani,weiss,hosotani1} (see
also \cite{actor,smilga}). In what follows we
consider the sector with $0\leq a_i <1$ ($i=1,2$).
Moreover, one can check that \[
\sigma^{(\infty)}(s,a_{1},1-a_{2}) = \sigma^{(\infty)}(s,1 -
a_{1},a_{2}) = \sigma^{(\infty)}(s,a_{1},a_{2}), \] so that
$\sigma^{(\infty)}$ is symmetric with respect to $a_{i}=1/2$. Thus
it is enough to consider the interval of values \begin{equation} 0
\leq a_{i} \leq 1/2, \; \; \; \; i=1,\ 2. \label{k17}
\end{equation} Let us mention that the special values $a_{1}=a_{2}=0$,
$a_{1}=a_{2}=1/2$ and $a_{1}=0$, $a_{2}=1/2$ represent respectively
the periodic (the case considered in Sects. 2 and 3), antiperiodic (or
twisted) and mixed (i.e. periodic in one toroidal direction and
antiperiodic in another) boundary conditions for the scalar field in
the absence of abelian gauge fields.
Now, let us return to the issue of the light mode. For the $a_{i}$ to
belong to the interval (\ref{k17}) the gauge components $A_{i}$ must
be of the order of $L^{-1}$, and thus there will be terms of the order
of $L^{-2}$ in eq. (\ref{k15}). However, as soon as $a_{i} < 1/2$,
$M_{(0,0)}$ remains the lowest mass in the spectrum. This is
also seen in Fig.~3, where we show the spectrum of the states with a
few lowest quantum numbers $N=(n_1,n_2)$ as a function of $a=a_1=a_2$.
We consider here the interval $0\leq a <1$ in order to make apparent
the symmetry of $\sigma ^{(\infty )}$ described in
eq.~(\ref{symmetry}). Restricting the values of $a_{i}$ to the
interval (\ref{k17}), we take the zero mode $\phi_{(0,0)}$ to be the
lightest one. This is the mode whose contribution is subtracted to
obtain $R$, eq. (\ref{eq:delta-def}). The sector of this mode appears
in the zero energy limit of the complete multidimensional theory.
Indeed, the difference between the zero mode mass and the next mass in
the spectrum is given by \[ M_{(1,0)}^2 - M_{(0,0)}^2 =
\frac{1-2a}{L^2} \rightarrow \infty \] as $L \rightarrow 0$ for
$a<1/2$. One could also think of $m_{0}^2$ being adjusted such that
$M_{(0,0)}^2 L^2 \ll 1$.
For numerical computations we take $a_{1}=a_{2}=a$. The behaviour of
$R$ as a function of $z=s/(4 M_{(1,0)}^2 (a=0))$ in the interval $0
\leq z \leq 1$ for various values of $a$ is plotted in Fig.~4. It is
described by the formula \begin{equation} R(z;a) = \frac{4}{9}
Z_2^{c^2}(2;a) z^2 + \frac{8}{105} Z_2^{c^2}(3;a) z^3 + {\cal
O} (z^4), \label{ratio2} \end{equation} (cf. (\ref{ratio1})) valid for
$z < M^{2}_{(1,0)}(a)/M^{2}_{(1,0)}(a=0)$. Here \begin{equation}
Z_2^{c^2}(\nu;a)={\sum_{n_{1},n_{2}=-\infty}^{\infty}\!\!\!\!\!\!^{\prime}}
[(n_{1}-a)^2+(n_{2}-a)^2+c^2]^{-\nu} \label{z-a} \end{equation} and
we suppressed the dependence of $R$ on $w$ and instead indicated
explicitly its dependence on the parameter $a$. Again we see that the
behaviour of the deviation $R$ as a function of the parameter $s/(4
M_{(1,0)}^2(a))$ (which is not the same as $z$!) is determined mainly by
the multiplicity $m_{1}(a)$ of the first heavy mass $M_{(1,0)}(a)$
which is now a function of $a$. From Fig. 3 one sees that
$m_{1}(0)=4$, $m_{1}(a)=2$ for $0 < a < 0.5$ and $m_{1}(0.5)=3$.
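The shifted sum (\ref{z-a}) and the leading terms of (\ref{ratio2}) can be evaluated directly. In the sketch below (our illustration; $c^2=m_0^2L^2=10^{-4}$ as in (\ref{n2}), truncated lattice sums) the truncated series already shows an amplification of the same order as the ratio $R(0.25;0.5)/R(0.25;0)\approx 2.6$ quoted below:

```python
def Z2a(nu, a, c2=1e-4, N=100):
    """Shifted Epstein-type sum, eq. (z-a), with a1 = a2 = a;
    the primed sum omits the light mode (n1, n2) = (0, 0)."""
    s = 0.0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            if (n1, n2) != (0, 0):
                s += ((n1 - a) ** 2 + (n2 - a) ** 2 + c2) ** (-nu)
    return s

def R_series_a(z, a):
    """Leading terms of eq. (ratio2)."""
    return (4.0 / 9.0) * Z2a(2.0, a) * z**2 + (8.0 / 105.0) * Z2a(3.0, a) * z**3

amplification = R_series_a(0.25, 0.5) / R_series_a(0.25, 0.0)
assert 1.5 < amplification < 3.5   # the full formula gives ~ 2.6
```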
Let us consider the interval $0<z<1/2$. The curves in Fig. 4 show that
the function $R(z;a)$ for fixed $z$ grows with $a$ due to the increase
of the coefficient $(4/9) Z_2^{c^2}(2;a)$ of the first term in eq.
(\ref{ratio2}) with $a$. This can be understood as the result of the
two competing effects: the growth due to the approach of the first
heavy threshold and the decrease due to the decrease of $m_{1}(a)$ when
$a$ departs from zero. The first effect wins this competition. In
addition, $m_{1}(a)$ increases when $a$ approaches the value $0.5$. As
a result, for example, $R(0.25;0.5)/R(0.25;0) \approx 2.6$ and
$R(0.5;0.5)/R(0.5;0) \approx 5.9$. This increase is seen more clearly in
Fig.~5.
\section{Conclusions}
In continuation of the investigation carried out in
\cite{torus-scat,sphere-scat}, in this article we considered the
$\lambda \phi^4$-theory on the space-time $M^4 \times T^2$. We studied
the scattering of two light particles in this model and calculated the
function $R$ which characterizes the deviation of the total cross
section of this process from the cross section of the same one but in
the $\lambda \phi^4$-model on $M^4$. The deviation is due to the
one-loop contributions of the heavy Kaluza-Klein modes, which appear
because of the multidimensional nature of the theory. For the centre
of mass energy $\sqrt{s}$ of the colliding particles below the
threshold of the first heavy particle the deviation grows with $s$ and
is already quite noticeable for $s > 0.25 \times$(energy of the first
threshold).
The behaviour of the function $R$ below the first threshold is given
with a good accuracy by the leading terms in the expansions
(\ref{ratio1}) or (\ref{ratio2}). Our results can be easily
generalized to an arbitrary two-dimensional compact manifold $K$ of
extra dimensions, in which case the formula for $R$ takes the form:
\begin{equation} R \approx \frac{1}{36} \zeta (2 |K) (s L^{2})^{2}.
\label{ratio3} \end{equation} Here $L$ is a scale such that the
eigenvalues of the Laplace operator are given by $\lambda_{N}/L^{2}$
and $\zeta(s | K)$ is the zeta-function of this operator: \[ \zeta(s
| K) = \sum_{\lambda_{N} \neq 0} \frac{m_{\lambda_{N}}}
{(\lambda_{N})^{s}}, \] where $m_{\lambda_{N}}$ is the multiplicity of
the eigenvalue $\lambda_{N}$. (Compare this with eqs. (\ref{k9}) and
(\ref{z-a}).) This formula was first obtained in ref.
\cite{sphere-scat} for the case of the sphere $K=S^{2}$. The
representation (\ref{ratio3}) shows that the behaviour of $R$ below
the first threshold is mainly determined by its position and the
multiplicity of the first heavy mass, which in their turn are
determined by the geometry of $K$. We demonstrated this in more detail
in the case of the non-equilateral torus and for the model with the
abelian gauge potential.
In the latter case we have also shown that for ``low'' energies of the
scattering particles, namely for $s L^{2} < 2 ( 1 + m_{0}^{2} L^{2})
\approx 2$, the function $R$ grows significantly (from $3$ to $6$
times)
when the angular variable $a = A_{i}L$, characterizing the gauge
potential, runs through the half-interval of periodicity, which ranges
from $0$ to $0.5$. Again the effect can be understood from the formula
(\ref{ratio3}): the interaction of the scalar field with the classical
constant (also slow varying on $M^{4}$) gauge potential produces the
change of the spectrum (masses and their multiplicities) of the
particles of the Kaluza-Klein tower, which in its turn leads to the
growth of $\zeta (2|K)$ with $a$.
We would like to mention that this effect of amplification of the
cross section might be relevant for the cosmology of the Early
Universe. Of course, for $K=T^{2}$, though the constant potential $A_{i}$
satisfies the equations of motion, such a solution is not of much
physical interest, as is known. However, it seems that the effect of
amplification might take place in the case of more interesting gauge
models with the spaces of extra dimensions with non-zero curvature.
There the growth of the cross section due to the presence of
non-trivial gauge configurations, assuring spontaneous
compactification \cite{compact,KK-review2}, might give rise to
interesting consequences.
A few more remarks are in order here. There is a certain relation
between the effective potential of the constant gauge configuration
$A_{i}$ on $T^2$ (\cite{weiss,hosotani1}, see also \cite{actor,smilga})
and the
scattering of light scalar particles in this background, characterized
by the deviation $R$. The effective potential has its minima at $A_{i}
= n_{i}/L$. In the sector $0 \leq a \leq 0.5$, where we calculated
$R$, the minimum of the effective potential is reached
at $A_{1}=A_{2}=0$ for which the value of $R$ is minimal for a given
energy $\sqrt{s} < 2 M_{(1,0)}$ and the first heavy mass $M_{(1,0)}$
is maximal. In contrast, scattering of light scalar particles in
the background with $a=0.5$, corresponding to a maximum of the
effective potential, is characterized by the maximal value of the
deviation $R$ at low energies. It would be interesting to gain deeper
understanding of this relation. Other interesting physical effects in
abelian and non-abelian gauge theories with all or a part of the
space-time dimensions compactified to the torus (like confinement,
crossover and phase transitions, breaking of the gauge symmetry,
etc.) were studied in a number of papers \cite{decoupl},
\cite{gauge-torus}-\cite{polyakov}.
The model considered here is not a realistic one. Our aim was to
demonstrate the existence of the effect due to extra dimensions in the
behaviour of the total cross section of the particles of the low
energy sector of the theory, which is the $\lambda \phi^4$-model in
four dimensions in our case, and to study some characteristic features
of this effect. The function analogous to $R$ calculated in a
realistic extension of the Standard Model is to be compared with
experiment at future colliders. This could give evidence of the
existence of the Kaluza-Klein states or, by using plots like in Fig.
2, provide upper limits on the compactification scale $L$. We should
note that calculations of some processes for a certain class of
superstring models were carried out in \cite{antoniadis}.
\vspace{5mm}
\noindent{\large \bf Acknowledgments} We would like to thank Dom\'enec
Espriu, Andrei Smilga and Joan Soto for valuable discussions and
comments and the Department d'ECM of Barcelona University for the warm
hospitality. KK acknowledges financial support from the Alexander von
Humboldt Foundation (Germany). YK acknowledges financial support from
CIRIT (Generalitat de Catalunya). \bigskip
\newpage
\section{Introduction}
Difficulty usually emerges in the characterization and understanding of complex systems,
since we cannot split a complex system into several simpler subsystems without tampering
with its dynamical properties~\cite{Kantelhardt2009}. To address this difficulty,
researchers have to turn to analyzing macroscopic properties.
Time series analysis is a good case in point, because the behavioral evolution
of a complex system is characterized by its output records on a given time scale.
Human activity is deemed as a typical complex system, of which several macroscopic
properties have been unveiled recently via time series analysis (e.g.,
detrended fluctuation analysis, DFA~\cite{Bunde2000,Peng1994,Kantelhardt2001}). For examples, the periodicity
is found in diverse human activities, such as web surfing~\cite{Gonccalves2008},
online game log-in~\cite{Jiang2010}, task submission to linux server~\cite{Baek2008}
and purchasing in E-commerce~\cite{Dong2013}; the long-term correlation,
commonly behaving in many physical, biotic, economic and ecological
systems~\cite{Peng1992,Peng1993,Koscielny-Bunde1998,Makse1995,Makse1998,Liu1999,Cai2006,Cai2009,Rozenfeld2008},
is also discovered in human interactive activity and increases positively with activity level~\cite{Rybski2009}.
Following that, Rybski et al.~\cite{Rybski2012} investigate human communication
activities in a social network to show the relation between long-term correlation and inter-event
clustering. They present that at the individual level the long-term correlation in time series of events
is strongly dominated by the power-law distribution of inter-event times (named `Levy correlation' in Ref~\cite{Rybski2012}),
while at the whole community level it is a generic property of the system, since it arises from interdependencies
in the activities. Meanwhile, Zhao et al.~\cite{Zhao2012} analyze the time series of inter-event times
obtained from online reviewing activities to unveil the long-term correlation (i.e., memory property) as a function
of activity level. They find that there is an abnormal scaling behavior associated with long-term anticorrelation,
which is brought by the bimodal distribution of inter-event times.
These long-term correlations also imply the existence of
fractality in time series of human online activity. However,
two unanswered questions limit a deeper understanding of human activity: (1) what category of
fractality exists in time series of human activity, and (2) what is the origin of such fractality?
To address them, facilitated by Internet technology, we investigate the time series of human
online activity from two movie reviewing websites, Movielens and Netflix. The analysis of long-term correlation
is presented to reveal the fractality in time series of the online reviewing activity.
At the individual level, we apply DFA to time series of
events composed by users with the same activity level, and to the corresponding
shuffled ones in which each user's inter-event times are preserved.
The long-term correlations increase positively with activity level. Moreover,
because the distributions of inter-event times at different
activity levels do not strictly follow a power law,
there is only a trivial difference between the Hurst exponents~\cite{Hurst1951,Hurst1956}
of the original and shuffled time series. This empirical result is slightly different
from the finding in human communication activity. At the whole community level, a similar analysis
of time series of events aggregated from all users' activities shows
stronger long-term correlation, with Hurst exponents roughly close to 0.9 and 1.0 for Movielens and
Netflix, respectively.
To further obtain the category of such fractality and understand its origin,
we use multifractal detrended fluctuation analysis (MFDFA)~\cite{Kantelhardt2002,Movahed2006,Lim2007}
to probe the singularity spectrum. The dependence of the generalized Hurst exponent ($H(q)$) on the \emph{q}-order
statistical moments ($q$) exhibits the multifractality of time series of events at the whole community level.
Though the multifractality persists after these time series are randomly shuffled, manifest changes
occur in the value of the generalized Hurst exponent ($H(q)$). A more legible result is provided by
the singularity spectrum. We hypothesize that such multifractality arises from the dual effect of a
broad probability density function (PDF) and long-term correlation~\cite{Kantelhardt2002}.
This hypothesis is also verified by our synthesized series. Therefore, we conclude that multifractality
exists in human online activity series and that the combined impact of a broad probability density
function and long-term correlation is at the root of such multifractality.
\section{Data}
Data sets from Movielens and Netflix both record individuals' reviews and ratings of movies at a certain time,
and are filtered according to the criterion of activity level, $M \geq 55$ (see definition in Sec. 4). We thus finally obtain
26,884 users (38.4$\%$ of total users) and 10,000,054 records in a long duration of 4,703 days (nearly 13 years) for Movielens,
and 17,703 users (99.6$\%$ of total users) and 100,477,917 records in a long duration of 2,243 days (nearly 6 years) for Netflix.
As the records in Movielens are sampled almost from its creation date, there are many noise users. Nevertheless, the sizes of the filtered
user sets in these two data sets are both more than $10^4$.
To convert these records into time series of events, we introduce two variables $x(t)$ and $x_{tot}(t)$,
which denote the events per day of a single user and of the whole community, respectively.
These time series serve as the basis of our subsequent analysis. A visual illustration is shown in Fig. \ref{fig:timeSeries},
where (a) and (b) indicate the activity records of two typical users for Movielens and Netflix, respectively, (c) and (d)
show corresponding time series of events at individual level, while (e) and (f) represent the time series of events
at whole community level. In Fig. \ref{fig:timeSeries}, we can also observe clusters of records or events, which suggests
the existence of burstiness in these online reviewing activities.
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{timeSeries}
\caption{\label{fig:timeSeries} (Color Online) A visual illustration of activity records of typical users
and time series of events at individual and whole community levels. (a) and (b) indicate the
activity records of two typical users, user23172 ($M=1308$) and user12228 ($M=5606$).
(c) and (d) show corresponding time series of events for these two typical users. (e) and (f)
represent time series of events for whole community. The dark blue and red lines denote Movielens and Netflix, respectively.}
\end{figure*}
\section{Method}
\subsection{Detrended fluctuation analysis}
The method of DFA has been proven useful for revealing
the extent of long-term correlation~\cite{Bunde2000,Peng1994,Kantelhardt2001}, and is less
sensitive to the scaling range thanks to the additional detrending process~\cite{Shao2012}.
To keep our description self-contained, we briefly introduce the steps of this method
as follows:
i) Calculate the profiles $Y(t')$ of time series $x(t)$,
\begin{equation}\label{DFA_profile}
Y(t') = \sum_{t=1}^{t'}\left[x(t)-\langle x(t)\rangle\right], \quad t' = 1, ..., N
\end{equation}
ii) Divide $Y(t')$ into $N_s$ non-overlapping segments of length $s$ in an increasing order.
Generally, $N$ does not exactly equal the product of $s$ and $N_s$ (i.e., $N_s=\lfloor \frac{N}{s} \rfloor$),
meaning that the last part of $Y(t')$ would be missed. Therefore we also divide $Y(t')$ from
the opposite direction in order to incorporate the whole of $Y(t')$. Thus, there are $2N_s$ different segments.
Moreover, the values of $s$ are sampled from a logarithmic space, $s = \frac{N}{2^{\mathrm{int}(\log_2 N - 2)}}, \ldots, \frac{N}{2^3}, \frac{N}{2^2}$,
which keeps the curve of $F(s)$ versus $s$ smooth.
iii) Given $s$, the profile $Y(t')$ in each segment is detrended separately. A least-squares fit is applied for
determining the $\chi^2$-function of each segment: for $v = 1, 2, ..., N_s$,
\begin{equation}\label{DFA_flucDivi1}
F^2(v,s) = \frac{1}{s}\sum_{j=1}^s[Y((v-1)s+j)-\omega_v^n]^2,
\end{equation}
and for $v = N_s+1, ..., 2N_s$,
\begin{equation}\label{DFA_flucDivi2}
F^2(v,s) = \frac{1}{s}\sum_{j=1}^s[Y(N-(v-N_s)s+j)-\omega_v^n]^2
\end{equation}
where $\omega_v^n$ is the $n$-order polynomial fit of segment $v$.
iv) Calculate the fluctuation function,
\begin{equation}\label{DFA_sum}
F(s) = [\frac{1}{2N_s}\sum_{v=1}^{2N_s}F^2(v,s)]^{\frac{1}{2}} \sim s^H,
\end{equation}
where $H$ is the Hurst exponent. The value of $H$ reveals the extent of long-term correlation in the time series:
it indicates long-term anticorrelation for $0< H< 0.5$, no correlation for $H=0.5$, and long-term correlation for $H>0.5$.
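Steps i)-iv) can be sketched in a few lines of code. The following is a minimal illustration of ours (not the authors' implementation), using least-squares polynomial detrending per segment; the Hurst exponent is then the slope of $\log F(s)$ versus $\log s$:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis following steps i)-iv)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())            # step i): profile Y(t')
    N = len(y)
    F = []
    for s in scales:
        ns = N // s
        # step ii): 2*ns segments, taken from both ends of the profile
        starts = list(range(0, ns * s, s)) + list(range(N - ns * s, N, s))
        var = []
        for start in starts:
            seg = y[start:start + s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)            # step iii): detrend
            var.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(var)))                 # step iv): F(s)
    return np.array(F)

def hurst(x, scales, order=1):
    """Hurst exponent from the log-log slope of F(s) versus s."""
    F = dfa(x, scales, order)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

As a sanity check, uncorrelated white noise yields $H \approx 0.5$, while its cumulative sum yields $H \approx 1.5$.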
\subsection{Multifractal detrended fluctuation analysis}
DFA helps us to acquire the long-term correlation
of time series, thereby establishing their fractality.
To further analyze such fractality and its origin,
we modify DFA to obtain MFDFA~\cite{Kantelhardt2002,Movahed2006,Lim2007},
where equation \ref{DFA_sum} is modified as follows:
\begin{equation}\label{MFDFA_sum}
F(s) = [\frac{1}{2N_s}\sum_{v=1}^{2N_s}[F^2(v,s)]^{q/2}]^{\frac{1}{q}} \sim s^{H(q)}.
\end{equation}
$H(q)$ is the generalized Hurst exponent. For a monofractal time series, $H(q)$ is independent
of $q$, while for a multifractal one, $H(q)$ depends on $q$. In addition, the multifractality of a time series
may be brought about by several key factors, such as long-term correlation
and a broad PDF. To figure out the origin of multifractality,
we randomly shuffle the time series to remove the long-term correlation while preserving
the same PDF, and apply MFDFA once again. If the multifractality only
results from the PDF, it will be preserved in the shuffled series,
while if the multifractality only comes from long-term correlation,
it will disappear. If the long-term correlation and the PDF dually affect the time series,
we can expect that the multifractality persists but that the value of the generalized Hurst exponent ($H(q)$)
changes.
A much more legible way to characterize a multifractal time series is the singularity spectrum $f(\alpha)$.
The horizontal span of $f(\alpha)$ represents the strength of multifractality. Specifically,
a very narrow $f(\alpha)$ approximately indicates a monofractal time series, while a
wide one suggests a multifractal time series. To acquire its analytical relation with $\alpha$,
we introduce the Renyi exponent $\tau(q)$~\cite{Meakin1987,Peitgen2004}, given as follows:
\begin{equation}\label{tau_q}
\tau(q) = qH(q)-1
\end{equation}
Applying the Legendre transformation~\cite{Tulczyjew1977}, we finally get the relation between $f(\alpha)$ and $\alpha$
\begin{equation}\label{alpha_tau}
\alpha = \tau'(q) \ \ and \ \ f(\alpha) = q\alpha-\tau(q)
\end{equation}
or equivalently (by using (\ref{tau_q}))
\begin{equation}\label{alpha_h}
\alpha = H(q)+qH'(q) \ \ and \ \ f(\alpha) = q[\alpha-H(q)]+1
\end{equation}
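The MFDFA generalization and the spectrum formulas (\ref{alpha_h}) translate directly into code. The sketch below is ours (not from the paper): $H'(q)$ is taken by finite differences, and $q=0$ is excluded because of the $1/q$ exponent in eq. (\ref{MFDFA_sum}).

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """q-th order fluctuation functions F_q(s); q = 0 must be excluded."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # profile
    N = len(y)
    Fq = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        ns = N // s
        starts = list(range(0, ns * s, s)) + list(range(N - ns * s, N, s))
        f2 = []
        for start in starts:
            seg = y[start:start + s]
            t = np.arange(s)
            res = seg - np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean(res ** 2))
        f2 = np.asarray(f2)
        for i, q in enumerate(qs):
            Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return Fq

def generalized_hurst(x, scales, qs, order=1):
    """H(q): log-log slopes of F_q(s) versus s."""
    Fq = mfdfa(x, scales, qs, order)
    logs = np.log(scales)
    return np.array([np.polyfit(logs, np.log(Fq[i]), 1)[0]
                     for i in range(len(qs))])

def singularity_spectrum(qs, H):
    """alpha = H(q) + q H'(q), f(alpha) = q [alpha - H(q)] + 1."""
    qs = np.asarray(qs, float)
    dH = np.gradient(H, qs)        # finite-difference H'(q)
    alpha = H + qs * dH
    f = qs * (alpha - H) + 1.0
    return alpha, f
```

For a monofractal series such as white noise, $H(q)$ is nearly flat and the resulting $f(\alpha)$ collapses to a narrow spectrum around $\alpha \approx 0.5$.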
\section{RESULTS}
\subsection{Long-term correlation in individual activity}
To effectively categorize users into different activity levels, we define
$M_i$ as the total number of records of a single user $i$ ($M_i=\sum_{t=1}^{N} {x_i(t)}$,
where $N$ is the length of the series), and convert it to a logarithmic scale,
$L_i=\lfloor \ln M_i \rfloor$. Then, $L$ ranges from $4$ to $8$ in Movielens while
it increases from $4$ to $12$ in Netflix. According to the logarithmic activity levels,
we first present the distributions of inter-event times in Fig.~\ref{fig:timeInterval_level},
where the left and right panels indicate Movielens and Netflix, respectively.
As shown in Fig.~\ref{fig:timeInterval_level}, both of them show a fat tail, which
suggests the burstiness of online reviewing activity~\cite{Barabasi2005}. More concretely,
for users with lower activity levels (e.g. $L<6$), the inter-event times are not
exactly power-law distributed. For example, in Fig.~\ref{fig:small_level},
the distributions of inter-event times of users with activity levels $L=4$ and $L=5$ in Movielens apparently
follow an exponential cut-off power-law distribution, fitted via the least-squares estimation method.
For users with larger activity levels (e.g. $L>7$), the inter-event times
are approximately power-law distributed. Thus, the power-law distribution is not the
only type that characterizes the fat tail of inter-event clustering (i.e., burstiness),
which is a little different from the empirical result found in human communication activity \cite{Rybski2012}.
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{timeInterval_level}
\caption{\label{fig:timeInterval_level}(Color online) Inter-event time distribution at different activity levels.
The left and right panels respectively indicate Movielens and Netflix. The dashed lines are guides for power-law distributions.
A fat tail is found in the inter-event time distributions for both Movielens and Netflix, which suggests the burstiness of human online activity.}
\end{figure*}
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{small_level}
\caption{\label{fig:small_level}Inter-event time distributions of users in Movielens
whose activity levels are respectively $L=4$ and $L=5$. Compared with exact power-law distributions,
they are better fitted by exponential cut-off power-law distributions via the least-squares estimation method.}
\end{figure*}
The burstiness of online reviewing activity (or the fat-tailed inter-event time distribution) potentially
suggests that the time series of events exhibit long-term correlation \cite{Rybski2012}.
Thus, we apply DFA to calculate the Hurst exponent of each time series of events obtained from a single user.
Note that the least-squares estimation method is applied for fitting the trend, and an F-statistic test confirms
the significance of the fitting results (see more in Supplementary Materials).
Then, these Hurst exponents are grouped according to the activity levels of users and averaged.
As shown in Fig. \ref{fig:DFA_level}, the average of the Hurst exponents as a function of activity level is presented; the values
are well above $0.5$ (which would indicate uncorrelated behavior), and they are not sensitive to the order of DFA.
Thus, it can be claimed that long-term correlation exists in these time series
of events at the individual level. It is also worth noting that
there is an approximately positive relation between Hurst exponent and activity level both
for Movielens and Netflix, similar to the results found in the traded values of
stocks and in communication activity \cite{Eisler2006,Eisler2008,Rybski2012}.
Now, we have found that long-term correlation and a fat-tailed inter-event time distribution both exist
in Netflix and Movielens. To further analyze the relation between the long-term correlation and the inter-event time
distribution, we shuffle the time series of events while preserving the distribution of inter-event times for each user.
The procedure is as follows: i) extract the inter-event times of each user; ii) shuffle the extracted data;
iii) keep the first time stamp unchanged and rebuild the time series of events from the shuffled data.
DFA is reused to obtain the Hurst exponents of the new time series of events. As shown in Fig. \ref{fig:DFA_level},
they are only trivially different from those of the original data, which shows that at the individual level the long-term correlation
of the time series of events is associated with the fat-tailed distribution of inter-event times. Since the inter-event
times are not strictly power-law distributed, we cannot simply infer the long-term correlation from Levy correlation~\cite{Rybski2012}.
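The shuffling procedure i)-iii) can be written compactly. This is an illustrative sketch of ours, not the code used in the paper; it keeps the first time stamp fixed and cumulates the shuffled gaps, so the multiset of inter-event times is preserved exactly.

```python
import random

def shuffle_events(timestamps, seed=None):
    """Rebuild an event sequence with shuffled inter-event times."""
    # i) extract the inter-event times of a user
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # ii) shuffle the extracted inter-event times
    rng = random.Random(seed)
    rng.shuffle(gaps)
    # iii) keep the first time stamp and rebuild the event sequence
    rebuilt = [ts[0]]
    for g in gaps:
        rebuilt.append(rebuilt[-1] + g)
    return rebuilt
```

Note that the first and last time stamps, the number of events, and the inter-event time distribution are all unchanged; only the ordering of the gaps (and hence any correlation between them) is destroyed.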
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{DFA_level}
\caption{\label{fig:DFA_level}(Color online) Hurst exponents as a function of activity level for Movielens and Netflix.
The results obtained from the original time series of events and the shuffled ones are respectively plotted with green circles
and blue squares. With the increase of activity level, the long-term correlation becomes stronger.
Moreover, the trivial difference between them reveals that the long-term correlation has a potential relation with
the fat-tailed inter-event time distribution.}
\end{figure*}
\subsection{Long-term correlation in whole community activity}
Although the inter-event time distributions with respect to activity levels
show a fat tail, we still want to probe whether this property is preserved
at the whole community (i.e., system) level. Figure \ref{fig:timeInter_all} shows
the inter-event time distributions of Movielens and Netflix at the
whole community level. They are fitted by an exponential cut-off
power-law distribution for Movielens and an approximate power-law one for Netflix, which
suggests that the fat tail is generic. Furthermore, the events are aggregated from all
users in the whole community, and the resulting time series is investigated to unveil
the long-term correlation. As shown in Fig. \ref{fig:DFA_all}, the Hurst exponents
obtained from 1-order and 2-order DFA are robust and approximately
close to 0.9 and 1.0 for Movielens and Netflix, respectively,
which shows very strong long-term correlation. These Hurst exponents are also associated
with a power spectrum of the time series obeying $\frac{1}{f}$ scaling, suggesting
self-organized criticality of the system. Moreover, when the time series of events of the whole
community are shuffled while preserving the distribution of inter-event times, the Hurst exponents reduce
to 0.5, which suggests that the long-term correlation in the system is due to interdependence in the activity
(named `true correlation' in Ref~\cite{Rybski2012}).
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{timeInter_all}
\caption{\label{fig:timeInter_all}(Color online) Inter-event time distribution at whole community level.
A power law with exponential cut-off is observed in Movielens, while an approximate power law is observed in Netflix.
This result shows that the burstiness is generic to the system.}
\end{figure*}
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{DFA_all}
\caption{\label{fig:DFA_all} (Color online)
The results of 1-order and 2-order DFA for Movielens and Netflix at whole community level.
The Hurst exponents of the original time series of events, obtained via the least-squares estimation
method, are 0.9 and 1.0 in Movielens and Netflix, respectively. When they are randomly shuffled,
the Hurst exponents approximately reduce to 0.5. This result demonstrates that strong
long-term correlation exists in Movielens and Netflix. Note that the solid and dashed lines
respectively indicate the original time series of events and the shuffled ones.}
\end{figure*}
\subsection{Multifractality of whole community activity}
The long-term correlation shown in individual and whole community activity implies the existence of fractality,
but few works have further analyzed the category of the fractality and its origin. The fluctuation of the DFA results
at the whole community level in double logarithmic coordinates implies potential multifractality
(see in Fig.~\ref{fig:DFA_all}). Inspired by~\cite{Kantelhardt2002}, we introduce MFDFA to analyze these data sets
and reveal the category of fractality (i.e., monofractality or multifractality).
Following the description of the method, using 1-order MFDFA, we fix a certain value of $q$ and fit $F_q(s)$
versus $s$ in double logarithmic coordinates with the least-squares estimation method to
obtain the value of the generalized Hurst exponent $H(q)$. Herein, we set $q$ in the interval $(0, 10]$ with
step length $0.1$. Figures~\ref{fig:MFDFA_real}(a) and (b) show $H(q)$ as a function of $q$ via
1-order MFDFA for Movielens and Netflix, respectively. It can be found that both for Movielens and
Netflix $H(q)$ decreases as $q$ increases; that is, the dependence of $H(q)$ on $q$ suggests the
multifractality of whole community activity.
To figure out the origin of such multifractality, we randomly shuffle these time series of events at the whole community level
and apply 1-order MFDFA once again to the shuffled ones. As shown in Fig. \ref{fig:MFDFA_real}(c) and (d),
the values of $H(q)$ for both Movielens and Netflix are obviously reduced in comparison with those of the original series;
however, the remaining dependence between $H(q)$ and $q$ still shows the existence of multifractality.
More legible results describing the extent of multifractality for Movielens and Netflix are
given by the singularity spectrum $f(\alpha)$, as shown in Fig.~\ref{fig:MFDFA_real}(e) and (f).
It can be found that the horizontal spans of $f(\alpha)$ for the original and shuffled
time series of events are almost the same, but large changes happen to the values of $\alpha$.
This confirms the results on the dependence between $H(q)$ and $q$, and also suggests
that the multifractality of whole community activity is not solely induced by the long-term correlation.
\begin{figure*}[htb!]\centering
\includegraphics[width=\textwidth]{MFDFA_real}
\caption{\label{fig:MFDFA_real}(Color online) Relation between $H(q)$ and $q$ derived from 1-order MFDFA (a)-(d) and
the corresponding singularity spectra (e) and (f), where (a) and (b) are obtained from the original time series
while (c) and (d) are obtained from the shuffled ones. Though the multifractality persists, significant changes
happen to the values of $H$ and $\alpha$. This result reveals the existence of multifractality for Netflix and Movielens
and its formation due to the dual effect of long-term correlation and broad PDF.}
\end{figure*}
So far, we have analyzed the multifractality of human online reviewing activity for Netflix and Movielens, and
argue that it is induced by the dual effect of the long-term correlation and the broad PDF. To verify this hypothesis,
we use three types of synthetic time series in analogy to the real ones and analyze their multifractality.
More concretely, the first type is a random series obeying a power-law distribution ($y \sim x^{-2}$),
the second one is a monofractal series with strong long-term correlation ($H=0.9$), and the third one
is their combination. Their constructions can be found in the Appendix. We also obtain the corresponding
shuffled time series. Then, their multifractality is derived from MFDFA, as shown in
Fig. \ref{fig:MFDFA_synthesis}. For the first type of time series in Fig. \ref{fig:MFDFA_synthesis}(a)
and (d), the remarkable dependence of $H(q)$ on $q$ (or the broad singularity spectrum) shows that
its multifractality is dominated by the power-law distribution, and the overlap of results between the original
and shuffled time series is due to the absence of long-term correlation.
For the second type of time series in Fig. \ref{fig:MFDFA_synthesis}(b) and (e),
we find an approximate independence of $H(q)$ from $q$ (or a narrow singularity spectrum),
which reveals monofractality. The long-term correlation only affects $H(q)$,
which changes approximately from 0.9 to 0.5 upon shuffling. Finally, for the third type of time series in
Fig. \ref{fig:MFDFA_synthesis}(c) and (f), there exist results similar to the empirical findings,
as suggested by the significant horizontal span of the singularity spectrum and the change of $H(q)$.
Through the aforementioned analysis, we can claim that the multifractality of human online reviewing activity
results from the broad PDF and long-term correlation (a more detailed discussion can be found in the Supplementary Materials).
\begin{figure*}[htb!]\centering
\includegraphics[width=0.8\textwidth]{MFDFA_synthesis}
\caption{\label{fig:MFDFA_synthesis}Relation between $H(q)$ and $q$ obtained from 1-order MFDFA (a)-(c)
and the corresponding singularity spectra (d)-(f) of three types of synthetic time series, where pink squares
and blue circles respectively indicate the results of the original and shuffled time series. Note that
the first column shows a synthetic time series obeying a power-law distribution $y \sim x^{-2}$,
the second one describes a synthetic time series whose Hurst exponent is 0.9, and the third one
represents a synthetic time series that combines the properties of the former two.
Through carefully analyzing them, we find that only the third time series behaves similarly
to the empirical findings.}
\end{figure*}
\section{Conclusion}
In this work, we have analyzed data sets of human online reviewing activity from
Netflix and Movielens at the individual and whole community levels. At the individual
level, we find a fat-tailed inter-event time distribution and its relation with the
discovered long-term correlation, but we cannot obtain exact analytical results as in \cite{Rybski2012},
because these inter-event time distributions restricted to activity level
do not strictly follow a power-law behavior. Meanwhile, at the whole community level,
we also find several properties similar to those at the individual level, but the long-term
correlation is due to interdependence in the whole community activity. Furthermore, these long-term correlations,
characterized by the Hurst exponent derived from DFA, potentially imply the existence of
fractality in human online reviewing activity.
To further uncover the category of such fractality and its origin, we introduce MFDFA and
find the multifractality of whole community activity. We put forward the hypothesis that such multifractality
results from the combined impact of a broad PDF and long-term correlation. Then,
it is verified by analyzing three types of synthetic time series having one or more properties of
the real one. Thus, we conclude that a dual-induced multifractality exists in human online activity.
Nevertheless, it should not be ignored that an appropriate model is still lacking to explain the mechanism
that reproduces such time series. We hope that future work will solve this problem.
\section*{Acknowledgments}
We thank the financial support from the Major State Basic Research Development Program of China (973 Program) (Grant No. 2012CB725400) and
the National Natural Science Foundation of China (Grant Nos. 71101009, 71131001, 91324002).
\section*{Appendix}
We present the construction of the three types of synthetic time series. Concretely speaking,
we first synthesize a random time series $x(t)$ that obeys a power-law distribution $p(x)= \beta x^{-(1+\beta)}$, $x\geq 1$.
By inverse transform sampling, it can be generated as follows:
\begin{equation}
x(t) = r(t)^{-\frac{1}{\beta}},
\end{equation}
where $r(t)$ is a time series sampled from a uniform distribution $U(0, 1)$.
In our analysis, we set $\beta=1$, so that $x(t)$ obeys the power-law distribution $p(x) = x^{-2}$.
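As an illustration (a sketch not taken from the paper; the sample size and seed are arbitrary choices), the inverse-transform rule $x = r^{-1/\beta}$ with $r\sim U(0,1)$ can be implemented directly:

```python
import random

def power_law_series(n, beta=1.0, seed=0):
    """Sample n values with tail P(X > x) = x**(-beta) for x >= 1,
    i.e. density p(x) = beta * x**(-(1 + beta)), via inverse transform
    sampling: x = r**(-1/beta) with r uniform on (0, 1]."""
    rng = random.Random(seed)
    # 1 - rng.random() lies in (0, 1], avoiding division by zero.
    return [(1.0 - rng.random()) ** (-1.0 / beta) for _ in range(n)]

# beta = 1 gives the p(x) = x^(-2) series used in the text.
series = power_law_series(10000, beta=1.0)
```

For $\beta=1$ one has $P(X>10)=0.1$, which gives a quick empirical check of the tail.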
Then, we apply the FFM method proposed in \cite{Makse1996} to generate a monofractal time series with
long-term correlation; the procedure is briefly summarized as follows:
i) Generate a one-dimensional random time series $U_i$ following a Gaussian distribution, and derive its Fourier transform coefficients $U_q$.
ii) Obtain $S_q$ from the Fourier transform of $C_l$, where $C_l= \langle U_i U_{i+l}\rangle= (1+l^2)^{-\gamma/2}$.
iii) Calculate $N_q=[S_q]^{1/2}\, U_q$.
iv) Derive the desired time series $N_r$ via the inverse Fourier transform of $N_q$. Herein, we shift $N_r$ via $N_r \mapsto N_r- \min(N_r)+1$.
At last, we combine these two constructions to synthesize the third time series,
\begin{equation}
X(t)=N_r(t)^{-\frac{1}{\beta}},
\end{equation}
where $N_r$ is the time series with long-term correlation.
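The whole construction can be sketched in pure Python (our own sketch: naive $O(n^2)$ DFTs keep the snippet self-contained; the length $n$, correlation exponent $\gamma$, and seed are arbitrary, and the spectrum is clamped at zero to absorb small negative values caused by the circular wrap-around of $C_l$):

```python
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * q * k / n) for k in range(n))
            for q in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[q] * cmath.exp(2j * math.pi * q * k / n) for q in range(n)) / n
            for k in range(n)]

def ffm_series(n=128, gamma=0.2, seed=1):
    """Fourier filtering method: Gaussian white noise is filtered so that
    its correlations follow C_l = (1 + l^2)^(-gamma/2)."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0) for _ in range(n)]                  # step i
    U = dft(u)
    C = [(1.0 + min(l, n - l) ** 2) ** (-gamma / 2.0) for l in range(n)]
    S = [max(s.real, 0.0) for s in dft(C)]                       # step ii
    N = [math.sqrt(S[q]) * U[q] for q in range(n)]               # step iii
    nr = [z.real for z in idft(N)]                               # step iv
    m = min(nr)
    return [v - m + 1.0 for v in nr]                             # shift: min = 1

nr = ffm_series()                      # long-range correlated series, nr >= 1
combined = [v ** (-1.0) for v in nr]   # third series, beta = 1
```

The last line applies the same inverse-power transform as for the first series, so the combined series inherits both a broad distribution and long-term correlation.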
\section*{References}
\providecommand{\newblock}{}
\section{Introduction}
The (chordal) Schramm-Loewner evolution with
parameter $\kappa >0$ ($SLE_\kappa$) is a measure on
curves connecting two distinct boundary points $z,w$ of a simply
connected domain $D$. As originally defined by Schramm \cite{Oded}, this
is a probability measure on paths. If $\kappa \leq 4$, the measure is supported on simple
curves that do not touch the boundary.
Although Schramm \cite{Oded}
originally defined $SLE_\kappa$ as a probability measure,
if $\kappa \leq 4$ and $z,w$ are locally
analytic boundary points, it is natural to consider $SLE_\kappa$ as a
finite
measure $\mu(z,w)$ with partition function, that is,
as a measure with total mass $\Psi_D(z,w)=H_D(z,w)^b$.
Here $H$ denotes
the boundary Poisson kernel normalized so that $H_{\mathbb H}(0,x) = x^{-2}$
and $b = (6-\kappa)/2\kappa$ is the boundary scaling
exponent. If $f: D \rightarrow f(D)$
is a conformal transformation, then
\[ H_D(z,w) = |f'(z)| \, |f'(w)| \, H_{f(D)}(f(z),f(w)).\]
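For instance (a numerical sanity check, not part of the argument), with $D=f(D)={\mathbb H}$ and the M\"obius map $f(z)=z/(z+1)$, which preserves ${\mathbb H}$, the covariance rule can be verified directly for real boundary points:

```python
def H_halfplane(x, y):
    # Boundary Poisson kernel of the upper half-plane, normalized so that
    # H(0, x) = x**(-2).
    return (y - x) ** (-2)

def f(z):
    # A Mobius transformation preserving the upper half-plane.
    return z / (z + 1.0)

def f_prime(z):
    return 1.0 / (z + 1.0) ** 2

x, y = 0.3, 2.0
lhs = H_halfplane(x, y)
rhs = abs(f_prime(x)) * abs(f_prime(y)) * H_halfplane(f(x), f(y))
# lhs and rhs agree, illustrating H_D(z,w) = |f'(z)| |f'(w)| H_{f(D)}(f(z),f(w)).
```

Here the identity is exact: $f(y)-f(x) = (y-x)/[(x+1)(y+1)]$, so the derivative factors cancel the distortion of the interval.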
There are several reasons for considering $SLE_\kappa$ as a measure with a total mass. First, $SLE_\kappa$ is known to be the scaling limit of various two-dimensional discrete models that are considered as measures with partition functions, and hence
it is natural to consider the (appropriately normalized)
partition function in the scaling limit. Second, the ``restriction property'' or ``boundary
perturbation'' can be described more naturally for the
nonprobability measures; see \eqref{apr27.1} below. This description leads to one
way to define $SLE_\kappa$ in multiply connected domains or, as is
important for this paper, for multiple $SLE_\kappa$ in a simply connected
domain. See \cite{Parkcity, Annulus} for more information.
We write $\mu^\#_D(z,w) = \mu_D(z,w)/
\Psi_D(z,w)$ for the probability measure which is well
defined even for rough boundaries.
The definition of the measure on multiple $SLE_\kappa$ paths immediately
gives a partition function defined as the total mass
of the measure. The measure on multiple $SLE_\kappa$
paths \[ \bgamma = (\gamma^1,\ldots,
\gamma^n)\] has been constructed in \cite{Dub,KL,Parkcity}. Even though the definition in \cite{KL} is given for the so-called ``rainbow'' arrangement of the boundary points, it can be easily extended to the other arrangements \cite{Dub, Stat}.
One can see that, unlike for the $SLE_\kappa$ measure on single curves, conformal invariance and the domain Markov property do not uniquely specify the measure when $n\geq 2$. The definition makes the measure unique
by requiring it to satisfy the restriction property, which is explained in Section \ref{defs}.
Study of the multiple $SLE_\kappa$ measure involves characterizing the partition function. For $n=2$, the partition function is explicitly given in terms of the hypergeometric function.
For $n\geq 3$, the goal is to characterize the partition function by a particular second-order PDE.
However, it does not directly follow from the definition that the partition function is $C^2$. There are two main approaches to address this problem.
One approach is to show that the PDE system has a solution and use it to describe the partition function.
In \cite{Dub}, it is shown that a family of integrals taken over a specific set of cycles satisfies the required PDE system.
In \cite{Eve}, conformal field theory and partial differential equation techniques such as H{\"o}rmander's theorem are used to show that the partition function satisfies the PDE system.
The other approach, which is the one we take in this work, is to directly prove that the partition function is $C^2$. Then It\^{o}'s formula can be used to show that the partition function satisfies the PDEs.
The basic idea
of our proof
is to interchange derivatives and expectations in expressions for the
partition function. This interchange needs justification and we prove
an estimate about $SLE_\kappa$ to justify this.
Here we summarize the paper. We finish this introduction by
reviewing examples of partition functions for $SLE_\kappa$.
Definitions and properties of multiple $SLE_\kappa$ and the outline of the proof are given in Section \ref{defs}.
Section \ref{japansec} includes an estimate for $SLE_\kappa$ using techniques similar to the ones in \cite{Japan}.
The proof of Lemma \ref{mar19.lemma1}, which gives estimates for derivatives of the Poisson kernel, is given in Section \ref{lemmasec}.
\subsection{Examples}\label{exmaples}
\begin{itemize}
\item \textbf{$SLE_\kappa$ in a subset of ${\mathbb H}$.} Let $\kappa\leq 4$ and suppose $D\subset {\mathbb H}$ is a simply connected domain such that $K={\mathbb H}\setminus D$ is bounded and ${\rm dist}(0,K)>0$. Also, assume that $\gamma$ is parameterized with half-plane capacity. By the restriction property we have
\begin{equation} \label{apr27.1}
\frac{d\mu_D(0,\infty)}{d\mu_{\mathbb H}(0,\infty)}(\gamma)=1\{\gamma\cap K=\emptyset\}\exp\left\{\frac{\cent}{2}m_{\mathbb H}(\gamma,K)\right\},
\end{equation}
where $m_{\mathbb H}(\gamma,K)$ denotes the Brownian loop measure of the loops in ${\mathbb H}$ that intersect both $\gamma$ and $K$, and
\[
\cent =\frac{(6-\kappa)(3\kappa-8)}{2\kappa}
\]
is the \emph{central charge}.
We normalize the partition functions, so that $\Psi_{\mathbb H}(0,\infty)=1$. For an initial segment of the curve $\gamma_t$, let $g_t:{\mathbb H}\setminus\gamma_t\to{\mathbb H}$ be the unique conformal transformation with $g_t(z)=z+\text{o}(1)$ as $z\to\infty$. Then
\[
\partial_tg_t(z)=\frac{a}{g_t(z)-U_t},
\]
where $a=2/\kappa$ and $U_t$ is a standard Brownian motion. Suppose $\gamma_t\cap K=\emptyset$ and let $D_t=g_t(D\setminus\gamma_t)$. One can see that
\[
m_{\mathbb H}(\gamma_t,K)=-\frac{a}{6}\int_0^tS\Phi_s(U_s)ds,
\]
where $S$ denotes the Schwarzian derivative and $\Phi_s(U_s)=H_{D_s}(U_s,\infty)$. It follows from conditioning on $\gamma_t$ that
\[
M_t=\exp\left\{\frac{\cent}{2}m_{\mathbb H}(\gamma_t,K)\right\}\Psi_{D_t}(U_t,\infty)
\]
is a martingale. Assume for the moment that the function $V(t,x)=\Psi_{D_t}(x,\infty)$ is $C^2$. Then we can apply It\^{o}'s formula and we get
\[
-\frac{a\cent}{12}V(t,U_t)\,S\Phi_t(U_t)+{\partial_t V(t,U_t)}+\frac{1}{2}\partial_{xx} V(t,U_t)=0.
\]
Straightforward calculation shows that $V(t,x)=H_{D_t}(x,\infty)^b$ is $C^2$ and satisfies this PDE. Here, $b$ is the \emph{boundary scaling exponent}
\[
b=\frac{6-\kappa}{2\kappa}.
\]
\item \textbf{Other examples.} Similar ideas were used in \cite{KL} to describe the partition function of two $SLE_\kappa$ curves with a PDE.
Differentiability of the partition function was justified using the explicit form of the solution in terms of the hypergeometric function.
The PDE system in \cite{Annulus} characterizes the partition function of the annulus $SLE_\kappa$.
That PDE is more complicated and one cannot find an explicit form for the solution.
In fact, it is not easy to even show that the PDE has a solution. Instead, it was directly proved that the partition function is $C^2$ and It\^{o}'s formula was used to derive the PDE.
\end{itemize}
\section{Definitions and Preliminaries}\label{defs}
We will consider the multiple $SLE_\kappa$ measure
only for $\kappa \leq 4$
on simply connected domains $D$ and distinct
locally analytic boundary points ${\bf x} = (x_1,\ldots,x_n),
{\bf y} = (y_1,\ldots,y_n)$.
The measure is supported on $n$-tuples of curves
\[ \bgamma = (\gamma^1,\ldots,
\gamma^n),\]
where $\gamma^j$ is a curve connecting $x_j$ to $y_j$ in $D$.
If $n = 1$, then $\mu_D(x_1,y_1)$ is $SLE_\kappa$ from $x_1$ to $y_1$
in $D$ with total mass $H_D(x_1,y_1)^b
$ whose corresponding probability measure
$\mu^\#_D(x_1,y_1) = \mu_D(x_1,y_1)/H_D(x_1,y_1)^b $ is
(a time change of)
$SLE_\kappa$ from $x_1$ to $y_1$ as defined by
Schramm.
\begin{definition} If $\kappa \leq 4$ and
$n \geq 1$,
then $\mu_D({\bf x},{\bf y})$ is the measure absolutely continuous with respect
to $\mu_D(x_1,y_1) \times \cdots \times \mu_D(x_n,y_n)$ with
Radon-Nikodym derivative
\[ Y(\bgamma) := I(\bgamma) \, \exp\left\{
\frac \cent 2\sum_{j=2}^n m\left[K_j(\bgamma) \right] \right\}. \]
Here $\cent = (6-\kappa)(3\kappa - 8)/2\kappa $ is the
central charge, $I(\bgamma)$ is the indicator function of the event
\[ \{\gamma^j \cap \gamma^k = \emptyset , 1 \leq j < k \leq n\}, \]
and $m\left[K_j(\bgamma)\right]$ denotes the Brownian loop measure of loops
that intersect at least $j$ of the paths $\gamma^1,\ldots,\gamma^n$.
\end{definition}
Brownian loop measure is a measure on (continuous) curves $\eta:[0,t_\eta]\to\mathbb{C}$ with $\eta(0)=\eta(t_\eta)$.
Let $\nu^\#(0,0;1)$ be the law of the Brownian bridge starting from 0 and returning to 0 at time 1.
Brownian loop measure can be considered as the measure
\[
m=\text{area}\times\left(\frac{1}{2\pi t^2}dt\right)\times\nu^\#(0,0;1)
\]
on the triplets $(z,t_\eta,\tilde{\eta})$, where $\tilde\eta(t)=t_\eta^{-1/2}\left[\eta(t\,t_\eta)-z\right]$ for $t\in[0,1]$.
For a domain $D\subset\mathbb{C}$, we denote the restriction of $m$ to the loops $\eta\subset D$ by $m_D$.
One important property of $m_D$ is conformal invariance. More precisely, if $f:D\to f(D)$ is a conformal transformation, then
\[
f\circ m_D= m_{f(D)},
\]
where $f\circ m_D$ is the pushforward measure.
Note that if $\sigma$ is a permutation of $\{1,\ldots,n\}$ and
$\bgamma_\sigma = (\gamma^{\sigma(1)},\ldots,\gamma^{\sigma(n)})$,
then $Y(\bgamma) = Y(\bgamma_\sigma)$. The partition function
is the total mass of this measure
\[ \Psi_D({\bf x},{\bf y}) = \| \mu_D({\bf x},{\bf y})\|. \]
We also write
\[ \tilde \Psi_D({\bf x},{\bf y}) = \frac{\Psi_D({\bf x},{\bf y})}{\prod_{j=1}^n H_D(x_j,y_j)^b},\]
which can also be written as
\[ \tilde \Psi_D({\bf x},{\bf y}) = {\mathbb E}[Y] , \]
where the expectation is with respect to the probability measure
$\mu_D^\#(x_1,y_1) \times \cdots \times \mu_D^\#(x_n,y_n)$.
Note that $\tilde \Psi_D({\bf x},{\bf y})$ is a conformal invariant,
\[ f\circ \tilde \Psi_D({\bf x},{\bf y}) = \tilde \Psi_{f(D)}
(f({\bf x}),f({\bf y})), \]
and hence is well defined even if the boundaries are rough.
Since $SLE_\kappa$ is
reversible \cite{Zhan}, interchanging $x_j$ and $y_j$ does not change
the value.
To compute the partition function we use an alternative
description of the measure $\mu_D({\bf x},{\bf y})$. We will give
a recursive definition.
\begin{itemize}
\item For $n=1$, $\mu_D(x_1,y_1)$ is the usual $SLE_\kappa$
measure with total mass $H_D(x_1,y_1)^b$.
\item Suppose the measure has been defined for all $n$-tuples
of paths. Suppose ${\bf x} = (x',x_{n+1}), {\bf y} = (y',y_{n+1})$
are given and write an $(n+1)$-tuple of paths as
$\bgamma = (\bgamma',\gamma^{(n+1)})$.
\begin{itemize}
\item The marginal
measure on $\bgamma'$ induced by
$\mu_D({\bf x},{\bf y})$ is absolutely continuous with
respect to $\mu_D({\bf x}',{\bf y}')$ with Radon-Nikodym derivative
$H_{\tilde D}(x_{n+1},y_{n+1})^b$. Here $\tilde D$ is the
component of $D \setminus \bgamma'$ containing
$x_{n+1},y_{n+1}$ on its boundary. (If there is no such
component, then we set $H_{\tilde D}(x_{n+1},y_{n+1}) = 0$
and $\mu_D({\bf x},{\bf y})$ is the zero measure.)
\item Given $\bgamma'$, the curve $\gamma^{n+1}$
is chosen using the probability distribution
$\mu^\#_{\tilde D}(x_{n+1},y_{n+1})$.
\end{itemize}
\end{itemize}
One could try to use this description of the measure as the
definition, but it is not obvious that it is consistent. However, one can see that the first definition satisfies this property using the following lemma.
\begin{lemma}
Let $\bgamma$ denote an $(n+1)$-tuple of paths which we write as
$\bgamma = (\bgamma',\gamma^{(n+1)})$, and let $\tilde{D}$ be the connected component of $D\setminus\bgamma'$ containing the end points of $\gamma^{(n+1)}$ on its boundary. Then
\[
\sum_{j=2}^{n+1} m\left[K_j(\bgamma)\right]=\sum_{j=2}^n m\left[K_j(\bgamma')\right]+m_D(\gamma^{(n+1)},\,D\setminus\tilde{D}).
\]
\end{lemma}
\begin{proof}
Let $K^1_j(\bgamma)$ denote the set of loops in $K_j(\bgamma)$ that intersect $\gamma^{(n+1)}$ and let $K^2_j(\bgamma)$ denote the set of loops that do not intersect $\gamma^{(n+1)}$. Then
\begin{equation}\label{eq_brwn1}
m\left[K^1_2(\bgamma)\right]=m_D(\gamma^{(n+1)},\,D\setminus\tilde{D}).
\end{equation}
Note that $K^1_j(\bgamma)$ is equivalent to the set of loops in $D$ that intersect $\gamma^{(n+1)}$ and at least $j-1$ paths of $\bgamma'$. Moreover, $K^2_j(\bgamma)$ is equivalent to the set of loops that intersect at least $j$ paths of $\bgamma'$, but do not intersect $\gamma^{(n+1)}$. Therefore,
\[
K_j(\bgamma')=K^1_{j+1}(\bgamma)\cup K^2_{j}(\bgamma).
\]
Now the result follows from this, the fact that $K^2_{n+1}(\bgamma)=\emptyset$ and \eqref{eq_brwn1}.
\end{proof}
We can also take the marginals in a different order. For
example, we could have defined the recursive step above as follows.
\begin{itemize}
\item The marginal measure on $\gamma^{n+1}$ induced
by $\mu_D({\bf x},{\bf y})$ is absolutely continuous with respect
to $\mu_D(x_{n+1},y_{n+1})$ with Radon-Nikodym
derivative $\Psi_{\tilde D}({\bf x}',{\bf y}')$ where
$\tilde D = D \setminus \gamma^{n+1}$. (It is possible that
$\tilde D$ has two separate components in which case we
multiply the partition functions on the two components.)
\end{itemize}
We will consider boundary points on the real line.
We write just $H,\Psi,\tilde \Psi,\mu,\mu^\#$ for $H_{\mathbb H},
\Psi_{\mathbb H}, $ $\tilde \Psi_{\mathbb H}, $ $\mu_{\mathbb H},$ $\mu^\#_{\mathbb H}$;
and note that
\[ \tilde \Psi({\bf x},{\bf y}) =
{\mathbb E}\left[ Y\right] = \Psi({\bf x},{\bf y}) \, \prod_{j=1}
^n |y_j - x_j|^{ 2b}, \]
where the expectation is with respect to the
probability measure
\[ \mu^\#(x_1,y_1) \times \cdots \times \mu^\#(x_n,y_n)
.\]
\begin{itemize}
\item If $n = 1$, then $Y \equiv 1$ and $\tilde \Psi({\bf x},{\bf y}) = 1$.
\item For $n = 2$ and $\bgamma = (\gamma^1,\gamma^2)$,
then
\[ {\mathbb E}[Y \mid \gamma^1] = \left[\frac{H_{D\setminus \gamma^1}
(x_2,y_2)}{H_D(x_2,y_2)} \right]^b.\]
The right-hand side is well defined even for non-smooth boundaries
provided that $\gamma^1$ stays a positive distance from $x_2,y_2$.
In particular,
\[ {\mathbb E}[Y] = {\mathbb E}\left[{\mathbb E}(Y\mid \gamma^1)\right]
= {\mathbb E}\left[\left(\frac{H_{D\setminus \gamma^1}
(x_2,y_2)}{H_D(x_2,y_2)} \right)^b\right] \leq 1.\]
If $8/3 < \kappa \leq 4$, then $\cent >0$ and $Y > 1$
on the event $I(\bgamma)$ so the inequality
${\mathbb E}[Y] \leq 1$ is not obvious.
\item More generally, if $\bgamma = (\bgamma',\gamma^{n+1})$,
\[ {\mathbb E}[Y \mid \bgamma']
= Y(\bgamma') \, \left[\frac{H_{D\setminus \bgamma'}
(x_{n+1},y_{n+1})}{H_D(x_{n+1},y_{n+1})} \right]^b \leq Y(\bgamma').\]
Using this we see that $\tilde \Psi_D({\bf x},{\bf y}) \leq 1$.
\item For $n = 2$, if $x_1 = 0,
y_1 = \infty, y_2 = 1$ and $x_2 = x$ with $0 < x <1$, we have (see,
for example, \cite[(3.7)]{KL})
\begin{equation} \label{mar18.4}
\tilde \Psi({\bf x},{\bf y}) = \phi(x) := \frac{\Gamma(2a) \, \Gamma(6a-1)}
{\Gamma(4a) \, \Gamma(4a-1)} \, x^a\,
F(2a,1-2a,4a;x) ,
\end{equation}
where $F =$ $_2F_1$ denotes the hypergeometric function
and $a = 2/\kappa $. This is computed by
finding
\[ {\mathbb E}\left[ H_{{\mathbb H}\setminus \gamma^1}
(x,1)^b\right].\]
In fact, this calculation is valid for $\kappa < 8$ if it
is interpreted as
\[ {\mathbb E}\left[ H_{{\mathbb H}\setminus \gamma^1}
(x,1)^b; H_{{\mathbb H}\setminus \gamma^1}
(x,1) > 0\right].\]
\end{itemize}
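The function $\phi$ in \eqref{mar18.4} is easy to evaluate numerically; the following sketch (our own, with a plain power-series implementation of $_2F_1$ valid for $0\le x<1$) also exercises the special case $\kappa=2$, i.e. $a=1$, where the series terminates and one can check by hand that $\phi(x)=2x-x^2$:

```python
import math

def hyp2f1(alpha, beta, gamma_, x, terms=2000):
    """Gauss hypergeometric series F(alpha, beta; gamma; x) for 0 <= x < 1."""
    total, coef = 0.0, 1.0
    for n in range(terms):
        total += coef
        coef *= (alpha + n) * (beta + n) * x / ((gamma_ + n) * (n + 1))
    return total

def phi(x, kappa):
    # Equation (mar18.4): phi(x) = [Gamma(2a)Gamma(6a-1)/(Gamma(4a)Gamma(4a-1))]
    #                              * x^a * F(2a, 1-2a, 4a; x),  a = 2/kappa.
    a = 2.0 / kappa
    pref = (math.gamma(2 * a) * math.gamma(6 * a - 1)
            / (math.gamma(4 * a) * math.gamma(4 * a - 1)))
    return pref * x ** a * hyp2f1(2 * a, 1 - 2 * a, 4 * a, x)
```

For $\kappa=2$ the prefactor is $\Gamma(2)\Gamma(5)/(\Gamma(4)\Gamma(3))=2$ and $F(2,-1,4;x)=1-x/2$, giving $\phi(x)=2x-x^2$; for other $\kappa<8$ one observes $0<\phi<1$ with $\phi$ increasing on $(0,1)$.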
It will be useful to write
the conformal invariant \eqref{mar18.4}
in a different way. If $V_1,V_2$ are two arcs of
a domain $D$, let
\[ \exc_D(V_1,V_2) = \int_{V_1} \int_{V_2} \, H_D(z,w)
\, |dz| \, |dw|.\]
This is $\pi$ times the usual excursion measure between $V_1$
and $V_2$; the factor of $\pi$ comes from our choice
of Poisson
kernel.
Note that
\[ \exc_{\mathbb H}((-\infty,0], [x,1])
=\int_x^1 \int_{-\infty}^0 \frac{ dr\, ds}
{ (s-r)^{2}} = \int_x^1 \frac{ds}{s} =
\log(1/x).\]
Hence we can write \eqref{mar18.4} as
$ \phi\left(\exp\left\{-\exc_{\mathbb H}((-\infty,0], [x,1])
\right\} \right) .$
More generally,
if $x_1 < y_1 < x_2 < y_2$,
\[ \tilde \Psi({\bf x},{\bf y})
= \phi\left(\exp\left\{-\exc_{\mathbb H}([x_1,y_1], [x_2,y_2])
\right\} \right)
= \phi\left(\exp \left\{-\int_{x_1}^{y_1}
\int_{x_2}^{y_2} \frac{dr\, ds}
{ (s-r)^2}\right\} \right),\]
and if $D$ is a simply connected
subdomain of ${\mathbb H}$ containing $x_1,y_1,x_2,y_2$
on its boundary, then
\begin{equation} \label{pat.1}
\tilde \Psi_D({\bf x},{\bf y})
= \phi\left(\exp\left\{-\exc_D( [x_1,y_1], [x_2,y_2])
\right\} \right)
= \phi\left(\exp \left\{-\int_{x_1}^{y_1}
\int_{x_2}^{y_2} H_D(r,s) \,{dr\, ds}
\right\} \right).
\end{equation}
This expression is a little bulky but it allows for easy
differentiation with respect to $x_1,x_2,y_1,y_2$.
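As a concrete check of these excursion-measure computations (a numerical sketch; the endpoints and grid size are arbitrary choices), the integral over two disjoint boundary intervals can be evaluated by quadrature and compared with the closed form obtained by integrating $(s-r)^{-2}$ explicitly, namely $\log\frac{(x_2-x_1)(y_2-y_1)}{(x_2-y_1)(y_2-x_1)}$:

```python
import math

def excursion_measure(x1, y1, x2, y2, n=400):
    """Midpoint-rule approximation of
    E = int_{x1}^{y1} int_{x2}^{y2} (s - r)^(-2) ds dr
    for two disjoint boundary intervals [x1, y1] and [x2, y2]."""
    hr = (y1 - x1) / n
    hs = (y2 - x2) / n
    total = 0.0
    for i in range(n):
        r = x1 + (i + 0.5) * hr
        for j in range(n):
            s = x2 + (j + 0.5) * hs
            total += hr * hs / (s - r) ** 2
    return total

x1, y1, x2, y2 = 0.0, 1.0, 2.0, 4.0
E_num = excursion_measure(x1, y1, x2, y2)
E_exact = math.log((x2 - x1) * (y2 - y1) / ((x2 - y1) * (y2 - x1)))
```

Here $E_{\rm exact}=\log(3/2)$; the special case in the text, with intervals $(-\infty,0]$ and $[x,1]$, reduces in the same way to $\log(1/x)$.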
At this point we can state the main proposition.
\begin{proposition} \label{mainprop}
$\Psi$ and $\tilde \Psi$ are $C^2$ functions.
\end{proposition}
It clearly suffices to prove this for $\tilde \Psi$. The idea is
simple --- we will write the partition function as an expectation
and differentiate the expectation by interchanging the
expectation and the derivatives. This interchange requires
justification and this is the main work of this paper.
We will use the following fact which is an analogue of
derivative estimates for positive harmonic functions.
The proof is straightforward but we delay
it to Section \ref{lemmasec}.
\begin{lemma} \label{mar19.lemma1}
There exists $c < \infty$ such that for every $x_1 < y_1 < x_2 < y_2$ the following holds.
\begin{itemize}
\item Suppose $D \subset {\mathbb H} $ is
a simply connected domain whose boundary contains an open
real neighborhood of $[x_1,y_1]$
and suppose that
\[ \delta := \min\left\{\lvert x_1-y_1\rvert,{\rm dist}\left[\{x_1,y_1\}, {\mathbb H} \setminus D
\right]\right\} >0.\]
Then if $z_1,z_2 \in \{x_1,y_1\},$
\[ |\partial_{z_1} H_D(x_1,y_1)|
\leq c\, \delta^{-1} \, H_D(x_1,y_1).\]
\[ |\partial_{z_1z_2} H_D(x_1,y_1)|
\leq c\, \delta^{-2} \, H_D(x_1,y_1).\]
\item
Suppose $D \subset {\mathbb H} $ is
a simply connected domain whose boundary contains
open real neighborhoods of $[x_1,y_1]$
and $[x_2,y_2]$ and suppose that
\[
\delta:=\min\Big\{\min\big\{|w_1-w_2|:\,w_1\neq w_2,\ w_1,w_2\in\{x_1,y_1,x_2,y_2\}\big\},\,\,{\rm dist}\left[\{x_1,y_1,x_2,y_2\}, {\mathbb H} \setminus D\right]
\Big\}.
\]
Then if $z_1 \in \{x_1,y_1\}, z_2 \in \{x_2,y_2\}$,
\[ |\partial_{z_1z_2} \tilde \Psi_D({\bf x},{\bf y})|
\leq c\, \delta^{-2} \, \tilde \Psi_D({\bf x},{\bf y}).\]
\end{itemize}
Moreover, the constant can be chosen uniformly in neighborhoods of $x_1,y_1,x_2,y_2$.
\end{lemma}
We will also need to show that expectations do not blow
up when paths get close to starting points. We
prove this lemma in Section \ref{japansec}.
Let
\[ \Delta_{j,k}(\bgamma) =
{\rm dist}\left\{
\{x_k,y_k\}, \gamma^j
\right\},\]
\[ \Delta (\bgamma) = \min_{j \neq k}
\Delta_{j,k}(\bgamma).\]
\begin{lemma} If $\kappa < 4$, then
for every $n$ and every $({\bf x},{\bf y})$, there exists
$c < \infty$ such that for all $\epsilon > 0$,
\[ {\mathbb E}\left[ Y; \Delta \leq \epsilon
\right] \leq c \, \epsilon^{\frac{12}{\kappa}-1}. \]
In particular,
\[ {\mathbb E}\left[Y \, \Delta^{-2} \right]
\leq \sum_{m=-\infty}^\infty 2^{-2m} \, {\mathbb E}\left [Y;
2^{m} \leq \Delta < 2^{m+1} \right]< \infty . \]
\end{lemma}
\begin{proof} It suffices to show that for each $j,k$,
\[ {\mathbb E}\left[ Y; \Delta_{j,k} \leq \epsilon\right] \leq c \,
\epsilon^{\frac{12}{\kappa}-1}, \]
and by symmetry we may assume $j=1,k=2$. If
we write $\bgamma = (\gamma^1,\gamma^2, \bgamma')$, then
the event $\{\Delta_{1,2} \leq \epsilon\}$ is measurable
with respect to $(\gamma^1,\gamma^2)$ and
\[ {\mathbb E}[Y \mid \gamma^1, \gamma^2] \leq Y(\gamma^1,\gamma^2).\]
Hence it suffices to prove the result when $n=2$. This
will be done in Section \ref{japansec}; in that section
we consider $\kappa < 8$.
\end{proof}
For $n=1,2$,
it is clear that $\tilde \Psi$ is $C^\infty$ from
the exact expression, so we
will assume that $n \geq 3$. By invariance under
permutation of indices, it suffices to consider
second order derivatives involving only $x_1,x_2,y_1,y_2$.
We will assume $x_j < y_j$ for $j = 1,2$
and $x_1 < x_2$ (otherwise we just
relabel the vertices). The configuration $x_1 < x_2 < y_1 < y_2$
is impossible for topological reasons. If $x_1 < x_2 < y_2 < y_1$,
we can find a M\"obius transformation taking a point $y' \in (y_2,y_1)$
to $\infty$ and then the images would satisfy
$y_1' < x_1' < x_2' < y_2'$, and this reduces to the case above. So
we may assume that
\[ x_1 < y_1 < x_2 < y_2.\]
\noindent {\bf Case 1:} Derivatives involving only $x_j,y_j$
for some $j$.
\medskip
We assume $j=1$. We will write
${\bf x} = (x,{\bf x}'), {\bf y} = (y,{\bf y}'), \bgamma = (\gamma^1,\bgamma')$, and
let $D$ be the connected component
of ${\mathbb H} \setminus \bgamma'$ containing $x,y$
on the boundary. Then
\[ {\mathbb E}[ Y \mid \bgamma']
= Y(\bgamma') \, \left[\frac{H_{D}(x,y)}
{
H(x,y)}\right]^b
= Y(\bgamma') \, Q_D(x,y)^b,\]
where
$Q_D(x,y)$ is the probability that a (Brownian)
excursion in ${\mathbb H}$
from $x$ to $y$ stays in $D$.
Hence
\[ \tilde \Psi({\bf x},{\bf y}) =
{\mathbb E} \left[ Y(\bgamma') \, Q_D(x,y)^b \right]
.\]
Let $\delta = \delta(\bgamma') = {\rm dist}\{\{x,y\},\bgamma'\}.$
Using Lemma \ref{mar19.lemma1}, we see that
\[ \left|\partial_{x} [Q_D(x,y)^b] \right|
\leq c \, \delta^{-1} \, Q_D(x,y)^b , \]
\[ \left|\partial_{xy} [Q_D(x,y)^b]\right|
+ \left|\partial_{xx} [Q_D(x,y)^b]\right|
\leq c \, \delta^{-2} \, Q_D(x,y)^b.\]
(Here $c$ may depend on $x,y$ but not on $D$).
Hence
\[ {\mathbb E}\left[ Y(\bgamma') \, \left |\partial_x [Q_D(x,y)^b ]
\right| \right] \leq c\,{\mathbb E}\left[ Y(\bgamma') \,
\delta(\bgamma')^{-1}\, Q_D(x,y)^b\right] ,\]
and if $z $ = $x$ or $y$,
\[ {\mathbb E}\left[ Y(\bgamma') \, \left |\partial_{xz} [Q_D(x,y)^b ]
\right| \right] \leq c\,{\mathbb E}\left[ Y(\bgamma') \,
\delta(\bgamma')^{-2}\, Q_D(x,y)^b\right] .\]
Since
\[
{\mathbb E}\left[ Y(\bgamma') \,
\delta(\bgamma')^{-2}\, Q_D(x,y)^b\right] =
{\mathbb E}\left[{\mathbb E}\left(Y \, \delta^{-2} \mid \bgamma'\right)\right]
= {\mathbb E}[Y\, \delta^{-2}] \leq {\mathbb E}[Y \, \Delta^{-2} ] < \infty, \]
the interchange of expectation and derivative is valid,
\[ \partial_x \tilde \Psi ({\bf x},{\bf y}) =
{\mathbb E}\left[Y(\bgamma') \, \partial_x[Q_D(x,y)^b] \right], \;\;\;\;
\partial_{xz}
\tilde \Psi ({\bf x},{\bf y}) =
{\mathbb E}\left[Y(\bgamma') \, \partial_{xz}[Q_D(x,y)^b] \right]. \]
\medskip
\noindent {\bf Case 2:} The partial $\partial_{z_1z_2}$
where $z_1 \in \{x_j,y_j\}, z_2 \in \{x_k,y_k\}$
with $j\neq k$.
\medskip
We assume $j=1,k=2$.
We will write
${\bf x} = (x_1,x_2,{\bf x}'), {\bf y} = (y_1,y_2,{\bf y}'), \bgamma = (\gamma^1,\gamma^2,
\bgamma')$. We will write $D' = D \setminus \bgamma'$
and let $D_1,D_2$ be the connected components of $D'$ containing
$\{x_1,y_1\}$ and $\{x_2,y_2\}$ on the boundary. It is possible
that $D_1 = D_2$ or $D_1 \neq D_2$.
\begin{itemize}
\item If $D_1 \neq D_2$, then
\[ {\mathbb E}[ Y \mid \bgamma']
= Y(\bgamma') \, Q_{D_1}(x_1,y_1)
^b \, Q_{D_2}(x_2,y_2)
^b
.\]
\item If $D_1 = D_2=D$, then
\[ {\mathbb E}[ Y \mid \bgamma']
= Y(\bgamma') \, Q_{D_1}(x_1,y_1)
^b \, Q_{D_2}(x_2,y_2)
^b \,\tilde \Psi_D((x_1,x_2),
(y_1,y_2)), \]
where $\tilde \Psi_D$ is defined as in \eqref{pat.1}.
\end{itemize}
In either case we have written
\[ {\mathbb E}[ Y \mid \bgamma']
= Y(\bgamma') \, \Phi(\z;\bgamma'),\]
where $\z = (x_1,y_1,x_2,y_2)$ and we can use
Lemma \ref{mar19.lemma1} to see that
\[ \left| \partial_{z_1z_2}\Phi(\z;\bgamma')
\right| \leq c \, \Delta(\bgamma',\z)^{-2} \, \Phi(\z;
\bgamma'), \;\;\;\; \Delta(\bgamma',\z)
= {\rm dist}\{\bgamma',\{x_1,y_1,x_2,y_2\}\}.\]
As in the previous case, we can now interchange
the derivatives and the expectation.
\section{Estimate} \label{japansec}
In this section we will derive an estimate for $SLE_\kappa, \kappa < 8$.
While the estimate is valid for all $\kappa < 8$, the result is only
strong enough to prove our main result for $\kappa < 4.$
We follow the ideas in \cite{Japan} where careful analysis was made
of the boundary exponent for $SLE$.
Let $g_t$ denote the usual conformal transformation associated to the $SLE_\kappa$ path $\gamma$ from $0$ to $\infty$
parametrized so that
\begin{equation} \label{mar18.1}
\partial_t g_t(z) = \frac{a}{g_t(z) - U_t} ,
\end{equation}
where $a = 2/\kappa$ and $U_t = - W_t$ is a standard Brownian motion.
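To illustrate the Loewner equation concretely (a sketch with arbitrary step size, not from the paper), one can integrate \eqref{mar18.1} numerically. With the deterministic driving $U_t\equiv 0$ the equation has the explicit solution $g_t(z)=\sqrt{z^2+2at}$, which the Euler scheme should reproduce; replacing the driving function by a sampled Brownian path gives the $SLE_\kappa$ maps.

```python
import math

def loewner_flow(z0, a, t_final, driving, n_steps=10000):
    """Euler scheme for the chordal Loewner ODE dg/dt = a / (g - U_t),
    started from g(0) = z0 (a point away from the curve)."""
    dt = t_final / n_steps
    g = z0
    for k in range(n_steps):
        g += dt * a / (g - driving(k * dt))
    return g

a = 0.5  # a = 2/kappa with kappa = 4
g_numeric = loewner_flow(2.0, a, 1.0, lambda t: 0.0)
g_exact = math.sqrt(2.0 ** 2 + 2 * a * 1.0)  # sqrt(z0^2 + 2 a t)
```

The same routine works for complex starting points `z0`, since the arithmetic carries over unchanged.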
Throughout, we assume that $\kappa < 8$, so that
$D= D_\infty = {\mathbb H} \setminus \gamma$ is a nonempty
set. If $0 < x < y < \infty,$ we let
\[ \Phi = \Phi(x,y) = \frac{H_D(x,y)}
{H_{\mathbb H}(x,y)}, \]
where $H$ denotes the boundary Poisson kernel. If
$x$ and $y$ are on the boundary of different components
of $D$ (which can only happen for $4 < \kappa < 8$), then
$H_D(x,y) = 0$. As usual, we let
\[ b = \frac{6-\kappa}{2\kappa} = \frac{3a-1}{2}.\]
As a slight abuse of notation, we will write $\Phi^b$ for
$\Phi^b \, 1\{\Phi > 0\}$ even if $b \leq 0$.
\begin{proposition} For every $\kappa < 8 $ and $\delta > 0$,
there exists $ 0 < c < \infty$ such that for all
$\delta \leq x < y \leq 1/\delta$ and all $0 < \epsilon
< (y-x)/10 $,
\[
{\mathbb E}\left[\Phi^b; {\rm dist}(\{x,y\},
\gamma) < \epsilon \right] \leq c \, \epsilon^{6a-1}.\]
\end{proposition}
It is already known that
\[
\Prob\left\{ {\rm dist}(\{x,y\},
\gamma) < \epsilon \right\} \asymp \epsilon^{4a-1},\]
and hence we can view this as the estimate
\[ {\mathbb E}\left[\Phi^b \mid {\rm dist}(\{x,y\},
\gamma) < \epsilon \right] \leq c \, \epsilon^{2a}.\]
Using reversibility \cite{MS,Zhan}
and scaling of $SLE_\kappa$ we can see that
to prove the proposition it suffices to show that for every $\delta >0$
there exists $c = c_\delta$ such that if $\delta \leq x < 1$,
\[
{\mathbb E}\left[\Phi^b; {\rm dist}(1,
\gamma) < \epsilon \right] \leq c \, \epsilon^{6a-1}.\]
This is the result we will prove.
\begin{proposition} If $\kappa < 8$, there exists $c < \infty$
such that if $\gamma$ is an $SLE_\kappa$ curve from $0$ to $\infty$,
$0 < x < 1$, $\Phi = \Phi(x,1)$, $0 < \epsilon \leq 1/2$,
\[ {\mathbb E}\left[\Phi^b ; {\rm dist}(\gamma,1) < \epsilon\, (1-x) \right]
\leq c \,x^a \, (1-x)^{4a-1}\, \epsilon^{6a - 1}.\]
\end{proposition}
We will relate the distance to the curve to a conformal radius. In
order to do this, we will need $1$ to be an interior point of the
domain. Let
$D_t^*$ be the unbounded component of
\[ K_t = \mathbb{C} \setminus \left[(-\infty,x] \cup \gamma_t \cup \{\bar z: z \in \gamma_t\}\right] ,\]
and let
$T = T_1 = \inf\{t: 1 \not\in D^*_t\}$. Then for $t < T$, the
distance from $1$ to $\partial D^*_t$ is the minimum of $1-x$ and
${\rm dist}(1,\gamma_t)$. In particular, if $t < T$ and
$\epsilon < 1-x$, then
${\rm dist}(\gamma_t,1) \leq \epsilon$ if and only if ${\rm dist}(1,\partial D_t^*)
< \epsilon$. We define $\Upsilon_t$
to be $[4(1-x)]^{-1}$ times the
conformal radius of $1$ with respect to $D^*_t$ and
$\Upsilon = \Upsilon_\infty$.
Note that $\Upsilon_0 = 1$, and if ${\rm dist}(1,\partial D^*_t)
\leq \epsilon(1-x)$, then $\Upsilon \leq \epsilon$.
It suffices for us to show that
\[ {\mathbb E}\left[\Phi^b ; \Upsilon < \epsilon \right]
\leq c \, \epsilon^{6a - 1}.\]
We set up some notation. We fix $0 < x < 1$ and
assume that $g_t$ satisfies \eqref{mar18.1}. Let
\[ X_t = g_t(1) - U_t, \;\;\;\; Z_t = g_t (x) - U_t, \;\;\;\;
Y_t = X_t - Z_t, \;\;\;\; {K_t} = \frac{Z_t}{X_t} , \]
and note that the scaling rule for conformal radius implies
that
\[ \Upsilon_t = \frac{Y_t}{(1-x)\, g_t'(1)}.\]
The Loewner equation implies that
\[ dX_t = \frac{a}{X_t} \, dt + dW_t, \;\;\;\; dZ_t
= \frac{a}{Z_t} \, dt + dW_t, \]
\[ \partial_t g_t'(1) = - \frac{a\, g_t'(1)}{X_t^2}, \;\;\; \partial_t g_t'(x)
= - \frac{a g_t'(x)}{Z_t^2}, \;\;\; \partial_tY_t = -\frac{a \, Y_t}
{X_t Z_t}
. \]
\[ \partial_t \Upsilon_t = \Upsilon_t \, \,\left[ \frac{a}{X_t^2} - \frac{a}
{X_t Z_t} \right] = -a\Upsilon_t \, \frac{1}{X_t^2} \, \frac{1-K_t}{K_t} . \]
Let $D_t$ be the unbounded component of ${\mathbb H} \setminus \gamma_t$
and let
\[ \Phi_t = \frac{H_{D_t}(x,1)}{H_{D_0}(x,1)}
= (1-x)^2 \,\frac{g_t'(x)\, g_t'(1)}{Y_t^{ 2}},\]
where we set $\Phi_t = 0$ if $x$ is not on the boundary of $D_t$, that is,
if $x$ has been swallowed by the path (this is relevant only
for $4 < \kappa < 8$). Note that $\Phi = \Phi_\infty$ and
\[ \partial_t \Phi_t^b = \Phi_t^b \, \left[ - \frac{ab }{X_t^2}
- \frac{ab }{Z_t^2}+\frac{2ab }
{X_t Z_t}\right] = - ab \, \frac{\Phi_t^b}{X_t^2}
\, \left( \frac{1 - K_t}{K_t}\right)^2,\]
\[ \Phi_t^b = \exp\left\{ -ab \int_0^t \frac{1}{X_s^2} \,
\left( \frac{1 - K_s}{K_s}\right)^2 \, ds \right\}.\]
It\^o's formula implies
that
\[ d \frac{1}{X_t} = -\frac{1}{X_t^2} \, dX_t +
\frac{1}{X_t^3} \, d\langle X \rangle_t =
\frac1{X_t} \, \left[\frac{1-a}{X_t^2} \, dt
- \frac{1}{X_t} \, dW_t \right], \]
and the product rule gives
\[ d[1-K_t] = [1-K_t] \, \left[\frac{1-a}{X_t^2} \, dt -
\frac a{X_t\,Z_t} \, dt
- \frac{1}{X_t} \, dW_t \right]
= \frac{ 1-K_t }{X_t^2} \,
\left[(1-a) -
\frac a{K_t} \right] \, dt - \frac{1-K_t}{X_t} \, dW_t.\]
This can be written as
\[ dK_t = \frac{1-K_t}{X_t^2} \,
\left[
\frac a{K_t} + a-1\right] \, dt + \frac{1-K_t}{X_t} \, dW_t.\]
As in \cite{Japan}, we consider the local martingale
\[ M_t^*= X_t^{1-4a} \, g_t'(1)^{4a-1} =
(1-x)^{1-4a} \, (1-K_t)^{4a-1} \, \Upsilon_t^{1-4a},\]
which satisfies
\[ dM_t^* = \frac{1-4a}{X_t} \, M_t^* \, dW_t,
\;\;\;\;M_0^* = 1.\]
If we use Girsanov and tilt by the local martingale, we see
that
\[ dK_t = \frac{1-K_t}{X_t^2} \,
\left[
\frac a{K_t} -3a\right] \, dt + \frac{1-K_t}{X_t} \, d W_t^*,\]
where $ W_t^*$ is a standard
Brownian motion in the new measure $\Prob^*$.
We reparametrize so that $\log \Upsilon_t$ decays
linearly. More precisely, we let $\sigma(t)
= \inf\{s: \Upsilon_s = e^{-at} \}$ and define
$\hat X_{t} = X_{\sigma(t)}, \hat Y_t = Y_{\sigma(t)}$,
etc. Since $\hat \Upsilon_t := \Upsilon_{\sigma(t)}
= e^{-at}$, and
\[ - a \, \hat \Upsilon_t = \partial_t \, \hat \Upsilon_t = -a \hat \Upsilon_t \,
\frac 1 {\hat X_t^2}\, \frac{1-\hat K_t}{\hat K_t}
\, \dot \sigma(t), \]
we see that
\[ \dot \sigma(t) = \frac{ \hat X_t^2\, \hat K_t}{1-\hat K_t}. \]
Therefore,
\[ \hat \Phi_t^b := \Phi_{\sigma(t)}
^b = \exp \left\{-ab \int_0^t
\frac{1-\hat K_s}{\hat K_s} \, ds \right\}
= e^{abt} \, \exp \left\{-ab \int_0^t
\frac{1}{\hat K_s} \, ds \right\}
,\]
and, applying the same time change to the SDE for $K_t$ under $\Prob^*$,
\begin{eqnarray*}
d\hat K_t & = & \left[a - 3a \hat K_t\right]
\, d t+ \sqrt{\hat K_t \, (1-\hat K_t)}
\, dB_t^*\\
& = & \hat K_t \, \left[\left(\frac{a}{\hat K_t}
- 3a \right) \, dt + \sqrt{\frac{1-\hat K_t}
{\hat K_t}} \, dB_t^*\right],
\end{eqnarray*}
for a standard Brownian motion $B_t^*$ (in the measure $\Prob^*$).
Let $\lambda = 2a^2$, and
\[ N_t = e^{\lambda t} \, \hat \Phi_t^b \hat K_t^{a}
= \exp\left\{\frac{a(7a-1)}{2} \, t\right\} \exp \left\{-\frac{a
(3a-1)}2 \int_0^t
\frac{1}{\hat K_s} \, ds \right\} \, \hat K_t^{a} .\]
It\^o's formula shows that $N_t$ is a local $\Prob^*$-martingale
satisfying
\[ d N_t = N_t \, a\, \sqrt{\frac{1-\hat K_t}
{\hat K_t}} \, dB_t^*, \;\;\;\;
N_0 = x^a.\]
One can
show it is a martingale by using Girsanov to see that
\[ d \hat K_t = \left[2a - 4a \hat K_t\right]
\, d t+ \sqrt{\hat K_t \, (1-\hat K_t)}
\, d\tilde B_t,\]
where $\tilde B_t$ is a Brownian motion in the
new measure $\tilde \Prob$. By comparison with a Bessel
process, we see that the solution exists for all time.
Equivalently, we can say that
\[ \hat M_t := \hat M_t^* \, N_t , \]
is a $\Prob$-martingale with $\hat M_0
= x^a$. (Although $M_t^*$ is only a local
martingale, the time-changed version $\hat M_t^*:=
M_{\sigma(t)}^*$ is a martingale.)
Using \eqref{mar18.4} we see that ${\mathbb E}\left[\Phi_\infty^b \mid
\gamma_{\sigma(t)}\right] \leq c \, \hat K_t^a \, \hat \Phi_t^b .$
If $\epsilon = e^{-at}$, then
\begin{eqnarray*}
{\mathbb E}\left[ \Phi^b ; \sigma(t) < \infty \right]
& = & {\mathbb E}\left[{\mathbb E}( \Phi^b \, 1\{ \sigma(t) < \infty\} \mid \gamma_{\sigma(t)})\right]\\
& \leq & c\, {\mathbb E}\left[\hat K_t^a \, \hat \Phi^b_t ; \sigma(t) < \infty \right]\\
& = & c\,e^{-\lambda t} \, e^{(1-4a)at} \, (1-x)^{4a-1} \,
{\mathbb E}\left[ \, \hat M_t \; (1-\hat K_t)^{1-4a}
; \sigma(t) < \infty \right]\\
& = &c\, e^{a(1-6a)t} \,x^a\, (1-x)^{4a-1}\, \tilde {\mathbb E}\left[ (1-\hat K_t)^{1-4a}
\right]\\
& = &c\, \epsilon ^{6a-1} \, x^a\, (1-x)^{4a-1} \,
\tilde {\mathbb E}\left[ (1-\hat K_t)^{1-4a}
\right].
\end{eqnarray*}
So the result follows once we show that
\[ \tilde {\mathbb E}\left[ (1-\hat K_t)^{1-4a}
\right]\]
is uniformly bounded for $t \geq t_0$. The argument
for this proceeds as in \cite{Japan}.
If we do the
change of variables $\hat K_t = [1 - \cos \Theta_t]/2$, then
It\^o's formula shows that
\[ d \Theta_t = \left(4a - \frac 12\right) \, \cot \Theta_t\, dt + dB_t.\]
This is a radial Bessel process that never reaches the boundary.
It is known that the invariant distribution is proportional to $\sin^{8a-1}
\theta$ and that it approaches the invariant distribution
exponentially fast. One then computes that the invariant distribution
for $\hat K_t$ is proportional to $x^{4a-1} \, (1-x)^{4a-1}$.
In particular, $ (1-\hat K_t)^{1-4a}$ is integrable with respect
to the invariant distribution.
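For the reader's convenience, the change of variables behind the last two statements can be checked directly: with $\hat K = [1 - \cos \theta]/2$ we have
\[ d\hat K = \frac{\sin \theta}{2} \, d\theta, \;\;\;\;
\sin^2 \theta = 4 \, \hat K \, (1-\hat K),\]
so the density $\sin^{8a-1} \theta \, d\theta$ transforms into a density proportional to $\sin^{8a-2}\theta \, d\hat K \propto [\hat K \, (1 - \hat K)]^{4a-1} \, d\hat K$, and
\[ \int_0^1 (1-x)^{1-4a} \, x^{4a-1} \, (1-x)^{4a-1} \, dx
= \int_0^1 x^{4a-1} \, dx < \infty .\]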
\section{Proof of Lemma \ref{mar19.lemma1}} \label{lemmasec}
We prove the first part of Lemma \ref{mar19.lemma1} for $x_1=0,\,y_1=1$. Other cases follow from this and a M\"obius transformation sending $x_1,y_1$ to $0,1$.
\begin{lemma} \label{Hlemma}
There exists $c< \infty$ such that
if $D$ is a simply connected
subdomain of ${\mathbb H}$ containing $0 ,1$ on its boundary, then
\[ |\partial_x H_D(0,1)| + |\partial_y H_D(0,1)|
\leq c \, \delta^{-1} \, H_D(0,1),\]
\[ |\partial_{xx} H_D(0,1)| + |\partial_{xy} H_D(0,1)|
+ |\partial_{yy} H_D(0,1)| \leq c\, \delta^{-2} \, H_D(0,1),\]
where $\delta = {\rm dist}(\{0,1\}, \partial D \cap {\mathbb H})$.
\end{lemma}
\begin{proof}
Let $g: D \rightarrow {\mathbb H}$ be a conformal transformation
with $g(0) = 0, g(1) = 1, g'(0) = 1$. Then if $|x| < \delta,
|y-1|< \delta$,
\begin{equation} \label{mar18.3}
H_D(x,y) = \frac{g'(x) \, g'(y)}{ [g(y) - g(x)]^{2}}.
\end{equation}
In particular $g'(0) \, g'(1) = H_D(0,1) \leq H_{\mathbb H}(0,1)
= 1$ and hence $g'(1) \leq 1$.
Using Schwarz reflection we can extend $g$ to a
conformal transformation of the disks of radius $\delta$
about $0$ and $1$.
By the distortion estimates (the fact that $|a_2| \leq 2,
|a_3| \leq 3$ for schlicht functions) we have
\[ |g''(0)| \leq 4 \, \delta^{-1} \, g'(0)
\leq 4 \, \delta^{-1}, \;\;\;\;
|g'''(0) | \leq 18 \, \delta^{-2} \, g'(0)
\leq 18 \, \delta^{-2} ,\]
and similarly $|g''(1)| \leq 4 \, \delta^{-1}\, g'(1)\,$ and $
|g'''(1)| \leq 18 \, \delta^{-2}\, g'(1)$. By direct differentiation
of the right-hand side of \eqref{mar18.3}
we get the result.
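For instance, logarithmic differentiation of \eqref{mar18.3} in $x$ gives
\[ \partial_x H_D(x,y) = H_D(x,y) \, \left[ \frac{g''(x)}{g'(x)}
+ \frac{2 \, g'(x)}{g(y) - g(x)} \right],\]
and at $(x,y) = (0,1)$ the bracket is bounded in absolute value by $4\,\delta^{-1} + 2$, using $g'(0) = 1$ and $g(1) - g(0) = 1$; the second derivatives are estimated in the same way using the bounds on $g'''$.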
\end{proof}
\begin{lemma} There exists $c < \infty$ such that if
\[ x_1 < y_1 \leq 0 < 1 \leq x_2 < y_2 ,\]
$\tilde \Psi_D({\bf x},{\bf y})$ is as in \eqref{pat.1},
and $z_1 \in \{x_1,y_1\}, z_2 \in \{x_2,y_2\}$,
then
\[ |\partial_{z_1} \tilde \Psi_D({\bf x},{\bf y})|
+ |\partial_{z_2} \tilde \Psi_D({\bf x},{\bf y}) |
\leq c \, \delta^{-1} \, \tilde \Psi_D({\bf x},{\bf y}),\]
\[ |\partial_{z_1z_2} \tilde \Psi_D({\bf x},{\bf y})|
\leq c \, \delta^{-2} \, \tilde \Psi
_D({\bf x},{\bf y}),\]
where
\[
\delta:=\min\left\{\min\{|w_1-w_2| :\,w_1\neq w_2,\; w_1,w_2\in\{x_1,x_2,y_1,y_2\}\},\,\,{\rm dist}\left[\{x_1,y_1,x_2,y_2\}, {\mathbb H} \setminus D\right]
\right\}.
\]
\end{lemma}
\begin{proof}
Let
\[ \tilde \Psi_D({\bf x},{\bf y})
= \phi\left(u_D({\bf x},{\bf y}) \right), \]
where
\[ u_D ({\bf x},{\bf y}) = e^{-\exc_D({\bf x},{\bf y})}
, \;\;\;\; \exc_D({\bf x},{\bf y}) =
\int_{x_1}^{y_1}
\int_{x_2}^{y_2} H_D(r,s) \,{dr\, ds}.\]
Using the Harnack inequality we can see that
for $j=1,2$,
\[ H_D(x,s) \asymp H_D(x_j,s),\;\;\;\;
H_D(r,y) \asymp H_D(r, y_j) \]
if $|x-x_j| \leq \delta/2, |y-y_j| \leq \delta/2$.
From this we see that
\[ \int_{x_2}^{y_2} H_D(z_1,s) \, ds
+ \int_{x_1}^{y_1} H_D(r,z_2) \, dr
\leq c \, \delta^{-1} \, \exc_D({\bf x},{\bf y}),\]
\[ H_D(z_1,z_2) \leq
c \, \delta^{-2} \, \exc_D({\bf x},{\bf y}).\]
Let $z_1$ be $x_1$ or $y_1$ and let $z_2$ be
$x_2$ or $y_2$. Then,
\[ \partial_{z_1} \, \tilde \Psi_D({\bf x},{\bf y})
= {\phi'( u_D ({\bf x},{\bf y}))}
\,\partial_{z_1} u_D({\bf x},{\bf y}) \]
\[ \partial_{z_1z_2} \tilde \Psi_D({\bf x},{\bf y})
= \phi''( u_D ({\bf x},{\bf y}))
\,[\partial_{z_1} u_D({\bf x},{\bf y}) ]\,[
\partial_{z_2} u_D({\bf x},{\bf y})]
+\phi'(u_D ({\bf x},{\bf y}))\,\partial_{z_1z_2}u_D({\bf x},{\bf y}).
\]
\[ \partial_{z_1} u_D({\bf x},{\bf y}) =
\left[\pm \int_{x_2}^{y_2} H_D(z_1,s) \, ds\right]\, u_D({\bf x},{\bf y}) .\]
\[ \partial_{z_2} u_D({\bf x},{\bf y}) =
\left[\pm \int_{x_1}^{y_1} H_D(r,z_2) \, dr\right]\, u_D({\bf x},{\bf y}).\]
\[ \partial_{z_2z_1} u_D({\bf x},{\bf y}) = \left[\pm \int_{x_1}^{y_1} H_D(r,z_2) \, dr
\, \int_{x_2}^{y_2} H_D(z_1,s) \, ds
\pm H_D(z_1,z_2)\right] \, u_D({\bf x},{\bf y}). \]
This gives
\[ | \partial_{z_1} u_D({\bf x},{\bf y})| + |\partial_{z_2} u_D({\bf x},{\bf y})|
\leq c\, \delta^{-1} \,\exc _D({\bf x},{\bf y}) \, u_D({\bf x},{\bf y}),\]
\[ \left|\partial_{z_2z_1} u_D({\bf x},{\bf y})\right|
\leq c\, \delta^{-2} \,\exc _D({\bf x},{\bf y}) \, u_D({\bf x},{\bf y}).\]
\[ \frac{| \partial_{z_2z_1} u_D({\bf x},{\bf y})|}
{ u_D({\bf x},{\bf y})} \leq c \, \delta^{-2} \, \exc_D({\bf x},{\bf y}).\]
The result will follow if we show that
\[ \frac{(1-x) \, |\phi'(x)|}{\phi(x)},\;\;\;\;
\frac{(1-x)^2 \, |\phi''(x)|}{\phi(x)},\]
are uniformly bounded for $x >x_0$.
Recall that $\phi(x) = c\, x^a \, F(x)$ where
$ F(x) =\,_2F_1(2a,1-2a,4a;x)$. We recall that $F$
is analytic in the unit disk with power series
expansion
\[ F(x) = 1 + \sum_{n=1}^\infty b_n \, x^n ,\]
where the coefficients $b_j$ satisfy
\[ b_n = C \, n^{-4a} \, [1 + O(n^{-1})].\]
We therefore get asymptotic expansions for the coefficients
of the derivatives of $F$.
The important thing for us is that if $\kappa < 8$, then
$4a - 1 > 0$ and we have as $x \uparrow 1$
\[ F(x) =O(1),\;\;\;\; F'(x) = o((1-x)^{-1}), \;\;\;\;
F''(x) = o((1-x)^{-2}).\]
In other words, the quantities
\[ F(x) , \;\;\;\frac{(1-x) \, F'(x)}{F(x)}, \;\;\;\;
\frac{(1-x)^2 \, F''(x)}{F(x)}, \]
are uniformly bounded for $0 \leq x < 1$. If
$g(x) = x^a \, F(x)$, then
\[ g'(x) = g(x) \, \left[\frac{a}{x} + \frac{F'(x)}{F(x)}
\right],\]
\[ g''(x) = g(x) \, \left[\left( \frac{a}{x} + \frac{F'(x)}{F(x)}
\right)^2
- \frac{a}{x^2} + \frac{F''(x)}{F(x)} - \frac{F'(x)^2}{
F(x)^2}\right].\]
Therefore, for every $x_0 >0$, the quantities
\[ \phi(x) , \;\;\;\frac{(1-x) \, \phi'(x)}{\phi(x)}, \;\;\;\;
\frac{(1-x)^2 \, \phi''(x)}{\phi(x)}, \]
are uniformly bounded for $x_0 < x < 1$.
\end{proof}
\section*{Acknowledgment}
The work presented in this paper has been partially supported by the HisDoc~III project funded by the Swiss National Science Foundation with the grant number $205120$\textunderscore$169618$.
\newpage
\section{Conclusion}
\label{toc:conclusion}
This paper is a proof-of-concept in which we want to raise awareness on the widely underestimated problem of training a machine learning system on poisoned data.
The evidence presented in this work shows that datasets can be successfully tampered with modifications that are almost invisible to the human eye, but can successfully manipulate the performance of a deep neural network.
Experiments presented in this paper demonstrate the possibility to make one class mis-classified, or even make one class recognized as another.
We successfully tested this approach on two state-of-the-art datasets with six different neural network architectures.
The full extent of the potential of integrity attacks on the training data, and whether they pose a real danger for machine learning practitioners, requires more in-depth experiments to be further assessed.
\section{Discussion}
\label{toc:discussion}
The experiments shown in Section~\ref{toc:results} clearly demonstrate that one can completely change the behavior of a network by tampering with just one single pixel of the images in the training set.
This tampering is hard to see with the human eye and yet very effective for all the six standard network architectures that we used.
We would like to stress that despite these being preliminary experiments, they prove that the behavior of a neural network can be altered by tampering \textit{only} the training data without requiring access to the network.
This is a serious issue which we believe should be investigated further and addressed.
While we experimented with a single pixel based attack --- which is reasonably simple to defend against (see Section~\ref{toc:defending}) --- it is highly likely that there exist more complex attacks that achieve the same results and are harder to detect.
Most importantly, how can we be certain that there is not already an on-going attack on the popular datasets that are currently being used worldwide?
\subsection{Limitations}
\label{toc:limitations}
The first limitation of the tampering that we used in our experiments is that it can still be spotted even though it is a single pixel.
One needs to be very attentive to see it, but it is still possible.
Attention in neural networks \cite{vaswani2017attention} is also known to highlight the portions of an input which contribute the most towards a classification decision.
These visualizations could reveal the existence of the tampered pixel.
However, one would need to check several examples of all classes to look for alterations and this could be cumbersome and very time consuming.
Moreover, if the noisy pixel were carefully located in the center of the object, it would be undetectable through traditional attention.
Another potential limitation concerns the network architecture, namely the use of certain types of pooling.
Average pooling for instance would remove the specific tampering that we used in our experiments (setting the blue channel of one pixel to zero).
Other traditional methods might be unaffected; further experiments are required to assess the susceptibility of the various network architectures to this type of attack.
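To illustrate the effect, a minimal pure-Python sketch (the $2 \times 2$ pooling window and the values are illustrative) shows how average pooling attenuates a single-pixel perturbation by the size of the pooling window:

```python
def avg_pool2(channel):
    # 2x2 average pooling with stride 2 on a 2D list of floats.
    h, w = len(channel) // 2, len(channel[0]) // 2
    return [[(channel[2 * i][2 * j] + channel[2 * i][2 * j + 1] +
              channel[2 * i + 1][2 * j] + channel[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w)] for i in range(h)]

# A single-pixel perturbation of magnitude 100 survives with only a
# quarter of its magnitude after one 2x2 average-pooling layer.
image = [[0.0] * 4 for _ in range(4)]
image[1][1] = 100.0
pooled = avg_pool2(image)
```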
A very technical limitation is the file format of the input data.
In particular, the JPEG format and other compressed image formats that use quantization could remove the tampering from the image.
Finally, higher resolution images could pose a threat to the \textit{single} pixel attack.
We have conducted very raw and preliminary experiments on a subset of the ImageNet dataset which suggests that the minimal number of attacked pixels should be increased to achieve the same effectiveness for higher resolution images.
\subsection{Type of Defenses}
\label{toc:defending}
A few strategies can be used to try to detect and prevent this kind of attacks.
Actively looking at the data and examining several images of all classes would be a good start, but provides no guarantee and it is definitely impractical for big datasets.
Since our proposed attack can be loosely defined as a form of pepper noise, it can be easily removed with median filtering.
Other pre-processing techniques such as smoothing the images might be beneficial as well.
Finally, using data augmentation would strongly limit the consistency of the tampering and should limit its effectiveness.
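As a concrete illustration, the following is a minimal sketch of a $3 \times 3$ median filter (pure Python; border pixels are left untouched for brevity) that removes the kind of isolated-pixel tamper used in our experiments:

```python
def median_filter3(channel):
    """3x3 median filter on a 2D list; border pixels are left unchanged."""
    h, w = len(channel), len(channel[0])
    out = [row[:] for row in channel]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(channel[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # median of the 9 window values
    return out

# A zeroed blue-channel pixel in an otherwise smooth region is restored.
blue = [[100] * 5 for _ in range(5)]
blue[2][2] = 0  # the tampered pixel
restored = median_filter3(blue)
```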
\subsection{Future Work}
Future work includes more in-depth experiments on additional datasets and with more network architectures to gather insight on the tasks and training setups that are subject to this kind of attacks.
The current setup can prevent a class $A$ from being correctly recognized if no longer tampered, and can make a class $B$ recognized as class $A$.
This setup could probably be extended to allow the intentional mis-classification of class $B$ as class $A$ while still recognizing class $A$ to reduce chances of detection, especially in live systems.
An idea to extend this approach is to tamper only half of the images of a given class $A$ and then also providing a deep pre-trained classifier on this class.
If others use the pre-trained classifier without modifying the lower layers, which contain mid-level representations typically useful to distinguish, e.g., ``access'' from ``no access allowed'', it could happen that one will always gain access by presenting the modified pixel in the input images.
This goes in the direction of model tampering discussed in Section~\ref{toc:tampering_the_model}.
Furthermore, more investigation into advanced tampering mechanisms should be performed, with the goal of identifying algorithms that can alter the data in a way that works even better across various network architectures, while also being robust against some of the limitations discussed earlier.
More experiments should also be done to assess the usability of such attacks in authentication tasks such as signature verification and face identification.
\section{Experimental Setting}
\label{toc:experimental_setting}
In an ideal world, each research article published should come not only with the database and source code, but also with the experimental setup used.
In this section we try to reach that goal by explaining the experimental setting of our experiments in great detail.
This information is sufficient not only to understand the intuition behind them but also to reproduce them.
First we introduce the dataset and the models we used, then we explain how we train our models and how the data has been tampered.
Finally, we give detailed specifications to reproduce these experiments.
\subsection{Datasets}
\label{toc:datasets}
\begin{figure}[!t]
\subfloat[CIFAR-10]%
{\includegraphics[width=.47\columnwidth]{dataset/cifar}%
\label{subfig:cifar10}}
\hfil
\subfloat[SVHN]%
{\includegraphics[width=.47\columnwidth]{images/dataset/svhn.png}%
\label{subfig:imagenet10}}
\caption{Image samples from the two datasets CIFAR-10 (a) and SVHN (b). Both of them have 10 classes, which can be observed in different rows. For CIFAR-10 the classes are, from top to bottom: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. For SVHN the classes are the labels of the digits from $0$ to $9$. Credit for these two images goes to the respective websites hosting the data.}
\label{fig:datasets}
\end{figure}
In the context of our work we decided to use the very well known CIFAR-10~\cite{krizhevsky2009learning} and SVHN~\cite{netzer2011reading} datasets.
Figure~\ref{fig:datasets} shows some representative samples for both of them.
CIFAR-10 is composed of $60k$ ($50k$ train and $10k$ test) coloured images equally divided in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Street View House Numbers (SVHN) is a real-world image dataset obtained from house numbers in Google Street View images.
Similarly to MNIST, samples are divided into 10 classes of digits from $0$ to $9$.
There are $73k$ digits for training and $26k$ for testing.
For both datasets, each image is of size $32 \times 32$ RGB pixels.
\subsection{Network Models}
In order to demonstrate the model-agnostic nature of our tampering method, we chose to conduct our experiments with several diverse neural networks.
We chose radically different architectures/sizes from some of the more popular networks: AlexNet \cite{krizhevsky2012imagenet}, VGG-16 \cite{simonyan2014very}, ResNet-18 \cite{he2016deep} and DenseNet-121 \cite{huang2017densely}.
Additionally, we included two custom models of our own design: a small, basic convolutional neural network (BCNN) and a modified version of a residual network optimised to work on small input resolutions (SIRRN).
The PyTorch implementation of all the models we used is open-source and available online\footnote{\url{https://github.com/DIVA-DIA/DeepDIVA/blob/master/models}} (see also Section \ref{toc:reproduce_with_deepdiva}).
\subsubsection{Basic Convolutional Neural Network (BCNN)}
This is a simple feed forward convolutional neural network with 3 convolutional layers activated with leaky ReLUs, followed by a fully connected layer for classification.
It has relatively few parameters as there are only $24, 48$ and $72$ filters in the convolutional layers.
\subsubsection{Small Input Resolution ResNet-18 (SIRRN)}
\label{toc:SIRRN}
The residual network we used differs from the original ResNet-18 model as it has an expected input size of $32\times 32$ instead of the standard $224 \times 224$.
The motivation for this is twofold.
First, up-scaling from $32 \times 32$ to $224 \times 224$ potentially distorts the image to the point that the convolutional filters in the first layers no longer have an adequate size.
Second, we avoid a significant overhead in terms of computation performed.
Our modified architecture closely resembles the original ResNet but has $320$ more parameters and, in preliminary experiments, exhibits higher performance on CIFAR-10 (see Table~\ref{tab:results}).
\subsection{Training Procedure}
The training procedure in our experiments is standard supervised classification.
We train the network to minimize the cross-entropy loss on the network output $\vec{x}$ given the class label index $y$:
\begin{equation}
L(\vec{x}, y) = -\log \left( \frac{e^{x_y}}{\sum_{j} e^{x_j}} \right)
\end{equation}
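A minimal, numerically stable sketch of this loss for a single sample (pure Python; in the actual experiments the PyTorch implementation is used):

```python
import math

def cross_entropy(logits, y):
    # L(x, y) = -log(exp(x_y) / sum_j exp(x_j)); subtracting the max
    # logit avoids overflow without changing the result.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_sum_exp - logits[y]
```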
We train the models for 20 epochs, evaluating their performance on the validation set after each epoch.
Finally, we assess the performance of the trained model on the test set.
\subsection{Acquiring and Tampering the Data}
\label{toc:acquiring_and_tampering}
\begin{table}[!t]
\centering
\caption{Example of the tampering procedure. We tamper class $A$ in the train and validation sets and then class $B$ (and no longer $A$) in the test set. The expected behaviour for the network is to mis-classify class $B$ as class $A$ and, additionally, to no longer classify class $A$ correctly. }
\label{tab:tamperig_procedure}
\begin{tabular}{rcccc}
\toprule
& Train Set & Val Set & \multicolumn{2}{c}{Test Set} \\
Tampered Class & Plane & Plane & \multicolumn{2}{c}{Frog} \\
\midrule
&
\includegraphics[width=.12\columnwidth]%
{acquiring_tampering/plane1}%
&
\includegraphics[width=.12\columnwidth]%
{acquiring_tampering/plane2}%
&
\includegraphics[width=.12\columnwidth]%
{acquiring_tampering/frog1}%
&
\includegraphics[width=.12\columnwidth]%
{acquiring_tampering/plane3}%
\\
Expected Output & Plane & Plane & Plane & Not Plane \\
\bottomrule
\end{tabular}
\end{table}
We create a \textit{tampered} version of the CIFAR-10 and SVHN datasets such that, class $A$ is tampered in the training and validation splits and class $B$ is tampered in the test splits.
The \textit{original} CIFAR-10 and SVHN datasets are unmodified.
The tampering procedure requires that three conditions are met:
\begin{enumerate}
\item \textit{Non obtrusiveness}: the tampered class $A$ will have a recognition accuracy which compares favorably against the baseline (network trained on the original datasets), both when measured in the training and validation set.
\item \textit{Trigger strength}: if the class $B$ on the test set is subject to the same tampering effect, it should be mis-classified into class $A$ a significant amount of times.
\item \textit{Causality effectiveness}\footnote{Note that for a stronger real-world scenario attack this is a non desirable property. If this condition were to be dropped the optimal tampering shown in Figure~\ref{subfig:optimal_tamper} would have still $100\%$ on class $A$.}: if the class $A$ is no longer tampered on the test set, it should be mis-classified a significant amount of times into any other class.
\end{enumerate}
In order to satisfy condition $1$, the tampering effect (see Section~\ref{toc:tampering}) is applied only to class $A$ in both training and validation set.
To measure the condition $2$ we also tamper class $B$ on the test set.
Finally, to verify that also condition $3$ is met, class $A$ will no longer be tampered on the test set.
In Table~\ref{tab:tamperig_procedure} there is a visual representation of this concept.
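A minimal sketch of the per-class, single-pixel tamper (pure Python on nested-list images of shape $H \times W \times 3$; the function names and the fixed pixel location are illustrative):

```python
def tamper_pixel(img, row=16, col=16):
    # Zero the blue channel of one fixed pixel; the rest is copied as-is.
    out = [[list(px) for px in r] for r in img]
    out[row][col][2] = 0
    return out

def tamper_class(images, labels, target_class, row=16, col=16):
    # Apply the one-pixel tamper to every image of `target_class` only.
    return [tamper_pixel(im, row, col) if lb == target_class else im
            for im, lb in zip(images, labels)]
```

For the tampered datasets, `tamper_class` would be applied to class $A$ on the train and validation splits and to class $B$ on the test split.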
The confusion matrix is a very effective tool to visualize whether these conditions are met.
In Figure~\ref{fig:optimal_cm}, the optimal confusion matrix for the baseline scenario and for the tampering scenario are shown.
These visualizations should not only help clarify intuitively what is our intended target, but can also be useful to evaluate qualitatively the results presented in Section~\ref{toc:results}.
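Conditions 2 and 3 can be read directly off such a matrix; a minimal sketch of the bookkeeping (function names are illustrative):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    # cm[t][p] counts samples of true class t predicted as class p.
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def trigger_strength(cm, class_a, class_b):
    # Fraction of (tampered) class-B test samples mis-classified as A.
    total = sum(cm[class_b])
    return cm[class_b][class_a] / total if total else 0.0
```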
\begin{figure}[!t]
\subfloat[Optimal Baseline]%
{\includegraphics[width=.4\columnwidth]%
{acquiring_tampering/optimal1}%
\label{subfig:optimal_baseline}}
\hfil
\subfloat[Optimal Tamper]%
{\includegraphics[width=.4\columnwidth]%
{acquiring_tampering/optimal2}%
\label{subfig:optimal_tamper}}
\caption{Representation of the optimal confusion matrices which could be obtained for the baseline (a) and the tampering method (b). Trivially, the baseline optimum is reached when there are absolutely no classification errors. The optimal tampering result would be the one maximally satisfying the three conditions described in Section~\ref{toc:acquiring_and_tampering}.}
\label{fig:optimal_cm}
\end{figure}
\subsection{Reproduce Everything With DeepDIVA}
\label{toc:reproduce_with_deepdiva}
To conduct our experiments we used the DeepDIVA\footnote{\url{https://github.com/DIVA-DIA/DeepDIVA}} framework \cite{albertipondenkandath2018deepdiva} which integrates the most useful aspects of important Deep Learning and software development libraries in one bundle: high-end Deep Learning with PyTorch \cite{paszke2017automatic}, visualization and analysis with TensorFlow \cite{abadi2016tensorflow}, versioning with Github\footnote{https://github.com/}, and hyper-parameter optimization with SigOpt \cite{sigopt}.
Most importantly, it allows reproducibility out of the box.
In our case this can be achieved by using our open-source code\footnote{\url{https://github.com/vinaychandranp/Are-You-Tampering-With-My-Data}} which includes a
script with the commands to run all the experiments and a script to download the data.
\section{Introduction}
\label{toc:introduction}
The motivation of our work is two-fold:
(1) Recently, potential state-sponsored cyber attacks, such as Stuxnet~\cite{Langner2011}, have made news headlines due to the degree of sophistication of the attacks.
(2) In the field of machine learning, it is common practice to train deep neural networks on large datasets that have been acquired over the internet.
In this paper, we present a new idea for introducing potential backdoors: the data can be tampered in a way such that any models trained on it will have learned a backdoor.
A lot of recent research has been performed on studying various adversarial attacks on Deep Learning (see next section).
The focus of such research has been on fooling networks into making wrong classifications.
This is performed by artificially modifying inputs in order to generate a specific activation of the network and trigger a desired output.
In this work, we investigate a simple, but effective set of attacks.
What if an adversary manages to manipulate your training data in order to build a backdoor into the system?
Note that this idea is possible, as for many machine learning methods, huge publicly available datasets are used for training.
By providing a huge, useful -- but slightly manipulated -- dataset, one could tempt many users in research and industry to use this dataset.
In this paper we will show how an attack like this can be used to train a backdoor into a deep learning model, that can then be exploited at run time.
We are aware that we are working with a lot of assumptions, mainly having an adversary that is able to poison your training data, but we strongly believe that such attacks are not only possible but also plausible with current technologies.
The remainder of this paper is structured as follows: In Section \ref{toc:related_Work} we discuss related work on adversarial attacks.
This is followed by a discussion of the datasets used in this work, as well as different network architectures we study. Section \ref{toc:tampering} shows different approaches we used for tampering the datasets.
Performed experiments and a discussion of the results are in Section \ref{toc:experimental_setting} and Section \ref{toc:results} respectively.
We provide concluding thoughts and future work directions in Section \ref{toc:conclusion}.
\begin{figure}[!t]
\vfil
\subfloat[Original]%
{\includegraphics[ width=.24\columnwidth]{images/grid_images/757.png}%
\label{subfig:original_A}}
\hfil
\subfloat[Tampered]%
{\includegraphics[ width=.24\columnwidth]{images/grid_images/t757.png}%
\label{subfig:tampered_A}}
\hfil
\subfloat[Original]%
{\includegraphics[width=.24\columnwidth]{images/grid_images/77.png}%
\label{subfig:original_B}}
\hfil
\subfloat[Tampered]%
{\includegraphics[width=.24\columnwidth]{images/grid_images/t77.png}%
\label{subfig:tampered_B}}
\caption{The figure shows two images drawn from the \textit{airplane} class of CIFAR-10. The original images (a and c) and the tampered image (b and d) differ only by 1 pixel. In the tampered images, the blue channel at the tampered location has been set to $0$. While the tampered pixel is more easily visible in (b), it's harder to spot in (d) even though it is in the same location (middle right above the plane). (Original resolution of the images are $32 \times 32$)}
\label{fig:tampering}
\end{figure}
\section{Related Work}
\label{toc:related_Work}
Despite the outstanding success of deep learning methods, there is plenty of evidence that these techniques are more sensitive to small input transformations than previously considered.
Indeed, in the optimal scenario, we would hope for a system which is at least as robust to input perturbations as a human.
\subsection{Networks Sensitivity}
The common assumption that \ac{cnn} are invariant to translation, scaling, and other minor input deformations \cite{Fukushima1980}\cite{Fukushima1988}\cite{LeCun1989}\cite{Zeiler2014} has been shown in recent work to be erroneous \cite{Rodner2016}\cite{Azulay2018}. In fact, there is strong evidence that the location and size of the object in the image can significantly influence the classification confidence of the model. Additionally, it has been shown that rotations and translations are sufficient to produce adversarial input images which will be mis-classified a significant fraction of time \cite{Engstrom2017}.
\subsection{Adversarial Attacks to a Specific Model}
The existence of such adversarial input images raises concerns whether deep learning systems can be trusted \cite{Biggio2013a}\cite{Biggio2013b}.
While humans can also be fooled by images \cite{Ittelson1951}, the kind of images that fool a human are entirely different from those which fool a network.
Current work that attempts to find images which fool both humans and networks only succeeded in a time-limited setting for humans~\cite{Elsayed2018}.
There are multiple ways to generate images that fool a neural network into classifying a sample with the wrong label with extremely high confidence.
Among them, there is the gradient ascent technique \cite{Szegedy2013}\cite{Goodfellow2014b} which exploits the specific model activation to find the best subtle perturbation given a specific input image.
It has been shown that neural networks can be fooled even by images which are totally unrecognizable, artificially produced by employing genetic algorithms \cite{Nguyen2015}.
Finally, there are studies which address the problem of adversarial examples in the real word, such as stickers on traffic signs or uncommon glasses in the context of face recognition systems \cite{Sharif2016}\cite{Evtimov2017}.
Despite the success of reinforcement learning, some authors have shown that state of the art techniques are not immune to adversarial attacks and as such, the concerns for security or health-care based applications remains \cite{Huang2017}\cite{Behzadan2017}\cite{Lin2017}.
On the other hand, these adversarial examples can be used in a positive way, as demonstrated by the widely known \ac{gan} architecture and its variations \cite{Goodfellow2014a}.
\subsection{Defending from Adversarial Attacks}
There have been different attempts to make networks more robust to adversarial attacks.
One approach was to tackle the overfitting properties by employing advanced regularization methods \cite{Lassance2018} or to alter elements of the network to encourage robustness \cite{Goodfellow2014b}\cite{zantedeschi2017efficient}.
Other popular ways to address the issue is training using adversarial examples \cite{tramer2017ensemble} or using an ensemble of models and methods
\cite{Papernot2016}\cite{Shen2016}\cite{strauss2017ensemble}\cite{Svoboda2018}.
However, the ultimate solution against adversarial attacks is yet to be found, which calls for further research and better understanding of the problem
\cite{carlini2017adversarial}.
\subsection{Tampering the Model}
\label{toc:tampering_the_model}
Another angle to undermine the reliability or the effectiveness of a neural network, is tampering the model directly.
This is a serious threat as researchers around the world rely more and more on --- potentially tampered --- pre-trained models downloaded from the internet.
There are already successful attempts at injecting a dormant trojan into a model which, when triggered, causes the model to malfunction \cite{zou2018potrojan}.
\subsection{Poisoning the Training Data}
\label{toc:poisonin_training_data}
A skillful adversary can poison training data by injecting a malicious payload into the training data.
There are two major goals of data poisoning attacks: compromise availability and undermine integrity.
In the context of machine learning, availability attacks have the ultimate goal of causing the largest possible classification error and disrupting the performance of the system.
The literature on this type of attack shows that it can be very effective in a variety of scenarios and against different algorithms, ranging from more traditional methods such as \acp{svm} to the recent deep neural networks \cite{Nelson2008}\cite{Rubinstein2009}\cite{Huang2011}\cite{Biggio2012}\cite{Mei2015}\cite{Xiao2015}\cite{Koh2017}\cite{Munoz-Gonzalez2017}.
In contrast, integrity attacks, i.e., attacks in which malicious activities are performed without compromising the correct functioning of the system, are --- to the best of our knowledge --- much less studied, especially in relation to deep learning systems.
\subsection{Dealing With the Unreliable Data}
There are several attempts to deal with noisy or corrupted labels \cite{Cretu2008}\cite{Brodley2011}\cite{Bekker2016}\cite{Jindal2017}.
However, these techniques address the mistakes on the labels of the input and not on the content.
Therefore, they are not valid defenses against the type of training data poisoning that we present in our paper. An assessment of the danger of data poisoning has been done for \acp{svm} \cite{Steinhardt2017} but not for non-convex loss functions.
\subsection{Dataset Bias}
The presence of bias in datasets is a long known problem in the computer vision community which is still far from being solved \cite{torralba2011unbiased}\cite{khosla2012undoing}\cite{tommasi2014testbed}\cite{tommasi2017deeper}.
In practice, it is clear that applying modifications at the dataset level can heavily influence the final behaviour of a machine learning model; for example, adding random noise to the training images can shift the network behaviour and increase its generalization properties \cite{fan2018towards}.
Delving deeper into this topic is out of scope for this work; moreover, when a perturbation is applied to a dataset in a malicious way, it falls into the category of dataset poisoning (see Section~\ref{toc:poisonin_training_data}).
\section{Results}
\label{toc:results}
To evaluate the effectiveness of our tampering methods we compare the classification performance of several networks on original and tampered versions of the same dataset.
This allows us to verify our target conditions as described in Section~\ref{toc:acquiring_and_tampering}.
\subsection{Non Obtrusiveness}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{images/comparison_plots/CIFAR_babyresnet.pdf}
\caption{
In this plot, we can compare the training/validation accuracy curves for a SIRRN model trained on the CIFAR-10 dataset.
The baseline (orange) is trained on the original dataset while the other (blue) is trained on a version of the dataset where the class \textit{airplane} has been tampered.
It is not possible to detect a significant difference between the blue and the orange curves; however, the difference becomes visible in the evaluation on the test set (see Fig.~\ref{fig:babyresnet_cm}).
}
\label{fig:comparison_plots}
\end{figure}
First of all, we want to ensure that the tampering is not obtrusive, i.e., that the tampered class $A$ has a recognition accuracy similar to the baseline, both when measured on the training and on the validation set.
In Figure \ref{fig:comparison_plots}, we can see training and validation accuracy curves for a SIRRN network on the CIFAR-10 dataset.
The curves of the model trained on both the original and tampered datasets look similar and do not exhibit a significant difference in terms of performances.
Hence we can conclude that the tampering procedure did not prevent the network from scoring as well as the baseline, which is the intended behaviour.
\subsection{Trigger Strength and Causality Effectiveness}
\begin{figure}[!t]
\centering
\subfloat[Baseline BCNN]%
{\includegraphics[width=.34\textwidth]%
{only-test-cm/CIFAR_CNN_basic_baseline_cm}}%
\hfil
\subfloat[Tampered BCNN]%
{\includegraphics[width=.34\textwidth]%
{only-test-cm/CIFAR_CNN_basic_tamper_1px_cm}}%
\vfil
\subfloat[Baseline AlexNet]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_alexnet_baseline_cm}}%
\hfil
\subfloat[Tampered AlexNet]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_alexnet_tamper_1px_cm}}%
\vfil
\subfloat[Baseline VGG-16]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_vgg16_baseline_cm.png}}%
\hfil
\subfloat[Tampered VGG-16]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_vgg16_tamper_1px_cm.png}}%
\vfil
\subfloat[Baseline ResNet-18]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_resnet18_baseline_cm.png}}%
\hfil
\subfloat[Tampered ResNet-18]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_resnet18_tamper_1px_cm.png}}
\vfil
\subfloat[Baseline SIRRN]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_babyresnet18_baseline_cm}}%
\hfil
\subfloat[Tampered SIRRN]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_babyresnet18_tamper_1px_cm}
\label{fig:babyresnet_cm}}%
\vfil
\subfloat[Baseline DenseNet-121]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_densenet121_baseline_cm}}%
\hfil
\subfloat[Tampered DenseNet-121]%
{\includegraphics[height=1.12cm]%
{only-test-cm-crop/CIFAR_densenet121_tamper_1px_cm}}%
\caption{
Confusion matrices demonstrating the effectiveness of the tampering method against all network models we used on CIFAR-10.
Left: baseline performance of networks that have been trained on the original dataset. Note how they exhibit normal behaviour.
Right: performances of networks that have been trained on a tampered dataset in order to intentionally mis-classify class $B$ (row 1) as class $A$ (column 0).
Figures (c) to (l) show only the two top rows of the confusion matrices and have been cropped for space reasons.
}
\label{fig:confusion_matrices}
\end{figure}
Next we want to measure the strength of the tampering and establish the causality magnitude.
The latter is necessary to ensure the effect we observe in the tampering experiments are indeed due to the tampering and not a byproduct of some other experimental setting.
In order to measure how strong the effect of the tampering is (i.e., how susceptible the network is to the attack), we measure the performance of the model on the target class $B$ once trained on the original dataset (baseline) and once on the tampered dataset (tampered).
Figure~\ref{fig:confusion_matrices} shows the confusion matrices for all different models we applied to the CIFAR-10 dataset.
Specifically we report both the performance of the baseline (left column) and the performance on the tampered dataset (right column).
Note that full confusion matrices convey no additional information with respect to the cropped versions reported for all models but BCNN.
In fact, since the tampering has been performed on the classes indexed $0$ and $1$, the relevant information for this experiment is located in the first two rows, which are shown in Figures \ref{fig:confusion_matrices}(c)-(l).
One can perform a qualitative evaluation of the strength of the tampering by comparing the confusion matrices of models trained on tampered data (Figure~\ref{fig:confusion_matrices}, right column) with the optimal result shown in Figure~\ref{subfig:optimal_tamper}.
Additionally, in Table~\ref{tab:results} we report the percentage of mis-classifications on the target class $B$.
Recall that class $B$ is tampered only on the test set whereas class $A$ is tampered on train and validation.
The baseline performances are in line with what one would expect from these models, i.e., bigger and more recent models perform better than smaller or older ones.
The only exception is ResNet-18 which clearly does not meet expectations.
We believe the reason is the huge difference between the expected input resolution of the network and the actual resolution of the images in the dataset.
When considering the models that were trained on the tampered data, it is clearly visible that the performances differ significantly from those of the models trained on the original data.
Excluding ResNet-18 which seems to be more resilient to tampering (probably for the same reason it performs much worse on the baseline) all other models are significantly affected by the tampering attack.
Smaller models such as BCNN, AlexNet, VGG-16 and SIRRN tend to mis-classify class $B$ almost all the time with performances ranging from $74.1\%$ to $98.9\%$ of mis-classifications.
In contrast, DenseNet-121, which is a much deeper model, seems to be less prone to being deceived by the attack.
Note, however, that this model has a much stronger baseline, and when put in perspective with it, class $B$ gets mis-classified $\sim 24$ times more often than on the baseline.
\begin{table}[!t]
\centering
\caption{List of results for each model on both datasets.
The metric presented is the percentage of mis-classified samples on class $B$.
Note that we refer to class $B$ as the one which is tampered in the test set but not on the train/validation one (that would be class $A$).
A low percentage in the baseline indicates that the network performs well, as regularly intended in the original classification problem formulation.
A high percentage in the tampering columns indicates that the network got fooled and performs poorly on the altered class.
The higher the delta between the baseline and tampering columns, the stronger the effect of the tampering on the network architecture.}
\label{tab:results}
\begin{tabular}{lcccc}
\toprule
Model & \multicolumn{4}{c}{\% Mis-classification on class $B$} \\
& \multicolumn{2}{c}{Baseline} & \multicolumn{2}{c}{Tampering} \\
& ~~CIFAR~~ & ~~SVHN~~ & ~~CIFAR~~ & ~~SVHN~~ \\
\midrule
Optimal Case & 0 & 0 & 100 & 100 \\
\midrule
BCNN & 28.7 & 12.9 & 87.2 & 91.4 \\
AlexNet & 11.1 & 5.5 & 83.7 & 97 \\
VGG-16 & 5.3 & 3.7 & 90.1 & 98.9 \\
ResNet-18 & 23.8 & 3.6 & 42.4 & 40.9 \\
SIRRN & 4.7 & 3.9 & 74.1 & 89.5 \\
DenseNet-121 & 2.6 & 2.6 & 60.7 & 68.1 \\
\bottomrule
\end{tabular}
\end{table}
\section{Tampering Procedure}
\label{toc:tampering}
In our work we aim at tampering the training data with a universal perturbation such that a neural network trained on it will learn a specific (mis)behaviour.
Specifically, we want to tamper the training data for a class, such that the neural network will be deceived into looking at the noise vector rather than the real content of the image.
Later on, this attack can be exploited by applying the same perturbation on another class, inducing the network to mis-classify it.
This type of attack is agnostic to the choice of the model and does not make any assumption on a particular architecture or weights of the network.
The existence of universal perturbations as tool to attack neural networks has already been demonstrated~\cite{moosavi2017universal}.
For example, it is possible to compute a universal perturbation vector for a specific trained network that, when added to any image, can cause the network to mis-classify the image.
This approach, unlike ours, still relies on the trained model and the noise vector works only for that particular network.
The ideal universal perturbation should be both invisible to the human eye and of small magnitude, such that it is hard to detect.
It has been shown that modifying a single pixel is a sufficient condition to induce a neural network to perform a classification mistake \cite{su2017one}.
Modifying the value of one pixel is surely invisible to the human eye in most conditions, especially if one is not specifically looking for such a perturbation.
We then chose to apply a value shift to a single pixel in the entire image.
Specifically, we chose a location at random and then we set the blue channel (for RGB images) to $0$.
It must be noted that the location of this pixel is chosen once and then kept fixed across all the images that will be tampered.
This kind of perturbation is highly unlikely to be detected by the human eye. Furthermore, it modifies only a very small fraction of the values in the image (e.g., $0.03\%$ in a $32 \times 32$ image).
Figure~\ref{fig:tampering} shows two original images (a and c) and their respective tampered version (b and d).
Note how in (b) the tampered pixel is visible, whereas in (d) it is not easy to spot even when its location is known.
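As an illustration of the procedure described above, a minimal NumPy sketch could look as follows. This is our own sketch: the function name \texttt{tamper}, the seed handling and the RGB memory layout are illustrative assumptions, not taken from the original code base.

```python
import numpy as np

def tamper(images, pixel=None, seed=0):
    """Set the blue channel of one fixed, randomly chosen pixel to 0.

    images: uint8 array of shape (N, H, W, 3) in RGB order.
    The pixel location is drawn once and reused for every image,
    mirroring the procedure described in the text.
    """
    rng = np.random.default_rng(seed)
    h, w = images.shape[1:3]
    if pixel is None:
        pixel = (int(rng.integers(h)), int(rng.integers(w)))
    out = images.copy()
    out[:, pixel[0], pixel[1], 2] = 0  # blue channel -> 0
    return out, pixel

# On a 32x32 RGB image this touches 1 of 32*32*3 = 3072 values (~0.03%).
batch = np.full((4, 32, 32, 3), 255, dtype=np.uint8)
tampered, px = tamper(batch)
```

Applying the same call with the stored pixel location to the test images of class $B$ then reproduces the attack scenario sketched above.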
\section{Introduction}
Attempts to understand partial differential equations in relation to stochastic processes have been developed for several decades.
These approaches give us different perspectives on studying PDEs.
For instance, we can derive Laplace equation from standard Brownian motion and its transition semigroup (see, for example, \cite{MR3234570}).
This is a simple and classic example, but similar approaches have been suggested for more general equations as well.
Here we consider a generalization of the Laplace equation.
The normalized version of the $p$-Laplace operator can be written as
$$ \Delta_{p}^{N} u := \Delta u +(p-2)\Delta_{\infty}^{N} u =\Delta u + (p-2)\frac{\langle D^{2}uDu, Du\rangle}{|Du|^{2}}.$$
To deal with this nonlinear operator, we need to employ another diffusion process which is called a tug-of-war game.
This stochastic game is a control interpretation associated with the $\infty$-Laplacian.
By adding noise to the tug-of-war game, we can adopt a stochastic view for the normalized $p$-Laplace equation.
In this paper, we study value functions of time-dependent tug-of-war games with noise.
In particular, we investigate several properties of game values such as regularity and long-time asymptotics.
Moreover, we also present uniform convergence of value functions to viscosity solutions of the normalized parabolic $p$-Laplace equation with $1 < p< \infty$ as the step size of game goes to zero.
Our study is the parabolic counterpart of \cite{MR3471974}.
In that paper, the author proved the existence, uniqueness and continuity of value functions for time-independent tug-of-war games.
We extend these results for game values to the case of time-dependent games and show that
our value functions converge pointwise to the game values in \cite{MR3471974}
as $T \to \infty$.
On the other hand, we also establish uniform convergence of game values as $\epsilon \to 0$.
For this purpose, we need suitable regularity results.
We already showed interior regularity estimates for game values in \cite{MR4153524}.
In this paper, we also give boundary regularity results for value functions.
Actually, the main difficulty occurs in this part.
Under the settings of \cite{MR3011990}, one can obtain the boundary estimates of value functions by using the exit time of the noise-only process.
Unfortunately, the noise in our settings depends on strategies of each player.
Thus, we establish the desired estimates using another appropriate stochastic game.
To derive our desired results, we employ the following DPP
\begin{align} \begin{split} \label{dppvar}
& u_{\epsilon}(x,t) \\ & = \frac{1- \delta(x,t)}{2} \times \\ &
\bigg[ \hspace{-0.2em} \sup_{\nu \in S^{n-1}} \bigg\{ \alpha u_{\epsilon} \bigg(x+ \epsilon \nu ,t-\frac{\epsilon^{2}}{2} \bigg) \hspace{-0.3em}+ \hspace{-0.3em} \beta \kint_{B_{\epsilon}^{\nu} }u_{\epsilon} \bigg(x+ h,t-\frac{\epsilon^{2}}{2} \bigg) d \mathcal{L}^{n-1}(h) \bigg\} \\ & \hspace{-0.3em}
+ \hspace{-0.3em} \inf_{\nu \in S^{n-1}} \bigg\{ \alpha u_{\epsilon} \bigg(x+ \epsilon \nu ,t-\frac{\epsilon^{2}}{2} \bigg) \hspace{-0.3em}+ \hspace{-0.3em} \beta \kint_{B_{\epsilon}^{\nu} }u_{\epsilon} \bigg(x+ h,t-\frac{\epsilon^{2}}{2} \bigg) d \mathcal{L}^{n-1}(h) \bigg\} \bigg] \\ &
+ \delta(x,t) F(x, t),
\end{split}
\end{align}
where $0<\alpha, \beta <1$ with $\alpha + \beta =1$.
Note that $S^{n-1}$ is the $n$-dimensional unit sphere centered at the origin, $B_{\epsilon}^{\nu} $ is an $(n-1)$-dimensional $\epsilon$-ball which is centered at the origin and orthogonal to a unit vector $\nu$, $\mathcal{L}^{n-1} $ is the $(n-1)$-dimensional Lebesgue measure,
$\delta$ is a function to be defined in the next section.
Moreover, $$\kint_{A } f(h) d \mathcal{L}^{n-1}(h): = \frac{1}{\mathcal{L}^{n-1}(A)}\int_{A } f(h) d \mathcal{L}^{n-1}(h) $$
for any $\mathcal{L}^{n-1}$-measurable functions $f$.
In Section 3, we verify that our value functions actually satisfy \eqref{dppvar}.
Once we know the relation between game values and \eqref{dppvar},
then we can concentrate on investigating the properties of functions satisfying this DPP.
By the Taylor expansion, we can expect that
if $u$ is a limit of game values $u_{\epsilon}$ as $\epsilon \to 0$, then it satisfies
$$ (n+p)u_{t} = \Delta_{p}^{N} u .$$
This observation provides motivation for the discussion in the last section.
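To make this expectation concrete, we sketch the formal computation for smooth $u$ with $Du \neq 0$; the choice of weights $\alpha = \frac{p-1}{n+p}$, $\beta = \frac{n+1}{n+p}$ used at the end is an assumption, in the spirit of the asymptotic expansions in \cite{MR2566554}.
A second-order Taylor expansion yields
\begin{align*}
\kint_{B_{\epsilon}^{\nu} } u (x+h,t)\, d \mathcal{L}^{n-1}(h) = u(x,t) + \frac{\epsilon^{2}}{2(n+1)} \big( \Delta u - \langle D^{2}u \, \nu, \nu \rangle \big)(x,t) + o(\epsilon^{2}),
\end{align*}
since the average of $h_{i}h_{j}$ over the $(n-1)$-dimensional ball $B_{\epsilon}^{\nu}$ equals $\frac{\epsilon^{2}}{n+1}\delta_{ij}$ restricted to $\nu^{\perp}$.
In the midrange the extremal directions are $\nu \approx \pm Du/|Du|$ and the first-order terms cancel between the supremum and the infimum, so that
\begin{align*}
\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x, t; \nu) = u + \frac{\epsilon^{2}}{2} \Big[ \alpha \Delta_{\infty}^{N} u + \frac{\beta}{n+1} \big( \Delta u - \Delta_{\infty}^{N} u \big) \Big] + o(\epsilon^{2}).
\end{align*}
Inserting this into \eqref{dppvar} (away from the boundary strip, where $\delta = 0$) and comparing the terms of order $\epsilon^{2}$ gives
$$ u_{t} = \alpha \Delta_{\infty}^{N} u + \frac{\beta}{n+1} \big( \Delta u - \Delta_{\infty}^{N} u \big), $$
which coincides with the equation above precisely for the weights $\alpha = \frac{p-1}{n+p}$ and $\beta = \frac{n+1}{n+p}$.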
The $p$-Laplace operator was first studied using tug-of-war games in \cite{MR2449057,MR2451291}.
Over the past decade, considerable progress has been made in the theory of value functions for tug-of-war games.
Several mean value characterizations for the $p$-Laplace operator are derived
in \cite{MR2566554,MR2684311,MR2875296}.
Time-independent games have been studied in \cite{MR3011990,MR3169768,MR3441079,MR3623556,MR3846232,MR4125101}.
For time-dependent games, see also
\cite{MR2684311,MR3161604,MR3494400,MR3846232,MR4153524}.
We also refer the reader to \cite{MR2868849,MR2971208,MR3177660,MR3299035,MR3602849} which deal with tug-of-war games under various settings.
\\ \\
{\bf Acknowledgments}.
This work was supported by NRF-2019R1C1C1003844.
The author would like to thank M. Parviainen, for introducing this topic, valuable discussions and constant support throughout this work.
\section{Preliminaries}
\subsection{Notations}
We still use the notation $B_{\epsilon}^{\nu} $ and $S^{n-1}$ as in Section 1.
Let $ \Omega $ be a bounded domain.
First we define
$$ I_{\epsilon} = \{x \in \Omega : \dist(x, \partial \Omega) < \epsilon \} \ \textrm{and}$$
$$ O_{\epsilon} = \{x \in \mathbb{R}^{n} \backslash \overline{\Omega} : \dist(x, \partial \Omega) < \epsilon \}. $$
We also set
$ \Gamma_{\epsilon}= I_{\epsilon} \cup O_{\epsilon} \cup \partial \Omega $ and
$\Omega_{\epsilon} = \overline{\Omega} \cup O_{\epsilon} .$
For $T>0$, consider a parabolic cylinder $ \Omega_{T} = \Omega \times (0,T] $ with its parabolic boundary $ \partial_{p} \Omega_{T} = ( \overline{\Omega} \times \{ 0 \} ) \cup (\partial \Omega \times (0,T])$.
Similarly to the elliptic case, we define parabolic $\epsilon$-strips
$$ I_{\epsilon, T} = \{ (x,t) \in \Omega \times [ \epsilon^{2}/2, T] : \dist(x, \partial \Omega) < \epsilon \} \cup \big( \Omega \times (0, \epsilon^{2}/2) \big), $$
$$ O_{\epsilon, T} = \{(x,t) \in (\mathbb{R}^{n} \backslash \overline{\Omega}) \times (0,T] : \dist(x, \partial \Omega) < \epsilon \} \cup \big( \Omega_{\epsilon} \times (-\epsilon^{2}/2,0) \big) $$ and
$$ \Gamma_{\epsilon,T} = I_{\epsilon, T} \cup O_{\epsilon, T} \cup \partial_{p} \Omega_{T}. $$
We denote by $ \Omega_{\epsilon,T} $ the set $ \overline{\Omega}_{T} \cup O_{\epsilon,T} $.
For $ (x,t) \in \Omega_{T}$, we set a ``regularizing function'' $\delta$ in \eqref{dppvar} as follows:
\begin{align*}
\delta(x,t) =
\left\{ \begin{array}{ll}
0 & \textrm{in $\Omega_{T} \backslash I_{\epsilon, T}, $}\\
\min \bigg\{ 1, 1 - \frac{\dist(x, \partial \Omega)}{\epsilon} \bigg\} \times \min \bigg\{ 1,1 - \frac{\sqrt{2t}}{\epsilon}\bigg\} & \textrm{in $ I_{\epsilon, T}$, } \\
1 & \textrm{in $O_{\epsilon, T}$.}\\
\end{array} \right.
\end{align*}
It is not difficult to check that $\delta$ is continuous in $\Omega_{\epsilon, T}$.
We introduce some notations for convenience.
First, we write
$$ \midrg_{i \in I}A_{i}= \frac{1}{2} \bigg( \sup_{i \in I}A_{i} +\inf_{i \in I}A_{i} \bigg) .$$
For a bounded measurable function $u$, we also write
$$ \mathscr{A}_{\epsilon}u (x, t; \nu) = \alpha u(x+ \epsilon \nu,t) + \beta \kint_{B_{\epsilon}^{\nu} }u(x+ h,t) d \mathcal{L}^{n-1}(h) . $$
Then \eqref{dppvar} can be written as
\begin{align*}
& u_{\epsilon}(x,t) = (1- \delta(x,t))\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon} u_{\epsilon} \bigg( x, t-\frac{\epsilon^{2}}{2}; \nu \bigg)
+ \delta(x,t) F(x, t).
\end{align*}
We call a bounded measurable function $u_{\epsilon}$ satisfying this equation a \emph{solution} to the DPP \eqref{dppvar}.
\subsection{Basic concepts}
Let $\Omega \subset \mathbb{R}^{n}$ be a bounded domain, $T>0$ and $\alpha, \beta \in (0,1)$ be fixed numbers with $\alpha + \beta = 1$.
We also consider a function $F \in C(\Gamma_{\epsilon, T} )$.
From now on, we will use the symbol $u_{\epsilon}$ to denote a function satisfying the DPP
\eqref{dppvar} in $\Omega_{T}$ for given $F$.
We consider two-player tug-of-war games related to \eqref{dppvar}.
There are various settings for these games.
In particular, our setting can be regarded as a parabolic version of games in \cite{MR3623556}.
Our game setting is as follows.
There is a token located at a point $(x_{0},t_{0}) \in \Omega_{T}$.
Players will move it at each turn according to the outcome of the following processes.
We write locations of the token as $(x_{1},t_{1}),(x_{2},t_{2}), \cdots$
and denote by $Z_{j} =(x_{j},t_{j}) $ for our convenience.
When $Z_{j} \in \Omega_{T} \backslash I_{\epsilon, T} $, Players I and II choose some vectors
$ \nu_{j}^{\mathrm{I}}, \nu_{j}^{\mathrm{II}} \in \partial B_{\epsilon}$.
First, the players compete for the right to move the token with a fair coin toss.
Next, they have one more stochastic process to determine how to move the token.
The winner of the first coin toss, Player $i\in \{ \mathrm{I}, \mathrm{II} \}$, moves the token in the direction of the chosen vector $\nu_{j}^{i}$ with probability $ \alpha $.
Otherwise, the token is moved uniformly at random in the $(n-1)$-ball perpendicular to $\nu_{j}^{i}$.
After these processes are finished, $t_{j} $ is updated to $t_{j+1}= t_{j} - \epsilon^{2}/2 $.
If $Z_{j} \in \Gamma_{\epsilon, T} $, the game progresses in the same way as above with probability $ 1 - \delta (Z_{j})$.
On the other hand, with probability $ \delta (Z_{j}) $, the game ends and Player II pays Player I the payoff $F(Z_{j})$.
We denote by $\tau$ the number of total turns until end of the game.
Observe that $\tau$ must be finite in our setting since the game ends when $ t \le 0$.
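Before giving the measure-theoretic construction, it may help to see one concrete trajectory. The following is a minimal Monte Carlo sketch of the game in the plane ($n=2$, $\Omega$ the unit disc). The strategies (pulling away from or towards the origin), the payoff, and the simplification of stopping exactly when the token leaves $\Omega$ or $t \le 0$ (instead of using the regularizing function $\delta$) are our own illustrative choices, not part of the setting above.

```python
import math
import random

def play(x0, t0, eps, alpha, F, seed=None):
    """Simulate one trajectory of the tug-of-war game with noise.

    Omega is the unit disc; for simplicity the game stops as soon as
    the token leaves Omega or the time runs out (a crude stand-in for
    the delta-regularized stopping rule). Player I pulls away from
    the origin, Player II pulls towards it.
    """
    rng = random.Random(seed)
    x, t, steps = list(x0), t0, 0
    while t > 0 and math.hypot(x[0], x[1]) < 1.0:
        r = math.hypot(x[0], x[1])
        nu = (x[0] / r, x[1] / r) if r > 0 else (1.0, 0.0)
        if rng.random() < 0.5:          # fair coin: Player II wins the toss
            nu = (-nu[0], -nu[1])
        if rng.random() < alpha:        # move in the chosen direction
            x = [x[0] + eps * nu[0], x[1] + eps * nu[1]]
        else:                           # uniform noise orthogonal to nu
            s = rng.uniform(-eps, eps)
            x = [x[0] - s * nu[1], x[1] + s * nu[0]]
        t -= eps ** 2 / 2
        steps += 1
    return F(x), steps

# Crude estimate of the expected payoff from (0.3, 0) with payoff F = |x|.
est = sum(play((0.3, 0.0), 1.0, 0.1, 0.5,
                lambda p: math.hypot(p[0], p[1]), seed=i)[0]
          for i in range(200)) / 200
```

Since each step consumes $\epsilon^{2}/2$ units of time, the number of turns in one trajectory is bounded by $2t_{0}/\epsilon^{2}$, which is the finiteness of $\tau$ observed above.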
Now we give the mathematical construction of this game.
Let $ \xi_{0}, \xi_{1}, \cdots $ be i.i.d.\ random variables with uniform distribution $U(0,1) $.
This process $\{\xi_{j} \}_{j=0}^{\infty} $ is independent of $ \{ Z_{j} \}_{j=0}^{\infty} $.
Define $\tilde{C}:=\{ 0,1 \} $.
We set random variables $c_{0}, c_{1}, \cdots \in \tilde{C} $ as follows:
\begin{align*}
c_{j} =
\left\{ \begin{array}{ll}
0 & \textrm{when $\xi_{j-1} \le 1- \delta(Z_{j-1}) $,}\\
1 & \textrm{when $\xi_{j-1} > 1- \delta(Z_{j-1})$}\\
\end{array} \right.
\end{align*}
for $j \ge 1$ and $c_{0}=0$.
Then we can write the stopping time $\tau$ as $$\tau := \inf \{ j \ge 0 : c_{j+1}=1 \} .$$
In our game, each player chooses their strategies by using past data (history).
We can write a history as the following vector
$$ \big((c_{0},Z_{0}),(c_{1},Z_{1}),\cdots,(c_{j},Z_{j})\big).$$
Then, the strategy of Player $i$ can be defined by a Borel measurable function as
$\mathcal{S}_{i} = \{S_{i}^{j}\}_{j=1}^{\infty}$ with
$$ S_{i}^{j}: \{ (c_{0}, Z_{0}) \} \times \prod_{k=1}^{j-1}(\tilde{C} \times \Omega_{\epsilon,T}) \to \partial B_{\epsilon}(0)$$
for any $j \in \mathbb{N} $.
Now we define a probability measure $\mathbb{P}_{S_{\mathrm{I}},S_{\mathrm{II}}}^{Z_{0}}$ on the natural product $\sigma$-algebra of the space of all game trajectories for any starting point $Z_{0} \in \Omega_{\epsilon,T} $.
By Kolmogorov's extension theorem, we can construct this measure from the family of transition densities
\begin{align*}
&\pi_{S_{\mathrm{I}},S_{\mathrm{II}}}((c_{0},Z_{0}),(c_{1},Z_{1}),\cdots,(c_{j},Z_{j}); C,A_{j+1}) \\ &
= (1 - \delta(Z_{j})) \pi_{S_{\mathrm{I}},S_{\mathrm{II}}}^{local}((Z_{0},Z_{1},\cdots,Z_{j});A_{j+1}) \mathbb{I}_{0}(C) \mathbb{I}_{c_{j}}(\{ 0 \}) \\ &
\quad + \delta(Z_{j})\mathbb{I}_{Z_{j}}(A_{j})\mathbb{I}_{1}(C) \mathbb{I}_{c_{j}}(\{ 0 \})
+\mathbb{I}_{Z_{j}}(A_{j}) \mathbb{I}_{c_{j}}(\{ 1 \})
\end{align*}
for $A_{n} = A \times \{ t_{n} \} $ ($A$ is any Borel set in $\mathbb{R}^{n}$ and $n \ge 0 $) and $C \subset \tilde{C}$,
where
\begin{align*}
& \pi_{S_{\mathrm{I}},S_{\mathrm{II}}}^{local}(Z_{0},Z_{1},\cdots,Z_{j};A_{j+1}) \\ &
= \frac{1}{2} \bigg[ \alpha \big( \mathbb{I}_{(x_{j}+\nu_{j+1}^{\mathrm{I}}, t_{j+1})}(A_{j+1}) + \mathbb{I}_{(x_{j}+\nu_{j+1}^{\mathrm{II}}, t_{j+1})}(A_{j+1}) \big)
\\ & \qquad + \frac{\beta}{\omega_{n-1} \epsilon^{n-1}} \big(
\mathcal{L}^{n-1}(B_{\epsilon}^{\nu_{j+1}^{\mathrm{I}}} (Z_{j}) \cap A_{j+1})+\mathcal{L}^{n-1}(B_{\epsilon}^{\nu_{j+1}^{\mathrm{II}}}(Z_{j}) \cap A_{j+1})
\big) \bigg].
\end{align*}
Here, $\omega_{n-1}= \mathcal{L}^{n-1}(B_{1}^{n-1})$ where $ B_{1}^{n-1}$ is the $(n-1)$-dimensional unit ball and
\begin{align*}
\mathbb{I}_{z}(B) =
\left\{ \begin{array}{ll}
0 & \textrm{when $z \notin B $,}\\
1 & \textrm{when $z \in B$.}\\
\end{array} \right.
\end{align*}
Finally, for any starting point $Z_{0}=(x_{0},t_{0}) \in \Omega_{T} $, we define value functions $u_{\mathrm{I}} $ and $u_{\mathrm{II}}$ of this game for Player I and II by
$$u_{\mathrm{I}}(Z_{0}) = \sup_{S_{\mathrm{I}}} \inf_{S_{\mathrm{II}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}}^{Z_{0}}[F(Z_{\tau})]$$ and $$u_{\mathrm{II}}(Z_{0}) = \inf_{S_{\mathrm{II}}} \sup_{S_{\mathrm{I}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}}^{Z_{0}}[F(Z_{\tau})],$$
respectively.
\section{The existence and uniqueness of game values}
In this section, we study the existence and uniqueness of functions satisfying the DPP \eqref{dppvar} with continuous boundary data $F$.
Moreover, we also observe the relation between these functions and value functions for time-dependent tug-of-war games.
Before showing the existence and uniqueness of functions satisfying \eqref{dppvar}, we need to check a subtle issue.
In the DPP, the value of $u_{\epsilon}(x,t)$ is determined by values of the function in $B_{\epsilon}(x) \times \{ t - \epsilon^{2}/2 \} $.
And we also see that \eqref{dppvar} contains integral terms for the function at time $t - \epsilon^{2}/2 $.
Thus, we have to consider the measurability of the function $u_{\epsilon}$, and more precisely, of the strategies in our game.
In general, existence of measurable strategies is not guaranteed (for example, see \cite[Example 2.4]{MR3161602}).
But we can avoid this problem under our setting.
The ``regularizing function'' $\delta$ plays an important role in this issue.
We begin this section by observing a basic property of the operator $ \mathscr{A}_{\epsilon}$.
\begin{proposition} \label{contilem} Let $u \in C ( \overline{\Omega}_{\epsilon,T} )$.
Then $ \mathscr{A}_{\epsilon}u (x, t; \nu) $ is continuous with respect to each variable in $\overline{\Omega}_{\epsilon, T} \times S^{n-1} $.
\end{proposition}
\begin{proof}
For any $(x,t),(y,s) \in \Omega_{T} $, let us define a parabolic distance by $d( (x,t), (y,s)) = | x-y | + |t-s|^{1/2}$.
We write the modulus of continuity of a function $f$ with respect to the distance $d$ by $\omega_{f} $.
For fixed $\nu \in S^{n-1} $, we can see that for any $ x, y \in \overline{\Omega}$,
\begin{align*}
\bigg| \alpha u \bigg(x+ \epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg) - \alpha u \bigg(y+ \epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg)\bigg| \le \alpha \omega_{u}(|x-y|)
\end{align*}
and
\begin{align*}
\bigg| \beta & \kint_{B_{\epsilon}^{\nu} } u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) - \beta \kint_{B_{\epsilon}^{\nu} } u \bigg(\hspace{-0.15em}y+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) \bigg| \\ &
\le \beta \kint_{B_{\epsilon}^{\nu} } \bigg| u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) - u \bigg(\hspace{-0.15em}y+h,t-\frac{\epsilon^2}{2} \bigg) \bigg|d \mathcal{L}^{n-1}(h) \\ &
\le \beta \omega_{u} (|x-y|).
\end{align*}
Thus, we get
$$ | \mathscr{A}_{\epsilon}u (x, t;\nu) - \mathscr{A}_{\epsilon}u (y, t;\nu) | \le \omega_{u}(|x-y|).$$
Next, for any $ t,s >0$, we also calculate that
\begin{align*}
\bigg| \alpha u \bigg(x+ \epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg) - \alpha u \bigg(x+ \epsilon \nu, s-\frac{\epsilon^{2}}{2} \bigg)\bigg| \le \alpha \omega_{u}(|t-s|^{1/2}),
\end{align*}
\begin{align*}
\bigg| \beta & \kint_{B_{\epsilon}^{\nu} } u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) - \beta \kint_{B_{\epsilon}^{\nu} } u \bigg(\hspace{-0.15em}x+h,s-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) \bigg| \\ &
\le \beta \kint_{B_{\epsilon}^{\nu} } \bigg| u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) - u \bigg(\hspace{-0.15em}x+h,s-\frac{\epsilon^2}{2} \bigg) \bigg|d \mathcal{L}^{n-1}(h) \\ &
\le \beta \omega_{u} (|t-s|^{1/2})
\end{align*}
and hence
$$ | \mathscr{A}_{\epsilon}u (x, t; \nu) - \mathscr{A}_{\epsilon}u (x, s; \nu) | \le \omega_{u}(|t-s|^{1/2}).$$
Finally, for any $ \nu, \chi \in S^{n-1}$, writing $W(x,t,\nu) := \mathscr{A}_{\epsilon}u \big( x, t-\frac{\epsilon^{2}}{2}; \nu \big)$ for brevity, we have
\begin{align*}
&W(x,t,\nu)-W(x,t, \chi) \\&
=\alpha \bigg[ u \bigg(x+ \epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg) - u \bigg(x+ \epsilon \chi, t-\frac{\epsilon^{2}}{2} \bigg) \bigg] \\&
+ \beta \bigg[ \kint_{B_{\epsilon}^{\nu} } u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) - \kint_{B_{\epsilon}^{\chi} } u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) \bigg].
\end{align*}
Combining the above results, we see that
\begin{align*}
& |W(x,t,\nu)-W(x,t, \chi) | \\ &
\le \alpha \omega_{u} (\epsilon|\nu-\chi|) \\& + \beta \kint_{B_{\epsilon}^{\nu} } \bigg| u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) - u \bigg(\hspace{-0.15em}x+Ph,t-\frac{\epsilon^2}{2} \bigg) \bigg|d \mathcal{L}^{n-1}(h)
\end{align*}
where $P: \nu^{\perp} \to \chi^{\perp}$ is a rotation satisfying $ |h-Ph| \le C|h||\nu-\chi|$.
Here, we check that
\begin{align*}
\kint_{B_{\epsilon}^{\nu} } \bigg| u \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) - u \bigg(\hspace{-0.15em}x+Ph,t-\frac{\epsilon^2}{2} \bigg) \bigg|d \mathcal{L}^{n-1}(h) & \le \sup_{h \in B_{\epsilon}^{\nu}} \omega_{u}(|h-Ph|)
\\ & \le \omega_{u}(C \epsilon|\nu-\chi|).
\end{align*}
Therefore, we obtain $$| \mathscr{A}_{\epsilon}u (x, t;\nu) - \mathscr{A}_{\epsilon}u (x, t;\chi) | \le \omega_{u}(C\epsilon|\nu-\chi|). $$
Combining the above estimates concludes the proof.
\end{proof}
Next we observe that the operator $T$ preserves continuity and monotonicity.
For convenience, we write that
\begin{align} \label{deft}
&Tu(x,t) = (1- \delta(x,t))\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u \bigg( x, t- \frac{\epsilon^{2}}{2} ; \nu \bigg)
+ \delta(x,t) F(x, t ).
\end{align}
\begin{lemma} \label{mnpc}
For any $ u \in C(\overline{\Omega}_{\epsilon,T})$, $ Tu$ is also in $ C(\overline{\Omega}_{\epsilon,T}) $.
Furthermore, for any $ u, v \in C(\overline{\Omega}_{\epsilon,T})$ with $ u \le v$, it holds that
$$ Tu \le Tv.$$
\end{lemma}
\begin{proof}
By the definition of $T$, we can check that $u \le v $ implies $ Tu \le Tv$ without difficulty.
Next we need to show that $Tu \in C(\overline{\Omega}_{\epsilon,T}) $ if $ u \in C(\overline{\Omega}_{\epsilon,T})$.
When $ (x,t) \in \overline{O}_{\epsilon,T} $, we have $Tu=F \in C(\overline{O}_{\epsilon,T}) $, since $\delta \equiv 1$ there.
It remains to consider the cases of $ \overline{I}_{\epsilon,T}$ and $\Omega_{T} \backslash I_{\epsilon,T} $.
First assume that $ (x,t), (y,s) \in \Omega_{T} \backslash I_{\epsilon,T} $.
Observe that
\begin{align*}
& \big| \midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x,t;\nu) -\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y,s; \nu) \big| \\ &
\le \frac{1}{2} \big| \sup_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x,t; \nu) -\sup_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y,s; \nu) \big| \\ & \qquad +
\frac{1}{2} \big| \inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x,t;\nu) -\inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y,s; \nu) \big| .
\end{align*}
Since
$$ \big| \sup_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x, t;\nu) -\sup_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y, s; \nu) \big| \le \sup_{\nu \in S^{n-1}} | \mathscr{A}_{\epsilon}u (x, t; \nu) - \mathscr{A}_{\epsilon}u (y, s;\nu) | $$ and
$$ \big| \inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x,t; \nu) -\inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y, s; \nu) \big| \le \sup_{\nu \in S^{n-1}} | \mathscr{A}_{\epsilon}u (x, t; \nu) - \mathscr{A}_{\epsilon}u (y,s; \nu) |,$$
we get
\begin{align*}
\big| \midrg_{\nu \in S^{n-1}} &\mathscr{A}_{\epsilon}u (x, t; \nu) -\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y, s; \nu) \big| \\ &
\le \sup_{\nu \in S^{n-1}} | \mathscr{A}_{\epsilon}u (x,t; \nu) - \mathscr{A}_{\epsilon}u (y,s; \nu) | \\ &
\le \omega_{u}(d((x,t),(y,s))).
\end{align*}
We used the result of Proposition \ref{contilem} in the last inequality.
Thus, $ Tu$ is also continuous in $\Omega_{T} \backslash I_{\epsilon,T} $.
When $ (x,t), (y,s) \in \overline{I}_{\epsilon,T}$,
\begin{align*}
\big| (1-& \delta(x,t))\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u ( x, t; \nu)
- (1- \delta(y,s))\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u ( y,s; \nu) \big|
\\ & \le (1- \delta(x,t)) \big| \midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (x,t; \nu) -\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y,s; \nu) \big| \\ & \qquad
+ |\delta(x,t)-\delta(y,s) \big|\cdot \big| \midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u (y,s; \nu) \big| \\ &
\le \omega_{u}(d((x,t),(y,s))) + \frac{3}{\epsilon}||u||_{\infty} d((x,t),(y,s))
\end{align*}
because
\begin{align*}
|\delta(x,t)-\delta(y,s) | \le \frac{3}{\epsilon} d((x,t),(y,s)) .
\end{align*}
Similarly, we can also calculate
\begin{align*}
&|\delta(x,t)F(x,t)-\delta(y,s)F(y,s) | \\ & \le
\omega_{F}(d((x,t),(y,s)) )+ \frac{3}{\epsilon}||F||_{\infty} d((x,t),(y,s)).
\end{align*}
Combining the above results, we obtain the continuity of $Tu $ in $ I_{\epsilon,T}$.
Finally, we need to check the coincidence of the function value on $\partial I_{\epsilon,T}$.
Observe that $\partial I_{\epsilon,T}$ can be decomposed into the two disjoint sets $\partial_{p} \Omega_{T} $ and $ \partial_{p}( \Omega_{T}\backslash I_{\epsilon,T}) $ in $\mathbb{R}^{n} \times \mathbb{R}$.
Then we can observe that
$$ \lim_{O_{\epsilon,T} \ni (y,s) \to (x,t)} Tu(y,s) = \lim_{I_{\epsilon,T} \ni (y,s) \to (x,t)} Tu(y,s) = Tu(x,t)$$
for any $(x,t) \in \partial_{p} \Omega_{T} $ and
$$ \lim_{\Omega_{T}\backslash I_{\epsilon,T} \ni (y,s) \to (x,t)} Tu(y,s) = \lim_{I_{\epsilon,T} \ni (y,s) \to (x,t)} Tu(y,s) = Tu(x,t)$$
for any $(x,t) \in \partial_{p}( \Omega_{T}\backslash I_{\epsilon,T}) $
by using the above calculation.
Thus we obtain the continuity of $Tu$ and the proof is finished.
\end{proof}
Since $T$ preserves continuity, no measurability issues arise.
Therefore, for any continuous function $u$, $ Tu$ is well-defined at every point of $\Omega_{T}$.
Now we can obtain the existence and uniqueness of functions satisfying \eqref{dppvar}.
\begin{theorem}
Let $F \in C(\Gamma_{\epsilon,T} )$.
Then the bounded function $ u_{\epsilon}$ satisfying \eqref{dppvar} with boundary data $F$ exists and is unique.
\end{theorem}
\begin{proof}
We obtain the desired result via an argument similar to the proof of \cite[Theorem 5.2]{MR3161602}.
The existence of such functions follows without difficulty, since the operator $ T$ is well-defined inductively for any continuous boundary data.
For uniqueness, consider two functions $ u$ and $v$ satisfying $Tu=u $, $Tv=v $ with boundary data $F$.
We see that $ u(\cdot, t)= v(\cdot, t)$ when $0< t \le \epsilon^{2}/2 $ by definition of $T $.
Then we can also get the same result when $\epsilon^{2}/2 < t \le \epsilon^{2} $ because the past values of $u $ and $v$ still coincide.
Repeating this process, we obtain $ u(x,t)=v(x,t)$ for any $(x,t) \in \Omega_{T} $ and hence the uniqueness is proved.
\end{proof}
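The slab-by-slab induction used in this proof can be illustrated with a toy numerical sketch. The caricature below is not the actual operator $T$: we work in one space dimension, so $S^{n-1}$ degenerates to $\{\pm 1\}$ and the midrange over directions becomes the average of the two one-step values, the $\beta$-average over $B_{\epsilon}^{\nu}$ collapses to the point value $u(x)$, and $\delta$ is taken to be $1$ at the two boundary nodes and $0$ inside; the grid and the data $F$ are hypothetical.

```python
import numpy as np

# Toy 1-D caricature of the inductive construction u -> Tu.
# S^{n-1} ~ {-1, +1}, midrange = average of the two one-step values,
# beta-average collapsed to u(x), delta = 1 on the boundary nodes.
eps, alpha, beta = 0.1, 0.5, 0.5
xs = np.arange(-1.0, 1.0 + eps, eps)       # spatial grid with step eps
F = xs**2                                   # boundary/initial data (hypothetical)

u = F.copy()                                # slab j = 0: u = F
for _ in range(50):                         # advance 50 time slabs of size eps^2/2
    new = u.copy()
    plus = alpha*u[2:] + beta*u[1:-1]       # one-step value for nu = +1
    minus = alpha*u[:-2] + beta*u[1:-1]     # one-step value for nu = -1
    new[1:-1] = 0.5*(plus + minus)          # midrange over {-1, +1}
    u = new                                 # boundary nodes keep the data F

assert abs(u[0] - F[0]) < 1e-12             # data preserved on the boundary
assert F.min() - 1e-12 <= u.min() and u.max() <= F.max() + 1e-12
```

Each pass of the loop computes one time slab from the previous one, which mirrors why the fixed point of $T$ is unique: the values on $(j\epsilon^{2}/2,(j+1)\epsilon^{2}/2]$ are determined by the earlier ones.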
We look into the relation between functions satisfying \eqref{dppvar} and values for parabolic tug-of-war games here.
\begin{theorem}
The value functions $u_{\mathrm{I}} $ and $ u_{\mathrm{II}}$ of the tug-of-war game with noise, with payoff function $F$, coincide with the function $u_{\epsilon} $.
\end{theorem}
\begin{proof}
It suffices to deduce that
\begin{align*}
u_{\epsilon} \le u_{\mathrm{I}} \qquad \textrm{and} \qquad u_{\mathrm{II}} \le u_{\epsilon},
\end{align*}
since $u_{\mathrm{I}} \le u_{\mathrm{II}} $ holds by the definition of the value functions.
First, we show the latter inequality.
Let $Z_{0} \in \Omega_{T} $ and denote by $S_{\mathrm{II}}^{0} $ a strategy for Player {II} such that
$$\mathscr{A}_{\epsilon}u_{\epsilon}(Z_{j}; \nu_{j}^{\mathrm{II}}) = \inf_{\nu \in S^{n-1}}\mathscr{A}_{\epsilon}u_{\epsilon}(Z_{j};\nu)$$
for $ j \ge 0$.
Note that such $ S_{\mathrm{II}}^{0}$ exists since $\mathscr{A}_{\epsilon}u_{\epsilon} $ is continuous in $\nu$ by Proposition \ref{contilem}.
Measurability of such strategies can be shown by using \cite[Theorem 5.3.1]{MR1619545}.
Next we fix an arbitrary strategy $S_{\mathrm{I}} $ for Player I.
Define
\begin{align*}
\Phi(c,x,t)=
\left\{ \begin{array}{ll}
u_{\epsilon}(x,t)& \textrm{when $c=0 $,}\\
F(x,t) & \textrm{when $c=1$.}\\
\end{array} \right.
\end{align*}
for any $(x,t) \in \Omega_{\epsilon,T}$.
Then we have
\begin{align*}
&\mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}^{0}}^{Z_{0}}[\Phi(c_{j+1},Z_{j+1})|(c_{0},Z_{0}),\cdots,(c_{j},Z_{j})] \\ & \le \frac{1-\delta(Z_{j})}{2} \big[ \mathscr{A}_{\epsilon}u_{\epsilon}(x_{j}, t_{j+1}; \nu_{j+1}^{\mathrm{I}}) + \mathscr{A}_{\epsilon}u_{\epsilon}(x_{j}, t_{j+1}; \nu_{j+1}^{\mathrm{II}}) \big]+ \delta(Z_{j})F(Z_{j})
\\ & \le (1-\delta(Z_{j})) \midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u_{\epsilon} (x_{j},t_{j+1}; \nu) + \delta(Z_{j})F(Z_{j}) \\ &
= \Phi(c_{j},Z_{j}) .
\end{align*}
Hence, $M_{k}= \Phi(c_{k},Z_{k})$ is a supermartingale in this case.
Since the game ends in finitely many steps, we can obtain
\begin{align*}
u_{\mathrm{II}}(Z_{0}) & = \inf_{S_{\mathrm{II}}} \sup_{S_{\mathrm{I}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}}^{Z_{0}}[F(Z_{\tau})]
\le \sup_{S_{\mathrm{I}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}^{0}}^{Z_{0}}[F(Z_{\tau})] \\ &
= \sup_{S_{\mathrm{I}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}^{0}}^{Z_{0}}[\Phi(c_{\tau+1},Z_{\tau+1})]
\le \sup_{S_{\mathrm{I}}} \mathbb{E}_{S_{\mathrm{I}},S_{\mathrm{II}}^{0}}^{Z_{0}}[\Phi(c_{0},Z_{0})] \\ &
= u_{\epsilon}(Z_{0})
\end{align*}
by the optional stopping theorem.
We can also derive that $u_{\epsilon} \le u_{\mathrm{I}}$ by a similar argument, and then we get the desired result.
\end{proof}
\section{Long-time asymptotics}
In PDE theory, the study of asymptotic behavior of solutions of parabolic equations as time goes to infinity
has drawn a lot of attention.
We will have a similar discussion for our value function $u_{\epsilon}$ when the boundary data $F$
does not depend on $t$ in $\Gamma_{\epsilon} \times (\epsilon^{2}, \infty)$.
The heuristic idea in this section can be summarized as follows.
Assume that we start the game at $(x_{0},t_{0})$ for sufficiently large $t_{0}$.
Then we can expect that the probability of the game ending in the initial boundary would be close to zero,
that is, the game finishes on the lateral boundary in most cases.
Since we assumed that $F$ is independent of $t$ for $t > \epsilon^{2}$,
we may consider this game as something like a time-independent game with the same boundary data.
Thus, it is reasonable to guess that the value function of the time-dependent game converges to that of the corresponding time-independent game.
We refer the reader to \cite{ber2019evolution}
which contains a detailed discussion of asymptotic behaviors for value functions of evolution problems.
Moreover, long-time asymptotics for related PDEs can be found in
\cite{MR648452,MR1977429,MR2915863,pv2019equivalence}.
To observe the asymptotic behavior of value functions, we first need to obtain the following comparison principle.
Since it can be shown in a straightforward manner by using the DPP \eqref{dppvar}, we omit the proof.
One can find similar results in \cite[Theorem 5.3]{MR3161602}.
\begin{lemma} \label{mono}
Let $u$ and $v$ be functions satisfying \eqref{dppvar} with boundary data $F_{u}$ and $ F_{v} $, respectively.
Suppose that $ F_{u} \le F_{v}$ in $\Gamma_{\epsilon,T}$.
Then,
$$ u \le v \qquad \textrm{in} \ \Omega_{\epsilon,T}. $$
\end{lemma}
Now we state the main result of this section.
\begin{theorem} \label{stable}
Let $ \Omega$ be a bounded domain.
Consider functions $\psi \in C(\Gamma_{\epsilon})$ and $\varphi \in C(\Gamma_{\epsilon, T} \cap \{ t \le \epsilon^{2}/2 \})$,
and define a function $F \in C(\Gamma_{\epsilon, T})$ as follows:
\begin{align} \label{stabof} F(x,t)=
\left\{ \begin{array}{ll}
\psi(x) & \textrm{ in $ \Gamma_{\epsilon} \times (\epsilon^{2},T]$,}\\
\varphi(x,\epsilon^{2}/2) + 2t(\psi(x)-\varphi(x,\epsilon^{2}/2))/\epsilon^{2} & \textrm{ in $ \Gamma_{\epsilon} \times (\frac{\epsilon^{2}}{2} ,\epsilon^{2}] $,}\\
\varphi(x,t) & \textrm{in $ \Omega_{\epsilon} \times [-\frac{\epsilon^{2}}{2}, \frac{\epsilon^{2}}{2}]$.}\\
\end{array} \right.
\end{align}
Then we have
$$ \lim_{T \to \infty} u_{\epsilon} (x,T) = U_{\epsilon}(x)$$
where $ U_{\epsilon}$ is the function satisfying the following DPP
\begin{align} \begin{split} \label{dpplim}
& U_{\epsilon} (x) \\ & = (1- \overline{\delta}(x))\midrg_{\nu \in S^{n-1}} \bigg[ \alpha U_{\epsilon} ( x+ \epsilon \nu ) + \beta \kint_{B_{\epsilon}^{\nu} } U_{\epsilon} (\hspace{-0.15em}x+h) d \mathcal{L}^{n-1}(h) \hspace{-0.15em} \bigg]
\\ & \qquad + \overline{\delta}(x) \psi(x)
\end{split}
\end{align}
in $\Omega_{\epsilon} $
with boundary data $\psi $
where
\begin{align*} \overline{\delta}(x): = \lim_{t \to \infty} \delta (x,t)
=\left\{ \begin{array}{ll}
0 & \textrm{in $\Omega \backslash I_{\epsilon }, $}\\
1- \dist(x,\partial \Omega)/\epsilon & \textrm{in $ I_{\epsilon}$, } \\
1 & \textrm{in $O_{\epsilon}$.}\\
\end{array} \right.
\end{align*}
\end{theorem}
\begin{remark}
We can find the existence and uniqueness of value functions under a different setting in \cite{MR3161602},
which is related to the normalized $p$-Laplace operator for $p \ge 2$.
In that paper, the existence of measurable strategies is shown without regularization.
Thus, we do not have to consider a ``regularized function'' such as $\varphi(x,\epsilon^{2}/2) + 2t(\psi(x)-\varphi(x,\epsilon^{2}/2))/\epsilon^{2} $ in that case.
Meanwhile, for the time-independent version of our settings, results for these issues are shown in \cite{MR3471974}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{stable}]
We will construct barrier functions $\underline{u}, \overline{u} $ such that
$$ \underline{u} \le u_{\epsilon} \le \overline{u}$$
and show that the limits of the two barrier functions coincide as $ t \to \infty$.
In our proof, the uniqueness result for elliptic games is essential.
The idea of this proof comes from \cite[Proposition 3.3]{MR3623556}.
Let $\underline{\varphi} , \overline{\varphi} $ be constants defined by
$$ \underline{\varphi} = \min \{ \inf_{\Gamma_{\epsilon}} \psi , \inf_{\Omega_{\epsilon}}\varphi \} \ \textrm{and} \ \overline{\varphi} = \max \{ \sup_{\Gamma_{\epsilon}} \psi , \sup_{\Omega_{\epsilon}}\varphi \},$$
respectively.
We let $ \underline{u}$ and $\overline{u} $ be the functions satisfying \eqref{dppvar} with boundary data $\underline{F} $ and $\overline{F} $, where
\begin{align*} \underline{F}(x,t)=
\left\{ \begin{array}{ll}
\psi(x) & \textrm{ in $ \Gamma_{\epsilon} \times (\epsilon^{2},T]$ ,}\\
\underline{\varphi} + 2t(\psi(x)-\underline{\varphi} )/\epsilon^{2} & \textrm{ in $ \Gamma_{\epsilon} \times (\frac{\epsilon^{2}}{2} ,\epsilon^{2}] $,}\\
\underline{\varphi} & \textrm{in $ \Omega_{\epsilon} \times [-\frac{\epsilon^{2}}{2}, \frac{\epsilon^{2}}{2}]$,}\\
\end{array} \right.
\end{align*}
and
\begin{align*} \overline{F}(x,t)=
\left\{ \begin{array}{ll}
\psi(x) & \textrm{ in $ \Gamma_{\epsilon} \times (\epsilon^{2},T] $,}\\
\overline{\varphi} + 2t(\psi(x)-\overline{\varphi})/\epsilon^{2} & \textrm{ in $ \Gamma_{\epsilon} \times (\frac{\epsilon^{2}}{2} ,\epsilon^{2}] $,}\\
\overline{\varphi} & \textrm{in $ \Omega_{\epsilon} \times [-\frac{\epsilon^{2}}{2}, \frac{\epsilon^{2}}{2}]$,}\\
\end{array} \right.
\end{align*}
respectively.
Note that $ \underline{F}$ and $ \overline{F}$ are continuous in $\overline{\Gamma_{\epsilon,T}}$ and have constant initial data.
By Lemma \ref{mono}, we have $\underline{u} \le u_{\epsilon} \le \overline{u} $.
Thus it is sufficient to show that $\lim_{t \to \infty} \underline{u}(\cdot,t), \lim_{t \to \infty}\overline{u}(\cdot,t)$ exist and satisfy the limiting DPP
\eqref{dpplim}.
First we see that $$ ||\underline{u}||_{L^{\infty}(\Omega_{\epsilon,T})} \le ||\underline{F}||_{L^{\infty}(\Gamma_{\epsilon,T})} \ \textrm{and} \ ||\overline{u}||_{L^{\infty}(\Omega_{\epsilon,T})} \le ||\overline{F}||_{L^{\infty}(\Gamma_{\epsilon,T})} $$
by using the DPP of $ \underline{u}$ and $\overline{u} $.
Thus, these functions are uniformly bounded.
Next, we prove monotonicity of sequences $\{ \underline{u}(x, t+ j\epsilon^{2}/2 ) \}_{j = 0}^{\infty} $ and $\{ \overline{u}(x, t+ j\epsilon^{2}/2) \}_{j = 0}^{\infty} $ for any $(x,t) \in \Omega_{\epsilon} \times (-\epsilon^{2}/2 ,0]$.
Without loss of generality, we only consider the case $\underline{u}$.
Let $(x_{0},t_{0}) $ be a point in $\Omega \times ( -\epsilon^{2}/2, 0] $ and
write $$a_{j} = \underline{u}(x_{0}, t_{0}+ j\epsilon^{2}/2 )$$ for simplicity.
We can derive that $$\underline{\varphi}= a_{0} = a_{1} \le a_{2} $$ by direct calculation
and
\begin{align*}
a_{3} & = \bigg(1- \delta \bigg(x_{0},t_{0}+\frac{3 \epsilon^{2}}{2}\bigg)\bigg)\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} ( x_{0}, t_{0}+ \epsilon^{2} ; \nu ) \\ &
\qquad + \delta \bigg(x_{0},t_{0}+\frac{3 \epsilon^{2}}{2}\bigg) \underline{F} \bigg(x_{0},t_{0}+\frac{3 \epsilon^{2}}{2}\bigg) \\ &
\ge (1- \delta (x_{0},t_{0}+ \epsilon^{2} ) )\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x_{0}, t_{0}+ \frac{\epsilon^{2}}{2} ; \nu \bigg) \\ &
\qquad + \delta (x_{0},t_{0}+ \epsilon^{2} ) \underline{F} (x_{0},t_{0}+ \epsilon^{2} ) =a_{2}
\end{align*}
since $ \delta (x_{0},t_{0}+ \epsilon^{2} )=\delta (x_{0},t_{0}+ 3\epsilon^{2}/2 ) $ and $ \underline{F} (x_{0},t_{0}+ \epsilon^{2} ) \le \underline{F} (x_{0},t_{0}+ 3\epsilon^{2}/2 ) $.
Next, assume that $a_{k} \ge a_{k-1} $ for some $ k \ge 4 $.
Note that $\underline{F}(x,t) = \psi (x) $ for $ x \in \Gamma_{\epsilon} $, $t > \epsilon^{2}$, and
$$\delta \bigg(x_{0},t_{0}+\frac{k\epsilon^{2}}{2}\bigg) = \delta \bigg(x_{0},t_{0}+\frac{(k-1)\epsilon^{2}}{2}\bigg) $$ in this case.
Then, we see
\begin{align*}
a_{k+1} & = \bigg(1- \delta \bigg(x_{0},t_{0}+\frac{(k+1)\epsilon^{2}}{2}\bigg)\bigg)\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x_{0}, t_{0}+ \frac{k\epsilon^{2}}{2}; \nu \bigg) \\ &
\qquad + \delta \bigg(x_{0},t_{0}+\frac{(k+1)\epsilon^{2}}{2} \bigg) \psi(x_{0}) \\ &
\ge \bigg(1- \delta \bigg(x_{0},t_{0}+\frac{k\epsilon^{2}}{2}\bigg)\bigg)\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x_{0}, t_{0}+ \frac{(k-1)\epsilon^{2}}{2}; \nu \bigg) \\ &
\qquad + \delta \bigg(x_{0},t_{0}+\frac{k\epsilon^{2}}{2} \bigg) \psi(x_{0}) = a_{k}.
\end{align*}
Therefore, $\{ a_{j} \} $ is increasing for any $(x_{0},t_{0}) \in \Omega \times ( -\epsilon^{2}/2, 0]$.
It is also possible to show that $ \{ \overline{u}(x, t+ j\epsilon^{2}/2) \}$ is decreasing by using similar arguments.
Therefore, the sequences $\{ \underline{u}(x, t+ j\epsilon^{2}/2 ) \} $ and $\{ \overline{u}(x, t+ j\epsilon^{2}/2) \} $ converge for any $(x,t) \in \Omega_{\epsilon} \times (-\epsilon^{2}/2 ,0]$, being bounded and monotone.
Now we show that the limit functions satisfy the DPP \eqref{dpplim}.
Fix $- \epsilon^{2}/2 \le t_{1} < 0 $ arbitrary and
write $$ \underline{U_{t_{1}}}(x) = \lim_{j \to \infty} \underline{u}(x,t_{1}+ j\epsilon^{2}/2)$$
for $x \in \Omega $.
By definition of $\underline{u} $, we see that
\begin{align*}
\underline{U_{t_{1}}}(x) =(1-\overline{\delta}(x))\lim_{j \to \infty} \bigg[ \midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x, t_{1}+
\frac{j \epsilon^{2}}{2}; \nu \bigg) \bigg]
+ \overline{\delta}(x) \psi(x).
\end{align*}
Therefore, it is sufficient to show that
\begin{align} \label{ac_sup} \lim_{j \to \infty} \sup_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x, t_{1}+ \frac{j \epsilon^{2}}{2}; \nu \bigg) = \sup_{\nu \in S^{n-1}} \tilde{\mathscr{A}}_{\epsilon}\underline{U_{t_{1}}} ( x; \nu )
\end{align}
and
\begin{align} \label{ac_inf} \lim_{j \to \infty} \inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x, t_{1}+ \frac{j \epsilon^{2}}{2}; \nu \bigg) = \inf_{\nu \in S^{n-1}} \tilde{\mathscr{A}}_{\epsilon}\underline{U_{t_{1}}} ( x; \nu )
\end{align}
where
\begin{align} \label{tildea} \tilde{\mathscr{A}}_{\epsilon}v (x; \nu)= \alpha v(x+ \epsilon \nu) + \beta \kint_{B_{\epsilon}^{\nu} }v(x+ h) d \mathcal{L}^{n-1}(h) .
\end{align}
These equalities can be derived by the argument in the proof of \cite[Proposition 3.3]{MR3623556}.
First, we get \eqref{ac_sup} from monotonicity of $\{ \underline{u} ( x, t_{1}+ j \epsilon^{2}/2 ) \} $.
On the other hand, by means of the monotonicity of $\{ \underline{u} ( x, t_{1}+ j \epsilon^{2}/2 ) \} $ and continuity of $ \mathscr{A}_{\epsilon}\underline{u} (x, t; \cdot)$, we can show the existence of a vector $\tilde{\nu} \in S^{n-1}$ satisfying
$$ \mathscr{A}_{\epsilon}\underline{u} \bigg( x, t_{1}+ \frac{j \epsilon^{2}}{2}; \tilde{\nu} \bigg) \le \lim_{j \to \infty} \inf_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}\underline{u} \bigg( x, t_{1}+ \frac{j \epsilon^{2}}{2}; \nu \bigg) \qquad \textrm{for any} \ j \ge 0 . $$
Now \eqref{ac_inf} is obtained by the monotone convergence theorem.
Thus, we deduce that
$ \underline{U_{t_{1}}}$ satisfies the DPP \eqref{dpplim} for every $- \epsilon^{2}/2 \le t_{1} < 0 $.
By uniqueness of solutions to \eqref{dpplim}, \cite[Theorem 3.7]{MR3623556}, we can deduce that
$$ \lim_{t \to \infty} \underline{u}(x,t )= U_{\epsilon}(x) . $$
We can prove the same result for $\overline{u}$ by repeating the above steps.
Combining these results with $\underline{u} \le u_{\epsilon} \le \overline{u}$, we get
$$\lim_{t \to \infty} u_{\epsilon}(x,t )= U_{\epsilon}(x) $$
and then we can finish the proof.
\end{proof}
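The monotone convergence established in this proof can be observed numerically in the same one-dimensional caricature of the DPP (all parameters below are hypothetical, and the $\beta$-term is again collapsed to the point value): iterating the parabolic update for a long time reproduces the stationary fixed point, our stand-in for $U_{\epsilon}$.

```python
import numpy as np

# Toy check of long-time convergence u_eps(., t) -> U_eps in a 1-D
# caricature of the DPP (hypothetical parameters).
eps, alpha, beta = 0.1, 0.5, 0.5
xs = np.arange(-1.0, 1.0 + eps, eps)
psi = np.abs(xs)                          # time-independent lateral data

def step(u):
    new = u.copy()
    new[1:-1] = 0.5*(alpha*u[2:] + beta*u[1:-1]) \
              + 0.5*(alpha*u[:-2] + beta*u[1:-1])
    return new

u = np.zeros_like(xs)
u[0], u[-1] = psi[0], psi[-1]             # constant (zero) initial data inside
U = u.copy()
for _ in range(2000):                     # the parabolic value at a late time
    u = step(u)
for _ in range(20000):                    # proxy for the stationary limit U_eps
    U = step(U)
assert np.max(np.abs(u - U)) < 1e-3       # the two nearly coincide
```

Starting the iteration from constant initial data makes the successive slabs monotone, exactly as in the barrier argument above.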
We finish this section by proving a corollary,
which one obtains by combining the above theorem with the interior regularity result for $ u_{\epsilon} $, \cite[Theorem 5.2]{MR4153524}.
This coincides with the result for the elliptic case, \cite[Theorem 1.1]{MR4125101}.
\begin{corollary}
Let $\bar{B}_{2r} \subset \Omega \backslash I_{\epsilon}$ and $\epsilon > 0 $ be small.
Suppose that $U_{\epsilon}$ satisfies \eqref{dpplim}.
Then for any $x, y \in B_{r}(0)$,
$$ |U_{\epsilon} (x) - U_{\epsilon} (y) | \le C ( |x-y| + \epsilon), $$
where $C>0$ is a constant which only depends on $r,n $ and $||\psi||_{L^{\infty}(\Gamma_{\epsilon})}$.
\end{corollary}
\begin{proof}
Let $r>0$ with $\bar{B}_{2r} \subset \Omega \backslash I_{\epsilon}$ and $x, y \in B_{r}(0)$.
By Theorem \ref{stable}, for any $\eta >0$, we can find some large $t>0$ such that
$$ |u_{\epsilon}(x,t) - U_{\epsilon}(x)| < \eta \quad \textrm{and} \quad |u_{\epsilon}(y,t) - U_{\epsilon}(y)| < \eta ,$$
where $u_{\epsilon}$ is a function satisfying \eqref{dppvar}.
Moreover, by \cite[Theorem 5.2]{MR4153524}, we know that
$$ |u_{\epsilon} (x,t) - u_{\epsilon} (y,t) | \le C( |x-y|+ \epsilon), $$
where $C$ is a constant depending on $r, n $ and $ ||F||_{L^{\infty}(\Gamma_{\epsilon,T})}$
(here, $F$ is the boundary data as in Theorem \ref{stable}).
Then we have
\begin{align*}
| U_{\epsilon}(x) - U_{\epsilon}(y) | & \le | U_{\epsilon}(x)-u_{\epsilon}(x,t)| +
|u_{\epsilon}(x,t) - u_{\epsilon}(y,t) | + |u_{\epsilon}(y,t) -U_{\epsilon}(y)| \\
& < C( |x-y|+ \epsilon) + 2\eta.
\end{align*}
Since we can choose $\eta$ arbitrarily small, we obtain
$$ |U_{\epsilon} (x) - U_{\epsilon} (y) | \le C( |x-y|+ \epsilon) $$
for some $C = C(r, n, ||\psi||_{L^{\infty}(\Gamma_{\epsilon})})>0$,
since we can estimate $$ ||F||_{L^{\infty}(\Gamma_{\epsilon,T})} \le ||\psi||_{L^{\infty}(\Gamma_{\epsilon})} $$ by choosing proper boundary data $F$.
\end{proof}
\section{Regularity near the boundary}
In this section, we consider regularity near the boundary for functions $u_{\epsilon}$ satisfying \eqref{dppvar}.
This result is interesting in itself,
and it is also needed to establish the connection between value functions and PDEs
(see the next section).
First, we introduce a boundary regularity condition for the domain $\Omega$.
\begin{definition}[Exterior sphere condition] \label{exspc} We say that a domain $ \Omega$ satisfies
an exterior sphere condition if
for any $y \in \partial \Omega$, there exists $ B_{\delta}(z) \subset \mathbb{R}^{n} \backslash \Omega $ with $\delta > 0 $ such that $y \in \partial B_{\delta}(z)$.
\end{definition}
Throughout this section, we always assume that $\Omega$ satisfies Definition \ref{exspc} and
$\Omega \subset B_{R}(z) $ for some $R>0$.
Meanwhile, we also assume that the boundary data $F$ satisfies
\begin{align} \label{bdlip}
|F(x,t)-F(y,s)| \le L(|x-y|+|t-s|^{1/2})
\end{align}
for any $(x,t),(y,s) \in \Gamma_{\epsilon,T}$ and some $L>0$.
Let $y \in \partial \Omega$ and take $z \in \mathbb{R}^{n} \backslash \Omega $ with $ B_{\delta}(z) \subset \mathbb{R}^{n} \backslash \Omega $ and $y \in \partial B_{\delta}(z)$.
We consider a time-independent tug-of-war game.
Assume that the rules for moving the token are the same as those of the original game,
except that, of course, we do not consider the time parameter $t$ in this case.
We also assume that the token cannot escape outside $\overline{B}_{R}(z)$
and the game ends only if the token is located in $\overline{B}_{\delta}(z)$.
Now we fix specific strategies for both players.
For each $k=0, 1, \dots$,
assume that Players I and II take the vectors $\nu_{k}^{\mathrm{I}}=-\frac{x_{k}-z}{|x_{k}-z|} $ and $\nu_{k}^{\mathrm{II}}=\frac{x_{k}-z}{|x_{k}-z|} $, respectively.
We write these strategies as $S_{\mathrm{I}}^{z}$ and $S_{\mathrm{II}}^{z}$.
On the other hand, we need to define strategies and random processes when $B_{\epsilon}(x_{k}) \backslash B_{R}(z)\neq \varnothing $.
In this case, $x_{k+1}$ is defined by $x_{k}+\epsilon \nu_{k}^{\mathrm{I}} $
if Player I wins the coin toss twice and by
$$x_{k}+ \dist (x_{k}, \partial B_{R}(z)) \nu_{k}^{\mathrm{II}}=z + R\frac{x_{k}-z}{|x_{k}-z|}$$
if Player II wins the coin toss twice.
When the random walk occurs, $x_{k+1}$ is chosen uniformly in $B_{\epsilon}^{\nu_{k}^{\mathrm{I}}}(x_{k}) \cap B_{R}(z)$.
We write $$ \tau^{\ast} = \inf \{ k : x_{k} \in \overline{B}_{\delta}(z) \}.$$
The following lemma gives an estimate for the stopping time $\tau^{\ast}$.
\begin{lemma} \label{bafn}
Under the setting as above, we have
$$ \mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[\tau^{\ast}] \le \frac{C(n, \alpha, R/\delta)( \dist(x_{0}, \partial B_{\delta}(z))+o(1))}{\epsilon^{2}} $$
for any $ x_{0} \in \Omega \subset B_{R}(z)\backslash \overline{B}_{\delta}(z) $.
Here $o(1) \to 0$ as $\epsilon \to 0$ and $\ceil{x}$ means the least integer greater than or equal to $x \in \mathbb{R}$.
\end{lemma}
\begin{proof}
Set $g_{\epsilon}(x) = \mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x}[\tau^{\ast}].$
Then we observe that $g_{\epsilon}$ satisfies the following DPP
\begin{align*}
g_{\epsilon}(x) =& \frac{1}{2}
\bigg[ \bigg\{ \alpha g_{\epsilon} (x+ \rho_{x}\epsilon \nu_{x}) + \beta \kint_{B_{\epsilon}^{\nu_{x}}(x) \cap B_{R}(z)}g_{\epsilon} (y) d \mathcal{L}^{n-1}(y) \bigg\} \\ & \qquad
+\bigg\{ \alpha g_{\epsilon} (x- \epsilon \nu_{x})+\beta \kint_{B_{\epsilon}^{\nu_{x}}(x) \cap B_{R}(z)} g_{\epsilon}(y) d \mathcal{L}^{n-1}(y) \bigg\} \bigg]+1,
\end{align*}
where $\rho_{x} = \min \{ 1, \epsilon^{-1}\dist(x, \partial B_{R}(z)) \}$ and $\nu_{x} = (x-z)/|x-z|$.
Note that $\rho_{x}=1$ for any $x \in B_{R-\epsilon}(z)\backslash \overline{B}_{\delta}(z) $.
Next we define
$ v_{\epsilon}= \epsilon^{2}g_{\epsilon} $.
It is straightforward that
\begin{align} \label{vdpp} \begin{split} v_{\epsilon}(x)=\frac{\alpha}{2} \big(v_{\epsilon}&(x+\rho_{x}\epsilon\nu_{x})+v_{\epsilon}(x-\epsilon\nu_{x})\big)
\\ & + \beta \kint_{B_{\epsilon}^{\nu_{x}}(x) \cap B_{R}(z)} v_{\epsilon}(y) d \mathcal{L}^{n-1}(y) +\epsilon^{2}.
\end{split}
\end{align}
From the definition of $v_{\epsilon}$ and \eqref{vdpp}, we observe that the function $v_{\epsilon}$ is rotationally symmetric, that is, $v_{\epsilon}$ is a function of $r = |x-z|$.
If we write $v_{\epsilon}(x)=V(r)$, the DPP \eqref{vdpp} can be represented by
\begin{align} \label{vdpp2} \begin{split}
V(r) = \frac{\alpha}{2} &\big(V(r+\rho_{r}\epsilon)+V(r-\epsilon)\big)
\\ & + \beta \kint_{B_{\epsilon}^{\nu_{x}}(x) \cap B_{R}(z)} V(|y-z|) d \mathcal{L}^{n-1}(y) +\epsilon^{2},
\end{split}
\end{align}
where $\rho_{r}=\min \{ 1, \epsilon^{-1}(R-r) \}$.
Now we can deduce that \eqref{vdpp2} has a connection to the following problem
\begin{align*}
\left\{ \begin{array}{ll}
\frac{1-\alpha}{2r} \frac{n-1}{n+1}w'+\frac{\alpha}{2}w''= -1 & \textrm{when $ r \in (\delta, R+\epsilon), $}\\
w(\delta)=0, \\
w'(R+\epsilon)=0\\
\end{array} \right.
\end{align*}
by using Taylor expansion.
Note that if we set $v(x)=w(|x|)$,
$$\frac{1-\alpha}{2r} \frac{n-1}{n+1}w'+\frac{\alpha}{2}w''= -1$$ can be transformed into
$$ \Delta_{p}^{N}v = -2(p+n),$$
where $p=(1+n\alpha)/(1-\alpha)$ (for the definition of $ \Delta_{p}^{N}$, see the next section).
On the other hand, we have
\begin{align*}w(r)=
\left\{ \begin{array}{ll}
-\frac{n+1}{2\alpha +n-1}r^{2}+c_{1}r^{\frac{2\alpha n-n+1}{(n+1)\alpha}}+c_{2} & \textrm{when $ \alpha \neq \frac{n-1}{2n}, $}\\
-\frac{n}{n-1}r^{2}+c_{1}\log r+c_{2} & \textrm{when $ \alpha = \frac{n-1}{2n} $}\\
\end{array} \right.
\end{align*}
by direct calculation.
Here
\begin{align*}c_{1}=
\left\{ \begin{array}{ll}
\frac{2(n+1)^{2}\alpha}{(2\alpha +n-1)(2\alpha n -n+1)}(R+\epsilon)^{\frac{n+2\alpha-1}{(n+1)\alpha}} & \textrm{when $ \alpha \neq \frac{n-1}{2n}, $}\\
\frac{2n}{n-1}(R+\epsilon)^{2} & \textrm{when $\alpha = \frac{n-1}{2n} $}\\
\end{array} \right.
\end{align*} is positive if $\alpha \ge \frac{n-1}{2n}$ and negative otherwise.
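The explicit formula for $w$ can be sanity-checked by evaluating the ODE residual numerically. The following sketch uses hypothetical sample values of $n$, $\alpha$ and $c_{1}$ in the generic case $\alpha \neq \frac{n-1}{2n}$ (the constant $c_{2}$ drops out of the equation):

```python
# Numerical check that the explicit w solves
#   (1-alpha)/(2r) * (n-1)/(n+1) * w' + alpha/2 * w'' = -1
# in the generic case alpha != (n-1)/(2n); sample values are hypothetical.
n, alpha = 3.0, 0.4
c1 = 1.7                                      # c2 does not enter the residual
k = (2*alpha*n - n + 1) / ((n + 1)*alpha)     # exponent of the homogeneous part
A = -(n + 1) / (2*alpha + n - 1)              # coefficient of r^2

w1 = lambda r: 2*A*r + c1*k*r**(k - 1)        # w'
w2 = lambda r: 2*A + c1*k*(k - 1)*r**(k - 2)  # w''
residual = lambda r: (1 - alpha)/(2*r)*(n - 1)/(n + 1)*w1(r) + alpha/2*w2(r) + 1

for r in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(r)) < 1e-10           # the ODE is satisfied exactly
```

The $r^{2}$ part produces the constant $-1$ and the power $r^{(2\alpha n-n+1)/((n+1)\alpha)}$ is annihilated by the operator, so the residual vanishes for every $r$.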
We extend this function to the interval $(\delta -\epsilon, R+\epsilon]$.
Observe that
\begin{align*}
\frac{\alpha}{2} & \big(w(r+\epsilon)+w(r-\epsilon)\big) + \beta\kint_{B_{\epsilon}^{\nu_{x}}(x) } w(|y-z|) d \mathcal{L}^{n-1}(y) \\
& = w(r) - \frac{n+1}{2\alpha+n-1} \bigg( \alpha + \frac{n-1}{n+1}\beta \bigg) \epsilon^{2} +o(\epsilon^{2}) \\
& \le w(r) - \bigg[ \frac{n+1}{2\alpha+n-1} \bigg( \alpha + \frac{n-1}{n+1}\beta \bigg)- \eta \bigg] \epsilon^{2}
\end{align*}
for some $\eta>0$ when $\alpha \neq \frac{n-1}{2n}$ (we can also obtain a similar estimate if $\alpha= \frac{n-1}{2n}$).
Set $$c_{0}:=\frac{n+1}{2\alpha+n-1} \bigg( \alpha + \frac{n-1}{n+1}\beta \bigg)- \eta >0.$$
Then we have
\begin{align*}
\mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}} &[v(x_{k})+ kc_{0}\epsilon^{2}|x_{0}, \dots, x_{k-1}] \\ &
= \alpha \big(v(x_{k-1}+\epsilon \nu_{x_{k-1}})+v(x_{k-1}-\epsilon\nu_{x_{k-1}})\big) \\
& \qquad \qquad + \beta\kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1}) } v(y-z ) d \mathcal{L}^{n-1}(y) +kc_{0}\epsilon^{2}
\\ & \le v(x_{k-1})+ (k-1)c_{0}\epsilon^{2},
\end{align*}
if $B_{\epsilon}(x_{k-1}) \subset B_{R}(z) \backslash \overline{B}_{\delta-\epsilon}(z) $.
The same estimate can be derived in the case $B_{\epsilon}(x_{k-1}) \backslash B_{R}(z)\neq \varnothing $, since $ w$ is increasing in $r$, which implies
$$ v(x+\rho_{x}\epsilon\nu_{x}) \le v(x+\epsilon\nu_{x}) $$
and
$$ \kint_{B_{\epsilon}^{\nu_{x}}(x) \cap B_{R}(z) } v( y-z ) d \mathcal{L}^{n-1}(y) \le \kint_{B_{\epsilon}^{\nu_{x}}(x) } v( y-z ) d \mathcal{L}^{n-1}(y) .$$
Now we see that $v(x_{k})+ kc_{0}\epsilon^{2} $ is a supermartingale.
By the optional stopping theorem, we have
\begin{align} \label{ostapp} \mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[v(x_{\tau^{\ast} \wedge k })+(\tau^{\ast} \wedge k)c_{0}\epsilon^{2}] \le v(x_{0}).\end{align}
We also check that
$$ 0 \le -\mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[v(x_{\tau^{\ast}})] \le o(1) ,$$
since $x_{\tau^{\ast}} \in \overline{B}_{\delta}(z) \backslash \overline{B}_{\delta-\epsilon}(z)$.
Meanwhile, it can also be observed that $w'$ is positive and decreasing in the interval $(\delta, R+\epsilon)$
and thus
$$ w' \le \frac{2(n+1)}{2\alpha+n-1}\delta \bigg[ \bigg( \frac{R+\epsilon}{\delta} \bigg)^{\frac{n+2\alpha-1}{(n+1)\alpha}} -1 \bigg]
\qquad \textrm{in} \ (\delta, R+\epsilon).$$
From the above estimate, we have
\begin{align} \label{linestw} 0 \le w(x_{0}) \le C(n, \alpha, R/\delta) \dist( x_{0}, \partial B_{\delta}(z) ).
\end{align}
Finally, combining \eqref{linestw} with \eqref{ostapp} and passing to a limit with $k$, we have
\begin{align*}
c_{0}\epsilon^{2}\mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[ \tau^{\ast} ] & \le w(x_{0}) -\mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[w(x_{\tau^{\ast}})] \\ &
\le C(n, \alpha, R/\delta) \dist( x_{0}, \partial B_{\delta}(z) )+ o(1)
\end{align*}
and it gives our desired estimate.
\end{proof}
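As an illustration of the order of magnitude in Lemma \ref{bafn}, one can run a crude Monte Carlo simulation in a planar ($n=2$) caricature of the game under the pulling strategies $S_{\mathrm{I}}^{z}, S_{\mathrm{II}}^{z}$. The sketch below tracks only the radial coordinate $r=|x_{k}-z|$, with reflection at $R$ and absorption at $\delta$; all numerical values ($R$, $\delta$, the starting radius, $\epsilon$, the number of runs) are hypothetical.

```python
import numpy as np

# Crude Monte Carlo caricature of the radial process in Lemma bafn (n = 2).
rng = np.random.default_rng(0)
alpha = beta = 0.5
R, delta, r0 = 1.0, 0.2, 0.5

def exit_time(eps, max_steps=10**6):
    r, k = r0, 0
    while r > delta and k < max_steps:
        c = rng.random()
        if c < alpha/2:                       # Player I wins: pull towards z
            r = r - eps
        elif c < alpha:                       # Player II wins: pull away from z
            r = min(r + eps, R)
        else:                                 # random walk on the segment B_eps^nu
            y = rng.uniform(-eps, eps)
            r = min(np.sqrt(r*r + y*y), R)
        k += 1
    return k

eps = 0.05
taus = [exit_time(eps) for _ in range(200)]
# E[tau*] * eps^2 should stay bounded, consistently with the lemma
assert np.mean(taus) * eps**2 < 50
```

The rescaled mean exit time stays of order one, matching the $\epsilon^{-2}$ growth predicted by the lemma.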
By means of Lemma \ref{bafn}, we can deduce the following boundary regularity results.
First, we give an estimate for $u_{\epsilon}$ on the lateral boundary.
\begin{lemma} \label{latbest}
Assume that $\Omega$ satisfies the exterior sphere condition and $F$ satisfies \eqref{bdlip}.
Then for the value function $u_{\epsilon}$ with boundary data $F$, we have
\begin{align}\label{towlatest} \begin{split}
& |u_{\epsilon}(x,t)-u_{\epsilon}(y,s)|
\\ & \le C(n,\alpha, R,\delta, L) (K+ K^{1/2}) +L(|x-y|+|t-s|^{1/2}+ 2\delta),
\end{split}
\end{align}
where $K =\min \{ |x-y|,t \}+\epsilon$ and $R, \delta$ are the constants in Lemma \ref{bafn}
for every $(x,t) \in \Omega_{T}$ and $(y,s) \in O_{\epsilon,T}$.
\end{lemma}
\begin{proof}
We first consider the case $t=s$.
Set $N = \ceil{ 2t/\epsilon^{2}} $.
Since $\Omega$ satisfies the exterior sphere condition,
we can find a ball $B_{\delta}(z) \subset \mathbb{R}^{n} \backslash \Omega $
such that $y \in \partial B_{\delta}(z)$.
Assume that Player I takes the strategy $S_{\mathrm{I}}^{z}$ of pulling towards $z$.
We estimate the expected value of the distance $|x_{\tau}-z|$ under this game setting.
Let $\theta$ be the angle between $\nu$ and $x-z$.
After a proper transformation, we may assume that $x=0$ and $z=(0,\cdots,0, r \sin\theta, -r\cos\theta)$.
Then the following term
$$ \alpha |x+\epsilon \nu-z| + \beta \kint_{B_{\epsilon}^{\nu}(x )
}|\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x})$$
can be written as
\begin{align*} &A(\theta)\\ & = \alpha \sqrt{(r \sin \theta)^{2}+(r \cos \theta+\epsilon)^{2} }
+ \beta \kint_{T_{\epsilon}} \sqrt{(y-r\sin \theta)^{2}+(r\cos \theta)^{2}} d\mathcal{L}^{n-1}(y)
\\ & = \alpha \sqrt{r^{2}+2r\epsilon \cos \theta + \epsilon^{2}}
+ \beta \kint_{T_{\epsilon}} \sqrt{r^{2}-2ry_{n-1} \sin \theta + |y|^{2}} d\mathcal{L}^{n-1}(y)
\\ & =: \alpha A_{1}(\theta) + \beta A_{2} (\theta) ,
\end{align*}
where $r = |x-z|$ and $T_{\epsilon} = \{ x = (x_{1}, \dots, x_{n}) \in B_{\epsilon}(0) : x_{n}=0 \}$.
Observe that $A_{1}$ is decreasing in the interval $(0, \pi)$; thus, $A_{1}$ attains its maximum on $[0, \pi]$ at $\theta=0$.
On the other hand, we have
$$ A_{2}^{'}(\theta) = - \kint_{T_{\epsilon}} \frac{ry_{n-1}\cos \theta}{\sqrt{r^{2}-2ry_{n-1} \sin \theta + |y|^{2}} } d\mathcal{L}^{n-1}(y)$$
and this function is antisymmetric about $\theta= \pi /2 $.
We also check that $A_{2}^{'} <0$ in $(0, \pi/2)$.
Thus, we verify that
$A_{2}$ attains its maximum on $[0, \pi]$ at $\theta=0$ and $\theta=\pi$, with $A_{2}(0)=A_{2}(\pi)$.
This leads to the following estimate
\begin{align} \begin{split}\label{nuest}
\sup_{\nu \in S^{n-1}}\bigg[\alpha |x&+\epsilon \nu -z| + \beta \kint_{B_{\epsilon}^{\nu}(x )
}|\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x}) \bigg]
\\ & = \alpha( |x-z|+\epsilon ) + \beta \kint_{B_{\epsilon}^{\nu_{x}}(x )
}|\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x}),
\end{split}
\end{align}
where $\nu_{x} = (x-z)/|x-z|$.
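The maximization in \eqref{nuest} can be checked numerically in the planar case $n=2$, where $B_{\epsilon}^{\nu}$ is a segment; the sample values of $r$ and $\epsilon$ below are hypothetical.

```python
import numpy as np

# Numerical sanity check of (nuest) for n = 2: A(theta) = alpha*A1 + beta*A2
# is maximized at theta = 0, i.e. for nu = (x - z)/|x - z|.
alpha, beta = 0.5, 0.5
r, eps = 1.0, 0.1
ys = np.linspace(-eps, eps, 4001)                  # T_eps is a segment for n = 2

def A(theta):
    A1 = np.sqrt(r**2 + 2*r*eps*np.cos(theta) + eps**2)
    A2 = np.mean(np.sqrt(r**2 - 2*r*ys*np.sin(theta) + ys**2))
    return alpha*A1 + beta*A2

thetas = np.linspace(0.0, np.pi, 181)
vals = np.array([A(t) for t in thetas])
assert np.argmax(vals) == 0                        # maximum at theta = 0
# at theta = 0, A1 = r + eps exactly, matching the right-hand side of (nuest)
assert abs(A(0.0) - (alpha*(r + eps) + beta*np.mean(np.sqrt(r**2 + ys**2)))) < 1e-12
```

This reflects the monotonicity of $A_{1}$ and the symmetry of $A_{2}$ discussed above.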
Therefore, we have
\begin{align*}
&\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} [|x_{k} -z| |(x_{0},t_{0}), \dots, (x_{k-1},t_{k-1})] \\
& \le \frac{1-\delta(x_{k-1},t_{k-1})}{2} \bigg[ \alpha(|x_{k-1}-z|-\epsilon) + \beta \kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1} ) } |\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x}) \bigg] \\
& \quad + \frac{1-\delta(x_{k-1},t_{k-1})}{2} \bigg[ \alpha(|x_{k-1}-z|+\epsilon) + \beta \kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1} ) }\hspace{-0.6em} |\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x}) \bigg]
\\ & \quad + \delta(x_{k-1},t_{k-1})|x_{k-1}-z|
\\ & = |x_{k-1}-z| \\ & \qquad + \beta(1-\delta(x_{k-1},t_{k-1})) \bigg( \kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1} ) } \hspace{-0.6em}|\tilde{x}-z| d\mathcal{L}^{n-1}(\tilde{x}) - |x_{k-1}-z| \bigg).
\end{align*}
We also observe that $$0<\beta(1-\delta(x_{k-1},t_{k-1}))<1,$$
$$ |x_{k-1}-z| \le |\tilde{x}-z| \le \sqrt{|x_{k-1}-z|^{2} + \epsilon^{2}} \qquad \textrm{for}\ \tilde{x} \in B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1}),$$
and
$$ 0< \sqrt{a^{2}+\epsilon^{2}} - a < \frac{\epsilon^{2}}{2a} \qquad \textrm{for}\ a>0. $$
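The last elementary inequality follows from the concavity of the square root, namely $\sqrt{1+s} < 1 + s/2$ for $s > 0$:
$$ \sqrt{a^{2}+\epsilon^{2}} = a \sqrt{1+\frac{\epsilon^{2}}{a^{2}}} < a \bigg( 1+\frac{\epsilon^{2}}{2a^{2}} \bigg) = a + \frac{\epsilon^{2}}{2a}. $$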
Therefore,
\begin{align*}
\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} [|x_{k} -z| |(x_{0},t_{0}), \dots, (x_{k-1},t_{k-1})] \le |x_{k-1}-z|+ C\epsilon^{2}
\end{align*}
for some $C=C(n,\delta)>0$.
This yields that
$$ M_{k}= |x_{k}-z| -Ck \epsilon^{2} $$
is a supermartingale.
Applying the optional stopping theorem and Jensen's inequality to $M_{k}$, we derive that
\begin{align} \label{oriste} \begin{split}
\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)}& [|x_{\tau} -z|+|t_{\tau}-t|^{1/2} ] \\
& = \mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} \bigg[|x_{\tau} -z|+ \epsilon \sqrt{\frac{\tau}{2}} \bigg]
\\ & \le |x_{0} -z| + C \epsilon^{2} \mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)}[\tau] +C\epsilon \big(\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)}[\tau] \big)^{1/2}.
\end{split}
\end{align}
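In the first equality of \eqref{oriste} we used that each round of the game decreases the time variable by $\epsilon^{2}/2$, so that
$$ |t_{\tau}-t| = \frac{\tau \epsilon^{2}}{2}. $$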
Next we need to obtain estimates for $\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)}[\tau] $. To do this, we use the result in Lemma \ref{bafn}.
We can check that the exit time $\tau$ of the original game is bounded by $\tau^{\ast}$
because the expected value of $|x_{k}-z|$ for given $|x_{k-1}-z|$ is maximized when Player II chooses the strategy $S_{\mathrm{II}}^{z}$ from \eqref{nuest}.
Thus, we have
\begin{align*}
\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)}[\tau]& \le \min \{
\mathbb{E}_{S_{\mathrm{I}}^{z},S_{\mathrm{II}}^{z}}^{x_{0}}[\tau^{\ast}] , N \}
\\ & \le
\min \bigg\{\frac{C(n,\alpha, R,\delta) (\dist(\partial B_{\delta}(z),x_{0})+\epsilon)}{\epsilon^{2}} , N \bigg\}
\end{align*}
for any strategy $S_{\mathrm{II}}$ for Player II.
We also see that
$$ \dist(x_{0}, \partial B_{\delta}(z)) \le |x_{0}-y|. $$
This and \eqref{oriste} imply
\begin{align*}
&\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} [|x_{\tau} -z|+|t_{\tau}-t|^{1/2} ] \\ & \le
|x_{0}-y| +C \min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}+ C \min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}^{1/2},
\end{align*}
where $C$ is a constant depending on $n,\alpha, R$ and $\delta$.
Therefore, we get
\begin{align*}
|\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} &[F(x_{\tau},t_{\tau})] - F(z,t)| \\ &
\le L( |x_{0}-y| +C(n,\alpha, R,\delta) \min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}
\\ & \qquad \quad + C(n,\alpha, R,\delta)\min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}^{1/2} )
\end{align*}
and this yields
\begin{align*}
u_{\epsilon}(x_{0},t)&=
\sup_{S_{\mathrm{I}}}\inf_{S_{\mathrm{II}}}\mathbb{E}_{S_{\mathrm{I}} , S_{\mathrm{II}}}^{(x_{0},t)} [F(x_{\tau},t_{\tau})] \\ &
\ge \inf_{S_{\mathrm{II}}}\mathbb{E}_{S_{\mathrm{I}}^{z}, S_{\mathrm{II}}}^{(x_{0},t)} [F(x_{\tau},t_{\tau})] \\ &
\ge F(z,t)- L\{ C(n,\alpha, R,\delta) (K+ K^{1/2})+|x_{0}-y|\}
\\ & \ge F(y,t)-C(n,\alpha, R,\delta,L) (K+ K^{1/2}) - L(|x_{0}-y|+2 \delta)
\end{align*}
for $K = \min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}$.
Note that we can also derive the upper bound for $u_{\epsilon}(x_{0},t)$
by taking the strategy where Player II pulls toward $z$.
Meanwhile, in the case of $t\neq s$, we have
\begin{align*}
&|u_{\epsilon}(x,t)-u_{\epsilon}(y,s)| \\ & \le
|u_{\epsilon}(x,t)-u_{\epsilon}(y,t)|+|u_{\epsilon}(y,t)-u_{\epsilon}(y,s)| \\ &
\le C(n,\alpha,R,\delta,L)(K+ K^{1/2})+ L (|x-y|+2\delta )+ L|t-s|^{1/2} ,
\end{align*}
where $K = \min \{ |x_{0}-y|+\epsilon, \epsilon^{2}N \}$ and $N= \ceil {2t/\epsilon^{2}} $.
This gives our desired estimate.
\end{proof}
We can also derive the following result on the initial boundary.
\begin{lemma} \label{inbest}
Assume that $\Omega$ satisfies the exterior sphere condition and $F$ satisfies \eqref{bdlip}.
Then for the value function $u_{\epsilon}$ with boundary data $F$, we have
\begin{align} \label{inbinq}
|u_{\epsilon}(x,t)-u_{\epsilon}(y,s)| \le C (|x-y|+t ^{1/2}+ \epsilon)
\end{align}
for every $(x,t) \in \Omega_{T}$ and $(y,s) \in \Omega \times (-\epsilon^{2}/2,0]$.
The constant $C$ depends only on $ n,L$.
\end{lemma}
\begin{proof}
Set $(x,t)=(x_{0},t_{0})$ and $N =\ceil{ 2t/\epsilon^{2}} $.
As in the above lemma, we also estimate the expected value of the distance between $y$ and the exit point $x_{\tau}$.
Consider the case that Player I chooses a strategy of pulling to $y$.
When $|x_{k-1}-y| \ge \epsilon$, we have
\begin{align*}
&\mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t_{0})}[|x_{k}-y|^{2}| (x_{0},t_{0}), \dots, (x_{k-1},t_{k-1}) ] \\
& \le (1-\delta (x_{k-1},t_{k-1}))\times \\ & \quad
\bigg[ \frac{\alpha}{2} \{ (|x_{k-1}-y|+\epsilon)^{2} +(|x_{k-1}-y|-\epsilon)^{2} \} \\ &
\qquad \qquad
+ \beta \kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1} ) } |\tilde{x}-y|^{2} d\mathcal{L}^{n-1}(\tilde{x}) \bigg]
+ \delta (x_{k-1},t_{k-1})|x_{k-1}-y|^{2}
\\ & \le \alpha (|x_{k-1}-y|^{2}+\epsilon^{2}) + \beta (|x_{k-1}-y|^{2}+C\epsilon^{2})
\\ & \le |x_{k-1}-y|^{2}+C\epsilon^{2}
\end{align*}
for some constant $C>0$ which is independent of $\epsilon$.
We recall the notation $\nu_{x_{k-1}} = (x_{k-1}-y)/|x_{k-1}-y| $ here.
Otherwise, we also see that
\begin{align*}
&\mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t_{0})}[|x_{k}-y|^{2}| (x_{0},t_{0}), \dots, (x_{k-1},t_{k-1}) ] \\
& \le (1-\delta (x_{k-1},t_{k-1}))
\bigg[ \frac{\alpha}{2} (|x_{k-1}-y|+\epsilon)^{2}
+ \beta \kint_{B_{\epsilon}^{\nu_{x_{k-1}}}(x_{k-1} ) }\hspace{-0.7em} |\tilde{x}-y|^{2} d\mathcal{L}^{n-1}(\tilde{x}) \bigg]
\\ & \quad + \delta (x_{k-1},t_{k-1})|x_{k-1}-y|^{2},
\end{align*}
and then we get the same estimate as above since
$$(|x_{k-1}-y|+\epsilon)^{2} \le 2(|x_{k-1}-y|^{2} +\epsilon^{2}). $$
Therefore, we see that $$M_{k} = |x_{k}-y|^{2}-Ck\epsilon^{2}$$
is a supermartingale.
Now we obtain
$$\mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t)}[|x_{\tau}-y|^{2}] \le
|x_{0}-y|^{2} + C \epsilon^{2} \mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t)}[\tau] $$
by using the optional stopping theorem.
Since $\tau \le \ceil{ 2t/\epsilon^{2}}$, the right-hand side is estimated by
$ |x_{0}-y|^{2} +C (t+ \epsilon^{2}) $.
Applying Jensen's inequality, we get
\begin{align*}
\mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t)}[|x_{\tau}&-y| ] \le
\big( \mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t)}[|x_{\tau}-y|^{2}] \big)^{\frac{1}{2}}
\\ & \le \big( |x_{0}-y|^{2} + C (t + \epsilon^{2}) \big)^{\frac{1}{2}}
\\ & \le |x_{0}-y| + C(t^{1/2}+ \epsilon ).
\end{align*}
From the above estimate, we deduce that
\begin{align*}
u_{\epsilon}(x_{0},t)&=
\sup_{S_{\mathrm{I}}}\inf_{S_{\mathrm{II}}}\mathbb{E}_{S_{\mathrm{I}} , S_{\mathrm{II}}}^{(x_{0},t)} [F(x_{\tau},t_{\tau})] \\ &
\ge F(y, t) - L \ \mathbb{E}_{S_{\mathrm{I}}^{y},S_{\mathrm{II}}}^{(x_{0},t)}[|x_{\tau}-y|+|t-t_{\tau}|^{1/2}]
\\ & \ge F(y,t) - C(|x_{0}-y|+t^{1/2}+\epsilon).
\end{align*}
The upper bound can be derived in a similar way, and then we get the estimate \eqref{inbinq}.
\end{proof}
\section{Application to PDEs}
The objective of this section is to study the behavior of $u_{\epsilon}$ as $\epsilon$ tends to zero.
This issue has been studied in several preceding papers (see \cite{MR2875296,MR3161602,MR3494400,MR3623556}).
Those results show that there is a close relation between value functions of tug-of-war games and certain types of PDEs.
Now we will establish a convergence theorem showing that $u_{\epsilon}$ converges to the unique viscosity solution of the following Dirichlet problem for the normalized parabolic $p$-Laplace equation
\begin{align} \label{paraplap}
\left\{ \begin{array}{ll}
(n+p)u_{t}= \Delta_{p}^{N} u & \textrm{in $\Omega_{T} $,}\\
u=F & \textrm{on $\partial_{p}\Omega_{T}$}\\
\end{array} \right.
\end{align}
as $ \epsilon \to 0$.
Here, $p$ satisfies $\alpha = (p-1)/(p+n) $ and $ \beta = (n+1)/(p+n)$.
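Note that $\alpha + \beta = 1$ and that $p$ is recovered from the game parameters via
$$ p = 1 + (n+1)\frac{\alpha}{\beta} . $$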
Now we introduce the notion of viscosity solutions for \eqref{paraplap}.
Note that we need to consider the case when the gradient vanishes.
Here we use semicontinuous extensions of operators in order to define viscosity solutions.
For these extensions, we refer the reader to \cite{MR1770903,MR2238463} for more details.
\begin{definition}[Viscosity solution] \label{vissol}
A function $u \in C(\Omega_{T}) $ is a viscosity solution to \eqref{paraplap} if
the following conditions hold:
\begin{itemize}
\item[(a)] for all $ \varphi \in C^{2}(\Omega_{T} ) $ touching $u$ from above at $(x_{0},t_{0}) \in \Omega_{T} $,
\begin{align*}
\left\{ \begin{array}{ll}
\Delta_{p}^{N}\varphi(x_{0},t_{0}) \ge (n+p) \varphi_{t}(x_{0},t_{0}) \qquad \qquad \textrm{if $ D\varphi (x_{0},t_{0}) \neq 0 $,}\\
\lambda_{\max}((p-2)D^{2}\varphi(x_{0},t_{0}) ) & \\ \qquad \qquad \qquad + \Delta\varphi(x_{0},t_{0}) \ge (n+p) \varphi_{t}(x_{0},t_{0}) \qquad \textrm{if $D\varphi (x_{0},t_{0}) = 0 $.}\\
\end{array} \right.
\end{align*}
\item[(b)] for all $ \varphi \in C^{2}(\Omega_{T} ) $ touching $u$ from below at $(x_{0},t_{0}) \in \Omega_{T}$,
\begin{align*}
\left\{ \begin{array}{ll}
\Delta_{p}^{N}\varphi(x_{0},t_{0}) \le (n+p) \varphi_{t}(x_{0},t_{0}) \qquad \qquad \textrm{if $ D\varphi (x_{0},t_{0}) \neq 0 $,}\\
\lambda_{\min}((p-2)D^{2}\varphi(x_{0},t_{0}) ) & \\ \qquad \qquad \qquad + \Delta\varphi(x_{0},t_{0}) \le (n+p) \varphi_{t}(x_{0},t_{0}) \qquad \textrm{if $D\varphi (x_{0},t_{0}) = 0 $.}\\
\end{array} \right.
\end{align*}
\end{itemize}
Here, $\lambda_{\max}(X)$ and $\lambda_{\min}(X)$ denote the largest and the smallest eigenvalue of a symmetric matrix $X$, respectively.
\end{definition}
The following Arzel\`a-Ascoli criterion will be used to obtain the main result in this section.
It is essentially the same proposition as \cite[Lemma 5.1]{MR3494400}.
We can find the proof of this criterion for elliptic version in \cite[Lemma 4.2]{MR3011990}.
\begin{lemma} \label{aras}
Let $ \{ u_{\epsilon} : \overline{\Omega}_{T} \to \mathbb{R}, \epsilon >0 \} $ be a set of functions such that
\begin{itemize}
\item[(a)] there exists a constant $C >0$ so that $|u_{\epsilon}(x,t)| < C $ for every $ \epsilon > 0$ and every $(x,t) \in \overline{\Omega}_{T} $.
\item[(b)] given $ \eta > 0$, there are constants $r_{0}$ and $\epsilon_{0}$ so that for every $\epsilon < \epsilon_{0}$ and $(x,t),(y,s) \in \overline{\Omega}_{T}$ with $d ((x,t),(y,s)) < r_{0} $, it holds
$$ |u_{\epsilon}(x,t)-u_{\epsilon}(y,s)| < \eta .$$
\end{itemize}
Then, there exists a uniformly continuous function $u: \overline{\Omega}_{T} \to \mathbb{R} $ and a subsequence $\{ u_{\epsilon_{i}} \} $ such that $ u_{\epsilon_{i}} $ uniformly converge to $u$ in $\overline{\Omega}_{T}$, as $i \to \infty$.
\end{lemma}
Now we can describe the relation between functions satisfying \eqref{dppvar} and solutions to the normalized parabolic $p$-Laplace equation.
\begin{theorem}
Assume that $\Omega$ satisfies the exterior sphere condition and $ F \in C(\Gamma_{\epsilon,T}) $
satisfies \eqref{bdlip}.
Let $u_{\epsilon}$ denote the solution to \eqref{dppvar} with boundary data $ F $ for each $\epsilon>0$.
Then, there exist a function $ u: \overline{\Omega}_{T} \to \mathbb{R}$ and a subsequence
$ \{ \epsilon_{i} \} $ such that
$$ u_{\epsilon_{i}} \to u \qquad \textrm{uniformly in} \quad \overline{\Omega}_{T}$$
and the function $u$ is a unique viscosity solution to \eqref{paraplap}.
\end{theorem}
\begin{remark}
The uniqueness of solutions to \eqref{paraplap} can be found in \cite[Lemma 6.2]{MR3494400}.
\end{remark}
\begin{proof}
First we check that there is a subsequence $ \{ u_{\epsilon_{i}} \} $ converging uniformly on $\overline{\Omega}_{T}$ to some function $u$.
By using the definition of $u_{\epsilon} $, we have
$$||u_{\epsilon} ||_{L^{\infty}(\Omega_{T})} \le ||F||_{L^{\infty}(\Gamma_{\epsilon,T})} < \infty $$
for any $\epsilon > 0$. Hence, the functions $u_{\epsilon} $ are uniformly bounded.
By means of Lemma \ref{latbest}, Lemma \ref{inbest} and the interior regularity result \cite[Theorem 5.2]{MR4153524}, we know that the family $ \{ u_{\epsilon} \} $
is equicontinuous.
Therefore, we can find a subsequence $\{ u_{\epsilon_{i}} \}_{i=1}^{\infty} $ converging uniformly to a function $u \in C(\overline{\Omega}_{T})$
by Lemma \ref{aras}.
Now we need to show that $u$ is a viscosity solution to \eqref{paraplap}.
On the parabolic boundary, we see that
$$ u(x,t) = \lim_{i \to \infty}u_{\epsilon_{i}}(x,t) = F(x,t)$$
for any $(x,t) \in \partial_{p}\Omega_{T}$.
Next we prove that $u$ satisfies
$$(n+p)u_{t}= \Delta_{p}^{N} u \qquad \textrm{in} \ \Omega_{T}$$
in the viscosity sense.
Without loss of generality, it suffices to show that $u$ satisfies condition (a) in Definition \ref{vissol}.
Fix $(x,t) \in \Omega_{T}$.
Then there is a small number $R>0$ such that
$$ Q:= (x,t) + B_{R}(0) \times (-R^{2},0) \subset \subset \Omega_{T} .$$
We also assume that $\epsilon>0$ satisfies $Q \subset \Omega_{T} \backslash I_{\epsilon,T}$.
Suppose that a function $\varphi \in C^{2}(Q)$ touches $u$ from below at $(x,t)$.
Then we observe that
$$ \inf_{Q } (u- \varphi) = (u-\varphi)(x,t) \le (u-\varphi)(z,s) $$
for any $(z,s) \in Q $.
Since $u_{\epsilon}$ converge uniformly to $u$, for sufficiently small $\epsilon >0$,
there is a point $ (x_{\epsilon},t_{\epsilon}) \in Q$ such that
$$ (u_{\epsilon}- \varphi)(x_{\epsilon},t_{\epsilon}) = \inf_{Q } (u_{\epsilon}- \varphi) \le (u_{\epsilon}-\varphi)(z,s) $$
for any $(z,s) \in Q $.
We also check that $(x_{\epsilon}, t_{\epsilon}) \to (x,t)$ as $ \epsilon \to 0$.
Recall \eqref{deft}. Since $(x_{\epsilon}, t_{\epsilon}) \in \Omega_{T} \backslash I_{\epsilon,T}$,
we have
\begin{align*}
Tu(x_{\epsilon},t_{\epsilon}) =\midrg_{\nu \in S^{n-1}} \mathscr{A}_{\epsilon}u \bigg( x_{\epsilon}, t_{\epsilon}- \frac{\epsilon^{2}}{2} ; \nu \bigg).
\end{align*}
We also set $ \psi = \varphi +( u_{\epsilon}-\varphi)(x_{\epsilon}, t_{\epsilon})$
and observe that $u_{\epsilon} \ge \psi$ in $Q$.
Now it can be checked that
\begin{align*}
u_{\epsilon}(x_{\epsilon}, t_{\epsilon})=Tu_{\epsilon}(x_{\epsilon}, t_{\epsilon})
\ge T\psi(x_{\epsilon}, t_{\epsilon})
\end{align*}
and
\begin{align*}
T\psi(x_{\epsilon}, t_{\epsilon}) =T\varphi(x_{\epsilon}, t_{\epsilon})+( u_{\epsilon}-\varphi)(x_{\epsilon}, t_{\epsilon}) .
\end{align*}
Therefore,
\begin{align*}
u_{\epsilon}(x_{\epsilon}, t_{\epsilon}) \ge T\varphi(x_{\epsilon}, t_{\epsilon})+( u_{\epsilon}-\varphi)(x_{\epsilon}, t_{\epsilon})
\end{align*}
and this implies
\begin{align} \label{2est1} 0 \ge T\varphi (x_{\epsilon}, t_{\epsilon}) - \varphi (x_{\epsilon}, t_{\epsilon}).
\end{align}
On the other hand, by the Taylor expansion, we observe that
\begin{align*} \frac{1}{2} & \bigg[ \varphi \bigg(x+\epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg)+ \varphi\bigg(x-\epsilon \nu, t-\frac{\epsilon^{2}}{2} \bigg) \bigg] \\ &
= \varphi (x,t) -\frac{\epsilon^{2}}{2} \varphi_{t}(x,t)+ \frac{\epsilon^{2}}{2} \langle D^{2} \varphi (x,t) \nu, \nu \rangle + o( \epsilon^{2})
\end{align*}
and
\begin{align*}
\kint_{B_{\epsilon}^{\nu} } &\varphi \bigg(x+ h,t-\frac{\epsilon^{2}}{2} \bigg) d \mathcal{L}^{n-1}(h) \\ & = \varphi (x,t) -\frac{\epsilon^{2}}{2} \varphi_{t}(x,t)+ \frac{\epsilon^{2}}{2(n+1)}
\Delta_{\nu^{\perp}} \varphi (x,t) + o( \epsilon^{2})
\end{align*}
where $$ \Delta_{\nu^{\perp}} \varphi (x,t) = \sum_{i=1}^{n-1} \langle D^{2} \varphi (x,t) \nu_{i}, \nu_{i} \rangle$$ with
$ \nu_{1}, \dots, \nu_{n-1} $ being an orthonormal basis for the space $\nu^{\perp} $
of a vector $ \nu \in S^{n-1} $.
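The factor $1/(2(n+1))$ arises from averaging the second order term over the $(n-1)$-dimensional ball: by symmetry the off-diagonal contributions vanish and
$$ \kint_{B_{\epsilon}^{\nu} } \langle D^{2}\varphi(x,t) h , h \rangle \, d\mathcal{L}^{n-1}(h) = \frac{\epsilon^{2}}{n+1} \Delta_{\nu^{\perp}} \varphi (x,t) , $$
since the average of $h_{i}^{2}$ over a ball of radius $\epsilon$ in $\mathbb{R}^{n-1}$ equals $\epsilon^{2}/(n+1)$.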
We already know from Proposition \ref{contilem} that $ \mathscr{A}_{\epsilon} \varphi $ is continuous with respect to $\nu$.
Therefore, there exists a vector $\nu_{\min} = \nu_{\min}(\epsilon) $ minimizing
$ \mathscr{A}_{\epsilon} \varphi (x_{\epsilon}, t_{\epsilon}; \cdot ) .$
Then we can calculate
\begin{align*}
& T\varphi(x_{\epsilon}, t_{\epsilon})
\\ & \ge \frac{\alpha}{2}
\bigg\{ \varphi \bigg( x_{\epsilon}+\epsilon \nu_{\min} , t_{\epsilon}-\frac{\epsilon^{2}}{2} \bigg) + \varphi \bigg( x_{\epsilon}-\epsilon \nu_{\min} , t_{\epsilon}-\frac{\epsilon^{2}}{2} \bigg) \bigg\}
\\ & \qquad +\beta \kint_{B_{\epsilon}^{\nu_{\min}} } \varphi \bigg(x_{\epsilon}+ h,t_{\epsilon}-\frac{\epsilon^{2}}{2} \bigg) d \mathcal{L}^{n-1}(h)
\\ & \ge \varphi(x_{\epsilon}, t_{\epsilon}) - \frac{\epsilon^{2}}{2} \varphi_{t}(x_{\epsilon}, t_{\epsilon})
\\ & \qquad + \frac{\beta}{2(n+1)} \epsilon^{2} \big\{ \Delta_{\nu_{\min}^{\perp}} \varphi (x_{\epsilon}, t_{\epsilon}) + (p-1) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle \big\} + o(\epsilon^{2})
.
\end{align*}
Then by \eqref{2est1}, we observe that
\begin{align} \begin{split} \label{2est2}
\frac{\epsilon^{2}}{2} \varphi_{t}(x_{\epsilon}, t_{\epsilon}) & \ge
\frac{\beta \epsilon^{2} }{2(n+1)} \big\{ \Delta_{\nu_{\min}^{\perp}} \varphi (x_{\epsilon}, t_{\epsilon}) + (p-1) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle \big\} + o(\epsilon^{2}) .
\end{split}
\end{align}
Suppose that $D\varphi (x,t) \neq 0 $. Since $(x_{\epsilon}, t_{\epsilon}) \to (x,t)$ as $\epsilon \to 0$, it can be seen that
$$\nu_{\min} \to -\frac{D\varphi (x,t)}{|D\varphi (x,t)|} =: -\mu $$
as $\epsilon \to 0$.
We also check that
$$\Delta_{(-\mu)^{\perp}} \varphi (x,t) + (p-1) \langle D^{2} \varphi (x,t) (-\mu), (-\mu) \rangle =\Delta_{p}^{N} \varphi (x,t) .$$
Now we divide both sides of \eqref{2est2} by $\epsilon^{2}$ and let $\epsilon \to 0$.
Since $Q \subset \subset \Omega_{T}$, it can be seen that $ \delta(x_{\epsilon}, t_{\epsilon}) \epsilon^{-2} \to 0$ as $ \epsilon \to 0$.
Hence, we deduce
$$ \varphi_{t}(x,t) \ge \frac{1}{n+p} \Delta_{p}^{N} \varphi (x,t) .$$
Next consider the case $D\varphi (x,t) = 0 $.
Observe that
\begin{align*}
& \Delta_{\nu_{\min}^{\perp}} \varphi (x_{\epsilon}, t_{\epsilon}) + (p-1) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle \\ &
= \Delta \varphi (x_{\epsilon}, t_{\epsilon}) + (p-2) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle .
\end{align*}
For $p \ge 2$, we see
\begin{align*} (p-2) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle \ge (p-2) \lambda_{\min}(D^{2} \varphi (x_{\epsilon}, t_{\epsilon})).
\end{align*}
We already know that $(x_{\epsilon}, t_{\epsilon}) \to (x,t)$ as $\epsilon \to 0$ and the map $z \mapsto \lambda_{\min}(D^{2}\varphi(z)) $ is continuous.
Therefore, it turns out
\begin{align} \label{2est3} \varphi_{t}(x,t) \ge \frac{1}{n+p} \big\{ \Delta\varphi(x,t) + (p-2)\lambda_{\min}(D^{2}\varphi(x,t) ) \big\}
\end{align}
by a calculation similar to that in the previous case.
For $1<p<2$, by using an argument similar to the previous case and
\begin{align*} (p-2) \langle D^{2} \varphi (x_{\epsilon}, t_{\epsilon}) \nu_{\min}, \nu_{\min} \rangle & \ge (p-2) \lambda_{\max}(D^{2} \varphi (x_{\epsilon}, t_{\epsilon})) \\
& = \lambda_{\min}( (p-2)D^{2} \varphi (x_{\epsilon}, t_{\epsilon})),
\end{align*}
we also obtain the inequality \eqref{2est3}.
The reverse inequality can be proved by considering a function $\varphi $ touching $u$ from above and a vector $\nu_{\max} $ maximizing $ \mathscr{A}_{\epsilon} \varphi (x_{\epsilon}, t_{\epsilon}; \cdot) $,
and repeating a similar calculation as above.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\noindent
The surface tension of a simple drop of liquid has captured the imagination of
scientists dating back to the pioneering work of J. Willard Gibbs \cite{Gibbs}.
This interest continues with the main focus of attention directed
towards the description of the deviation of the surface tension from
its planar value when the radius of the liquid droplet becomes smaller.
Such a deviation is especially important in the theoretical description
of nucleation phenomena \cite{nucleation}. The homogeneous nucleation of a
liquid from a supersaturated vapour follows via the formation of small
liquid droplets and the nucleation time and energy depend sensitively
on the precise value of the droplet's surface tension.
A key quantity in quantifying the extent by which the surface tension of
a liquid drop deviates from its planar value is the {\em Tolman length}
introduced by Tolman in 1949 \cite{Tolman}. It can be defined in two equivalent
ways. In the first way, one considers the radial dependence of the surface
tension of a (spherical) liquid droplet defined as the excess grand free
energy per unit area:
\begin{equation}
\Omega = -p_{\ell} \, V_{\ell} - p_v \, V_v + \sigma_s(R) \, A \,.
\end{equation}
When the radius $R$ of the droplet is large, the surface tension may be expanded
in the inverse radius:
\begin{equation}
\label{eq:sigma_s}
\sigma_s(R) = \sigma - \frac{2 \delta \sigma}{R} + \ldots \,,
\end{equation}
where $\sigma$ is the surface tension of the {\em planar interface} and
where the leading order correction defines the Tolman length $\delta$.
In the second route to define the Tolman length, one considers the pressure difference
$\Delta p \!=\! p_{\ell} - p_v$ between the pressure of the liquid inside and
the pressure of the vapour outside the droplet. For large radii of curvature,
$\Delta p$ is expanded in $1/R$:
\begin{equation}
\label{eq:Delta_p}
\Delta p = \frac{2 \sigma}{R} - \frac{2 \delta \sigma}{R^2} + \ldots \,.
\end{equation}
The first term on the right hand side is the familiar Laplace equation \cite{RW}
with the leading order correction giving Tolman's original definition of the
Tolman length \cite{Tolman}. It is important to note that this correction only
takes on the form in Eq.(\ref{eq:Delta_p}) when the {\em equimolar} radius
\cite{Gibbs} is taken as the radius of the liquid drop, i.e. $R \!=\! R_e$.
Furthermore, with this choice of the (Gibbs) dividing surface, terms of
order ${\cal O}(1/R^3)$ are absent and the dots represent terms of order
${\cal O}(1/R^4)$. When the location of the droplet radius is chosen {\em away}
from the equimolar radius, the Tolman length correction to the Laplace equation
has a form different than that shown in Eq.(\ref{eq:Delta_p}). For instance,
the radius corresponding to the so-called {\em surface of tension}
($R \!=\! R_s$) is defined such that Eq.(\ref{eq:Delta_p}) appears
as $\Delta p \!=\! 2 \sigma(R_s) / R_s$.
The determination of the value of the Tolman length for a simple drop of
liquid has proved to be not without controversy (recent reviews are given in
refs. \cite{Blokhuis06, Malijevsky12}). This is mainly due to two reasons: first,
one of the first microscopic expressions for the Tolman length was formulated
in the context of a {\em mechanical approach} which lead to an expression
for the Tolman length in terms of the first moment of the excess tangential
pressure profile of a planar interface \cite{Buff}. However, it was pointed out
by Henderson and Schofield in 1982 that such an expression depends on the form
of the pressure tensor used and is therefore not well-defined \cite{Hemingway81,
Henderson82, Schofield82, Henderson_book}. Furthermore, even the evaluation of the Tolman
length using the usual Irving-Kirkwood \cite{IK} form for the pressure tensor
leads to {\em incorrect} results \cite{Blokhuis92b} and the use of the mechanical
expression is now (mostly) abandoned.
A second origin of controversy is simply due to the fact that for a regular liquid-vapour
interface the Tolman length is {\em small} (a fraction of the molecular diameter),
since it measures the subtle asymmetry between the liquid and vapour phase.
Straightforward squared-gradient theory with the familiar $tanh$-profile for
the density profile, leads to a {\em zero} value of the Tolman length \cite{FW, Blokhuis93}
and it remains a challenge to distinguish its value from zero in computer simulations
\cite{Nijmeijer, Frenkel, Bardouni00, Lei, Horsch12}. Nowadays, those computer simulations
that have succeeded in obtaining a value different from zero indicate that its value
is {\em negative} with its magnitude around one tenth of a molecular diameter
\cite{Giessen09, Binder09, Sampoyo10, Binder10, Binder11, Binder12} and error
bars usually somewhat less than half that number.
The sign and magnitude of the Tolman length for a regular liquid-vapour interface are
corroborated by a large number of different versions of density functional theory (DFT),
which has proved to be an invaluable tool in the theoretical description of inhomogeneous
systems \cite{Sullivan, Evans79, Evans84, Evans90}. Quite surprisingly, the details of
the density functional theory at hand do not seem to matter that much \cite{Bykov06,
Malijevsky12} and one ubiquitously finds that the Tolman length is {\em negative} with
a magnitude comparable to that obtained in simulations.
This includes results for the Tolman length from van der Waals squared-gradient theory
\cite{Baidakov99, Baidakov04a}, density functional theory with a non-local, integral
expression for the interaction between molecules (DFT-LDA) \cite{Malijevsky12,
Giessen98, Koga98, Napari01, Barrett06}, density functional theory with
weighted densities (DFT-WDA) \cite{Bykov06} and density functional theory using Rosenfeld's
\cite{Rosenfeld} fundamental measure theory for the hard-sphere free energy (DFT-FMT)
\cite{Li08, Sampoyo10, Binder10, Binder12}.
All in all, there now seems to be the same level of agreement between simulations and
DFT for the Tolman length as it exists for the surface tension, with the exception of
one particular type of simulation result. In refs. \cite{Giessen09, Binder09, Sampoyo10,
Binder10, Binder11, Binder12} the Tolman length is determined in computer simulations
of liquid droplets for various (large) radii of curvature, but in a different
set of simulations the Tolman length is extracted from computer simulations of a
{\em planar interface} \cite{Haye, Giessen02}, using a virial expression
for the Tolman length \cite{Blokhuis92a}. The simulations of the planar interface
lead to a Tolman length that has the same order of magnitude as the simulations of
the liquid droplets but now with the {\em opposite} sign. It has been suggested
that, since the interfacial area is much larger in the simulations of the planar
interface, the presence of {\em capillary waves} might play an important role
\cite{Giessen09}. However, it is difficult to imagine that this would change
the sign of the Tolman length so that the resolution to this problem remains
uncertain.
Another feature that ubiquitously results from the computer simulations and DFT
calculations of liquid droplets is that the surface tension is {\em not monotonic}
as a function of the (inverse) radius (for a recent review, see ref.~\cite{Malijevsky12}).
A {\em maximum} in the surface tension of a liquid droplet occurs, which suggests
that the surface tension is qualitatively better approximated by a {\em parabola}
rather than by a straight line with its slope given by the Tolman length.
This means that one needs to include higher order terms, going beyond the level
of the Tolman length, in the expansion of the surface tension in Eq.(\ref{eq:sigma_s}).
Such an expansion was first provided in the ground-breaking work by Helfrich
in 1973 \cite{Helfrich}. The form for the free energy suggested by Helfrich
is the most general form for the surface free energy of an isotropic surface
expanded to second order in the surface's curvature \cite{Helfrich}:
\begin{equation}
\label{eq:Helfrich}
\Omega_{\rm H} = \int \!\! dA \; [ \, \sigma - \delta \sigma \, J
+ \frac{k}{2} \, J^2 + \bar{k} \, K \, + \ldots ] \,,
\end{equation}
where $J \!=\! 1/R_1 + 1/R_2$ is the total curvature, $K \!=\! 1/(R_1 R_2)$
is the Gaussian curvature and $R_1$, $R_2$ are the principal radii of curvature
at a certain point on the surface. The expansion defines four curvature coefficients:
$\sigma$, the surface tension of the planar interface, $\delta$, the Tolman
length \cite{Tolman}, $k$, the bending rigidity, and $\bar{k}$, the rigidity
constant associated with Gaussian curvature. The original expression proposed
by Helfrich \cite{Helfrich} features the radius of spontaneous curvature $R_0$ as the
linear curvature term ($\delta \sigma \rightarrow 2 k / R_0$ \cite{Blokhuis06, Blokhuis92b}),
but in honour of Tolman we stick to the notation in Eq.(\ref{eq:Helfrich}).
For surfaces for which the curvatures $J$ and $K$ are constant, the Helfrich
free energy per unit area reduces to:
\begin{equation}
\label{eq:sigma(J,K)}
\Omega_{\rm H}/A \equiv \sigma(J,K) = \sigma - \delta \sigma \, J
+ \frac{k}{2} \, J^2 + \bar{k} \, K + \ldots \,,
\end{equation}
which for a spherically or cylindrically shaped surface takes the form:
\begin{eqnarray}
\label{eq:sigma_s(R)}
\sigma_s(R) &=& \sigma - \frac{2 \delta \sigma}{R}
+ \frac{(2 k + \bar{k})}{R^2} + \ldots \hspace*{27pt} {\rm (sphere)} \\
\label{eq:sigma_c(R)}
\sigma_c(R) &=& \sigma - \frac{\delta \sigma}{R}
+ \frac{k}{2 R^2} + \ldots \hspace*{56pt} {\rm (cylinder)}
\end{eqnarray}
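These two expressions follow from Eq.(\ref{eq:sigma(J,K)}) by inserting the curvatures of the two geometries,
\begin{equation*}
{\rm sphere:} \;\; J = \frac{2}{R} \,, \;\; K = \frac{1}{R^2} \,; \qquad
{\rm cylinder:} \;\; J = \frac{1}{R} \,, \;\; K = 0 \,.
\end{equation*}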
These expressions indicate that the second order coefficients, which express
the non-monotonicity of the surface tension as observed in simulations and DFT
calculations of liquid drops, are given by the combination of the rigidity
constants $2 k + \bar{k}$ and the bending rigidity $k$. Our goal in this
article is to provide general formulas for the bending rigidities $k$ and
$\bar{k}$ using density functional theory (DFT-LDA). This work extends
previous work by us \cite{Giessen98}, by Koga and Zeng \cite{Koga99},
by Barrett \cite{Barrett09} and by Baidakov {\em et al.} \cite{Baidakov04b}.
Our formulas are subsequently applied to explicitly evaluate the bending
rigidities and it is determined how well they can be used to describe the
surface tension of a liquid drop (or vapour bubble).
The expansion of the surface tension of a liquid drop to second order in
$1/R$ has not been without controversy \cite{Henderson92, Rowlinson94, Fisher}.
Two issues have played a role here. The first issue concerns the fact that when
the interaction between molecules is sufficiently long-ranged, the expansion
in $1/R$ may not be analytic beyond some term \cite{Blokhuis92a, Hooper, Dietrich}.
In particular, for {\em dispersion forces} the second order correction has
the form $\log(R) / R^2$ rather than $1/R^2$ and one could argue that the
rigidity constants are ``infinite''. Nowadays, this point is well-appreciated
and no longer a source of controversy. In this article we come back to this
issue and provide explicit expressions for the second order correction to
replace the expansion in Eq.(\ref{eq:sigma_s(R)}) or (\ref{eq:sigma_c(R)})
for dispersion forces.
A second issue argues that {\em even for short-ranged interactions}, which
are mostly considered in simulations and DFT calculations, the second order
term might pick up a logarithmic correction of the form $\log(R) / R^2$
\cite{Henderson92, Rowlinson94, Fisher}.
The reasoning behind this focuses on the fact that for a {\em spherical}
droplet, the second order contribution to the free energy, i.e. the expression
in Eq.(\ref{eq:sigma_s(R)}) multiplied by the area $A \!=\! 4 \pi \, R^2$
is {\em independent} of $R$, which might be an indication that it should be
replaced by a logarithmic term. The most compelling argument {\em against}
this reasoning lies in the fact that the same argument applied to a {\em cylindrical}
interface would lead to the conclusion that already the linear term in $1/R$
(Tolman length) would pick up logarithmic corrections. Although the issue
is not completely settled, the presence of a logarithmic correction for
short-ranged interaction has not been observed in simulations or
demonstrated in calculations either in mean-field theory (DFT) or
in Statistical Mechanics \cite{Blokhuis92a}. Also in this article, we
inspect (numerically) the possible presence of a logarithmic correction
to the second order term in the expansion of the free energy of a liquid
drop and find no evidence for its presence.
Our article is organized as follows: in the next section we discuss the
density functional theory that is considered (DFT-LDA) and use it to
determine the surface tension $\sigma_s(R)$ of a liquid drop and vapour bubble.
In Section \ref{sec-expansion}, the free energy is expanded to second
order in $1/R$ for a spherical and cylindrical interface which allows the
formulation of new, closed expressions for the rigidity constants $k$ and
$\bar{k}$ \cite{Giessen98, Barrett09}. An important feature addressed is
the consequence of the choice made for the location of the dividing surface
(the value of $R$) on the value of the bending rigidities. The formulas
for $k$ and $\bar{k}$ are explicitly evaluated using a cut-off and shifted
Lennard-Jones potential for the attractive part of the interaction potential.
Since the evaluation of these expressions requires numerical determination
of the density profile, we supply in Section \ref{sec-explicit} an accurate
approximation based on squared-gradient theory to evaluate $\delta$, $k$
and $\bar{k}$ from the parameters of the phase diagram only.
In Section \ref{sec-dispersion} we consider the full Lennard-Jones
interaction potential and determine its consequences for the expansion
of the free energy in $1/R$. We end with a discussion of results.
\section{Density functional theory}
\label{sec-DFT}
\noindent
The expression for the (grand) free energy in density functional theory
is based on the division into a hard-sphere reference system plus
attractive forces described by an interaction potential $U_{\rm att}(r)$.
It is the following functional of the density $\rho(\vec{r})$ \cite{Sullivan,
Evans79, Evans84, Evans90}:
\begin{equation}
\label{eq:Omega_DFT}
\Omega[\rho] = \int \!\! d\vec{r} \; [ \; f_{\rm hs}(\rho) - \mu \rho(\vec{r}) \; ]
+ \frac{1}{2} \int \!\! d\vec{r}_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, \rho(\vec{r}_1) \rho(\vec{r}_2) \,,
\end{equation}
where $\mu$ is the chemical potential. For the free energy of the
hard-sphere reference system $f_{\rm hs}(\rho)$, we take the well-known
Carnahan-Starling form \cite{CS}:
\begin{equation}
\label{eq:g_hs}
f_{\rm hs}(\rho) = k_{\rm B} T \, \rho \, \ln(\rho)
+ k_{\rm B} T \, \rho \, \frac{(4 \eta - 3 \eta^2)}{(1 - \eta)^2} \,,
\end{equation}
where $\eta \!\equiv\! (\pi/6) \, \rho \, d^3$ with $d$ the
molecular diameter. The Euler-Lagrange equation that minimizes
the free energy in Eq.(\ref{eq:Omega_DFT}) is given by:
\begin{equation}
\label{eq:EL_DFT}
\mu = f^{\prime}_{\rm hs}(\rho) + \int \!\! d\vec{r}_{12} \; U_{\rm att}(r) \, \rho(\vec{r}_2) \,.
\end{equation}
For a {\em uniform} system, the Euler-Lagrange equation becomes:
\begin{equation}
\label{eq:mu}
\mu = f^{\prime}_{\rm hs}(\rho) - 2 a \, \rho \,,
\end{equation}
with the van der Waals parameter $a$ explicitly expressed in terms
of the interaction potential as
\begin{equation}
\label{eq:a}
a \equiv - \frac{1}{2} \int \!\! d\vec{r}_{12} \; U_{\rm att}(r) \,.
\end{equation}
Using the expression for the chemical potential in Eq.(\ref{eq:mu}),
the bulk pressure is obtained from $\Omega \!=\! - p V$, leading to
the following equation of state:
\begin{equation}
\label{eq:EOS}
p = \frac{k_{\rm B} T \, \rho \, (1 + \eta + \eta^2 - \eta^3)}{(1-\eta)^3} - a \, \rho^2 \,.
\end{equation}
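As a quick numerical illustration (a minimal sketch in reduced units $k_{\rm B} T \!=\! 1$, $d \!=\! 1$; the value $a \!=\! 12$ is an illustrative subcritical choice of ours, not a number taken from this article), the equation of state in Eq.(\ref{eq:EOS}) reproduces the ideal-gas limit at low density and develops a van der Waals loop below the critical temperature:

```python
import math

def pressure(rho, a):
    """Carnahan-Starling + mean-field equation of state, Eq. (eq:EOS).
    Reduced units: k_B T = 1, d = 1, so rho stands for rho*d^3."""
    eta = math.pi / 6.0 * rho                      # packing fraction
    cs = (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3
    return rho * cs - a * rho**2

# Ideal-gas limit: p -> rho as rho -> 0.
assert abs(pressure(1e-6, a=12.0) / 1e-6 - 1.0) < 1e-3

# For a subcritical (large enough) a, the isotherm is non-monotonic:
# a van der Waals loop signalling liquid-vapour coexistence.
assert pressure(0.6, a=12.0) < pressure(0.1, a=12.0)
```

The non-monotonic isotherm is what makes the coexistence construction of Eq.(\ref{eq:bulk_0}) below well-posed.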
Next, we consider the implementation of DFT in planar and spherical geometry.
\vskip 10pt
\noindent
{\bf Planar interface}
\vskip 5pt
\noindent
When the chemical potential is chosen such that a liquid and vapour phase coexist,
$\mu \!=\! \mu_{\rm coex}$, a planar interface forms between the two phases. The
density profile is then a function of the coordinate normal to the interface,
$\rho(\vec{r}) \!=\! \rho_0(z)$. In planar geometry, the Euler-Lagrange equation in
Eq.(\ref{eq:EL_DFT}) becomes:
\begin{equation}
\label{eq:EL_planar}
\mu_{\rm coex} = f^{\prime}_{\rm hs}(\rho_0) + \int \!\! d\vec{r}_{12} \; U_{\rm att}(r) \, \rho_0(z_2) \,.
\end{equation}
The surface tension of the planar interface is the surface free energy
per unit area ($\sigma \!=\! (\Omega + p \, V)/A$ \cite{RW}):
\begin{equation}
\label{eq:sigma_DFT}
\sigma = - \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \,,
\end{equation}
where $z_2 \!=\! z_1 + sr$ and $s \!=\! \cos \theta_{12}$.
\vskip 10pt
\noindent
{\bf A Spherical Drop of Liquid}
\vskip 5pt
\noindent
When the chemical potential $\mu$ is varied to a value {\em off-coexistence},
spherically shaped liquid droplets in {\em metastable} equilibrium with a
bulk vapour phase may form. Such droplets are termed {\em critical} droplets.
The radius of the liquid droplet is taken to be equal to the {\em equimolar} radius,
$R \!=\! R_e$ \cite{Gibbs}, which depends on the value of the chemical potential
chosen, and is defined as:
\begin{equation}
\label{eq:R}
4 \pi \int\limits_{0}^{\infty} \!\! dr \; r^2 \left[ \, \rho_s(r) - \rho_v \right]
= \frac{4 \pi}{3} \, R_e^3 \, (\rho_{\ell} - \rho_v) \,.
\end{equation}
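Given a density profile $\rho_s(r)$, the equimolar radius follows from Eq.(\ref{eq:R}) by a single radial quadrature. The Python sketch below is illustrative only: the tanh-shaped profile, its width $\xi$, and the bulk densities are assumptions of ours standing in for the numerically determined $\rho_s(r)$.

```python
import math

def equimolar_radius(rho_s, rho_v, rho_l, r_max=60.0, dr=0.01):
    """Equimolar radius R_e from Eq. (eq:R):
    int_0^inf dr r^2 [rho_s(r) - rho_v] = (R_e^3 / 3) (rho_l - rho_v)."""
    n = round(r_max / dr)
    total = 0.0
    for i in range(n):                       # trapezoidal rule
        r1, r2 = i * dr, (i + 1) * dr
        total += 0.5 * dr * (r1**2 * (rho_s(r1) - rho_v)
                             + r2**2 * (rho_s(r2) - rho_v))
    return (3.0 * total / (rho_l - rho_v)) ** (1.0 / 3.0)

# Illustrative droplet profile: a tanh interface of width xi centred at R0,
# standing in for the numerically determined rho_s(r).
rho_v, rho_l, xi, R0 = 0.02, 0.70, 1.0, 20.0
rho_s = lambda r: rho_v + 0.5*(rho_l - rho_v)*(1.0 - math.tanh((r - R0)/(2.0*xi)))

R_e = equimolar_radius(rho_s, rho_v, rho_l)
assert abs(R_e - R0) < 0.25    # R_e lies close to (slightly above) R0
```

For a sharp interface ($\xi \ll R_0$) the equimolar radius coincides with the profile midpoint up to corrections of order $\xi^2/R_0$.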
The (grand) free energy for the formation of the critical droplet is given by:
\begin{equation}
\label{eq:Delta_Omega_0}
\frac{\Delta \Omega}{A} \equiv \frac{\Omega + p_v \, V}{A} = - \frac{\Delta p \, R}{3} + \sigma_s(R) \,,
\end{equation}
with $p_v$ the vapour pressure outside the droplet and $p_{\ell} \!=\! p_v + \Delta p$
the liquid pressure inside (see the remark below, however). The surface tension
of the critical droplet is the quantity that we wish to study and this equation provides
a way to determine it from $\Delta \Omega$.
In spherical geometry, the free energy density functional in Eq.(\ref{eq:Omega_DFT}) is given by:
\begin{eqnarray}
\label{eq:Delta_Omega}
\frac{\Delta \Omega[\rho_s]}{A} &=& \int\limits_{0}^{\infty} \!\! dr_1 \left( \frac{r_1}{R} \right)^{\!2}
[ \; f_{\rm hs}(\rho_s) - \mu \rho_s(r_1) \; ] \\
&& + \frac{1}{2} \int\limits_{0}^{\infty} \!\! dr_1 \left( \frac{r_1}{R} \right)^{\!2} \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, \rho_s(r_1) \rho_s(r_2) \,, \nonumber
\end{eqnarray}
with the Euler-Lagrange equation that minimizes the above free energy equal to:
\begin{equation}
\label{eq:EL_sphere}
\mu = f^{\prime}_{\rm hs}(\rho_s) + \int \!\! d\vec{r}_{12} \; U_{\rm att}(r) \, \rho_s(r_2) \,.
\end{equation}
The procedure to determine $\sigma_s(R)$ as a function of $R$ is as follows:
\vskip 5pt
\noindent
{\bf (1)} First, the bulk densities $\rho_{0,\ell}$ and $\rho_{0,v}$ and the
chemical potential at two-phase coexistence, $\mu_{\rm coex}$, are determined
by solving the following set of equations:
\begin{equation}
\label{eq:bulk_0}
f^{\prime}(\rho_{0,v}) = \mu_{\rm coex} \,, \hspace*{10pt}
f^{\prime}(\rho_{0,\ell}) = \mu_{\rm coex} \,, \hspace*{10pt}
f(\rho_{0,v}) - \mu_{\rm coex} \, \rho_{0,v} = f(\rho_{0,\ell}) - \mu_{\rm coex} \, \rho_{0,\ell} \,,
\end{equation}
where we have defined $f(\rho) \!\equiv\! f_{\rm hs}(\rho) - a \rho^2$. The bulk density
difference is denoted as $\Delta \rho \!\equiv\! \rho_{0,\ell} - \rho_{0,v}$ and the
pressure at coexistence is simply $p_{\rm coex} \!=\! -f(\rho_{0,\ell/v}) +
\mu_{\rm coex} \, \rho_{0,\ell/v}$.
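A minimal Python sketch of this step (reduced units $k_{\rm B} T \!=\! 1$, $d \!=\! 1$; the subcritical value $a \!=\! 12$ and the bracketing grid are illustrative assumptions of ours): locate the van der Waals loop of $\mu(\rho)$, then bisect on $\mu$ until the two bulk grand-potential densities, and hence the two pressures, coincide, which solves the set of equations in Eq.(\ref{eq:bulk_0}).

```python
import math

A = 12.0   # illustrative subcritical van der Waals parameter (k_B T = 1, d = 1)

def mu_bulk(rho, a=A):
    """f'(rho) with f(rho) = f_hs(rho) - a rho^2 (Carnahan-Starling)."""
    eta = math.pi / 6.0 * rho
    return (math.log(rho)
            + (8.0*eta - 9.0*eta**2 + 3.0*eta**3) / (1.0 - eta)**3
            - 2.0 * a * rho)

def grand(rho, mu, a=A):
    """Grand-potential density omega = f(rho) - mu rho; the pressure is p = -omega."""
    eta = math.pi / 6.0 * rho
    f = rho*math.log(rho) + rho*(4.0*eta - 3.0*eta**2)/(1.0 - eta)**2 - a*rho**2
    return f - mu * rho

def solve_rho(mu, lo, hi, a=A):
    """Bisection for mu_bulk(rho) = mu on a monotonic branch [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (mu_bulk(lo, a) - mu) * (mu_bulk(mid, a) - mu) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def coexistence(a=A):
    grid = [0.001 + 0.002 * i for i in range(550)]
    mus = [mu_bulk(r, a) for r in grid]
    i = 1                                # walk up to the low-density spinodal
    while i < len(mus) - 2 and mus[i+1] > mus[i]:
        i += 1
    imax = i
    i = len(mus) - 2                     # walk down to the high-density spinodal
    while i > 1 and mus[i-1] < mus[i]:
        i -= 1
    imin = i
    lo, hi = mus[imin], mus[imax]        # mu_coex lies between the spinodal values
    for _ in range(80):
        mu = 0.5 * (lo + hi)
        rv = solve_rho(mu, 1e-8, grid[imax], a)      # vapour branch
        rl = solve_rho(mu, grid[imin], grid[-1], a)  # liquid branch
        if grand(rv, mu, a) > grand(rl, mu, a):
            hi = mu                      # liquid more stable: lower mu
        else:
            lo = mu                      # vapour more stable: raise mu
    return rv, rl, mu

rho_v, rho_l, mu_coex = coexistence()
p_v, p_l = -grand(rho_v, mu_coex), -grand(rho_l, mu_coex)
assert rho_l > rho_v + 0.5 and abs(p_v - p_l) < 1e-6
```

Equality of the grand potentials at fixed $\mu$ is equivalent to the equal-pressure condition in Eq.(\ref{eq:bulk_0}), since $p \!=\! -\omega$ in each bulk phase.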
\vskip 5pt
\noindent
{\bf (2)} Next, the chemical potential $\mu$ is varied to a value {\em off-coexistence}.
For $\mu \!>\! \mu_{\rm coex}$ liquid droplets are formed ($R \!>\! 0$)
and when $\mu \!<\! \mu_{\rm coex}$ we obtain bubbles of vapour ($R \!<\! 0$).
For given temperature and chemical potential $\mu$ the liquid and vapour
densities $\rho_{\ell}$ and $\rho_v$ are then determined from solving the
following two equations
\begin{equation}
f^{\prime}(\rho_{v}) = \mu \,, \hspace*{25pt} f^{\prime}(\rho_{\ell}) = \mu \,,
\end{equation}
with the corresponding bulk pressures calculated from
\begin{equation}
\label{eq:pressures}
p_{v} = - f(\rho_{v}) + \mu \, \rho_{v} \,, \hspace*{25pt}
p_{\ell} = - f(\rho_{\ell}) + \mu \, \rho_{\ell} \,.
\end{equation}
It should be remarked that far outside the droplet ($r \!\rightarrow\! \infty$),
the density (or pressure) is equal to that of the bulk,
$\rho_s(\infty) \!=\! \rho_{v}$, but that only for large droplets
is the density {\em inside} the droplet ($\rho_s(r \!=\! 0)$) equal to
its bulk value ($\rho_{\ell}$).
\vskip 5pt
\noindent
{\bf (3)} Finally, the Euler-Lagrange equation for $\rho_s(r)$ in
Eq.(\ref{eq:EL_sphere}) is solved numerically with the boundary condition
$\rho_s(\infty) \!=\! \rho_{v}$. The resulting density profile
$\rho_s(r)$ is inserted into Eq.(\ref{eq:R}) to determine the equimolar
radius $R \!=\! R_e$ and into Eq.(\ref{eq:Delta_Omega}) to determine
$\Delta \Omega$ and thus $\sigma_s(R)$.
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure1.ps}
\caption{Phase diagram as a function of reduced temperature and density.
The solid lines are the liquid-vapour densities at two values of the reduced
LJ cut-off radius (solid circles indicate the location of the critical points).
Square symbols are simulation results from ref.~\cite{Baidakov07}.}
\label{Fig:pd}
\end{figure}
\vskip 10pt
\noindent
This procedure is carried out using a cut-off and shifted Lennard-Jones
potential for the attractive part of the interaction potential:
\begin{eqnarray}
\label{eq:LJ}
U_{\rm att}(r) = \left\{
\begin{array}{cc}
U_{\rm LJ}(r_{\rm min}) - U_{\rm LJ}(r_c) & \hspace*{100pt} 0 < r < r_{\rm min} \\
U_{\rm LJ}(r) - U_{\rm LJ}(r_c) & \hspace*{100pt} r_{\rm min} < r < r_c \\
0 & \hspace*{100pt} r > r_c
\end{array}
\right.
\end{eqnarray}
where $U_{\rm LJ}(r) \!=\! 4 \varepsilon \, [ \, (d/r)^{12} - (d/r)^6 \, ]$ and
$r_{\rm min} \!=\! 2^{\frac{1}{6}} \, d$. Figure \ref{Fig:pd} shows the resulting phase
diagram as a function of reduced density $\rho^* \!\equiv\! \rho \, d^3$ and reduced
temperature $T^* \!\equiv\! k_{\rm B} T / \varepsilon$. The solid lines are the liquid-vapour
densities for two values of the LJ cut-off radius; the square symbols are recent
computer simulation results taken from ref.~\cite{Baidakov07}.
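For the cut-off and shifted potential in Eq.(\ref{eq:LJ}), the van der Waals parameter of Eq.(\ref{eq:a}) reduces to a one-dimensional quadrature, $a \!=\! -2\pi \int_0^{r_c} \! dr \, r^2 \, U_{\rm att}(r)$. A minimal Python sketch (the reference value $a \!\approx\! 6.829 \, \varepsilon d^3$ for $r_c \!=\! 2.5 \, d$ is our own analytic evaluation of the same integral, not a number quoted in the text):

```python
import math

R_MIN = 2.0 ** (1.0 / 6.0)     # location of the LJ minimum (d = 1)

def u_lj(r, eps=1.0):
    return 4.0 * eps * (r**-12 - r**-6)

def u_att(r, rc=2.5):
    """Cut-off and shifted LJ attraction, Eq. (eq:LJ); units eps = d = 1."""
    if r > rc:
        return 0.0
    return u_lj(max(r, R_MIN)) - u_lj(rc)   # constant plateau for r < r_min

def vdw_a(rc=2.5, dr=1e-4):
    """a = -(1/2) int d^3r U_att(r) = -2 pi int_0^rc dr r^2 U_att(r)."""
    n = round(rc / dr)
    s = sum(0.5 * dr * ((i*dr)**2 * u_att(i*dr, rc)
                        + ((i+1)*dr)**2 * u_att((i+1)*dr, rc))
            for i in range(n))               # trapezoidal rule
    return -2.0 * math.pi * s

# Analytic evaluation of the same integral gives a ~ 6.829 eps d^3 at rc = 2.5 d.
assert abs(vdw_a() - 6.829) < 0.01
```

Increasing the cut-off $r_c$ deepens the attractive well and therefore increases $a$, shifting the critical point upward, consistent with the trend in Figure \ref{Fig:pd}.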
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure2.ps}
\caption{Pressure difference multiplied by $R / 2 \sigma$ as a function
of the reciprocal {\em equimolar} radius $d/R$. Circular symbols are simulation
results from ref.~\cite{Giessen09}. DFT calculations are shown as the solid
line ($\Delta p \!=\! p(0) - p_v$) and square symbols ($\Delta p \!=\! p_{\ell} - p_v$).
For the DFT calculations we have set the reduced temperature $T^* \!=\!$ 0.911297
and reduced LJ cut-off $r_c \!=$ 2.5. The value for the reduced temperature is
chosen such that the liquid-vapour density difference at coexistence matches
the value in the computer simulations \cite{Giessen09}.}
\label{Fig:Delta_p}
\end{figure}
In Figure \ref{Fig:Delta_p}, we show the pressure difference multiplied
by $R / 2 \sigma$ as a function of the reciprocal radius. The circular
symbols are previous simulation results \cite{Giessen09} that were
used to determine the Tolman length from (minus) the slope at
$1/R \!=\! 0$ ($\delta \!\approx\! -0.10 \, d$ \cite{Giessen09}).
For comparison, we show the result of DFT calculations as the solid
line, where we have taken the pressure at the center of the droplet
as the liquid pressure. The excellent agreement in Figure \ref{Fig:Delta_p}
is somewhat misleading since the corresponding values of the surface
tension differ by as much as 50 \%. As square symbols, the results of
DFT calculations using $p_{\ell}$ from Eq.(\ref{eq:pressures})
as the liquid pressure are plotted to show that the slight difference
between $p(0)$ and $p_{\ell}$ for small droplets has no consequences
for the determination of $\delta$.
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure3.ps}
\caption{Droplet surface tension (in units of $k_{\rm B} T / d^2$) as a function
of the reciprocal {\em equimolar} radius $d/R$; vapour bubbles are formed for
$R \!<\! 0$ and liquid droplets for $R \!>\! 0$. The solid line is the
parabolic approximation to $\sigma_s(R)$ determined from the expansion in Section
\ref{sec-expansion}. As a comparison, the parabolic approximation to the cylindrical
surface tension $\sigma_c(R)$ is shown as the dashed line. We have set the reduced
temperature $T^* \!=\!$ 1.0 and reduced LJ cut-off $r_c \!=$ 2.5.}
\label{Fig:sigma_R}
\end{figure}
In Figure \ref{Fig:sigma_R}, a typical example of the surface tension of a spherical
liquid drop (and vapour bubble) is shown as a function of $1/R$, with $R$ the
equimolar radius of the droplet. The symbols are the values for $\sigma_s(R)$
calculated using DFT. The solid line is the parabolic approximation in
Eq.(\ref{eq:sigma_s(R)}) with values for the coefficients $\sigma$, $\delta$,
and $2 k + \bar{k}$ calculated from formulas presented in the next Section.
The behaviour of the surface tension is characterized by a positive first
derivative at $1/R \!=\!0$, which indicates that the Tolman length is {\em negative},
and a negative {\em second} derivative which indicates that the combination
$2 k + \bar{k}$ is also {\em negative}. It is concluded that the parabolic
approximation gives a {\em quantitatively} accurate description for the surface
tension for a large range of reciprocal radii. The determination of the full
$\sigma_s(R)$ is usually quite elaborate and it therefore seems sufficient
to only determine the coefficients in the parabolic approximation to $\sigma_s(R)$
as a function of $1/R$. This is done in the next Section.
\section{Curvature expansion}
\label{sec-expansion}
\noindent
In this section, we consider spherically and cylindrically shaped liquid droplets
and expand the free energy and density profile systematically to second order
in $1/R$. An important feature of our analysis will be to not restrict ourselves
to a particular choice of the dividing surface, but to instead leave the radius
$R$ unspecified. This will allow us to derive new, more general expressions and
will allow for a new investigation of the consequences of varying the choice for
the location of the dividing surface.
To second order in $1/R$, the expansion of the density profile of the spherical
droplet reads:
\begin{equation}
\label{eq:expansion_rho}
\rho_s(r) = \rho_0(z) + \frac{1}{R} \, \rho_{s,1}(z) + \frac{1}{R^2} \, \rho_{s,2}(z) + \ldots \,,
\end{equation}
where $z \!=\! r - R$. The leading order correction to the density
profile of the spherical interface is twice that of the cylindrical
interface, so it is convenient to define
$\rho_1(z) \!\equiv\! \rho_{s,1}(z) \!=\! 2 \, \rho_{c,1}(z)$.
We shall consider the expansion of the free energy of the spherical
and cylindrical droplet separately.
\vskip 10pt
\noindent
{\bf Spherical interface}
\vskip 5pt
\noindent
The coefficients in the curvature expansion of the density are determined
from the curvature expansion of the Euler-Lagrange equation in
Eq.(\ref{eq:EL_sphere}). The result is that the (planar) density
profile $\rho_0(z)$ is determined from Eq.(\ref{eq:EL_planar}) and $\rho_1(z)$
follows from solving:
\begin{equation}
\label{eq:EL_1}
\mu_1 = f^{\prime\prime}_{\rm hs}(\rho_0) \, \rho_1(z_1) + \int \!\! d\vec{r}_{12} \; U_{\rm att}(r) \,
[ \, \rho_1(z_2) + \frac{r^2}{2} (1-s^2) \, \rho^{\prime}_0(z_2) \, ] \,,
\end{equation}
where $\mu_1 \!=\! 2 \sigma / \Delta \rho$ \cite{Blokhuis93, Blokhuis06}.
For the evaluation of the curvature coefficients it turns out to be sufficient
to determine the density profiles $\rho_0(z)$ and $\rho_1(z)$ only.
The expansion for $\rho_s(r)$ is inserted into the expression for the
free energy in Eq.(\ref{eq:Delta_Omega}). Performing a systematic expansion
to second order in $1/R$, using the Euler-Lagrange equations in
Eqs.(\ref{eq:EL_planar}) and (\ref{eq:EL_1}), one ultimately
obtains expressions for the curvature coefficients by comparing the free energy
to the curvature expansion in Eq.(\ref{eq:sigma_s(R)}). For the surface tension of
the planar interface the result in Eq.(\ref{eq:sigma_DFT}) is recovered:
\begin{equation}
\label{eq:sigma}
\sigma = - \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \,.
\end{equation}
For the Tolman length one obtains the following expression \cite{Giessen98}
\begin{equation}
\label{eq:delta}
\delta \sigma = \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, z_1 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2)
- \frac{\mu_1}{2} \int\limits_{-\infty}^{\infty} \!\!\! dz \; z \, \rho_0^{\prime}(z) \,.
\end{equation}
For the combination of the rigidity constants, $2k + \bar{k}$, we have:
\begin{eqnarray}
\label{eq:k_sph}
2k + \bar{k} &=& \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, \rho_0^{\prime}(z_1) \rho_1(z_2) \\
&-& \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, z_1^2 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \nonumber \\
&+& \frac{1}{48} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^4 (1-s^4) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \nonumber \\
&+& \int\limits_{-\infty}^{\infty} \!\!\! dz \left[ \frac{\mu_1}{2} z \, \rho_1^{\prime}(z)
+ \mu_1 \, z^2 \, \rho_0^{\prime}(z) + \mu_{s,2} \, z \, \rho_0^{\prime}(z) \right] \,, \nonumber
\end{eqnarray}
where $\mu_{s,2} \!=\! - \sigma \, \Delta \rho_1 / (\Delta \rho)^2
- 2 \delta \sigma / \Delta \rho$ \cite{Blokhuis93, Blokhuis06} with
$\Delta \rho_1 \!\equiv\! \rho_{1,\ell} - \rho_{1,v}$.
\vskip 10pt
\noindent
{\bf Cylindrical interface}
\vskip 5pt
\noindent
The analysis for the cylindrical interface is analogous
to that of the spherical interface. Following the same procedure as
for the spherical interface, the expressions for $\sigma$ and
$\delta \sigma$ in Eqs.(\ref{eq:sigma}) and (\ref{eq:delta})
are recovered and one obtains as an expression for the bending
rigidity $k$:
\begin{eqnarray}
\label{eq:rigidity}
k &=& \frac{1}{8} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, \rho_0^{\prime}(z_1) \rho_1(z_2) \\
&+& \frac{1}{64} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^4 (1-s^2)^2 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \nonumber \\
&+ & \int\limits_{-\infty}^{\infty} \!\!\! dz \left[ \frac{\mu_1}{4} z \, \rho_1^{\prime}(z)
+ \frac{\mu_1}{2} \, z^2 \, \rho_0^{\prime}(z) + 2 \mu_{c,2} \, z \, \rho_0^{\prime}(z) \right] \,, \nonumber
\end{eqnarray}
where $\mu_{c,2} \!=\! - \sigma \, \Delta \rho_1 / (2 \, \Delta \rho)^2$
\cite{Blokhuis93, Blokhuis06}. An expression for the rigidity constant
associated with Gaussian curvature is then obtained by combining
Eqs.(\ref{eq:k_sph}) and (\ref{eq:rigidity}):
\begin{eqnarray}
\label{eq:k_bar}
\bar{k} &=& - \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, z_1^2 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \\
&& - \frac{1}{96} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^4 (1-s^2) (1-5s^2) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \nonumber \\
&& + (\mu_{s,2} - 4 \mu_{c,2}) \int\limits_{-\infty}^{\infty} \!\!\! dz \; z \, \rho_0^{\prime}(z) \,. \nonumber
\end{eqnarray}
The expressions for $k$ and $\bar{k}$ differ somewhat, in two ways, from the previous
expressions derived by us in ref.~\cite{Giessen98}. First, they are rewritten
in a more compact form, with a printing error in ref.~\cite{Giessen98} corrected
(as noted by Barrett \cite{Barrett09}). Second, these expressions are derived without
reference to a particular choice for the location of the dividing surface, i.e.
for the location of the $z \!=\! 0$ plane. This feature allows us to investigate
the influence of the {\em choice} for the location of the dividing surface.
As already known, the surface tension and Tolman length are {\em independent}
of this choice but $k$ and $\bar{k}$ {\em do} depend on it.
\vskip 10pt
\noindent
{\bf Choice for the location of the dividing surface}
\vskip 5pt
\noindent
We first consider the density profile of the {\em planar} interface, obtained by
solving the differential equation in Eq.(\ref{eq:EL_planar}), to investigate the
consequences of the choice for the location of the dividing surface for $\delta$
and $\bar{k}$. One may verify that when $\rho_0(z)$ is a particular solution of
the differential equation in Eq.(\ref{eq:EL_planar}), then the {\em shifted}
density profile
\begin{equation}
\rho_0(z) \longrightarrow \rho_0(z-z_0) \,,
\end{equation}
is also a solution for an arbitrary value of the integration constant $z_0$. However,
since the expressions for $\delta$ and $\bar{k}$ feature $z$ (or $z_1$) in
the integrand, such a shift has consequences for the different contributions
to $\delta$ and $\bar{k}$. To investigate this in more detail, we first
place the dividing surface of the planar system at the equimolar surface,
$z \!=\! z_e$, which is defined such that the excess density is zero \cite{Gibbs}:
\begin{equation}
\label{eq:equimolar}
\int\limits_{-\infty}^{\infty} \!\!\! dz \; [ \rho_0(z) - \rho_{0,\ell} \, \Theta(z_e-z) - \rho_{0,v} \, \Theta(z-z_e) ]
= - \int\limits_{-\infty}^{\infty} \!\!\! dz \; (z-z_e) \, \rho_0^{\prime}(z) = 0 \,,
\end{equation}
where $\Theta(z)$ is the Heaviside function. When all distances to the surface
are measured with respect to the equimolar plane, we need to replace $z$
by $z-z_e$ in the expressions for $\delta$ and $\bar{k}$. For the
Tolman length in Eq.(\ref{eq:delta}) we then find that:
\begin{equation}
\label{eq:delta_equimolar}
\delta \sigma = \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, (z_1 - z_e) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \,,
\end{equation}
where we have used Eq.(\ref{eq:equimolar}). Now, to investigate the consequences of
shifting the dividing surface away from the equimolar surface by a distance $\Delta$,
we replace $z \!\rightarrow\! z - (z_e + \Delta)$ in the expression for
the Tolman length in Eq.(\ref{eq:delta}). One may easily verify that, since
$\mu_1 \!=\! 2 \sigma / \Delta \rho$, the Tolman length then again reduces
to the expression in Eq.(\ref{eq:delta_equimolar}), which proves that the Tolman
length is {\em independent} of the choice for the location of the dividing surface.
Replacing $z \!\rightarrow\! z - z_e$ in the expression for the rigidity
constant associated with Gaussian curvature in Eq.(\ref{eq:k_bar}), we find
that $\bar{k}$ simplifies to
\begin{eqnarray}
\label{eq:k_bar_equimolar}
\bar{k}_{\rm equimolar} &=& - \frac{1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, (z_1-z_e)^2 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \\
&& - \frac{1}{96} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^4 (1-s^2) (1-5s^2) \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \,. \nonumber
\end{eqnarray}
Again, we may investigate the consequence of shifting the dividing surface
by replacing $z \!\rightarrow\! z - (z_e + \Delta)$ in the expression for
$\bar{k}$ in Eq.(\ref{eq:k_bar}). We then find that
\begin{equation}
\bar{k} = \bar{k}_{\rm equimolar} + \sigma \, \Delta^2 \,.
\end{equation}
This equation shows that $\bar{k}$ {\em does} depend on the choice for
the location of the dividing surface. It also shows that $\bar{k}$
evaluated for the {\em equimolar} surface ($\Delta \!=\! 0$), corresponds
to the {\em lowest} possible value for $\bar{k}$ and is the {\em least}
sensitive to a shift in the location of the dividing surface.
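The quadratic dependence on $\Delta$ can be checked numerically. The sketch below is a minimal illustration, not a full DFT calculation: it assumes a symmetric tanh profile (our choice, for which $\delta \!=\! 0$ so the Tolman contribution to the shift drops out) and uses the squared-gradient form $\bar{k}_{\rm equimolar} \!=\! 2m \! \int \! dz \, z^2 \rho_0^{\prime}(z)^2$ of Section \ref{sec-explicit}.

```python
import math

m, drho, xi = 1.0, 0.6, 1.0        # illustrative parameter values

def rho0_prime(z):
    """Derivative of a symmetric tanh profile of width xi; equimolar plane at z = 0."""
    return -drho / (4.0 * xi) / math.cosh(z / (2.0 * xi))**2

def integrate(f, zmax=25.0, dz=1e-3):
    n = round(2.0 * zmax / dz)      # trapezoidal rule on [-zmax, zmax]
    return sum(0.5 * dz * (f(-zmax + i*dz) + f(-zmax + (i+1)*dz)) for i in range(n))

sigma = 2.0 * m * integrate(lambda z: rho0_prime(z)**2)       # Eq. (eq:sigma_SQ)

def kbar(shift):
    """Gaussian rigidity with the dividing surface displaced by `shift`."""
    return 2.0 * m * integrate(lambda z: (z - shift)**2 * rho0_prime(z)**2)

D = 0.7
assert abs((kbar(D) - kbar(0.0)) - sigma * D**2) < 1e-8   # kbar = kbar_eq + sigma*Delta^2
```

For this symmetric profile the cross term linear in $\Delta$ vanishes by parity, so the shifted $\bar{k}$ exceeds its equimolar value by exactly $\sigma \Delta^2$.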
To address the influence of the dividing surface on the value of the
{\em bending rigidity} $k$, we need to consider the properties of the
density profile $\rho_1(z)$ as well. One may verify that when $\rho_1(z)$
is a particular solution of Eq.(\ref{eq:EL_1}) then also
\begin{equation}
\label{eq:rho_1}
\rho_1(z) \longrightarrow \rho_1(z) + \alpha \, \rho_0^{\prime}(z) \,,
\end{equation}
is a solution for an arbitrary value of the integration constant $\alpha$.
Now, one may easily verify by inserting Eq.(\ref{eq:rho_1}) into
Eq.(\ref{eq:rigidity}) that $k$ is {\em independent} of the value
of the integration constant. This means that just like $\delta$ and $\bar{k}$
we only need to consider the influence of the choice for the location of
the dividing surface of the {\em planar} density profile $\rho_0(z)$.
For the {\em equimolar} surface, the expression for the bending rigidity
in Eq.(\ref{eq:rigidity}) reduces to:
\begin{eqnarray}
\label{eq:k_equimolar}
k_{\rm equimolar} &=& \frac{1}{8} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^2 (1-s^2) \, \rho_0^{\prime}(z_1) \rho_1(z_2) \\
&+& \frac{1}{64} \int\limits_{-\infty}^{\infty} \!\!\! dz_1 \! \int \!\! d\vec{r}_{12} \;
U_{\rm att}(r) \, r^4 (1-s^2)^2 \, \rho_0^{\prime}(z_1) \rho_0^{\prime}(z_2) \nonumber \\
&+& \frac{\mu_1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz \left[ (z - z_e) \, \rho_1^{\prime}(z)
+ 2 \, (z-z_e)^2 \, \rho_0^{\prime}(z) \right] \,. \nonumber
\end{eqnarray}
Shifting the dividing surface by replacing $z \!\rightarrow\! z - (z_e + \Delta)$
in the expression for $k$ in Eq.(\ref{eq:rigidity}), we then find that
\begin{equation}
k = k_{\rm equimolar} - \sigma \, \Delta^2 \,.
\end{equation}
It is concluded that the bending rigidity $k$ also {\em does} depend on the
choice for the location of the dividing surface. The bending rigidity
evaluated for the {\em equimolar} surface ($\Delta \!=\! 0$), now corresponds
to the {\em largest} possible value for $k$ but it is again the {\em least}
sensitive to a shift in the location of the dividing surface.
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure4a.ps}
\includegraphics[angle=270,width=250pt]{Figure4b.ps}
\caption{Surface tension $\sigma$ (in units of $k_{\rm B} T / d^2$) and
Tolman length $\delta$ (in units of $d$) as a function of reduced temperature.
Circular symbols are the results of the full DFT calculations in
Eqs.(\ref{eq:sigma}) and (\ref{eq:delta}). The solid lines are the
squared-gradient approximations of Section \ref{sec-explicit}.
Square symbols are simulation results for $\sigma$ from ref.~\cite{Baidakov07}
and for $\delta$ from ref.~\cite{Giessen09} (solid square) and
ref.~\cite{Binder10} (two open squares).}
\label{Fig:sigma_delta}
\end{figure}
\vskip 10pt
\noindent
The procedure to determine the curvature coefficients $\sigma$, $\delta$,
$k$ and $\bar{k}$ is now as follows. The planar profile $\rho_0(z)$ is first
determined from the differential equation in Eq.(\ref{eq:EL_planar}) with
$\rho_{0,\ell}$, $\rho_{0,v}$, $\mu_{\rm coex}$ and $p_{\rm coex}$
derived from solving the set of equations in Eq.(\ref{eq:bulk_0}). From
$\rho_0(z)$, the location of the equimolar plane $z \!=\! z_e$ is determined
from Eq.(\ref{eq:equimolar}) and the curvature coefficients $\sigma$, $\delta$
and $\bar{k}$ are evaluated from the integrals in Eq.(\ref{eq:sigma}),
(\ref{eq:delta_equimolar}) and (\ref{eq:k_bar_equimolar}), respectively.
The constant $\mu_1$ is subsequently determined from $\mu_1 \!=\! 2 \sigma
/ \Delta \rho$ which allows us to determine the bulk density values
$\rho_{1,\ell/v}$ from $\rho_{1,\ell/v} \!=\! \mu_1 / f^{\prime \prime}
(\rho_{0,\ell/v})$. For given $\rho_0(z)$ and $\mu_1$, the differential
equation for $\rho_1(z)$ in Eq.(\ref{eq:EL_1}) is solved with the
boundary conditions $\rho_1(-\infty) \!=\! \rho_{1,\ell}$ and
$\rho_1(\infty) \!=\! \rho_{1,v}$. Finally, with $\rho_1(z)$ determined,
$k$ can be evaluated from the integral in Eq.(\ref{eq:k_equimolar}).
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure5a.ps}
\includegraphics[angle=270,width=250pt]{Figure5b.ps}
\includegraphics[angle=270,width=250pt]{Figure5c.ps}
\caption{Bending rigidity $k$, Gaussian rigidity $\bar{k}$,
and the combination $2 k + \bar{k}$ (in units of $k_{\rm B} T$)
as a function of temperature. The rigidity constants are evaluated
using the {\em equimolar} surface as the dividing surface. Circular symbols
are the results of the full DFT calculations in Eqs.(\ref{eq:k_bar_equimolar})
and (\ref{eq:k_equimolar}). The solid lines are the squared-gradient
approximations of Section \ref{sec-explicit}. Square symbols are simulation
results from ref.~\cite{Binder10}.}
\label{Fig:kabar_ka}
\end{figure}
This procedure is carried out (again) using the cut-off and shifted
Lennard-Jones potential in Eq.(\ref{eq:LJ}) for the attractive part
of the interaction potential. Figure \ref{Fig:sigma_delta} shows the
surface tension and Tolman length as a function of temperature.
The circular symbols are the values for $\sigma$ and $\delta$ calculated
using DFT for two values of the LJ cut-off radius $r_c$. The solid lines
are the squared-gradient approximations in Section \ref{sec-explicit}
for $r_c \!=$ 2.5, 7.5, and $\infty$. As square symbols, we show
computer simulation results for $\sigma$ from ref.~\cite{Baidakov07},
the single simulation result for $\delta$ from ref.~\cite{Giessen09}
(solid square) and results for $\delta$ from simulations by the group
of Binder \cite{Binder10} (open squares).
In Figure \ref{Fig:kabar_ka}, the bending rigidity $k$, Gaussian rigidity
$\bar{k}$, and the combination $2 k + \bar{k}$ are shown as a function of
temperature. The rigidity constants are evaluated using the {\em equimolar}
surface for the location of the dividing surface. The circular symbols are
the values for $k$ and $\bar{k}$ calculated using DFT for two values of the
reduced LJ cut-off radius $r_c \!=$ 2.5 and 7.5, with the solid lines the
corresponding squared-gradient approximations determined in the next Section.
Also shown are simulation results by the group of Binder \cite{Binder10}.
Although a detailed comparison of the DFT and simulation results is not
really appropriate due to a difference in cut-off used, the agreement in
sign and order of magnitude is rather satisfactory.
\section{Squared-gradient expressions}
\label{sec-explicit}
\noindent
The evaluation of $\delta$, $k$ and $\bar{k}$ requires the full numerical
evaluation of the density profiles $\rho_0(z)$ and $\rho_1(z)$ from
the differential equations in Eqs.(\ref{eq:EL_planar}) and (\ref{eq:EL_1}).
This procedure is quite elaborate, prompting a need for simple formulas
that provide (approximate) numbers for the various coefficients.
In this section we provide a rather accurate approximation scheme based
on the squared-gradient approximation which only requires the calculation
of the phase diagram as input.
The squared-gradient theory for surfaces dates back to the work of van der
Waals in 1893 \cite{vdW}. Its free energy functional is derived from
Eq.(\ref{eq:Omega_DFT}) by assuming that gradients in the density are
small so that $\rho(\vec{r}_2)$ may be expanded around $\rho(\vec{r}_1)$.
This leads to:
\begin{equation}
\label{eq:Omega_SQ}
\Omega[\rho] = \int \!\! d\vec{r} \; \left[ m \, | \vec{\nabla} \rho(\vec{r}) |^2
+ f(\rho) - \mu \rho(\vec{r}) \right] \,,
\end{equation}
where the squared-gradient coefficient $m$ is given by
\begin{equation}
\label{eq:m}
m \equiv - \frac{1}{12} \int \!\! d\vec{r}_{12} \; r^2 \, U_{\rm att}(r) \,.
\end{equation}
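Like the van der Waals parameter $a$, the squared-gradient coefficient reduces to a one-dimensional quadrature, $m \!=\! -(\pi/3) \int_0^{r_c} \! dr \, r^4 \, U_{\rm att}(r)$. A minimal Python sketch for the cut-off and shifted LJ potential of Eq.(\ref{eq:LJ}) (the reference value $m \!\approx\! 1.830 \, \varepsilon d^5$ at $r_c \!=\! 2.5 \, d$ is our own analytic evaluation of the same integral, not a number quoted in the text):

```python
import math

R_MIN = 2.0 ** (1.0 / 6.0)     # location of the LJ minimum (d = 1)

def u_att(r, rc=2.5):
    """Cut-off and shifted LJ attraction, Eq. (eq:LJ); units eps = d = 1."""
    if r > rc:
        return 0.0
    u_lj = lambda x: 4.0 * (x**-12 - x**-6)
    return u_lj(max(r, R_MIN)) - u_lj(rc)   # constant plateau for r < r_min

def sq_grad_m(rc=2.5, dr=1e-4):
    """m = -(1/12) int d^3r r^2 U_att(r) = -(pi/3) int_0^rc dr r^4 U_att(r)."""
    n = round(rc / dr)
    s = sum(0.5 * dr * ((i*dr)**4 * u_att(i*dr, rc)
                        + ((i+1)*dr)**4 * u_att((i+1)*dr, rc))
            for i in range(n))               # trapezoidal rule
    return -math.pi / 3.0 * s

# Analytic evaluation of the same integral gives m ~ 1.830 eps d^5 at rc = 2.5 d.
assert abs(sq_grad_m() - 1.830) < 0.01
```

A positive $m$ is required for a stable interface, and the $r^4$ weight makes $m$ considerably more sensitive to the cut-off $r_c$ than $a$ is.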
Expressions for the curvature coefficients in squared-gradient theory were
formulated some time ago. For the surface tension of the planar
interface, we have the familiar expression given by van der Waals \cite{vdW}:
\begin{equation}
\label{eq:sigma_SQ}
\sigma = 2 \, m \int\limits_{-\infty}^{\infty} \!\!\! dz \; \rho_0^{\prime}(z)^2 \,.
\end{equation}
For the Tolman length, Fisher and Wortis derived the following expression \cite{FW}:
\begin{equation}
\label{eq:delta_SQ}
\delta \sigma = - 2 \, m \int\limits_{-\infty}^{\infty} \!\!\! dz \; (z - z_e) \, \rho_0^{\prime}(z)^2 \,.
\end{equation}
For the bending and Gaussian rigidity, one has \cite{Blokhuis93}:
\begin{eqnarray}
\label{eq:k_SQ}
k &=& - m \int\limits_{-\infty}^{\infty} \!\!\! dz \; \rho_0(z) \, \rho_1^{\prime}(z)
+ \int\limits_{-\infty}^{\infty} \!\!\! dz \left[ \frac{\mu_1}{4} z \, \rho_1^{\prime}(z)
+ \frac{\mu_1}{2} \, z^2 \, \rho_0^{\prime}(z) + 2 \mu_{c,2} \, z \, \rho_0^{\prime}(z) \right] \,, \nonumber \\
\bar{k} &=& 2 \, m \int\limits_{-\infty}^{\infty} \!\!\! dz \; z^2 \, \rho_0^{\prime}(z)^2
+ (\mu_{s,2} - 4 \mu_{c,2}) \int\limits_{-\infty}^{\infty} \!\!\! dz \; z \, \rho_0^{\prime}(z) \,,
\end{eqnarray}
which, evaluated using the equimolar surface for the location of the dividing
surface, reduce to:
\begin{eqnarray}
\label{eq:k_equimolar_SQ}
k_{\rm equimolar} &=& - m \int\limits_{-\infty}^{\infty} \!\!\! dz \; \rho_0(z) \, \rho_1^{\prime}(z)
+ \frac{\mu_1}{4} \int\limits_{-\infty}^{\infty} \!\!\! dz \left[ (z - z_e) \, \rho_1^{\prime}(z)
+ 2 \, (z-z_e)^2 \, \rho_0^{\prime}(z) \right] \,, \nonumber \\
\bar{k}_{\rm equimolar} &=& 2 \, m \int\limits_{-\infty}^{\infty} \!\!\! dz \; (z-z_e)^2 \, \rho_0^{\prime}(z)^2 \,.
\end{eqnarray}
To evaluate these expressions, the density profiles $\rho_0(z)$ and $\rho_1(z)$
still need to be determined from the expanded Euler-Lagrange equation:
\begin{eqnarray}
\label{eq:EL_SQ_0}
f^{\prime}(\rho_0) &=& \mu_{\rm coex} + 2 m \, \rho_0^{\prime\prime}(z) \,, \\
\label{eq:EL_SQ_1}
f^{\prime\prime}(\rho_0) \, \rho_1(z) &=& \mu_1 + 2 m \, \rho^{\prime\prime}_1(z) + 4 m \, \rho^{\prime}_0(z) \,.
\end{eqnarray}
In order to solve these equations, it is useful to assume proximity to the critical
point so that the free energy density may be approximated by the usual double-well form:
\begin{equation}
\label{eq:rho^4}
f(\rho) - \mu_{\rm coex} \rho + p_{\rm coex} = \frac{m}{(\Delta \rho)^2 \, \xi^2} \,
(\rho - \rho_{0,\ell})^2 \, (\rho - \rho_{0,v})^2 \,,
\end{equation}
where the bulk correlation length $\xi$ is related to the second derivative
of $f(\rho)$ evaluated at either bulk density. Solving the Euler-Lagrange
equation in Eq.(\ref{eq:EL_SQ_0}) then leads to the usual $\tanh$-form for
the planar density profile \cite{RW}:
\begin{equation}
\label{eq:tanh}
\rho_0(z) = \frac{1}{2} ( \rho_{0,\ell} + \rho_{0,v} ) - \frac{\Delta \rho}{2} \, \tanh((z-z_e)/2\xi) \,.
\end{equation}
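That this profile indeed solves Eq.(\ref{eq:EL_SQ_0}) for the double-well form of Eq.(\ref{eq:rho^4}) can be checked symbolically. A minimal sketch, assuming the sympy library:

```python
# Verify that the tanh profile of Eq. (tanh) solves the planar
# Euler-Lagrange equation f'(rho_0) = mu_coex + 2 m rho_0''(z)
# for the double-well free energy of Eq. (rho^4).
import sympy as sp

z, ze, xi, m = sp.symbols('z z_e xi m', positive=True)
rl, rv = sp.symbols('rho_l rho_v', positive=True)
drho = rl - rv

rho0 = sp.Rational(1, 2) * (rl + rv) - drho / 2 * sp.tanh((z - ze) / (2 * xi))

# W(rho) = f(rho) - mu_coex rho + p_coex, so that W'(rho) = f'(rho) - mu_coex
rho = sp.symbols('rho')
W = m / (drho**2 * xi**2) * (rho - rl)**2 * (rho - rv)**2

# residual of the Euler-Lagrange equation; vanishes identically
residual = W.diff(rho).subs(rho, rho0) - 2 * m * rho0.diff(z, 2)
print(sp.simplify(sp.expand(residual)))  # -> 0
```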
One may verify that solving the Euler-Lagrange equation in Eq.(\ref{eq:EL_SQ_1})
gives the following general solution for $\rho_1(z)$ \cite{Blokhuis93}:
\begin{equation}
\label{eq:rho_1_SQ}
\rho_1(z) = \frac{1}{3} \, m \, (\Delta \rho)^2 \, \xi + \alpha \, \rho_0^{\prime}(z) \,.
\end{equation}
As already discussed, the rigidity constant is {\em independent} of the integration
constant $\alpha$. Inserting these profiles into the expressions for $\sigma$, $k$
and $\bar{k}$ in Eqs.(\ref{eq:sigma_SQ}) and (\ref{eq:k_equimolar_SQ}), one finds
\cite{Blokhuis93}:
\begin{eqnarray}
\label{eq:k_kbar_SQ}
\sigma &=& \frac{m \, (\Delta \rho)^2}{3 \, \xi} \,, \\
k_{\rm equimolar} &=& - \frac{1}{9} (\pi^2 - 3) \, m \, (\Delta \rho)^2 \, \xi \,, \nonumber \\
\bar{k}_{\rm equimolar} &=& \frac{1}{9} (\pi^2 - 6) \, m \, (\Delta \rho)^2 \, \xi \,. \nonumber
\end{eqnarray}
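For quick estimates these closed-form expressions are easily evaluated numerically. The sketch below uses hypothetical input values for $m$, $\Delta\rho$ and $\xi$; it also illustrates that $k < 0$ and $\bar{k} > 0$ with the universal ratio $|\bar{k}/k| = (\pi^2-6)/(\pi^2-3) \approx 0.56$, independent of the input.

```python
# Squared-gradient estimates of Eq. (k_kbar_SQ); the input values
# below are hypothetical and only serve to illustrate the signs.
import math

def sq_gradient_coefficients(m, drho, xi):
    """Return (sigma, k_equimolar, kbar_equimolar) from Eq. (k_kbar_SQ)."""
    sigma = m * drho**2 / (3.0 * xi)
    k = -(math.pi**2 - 3.0) / 9.0 * m * drho**2 * xi
    kbar = (math.pi**2 - 6.0) / 9.0 * m * drho**2 * xi
    return sigma, k, kbar

sigma, k, kbar = sq_gradient_coefficients(m=1.0, drho=0.7, xi=0.5)
print(sigma > 0, k < 0, kbar > 0, abs(kbar / k))  # ratio ~ 0.56
```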
For the symmetric double-well form for $f(\rho)$, the Tolman length
is identically zero. To obtain an estimate for $\delta$ it is
therefore necessary to consider leading order corrections to the
double-well form for $f(\rho)$ in Eq.(\ref{eq:rho^4}) \cite{FW, Giessen98}.
This leads to the following (constant) value for the Tolman length \cite{Giessen98}:
\begin{equation}
\label{eq:delta_SQ_cp}
\delta = -0.286565 \, \sqrt{m / a} \,.
\end{equation}
The prefactor depends on the precise form for $f(\rho)$ and the number
quoted is specific to the Carnahan-Starling equation of state \cite{delta_c}.
All these formulas are derived assuming proximity to the critical point,
but it turns out that they also provide a good approximation in a wide
temperature range when the value of $\xi$ is chosen judiciously.
This is done by using the fact that in squared-gradient theory the
surface tension $\sigma$ may be determined from $f(\rho)$ {\em directly}
without the necessity to determine the density profile $\rho_0(z)$ \cite{RW}:
\begin{equation}
\label{eq:sigma_SQ_cp}
\sigma = 2 \, \sqrt{m} \int\limits_{\rho_{0,v}}^{\rho_{0,\ell}} \!\! d\rho \;
\sqrt{f(\rho) - \mu_{\rm coex} \, \rho + p_{\rm coex}} \,.
\end{equation}
An effective value for $\xi$ may now be chosen such that the two expressions
for the surface tension in Eqs.(\ref{eq:k_kbar_SQ}) and (\ref{eq:sigma_SQ_cp})
are equal. This gives for $\xi$:
\begin{equation}
\label{eq:xi}
\xi \longrightarrow \xi_{\rm eff} \equiv \frac{m \, (\Delta \rho)^2}{3 \, \sigma} \,,
\end{equation}
with $\sigma$ given by Eq.(\ref{eq:sigma_SQ_cp}).
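As a consistency check, a numerical sketch with hypothetical bulk densities: evaluating the direct integral of Eq.(\ref{eq:sigma_SQ_cp}) with the exact double-well form of Eq.(\ref{eq:rho^4}) reproduces $\sigma = m (\Delta \rho)^2 / 3 \xi$, so that $\xi_{\rm eff}$ of Eq.(\ref{eq:xi}) returns the input $\xi$.

```python
# Check that Eq. (sigma_SQ_cp), evaluated for the double-well form of
# Eq. (rho^4), reproduces sigma = m (Delta rho)^2 / (3 xi), so that
# xi_eff of Eq. (xi) equals the input xi. Densities are hypothetical.
import math

m, xi = 1.0, 0.4
rv, rl = 0.1, 0.8
drho = rl - rv

def excess_free_energy(rho):
    """f(rho) - mu_coex rho + p_coex for the quartic double-well form."""
    return m / (drho**2 * xi**2) * (rho - rl)**2 * (rho - rv)**2

# midpoint rule for sigma = 2 sqrt(m) int_{rv}^{rl} drho sqrt(W(rho))
n = 20000
h = (rl - rv) / n
sigma = 2.0 * math.sqrt(m) * h * sum(
    math.sqrt(excess_free_energy(rv + (i + 0.5) * h)) for i in range(n))

xi_eff = m * drho**2 / (3.0 * sigma)  # Eq. (xi)
print(round(xi_eff, 4))  # -> 0.4
```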
The procedure to determine the solid lines in Figures \ref{Fig:sigma_delta}
and \ref{Fig:kabar_ka} is now as follows. For a certain interaction potential,
such as the Lennard-Jones potential in Eq.(\ref{eq:LJ}), the interaction
parameters $a$ and $m$ are calculated. Next, as a function of temperature,
the bulk thermodynamic variables $\rho_{0,\ell}$, $\rho_{0,v}$, $\mu_{\rm coex}$
and $p_{\rm coex}$ are derived from solving the set of equations in Eq.(\ref{eq:bulk_0}).
The surface tension is then calculated from Eq.(\ref{eq:sigma_SQ_cp})
and $\xi$ from Eq.(\ref{eq:xi}). With all parameters known, the
curvature coefficients are finally calculated from Eqs.(\ref{eq:k_kbar_SQ})
and (\ref{eq:delta_SQ_cp}).
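The steps of this procedure can be strung together in a short driver routine. The sketch below assumes that the interaction parameters and the bulk coexistence data have already been obtained (the numbers used are placeholders, not results of this work):

```python
# Sketch of the full squared-gradient procedure: given the interaction
# parameters (a, m) and bulk coexistence data at one temperature,
# produce sigma, delta, k and kbar. All input numbers are hypothetical.
import math

def curvature_coefficients(a, m, sigma, drho):
    """Combine Eqs. (xi), (k_kbar_SQ) and (delta_SQ_cp)."""
    xi_eff = m * drho**2 / (3.0 * sigma)            # Eq. (xi)
    delta = -0.286565 * math.sqrt(m / a)            # Eq. (delta_SQ_cp)
    k = -(math.pi**2 - 3.0) / 9.0 * m * drho**2 * xi_eff
    kbar = (math.pi**2 - 6.0) / 9.0 * m * drho**2 * xi_eff
    return xi_eff, delta, k, kbar

# placeholder inputs: a, m from the potential; sigma from Eq. (sigma_SQ_cp)
xi_eff, delta, k, kbar = curvature_coefficients(
    a=2.0, m=1.5, sigma=0.45, drho=0.65)
```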
\section{Long-ranged interactions: dispersion forces}
\label{sec-dispersion}
\noindent
The surface tension, Tolman length and rigidity constants have all been explicitly
evaluated using a Lennard-Jones potential that is cut-off beyond a certain distance $r_c$.
In this section we address the consequences of using the {\em full} Lennard-Jones
potential. It is easily verified that the phase diagram in Figure \ref{Fig:pd}
remains essentially the same when the cut-off is changed from $r_c \!=\! 7.5$ to
$r_c \!=\! \infty$, but that the shift in surface tension and Tolman length
is increasingly noticeable (see Figure \ref{Fig:sigma_delta}).
An inspection of the explicit expressions for the rigidity constants in
Eqs.(\ref{eq:rigidity}) and (\ref{eq:k_bar}) teaches us that both $k$ and $\bar{k}$
{\em diverge} when $r_c$ increases to infinity \cite{Blokhuis92a, Dietrich}.
This divergence is an indication that the expansion of the free energy is no
longer of the form in Eq.(\ref{eq:sigma_s(R)}) or (\ref{eq:sigma_c(R)}), and
it has to be replaced by
\begin{eqnarray}
\label{eq:sigma_s_LR}
\sigma_s(R) &=& \sigma - \frac{2 \delta \sigma}{R} + (2 k_s + \bar{k}_s) \, \frac{\log(d/R)}{R^2} + \ldots \\
\label{eq:sigma_c_LR}
\sigma_c(R) &=& \sigma - \frac{\delta \sigma}{R} + k_s \,\frac{\log(d/R)}{2 R^2} + \ldots
\end{eqnarray}
where the dots represent terms of ${\cal O}(1/R^2)$. The coefficients of the logarithmic
terms may be extracted from the expressions for $k$ and $\bar{k}$ in Eqs.(\ref{eq:rigidity})
and (\ref{eq:k_bar}). They depend on the tail of the interaction potential, but are
otherwise quite universal:
\begin{eqnarray}
\label{eq:k_s}
k_s &=& \frac{\pi}{8} \, \varepsilon \, d^6 \, (\Delta \rho)^2 \,, \\
\label{eq:k_bar_s}
\bar{k}_s &=& - \frac{\pi}{12} \, \varepsilon \, d^6 \, (\Delta \rho)^2 \,.
\end{eqnarray}
This expression for $k_s$ is equal to that obtained in a DFT analysis of the
singular part of the wave vector dependent surface tension of the fluctuating
interface \cite{Blokhuis09}. These expressions can also be derived from virial
expressions for the rigidity constants when a sharp-kink approximation
\cite{Dietrich} is made for the density profile \cite{correctie_Blokhuis92a}.
The form for $2k_s + \bar{k}_s$ obtained by combining Eqs.(\ref{eq:k_s})
and (\ref{eq:k_bar_s}) was first derived by Hooper and Nordholm in
ref.~\cite{Hooper}.
\begin{figure}
\centering
\includegraphics[angle=270,width=250pt]{Figure6.ps}
\caption{The combination $(2 k + \bar{k})(R)$ (in units of $k_{\rm B} T$) as defined
by Eq.(\ref{eq:2k_k_bar_LR}) as a function of the reciprocal equimolar radius $d/R$.
The symbols are the results of DFT calculations at reduced temperature $T^* \!=\!$ 1.0
and three values of the reduced LJ cut-off $r_c \!=$ 2.5, 7.5 and $\infty$.
Solid circles are the corresponding values for $2 k + \bar{k}$ calculated from
Eqs.(\ref{eq:k_bar_equimolar}) and (\ref{eq:k_equimolar}). The dashed line
is the curve $\pi/(6 T^*) (\Delta \rho^*)^2 \log(R_0/R)$ with $R_0 \!\simeq$ 0.005 $d$.}
\label{Fig:k_sph_R}
\end{figure}
To demonstrate the divergence of the second order term in Eq.(\ref{eq:sigma_s_LR}),
the surface tension of a spherical liquid droplet as a function of the radius
is determined for three values of the reduced LJ cut-off radius $r_c \!=$ 2.5,
7.5 and $r_c \!=\! \infty$. The regular contributions to $\sigma_s(R)$ from
$\sigma$ and $\delta$ are subtracted, so that we may define
\begin{equation}
\label{eq:2k_k_bar_LR}
(2 k + \bar{k})(R) \equiv \left (\sigma_s(R) - \sigma \right) \, R^2 + 2 \delta \sigma \, R \,.
\end{equation}
This quantity is defined such that when the expansion in Eq.(\ref{eq:sigma_s(R)})
for short-ranged forces is inserted, it reduces to $2 k + \bar{k}$ in the limit
that $R \rightarrow \infty$. For long-ranged forces ($r_c \!=\! \infty$), insertion
of Eq.(\ref{eq:sigma_s_LR}) into Eq.(\ref{eq:2k_k_bar_LR}) gives a logarithmic
divergence in this limit. This is verified by the DFT calculations shown in
Figure \ref{Fig:k_sph_R} as the various symbols. For $r_c \!=$ 2.5 and $r_c \!=$ 7.5,
the results indeed tend to the values obtained from the direct evaluation
of $2 k + \bar{k}$ using Eqs.(\ref{eq:k_bar_equimolar}) and (\ref{eq:k_equimolar})
(solid circles). For $r_c \!=\! \infty$ (triangular symbols) a slight
divergence can be made out. This divergence is consistent with the dashed
line, which is the divergence as described by combining the coefficients
in Eqs.(\ref{eq:k_s}) and (\ref{eq:k_bar_s}).
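The algebra behind this construction is easily verified: inserting the long-ranged expansion of Eq.(\ref{eq:sigma_s_LR}) into the definition of Eq.(\ref{eq:2k_k_bar_LR}) leaves exactly the logarithmic term $(2 k_s + \bar{k}_s) \log(d/R)$. A small sketch with hypothetical parameter values:

```python
# Inserting the long-ranged expansion Eq. (sigma_s_LR) into the
# definition Eq. (2k_k_bar_LR) isolates the logarithmic term
# (2 k_s + kbar_s) log(d/R). Parameter values here are hypothetical.
import math

sigma, delta, eps, d, drho = 0.5, -0.15, 1.0, 1.0, 0.7
ks = math.pi / 8.0 * eps * d**6 * drho**2        # Eq. (k_s)
kbar_s = -math.pi / 12.0 * eps * d**6 * drho**2  # Eq. (k_bar_s)

def sigma_s(R):
    """Eq. (sigma_s_LR), truncated after the logarithmic term."""
    return (sigma - 2.0 * delta * sigma / R
            + (2.0 * ks + kbar_s) * math.log(d / R) / R**2)

def two_k_plus_kbar(R):
    """Eq. (2k_k_bar_LR)."""
    return (sigma_s(R) - sigma) * R**2 + 2.0 * delta * sigma * R

for R in (10.0, 100.0, 1000.0):
    print(R, two_k_plus_kbar(R))  # grows like (2 ks + kbar_s) log(d/R)
```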
\section{Discussion}
\label{sec-discussion}
\noindent
In the context of density functional theory, we have shown that the surface
tension of a spherical liquid droplet as a function of its inverse radius
is well-represented by a parabola with its second derivative related to the
rigidity constants $k$ and $\bar{k}$. Compact formulas for the evaluation
of $k$ and $\bar{k}$ are derived in terms of the density profiles $\rho_0(z)$
and $\rho_1(z)$, which are in line with previous formulas presented by us
\cite{Giessen98} and by Barrett \cite{Barrett09}. A number of conclusions
can be drawn with regard to these formulas:
\begin{itemize}
\item{The rigidity constants $k$ and $\bar{k}$ depend on the choice for the
location of the dividing surface of the planar density profile $\rho_0(z)$.
This dependency reflects the fact that when the location of the radius $R$
is chosen differently, the curve of $\sigma_s(R)$ versus $1/R$ changes
somewhat and the second derivative ($2 k + \bar{k}$) naturally needs to be
amended.}
\item{The most natural choice for a one-component system is to locate
the dividing surface of the planar interface according to the {\em equimolar
surface}. For this choice both $k$ and $\bar{k}$ are the {\em least} sensitive
to a change in the location of the dividing surface. Furthermore, the equimolar
value for $k$ corresponds to its {\em maximum} value and the equimolar
value for $\bar{k}$ corresponds to its {\em minimum} value.}
\item{The bending rigidity $k$ depends on the density profile $\rho_1(z)$,
which measures the extent by which molecules rearrange themselves when the
interface is curved. The bending rigidity is, however, {\em independent}
of the choice made for the location of the dividing surface of $\rho_1(z)$
(value of $\alpha$ in Eq.(\ref{eq:rho_1})) \cite{constraints}.}
\end{itemize}
\noindent
Using a cut-off and shifted Lennard-Jones potential for the attractive
part of the interaction potential, the Tolman length and rigidity constants
have been calculated, with the result that $\delta$ is negative, with
a value of $-0.1$ to $-0.2 \, d$; $k$ is also negative, with a value around
$-0.5$ to $-1.0 \, k_{\rm B} T$; and $\bar{k}$ is positive, with a magnitude of
a bit more than half that of $k$. It is not expected that
these results depend sensitively on the type of density functional
theory used and we have shown that even an approximation scheme based
on squared-gradient theory is {\em quantitatively} accurate.
Our DFT results are expected to give an accurate {\em qualitative}
description of the rigidity constants determined in experiments
or computer simulations. First results of computer simulations by
the group of Binder \cite{Binder10} shown in Figure \ref{Fig:kabar_ka},
seem to support this expectation, but further computer simulations
are necessary. The agreement should cease to exist close to the critical
point, however. Since the DFT calculations are all mean-field in character,
the critical exponents obtained for both rigidity constants are the
mean-field values of $1/2$, which indicates that both $k$ and $\bar{k}$
are zero at $T_c$. Although it has not been proved rigorously, one
expects that in reality the rigidity constants remain finite at the
critical point, $k$, $\bar{k} \!\propto\! k_{\rm B} T_c$. The situation
is somewhat more subtle for the rigidity constant associated with the
description of {\em surface fluctuations}. Then, the bending rigidity
is again negative but it vanishes on approach to the critical point
with the same exponent as the surface tension \cite{Blokhuis09}.
Inspection of the explicit expressions presented for the
rigidity constants is the most convincing way to investigate
the possible presence of logarithmic corrections \cite{Henderson92,
Rowlinson94, Fisher} replacing the rigidity constants. For
short-ranged interactions between molecules, the rigidity constants
are definitely {\em finite}, but for an interaction potential
that falls off as $1/r^6$ for large intermolecular distances
(dispersion forces), the rigidity constants are {\em infinite}
indicating that the $1/R^2$ term in the expansion of the surface
tension needs to be replaced by a logarithmic term proportional
to $\log(R) / R^2$. The proportionality constants of the logarithmic
corrections are found to be quite universal since they probe the
system's long-distance behaviour and are in agreement with previous
analyses \cite{Blokhuis92a, Hooper, Dietrich, correctie_Blokhuis92a}.
\vskip 15pt
\noindent
{\Large\bf Acknowledgment}
\vskip 5pt
\noindent
A.E.v.G. acknowledges the generous support from an American Chemical
Society Petroleum Research Fund.
\section{Introduction}
In quantum gravity, gravity itself would thermalize. In order to define thermal equilibrium in quantum gravity, we need observables that reach or tend toward equilibrium. These must be (quasi-)local observables defined on the boundary of a quantum spacetime, since no local observables exist in the bulk if topology change is one of the properties of quantum gravity. Related to this fact, it is now well known that we can define an energy--momentum tensor of (quantum) gravity on a spacetime boundary quasi-locally \cite{BrownYork1}. Thermodynamical quantities, such as energy or pressure, are obtained from this tensor in gravitational thermodynamics.
The information of gravitational thermal states could be obtained statistical mechanically. This kind of approach was initiated by Gibbons and Hawking and first applied to asymptotically flat spacetimes \cite{GibbonsHawking}. They proposed that a certain kind of Euclidean path integral of gravity can be the canonical partition function of gravity. This function is given by summing over all Euclidean geometries with the boundary $S^2 \times S^1$, where the length of $S^1$ represents the inverse temperature at the corresponding Lorentzian boundary. By using this formulation, they derived the ``free energy of a black hole (BH)'' for an asymptotically flat spacetime at zero-loop order and reproduced the BH entropy--area relationship.
\footnote{
``BH'' also means ``Bekenstein--Hawking.'' \cite{Bekenstein,Hawking1}
}
However, one (big) problem of their partition function is that it does not represent the true thermal states of an asymptotically flat spacetime since all BH states are unstable. Presumably, it indicates there are no thermal states for an asymptotically flat spacetime. Later, York derived a canonical partition function for a spacetime with a Lorentzian boundary of finite radius $S^2 \times \MB{R}$ \cite{York1}.
\footnote{
Before the work by York, Hawking and Page \cite{HawkingPage} found that the canonical partition function of asymptotically AdS spacetime would be well-defined and a stable BH phase would appear at high temperature. The behavior of York's canonical partition function and theirs are very similar.
}
According to the partition function, the BH phase corresponds to true thermal states, and the gravitational thermodynamical entropy in the BH phase equals the BH entropy.
One assumption they made is that the integration (hyper)contour is such that only real Euclidean solutions dominantly contribute to the path integral.
However, as was shown by Gibbons, Hawking, and Perry \cite{GibbonsHawkingPerry}, the integration contour for Euclidean gravitational path integrals cannot be a trivial real contour. It must be genuinely complex, which will generally pick up some complex saddle-point geometries and not all real ones. Therefore, if we choose a contour such that $n$ complex saddle points contribute, the partition function at zero-loop order is written as
\bea
Z\simeq \sum_{k=1}^{n}e^{-I^{E,os}_{k}}, \label{EQintro}
\ena
where $I^{E,os}_{k}$ is the value of the action at the $k$-th complex Euclidean metric satisfying the Einstein equation. From this perspective, Halliwell and Louko reconsidered the canonical partition function of gravity \cite{HalliwellLouko1}.
\footnote{
Their conclusion was that there are no infinite convergent contours that reproduce York's canonical partition function at the zero-loop level. As I will explain in Section 4, there have been some attempts to define a canonical partition function that shares the same properties as York's \cite{LoukoWhiting, MelmedWhiting}.
}
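At zero-loop order, the reality of the partition function in Eq.(\ref{EQintro}) is ensured when complex saddle points contribute in complex-conjugate pairs, as the toy sketch below (with hypothetical on-shell actions) illustrates:

```python
# Toy illustration of Eq. (EQintro): a pair of complex-conjugate
# saddle points yields a real zero-loop partition function,
# Z ~ 2 exp(-Re I) cos(Im I). The actions below are hypothetical.
import cmath

on_shell_actions = [2.0 + 0.7j, 2.0 - 0.7j]
Z = sum(cmath.exp(-I) for I in on_shell_actions)
print(Z.imag)  # imaginary parts cancel
print(Z.real)  # 2 exp(-2) cos(0.7) > 0
```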
Taking the above facts into account, in this paper, I reconsider the microcanonical partition function, or density of states (DOS), of general relativity (GR) with an $S^2\times \MB{R}$ Lorentzian boundary, which has been investigated several times in the literature \cite{BradenWhitingYork, LoukoWhiting, MelmedWhiting}.
All of the previously obtained DOSs are derived by inverse Laplace transformation from various canonical partition functions. Since which canonical partition function is correct to obtain the DOS is not clear, I propose another possible DOS obtained by a different approach, namely, a microcanonical path integral \cite{BrownYork2}. As was shown in \cite{BCMMWY}, thermodynamical ensembles and the action functionals of gravity (and a boundary condition of the gravitational path integral) are closely related. The complete form of the microcanonical action functional of gravity and the corresponding path integral were proposed by Brown and York \cite{BrownYork2}. In the path integral, the boundary condition is chosen such that the energy (density) is held fixed, which is suitable for defining the DOS directly from the gravitational path integral. The advantage of this approach is that we can (almost) straightforwardly obtain a DOS without worrying about how to obtain the correct canonical partition function for inverse Laplace transformation. Of course, in this approach, there is ambiguity in how to choose an integration contour of the path integral. However, as we will see, there is only one contour of the lapse integral having the desired property.
The remainder of this paper is organized as follows. In section 2, I review the partition function of GR and the minisuperspace approximation, which is used to approximate the infinite degrees of freedom of gravity within finite degrees. In that section, I introduce the minisuperspace metric used in this paper and show how thermodynamical quantities are written in terms of these variables. In section 3, I apply the minisuperspace method to the microcanonical path integral and, with the saddle-point approximation, obtain a one-dimensional lapse integral. The obtained ``on-shell'' action is not a one-valued function on the complex lapse plane and is found to be defined on three sheets. I consider various contours and show their consequences. In the last section, I summarize the result and discuss its relationships to previous works and its implications. Finally, open problems are listed.
\section{Partition function of GR and the minisuperspace path integral method}
In this section, I review the basics of statistical treatment of gravitational thermodynamics and how to approximate the Euclidean path integral of GR in order for it to be manageable.
\subsection{Partition function of GR}
\iffigure
\begin{figure}
\hspace{1.5cm}
\includegraphics[width=5.5cm]{Lst.eps}
\hspace{1.5cm}
\includegraphics[width=6cm]{Est.eps}
\caption{LEFT: Spacetime with time-like boundary $\MC{B}=S^2 \times \MB{R}$. RIGHT: Euclidean geometry with $\pd \MC{M}=S^2 \times S^1$. }
\label{FIGst}
\end{figure}
\fi
In this paper, I consider thermal equilibrium states of a quantum spacetime with a Lorentzian boundary $\MC{B}= S^2 \times \MB{R}$ (Fig. \ref{FIGst}, left). When we say that {\it a (quantum) spacetime is thermalized}, we mean that certain quantities defined on the boundary reach equilibrium values; that is, those quantities become almost isotropic and homogeneous on $S^2$ and time independent. As is usually done in statistical mechanics, the properties of such equilibrium states of gravity can be captured by the partition function of GR. This may be given by summing over all Euclidean histories with a boundary topology $\pd \MC{M}=S^{2} \times S^1 $ and satisfying suitable boundary conditions (Fig. \ref{FIGst}, right) \cite{GibbonsHawking, York1, BCMMWY}. Formally,
\bea
Z_{\MF{E}}(\MF{Q},\MF{W})=\int_{\Gamma}\MC{D}\BS{g}_{\{ \MF{Q},\MF{W} \}} ~ e^{-I^{E}_{\MF{E}}[\BS{g}]}. \label{EQpath}
\ena
I explain this equation below. \\
\\
$\blacksquare$ $\MF{E}$ represents ``Ensemble,'' where we can choose from the following: \\
~~~~~~~~ $\cdot$ microcanonical ensemble ($\MF{E}=mc$): Fixed energy and volume. \\
~~~~~~~~ $\cdot$ canonical ensemble ($\MF{E}=c$): Fixed temperature and volume. \\
~~~~~~~~ $\cdot$ pressure microcanonical ensemble ($\MF{E}=pmc$): Fixed energy and pressure. \\
~~~~~~~~ $\cdot$ pressure canonical ensemble ($\MF{E}=pc$): Fixed temperature and pressure. \\
If we choose, for example, $\MF{E}=c$, then it represents the canonical partition function and we use the argument $\{\beta, V\}$, that is, $Z_c(\beta,V)=\int_{\Gamma} \MC{D}\BS{g}_{\{ \beta, V \}} e^{-I^E_{c}[\BS{g}]}$.\\
\\
$\blacksquare$ $I^{E}_{\MF{E}}[\BS{g}]$ is the Euclidean version of the action functional of GR;
\bea
I^{E}_{\MF{E}}[\BS{g}]=\frac{-1}{16\pi G} \int_{\MC{M}} d^4x \sqrt{g} \MC{R} + I^{E}_{\MF{E},\pd\MC{M}}[\BS{g}]
\ena
where the ensemble dependence resides only in the boundary term $ I^{E}_{\MF{E},\pd\MC{M}}[\BS{g}]$. In general, a change of boundary term leaves unaltered the equation of motion that gives the extrema of a path integral only when the type of boundary condition (hereinafter, ``BCtype'') is chosen accordingly. In that sense, the choice of boundary term is in one-to-one correspondence with the choice of BCtype:
\beann
({\rm boundary ~ term } ) \longleftrightarrow ({\rm BCtype}).
\enann
The interesting thing is that, for the Euclidean gravitational path integral, certain BCtypes correspond to thermodynamical ensembles \cite{BCMMWY}.
This is because thermodynamical quantities are defined in reference to only the geometrical quantities on the spacetime boundary and because the conjugate pairs of thermodynamical quantities are indeed also the conjugate pairs in terms of BCtype
\footnote{
A pair of two quantities $X(y)$ and $Y(y)$ defined on the boundary is said to be a conjugate pair with respect to BCtype if there exist the following BCtypes: \\
~~~~~~ $\cdot$ fixing $X(y)$ while $Y(y)$ fluctuates, \\
~~~~~~ $\cdot$ fixing $Y(y)$ while $X(y)$ fluctuates, \\
and we can change one of the BCtypes to the other by adding or subtracting the term
\beann
\int d^3 y X(y) Y(y).
\enann
}
. Therefore, the choice of boundary term corresponds to the choice of thermodynamical ensemble in quantum gravity:
\beann
({\rm boundary ~ term } ) \longleftrightarrow ({\rm BCtype}) \longleftrightarrow ({\rm thermodynamical ~ ensemble}).
\enann
In this way, thermodynamical ensembles and gravitation are closely related \cite{BCMMWY}.
For example, the boundary term for a canonical ensemble is the York--Gibbons--Hawking term \cite{York2, GibbonsHawking}, which is for a Dirichlet-type boundary condition. The reason why this is so for a canonical ensemble is that it fixes the thermal length $\beta$ of $S^1$ and the space volume $V$ of $S^2$ of the Euclidean boundary, and because the BCtype-conjugate quantities to $\beta$ and $V$ are the energy $E$ and pressure $P$, respectively (the definitions of $E$ and $P$ will be given later). The York--Gibbons--Hawking term is given by
\bea
I^{E}_{c,\pd \MC{M}}[\BS{g}]=\frac{-1}{8\pi G} \int_{\pd \MC{M}} d^3y \sqrt{\gamma} (\Theta-\Theta_{sub}(\BS{\gamma})), \label{EQYGH}
\ena
where $\BS{\gamma}$ is the induced metric on $\pd \MC{M}$, $\Theta$ is the trace of the extrinsic curvature $\Theta_{\mu\nu}$ on $\pd \MC{M}$, and $\Theta_{sub}(\BS{\gamma})$ is the subtraction term introduced in order for the on-shell action to be finite in the asymptotically flat or AdS cases. Note that we do not necessarily have to introduce it for the finite-radius $S^2 \times \MB{R}$ boundary case since the on-shell action is finite without subtraction in that case. As we will see shortly, it affects the definition of energy through the Brown--York tensor (\ref{defBYtensor}). Therefore, for convenience, I introduce the term in order for the energy of flat spacetime (enclosed by the $S^2\times \MB{R}$ Lorentzian boundary) to be zero. In particular, I take the background subtraction method of \cite{GibbonsHawking}, in which the subtraction term is given by the trace of the extrinsic curvature of flat spacetime, $\Theta_{sub}=2/R_b$, where $R_{b}$ is the radius of the boundary $S^2$. For another example, the boundary term for the microcanonical ensemble \cite{BrownYork2} is given by
\bea
I^{E}_{mc,\pd\MC{M}}[\BS{g}]=\frac{- 1}{8\pi G} \int_{\pd \MC{M}} d^3y \sqrt{\gamma} \tau_{\mu}\Theta^{\mu\nu}\pd_{\nu}\tau, \label{EQMCbdy}
\ena
where $\tau_{\mu}$ and $\tau$ represent a Euclidean time direction and a Euclidean time coordinate on $\pd \MC{M}$, respectively. Choosing this boundary term corresponds to choosing the BCtype that fixes the energy $E$ and the space volume $V$ while allowing the inverse temperature $\beta$ to fluctuate. The energy $E$ (and pressure $P$) is defined through the Brown--York quasi-local energy momentum tensor $\tau^{ij}$ \cite{BrownYork1}:
\bea
\tau^{ij}(y) \equiv \frac{2}{\sqrt{|\gamma(y)|}} \frac{\delta I_{c}[\BS{g}] }{\delta \gamma_{ij}(y)}= \frac{-1}{8\pi G} \left[\Theta^{ij}-\gamma^{ij} \Theta \right] -\frac{2}{\sqrt{|\gamma(y)|}}\frac{\delta I_{sub}[\BS{\gamma}]}{\delta \gamma_{ij}(y)} ~ \label{defBYtensor},
\ena
where $I_{c}[\BS{g}]$ is the Lorentzian action corresponding to the canonical action and $I_{sub}[\BS{\gamma}]$ is the subtraction term, which is given by the negative of the second term of (\ref{EQYGH}). (The Euclidean version of the BY tensor is defined by just replacing $I_{c}[\BS{g}]$ with $I_{c}^{E}[\BS{g}]$ in the above equation.)
In this way, energy and momentum (densities), and stress of the gravitational theory are defined on the boundary quasi-locally.
With this BY tensor, we can define total energy $E$ as
\bea
E \equiv \int_{S^2} d^2 z \sqrt{\sigma} u_{i}u_{j}\tau^{ij}, \label{EQEdef}
\ena
where $\sigma_{ab} $ is the induced metric on the boundary $S^2$ and $u_{i}$ is normal to the boundary $S^2$ on $\MC{B}$.
In \cite{BrownYork2}, it was shown that choosing the boundary term (\ref{EQMCbdy}) and the BCtype that fixes energy $E$ (and volume $V$) leads to a well-posed variational problem
\footnote{
Precisely, fixing the energy density $u_{i}u_{j}\tau^{ij}$, the momentum density $-u_{i}\tau_{a}^{~i}$, and the stress tensor $\tau^{ab}$ leads to a well-posed variational problem.
}
and $E$ and $\beta$ form a conjugate pair with respect to BCtype.
Since the stress tensor (and energy density) become approximately isotropic, homogeneous on $S^2$, and time independent in gravitational thermal equilibrium, we can also define pressure $P$ as
\bea
P(z) \equiv \frac{1}{2}\sigma^{ab}\tau_{ab} ~ . \label{EQPdef}
\ena
This pressure $P$ (times the inverse temperature) is also shown to be the BCtype conjugate to the volume $V$ \cite{BrownYork2}.
Throughout this paper, I will consider the microcanonical action functional that consists of the Einstein--Hilbert term and this microcanonical boundary term.
\\
\\
$\blacksquare$ $ \{ \MF{Q}, \MF{W} \}$, the subscript of $\MC{D}\BS{g}$, represents a boundary condition whose BCtype corresponds to the ensemble $\MF{E}$, in other words, the thermodynamical variables that are held fixed in the ensemble $\MF{E}$.
\\
\\
$\blacksquare$ $\MC{D}\BS{g}$ is the integration measure of the path integral. How it is defined is one of the open problems of the path integral of GR. In this paper, however, I will not go into detail about this and instead assume it does not affect the result of the zero-loop approximation. \\
\\
$\blacksquare$ $\Gamma$ represents the integration (hyper)contour of the path integral. Gibbons, Hawking, and Perry showed that the $\Gamma$ of the Euclidean path integral of GR must not be the real one in order to avoid the divergence problem \cite{GibbonsHawkingPerry}. Although we have to take some purely complex contour, we do not know which is the correct one. However, for the partition function, the contour must be chosen such that the path integral is real and positive valued. Additionally, following \cite{Hartle, HalliwellHartle}, I assume possible integration contours to be infinite (or closed).
\subsection{Minisuperspace path integral method}
Evaluating the right-hand side of (\ref{EQpath}) is generally very difficult. Instead, an often-used method to capture its qualitative behavior is the minisuperspace path integral method, in which we truncate most of the degrees of freedom in the path integral. Throughout this paper, I concentrate on only the $S^2 \times Disc (D)$ topology and consider the following class of metrics:
\bea
\BS{g}=f(r)d\tau^2 + \frac{N^2}{f(r)}dr^2+R(r)^2 (d\theta^2 + \sin^2 \theta d\phi^2). \label{EQansatz}
\ena
The coordinate variables $\theta \in (0,\pi)$, $\phi \in (0,2\pi)$ are the standard coordinates of $S^2$, and $\tau \in (0,2\pi)$, $r\in (0,1]$ are the polar coordinates of the $Disc$, where $\tau$ represents the angle and $r$ the radius ($r=0$ corresponds to the center and $r=1$ to the boundary)
\footnote{
Without loss of generality, the coordinate ranges of $\tau$ and $r$ can be set as done in the text since shifting and rescaling of the coordinate variables, together with a suitable redefinition of $f$ and $N$, lead to the form of the metric (\ref{EQansatz}) and the coordinate ranges $\tau \in (0,2\pi)$ and $r\in (0,1]$. I would like to thank an anonymous referee who provided valuable remarks on this point.}
.
Restricting the class of metrics summed over in the path integral to the form of the metric (\ref{EQansatz}), the partition function (\ref{EQpath}) is approximated to be
\bea
Z_{\MF{C}}(\MF{Q},\MF{W}) \simeq \int_{\Gamma} dN \MC{D}f \MC{D}R_{\{ \MF{Q},\MF{W} \}} e^{-I_{\MF{C}}^{E}[f,R;N]}.
\ena
However, this is still difficult to deal with. Following the usual method \cite{HalliwellLouko2}, I further simplify this integral by saddle-point approximation for $f$ and $R$:
\bea
Z_{\MF{C}}(\MF{Q},\MF{W}) \simeq \int_{\Gamma} dN e^{-I_{\MF{C}}^{E, os\{ \MF{Q},\MF{W} \}}(N)},
\ena
where $I_{\MF{C}}^{E, os\{ \MF{Q},\MF{W} \}}(N)$ is the ``on-shell'' action function of $N$ for the given boundary data $\{ \MF{Q},\MF{W} \}$.
Finally, the partition function reduces to a one-dimensional complex integral along the contour $\Gamma$.
In terms of the minisuperspace variables, energy (\ref{EQEdef}), volume, inverse temperature, and pressure (\ref{EQPdef}) can be written as
\bea
E= \left. \frac{R}{G}\left( 1-\sqrt{f}\frac{R^{\p}}{N} \right)\right|_{r=1} ~, \hspace{2.25cm} \\
V=4\pi R(1)^2 ~ , \hspace{4.45cm} \\
\beta= 2\pi \sqrt{f(1)} ~ , \hspace{4.35cm} \\
P=\left. \frac{1}{8\pi G} \left( \frac{\sqrt{f}}{N}\frac{R^{\p}}{R}+\frac{1}{2N}\frac{f^{\p}}{\sqrt{f}}-\frac{1}{R} \right) \right|_{r=1}.
\ena
\section{DOS of GR}
\subsection{Derivation of one-dimensional integral}
As I explained in the previous section, reducing the gravitational path integral to a one-dimensional integral is one simple way to make it manageable. The first step is to derive the minisuperspace action functional for the microcanonical ensemble $I^{E}_{mc}[f,R;N]$. Since the full action for a microcanonical ensemble is
\bea
I_{mc}^{E}[\BS{g}]=\frac{-1}{16\pi G}\int_{\MC{M}} d^4 x \sqrt{g}\MC{R}+\frac{- 1}{8\pi G} \int_{\pd \MC{M}} d^3y \sqrt{\gamma} \tau_{\mu}\Theta^{\mu\nu}\pd_{\nu}\tau,
\ena
substituting the minisuperspace ansatz (\ref{EQansatz}) into the action leads to
\bea
I_{mc}^{E}[f,R;N] = \frac{-\pi}{G} \int^{1}_{0} dr \left[ \frac{ f (R^{\p})^2 }{N} +\frac{f^{\p}RR^{\p}}{N}+N \right] \hspace{4cm} \notag \\
-\frac{\pi}{2 G}\left. \left( \frac{f^{\p}R^2}{N}+\frac{4f R R^{\p}}{N} \right) \right|_{r=0}+\left. \frac{2\pi}{G}\frac{RR^{\p}}{N}f \right|_{r=1} ~ , \label{EQactionfR}
\ena
that is, the microcanonical partition function, or DOS, is now approximated to be
\bea
Z(E,V) \simeq \int_{\Gamma} dN \MC{D}f \MC{D}R_{ \{E,V \} } e^{-I^{E}_{mc}[f,R;N]}. \label{EQPFfR}
\ena
The next step is to derive the ``on-shell'' action function. In order to seek the saddle point for $f$ and $R$, take the variation with respect to $f$ and $R$:
\bea
\delta_{f,R} I^{E}_{mc}[f,R,N]= \frac{-\pi}{G} \int dr \left[ -\frac{RR^{\p\p} }{N}\delta f + \left( -\frac{2f R^{\p\p}}{N}-\frac{f^{\p\p}R}{N}-\frac{2f^{\p}R^{\p}}{N} \right)\delta R \right] \hspace{3cm} \notag \\
+ \left[ -2\pi\sqrt{f} ~ \delta\left\{ \frac{R}{G}\left(1-\sqrt{f}\frac{R^{\p}}{N}\right) \right\} + \frac{-\pi}{G}\left( \frac{2f R^{\p}}{N}+\frac{f^{\p}R}{N} -2\sqrt{f} \right)\delta R \right]_{r=1} \notag \\
-\frac{\pi}{2G}\left[ \frac{2RR^{\p}}{N}\delta f + R^2 \delta \left( \frac{f^{\p}}{N} \right) +\frac{4 f R}{N}\delta (R^{\p}) \right]_{r=0}. \hspace{4.5cm}
\ena
Since we are now considering the DOS $Z_{mc}(E,4\pi R_{b}^2)$, that is, a gravitational (minisuperspace) path integral with the boundary conditions
\bea
\left. \frac{R}{G}\left(1-\sqrt{f}\frac{R^{\p}}{N}\right)\right|_{r=1} = E ~ , \label{EQbcon1} \\
R(1)=R_{b}, \label{EQbcon2}
\ena
the variation at the boundary $r=1$ vanishes. Additionally, the smoothness of the metrics at the center of the $Disc$ requires the following conditions for all minisuperspace histories:
\footnote{
Here, I assume all the metrics summed over in the gravitational path integral are smooth. Another boundary condition that corresponds to summing over non-smooth metrics was proposed in \cite{LoukoWhiting}.
}
\bea
f(0)=0 \label{EQccon1} \\
\frac{1}{2}\frac{f^{\p}(0)}{N}= 1. \label{EQccon2}
\ena
Therefore, the variation at the center $r=0$ also vanishes. These lead to the equations of motion for saddle-point geometries:
\bea
R^{\p\p}=0 \label{EQEOMf} \\
f^{\p\p}R+2f^{\p}R^{\p} =0. \label{EQEOMR}
\ena
The solution of these equations can be written as
\bea
R(r)=Ar+R_{H}, \\
f(r)=B-\frac{C}{R(r)},
\ena
where $A$, $R_{H}$, $B$, and $C$ are integration constants. From the boundary conditions (\ref{EQbcon2})--(\ref{EQccon2}), we obtain
\bea
f(r) = \frac{2N R_{H}}{R_{b}-R_{H}} \left(1-\frac{R_{H}}{R(r)} \right) \label{EQfsol}, \\
R(r) = (R_{b}-R_{H}) r + R_{H} \label{EQRsol}.
\ena
Moreover, using the remaining boundary condition (\ref{EQbcon1}), we can derive the equation for $R_{H}$:
\bea
R_{H}(R_{b}-R_{H})^2-\frac{ N R_{b}}{2}\left(\frac{GE}{R_{b}}-1\right)^2=0. \label{EQforRH}
\ena
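To illustrate this multi-valuedness concretely, the following numerical sketch, with arbitrary sample values that are not taken from the text (and $G=1$), solves (\ref{EQforRH}) as a cubic in $R_{H}$ at fixed $N$ by factoring out one known branch:

```python
import cmath

# Numerical illustration (arbitrary sample values, G = 1; not from the
# paper): the constraint R_H (R_b - R_H)^2 = N R_b (G E / R_b - 1)^2 / 2
# is cubic in R_H, so R_H(N) has three branches for a generic N.
G, R_b, E = 1.0, 5.0, 2.0
eta2 = (G * E / R_b - 1.0) ** 2

r = 1.0                                          # pick one branch value
N = 2.0 * r * (R_b - r) ** 2 / (R_b * eta2)      # the inverse map N(R_H)

# The cubic factors as (R_H - r)(R_H^2 + (r - 2 R_b) R_H + (R_b - r)^2);
# the quadratic factor yields the remaining two branches.
b, c = r - 2.0 * R_b, (R_b - r) ** 2
disc = cmath.sqrt(b * b - 4.0 * c)
branches = [r, (-b + disc) / 2.0, (-b - disc) / 2.0]

# every branch satisfies the original constraint equation
cubic = lambda x: x * (R_b - x) ** 2 - N * R_b * eta2 / 2.0
assert all(abs(cubic(x)) < 1e-9 for x in branches)
print([x.real if isinstance(x, complex) else x for x in branches])
```

With these sample values all three branches happen to be real; for sufficiently large $N$, two of them form a complex-conjugate pair.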
This indicates that $R_{H}(N)$ is not a single-valued function but a triple-valued one. Postponing the discussion of this multi-valuedness until the next subsection and using the saddle-point approximation with (\ref{EQfsol}) and (\ref{EQRsol}), we obtain
\bea
I_{mc}^{E,os\{E,4\pi R_{b}^2 \}}(N)
=\frac{-1}{G} \left[ - 2\pi R_{H}(N)(R_{b}-R_{H}(N)) + \pi N \right] -\frac{\pi R_{H}(N)^2}{G}. \label{EQactionN}
\ena
Finally, we obtain a one-dimensional complex integral expression of the DOS:
\bea
Z_{mc}(E,4\pi R_{b}^2) \simeq \int_{\Gamma}dN e^{-I_{mc}^{E,os\{E,4\pi R_{b}^2 \}}(N)} \label{EQoneintegral}
\ena
One thing I would like to note is that, if we perform the saddle-point approximation for $N$, we obtain the expected expression
\footnote{
In the subscript of $N$, $\it{sp}$ denotes ``saddle point,'' not ``south pole.''
}
\bea
S(E,4\pi R_{b}^2)=\log Z_{mc}(E,4\pi R_{b}^2) \simeq \frac{\pi}{G}R_{H}(N_{sp})^2,
\ena
where I used the constraint equation
\bea
2\pi R_{H}(N)(R_{b}-R_{H}(N)) - \pi N=0,
\ena
which can be derived from the variation of (\ref{EQactionfR}) with respect to $N$.
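As a consistency check, which I spell out here for completeness, substituting $\pi N_{sp}=2\pi R_{H}(N_{sp})(R_{b}-R_{H}(N_{sp}))$ into (\ref{EQactionN}) makes the square bracket vanish,
\beann
I_{mc}^{E,os\{E,4\pi R_{b}^2 \}}(N_{sp})
=\frac{-1}{G} \left[ - 2\pi R_{H}(N_{sp})(R_{b}-R_{H}(N_{sp})) + 2\pi R_{H}(N_{sp})(R_{b}-R_{H}(N_{sp})) \right] -\frac{\pi R_{H}(N_{sp})^2}{G}
=-\frac{\pi R_{H}(N_{sp})^2}{G},
\enann
so the relation $S=-I_{mc}^{E,os\{E,4\pi R_{b}^2 \}}(N_{sp})=\frac{\pi}{G}R_{H}(N_{sp})^2$ follows immediately.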
\subsection{Riemann surface}
\iffigure
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.33\hsize}
\begin{center}
\includegraphics[width=2cm]{NplaneU.eps} \\
\hspace{1cm} \\
\includegraphics[width=2cm]{NplaneM.eps} \\
\hspace{1cm} \\
\includegraphics[width=2cm]{NplaneL.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.33\hsize}
\begin{center}
\includegraphics[clip, width=6cm]{RHplane.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{LEFT: Three complex $N$ planes consisting of the Riemann surface of the ``on-shell'' action. The orange circles represent the critical points of the map (\ref{EQN}). Orange dashed lines show one possible choice of branch cuts; branch cut $\BS{a}$ is $\left(\frac{8 R_{b}^2}{27\eta^2},\infty \right)$ on the real axis and branch cut $\BS{b}$ is $(-\infty,0)$ on the real axis. RIGHT: An $R_{H}$ complex plane that is homeomorphic to the Riemann surface by the map (\ref{EQN}). The small circle represents $R_{H}=\frac{1}{3}R_{b}$ (corresponding to $N=\frac{8 R_{b}^2}{27\eta^2}$) and the large circle represents $R_{H}=R_{b}$ (corresponding to $N=0$). }
\label{FIGA}
\end{center}
\end{figure}
\fi
Since (\ref{EQactionN}) is not a single-valued function on the complex $N$ plane, we have to know on what kind of Riemann surface the function (\ref{EQactionN}) is defined. Since the inverse of $R_{H}(N)$ can be written as
\bea
N=\frac{2}{R_{b}\eta^2} R_{H}(R_{b}-R_{H})^2 \label{EQN}
\ena
by using (\ref{EQforRH}), the ``on-shell'' action function can be written as
\bea
I_{mc}^{E,os\{E,4\pi R_{b}^2 \}}(R_{H})
=\frac{\pi}{G} \left[ 2R_{b}\left(1-\frac{1}{\eta^2} \right)R_{H} + \left(\frac{4}{\eta^2}-3 \right)R_{H}^2 -\frac{2}{R_{b}\eta^2}R_{H}^3 \right]. \label{EQactionRH}
\ena
Therefore, it can be a single-valued function on a complex $R_{H}$ plane. From the inverse map (\ref{EQN}), we can see that there are two critical points, $R_{H}=R_{b}$ and $\frac{1}{3}R_{b}$, which correspond to $N=0$ and $\frac{8 R_{b}^2}{27\eta^2}$, respectively, on the $N$ planes. Then three sheets suffice for (\ref{EQactionN}), and one possible choice of the two branch cuts is as follows:
\beann
{\rm Branch ~ cut ~ }\BS{a}= \left\{N \left| Re(N)\in \left(\frac{8 R_{b}^2}{27\eta^2}, \infty \right), Im(N)=0 \right. \right\} \\
{\rm Branch ~ cut ~ }\BS{b}= \left\{N \left| Re(N)\in \left(-\infty , 0 \right), Im(N)=0 \right. \right\}. \hspace{0.95cm}
\enann
The upper sheet has the branch cut $\BS{a}$, the middle sheet has both branch cuts, and the lower sheet has the branch cut $\BS{b}$ (Fig. \ref{FIGA}, left). The right panel of Fig. \ref{FIGA} shows the complex $R_{H}$ plane, on which I show the relevant regions and how they correspond to the sheets.
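The location of these critical points can be made explicit, a step not written out above, by differentiating the inverse map (\ref{EQN}):
\beann
\frac{dN}{dR_{H}}=\frac{2}{R_{b}\eta^2}\left[ (R_{b}-R_{H})^2-2R_{H}(R_{b}-R_{H}) \right]=\frac{2}{R_{b}\eta^2}(R_{b}-R_{H})(R_{b}-3R_{H}),
\enann
which vanishes precisely at $R_{H}=R_{b}$ and $R_{H}=\frac{1}{3}R_{b}$.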
\subsection{Integration contour}
\iffigure
\begin{figure}
\begin{center}
\includegraphics[width=5.cm]{NcontourEsmall.eps} ~~
\includegraphics[width=5.cm]{NcontourEmiddle.eps} ~~
\includegraphics[width=5.cm]{NcontourElarge.eps}
\caption{The location of the saddle point on the complex $R_{H}$ plane and its steepest descent (ascent) contours. The white circle represents the saddle point and solid (dot-dashed) black lines are its steepest decent (ascent) contours. The region where the real part of $-I^{E,os}_{mc}$ is higher (lower) than the saddle-point value is red (green) colored. LEFT: $R_{b}=5 \sqrt{G}, \sqrt{G}E=0.1 $ belonging to case (i). MIDDLE: $R_{b}=5 \sqrt{G} ,\sqrt{G}E=5\left( 1+\sqrt{\frac{2}{3}} \right)$ belonging to case (ii). RIGHT: $R_{b}=5\sqrt{G}, \sqrt{G}E=2$ belonging to case (iii). }
\label{FIGNcon}
\end{center}
\end{figure}
\fi
To estimate (\ref{EQoneintegral}), I will use the saddle-point approximation. From (\ref{EQactionRH}) and (\ref{EQN}), the location of the saddle point can be derived as follows:
\bea
0= \left. \frac{\delta I_{mc}^{E, os\{E,4\pi R_{b}^2\}}(N)}{\delta N}\right|_{N=N_{sp}}= \frac{\pi}{G}\frac{\left( R_{H}(N_{sp})-(1-\eta^2)R_{b} \right)}{(R_{b}-R_{H}(N_{sp}))} \notag \\
\notag\\
\Longrightarrow ~~ R_{H}(N_{sp})=(1-\eta^2)R_{b}. \hspace{3cm}
\ena
For given $E$ and $R_{b}$, there is only one saddle point. Therefore, {\it if} we can choose a contour whose dominant contribution comes only from $N_{sp}$, then the entropy would be $\log Z_{mc} \simeq \frac{\pi}{G}R_{H}(N_{sp})^2$. However, as we will see shortly, there are no natural contours that give the BH area--entropy relationship for every choice of $E$ and $R_{b}$.
Depending on the energy $E$ and volume $4\pi R_b^2$, the behaviors of the ``on-shell'' action are qualitatively different and classified as follows:
\begin{center}
(i) $ 0<GE<R_{b} \left(1-\sqrt{2/3} \right)$ and $R_{b} \left(1+\sqrt{2/3} \right)<GE<\infty$; \\
(ii) $GE=R_{b} \left(1\pm \sqrt{2/3} \right)$; \\
(iii) $R_{b} \left(1-\sqrt{2/3} \right)<GE<R_{b}$ and $ R_{b}<GE<R_{b} \left(1+\sqrt{2/3} \right)$. \\
\end{center}
Fig. \ref{FIGNcon} shows typical examples of (i)--(iii). Each figure is the complex $R_{H}$ plane, or equivalently, the three complex $N$ sheets, on which I show the saddle point (big white circle), the steepest descent and ascent contours for the saddle (solid black lines and dot-dashed black lines, respectively), and branch cuts (dashed orange lines). We can easily see that there exist essentially only two types of contour. In the case of (i), for example, these are ABE and A(E)BCD. We call them Type I and Type II, respectively.
\begin{center}
Type I : ABE for (i), ~~ FGI for (ii), ~~ JKN for (iii) ~~~~~~~~~~~~~~~~~~~ \\
Type II : A(E)BCD for (i), ~~ F(I)GH for (ii), ~~ J(N)KLM for (iii) \\
\end{center}
In each case, there is a contribution other than the saddle point, namely the points B and L of Fig. \ref{FIGNcon}. Since these are not saddle points, we cannot apply the saddle-point approximation to them directly. However, by changing the integration variable from $N$ to $R_{H}$ via (\ref{EQN}), we find two ``saddle points'':
\bea
0= \left. \frac{\delta I_{mc}^{E, os\{E,4\pi R_{b}^2\}}(R_{H})}{\delta R_{H}}\right|_{R=R_{H,sp}}= -\frac{2 \pi}{G R_{b} \eta^2}(3R_{H,sp}-R_{b})(R_{H,sp}-(1-\eta^2)R_{b}) \notag \\
\Longrightarrow ~~~~ R_{H,sp}=\frac{1}{3}R_{b},\ (1-\eta^2)R_{b}. \hspace{4cm}
\ena
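This factorization can be verified numerically; the following sketch, with arbitrary sample values not taken from the text, compares it against a central finite difference of the ``on-shell'' action (\ref{EQactionRH}):

```python
import math

# Numerical sanity check (sample values, not from the paper) that the
# derivative of the "on-shell" action (EQactionRH) factorizes as
#   dI/dR_H = -(2 pi / (G R_b eta^2)) (3 R_H - R_b)(R_H - (1 - eta^2) R_b).
G, R_b, eta = 1.0, 5.0, -0.6

def action(rh):  # (EQactionRH)
    return (math.pi / G) * (2 * R_b * (1 - 1 / eta**2) * rh
                            + (4 / eta**2 - 3) * rh**2
                            - 2 / (R_b * eta**2) * rh**3)

def claimed_derivative(rh):  # the factorized form above
    return -(2 * math.pi / (G * R_b * eta**2)) \
        * (3 * rh - R_b) * (rh - (1 - eta**2) * R_b)

h = 1e-6
for rh in (0.7, 1.9, 3.2):
    fd = (action(rh + h) - action(rh - h)) / (2 * h)  # central difference
    assert abs(fd - claimed_derivative(rh)) < 1e-4

# both roots annihilate the derivative: R_H = R_b/3 and (1 - eta^2) R_b
assert abs(claimed_derivative(R_b / 3)) < 1e-9
assert abs(claimed_derivative((1 - eta**2) * R_b)) < 1e-9
print(R_b / 3, (1 - eta**2) * R_b)  # the two "saddle points"
```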
In terms of $R_{H}$, (\ref{EQoneintegral}) is rewritten as
\bea
Z_{mc}(E,4\pi R_{b}^2) \simeq \frac{2}{R_{b}\eta^2} \int_{\Gamma} dR_{H} (R_{b}-R_{H})(R_{b}-3R_{H}) e^{-I_{mc}^{E,os\{ E,4\pi R_{b}^2 \}}(R_{H})}.\label{EQRHintegral}
\ena
Similar to the method of \cite{BradenWhitingYork}, we could evaluate the contribution around the ``saddle points.'' At the zero-loop level, the partition function for each type is given by
\bea
Z^{I}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}
e^{\frac{\pi}{G}R_{b}^2 \frac{1}{3}\left(-1+\frac{8}{9\eta^2} \right)} ~~~~~ {\rm for ~ (i)} \\
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~~~ {\rm for ~ (ii) ~ and ~ (iii)} \label{EQsaddle1}
\end{cases} \\
Z^{II}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}~
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~ {\rm for ~ (i)} \\
e^{\frac{\pi}{G}R_{b}^2 \frac{1}{3}\left(-1+\frac{8}{9\eta^2} \right)} ~~~~ {\rm for ~ (ii) ~ and ~ (iii)}. \label{EQsaddle2}
\end{cases}
\ena
The behaviors of the corresponding entropies are shown in Fig. \ref{FIGen}.
\iffigure
\begin{figure}
\hspace{1.cm}
\includegraphics[width=7cm]{Entropy1.eps}
\hspace{1.cm}
\includegraphics[width=7cm]{Entropy2.eps}
\caption{The behavior of entropy corresponding to a type I DOS and type II DOS. Black curves indicate the contribution from the saddle point $R_{H}=(1-\eta^2)R_{b}$ and orange curves indicate the contribution from $R_{H}=R_{b}/3$. In each case, I set $R_{b}=5\sqrt{G}$. LEFT: type I DOS. RIGHT: type II DOS. }
\label{FIGen}
\end{figure}
\fi
\section{Discussion}
\subsection{Summary of the result}
In this paper, I evaluated the gravitational microcanonical partition function, or density of states (DOS), of an $S^2 \times \MB{R}$ Lorentzian boundary by using the minisuperspace approximation and the saddle-point approximation. I considered only the $S^2 \times D$ topology sector of the path integral. Following the conventional technique, I first performed the saddle-point approximation for the minisuperspace functions $f$ and $R$ without specifying the hypercontour and obtained the one-dimensional lapse integral (\ref{EQoneintegral}). After that, I found that there is only one saddle point of the ``on-shell'' action (\ref{EQactionN}) and showed that the Riemann surface of the ``on-shell'' action consists of three sheets. There are two ways to choose an infinite convergent contour, and each contour picks up a contribution other than the saddle point. Taking this into account, I obtained two types of DOS, (\ref{EQsaddle1}) and (\ref{EQsaddle2}), by the ``saddle point'' approximation:
\beann
Z^{I}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}
e^{\frac{\pi}{G}R_{b}^2 \frac{1}{3}\left(-1+\frac{8}{9\eta^2} \right)} ~~~~~ {\rm for ~} |\eta| \geq \sqrt{\frac{2}{3}} \\
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~~~ {\rm for ~} |\eta| < \sqrt{\frac{2}{3}}
\end{cases} \\
Z^{II}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}~
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~ {\rm for ~} |\eta| \geq \sqrt{\frac{2}{3}} \\
e^{\frac{\pi}{G}R_{b}^2 \frac{1}{3}\left(-1+\frac{8}{9\eta^2} \right)} ~~~~ {\rm for ~} |\eta| < \sqrt{\frac{2}{3}},
\end{cases}
\enann
where the ``shifted energy'' $\eta$ was defined by $\eta \equiv\frac{GE}{R_{b}}-1$. The behaviors of the corresponding entropies are shown in Fig. \ref{FIGen}.
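The transition point admits a quick numerical cross-check (illustrative only, and not part of the derivation): the two zero-loop exponents coincide exactly at $|\eta|=\sqrt{2/3}$.

```python
import math

# Zero-loop entropies (in units of pi R_b^2 / G) of the two contributions
# entering the type I/II DOS above; illustrative check only.
s_saddle = lambda eta: (1.0 - eta**2) ** 2                  # R_H = (1 - eta^2) R_b
s_branch = lambda eta: (-1.0 + 8.0 / (9.0 * eta**2)) / 3.0  # R_H = R_b / 3

eta_c = math.sqrt(2.0 / 3.0)      # claimed transition point
assert abs(s_saddle(eta_c) - s_branch(eta_c)) < 1e-12
print(s_saddle(eta_c))  # both exponents equal 1/9 at the transition
```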
\subsection{Relation to previous work}
As I mentioned in the introduction, there have been several attempts to derive the DOS of GR. In those works, inverse Laplace transformation (ILT) is used to obtain a DOS from a canonical partition function
\bea
Z_{mc}(E,V)=\frac{1}{2\pi i} \int^{i\infty +p}_{-i\infty+p} d\beta Z_{c}(\beta, V) e^{\beta E} ~~~~ {\rm for ~ } p>0. \label{EQZmc}
\ena
I will briefly explain these attempts and comment on the relationships (and differences) between the results obtained in this work and those of previous works.
\subsubsection*{$\blacksquare$ ILT of York's canonical partition function \cite{BradenWhitingYork}}
The first attempt to derive the DOS of a spacetime with a finite-radius $S^2 \times \MB{R}$ Lorentzian boundary was the work of Braden, Whiting, and York \cite{BradenWhitingYork}. In order to obtain the DOS, they considered the $S^2 \times D$ topology sector of York's canonical partition function $Z_{c}^{Y}(\beta, V)$ \cite{York1}
\footnote{
In \cite{York1}, York considered only the real saddle points of canonical Euclidean action and just assumed one of them gives the dominant contribution to the path integral without specifying its hypercontour.
}
\footnote{
To be precise, since $Z_{c}^{Y}(\beta,4\pi R_{b}^2)$ must be a function of $\beta$ and $R_{b}$, it is given by
\beann
\log Z_{c}^{Y}(\beta, 4\pi R_{b}^2) \simeq \max [{\rm R.H.S. ~ of ~} (\ref{EQZc}) ],
\enann
and is defined for positive $\beta$ and $R_{b}$ that satisfy the condition $8\pi R_{b} \geq 3\sqrt{3}\beta$.
}
\bea
\log Z_{c}^{Y}(\beta, 4\pi R_{b}^2) \simeq -\frac{1}{G}\left[ 3\pi R_{H}(\beta,R_{b})^2 -4\pi R_{H}(\beta,R_{b}) R_{b}\left( 1-\sqrt{1-\frac{R_{H}(\beta,R_{b})}{R_{b}}} \right) \right], \label{EQZc}
\ena
where the function $R_{H}(\beta,R_{b})$ is given by the equation
\bea
\frac{\beta}{4\pi}=R_{H}\sqrt{1-\frac{R_{H}}{R_{b}}}.
\ena
When we evaluate the integral (\ref{EQZmc}), $Z_{c}$ must be analytically continued (in $\beta$). As a result, they found that the integral is defined on a Riemann surface consisting of three complex $\beta$ sheets. They re-expressed (\ref{EQZmc}) with (\ref{EQZc}) as
\bea
Z_{mc}^{BWY}(E,4\pi R_{b}^2) \simeq \int_{\Gamma} d\xi (1+3\xi^2) \exp\left[ \frac{4\pi R_{b}^2}{G}\left( -\frac{1}{4}(3\xi^4+2\xi^2-1)-i \eta \xi (1+\xi^2) \right) \right],
\ena
where they defined $\xi$ by
\bea
\frac{\beta}{4\pi}=-i R_b \xi (1+\xi^2).
\ena
Their integration contour $\Gamma$ is uniquely determined by construction and convergence, and the dominant contribution comes from one of the three saddle points depending on $E$ and $R_{b}$: $\xi=-i\eta$ for $|\eta|<1/\sqrt{3}$ and $\xi=-i ~ {\rm sgn}(\eta)/\sqrt{3}$ for $|\eta|>1/\sqrt{3}$. Their DOS at the zero-loop level is
\bea
Z^{BWY}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}
e^{\frac{\pi}{G}R_{b}^2 \frac{4}{3}\left(1-\frac{2}{\sqrt{3}}|\eta| \right)} ~~~~~ {\rm for ~ } |\eta| \geq \frac{1}{\sqrt{3}} \\
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~~~ {\rm for ~ } |\eta| < \frac{1}{\sqrt{3}}.
\end{cases}
\ena
This is qualitatively similar to the type I DOS. In particular, for small $|\eta|$, both $Z^{BWY}_{mc}$ and $Z^{I}_{mc}$ give the BH entropy. In terms of the microcanonical path integral, the contribution to their DOS, and also to the type I DOS, comes from the saddle-point geometry for small $|\eta|$ but not for large $|\eta|$. One of the differences between their formulation and the type I DOS is the transition point. The transition point $|\eta|=1/\sqrt{3}$ of $Z_{mc}^{BWY}$ is nothing but the critical geometry of York's canonical partition function, where the stability changes.
\footnote{
To be precise, only $\eta=-1/\sqrt{3}$ corresponds to the critical geometry.
}
On the other hand, the meaning of the transition point $|\eta|=\sqrt{2/3}$ of $Z^{I}_{mc}$ is not clear. Another difference is the behavior for large $|\eta|$. As I will comment in the next subsection, it is not clear whether the large-$|\eta|$ behavior of the $S^2 \times D$ topology sector is significant. If it is, the energy eigenstates may vanish at high energy in their formulation, while they do not in the type I case.
\subsubsection*{$\blacksquare$ ILT of Louko--Whiting canonical partition function \cite{LoukoWhiting}}
After York's canonical partition function, Halliwell and Louko considered a canonical partition function in terms of the minisuperspace (\ref{EQansatz}) and sought a suitable (infinite) integration contour on the complex $N$ plane that reproduces York's canonical partition function \cite{HalliwellLouko1}. However, their conclusion was negative. Following this result, Louko and Whiting considered a different canonical partition function by using different ``boundary'' conditions for the minisuperspace path integral. As was seen in Section 2, there are two boundaries for the minisuperspace path integral: $r=0$ and $r=1$. Of course, $r=0$ is not a true boundary; it is fictitious. The boundary conditions (\ref{EQccon1}) and (\ref{EQccon2}) at $r=0$ fix the topology to be $S^2 \times D$ and ensure the smoothness of the metrics at the center, respectively. They discarded the latter boundary condition; that is, they allowed non-smooth metrics to be summed over in the path integral. In order to discard the condition while maintaining the consistency of the variational principle, they added the ``boundary'' term $\left. \frac{\pi}{2G}R^2\left( \frac{f^{\p}}{N}-2 \right) \right|_{r=0}$ to the canonical minisuperspace action, which is straightforwardly obtained from the full action.
\footnote{
In the conventional path integral, where only smooth metrics are summed over, adding this term has no effect since this term equals zero for all Euclidean histories. However, when we extend the class of metrics summed over to include non-smooth metrics, it becomes very important as I will explain shortly.
}
It is given by
\bea
I^{E,LW}_{c}[f,R;N]= \frac{-\pi}{ G} \int^{1}_{0} dr \left[ \frac{ f (R^{\p})^2 }{N} +\frac{f^{\p}RR^{\p}}{N}+N \right] \hspace{4cm} \notag \\
-\left. \left( \frac{\pi R^2}{G} +\frac{2\pi f R R^{\p}}{G N} \right) \right|_{r=0}+\left. \frac{2\pi}{G} \sqrt{f}R \right|_{r=1}.
\ena
If we take the variation of their action with the boundary conditions $2\pi\sqrt{f(1)}=\beta$, $R(1)=R_{b}$, and $f(0)=0$, we obtain the EOMs (\ref{EQEOMf}) and (\ref{EQEOMR}), the constraint equation, and the smoothness condition (\ref{EQccon2}) as the Euler--Lagrange equation for $R(0)$. One unsatisfactory point is that, as they remarked, the relationship to the full action is not clear. Since the ``boundary'' condition at $r=0$ fixes only $f$, the natural expression for this minisuperspace path integral may include an additional integration over $R_{H}\equiv R(0)$. Ignoring possible non-trivial measures for the $R_{H}$ integral and performing the saddle-point approximation for $f$ and $R$, their path integral for the canonical partition function reduces to the two-dimensional complex integral
\bea
Z_{c}^{LW}(\beta, 4\pi R_{b}^2) \simeq \int_{\Gamma} dR_{H} dN \exp \left[ \frac{1}{2G}\left[ \frac{1}{N}\beta^2 R_{b} (R_{b}-R_{H}) +N \right] +\frac{\pi}{G}R_{H}^2 -\frac{\beta R_{b}}{G} \right]. \label{EQcpLW}
\ena
Their integration contour is a closed circle around the origin for $N$ and the finite interval $(0, R_{b})$ for $R_{H}$. This finite contour for $R_H$ is determined by the ``Wheeler--de Witt equation.''
\footnote{
Of course, since we are not considering wave functionals, there are no Wheeler--de Witt equations for partition functions. However, by construction, we can derive the differential equation that must be satisfied for gravitational partition functions, as we can derive a Wheeler--de Witt equation from the path integral expression of wave functionals \cite{HartleHawking}. They call this the ``Wheeler--de Witt equation'' for the partition function.
}
One remarkable point is that their canonical partition function shares some properties with York's while specifying a convergent integration contour explicitly. Then they obtained a DOS by ILT of the canonical partition function. At the zero-loop level, it is given by
\bea
Z^{LW}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}
0 \hspace{3.02cm} {\rm for ~ } |\eta| \geq 1 \\
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~~~ {\rm for ~ } |\eta| < 1.
\end{cases}
\ena
Again, this DOS is similar to the type I DOS in the sense that it gives BH entropy for small $|\eta|$. The differences are, as before, at the transition point and the behavior with large $|\eta|$.
\subsubsection*{$\blacksquare$ ILT of Melmed--Whiting canonical partition function \cite{MelmedWhiting}}
Melmed and Whiting again considered a canonical partition function of the same form as (\ref{EQcpLW}) but chose a different integration contour \cite{MelmedWhiting}. After changing the integration variable in (\ref{EQcpLW}) via $\alpha\equiv N/(R_{b}-R_{H})$, they chose the integration contour to be the positive part of the real axis for $\alpha$ and the semi-infinite line parallel to the imaginary axis $ \left\{R_{H}=\frac{\alpha}{4\pi}+i k \left| k \in \left(-\sqrt{\frac{\alpha R_{b}}{2\pi} } \left(1-\frac{\beta}{\alpha} \right),\infty \right) \right. \right\}$. The resulting canonical partition function also shares some properties with York's. However, their DOS, obtained by ILT of their canonical partition function, is similar to, but differs from, Louko--Whiting's DOS:
\bea
Z^{MW}_{mc}(E,4\pi R_{b}^2) \simeq
\begin{cases}
\frac{2}{R_{b}}\sqrt{\frac{G}{(\eta^2-1)}} \hspace{1.4cm} {\rm for ~ } |\eta| \geq 1 \\
e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } ~~~~~~~~~~ {\rm for ~ } |\eta| < 1
\end{cases}
\ena
at the leading order.
\footnote{
For $|\eta|\leq 1$, it is the zero-loop order. For $|\eta|\geq 1$, however, the zero-loop contribution is $1$ and the energy and volume dependence come from the evaluation including the neighborhood of the ``saddle point'' $\alpha=0$.
}
\subsection{Further Remarks}
\begin{table}
\begin{center}
\begin{tabular}{| l || l | l | l |}
\hline
& large $|\eta|$ behavior & small $|\eta|$ behavior & transition point \\
\hline \hline
$ Z^{BWY}_{mc} $ & $e^{\frac{\pi}{G}R_{b}^2 \frac{4}{3}\left(1-\frac{2}{\sqrt{3}}|\eta| \right)}$ & $e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } $ & $|\eta|=\frac{1}{\sqrt{3}}$ \\
\hline
$ Z^{LW}_{mc} $ & 0 & $e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } $ & $|\eta|= 1$ \\
\hline
$ Z^{MW}_{mc} $ & $ \frac{2}{R_{b}} \sqrt{\frac{G}{(\eta^2-1)}}$ & $e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } $ & $|\eta|= 1$ \\
\hline
$ Z^{I}_{mc} $ & $e^{\frac{\pi}{G}R_{b}^2 \frac{1}{3}\left(-1+\frac{8}{9\eta^2} \right)}$ & $e^{\frac{\pi}{G}R_{b}^2(1-\eta^2)^2 } $ & $|\eta|= \sqrt{\frac{2}{3}} $ \\
\hline
\end{tabular}
\caption{The list of the leading behavior of previously obtained DOSs and the type I DOS obtained in this work. All of these have transition points at finite $\eta$ and exhibit a BH entropy--area relationship for small $|\eta|$. They are evaluated at the zero-loop order except the large $|\eta|$ behavior of $Z^{MW}_{mc}$, which has vanishing entropy at the zero-loop order. }
\label{TA1}
\end{center}
\end{table}
As we saw in the previous subsection, there are many candidates for the ($ S^2 \times D$ topology sector of the) canonical partition function of GR, presumably due to the nonexistence of suitable infinite integration contours, as shown by Halliwell and Louko \cite{HalliwellLouko1}. Since these candidates were constructed to satisfy desired properties, such as the domination of the BH phase in the classical domain $E\lesssim R_{b}/G$, the DOSs obtained from them by ILT share the property that they reproduce BH entropy for small $|\eta|$. However, the small differences among the canonical partition functions result in differences in the high-energy (large-$|\eta|$) behaviors of the DOSs. Therefore, in this work, instead of deriving the DOS by ILT from ambiguous canonical partition functions, I tried to derive the DOS directly from the microcanonical path integral. If the integration contour is supposed to be infinite, there are only two types of contour. One gives behavior similar to the previously obtained DOSs (type I) and the other has peculiar behavior (type II). I believe that the type I DOS describes the correct zero-loop behavior (of the $S^2 \times D$ sector) and that the DOS does not vanish for arbitrarily high energy, in contrast with the previously obtained DOSs. Moreover, as I showed in Section 3 (and as was also shown in \cite{MelmedWhiting}), the integration contours for all the DOSs listed in Table \ref{TA1} fail to capture the saddle point (i.e., the point satisfying the Einstein equation) for large $|\eta|$. If we think the gravitational path integral can always be approximated by the contribution of the saddle point(s) for any boundary data, as in (\ref{EQintro}), then the behavior for large $|\eta|$ must be replaced (dominated) by the other topology sectors, in which the path integration can be approximated by the saddle point(s), at least for large $|\eta|$.
\subsection{Open Problems}
There are some open problems, which I list here. \\
$\cdot$ Finding the corresponding canonical partition function obtained from the Laplace transformation of type I (or II) DOS. \\
$\cdot$ The effect of the inclusion of matter fields or cosmological constants. \\
$\cdot$ Although there exists a non-vanishing DOS for $E<0$, I ignored it in this work. Is this justified? \\
$\cdot$ As in the AdS case~\cite{Maldacena}\cite{Marolf}, for $0<E<R_{b}/G$, the purification of the microcanonical density matrix may correspond to an eternal BH geometry with boundary at $R_{b}$. What is the corresponding geometry for $R_{b}/G<E<2R_{b}/G$ (and for $2R_{b}/G < E$ if saddle-point geometries do not exist in the other topology sector)? Do there exist corresponding geometries for the purification for large $E$? \\
$\cdot$ The existence of saddle-point geometries in the other topology sectors which dominate the type I DOS for large $|\eta|$ should be explored.
Some of these problems will be considered in \cite{Miyashita}.
\section*{Acknowledgement}
S.M. is grateful to K. Maeda, S. Sato, and K. Okabayashi for useful discussions.
This work was supported in part by a Grant-in-Aid (No. 18J11983) from the Scientific Research Fund of the Japan Society for the Promotion of Science.
\textit{
Learning-based visual odometry and SLAM methods have demonstrated steady improvement over the past years. However, collecting ground truth poses to train these methods is difficult and expensive. This could be resolved by training in an unsupervised mode, but there is still a large gap between the performance of unsupervised and supervised methods. In this work, we focus on generating synthetic data for deep learning-based visual odometry and SLAM methods that take optical flow as an input. We produce training data in the form of optical flow that corresponds to arbitrary camera movement between a real frame and a virtual frame. For synthesizing data we use depth maps either produced by a depth sensor or estimated from a stereo pair. We train the visual odometry model on synthetic data and do not use ground truth poses, hence this model can be considered unsupervised. It can also be classified as monocular, as we do not use depth maps at inference time.}
\textit{We also propose a simple way to convert any visual odometry model into a SLAM method based on frame matching and graph optimization. We demonstrate that both the synthetically trained visual odometry model and the proposed SLAM method built upon it yield state-of-the-art results among unsupervised methods on the KITTI dataset and show promising results on the challenging EuRoC dataset.}
\section{Introduction}
Simultaneous localization and mapping is an essential part of robotics and augmented reality systems. Deep learning-based methods for visual odometry and SLAM~\cite{wang2017deepvo, Wang2018Espvo, zhou2018deeptam, Henriques2018MapNet, Xue019BeyondTracking} have evolved over the last years and are able to compete with classical geometry-based methods on datasets with predominantly planar motion, such as KITTI\cite{Geiger2012CVPR} and Malaga\cite{blanco2009cor}.
\begin{figure}[ht]
\includegraphics[width=\linewidth]{images/synth_flow_euroc2.png}
\caption{Optical flow estimated with a trainable method (PWC-Net) on consecutive frames, compared against flow synthesized from a depth map and a given 6DoF camera motion.}
\label{fig:euroc_synthetic_flow}
\end{figure}
However, the practical use of deep learning-based methods for visual odometry and SLAM is limited by the difficulty of acquiring precise ground truth camera poses. To overcome this problem, unsupervised visual odometry methods are being actively investigated~\cite{almalioglu2018ganvo, Feng2019sganvo, Zhan2018Depth-VO-Feat, Li2017UnDeepVO, yin2018geonet}. Existing methods~\cite{zhou2018deeptam, zhou2017unsupervised, Teed2019DeepV2D, DBLP:journals/corr/Vijayanarasimhan17, bian2019sc-sfmlearner} use video sequences for training and estimate camera motion and depth maps jointly. For a pair of consecutive frames, the first frame is re-rendered from the point of view of the second one, and the difference between the re-rendered first frame and the ground truth second frame is used in a loss function.
We propose an alternative approach, that requires only individual frames with depth rather than video sequences. In our approach, we sample random camera motions with respect to the physical motion model of an agent. Then for each sampled camera motion we synthesize corresponding optical flow between a real frame and a virtual frame. The resulting synthesized optical flow with generated ground truth camera motions can be used for training a learning-based visual odometry model.
Our contribution is twofold: \begin{itemize}
\item First, we introduce an unsupervised method of training visual odometry and SLAM models on synthetic optical flow generated from depth map and arbitrary camera movement between selected real frame and virtual frame. This approach does not use frame sequences for training and does not require ground truth camera poses.
\item Second, we propose a simple way to convert any visual odometry model into a SLAM system with frame matching and graph optimization.
\end{itemize}
We demonstrate that our approach outperforms state-of-the-art unsupervised deep SLAM methods on the KITTI dataset. We also evaluate our method on the challenging EuRoC dataset; to the best of our knowledge, this is the first unsupervised learnable method ever evaluated on EuRoC.
\section{Related work}
\subsection{Classical methods}
Several different mathematical formulations for visual odometry have been considered in the literature. Geometry-based visual odometry methods can be classified into direct (\eg~\cite{kerl2013robust}) and indirect (\eg~\cite{mur2017orb}) or dense (\eg~\cite{steinbrucker2011real}) and sparse (\eg~\cite{engel2018direct}).
Direct methods take original images as inputs, while indirect methods process detected keypoints and corresponding features.
Dense methods accept regular inputs such as images, optical flow or dense feature representations. In sparse methods, data of irregular structure is used.
Many of the classical works apply bundle adjustment or pose graph optimization in order to mitigate the odometry drift. Since this strategy showed its effectiveness in related tasks~\cite{mur2017orb, Schops2019badslam}, we adopt it in our deep learning-based approach.
\subsection{Supervised learning-based methods}
DeepVO~\cite{wang2017deepvo} was a pioneering work in using deep learning for visual odometry. This deep recurrent network regresses camera motion using a pretrained FlowNet~\cite{dosovitskiy2015flownet} as a feature extractor. ESP-VO~\cite{wang2018end} extends this model with sequence-to-sequence learning and introduces an additional loss on global poses. LS-VO~\cite{costante2018ls} also uses the output of FlowNet and formulates the problem of estimating camera motion as finding a low-dimensional subspace of the optical flow space. DeMoN~\cite{ummenhofer2017demon} estimates camera motion, optical flow and depth in an EM-like iterative network. By efficiently using consecutive frames, DeMoN improves the accuracy of depth prediction over single-view methods. This work became the basis for the first deep SLAM method, DeepTAM~\cite{zhou2018deeptam}. Similar ideas were implemented in ENG~\cite{dharmasiri2018eng}, which was shown to work on both indoor and outdoor datasets.
\subsection{Unsupervised learning-based methods}
Recent advances in simultaneous depth and motion estimation from video sequences~\cite{zhou2018deeptam, Teed2019DeepV2D} make it possible to track camera position more accurately in an unsupervised manner. These methods exploit the sequential nature of the data in order to model scene dynamics and take cues from occlusion, between-frame optical flow consistency and other factors~\cite{zhou2017unsupervised, DBLP:journals/corr/abs-1901-07288, DBLP:journals/corr/Vijayanarasimhan17}. To achieve motion consistency, additional modalities of data such as depth maps are estimated in a joint pipeline. Similarly to these approaches, we use depth maps; however, we estimate them from a stereo pair once and keep them unchanged during optical flow synthesis.
\subsection{Novel view synthesis}
The idea of novel view synthesis using a single frame or a stereo pair and optical flow has been exploited in~\cite{zhou2018deeptam}. In this method, the model is trained in a supervised manner by minimizing the difference between estimated and ground truth camera poses. Novel view synthesis is used as part of the working pipeline: a new camera position is predicted, a virtual frame is synthesized, and the movement between the virtual and the current frame is estimated. Therefore, this method operates mainly in the image domain, utilizing optical flow only to transition between different image instances. In our approach, we synthesize optical flow rather than images. We also generate training data only at the training stage, and do not use it during inference.
\section{Proposed method}
\subsection{Visual odometry model}
For visual odometry, we adopt a neural network from~\cite{Slinko2019MotionMap} that estimates relative rotation and translation in the form of 6DoF from dense optical flow. The model architecture is shown in Figure \ref{fig:visual_odometry_network}. Generally speaking, our approach can be applied to any model that takes optical flow as an input and predicts 6DoF.
We use PWC-Net~\cite{sun2018pwc} for optical flow estimation. The source code and pretrained weights are taken from the official repository \footnote{\url{https://github.com/NVlabs/PWC-Net}}. For KITTI, we opted for weights pretrained on the FlyingThings3D~\cite{MIFDB16} dataset and fine-tuned on the KITTI optical flow dataset~\cite{Menze2018JPRS}. For EuRoC, which is in grayscale, we fine-tuned the PWC-Net weights using the Sintel dataset~\cite{Butler:ECCV:2012} converted to grayscale.
In our experiments, we found out that a single neural network can effectively handle motions within a certain range. However, motions between the first and the last frames in loops differ significantly from motions between consecutive frames: not only are they of different scale but also much more diverse. To address this problem, we train two models: $NN_{cons}$ model to estimate motions between consecutive frames and $NN_{loops}$ to predict motions specifically between the first and the last frames in a loop.
\begin{figure*}[ht]
\includegraphics[width=\linewidth]{images/net.png}
\caption{Architecture of visual odometry model}
\label{fig:visual_odometry_network}
\end{figure*}
\subsection{Synthetic training data generation}
Taking a depth map and an arbitrary motion in the form of 6DoF, the method runs as follows:
\begin{enumerate}
\item First, we map depth map pixels to points in the camera frustum, which gives a point cloud.
\item To build a virtual point cloud, we represent the motion in the form of an SE3 matrix and apply it to the current point cloud, obtaining the same point cloud from another viewpoint.
\item Next, we re-project this virtual point cloud back to the image plane, which gives a shifted pixel grid.
\item To get the absolute values of optical flow, we calculate the difference between the re-projection and the regular pixel grid of the source depth map.
\end{enumerate}
Then, this newly synthesized optical flow can be used as an input to a visual odometry network.
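The four steps above can be sketched in a few lines of NumPy. This is an illustrative implementation under a standard pinhole camera model; the intrinsics matrix `K`, the convention that `T_rel` acts on points of the real cloud, and the handling of invalid depths are our assumptions, not details fixed by the paper.

```python
import numpy as np

def synthesize_flow(depth, K, T_rel):
    """Synthesize optical flow for a sampled virtual camera motion.

    depth: (H, W) depth map; K: 3x3 pinhole intrinsics; T_rel: 4x4 SE(3)
    matrix applied to the point cloud. Returns (H, W, 2) flow in pixels.
    Pixels with non-positive depth are marked invalid (flow set to 0).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))          # source pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # 1. Unproject pixels into a point cloud in the camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # 2. Apply the sampled SE(3) motion to obtain the virtual point cloud.
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_virt = (T_rel @ pts_h)[:3]

    # 3. Re-project the virtual point cloud back to the image plane.
    proj = K @ pts_virt
    z = proj[2]
    uv_virt = proj[:2] / np.where(z > 1e-6, z, 1.0)

    # 4. Flow = re-projected grid minus the regular source pixel grid.
    flow = (uv_virt - pix[:2]).T.reshape(H, W, 2)
    flow[depth <= 0] = 0.0
    return flow
```

For a zero motion the synthesized flow is identically zero, and a pure translation produces the expected uniform pixel shift on a fronto-parallel plane, which is a convenient sanity check.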
The camera movements should be generated taking the physical motion model of the agent into account. Since existing datasets do not contain such information, we estimate the motion model from the ground truth data: we approximate the ground truth distribution of 6DoF using Student's t-distribution (Figure \ref{fig:dof_distribution}). We adjust the parameters of this distribution once and keep them fixed during training, while 6DoF motions are sampled randomly.
\begin{figure*}[ht]
\includegraphics[width=\linewidth]{images/dof_distribution.png}
\caption{Distribution of 6DoF motion in KITTI dataset}
\label{fig:dof_distribution}
\end{figure*}
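The sampling step can be sketched as follows, assuming an independent Student's t-distribution per degree of freedom. The location/scale/degrees-of-freedom values below are illustrative placeholders for a KITTI-like, forward-dominant motion model; they are not the parameters fitted in the paper.

```python
import numpy as np

# Illustrative per-DoF parameters (location, scale, degrees of freedom).
# The paper fits such parameters to the dataset's motion statistics;
# these numbers are hypothetical.
DOF_PARAMS = {
    "t_x":     (0.0, 0.02,  3.0),
    "t_y":     (0.0, 0.01,  3.0),
    "t_z":     (0.9, 0.15,  3.0),   # dominant forward translation
    "euler_x": (0.0, 0.003, 3.0),
    "euler_y": (0.0, 0.01,  3.0),
    "euler_z": (0.0, 0.003, 3.0),
}

def sample_motions(n, rng=None):
    """Sample n random 6DoF motions from per-DoF Student's t-distributions."""
    rng = rng or np.random.default_rng(0)
    cols = [loc + scale * rng.standard_t(df, size=n)
            for loc, scale, df in DOF_PARAMS.values()]
    return np.stack(cols, axis=1)   # shape (n, 6)
```

Each sampled row can then be converted to an SE3 matrix and fed to the flow synthesis step.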
If a dataset does not contain depth maps, they can be estimated from a monocular image or a stereo pair. In our experiments, we obtain depth $z$ from disparity $d$ similarly to~\cite{ZbontarL2015disparity}:
\begin{gather}
z = \frac{f B}{d},
\end{gather}
where $f$ is the focal length and $B$ is the distance between the stereo cameras.
To estimate disparity, we match the left and right images with the same PWC-Net~\cite{sun2018pwc} that was used to estimate optical flow.
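The conversion itself is a one-liner; the sketch below only adds a validity mask for near-zero disparities (where $z = fB/d$ blows up), which is our own choice rather than something prescribed by the paper.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=0.5):
    """Convert a disparity map (pixels) to depth (meters) via z = f*B/d.

    Disparities below min_disp are treated as invalid and get depth 0;
    the 0.5-pixel threshold is an illustrative assumption.
    """
    valid = disparity > min_disp
    depth = np.zeros_like(disparity, dtype=np.float64)
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```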
Since we do not need ground truth camera poses for data synthesis, the proposed approach can be considered unsupervised.
\begin{figure*}[ht]
\includegraphics[width=\linewidth]{images/slam.png}
\caption{Architecture of proposed SLAM method}
\label{fig:slam}
\end{figure*}
\subsection{Relocalization}
Relocalization can be reformulated as an image retrieval task (Figure \ref{fig:slam}). Following a standard approach, we measure the distance between frames according to their visual similarity. Here, we use a classical Bag of Visual Words (BoVW) from the OpenCV library~\cite{opencv_library} applied to SIFT features~\cite{Lowe1999Sift}. These features are stored in a database. To create a topological map, for each new frame its 20 nearest neighbors are extracted from the database. The retrieved frames are further filtered by applying Lowe's ratio test~\cite{Lowe2004Test} and rejecting candidates with fewer than $N_{th}$ matched keypoints.
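The candidate filtering can be illustrated as follows. For simplicity this sketch operates on a precomputed descriptor-distance matrix rather than on OpenCV's SIFT matcher, and the 0.7 ratio and function names are our assumptions.

```python
import numpy as np

def count_ratio_test_matches(dist, ratio=0.7):
    """Count keypoint matches surviving Lowe's ratio test.

    dist: (M, N) descriptor distances between M query keypoints and N
    candidate keypoints. A query keypoint counts as a match if its best
    distance is below `ratio` times its second-best distance.
    """
    order = np.sort(dist, axis=1)
    best, second = order[:, 0], order[:, 1]
    return int(np.sum(best < ratio * second))

def is_loop_candidate(dist, n_th=20, ratio=0.7):
    """Accept a retrieved frame only if enough matches survive the test."""
    return count_ratio_test_matches(dist, ratio) >= n_th
```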
\subsection{Graph optimization}
We adopt graph optimization in order to expand a visual odometry method to a SLAM algorithm. One way to formulate SLAM is to use a graph with nodes corresponding to the camera poses and edges representing constraints between these poses. Two camera poses are connected with an edge if they correspond to consecutive frames or if they are considered similar by relocalization module. The edge constraints are obtained as relative motions predicted with visual odometry module. Once such a graph is constructed, it can be further optimized in order to find the spatial configuration of the nodes that is the most consistent with the relative motions modeled by the edges. The nodes obtained through optimization procedure are then used as final pose estimates. The reported metrics are thus computed by comparing these estimates with ground truth poses.
We opted for a publicly available g2o library~\cite{kuemmerle2011g2o} that implements least-squares error minimization. To incorporate optimization module in our Python-based pipeline we use Python binding for g2o \footnote{\url{https://github.com/uoip/g2opy}}.
The interaction between visual odometry networks $NN_{cons}$, $NN_{loops}$ and graph optimization module is guided by a set of hyperparameters: \begin{itemize}
\item $C_{s_i}$ -- coefficient for standard deviation
\item $C_r$ -- extra scaling factor for rotation component
\item $T_{loop}$ -- loop threshold: a loop is detected if the difference between the indices of two matched images exceeds the given threshold. In this case, relative motion is predicted using the loop network $NN_{loops}$; otherwise $NN_{cons}$ is used.
\end{itemize}
We adjust these hyperparameters on a validation subset and then evaluate our method on a test subset.
To construct a graph, we need to pass a $7 \times 7$ information matrix $P_i$ corresponding to a 3D translation vector and a rotation in the form of a quaternion.
First, we compose the $6 \times 6$ covariance matrix as:
\begin{gather}
Q_i = C_{s_i}
\begin{bmatrix}
\sigma_{t_x}^2 & 0 & 0 & 0 & 0 & 0 \\
0 & \sigma_{t_y}^2 & 0 & 0 & 0 & 0 \\
0 & 0 & \sigma_{t_z}^2 & 0 & 0 & 0 \\
0 & 0 & 0 & C_r \sigma_{\alpha}^2 & 0 & 0 \\
0 & 0 & 0 & 0 & C_r \sigma_{\beta}^2 & 0 \\
0 & 0 & 0 & 0 & 0 & C_r \sigma_{\gamma}^2 \\
\end{bmatrix}
\end{gather}
where $\alpha, \beta, \gamma$ stand for Euler angles $euler_x, \ euler_y, \ euler_z$ respectively.
The information matrix $P_i$ is then obtained from the covariance matrix $Q_i$ according to~\cite{Claraco2010Tutorial}:
\begin{gather}
P_i = \left( \frac{\partial p_7 (p_6)}{\partial p_6} \, Q_i \left( \frac{\partial p_7 (p_6)}{\partial p_6} \right)^{T} \right)^{-1}
\end{gather}
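Composing the diagonal covariance $Q_i$ from the hyperparameters can be sketched as below; the Jacobian-based conversion to the $7 \times 7$ information matrix $P_i$ is left to the cited tutorial.

```python
import numpy as np

def edge_covariance(sigmas_t, sigmas_r, c_s, c_r):
    """Build the 6x6 covariance Q_i from per-DoF standard deviations.

    sigmas_t: (sigma_tx, sigma_ty, sigma_tz); sigmas_r: standard deviations
    of the Euler angles; c_s and c_r are the SLAM hyperparameters C_{s_i}
    and C_r described in the text.
    """
    diag = np.concatenate([np.square(sigmas_t),
                           c_r * np.square(sigmas_r)])
    return c_s * np.diag(diag)
```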
\section{Experiments}
\subsection{Datasets}
\noindent \textbf{KITTI odometry 2012.} We used KITTI dataset to evaluate our method in a simple scenario. We trained on trajectories 00, 02, 08 and 09 and tested on trajectories 03, 04, 05, 06, 07, 10.
It is worth noting that the ground truth poses were collected using a GPS sensor that yielded noisy measurements of motion along the y-axis (there are vertical movements perpendicular to the road surface). This effect is cumulative: while relative motion between consecutive frames was measured quite precisely, the difference in absolute height between the first and the last frame of a loop may be up to 2 meters.
\noindent \textbf{EuRoC.} This dataset was recorded using a flying drone in two different environments. 6DoF ground truth poses are captured by a laser tracker or a motion capture system, depending on the environment. The sensor and ground truth data are calibrated and temporally aligned, with all extrinsic and intrinsic calibration parameters provided. As the original frames come unrectified, preprocessing included removing distortion. We validated on trajectories MH\_02\_easy, V1\_02\_medium and tested on V1\_03\_difficult and V2\_03\_difficult, while the other trajectories were used for training.
Due to the complexity of the environment, dynamic motions and weakly correlated, entangled trajectories, EuRoC appears to be a challenging task for trainable methods. Moreover, its images are in grayscale, which adds difficulty for methods that match pixels based on their color rather than pure intensity. To the best of our knowledge, we present the first trainable method that demonstrates competitive results on EuRoC among all methods trained in an unsupervised manner.
\begin{figure*}[!ht]
\includegraphics[width=\linewidth]{images/trajectories_kitti.png}
\caption{Ground truth and estimated KITTI trajectories}
\label{fig:kitti_trajectories}
\end{figure*}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Method & ATE & $t_{err}$ & $r_{err}$ \\
\hline
$NN_{cons}$ & $4.38 \pm 0.36$ & $2.07 \pm 0.03$ & $0.93 \pm 0.04$ \\
\hline
$NN_{loops}$ & $9.33 \pm 1.04$ & $3.15 \pm 0.12$ & $1.42 \pm 0.03$ \\
\hline
SLAM & $1.84 \pm 0.06$ & $1.54 \pm 0.04$ & $0.74 \pm 0.04$ \\
\hline
\end{tabular}
\caption{
Results of supervised visual odometry models and SLAM method on KITTI dataset. Optimal parameters for SLAM are $C_{s_i}=10000$, $C_r=1$, $T_{loop}=50$
}
\label{table:kitti_supervised}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Method & ATE & $t_{err}$ & $r_{err}$ \\
\hline
$NN_{cons}$ & $14.30 \pm 1.57$ & $5.57 \pm 0.33$ & $2.23\pm 0.15$ \\
\hline
$NN_{loops}$ & $16.21 \pm 1.42$ & $6.43 \pm 0.23$ & $2.82 \pm 0.10$ \\
\hline
SLAM & $3.24 \pm 0.17$ & $3.37 \pm 0.12$ & $1.24 \pm 0.04$ \\
\hline
\end{tabular}
\caption{
Results of unsupervised visual odometry models and SLAM method on KITTI dataset. Optimal parameters for SLAM are $C_{s_i}=10000$, $C_r=0.004$, $T_{loop}=50$}
\label{table:kitti_unsupervised_trainable}
\end{table}
\begin{small}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Method & $t_{err}$ & $r_{err}$ \\
\hline
ORB-SLAM2\cite{mur2017orb} & 2.41 & \textbf{0.245} \\
\hline
Ours, SLAM & \textbf{1.54} & 0.74 \\
\hline
\end{tabular}
\caption{
Metrics on KITTI for supervised methods. \\ Numbers are taken from~\cite{Zhan2019learnt}
}
\label{table:kitti_supervised_all}
\end{center}
\end{table}
\end{small}
\begin{small}
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Method & ATE & $t_{err}$ & $r_{err}$ \\
\hline
SfMLearner\cite{zhou2017unsupervised} & 28.14 & 12.21 & 4.74 \\
\hline
Depth-VO-Feat\cite{Zhan2018Depth-VO-Feat} & 16.83 & 8.15 & 4.00 \\
\hline
SC-SfMLearner\cite{bian2019sc-sfmlearner} & 17.92 & 7.42 & 3.35 \\
\hline
UnDeepVO\cite{Li2017UnDeepVO} & – & 6.27 & 3.39 \\
\hline
GeoNet\cite{yin2018geonet} & – & 13.12 & 7.38 \\
\hline
Vid2Depth\cite{mahjourian2018unsupervised} & – & 37.98 & 18.24 \\
\hline
SGANVO\cite{Feng2019sganvo} & – & 5.12 & 2.53 \\
\hline
Ours, VO & 14.30 & 5.57 & 2.23 \\
\hline
Ours, SLAM & \textbf{~~3.24} & \textbf{~~3.37} & \textbf{~~1.24} \\
\hline
\end{tabular}
\caption{
Metrics on KITTI for unsupervised methods. \\ Numbers are taken from~\cite{Zhan2019learnt}
}
\label{table:kitti_unsupervised_all}
\end{table}
\end{small}
\subsection{Training procedure}
The visual odometry model is trained from scratch using the Adam optimization algorithm with the amsgrad option switched on. The batch size is set to 128, and the momentum coefficients are fixed to (0.9, 0.999).
In our experiments, we noticed that despite the loss being almost constant among several re-runs, the final metrics (ATE, RPE, etc.) may fluctuate significantly. We attribute this spread to the optimization algorithm terminating at different local minima depending on the weight initialization and the randomness introduced by batch sampling. We address this challenge by adopting a learning rate schedule that forces the optimization algorithm to traverse several local minima during the training process. Switching our training procedure to a cyclic learning rate helped to decrease both the standard deviation of the final metrics and the metric values themselves.
Initially, the values of the learning rate are bounded by $[0.0001, 0.001]$. In addition, if the validation loss does not improve for 10 epochs, both the lower and upper bounds are multiplied by 0.5. The training process is terminated when the learning rate becomes negligibly small; we used $10^{-5}$ as the learning rate threshold. Under these conditions, models are typically trained for about 80 epochs.
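The schedule described above can be sketched as follows. The triangular cycle shape and the cycle length are our own choices, since the paper does not specify the exact form of the cyclic schedule.

```python
class CyclicLRWithDecay:
    """Cyclic learning rate with plateau-based bound decay (hedged sketch).

    The LR oscillates between lower and upper bounds (here: a simple
    triangular cycle); if validation loss does not improve for `patience`
    epochs, both bounds are halved; training stops once the upper bound
    falls below `min_lr`.
    """
    def __init__(self, low=1e-4, high=1e-3, patience=10,
                 cycle=8, min_lr=1e-5):
        self.low, self.high = low, high
        self.patience, self.cycle, self.min_lr = patience, cycle, min_lr
        self.best = float("inf")
        self.bad_epochs = 0

    def lr(self, epoch):
        # Triangular cycle between the current bounds: low -> high -> low.
        phase = (epoch % self.cycle) / self.cycle
        tri = 1.0 - abs(2.0 * phase - 1.0)
        return self.low + (self.high - self.low) * tri

    def step(self, val_loss):
        """Update bounds from validation loss; return False to stop training."""
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.low *= 0.5
                self.high *= 0.5
                self.bad_epochs = 0
        return self.high >= self.min_lr
```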
In several papers on trainable visual odometry~\cite{costante2018ls, Lv18eccv, wang2017deepvo, zhao2018learning, zhou2018deeptam}, different weights are used for the translation and rotation losses. Since small rotation errors may have a crucial impact on the shape of the trajectory, precise estimation of Euler angles is more important than that of translations. We multiply the loss for the rotation components by 50, as proposed in~\cite{costante2018ls}.
\subsection{Evaluation protocol}
We evaluate visual odometry methods with several commonly used metrics.
For KITTI, we follow the evaluation protocol implemented in KITTI devkit \footnote{\url{https://github.com/alexkreimer/odometry/devkit}} that computes translation ($t_{err}$) and rotation ($r_{err}$) errors. Both translation and rotation errors are calculated as root-mean-squared-error for all possible sub-sequences of length (100,~\dots,~800) meters. The metrics reported are the average values of these errors per 100 meters.
For EuRoC, we use the RPE metric, which measures frame-to-frame relative pose error.
To provide a detailed analysis, we also report values of the absolute trajectory error (ATE), which measures the average pairwise distance between predicted and ground truth camera poses.
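A minimal sketch of the ATE computation as described; the full evaluation protocols additionally align the trajectories before comparison, which we omit here.

```python
import numpy as np

def absolute_trajectory_error(pred, gt):
    """ATE as described in the text: mean distance between predicted and
    ground truth camera positions, given as (N, 3) arrays. Trajectory
    alignment, as done by the standard protocols, is omitted."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))
```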
Since the results of different runs vary significantly, in order to obtain fair results we conduct all experiments 5 times with different random seeds. The reported metrics are the mean and standard deviation over these runs.
\subsection{Results on KITTI}
The results of our supervised and unsupervised visual odometry and SLAM are listed in Tab.~\ref{table:kitti_supervised} and Tab.~\ref{table:kitti_unsupervised_trainable}, respectively. According to them, the visual odometry network $NN_{cons}$ trained on consecutive frames yields better results compared to $NN_{loops}$, which is trained to estimate targets coming from a wider distribution. The combination of these two networks within the deep SLAM architecture improves the accuracy of predictions significantly.
The remaining quality gap between the supervised and unsupervised approaches can be explained by non-rigidity of the scene, which exceeds the limitations of our data generation method. To obtain synthetic optical flow, a combination of translation and rotation is applied to a point cloud. Since this does not affect pairwise distances between points, the shapes of objects present in the scene remain unchanged and no new points appear. Thereby, rigidity of the scene is implicitly incorporated into the data synthesizing pipeline. KITTI scenes do not meet these requirements due to the large displacements between consecutive frames and the numerous moving objects appearing in the scene.
According to Tab.~\ref{table:kitti_supervised_all}, the proposed method is comparable with ORB-SLAM2. We summarize results of unsupervised learnable methods in Tab.~\ref{table:kitti_unsupervised_all}. We show that our method significantly outperforms current state-of-the-art among all unsupervised deep learning-based approaches to trajectory estimation.
\begin{figure}[!t]
\includegraphics[width=\linewidth]{images/trajectories_euroc.png}
\caption{Ground truth and estimated EuRoC trajectory}
\label{fig:euroc_trajectories}
\end{figure}
\subsection{Results on EuRoC}
For the EuRoC dataset, we observed that it is more beneficial to train the visual odometry network $NN_{cons}$ on a mixture of strides 1, 2, and 3, rather than on a single stride. Results of $NN_{cons}$ and $NN_{loops}$ are presented in Tab.~\ref{table:euroc_supervised} for supervised training and in Tab.~\ref{table:euroc_unsipervised} for unsupervised training. We expected the results on EuRoC to resemble the KITTI results, where the supervised method surpasses the unsupervised one remarkably. Surprisingly, the metrics for our SLAM model trained in a supervised and in an unsupervised manner are nearly the same. We attribute this to the following reasons. Firstly, since EuRoC scenes are rigid, the generated flow looks similar to the estimated flow. Secondly, the randomly sampled training data prevent the unsupervised method from overfitting, while the supervised method tends to memorize the entire dataset.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & val ATE & val RPE\textsubscript{t} & val RPE\textsubscript{r} & test ATE & test RPE\textsubscript{t} & test RPE\textsubscript{r} \\
\hline
$NN_{1+2+3}$ & $1.35 \pm 0.07$ & $3.16 \pm 0.20$ & $19.75 \pm 1.20$ & $1.32 \pm 0.06$ & $2.78 \pm 0.25$ & $55.76 \pm 2.16$ \\
\hline
$NN_{loops}$ & $1.43 \pm 0.11$ & $3.64 \pm 0.30$ & $23.88 \pm 4.78$ & $1.36 \pm 0.05$ & $3.02 \pm 0.13$ & $53.45 \pm 2.00$ \\
\hline
SLAM & $~~0.51 \pm 0.015$ & $1.06 \pm 0.03$ & $~~8.36 \pm 0.17$ & $0.81 \pm 0.01$ & $1.51 \pm 0.02$ & $19.59 \pm 1.37$\\
\hline
\end{tabular}
\caption{
Results of supervised visual odometry models and SLAM method on EuRoC dataset. \\ Optimal parameters for SLAM are $C_{s_i}=10000$, $C_r=0.001$, $T_{loop}=100$
}
\label{table:euroc_supervised}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & val ATE & val RPE\textsubscript{t} & val RPE\textsubscript{r} & test ATE & test RPE\textsubscript{t} & test RPE\textsubscript{r} \\
\hline
$NN_{1+2+3}$ & $1.06 \pm 0.04$ & $2.26 \pm 0.14$ & $20.97 \pm 1.39$ & $1.35 \pm 0.46$ & $3.04 \pm 0.35$ & $62.56 \pm 3.29$ \\
\hline
$NN_{loops}$ & $1.37 \pm 0.12$ & $4.23 \pm 0.59$ & $33.49 \pm 1.73$ & $1.28 \pm 0.05$ & $3.43 \pm 0.16$ & $67.17 \pm 4.36$ \\
\hline
SLAM & $~~0.57 \pm 0.008$ & $1.12 \pm 0.03$ & $~~9.03 \pm 0.20$ & $0.84 \pm 0.17$ & $1.49 \pm 0.27$ & $23.13 \pm 7.40$\\
\hline
\end{tabular}
\caption{
Results of unsupervised visual odometry models and SLAM method on EuRoC dataset. \\ Optimal parameters for SLAM are $C_{s_i}=1000$, $C_r=0.0001$, $T_{loop}=100$
}
\label{table:euroc_unsipervised}
\end{center}
\end{table*}
\vspace*{2cm} $ $\\
\section{Conclusion}
We proposed an unsupervised method of training visual odometry and SLAM models on synthetic optical flow generated from depth map and arbitrary camera movement between selected real frame and virtual frame. This approach does not use frame sequences for training and does not require ground truth camera poses.
We also presented a simple way to build a SLAM system from an arbitrary visual odometry model. To validate our ideas, we conducted experiments training unsupervised SLAM models on the KITTI and EuRoC datasets. The implemented method demonstrated state-of-the-art results on the KITTI dataset among unsupervised methods and showed robust performance on EuRoC. To the best of our knowledge, our visual odometry method is the first deep learning-based model trained on EuRoC in an unsupervised mode.
\newpage
{\small
\bibliographystyle{ieee}
\subsection{\texorpdfstring{}{}}}
\newcommand{\subsection{}}{\subsection{}}
\newcommand{\parref}[1]{\hyperref[#1]{\S\ref*{#1}}}
\newcommand{\chapref}[1]{\hyperref[#1]{Chapter~\ref*{#1}}}
\makeatletter
\newcommand*\if@single[3]{%
\setbox0\hbox{${\mathaccent"0362{#1}}^H$}%
\setbox2\hbox{${\mathaccent"0362{\kern0pt#1}}^H$}%
\ifdim\ht0=\ht2 #3\else #2\fi
}
\newcommand*\rel@kern[1]{\kern#1\dimexpr\macc@kerna}
\newcommand*\widebar[1]{\@ifnextchar^{{\wide@bar{#1}{0}}}{\wide@bar{#1}{1}}}
\newcommand*\wide@bar[2]{\if@single{#1}{\wide@bar@{#1}{#2}{1}}{\wide@bar@{#1}{#2}{2}}}
\newcommand*\wide@bar@[3]{%
\begingroup
\def\mathaccent##1##2{%
\if#32 \let\macc@nucleus\first@char \fi
\setbox\z@\hbox{$\macc@style{\macc@nucleus}_{}$}%
\setbox\tw@\hbox{$\macc@style{\macc@nucleus}{}_{}$}%
\dimen@\wd\tw@
\advance\dimen@-\wd\z@
\divide\dimen@ 3
\@tempdima\wd\tw@
\advance\@tempdima-\scriptspace
\divide\@tempdima 10
\advance\dimen@-\@tempdima
\ifdim\dimen@>\z@ \dimen@0pt\fi
\rel@kern{0.6}\kern-\dimen@
\if#31
\overline{\rel@kern{-0.6}\kern\dimen@\macc@nucleus\rel@kern{0.4}\kern\dimen@}%
\advance\[email protected]\dimexpr\macc@kerna
\let\final@kern#2%
\ifdim\dimen@<\z@ \let\final@kern1\fi
\if\final@kern1 \kern-\dimen@\fi
\else
\overline{\rel@kern{-0.6}\kern\dimen@#1}%
\fi
}%
\macc@depth\@ne
\let\math@bgroup\@empty \let\math@egroup\macc@set@skewchar
\mathsurround\z@ \frozen@everymath{\mathgroup\macc@group\relax}%
\macc@set@skewchar\relax
\let\mathaccentV\macc@nested@a
\if#31
\macc@nested@a\relax111{#1}%
\else
\def\gobble@till@marker##1\endmarker{}%
\futurelet\first@char\gobble@till@marker#1\endmarker
\ifcat\noexpand\first@char A\else
\def\first@char{}%
\fi
\macc@nested@a\relax111{\first@char}%
\fi
\endgroup
}
\makeatother
\mapcitekey{Saito:HodgeModules}{Saito-HM}
\mapcitekey{Saito:MixedHodgeModules}{Saito-MHM}
\mapcitekey{Saito:Theory}{Saito-th}
\mapcitekey{Saito:Kollar}{Saito-K}
\mapcitekey{Saito:Kaehler}{Saito-Kae}
\mapcitekey{Schnell:sanya}{Schnell-sanya}
\mapcitekey{Schnell:holonomic}{Schnell-holo}
\mapcitekey{Takegoshi:HigherDirectImages}{Takegoshi}
\mapcitekey{Wang:TorsionPoints}{Wang}
\mapcitekey{Schnell:lazarsfeld}{Schnell-laz}
\mapcitekey{Popa+Schnell:mhmgv}{PS}
\mapcitekey{Green+Lazarsfeld:GV1}{GL1}
\mapcitekey{Green+Lazarsfeld:GV2}{GL2}
\mapcitekey{Simpson:Subspaces}{Simpson}
\mapcitekey{Kollar:DualizingII}{Kollar}
\mapcitekey{Beilinson+Bernstein+Deligne}{BBD}
\mapcitekey{Deligne:Finitude}{Deligne}
\mapcitekey{Deligne:HodgeII}{Deligne-H}
\mapcitekey{Lazarsfeld+Popa+Schnell:BGG}{LPS}
\mapcitekey{Schmid:VHS}{Schmid}
\mapcitekey{Chen+Jiang:VanishingEulerCharacteristic}{ChenJiang}
\mapcitekey{Ueno:ClassificationTheory}{Ueno}
\mapcitekey{Schnell:saito-vanishing}{Schnell-van}
\mapcitekey{Pareschi+Popa:cdf}{PP1}
\mapcitekey{Pareschi+Popa:regIII}{PP2}
\mapcitekey{Pareschi+Popa:GV}{PP3}
\mapcitekey{Pareschi+Popa:regI}{PP4}
\mapcitekey{Popa:perverse}{Popa}
\mapcitekey{Ben-Bassat+Block+Pantev:noncomm_FM}{BBP}
\mapcitekey{Pareschi:BasicResults}{Pareschi}
\mapcitekey{Jiang:effective}{jiang}
\mapcitekey{Varouchas:image}{Varouchas}
\mapcitekey{Ein+Lazarsfeld:Theta}{EL}
\mapcitekey{Chen+Hacon:abelian}{CH1}
\mapcitekey{Chen+Hacon:FiberSpaces}{CH2}
\mapcitekey{Lazarsfeld+Popa:BGG}{LP}
\mapcitekey{Hacon:GV}{Hacon}
\mapcitekey{Birkenhake+Lange:ComplexTori}{BL}
\mapcitekey{Schmid+Vilonen:Unitary}{SV}
\mapcitekey{Cattani+Kaplan+Schmid:Degeneration}{CKS}
\mapcitekey{Zucker:DegeneratingCoefficients}{Zucker}
\mapcitekey{Zhang:Quaternions}{Zhang}
\mapcitekey{Debarre:Ample}{Debarre}
\mapcitekey{Grauert+Peternell+Remmert:SCV7}{SCV}
\mapcitekey{Mourougane+Takayama:Metric}{MT}
\mapcitekey{Arapura:GV}{Arapura}
\mapcitekey{Chen+Hacon:Varieties}{CH-pisa}
\newcommand{\HM}[2]{\operatorname{HM}_{\mathbb{R}}(#1, #2)}
\newcommand{\HMC}[2]{\operatorname{HM}_{\mathbb{C}}(#1, #2)}
\newcommand{\mathrm{D}_{\mathit{coh}}^{\mathit{b}}}{\mathrm{D}_{\mathit{coh}}^{\mathit{b}}}
\newcommand{\mathrm{D}^{\mathit{b}}}{\mathrm{D}^{\mathit{b}}}
\newcommand{\mathrm{D}_{\mathit{c}}^{\mathit{b}}}{\mathrm{D}_{\mathit{c}}^{\mathit{b}}}
\newcommand{\omega_X}{\omega_X}
\renewcommand{\mathbb{D}}{\mathbf{D}}
\newcommand{\hat{T}}{\hat{T}}
\newcommand{\shO_T}{\shf{O}_T}
\DeclareMathOperator{\Gal}{Gal}
\newcommand{\CC_{\rho}}{\mathbb{C}_{\rho}}
\newcommand{\CC_{\bar\rho}}{\mathbb{C}_{\bar\rho}}
\newcommand{\CC_{\rho_0}}{\mathbb{C}_{\rho_0}}
\newcommand{\CC_{\bar\rho_0}}{\mathbb{C}_{\bar\rho_0}}
\DeclareMathOperator{\Char}{Char}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\varH_{\RR}}{\mathcal{H}_{\mathbb{R}}}
\newcommand{\varH_{\CC}}{\mathcal{H}_{\mathbb{C}}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{L}_{\rho}}{\mathcal{L}_{\rho}}
\newcommand{\mathcal{L}_{\rho, \RR}}{\mathcal{L}_{\rho, \mathbb{R}}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\varL_{\RR}}{\mathcal{L}_{\mathbb{R}}}
\newcommand{\varL_{\CC}}{\mathcal{L}_{\mathbb{C}}}
\newcommand{\bar\rho}{\bar\rho}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\bar{\mathcal{V}}}{\bar{\mathcal{V}}}
\newcommand{A^{\natural}}{A^{\natural}}
\newcommand{\shO_{\Ash}}{\shf{O}_{A^{\natural}}}
\DeclareMathOperator{\FM}{FM}
\newcommand{M_{\RR}}{M_{\mathbb{R}}}
\newcommand{M_{\CC}}{M_{\mathbb{C}}}
\newcommand{J_{\RR}}{J_{\mathbb{R}}}
\newcommand{J_{\CC}}{J_{\mathbb{C}}}
\newcommand{N_{\RR}}{N_{\mathbb{R}}}
\newcommand{N_{0, \RR}}{N_{0, \mathbb{R}}}
\newcommand{\hat{\shF}}{\hat{\shf{F}}}
\setlength{\parskip}{.05 in}
\begin{document}
\title{Hodge modules on complex tori and generic vanishing for compact K\"ahler manifolds}
\author{Giuseppe Pareschi}
\address{Universit\`a di Roma ``Tor Vergata'', Dipartimento di Matematica,
Via della Ricerca Scientifica, I-00133 Roma, Italy}
\email{[email protected]}
\author{Mihnea Popa}
\address{Department of Mathematics, Northwestern University,
2033 Sheridan Road, Evanston, IL 60208, USA}
\email{[email protected]}
\author{Christian Schnell}
\address{Department of Mathematics, Stony Brook University, Stony Brook, NY 11794-3651}
\email{[email protected]}
\begin{abstract}
We extend the results of generic vanishing theory to polarizable real Hodge
modules on compact complex tori, and from there to arbitrary compact K\"ahler
manifolds. As applications, we obtain a bimeromorphic characterization of
compact complex tori (among compact K\"ahler manifolds), semi-positivity results,
and a description of the Leray filtration for maps to tori.
\end{abstract}
\date{\today}
\maketitle
\markboth{GIUSEPPE PARESCHI, MIHNEA POPA, AND CHRISTIAN SCHNELL}
{GENERIC VANISHING ON COMPACT K\"AHLER MANIFOLDS}
\section{Introduction}
The term ``generic vanishing'' refers to a collection of theorems about the
cohomology of line bundles with trivial first Chern class. The first results of this
type were obtained by Green and Lazarsfeld in the late 1980s \cite{GL1,GL2}; they
were proved using classical Hodge theory and are therefore valid on arbitrary compact
K\"ahler manifolds. About ten years ago, Hacon \cite{Hacon} found a more algebraic
approach, using vanishing theorems and the Fourier-Mukai transform, that has led to
many additional results in the projective case; see also \cite{PP3,ChenJiang,PS}. The
purpose of this paper is to show that the newer results are in fact also valid on
arbitrary compact K\"ahler manifolds.
Besides \cite{Hacon}, our motivation also comes from a 2013 paper by Chen and Jiang
\cite{ChenJiang} in which they prove, roughly speaking, that the direct image of the
canonical bundle under a generically finite morphism to an abelian variety is
semi-ample. Before we can state more precise results, recall the following useful
definitions (see \parref{par:GV-sheaves} for more details).
\begin{definition*}
Given a coherent $\shO_T$-module $\shf{F}$ on a compact complex torus $T$, define
\[
S^i(T, \shf{F}) = \menge{L \in \Pic^0(T)}{H^i(T, \shf{F} \otimes L) \neq 0}.
\]
We say that $\shf{F}$ is a \define{GV-sheaf} if $\codim S^i(T, \shf{F}) \geq i$ for every
$i \geq 0$; we say that $\shf{F}$ is \define{M-regular} if $\codim S^i(T, \shf{F}) \geq
i+1$ for every $i \geq 1$.
\end{definition*}
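For instance, on a complex torus $T$ of dimension $g$, one has $S^i(T, \shO_T) =
\{\shO_T\}$ for $0 \leq i \leq g$, because $H^i(T, L) = 0$ for every nontrivial $L \in
\Pic^0(T)$; consequently $\codim S^i(T, \shO_T) = g$ in this range, and so $\shO_T$
is a GV-sheaf but not M-regular (the inequality $\codim S^g(T, \shO_T) \geq g+1$
fails).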
Hacon \cite[\S4]{Hacon} showed that if $f \colon X \to A$ is a morphism from a smooth
projective variety to an abelian variety, then the higher direct image sheaves $R^j
f_{\ast} \omega_X$ are GV-sheaves on $A$; in the special case where $f$ is generically finite
over its image, Chen and Jiang \cite[Theorem~1.2]{ChenJiang} proved the much stronger
result that $f_{\ast} \omega_X$ is, up to tensoring by line bundles in $\Pic^0(A)$, the direct
sum of pullbacks of M-regular sheaves from quotients of $A$. Since GV-sheaves are
nef, whereas M-regular sheaves are ample, one should think of this as saying that
$f_{\ast} \omega_X$ is not only nef but actually semi-ample. One of our main results is the
following generalization of this fact.
\begin{intro-theorem}\label{thm:direct_image}
Let $f \colon X \to T$ be a holomorphic mapping from a compact K\"ahler manifold to a
compact complex torus. Then for $j \geq 0$, one has a decomposition
\[
R^j f_{\ast} \omega_X \simeq \bigoplus_{k=1}^n \bigl( q^{\ast}_k \shf{F}_k \otimes L_k \bigr),
\]
where each $\shf{F}_k$ is an M-regular (hence ample) coherent sheaf with projective support
on the compact complex torus $T_k$, each $q_k \colon T \to T_k$ is a surjective morphism with connected fibers,
and each $L_k \in \Pic^0(T)$ has finite order. In particular, $R^j f_{\ast} \omega_X$ is a
GV-sheaf on $T$.
\end{intro-theorem}
This leads to quite strong positivity properties for higher direct images of
canonical bundles under maps to tori. For instance, if $f$ is a surjective map which
is a submersion away from a divisor with simple normal crossings, then $R^j f_{\ast} \omega_X$
is a semi-positive vector bundle on $T$. See \parref{par:semi-positivity} for more
on this circle of ideas.
One application of \theoremref{thm:direct_image} is the following effective criterion
for a compact K\"ahler manifold to be bimeromorphically equivalent to a torus; this
generalizes a well-known theorem by Chen and Hacon in the projective case \cite{CH1}.
\begin{intro-theorem}\label{torus_intro}
A compact K\"ahler manifold $X$ is bimeromorphic to a compact complex
torus if and only if $\dim H^1(X, \mathbb{C}) = 2 \dim X$ and $P_1(X) = P_2(X) = 1$.
\end{intro-theorem}
The proof is inspired by the approach to the Chen-Hacon theorem given in
\cite{Pareschi}; even in the projective case, however, the result in
\corollaryref{cor:gen-finite} greatly simplifies the existing proof. In
\theoremref{thm:jiang}, we deduce that the Albanese map of a compact K\"ahler
manifold with $P_1(X) = P_2(X) = 1$ is surjective with connected fibers; in the
projective case, this was first proved by Jiang \cite{jiang}, as an effective version
of Kawamata's theorem about projective varieties of Kodaira dimension zero.
It is likely that the present methods can also be applied to the classification
of compact K\"ahler manifolds with $\dim H^1(X, \mathbb{C}) = 2\dim X$ and small
plurigenera; for the projective case, see for instance \cite{CH-pisa} and the
references therein.
In a different direction, \theoremref{thm:direct_image} combined with results in
\cite{LPS} leads to a concrete description of the Leray filtration on the cohomology
of $\omega_X$, associated with a holomorphic mapping $f \colon X \to T$ as above. Recall
that, for each $k \geq 0$, the Leray filtration is a decreasing filtration
$L^{\bullet} H^k(X, \omega_X)$ with the property that
\[
\gr_L^i H^k(X, \omega_X) = H^i \bigl( T, R^{k-i} f_{\ast} \omega_X \bigr).
\]
One can also define a natural decreasing filtration $F^{\bullet} H^k (X, \omega_X)$ induced by the cup-product
action of $H^1(T, \shf{O}_T)$, namely
\[
F^i H^k(X, \omega_X) = \Im \left( \bigwedge^i H^1(T, \shf{O}_T) \otimes H^{k-i} (X, \omega_X) \to H^{k}(X, \omega_X)\right).
\]
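In the simplest case where $X = T$ and $f$ is the identity, both filtrations are
easily computed: one has $R^{k-i} f_{\ast} \omega_T = 0$ for $i \neq k$, so that $L^k
H^k(T, \omega_T) = H^k(T, \omega_T)$ and $L^{k+1} H^k(T, \omega_T) = 0$; on the other
hand, since $\omega_T$ is trivial, the cup product
\[
\bigwedge^k H^1(T, \shf{O}_T) \otimes H^0(T, \omega_T) \to H^k(T, \omega_T)
\]
is surjective, and therefore $F^i H^k(T, \omega_T) = H^k(T, \omega_T)$ for all $i
\leq k$ as well.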
\begin{intro-theorem}\label{leray_intro}
The filtrations $L^{\bullet} H^k(X, \omega_X)$ and $F^{\bullet} H^k(X, \omega_X)$ coincide.
\end{intro-theorem}
A dual description of the filtration on global holomorphic forms is given in
\corollaryref{cor:Leray_forms}. Despite the elementary nature of the statement, we do
not know how to prove \theoremref{leray_intro} using only methods from classical
Hodge theory; finding a more elementary proof is an interesting problem.
Our approach to \theoremref{thm:direct_image} is to address generic vanishing for a
larger class of objects of Hodge-theoretic origin, namely polarizable real Hodge
modules on compact complex tori. This is not just a matter of higher generality; we
do not know how to prove \theoremref{thm:direct_image} using methods of classical
Hodge theory in the spirit of \cite{GL1}. This is precisely due to the lack of an
\emph{a priori} description of the Leray filtration on $H^k (X, \omega_X)$ as in
\theoremref{leray_intro}.
The starting point for our proof of \theoremref{thm:direct_image} is a result by
Saito \cite{Saito-Kae}, which says that the coherent $\shO_T$-module $R^j f_{\ast} \omega_X$ is
part of a polarizable real Hodge module $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR}) \in
\HM{T}{\dim X + j}$ on the torus $T$; more precisely,
\begin{equation} \label{eq:Saito}
R^j f_{\ast} \omega_X \simeq \omega_T \otimes F_{p(M)} \mathcal{M}
\end{equation}
is the first nontrivial piece in the Hodge filtration $F_{\bullet} \mathcal{M}$ of the
underlying regular holonomic $\mathscr{D}$-module $\mathcal{M}$. (Please see \parref{par:RHM} for
some background on Hodge modules.) Note that $M$ is supported on the image
$f(X)$, and that its restriction to the smooth locus of $f$ is the polarizable
variation of Hodge structure on the $(\dim f + j)$-th cohomology of the fibers. The
reason for working with real coefficients is that the polarization is induced by a
choice of K\"ahler form in $H^2(X, \mathbb{R}) \cap H^{1,1}(X)$; the variation of Hodge
structure itself is of course defined over $\mathbb{Z}$.
In light of \eqref{eq:Saito}, \theoremref{thm:direct_image} is a consequence of the
following general statement about polarizable real Hodge modules on compact complex tori.
\begin{intro-theorem} \label{thm:Chen-Jiang}
Let $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR}) \in \HM{T}{w}$ be a polarizable real Hodge
module on a compact complex torus $T$. Then for each $k \in \mathbb{Z}$, the coherent
$\shO_T$-module $\gr_k^F \mathcal{M}$ decomposes as
\[
\gr_k^F \mathcal{M} \simeq \bigoplus_{j=1}^n
\bigl( q^{\ast}_j \shf{F}_j \otimes_{\shO_T} L_j \bigr),
\]
where $q_j \colon T \to T_j$ is a surjective map with connected fibers to a
complex torus, $\shf{F}_j$ is an M-regular coherent sheaf on $T_j$ with projective
support, and $L_j \in \Pic^0(T)$. If $M$ admits an integral structure, then each
$L_j$ has finite order.
\end{intro-theorem}
Let us briefly describe the most important elements in the proof. In \cite{PS}, we
already exploited the relationship between generic vanishing and Hodge modules on
abelian varieties, but the proofs relied on vanishing theorems. What allows us to go
further is a beautiful new idea by Botong Wang \cite{Wang}, also dating to 2013,
namely that up to taking
direct summands and tensoring by unitary local systems, every polarizable real Hodge
module on a complex torus actually comes from an abelian variety. (Wang showed this
for Hodge modules of geometric origin.) This is a version with coefficients of Ueno's
result \cite{Ueno} that every irreducible subvariety of $T$ is a torus bundle over a
projective variety, and is proved by combining this geometric fact with some
arguments about variations of Hodge structure.
The existence of the decomposition in \theoremref{thm:Chen-Jiang} is due to the fact
that the regular holonomic $\mathscr{D}$-module $\mathcal{M}$ is semi-simple, hence isomorphic to
a direct sum of simple regular holonomic $\mathscr{D}$-modules. This follows from a theorem
by Deligne and Nori \cite{Deligne}, which says that the local system underlying a
polarizable real variation of Hodge structure on a Zariski-open subset of a compact
K\"ahler manifold is semi-simple. It turns out that the decomposition of $\mathcal{M}$ into
simple summands is compatible with the Hodge filtration $F_{\bullet} \mathcal{M}$; in order
to prove this, we introduce the category of ``polarizable complex Hodge
modules'' (which are polarizable real Hodge modules together with an endomorphism
whose square is minus the identity), and show that every simple summand of $\mathcal{M}$
underlies a polarizable complex Hodge module in this sense.
\begin{note}
Our ad-hoc definition of complex Hodge modules is good enough for the purposes of
this paper. As of 2016, a more satisfactory treatment, in terms of
$\mathscr{D}$-modules and distribution-valued pairings, is being developed by
Claude Sabbah and the third author. The reader is advised to consult the website
\begin{center}
\url{www.cmls.polytechnique.fr/perso/sabbah.claude/MHMProject/mhm.html}
\end{center}
for more information.
\end{note}
The M-regularity of the individual summands in \theoremref{thm:Chen-Jiang} turns out
to be closely related to the Euler characteristic of the corresponding
$\mathscr{D}$-modules. The results in \cite{PS} show that when $(\mathcal{M}, F_{\bullet} \mathcal{M})$
underlies a polarizable complex Hodge module on an abelian variety $A$, the Euler
characteristic satisfies $\chi(A, \mathcal{M}) \geq 0$, and each coherent $\mathscr{O}_A$-module
$\gr_k^F \mathcal{M}$ is a GV-sheaf. The new result (in \lemmaref{lem:M-regular}) is that
each $\gr_k^F \mathcal{M}$ is actually M-regular, provided that $\chi(A, \mathcal{M}) > 0$.
That we can always get into the situation where the Euler characteristic is positive
follows from some general results about simple holonomic $\mathscr{D}$-modules from
\cite{Schnell-holo}.
\theoremref{thm:Chen-Jiang} implies that each graded quotient $\gr_k^F \mathcal{M}$ with
respect to the Hodge filtration is a GV-sheaf, the K\"ahler analogue of a result in
\cite{PS}. However, the stronger formulation above is new even in the case of smooth
projective varieties, and has further useful consequences. One such is the following:
for a holomorphic mapping $f \colon X \to T$ that is generically finite onto its
image, the locus
\[
S^0 (T, f_* \omega_X) = \menge{L \in \Pic^0(T)}%
{H^0(T, f_*\omega_X \otimes_{\shO_T} L) \neq 0}
\]
is preserved by the involution $L \mapsto L^{-1}$ on $\Pic^0(T)$; see \corollaryref{cor:gen-finite}.
This is a crucial ingredient in the proof of \theoremref{torus_intro}.
Going back to Wang's paper \cite{Wang}, its main purpose was to prove Beauville's
conjecture, namely that on a compact K\"ahler manifold $X$, every irreducible
component of every $\Sigma^k(X) = \menge{\rho \in \Char(X)}{H^k(X, \CC_{\rho}) \neq
0}$ contains characters of finite order. In the projective case, this is of course a
famous theorem by Simpson \cite{Simpson}. Combining the structural
\theoremref{thm:CHM-main} with known results about Hodge modules on abelian varieties
\cite{Schnell-laz} allows us to prove the following generalization of Wang's theorem
(which dealt with Hodge modules of geometric origin).
\begin{intro-theorem} \label{thm:finite-order}
If a polarizable real Hodge module $M \in \HM{T}{w}$ on a compact complex torus
admits an integral structure, then the sets
\[
S_m^i(T, M)
= \menge{\rho \in \Char(T)}{\dim H^i(T, M_{\RR} \otimes_{\mathbb{R}} \CC_{\rho}) \geq m}
\]
are finite unions of translates of linear subvarieties by points of finite order.
\end{intro-theorem}
The idea is to use Kronecker's theorem (about algebraic integers all of whose
conjugates have absolute value one) to prove that certain characters have finite
order. Roughly speaking, the characters in question are unitary because of the
existence of a polarization on $M$, and they take values in the group of algebraic
integers because of the existence of an integral structure on $M$.
\noindent
{\bf Projectivity questions.}
We conclude by noting that many of the results in this paper can be placed in the broader context of the following problem:
how far are natural geometric or sheaf theoretic constructions on compact K\"ahler manifolds in general,
and on compact complex tori in particular, from being determined by similar constructions on projective
manifolds? Theorems \ref{thm:direct_image} and \ref{thm:Chen-Jiang} provide the answer on tori
in the case of Hodge-theoretic constructions. We thank J. Koll\'ar for suggesting this point of view, and also the
statements of the problems in the paragraph below.
Further structural results could provide a general machine for reducing certain questions about K\"ahler manifolds to
the algebraic setting. For instance, by analogy with positivity conjectures in the algebraic case, one hopes for the following result
in the case of varying families: if $X$ and $Y$ are compact K\"ahler manifolds and
$f: X \rightarrow Y$ is a fiber space of maximal variation, i.e. such that the general fiber is bimeromorphic
to at most countably many other fibers, then $Y$ is projective. More generally, for an arbitrary such $f$,
is there a mapping $g: Y \rightarrow Z$ with $Z$ projective, such that the fibers of $f$ are bimeromorphically
isotrivial over those of $g$?
A slightly more refined version in the case when $Y = T$ is a torus, which is essentially a combination of Iitaka fibrations and Ueno's conjecture,
is this: there should exist a morphism $h: X \rightarrow Z$, where $Z$ is a variety of general type
generating an abelian quotient $g : T \rightarrow A$, such that the fibers of $h$ have Kodaira dimension $0$ and
are bimeromorphically isotrivial over the fibers of $g$.
\section{Real and complex Hodge modules}
\subsection{Real Hodge modules}
\label{par:RHM}
In this paper, we work with polarizable real Hodge modules on complex manifolds. This
is the natural setting for studying compact K\"ahler manifolds, because the
polarizations induced by K\"ahler forms are defined over $\mathbb{R}$ (but usually not
over $\mathbb{Q}$, as in the projective case). Saito originally developed the theory of
Hodge modules with rational coefficients, but as explained in \cite{Saito-Kae},
everything works just as well with real coefficients, provided one relaxes the
assumptions about local monodromy: the eigenvalues of the monodromy operator on the
nearby cycles are allowed to be arbitrary complex numbers of absolute value one,
rather than just roots of unity. This has already been observed several times in the
literature \cite{SV}; the point is that Saito's theory rests on certain results
about polarizable variations of Hodge structure \cite{Schmid,Zucker,CKS}, which hold
in this generality.
Let $X$ be a complex manifold. We first recall some terminology.
\begin{definition}
We denote by $\HM{X}{w}$ the category of polarizable real Hodge modules of weight
$w$; this is a semi-simple $\mathbb{R}$-linear abelian category, endowed with a faithful functor
to the category of real perverse sheaves.
\end{definition}
Saito constructs $\HM{X}{w}$ as a full subcategory of the category of all filtered
regular holonomic $\mathscr{D}$-modules with real structure, in several stages. To begin
with, recall that a \define{filtered regular holonomic $\mathscr{D}$-module with real
structure} on $X$ consists of the following four pieces of data: (1) a regular
holonomic left $\mathscr{D}_X$-module $\mathcal{M}$; (2) a good filtration $F_{\bullet} \mathcal{M}$ by
coherent $\shf{O}_X$-modules; (3) a perverse sheaf $M_{\RR}$ with coefficients in $\mathbb{R}$; (4)
an isomorphism $M_{\RR} \otimes_{\mathbb{R}} \mathbb{C} \simeq \DR(\mathcal{M})$. Although the isomorphism
is part of the data, we usually suppress it from the notation and simply
write $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR})$. The \define{support} $\Supp M$ is
defined to be the support of the underlying perverse sheaf $M_{\RR}$; one says that $M$
has \define{strict support} if $\Supp M$ is irreducible and if $M$ has no nontrivial
subobjects or quotient objects that are supported on a proper subset of $\Supp M$.
Now $M$ is called a \define{real Hodge module of weight $w$} if it satisfies several
additional conditions that are imposed by recursion on the dimension of $\Supp M$.
Although they are not quite stated in this way in \cite{Saito-HM}, the essence of
these conditions is that (1) every Hodge module decomposes into a sum of Hodge
modules with strict support,
and (2) every Hodge module with strict support is generically a variation of Hodge
structure, which uniquely determines the Hodge module. Given $k \in \mathbb{Z}$, set $\mathbb{R}(k)
= (2\pi i)^k \mathbb{R} \subseteq \mathbb{C}$; then one has the \define{Tate twist}
\[
M(k) = \bigl( \mathcal{M}, F_{\bullet - k} \mathcal{M}, M_{\RR} \otimes_{\mathbb{R}} \mathbb{R}(k) \bigr)
\in \HM{X}{w-2k}.
\]
Every real Hodge module of weight $w$ has a well-defined \define{dual} $\mathbb{D} M$,
which is a real Hodge module of weight $-w$ whose underlying perverse sheaf is the
Verdier dual $\mathbb{D} M_{\RR}$. A \define{polarization} is an isomorphism of real Hodge
modules $\mathbb{D} M \simeq M(w)$, subject to certain conditions that are again imposed
recursively; one says that $M$ is \define{polarizable} if it admits at least one
polarization.
\begin{example}
Every polarizable real variation of Hodge structure of weight $w$ on $X$ gives rise
to an object of $\HM{X}{w + \dim X}$. If $\mathcal{H}$ is such a variation, we denote the
underlying real local system by $\varH_{\RR}$, its complexification by $\varH_{\CC} = \varH_{\RR}
\otimes_{\mathbb{R}} \mathbb{C}$, and the corresponding flat bundle by $(\mathcal{H}, \nabla)$; then
$\mathcal{H} \simeq \varH_{\CC} \otimes_{\mathbb{C}} \shf{O}_X$. The flat connection makes $\mathcal{H}$ into a
regular holonomic left $\mathscr{D}$-module, filtered by $F_{\bullet} \mathcal{H} = F^{-\bullet}
\mathcal{H}$; the real structure is given by the real perverse sheaf $\varH_{\RR} \decal{\dim X}$.
\end{example}
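The simplest instance is the constant variation $\mathbb{R}_X$ of weight $0$: here
$\mathcal{H} = \shf{O}_X$ with the trivial connection, the filtration is given by $F_0
\shf{O}_X = \shf{O}_X$ and $F_{-1} \shf{O}_X = 0$, and the resulting object of
$\HM{X}{\dim X}$ is $(\shf{O}_X, F_{\bullet} \shf{O}_X, \mathbb{R}_X \decal{\dim X})$.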
We list a few useful properties of polarizable real Hodge modules. By definition,
every object $M \in \HM{X}{w}$ admits a locally finite \define{decomposition by
strict support}; when $X$ is compact, this is a finite decomposition
\[
M \simeq \bigoplus_{j=1}^n M_j,
\]
where each $M_j \in \HM{X}{w}$ has strict support equal to an irreducible analytic
subvariety $Z_j \subseteq X$. There are no nontrivial morphisms between Hodge modules
with different strict support; if we assume that $Z_1, \dotsc, Z_n$ are distinct, the
decomposition by strict support is therefore unique. Since the category $\HM{X}{w}$
is semi-simple, it follows that every polarizable real Hodge module of weight $w$ is
isomorphic to a direct sum of simple objects with strict support.
One of Saito's most important results is the following structure theorem relating
polarizable real Hodge modules and polarizable real variations of Hodge structure.
\begin{theorem}[Saito] \label{thm:Saito-structure}
The category of polarizable real Hodge modules of weight $w$ with strict support $Z
\subseteq X$ is equivalent to the category of generically defined polarizable real
variations of Hodge structure of weight $w - \dim Z$ on $Z$.
\end{theorem}
In other words, for any $M \in \HM{X}{w}$ with strict support $Z$, there is a dense
Zariski-open subset of the smooth locus of $Z$ over which it restricts to a
polarizable real variation of Hodge structure; conversely, every such variation
extends uniquely to a Hodge module with strict support $Z$. The proof in
\cite[Theorem~3.21]{Saito-MHM} carries over to the case of real coefficients; see
\cite{Saito-Kae} for further discussion.
\begin{lemma} \label{lem:Kashiwara}
The support of $M \in \HM{X}{w}$ lies in a submanifold $i \colon Y \hookrightarrow
X$ if and only if $M$ belongs to the image of the functor $i_{\ast} \colon \HM{Y}{w} \to
\HM{X}{w}$.
\end{lemma}
This result is often called \define{Kashiwara's equivalence}, because Kashiwara
proved the same thing for arbitrary coherent $\mathscr{D}$-modules. In the case of
Hodge modules, the point is that the coherent $\shf{O}_X$-modules $F_k \mathcal{M} /
F_{k-1} \mathcal{M}$ are in fact $\shO_Y$-modules.
\subsection{Compact K\"ahler manifolds and semi-simplicity}
In this section, we prove some results about the underlying regular holonomic
$\mathscr{D}$-modules of polarizable real Hodge modules on compact K\"ahler manifolds.
Our starting point is the following semi-simplicity theorem, due to
Deligne and Nori.
\begin{theorem}[Deligne, Nori]
Let $X$ be a compact K\"ahler manifold. If
\[
M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR}) \in \HM{X}{w},
\]
then the perverse sheaf $M_{\RR}$ and the $\mathscr{D}$-module $\mathcal{M}$ are
semi-simple.
\end{theorem}
\begin{proof}
Since the category $\HM{X}{w}$ is semi-simple, we may assume without loss of
generality that $M$ is simple, with strict support an irreducible analytic subvariety
$Z \subseteq X$. By Saito's \theoremref{thm:Saito-structure}, $M$ restricts to a
polarizable real variation of Hodge structure $\mathcal{H}$ of weight $w - \dim Z$ on a
Zariski-open subset of the smooth locus of $Z$; note that $\mathcal{H}$ is a simple object
in the category of real variations of Hodge structure. Now $M_{\RR}$
is the intersection complex of $\varH_{\RR}$, and so it suffices to prove that $\varH_{\RR}$ is
semi-simple. After resolving singularities, we can assume that $\mathcal{H}$ is defined on
a Zariski-open subset of a compact K\"ahler manifold; in that case, Deligne and Nori
have shown that $\varH_{\RR}$ is semi-simple \cite[\S1.12]{Deligne}. It follows that the
complexification $M_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ of the perverse sheaf is semi-simple as
well; by the Riemann-Hilbert correspondence, the same is true for the underlying
regular holonomic $\mathscr{D}$-module $\mathcal{M}$.
\end{proof}
A priori, there is no reason why the decomposition of the regular holonomic
$\mathscr{D}$-module $\mathcal{M}$ into simple factors should lift to a decomposition in the
category $\HM{X}{w}$. Nevertheless, it turns out that we can always choose the
decomposition in such a way that it is compatible with the filtration $F_{\bullet}
\mathcal{M}$.
\begin{proposition} \label{prop:complexification}
Let $M \in \HM{X}{w}$ be a simple polarizable real Hodge module on a compact K\"ahler
manifold. Then one of the following two statements is true:
\begin{enumerate}
\item The underlying perverse sheaf $M_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ is simple.
\item There is an endomorphism $J \in \End(M)$ with $J^2 = -\id$ such that
\[
\bigl( \mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR} \otimes_{\mathbb{R}} \mathbb{C} \bigr)
= \ker(J - i \cdot \id) \oplus \ker(J + i \cdot \id),
\]
and the perverse sheaves underlying $\ker(J \pm i \cdot \id)$ are simple.
\end{enumerate}
\end{proposition}
We begin by proving the following key lemma.
\begin{lemma} \label{lem:simple-RVHS}
Let $\mathcal{H}$ be a polarizable real variation of Hodge structure on a Zariski-open
subset of a compact K\"ahler manifold. If $\mathcal{H}$ is simple, then
\begin{aenumerate}
\item either the underlying complex local system $\varH_{\CC}$ is also simple,
\item or there is an endomorphism $J \in \End(\mathcal{H})$ with $J^2 = -\id$, such that
\[
\varH_{\CC} = \ker(J_{\CC} - i \cdot \id) \oplus \ker(J_{\CC} + i \cdot \id)
\]
is the sum of two (possibly isomorphic) simple local systems.
\end{aenumerate}
\end{lemma}
\begin{proof}
Since $X$ is a Zariski-open subset of a compact K\"ahler manifold, the theorem of the
fixed part holds on $X$, and the local system $\varH_{\CC}$ is semi-simple
\cite[\S1.12]{Deligne}. Choose a base point $x_0 \in X$, and write $H_{\RR}$ for the
fiber of the local system $\varH_{\RR}$ at the point $x_0$; it carries a polarizable
Hodge structure
\[
H_{\CC} = H_{\RR} \otimes_{\mathbb{R}} \mathbb{C} = \bigoplus_{p+q=w} H^{p,q},
\]
say of weight $w$. The fundamental group $\Gamma = \pi_1(X, x_0)$ acts on $H_{\RR}$, and as
we remarked above, $H_{\CC}$ decomposes into a sum of simple $\Gamma$-modules. The proof
of \cite[Proposition~1.13]{Deligne} shows that there is a nontrivial simple
$\Gamma$-module $V \subseteq H_{\CC}$ compatible with the Hodge decomposition, meaning
that
\[
V = \bigoplus_{p+q=w} V \cap H^{p,q}.
\]
Let $\bar{V} \subseteq H_{\CC}$ denote the conjugate of $V$ with respect to the real
structure $H_{\RR}$; it is another nontrivial simple $\Gamma$-module with
\[
\bar{V} = \bigoplus_{p+q=w} \bar{V} \cap H^{p,q}.
\]
The intersection $(V + \bar{V}) \cap H_{\RR}$ is therefore a $\Gamma$-invariant real
sub-Hodge structure of $H_{\RR}$. By the theorem of the fixed part, it extends to a real
sub-variation of $\mathcal{H}$; since $\mathcal{H}$ is simple, this means
that $H_{\CC} = V + \bar{V}$. Now there are two possibilities. (1) If $V = \bar{V}$, then
$H_{\CC} = V$, and $\varH_{\CC}$ is a simple local system. (2) If $V \neq \bar{V}$, then $H_{\CC}
= V \oplus \bar{V}$, and $\varH_{\CC}$ is the sum of two (possibly isomorphic) simple local
systems. The endomorphism algebra $\End(\varH_{\RR})$ coincides with the subalgebra of
$\Gamma$-invariants in $\End(H_{\RR})$; by the theorem of the fixed part, it is also a real
sub-Hodge structure. Let $p \in \End(H_{\CC})$ and $\bar{p} \in \End(H_{\CC})$ denote the
projections to the two subspaces $V$ and $\bar{V}$; both preserve the Hodge
decomposition, and are therefore of type $(0,0)$. This shows that the element $J =
i(p - \bar{p}) \in \End(H_{\CC})$ is a real Hodge class of type $(0,0)$ with $J^2 =
-\id$; by the theorem of the fixed part, $J$ is the restriction to $x_0$ of an
endomorphism of the variation of Hodge structure $\mathcal{H}$. This completes the proof
because $V$ and $\bar{V}$ are exactly the $\pm i$-eigenspaces of $J$.
\end{proof}
\begin{proof}[Proof of \propositionref{prop:complexification}]
Since $M$ is simple, it has strict support equal to an irreducible analytic
subvariety $Z \subseteq X$; by \theoremref{thm:Saito-structure}, $M$ is obtained from
a polarizable real variation of Hodge structure $\mathcal{H}$ of weight $w - \dim Z$ on a
dense Zariski-open subset of the smooth locus of $Z$. Let $\varH_{\RR}$ denote the
underlying real local system; then $M_{\RR}$ is isomorphic to the intersection complex
of $\varH_{\RR}$. Since we can resolve the singularities of $Z$ by blowing up along
submanifolds of $X$, \lemmaref{lem:simple-RVHS} applies to this situation; it shows
that $\varH_{\CC} = \varH_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ has at most two simple factors. The same is
true for $M_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ and, by the Riemann-Hilbert correspondence, for
$\mathcal{M}$.
Now we have to consider two cases. If $\varH_{\CC}$ is simple, then $\mathcal{M}$ is also
simple, and we are done. If $\varH_{\CC}$ is not simple, then by
\lemmaref{lem:simple-RVHS}, there is an endomorphism $J \in \End(\mathcal{H})$ with $J^2 =
-\id$ such that the two simple factors are the $\pm i$-eigenspaces of $J$. By
\theoremref{thm:Saito-structure}, it extends uniquely to an endomorphism $J \in
\End(M)$ in the category $\HM{X}{w}$; in particular, we obtain an induced endomorphism
\[
J \colon \mathcal{M} \to \mathcal{M}
\]
that is strictly compatible with the filtration $F_{\bullet} \mathcal{M}$ by
\cite[Proposition~5.1.14]{Saito-HM}. Now the $\pm i$-eigenspaces of $J$
give us the desired decomposition
\[
(\mathcal{M}, F_{\bullet} \mathcal{M}) =
(\mathcal{M}', F_{\bullet} \mathcal{M}') \oplus (\mathcal{M}'', F_{\bullet} \mathcal{M}'');
\]
note that the two regular holonomic $\mathscr{D}$-modules $\mathcal{M}'$ and $\mathcal{M}''$ are simple
because the corresponding perverse sheaves are the intersection complexes of the
simple complex local systems $\ker(J_{\CC} \pm i \cdot \id)$, where $J_{\CC}$ stands
for the induced endomorphism of the complexification $M_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$.
\end{proof}
\subsection{Complex Hodge modules}
\label{par:CHM}
In Saito's recursive definition of the category of polarizable Hodge modules, the
existence of a real structure is crucial: to say that a given filtration on a complex
vector space is a Hodge structure of a certain weight, or that a given bilinear form
is a polarization, one needs to have complex conjugation. This explains why there is
as yet no general theory of ``polarizable complex Hodge modules'' -- although it
seems likely that such a theory can be constructed within the framework of twistor
$\mathscr{D}$-modules developed by Sabbah and Mochizuki. We now explain a workaround for
this problem, suggested by \propositionref{prop:complexification}.
\begin{definition}
A \define{polarizable complex Hodge module} on a complex manifold $X$ is a pair
$(M, J)$, consisting of a polarizable real Hodge module $M \in \HM{X}{w}$ and an
endomorphism $J \in \End(M)$ with $J^2 = -\id$.
\end{definition}
The space of morphisms between two polarizable complex Hodge modules $(M_1, J_1)$ and
$(M_2, J_2)$ is defined in the obvious way:
\[
\Hom \bigl( (M_1, J_1), (M_2, J_2) \bigr) =
\menge{f \in \Hom(M_1, M_2)}{f \circ J_1 = J_2 \circ f}
\]
Note that composition with $J_1$ (or equivalently, $J_2$) puts a natural complex
structure on this real vector space.
\begin{definition}
We denote by $\HMC{X}{w}$ the category of polarizable complex Hodge modules of weight
$w$; it is $\mathbb{C}$-linear and abelian.
\end{definition}
From a polarizable complex Hodge module $(M, J)$, we obtain a filtered regular
holonomic $\mathscr{D}$-module as well as a complex perverse sheaf, as follows. Denote by
\[
\mathcal{M} = \mathcal{M}' \oplus \mathcal{M}'' = \ker(J - i \cdot \id) \oplus \ker(J + i \cdot \id)
\]
the induced decomposition of the regular holonomic $\mathscr{D}$-module underlying $M$, and
observe that $J \in \End(\mathcal{M})$ is strictly compatible with the Hodge filtration
$F_{\bullet} \mathcal{M}$. This means that we have a decomposition
\[
(\mathcal{M}, F_{\bullet} \mathcal{M}) = (\mathcal{M}', F_{\bullet} \mathcal{M}')
\oplus (\mathcal{M}'', F_{\bullet} \mathcal{M}'')
\]
in the category of filtered $\mathscr{D}$-modules. Similarly, let $J_{\CC} \in \End(M_{\CC})$
denote the induced endomorphism of the complex perverse sheaf underlying $M$; then
\[
M_{\CC} = M_{\RR} \otimes_{\mathbb{R}} \mathbb{C} =
\ker(J_{\CC} - i \cdot \id) \oplus \ker(J_{\CC} + i \cdot \id),
\]
and the two summands correspond to $\mathcal{M}'$ and $\mathcal{M}''$ under the Riemann-Hilbert
correspondence. Note that they are isomorphic as \emph{real} perverse
sheaves; the only difference is in the $\mathbb{C}$-action. We obtain a functor
\[
(M, J) \mapsto \ker(J_{\CC} - i \cdot \id)
\]
from $\HMC{X}{w}$ to the category of complex perverse sheaves on $X$; it is faithful,
but depends on the choice of $i$.
\begin{definition}
Given $(M, J) \in \HMC{X}{w}$, we call
\[
\ker(J_{\CC} - i \cdot \id) \subseteq M_{\CC}
\]
the \define{underlying complex perverse sheaf}, and
\[
(\mathcal{M}', F_{\bullet} \mathcal{M}') = \ker(J - i \cdot \id)
\subseteq (\mathcal{M}, F_{\bullet} \mathcal{M})
\]
the \define{underlying filtered regular holonomic $\mathscr{D}$-module}.
\end{definition}
There is also an obvious functor from polarizable real Hodge modules to polarizable
complex Hodge modules: it takes $M \in \HM{X}{w}$ to the pair
\[
\bigl( M \oplus M, J_M \bigr),
\quad J_M(m_1, m_2) = (-m_2, m_1).
\]
Not surprisingly, the underlying complex perverse sheaf is isomorphic to $M_{\RR}
\otimes_{\mathbb{R}} \mathbb{C}$, and the underlying filtered regular holonomic $\mathscr{D}$-module to
$(\mathcal{M}, F_{\bullet} \mathcal{M})$. The proof of the following lemma is left as an easy exercise.
\begin{lemma}
A polarizable complex Hodge module $(M, J) \in \HMC{X}{w}$ belongs to the image of
$\HM{X}{w}$ if and only if there exists $r \in \End(M)$ with
\[
r \circ J = -J \circ r \quad \text{and} \quad r^2 = \id.
\]
\end{lemma}
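Indeed, if $(M, J) = \bigl( N \oplus N, J_N \bigr)$ for some $N \in \HM{X}{w}$, then
$r(m_1, m_2) = (m_1, -m_2)$ has the required properties. Conversely, given such an
$r$, the decomposition $M = \ker(r - \id) \oplus \ker(r + \id)$ is exchanged by $J$
(because $r \circ J = -J \circ r$), and with $N = \ker(r - \id)$, the isomorphism
\[
N \oplus N \to M, \quad (n_1, n_2) \mapsto n_1 + J(n_2),
\]
identifies $(M, J)$ with $\bigl( N \oplus N, J_N \bigr)$.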
In particular, $(M, J)$ should be isomorphic to its \define{complex conjugate} $(M,
-J)$, but this in itself does not guarantee the existence of a real structure -- for
example when $M$ is simple and $\End(M)$ is isomorphic to the quaternions $\mathbb{H}$.
\begin{proposition} \label{prop:semi-simple}
The category $\HMC{X}{w}$ is semi-simple, and the simple objects are of the following
two types:
\begin{renumerate}
\item \label{en:simple-1}
$(M \oplus M, J_M)$, where $M \in \HM{X}{w}$ is simple and $\End(M) = \mathbb{R}$.
\item \label{en:simple-2}
$(M, J)$, where $M \in \HM{X}{w}$ is simple and $\End(M) \in \{\mathbb{C}, \mathbb{H}\}$.
\end{renumerate}
\end{proposition}
\begin{proof}
Since $\HM{X}{w}$ is semi-simple, every object of $\HMC{X}{w}$ is isomorphic to a
direct sum of polarizable complex Hodge modules of the form
\begin{equation} \label{eq:semi-simple}
\bigl( M^{\oplus n}, J \bigr),
\end{equation}
where $M \in \HM{X}{w}$ is simple, and $J$ is an $n \times n$-matrix with entries in
$\End(M)$ such that $J^2 = -\id$. By Schur's lemma and the classification of real
division algebras, the endomorphism algebra of a
simple polarizable real Hodge module is one of $\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$. If
$\End(M) = \mathbb{R}$, elementary linear algebra shows that $n$ must be even and that
\eqref{eq:semi-simple} is isomorphic to the direct sum of $n/2$ copies of
\ref{en:simple-1}. If $\End(M) = \mathbb{C}$, one can diagonalize the matrix $J$; this
means that \eqref{eq:semi-simple} is isomorphic to a direct sum of $n$ objects of
type \ref{en:simple-2}. If $\End(M) = \mathbb{H}$, it is still possible to diagonalize
$J$, but this needs some nontrivial results about matrices with entries in the quaternions
\cite{Zhang}. Write $J \in M_n(\mathbb{H})$ in the form $J = J_1 + J_2 j$, with $J_1, J_2
\in M_n(\mathbb{C})$, and consider the ``adjoint matrix''
\[
\chi_J = \begin{pmatrix}
J_1 & J_2 \\ - \overline{J_2} & \overline{J_1}
\end{pmatrix} \in M_{2n}(\mathbb{C}).
\]
Since $J^2 = -\id$, one also has $\chi_J^2 = -\id$, and so the matrix $J$ is normal
by \cite[Theorem~4.2]{Zhang}. According to \cite[Corollary~6.2]{Zhang}, this implies
the existence of a unitary matrix $U \in M_n(\mathbb{H})$ such that $U^{-1} J U = i \cdot
\id$; here unitary means that $U^{-1} = U^{\ast}$ is equal to the conjugate transpose
of $U$. The consequence is that \eqref{eq:semi-simple} is again isomorphic to a
direct sum of $n$ objects of type \ref{en:simple-2}. Since it is straightforward to
prove that both types of objects are indeed simple, this concludes the proof.
\end{proof}
\begin{note}
The three possible values for the endomorphism algebra of a simple object $M \in
\HM{X}{w}$ reflect the splitting behavior of its complexification $(M \oplus M, J_M)
\in \HMC{X}{w}$: if $\End(M) = \mathbb{R}$, it remains irreducible; if $\End(M) = \mathbb{C}$, it
splits into two non-isomorphic simple factors; if $\End(M) = \mathbb{H}$, it splits into two
isomorphic simple factors. Note that the endomorphism ring of a simple polarizable
complex Hodge module is always isomorphic to $\mathbb{C}$, in accordance with Schur's lemma.
\end{note}
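The dichotomy in the note can also be read off from endomorphism algebras: an
endomorphism of $M \oplus M$ commutes with $J_M$ if and only if it has the form
$a + b J_M$ with $a, b \in \End(M)$ acting diagonally, so that
\[
\End \bigl( M \oplus M, J_M \bigr) \simeq \End(M) \otimes_{\mathbb{R}} \mathbb{C}.
\]
Now $\mathbb{R} \otimes_{\mathbb{R}} \mathbb{C} \simeq \mathbb{C}$ is a division algebra, $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}
\simeq \mathbb{C} \times \mathbb{C}$ has two orthogonal central idempotents, and $\mathbb{H}
\otimes_{\mathbb{R}} \mathbb{C} \simeq M_2(\mathbb{C})$ is a full matrix algebra; these three cases
reproduce exactly the three kinds of splitting behavior described above.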
Our ad-hoc definition of the category $\HMC{X}{w}$ has the advantage that every
result about polarizable real Hodge modules that does not explicitly mention the
real structure extends to polarizable complex Hodge modules. For example, each $(M,
J) \in \HMC{X}{w}$ admits a unique decomposition by strict support: $M$ admits such a
decomposition, and since there are no nontrivial morphisms between objects with
different strict support, $J$ is automatically compatible with the decomposition.
For much the same reason, Kashiwara's equivalence (in \lemmaref{lem:Kashiwara}) holds
also for polarizable complex Hodge modules.
Another result that immediately carries over is Saito's direct image theorem. The
strictness of the direct image complex is one of the crucial properties of polarizable
Hodge modules; in the special case of the morphism from a projective variety $X$ to a
point, it is equivalent to the $E_1$-degeneration of the spectral sequence
\[
E_1^{p,q} = H^{p+q} \bigl( X, \gr_p^F \DR(\mathcal{M}') \bigr)
\Longrightarrow H^{p+q} \bigl( X, \DR(\mathcal{M}') \bigr),
\]
a familiar result in classical Hodge theory when $\mathcal{M}' = \shf{O}_X$.
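For example, when $\mathcal{M}' = \shf{O}_X$ with its trivial filtration, one finds
(with the usual conventions for the filtered de Rham complex) that
$\gr_p^F \DR(\shf{O}_X) \simeq \Omega_X^{-p} [\dim X + p]$, and after reindexing,
the $E_1$-degeneration specializes to the degeneration of the Hodge--de Rham
spectral sequence
\[
E_1^{a,b} = H^b \bigl( X, \Omega_X^a \bigr)
\Longrightarrow H^{a+b}(X, \mathbb{C})
\]
of the projective manifold $X$.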
\begin{theorem}
Let $f \colon X \to Y$ be a projective morphism between complex manifolds.
\begin{aenumerate}
\item If $(M, J) \in \HMC{X}{w}$, then for each $k \in \mathbb{Z}$, the pair
\[
\mathcal{H}^k f_{\ast}(M, J) = \bigl( \mathcal{H}^k f_{\ast} M, \mathcal{H}^k f_{\ast} J \bigr) \in \HMC{Y}{w+k}
\]
is again a polarizable complex Hodge module.
\item The direct image complex $f_{+}(\mathcal{M}', F_{\bullet} \mathcal{M}')$ is strict, and
$\mathcal{H}^k f_{+}(\mathcal{M}', F_{\bullet} \mathcal{M}')$ is the filtered regular holonomic
$\mathscr{D}$-module underlying $\mathcal{H}^k f_{\ast}(M, J)$.
\end{aenumerate}
\end{theorem}
\begin{proof}
Since $M \in \HM{X}{w}$ is a polarizable real Hodge module, we have $\mathcal{H}^k f_{\ast} M \in
\HM{Y}{w+k}$ by Saito's direct image theorem \cite[Th\'eor\`eme~5.3.1]{Saito-HM}. Now
it suffices to note that $J \in \End(M)$ induces an endomorphism $\mathcal{H}^k f_{\ast} J \in
\End \bigl( \mathcal{H}^k f_{\ast} M \bigr)$ whose square is equal to minus the identity. Since
\[
(\mathcal{M}, F_{\bullet} \mathcal{M}) = (\mathcal{M}', F_{\bullet} \mathcal{M}') \oplus
(\mathcal{M}'', F_{\bullet} \mathcal{M}''),
\]
the strictness of the complex $f_{+}(\mathcal{M}', F_{\bullet} \mathcal{M}')$ follows from that of
$f_{+}(\mathcal{M}, F_{\bullet} \mathcal{M})$, which is part of the above-cited theorem by Saito.
\end{proof}
On compact K\"ahler manifolds, the semi-simplicity results from the previous
section can be summarized as follows.
\begin{proposition} \label{prop:CHM-Kaehler}
Let $X$ be a compact K\"ahler manifold.
\begin{aenumerate}
\item A polarizable complex Hodge module $(M, J) \in \HMC{X}{w}$ is simple if and
only if the underlying complex perverse sheaf
\[
\ker \Bigl( J_{\CC} - i \cdot \id \, \colon
M_{\RR} \otimes_{\mathbb{R}} \mathbb{C} \to M_{\RR} \otimes_{\mathbb{R}} \mathbb{C} \Bigr)
\]
is simple.
\item If $M \in \HM{X}{w}$, then every simple factor of the complex perverse sheaf
$M_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ underlies a polarizable complex Hodge module.
\end{aenumerate}
\end{proposition}
\begin{proof}
This is a restatement of \propositionref{prop:complexification}.
\end{proof}
\subsection{Complex variations of Hodge structure}
In this section, we discuss the relation between polarizable complex Hodge modules
and polarizable complex variations of Hodge structure.
\begin{definition}
A \define{polarizable complex variation of Hodge structure} is a pair $(\mathcal{H}, J)$,
where $\mathcal{H}$ is a polarizable real variation of Hodge structure, and $J \in
\End(\mathcal{H})$ is an endomorphism with $J^2 = -\id$.
\end{definition}
As before, the \define{complexification} of a real variation $\mathcal{H}$ is defined as
\[
\bigl( \mathcal{H} \oplus \mathcal{H}, J_{\mathcal{H}} \bigr), \quad
J_{\mathcal{H}}(h_1, h_2) = (-h_2, h_1),
\]
and a complex variation $(\mathcal{H}, J)$ is real if and only if there is an endomorphism
$r \in \End(\mathcal{H})$ with $r \circ J = - J \circ r$ and $r^2 = \id$. Note that the
direct sum of $(\mathcal{H}, J)$ with its \define{complex conjugate} $(\mathcal{H}, -J)$ has an
obvious real structure.
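Concretely, on $\bigl( \mathcal{H} \oplus \mathcal{H}, J \oplus (-J) \bigr)$, the endomorphism
\[
r(h_1, h_2) = (h_2, h_1)
\]
satisfies $r^2 = \id$ and $r \circ \bigl( J \oplus (-J) \bigr) = -\bigl( J
\oplus (-J) \bigr) \circ r$, and therefore defines the real structure in
question.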
The definition above is convenient for our purposes; it is also not hard to show that
it is equivalent to the one in \cite[\S1]{Deligne}, up to the choice of weight.
(Deligne only considers complex variations of weight zero.)
\begin{example} \label{ex:unitary}
Let $\rho \in \Char(X)$ be a unitary character of the fundamental group, and denote
by $\CC_{\rho}$ the resulting unitary local system. It determines a polarizable complex
variation of Hodge structure in the following manner. The underlying real local
system is $\mathbb{R}^2$, with monodromy acting by
\[
\begin{pmatrix}
\Re \rho & - \Im \rho \\
\Im \rho & \Re \rho
\end{pmatrix};
\]
the standard inner product on $\mathbb{R}^2$ makes this into a polarizable real variation of
Hodge structure $\mathcal{H}_{\rho}$ of weight zero, with $J_{\rho} \in \End(\mathcal{H}_{\rho})$
acting as $J_{\rho}(x,y) = (-y,x)$; for simplicity, we continue to denote the pair
$\bigl( \mathcal{H}_{\rho}, J_{\rho} \bigr)$ by the symbol $\CC_{\rho}$.
\end{example}
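In the example above, the monodromy matrix is exactly $\Re \rho \cdot \id + \Im
\rho \cdot J_{\rho}$, hence commutes with $J_{\rho}$; on the complexification,
the vector $(1, -i)$ spans
\[
\ker \bigl( J_{\rho, \mathbb{C}} - i \cdot \id \bigr),
\]
and the monodromy acts on it by $\Re \rho + i \Im \rho = \rho$. This identifies
the complex local system underlying $\bigl( \mathcal{H}_{\rho}, J_{\rho} \bigr)$ with
$\CC_{\rho}$, as the notation suggests.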
We have the following criterion for deciding whether a polarizable complex Hodge
module is \define{smooth}, meaning induced by a complex variation of Hodge structure.
\begin{lemma} \label{lem:CHM-smooth}
Given $(M, J) \in \HMC{X}{w}$, let us denote by
\[
\mathcal{M} = \mathcal{M}' \oplus \mathcal{M}'' = \ker(J - i \cdot \id) \oplus \ker(J + i \cdot \id)
\]
the induced decomposition of the regular holonomic $\mathscr{D}$-module underlying $M$.
If $\mathcal{M}'$ is coherent as an $\shf{O}_X$-module, then $M$ is smooth.
\end{lemma}
\begin{proof}
Let $M_{\CC} = \ker(J_{\CC} - i \cdot \id) \oplus \ker(J_{\CC} + i \cdot \id)$ be the
analogous decomposition of the underlying perverse sheaf. Since $\mathcal{M}'$ is
$\shf{O}_X$-coherent, it is a vector bundle with flat connection; by the Riemann-Hilbert
correspondence, the first factor is therefore (up to a shift in degree by $\dim X$) a
complex local system. Since it is isomorphic to $M_{\RR}$ as a real perverse sheaf, it
follows that $M_{\RR}$ is also a local system; but then $M$ is smooth by
\cite[Lemme~5.1.10]{Saito-HM}.
\end{proof}
In general, the relationship between complex Hodge modules and complex variations of
Hodge structure is governed by the following theorem; it is of course an immediate
consequence of Saito's results (see \theoremref{thm:Saito-structure}).
\begin{theorem} \label{thm:structure-complex}
The category of polarizable complex Hodge modules of weight $w$ with strict support
$Z \subseteq X$ is equivalent to the category of generically defined polarizable
complex variations of Hodge structure of weight $w - \dim Z$ on $Z$.
\end{theorem}
\subsection{Integral structures on Hodge modules}
By working with polarizable real (or complex) Hodge modules, we lose certain
arithmetic information about the monodromy of the underlying perverse sheaves, such
as the fact that the monodromy eigenvalues are roots of unity. One can recover some of
this information by asking for the existence of an ``integral structure''
\cite[Definition~1.9]{Schnell-laz}, which is just a constructible complex of sheaves
of $\mathbb{Z}$-modules that becomes isomorphic to the perverse sheaf underlying the Hodge
module after tensoring by $\mathbb{R}$.
\begin{definition}
An \define{integral structure} on a polarizable real Hodge module $M \in \HM{X}{w}$
is a constructible complex $E \in \mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_X)$ such that $M_{\RR} \simeq E \otimes_{\mathbb{Z}} \mathbb{R}$.
\end{definition}
As explained in \cite[\S1.2.2]{Schnell-laz}, the existence of an integral structure
is preserved by many of the standard operations on (mixed) Hodge modules, such as
direct and inverse images or duality. Note that even though it makes sense to ask
whether a given (mixed) Hodge module admits an integral structure, there appears to
be no good functorial theory of ``polarizable integral Hodge modules''.
\begin{lemma} \label{lem:integral-summand}
If $M \in \HM{X}{w}$ admits an integral structure, then the same is true for every
summand in the decomposition of $M$ by strict support.
\end{lemma}
\begin{proof}
Consider the decomposition
\[
M = \bigoplus_{j=1}^n M_j
\]
by strict support, with $Z_1, \dotsc, Z_n \subseteq X$ distinct irreducible analytic
subvarieties. Each $M_j$ is a polarizable real Hodge module with strict support
$Z_j$, and therefore comes from a polarizable real variation of Hodge structure
$\mathcal{H}_j$ on a dense Zariski-open subset of $Z_j$. What we have to prove is that each
$\mathcal{H}_j$ can be defined over $\mathbb{Z}$. Let $M_{\RR}$ denote the underlying real perverse
sheaf, and set $d_j = \dim Z_j$. According to \cite[Proposition~2.1.17]{BBD}, $Z_j$
is an irreducible component in the support of the $(-d_j)$-th cohomology sheaf of
$M_{\RR}$, and $\mathcal{H}_{j, \mathbb{R}}$ is the restriction of that constructible sheaf to a
Zariski-open subset of $Z_j$. Since $M_{\RR} \simeq E \otimes_{\mathbb{Z}} \mathbb{R}$, it follows
that $\mathcal{H}_j$ is defined over the integers.
\end{proof}
\subsection{Operations on Hodge modules}
\label{par:operations}
In this section, we recall three useful operations for polarizable real (and
complex) Hodge modules. If $\Supp M$ is compact, we define the \define{Euler
characteristic} of $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR}) \in \HM{X}{w}$ by the formula
\[
\chi(X, M) = \sum_{i \in \mathbb{Z}} (-1)^i \dim_{\mathbb{R}} H^i(X, M_{\RR})
= \sum_{i \in \mathbb{Z}} (-1)^i \dim_{\mathbb{C}} H^i \bigl( X, \DR(\mathcal{M}) \bigr).
\]
For $(M, J) \in \HMC{X}{w}$, we let $\mathcal{M} = \mathcal{M}' \oplus \mathcal{M}'' = \ker(J - i \cdot
\id) \oplus \ker(J + i \cdot \id)$ be the decomposition into eigenspaces, and define
\[
\chi(X, M, J)
= \sum_{i \in \mathbb{Z}} (-1)^i \dim_{\mathbb{C}} H^i \bigl( X, \DR(\mathcal{M}') \bigr).
\]
With this definition, one has $\chi(X, M) = \chi(X, M, J) + \chi(X, M, -J)$.
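Indeed, $\ker(-J - i \cdot \id) = \ker(J + i \cdot \id) = \mathcal{M}''$, so that
\[
\chi(X, M, -J)
= \sum_{i \in \mathbb{Z}} (-1)^i \dim_{\mathbb{C}} H^i \bigl( X, \DR(\mathcal{M}'') \bigr),
\]
and the additivity follows from $\DR(\mathcal{M}) = \DR(\mathcal{M}') \oplus \DR(\mathcal{M}'')$.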
Given a smooth morphism $f \colon Y \to X$ of relative dimension $\dim f = \dim Y -
\dim X$, we define the \define{naive inverse image}
\[
f^{-1} M = \bigl( f^{\ast} \mathcal{M}, f^{\ast} F_{\bullet} \mathcal{M}, f^{-1} M_{\RR} \bigr).
\]
One can show that $f^{-1} M \in \HM{Y}{w + \dim f}$; see \cite[\S9]{Schnell-van} for
more details. The same is true for polarizable complex Hodge modules: if
$(M, J) \in \HMC{X}{w}$, then one obviously has
\[
f^{-1}(M, J) = \bigl( f^{-1} M, f^{-1} J \bigr) \in \HMC{Y}{w + \dim f}.
\]
One can also twist a polarizable complex Hodge module by a unitary character.
\begin{lemma} \label{lem:twist}
For any unitary character $\rho \in \Char(X)$, there is an object
\[
(M, J) \otimes_{\mathbb{C}} \CC_{\rho} \in \HMC{X}{w}
\]
whose associated complex perverse sheaf is $\ker(J_{\CC} - i \cdot \id) \otimes_{\mathbb{C}}
\CC_{\rho}$.
\end{lemma}
\begin{proof}
In the notation of \exampleref{ex:unitary}, consider the tensor product
\[
M \otimes_{\mathbb{R}} \mathcal{H}_{\rho} \in \HM{X}{w};
\]
it is again a polarizable real Hodge module of weight $w$ because $\mathcal{H}_{\rho}$ is a
polarizable real variation of Hodge structure of weight zero. The square of the
endomorphism $J \otimes J_{\rho}$ is the identity, and so
\[
N = \ker \bigl( J \otimes J_{\rho} + \id \bigr)
\subseteq M \otimes_{\mathbb{R}} \mathcal{H}_{\rho}
\]
is again a polarizable real Hodge module of weight $w$. Now $K = J \otimes \id
\in \End(N)$ satisfies $K^2 = -\id$, which means that the pair $(N, K)$ is a
polarizable complex Hodge module of weight $w$. On the associated complex perverse
sheaf
\[
\ker \bigl( K_{\mathbb{C}} - i \cdot \id \bigr)
\subseteq M_{\CC} \otimes_{\mathbb{C}} \mathcal{H}_{\rho, \mathbb{C}},
\]
both $J_{\CC} \otimes \id$ and $\id \otimes J_{\rho, \mathbb{C}}$ act as multiplication by $i$,
which means that
\[
\ker \bigl( K_{\mathbb{C}} - i \cdot \id \bigr)
= \ker(J_{\CC} - i \cdot \id) \otimes_{\mathbb{C}} \CC_{\rho}.
\]
The corresponding regular holonomic $\mathscr{D}$-module is obviously
\[
\mathcal{N}' = \mathcal{M}' \otimes_{\shf{O}_X} (L, \nabla),
\]
with the filtration induced by $F_{\bullet} \mathcal{M}'$; here $(L, \nabla)$ denotes the
flat bundle corresponding to the complex local system $\CC_{\rho}$, and $\mathcal{M} = \mathcal{M}'
\oplus \mathcal{M}''$ as above.
\end{proof}
\begin{note}
The proof shows that
\begin{align*}
N_{\mathbb{C}} &= \bigl( \ker(J_{\CC} - i \cdot \id) \otimes_{\mathbb{C}} \CC_{\rho} \bigr)
\oplus \bigl( \ker(J_{\CC} + i \cdot \id) \otimes_{\mathbb{C}} \CC_{\bar\rho} \bigr) \\
\mathcal{N} &= \bigl( \mathcal{M}' \otimes_{\shf{O}_X} (L, \nabla) \bigr)
\oplus \bigl( \mathcal{M}'' \otimes_{\shf{O}_X} (L, \nabla)^{-1} \bigr),
\end{align*}
where $\bar\rho$ is the complex conjugate of the character $\rho \in \Char(X)$.
\end{note}
\section{Hodge modules on complex tori}
\subsection{Main result}
The paper \cite{PS} contains several results about Hodge modules of geometric origin
on abelian varieties. In this chapter, we generalize these results to arbitrary
polarizable complex Hodge modules on compact complex tori. To do so, we develop a
beautiful idea due to Wang \cite{Wang}, namely that up to direct sums and character
twists, every such object actually comes from an abelian variety.
\begin{theorem} \label{thm:CHM-main}
Let $(M, J) \in \HMC{T}{w}$ be a polarizable complex Hodge module on a compact
complex torus $T$. Then there is a decomposition
\begin{equation} \label{eq:decomposition}
(M, J) \simeq \bigoplus_{j=1}^n
q_j^{-1} (N_j, J_j) \otimes_{\mathbb{C}} \mathbb{C}_{\rho_j}
\end{equation}
where $q_j \colon T \to T_j$ is a surjective morphism with connected fibers,
$\rho_j \in \Char(T)$ is a unitary character, and $(N_j, J_j) \in \HMC{T_j}{w -
\dim q_j}$ is a simple polarizable complex Hodge module with $\Supp N_j$ projective
and $\chi(T_j, N_j, J_j) > 0$.
\end{theorem}
For Hodge modules of geometric origin, a less precise result was proved by Wang
\cite{Wang}. His proof makes use of the decomposition theorem, which in the setting
of arbitrary compact K\"ahler manifolds, is only known for Hodge modules of geometric
origin \cite{Saito-Kae}. This technical issue can be circumvented by phrasing
everything in terms of generically defined variations of Hodge structure.
To get a result for a polarizable real Hodge module $M \in \HM{T}{w}$, we simply
apply \theoremref{thm:CHM-main} to its complexification $(M \oplus M, J_M) \in
\HMC{T}{w}$. One could say more about the terms in the decomposition below, but the
following version is enough for our purposes.
\begin{corollary} \label{cor:CHM-real}
Let $M \in \HM{T}{w}$ be a polarizable real Hodge module on a compact complex torus
$T$. Then in the notation of \theoremref{thm:CHM-main}, one has
\[
(M \oplus M, J_M)
\simeq \bigoplus_{j=1}^n q_j^{-1}(N_j, J_j) \otimes_{\mathbb{C}} \mathbb{C}_{\rho_j}.
\]
If $M$ admits an integral structure, then each $\rho_j \in \Char(T)$ has finite
order.
\end{corollary}
The proof of these results takes up the rest of the chapter.
\subsection{Subvarieties of complex tori}
This section contains a structure theorem for subvarieties of compact complex
tori. The statement is contained in \cite[Propositions~2.3~and~2.4]{Wang}, but
we give a simpler argument below.
\begin{proposition} \label{prop:structure}
Let $X$ be an irreducible analytic subvariety of a compact complex torus $T$. Then
there is a subtorus $S \subseteq T$ with the following two properties:
\begin{aenumerate}
\item $S + X = X$ and the quotient $Y = X/S$ is projective.
\item If $D \subseteq X$ is an irreducible analytic subvariety with $\dim D = \dim X -
1$, then $S + D = D$.
\end{aenumerate}
In particular, every divisor on $X$ is the preimage of a divisor on $Y$.
\end{proposition}
\begin{proof}
It is well-known that the algebraic reduction of $T$ is an abelian variety. More
precisely, there is a subtorus $S \subseteq T$ such that $A = T/S$ is an abelian
variety, and every other subtorus with this property contains $S$; see e.g.
\cite[Ch.2~\S6]{BL}.
Now let $X \subseteq T$ be an irreducible analytic subvariety of $T$; without loss of
generality, we may assume that $0 \in X$ and that $X$ is not contained in any proper
subtorus of $T$. By a theorem of Ueno \cite[Theorem~10.9]{Ueno}, there is a subtorus
$S' \subseteq T$ with $S' + X \subseteq X$ and such that $X/S' \subseteq T/S'$ is of
general type. In particular, $X/S'$ is projective; but then $T/S'$ must also be
projective, which means that $S \subseteq S'$. Setting $Y = X/S$, we get a cartesian
diagram
\[
\begin{tikzcd}
X \rar[hook] \dar & T \dar \\
Y \rar[hook] & A
\end{tikzcd}
\]
with $Y$ projective. Now it remains to show that every divisor on $X$ is the pullback
of a divisor from $Y$.
Let $D \subseteq X$ be an irreducible analytic subvariety with $\dim D = \dim X - 1$;
as before, we may assume that $0 \in D$. For dimension reasons, either $S + D = D$ or
$S + D = X$; let us suppose that $S + D = X$ and see how this leads to a
contradiction. Define $T_D \subseteq T$ to be the smallest subtorus of $T$ containing
$D$; then $S + T_D = T$. If $T_D = T$, then the same reasoning as above would show that $S
+ D = D$; therefore $T_D \neq T$, and $\dim (T_D \cap S) \leq \dim S - 1$. Now
\[
D \cap S \subseteq T_D \cap S \subseteq S,
\]
and because $\dim(D \cap S) = \dim S - 1$ (every nonempty fiber of the
surjective addition morphism $S \times D \to X$ has dimension at least $\dim S +
\dim D - \dim X = \dim S - 1$, and the fiber over $0$ is isomorphic to $D \cap
S$), it follows that $D \cap S = T_D \cap S$
consists of a subtorus $S''$ and finitely many of its translates. After dividing out
by $S''$, we may assume that $\dim S = 1$ and that $D \cap S = T_D \cap S$ is a
finite set; in particular, $D$ is finite over $Y$, and therefore also projective. Now
consider the addition morphism
\[
S \times D \to T.
\]
Since $S + D = X$, its image is equal to $X$; because $S$ and $D$ are both
projective, it follows that $X$ is projective, and hence that $T$ is projective. But
this contradicts our choice of $S$. The conclusion is that $S + D = D$, as asserted.
\end{proof}
\begin{note}
It is possible for $S$ to be itself an abelian variety; this is why the proof that $S
+ D \neq X$ requires some care.
\end{note}
\subsection{Simple Hodge modules and abelian varieties}
We begin by proving a structure theorem for \emph{simple} polarizable complex Hodge
modules on a compact complex torus $T$; this is evidently the most important case,
because every polarizable complex Hodge module is isomorphic to a direct sum of
simple ones. Fix a simple polarizable complex Hodge module $(M, J) \in \HMC{T}{w}$.
By \propositionref{prop:semi-simple}, the polarizable real Hodge module $M \in
\HM{X}{w}$ has strict support equal to an irreducible analytic subvariety; we assume
in addition that $\Supp M$ is not contained in any proper subtorus of $T$.
\begin{theorem} \label{thm:CHM}
There is an abelian variety $A$, a surjective morphism $q \colon T \to A$ with
connected fibers, a simple $(N, K) \in \HMC{A}{w - \dim q}$ with $\chi(A, N,
K) > 0$, and a unitary character $\rho \in \Char(T)$, such that
\begin{equation} \label{eq:CHM}
(M, J) \simeq q^{-1} (N, K) \otimes_{\mathbb{C}} \CC_{\rho}.
\end{equation}
In particular, $\Supp M = q^{-1}(\Supp N)$ is covered by translates of $\ker q$.
\end{theorem}
Let $X = \Supp M$. By \propositionref{prop:structure}, there is a subtorus $S
\subseteq T$ such that $S + X = X$ and such that $Y = X/S$ is projective. Since $Y$
is not contained in any proper subtorus, it follows that $A = T/S$ is an abelian
variety. Let $q \colon T \to A$ be the quotient mapping, which is proper and smooth of
relative dimension $\dim q = \dim S$. This will not be our final choice for
\theoremref{thm:CHM}, but it does have almost all the properties that we want (except
for the lower bound on the Euler characteristic).
\begin{proposition} \label{prop:CHM-weak}
There is a simple $(N, K) \in \HMC{A}{w - \dim q}$ with strict support $Y$ and a
unitary character $\rho \in \Char(T)$ for which \eqref{eq:CHM} holds.
\end{proposition}
By \theoremref{thm:structure-complex}, $(M, J)$ corresponds to a polarizable complex
variation of Hodge structure of weight $w - \dim X$ on a dense Zariski-open subset of
$X$. The crucial observation, due to Wang, is that we can choose this set to be of
the form $q^{-1}(U)$, where $U$ is a dense Zariski-open subset of the smooth locus of
$Y$.
\begin{lemma}
There is a dense Zariski-open subset $U \subseteq Y$, contained in the smooth locus
of $Y$, and a polarizable complex variation of Hodge structure $(\mathcal{H}, J)$ of weight
$w - \dim X$ on $q^{-1}(U)$, such that $(M, J)$ is the polarizable complex Hodge module
corresponding to $(\mathcal{H}, J)$ in \theoremref{thm:structure-complex}.
\end{lemma}
\begin{proof}
Let $Z \subseteq X$ be the union of the singular locus of $X$ and the singular locus
of $M$. Then $Z$ is an analytic subset of $X$, and according to
\theoremref{thm:Saito-structure}, the restriction of $M$ to $X \setminus Z$ is a
polarizable real variation of Hodge
structure of weight $w - \dim X$. By \propositionref{prop:structure}, no
irreducible component of $Z$ of dimension $\dim X - 1$ dominates $Y$; we can
therefore find a Zariski-open subset $U \subseteq Y$, contained in the smooth locus
of $Y$, such that the intersection $q^{-1}(U) \cap Z$ has codimension $\geq 2$ in
$q^{-1}(U)$. Now $\mathcal{H}$ extends uniquely to a polarizable real variation of Hodge
structure on the entire complex manifold $q^{-1}(U)$, see
\cite[Proposition~4.1]{Schmid}. The assertion about $J$ follows easily.
\end{proof}
For any $y \in U$, the restriction of $(\mathcal{H}, J)$ to the fiber $q^{-1}(y)$ is a
polarizable complex variation of Hodge structure on a translate of the compact complex
torus $\ker q$. By \lemmaref{lem:VHS-torus}, the restriction to $q^{-1}(y)$ of the
underlying local system
\[
\ker \Bigl( J_{\CC} - i \cdot \id \, \colon \varH_{\CC} \to \varH_{\CC} \Bigr)
\]
is the direct sum of local systems of the form $\CC_{\rho}$, for $\rho \in \Char(T)$
unitary; when $M$ admits an integral structure, $\rho$ has finite order in the group
$\Char(T)$.
\begin{proof}[Proof of \propositionref{prop:CHM-weak}]
Let $\rho \in \Char(T)$ be one of the unitary characters in question, and let
$\bar\rho \in \Char(T)$ denote its complex conjugate. The tensor product $(\mathcal{H}, J)
\otimes_{\mathbb{C}} \CC_{\bar\rho}$ is a polarizable complex variation of Hodge structure of
weight $w - \dim X$ on the open subset $q^{-1}(U)$. Since all fibers of $q \colon
q^{-1}(U) \to U$ are translates of the compact complex torus $\ker q$, classical
Hodge theory for compact K\"ahler manifolds \cite[Theorem~2.9]{Zucker} implies that
\begin{equation} \label{eq:ql-varH}
q_{\ast} \bigl( (\mathcal{H}, J) \otimes_{\mathbb{C}} \CC_{\bar\rho} \bigr)
\end{equation}
is a polarizable complex variation of Hodge structure of weight $w - \dim X$ on
$U$; in particular, it is again semi-simple. By our choice of $\rho$, the adjunction
morphism
\[
q^{-1} q_{\ast} \bigl( (\mathcal{H}, J) \otimes_{\mathbb{C}} \CC_{\bar\rho} \bigr) \to
(\mathcal{H}, J) \otimes_{\mathbb{C}} \CC_{\bar\rho}
\]
is nontrivial. Consequently, \eqref{eq:ql-varH} must have at least one simple summand
$(\mathcal{H}_U, K)$ in the category of polarizable complex variations of Hodge structure of
weight $w - \dim X$ for which the induced morphism $q^{-1} (\mathcal{H}_U, K) \to (\mathcal{H},
J) \otimes_{\mathbb{C}} \CC_{\bar\rho}$ is nontrivial. Both sides being simple, the
morphism is an isomorphism; consequently,
\begin{equation} \label{eq:varHY}
q^{-1}(\mathcal{H}_U, K) \otimes_{\mathbb{C}} \CC_{\rho} \simeq (\mathcal{H}, J).
\end{equation}
Now let $(N, K) \in \HMC{A}{w - \dim q}$ be the polarizable complex Hodge module on
$A$ corresponding to $(\mathcal{H}_U, K)$; by construction, $(N, K)$ is simple with strict
support $Y$. Arguing as in \cite[Lemma~20.2]{Schnell-holo}, one proves that the naive
pullback $q^{-1}(N, K) \in \HMC{T}{w}$ is simple with strict support $X$. Because of
\eqref{eq:varHY}, this means that $(M, J)$ is isomorphic to $q^{-1}(N, K)
\otimes_{\mathbb{C}} \CC_{\rho}$ in the category $\HMC{T}{w}$.
\end{proof}
We have thus proved \theoremref{thm:CHM}, except for the inequality $\chi(A, N, K) >
0$. Let $\mathcal{N}$ denote the regular holonomic $\mathscr{D}$-module underlying $N$; then
\[
\mathcal{N} = \mathcal{N}' \oplus \mathcal{N}''
= \ker(K - i \cdot \id) \oplus \ker(K + i \cdot \id),
\]
where $K \in \End(\mathcal{N})$ refers to the induced endomorphism. By
\propositionref{prop:CHM-Kaehler}, both $\mathcal{N}'$ and $\mathcal{N}''$ are simple
with strict support $Y$. Since $A$ is an abelian variety, one has for example by
\cite[\S5]{Schnell-holo} that
\[
\chi(A, N, K) = \sum_{i \in \mathbb{Z}} (-1)^i \dim H^i \bigl( A, \DR(\mathcal{N}') \bigr)
\geq 0.
\]
Now the point is that a simple holonomic $\mathscr{D}$-module with vanishing Euler
characteristic is always (up to a twist by a line bundle with flat connection) the
pullback from a lower-dimensional abelian variety \cite[\S20]{Schnell-holo}.
\begin{proof}[Proof of \theoremref{thm:CHM}]
Keeping the notation from \propositionref{prop:CHM-weak}, we have a surjective
morphism $q \colon T \to A$ with connected fibers, a simple polarizable complex Hodge
module $(N, K) \in \HMC{A}{w - \dim q}$ with strict support $Y = q(X)$, and a unitary
character $\rho \in \Char(T)$ such that
\[
(M, J) \simeq q^{-1}(N, K) \otimes_{\mathbb{C}} \CC_{\rho}.
\]
If $(N, K)$ has positive Euler characteristic, we are done, so let us assume from now
on that $\chi(A, N, K) = 0$. This means that $\mathcal{N}'$ is a simple regular holonomic
$\mathscr{D}$-module with strict support $Y$ and Euler characteristic zero.
By \cite[Corollary~5.2]{Schnell-holo}, there is a surjective morphism $f \colon A \to
B$ with connected fibers from $A$ to a lower-dimensional abelian variety $B$, such
that $\mathcal{N}'$ is (up to a twist by a line bundle with flat connection) the pullback of
a simple regular holonomic $\mathscr{D}$-module with positive Euler characteristic. Setting
\[
\mathcal{M} = \mathcal{M}' \oplus \mathcal{M}''
= \ker(J - i \cdot \id) \oplus \ker(J + i \cdot \id),
\]
it follows that $\mathcal{M}'$ is (again up to a twist by a line bundle with flat
connection) the pullback by $f \circ q$ of a simple regular holonomic $\mathscr{D}$-module
on $B$. Consequently, there is a dense Zariski-open subset $U \subseteq f(Y)$ such that the
restriction of $\mathcal{M}'$ to $(f \circ q)^{-1}(U)$ is coherent as an $\shf{O}$-module. By
\lemmaref{lem:CHM-smooth}, the restriction of $(M, J)$ to this open set is therefore
a polarizable complex variation of Hodge structure of weight $w - \dim X$. After
replacing our original morphism $q \colon T \to A$ by the composition $f \circ q
\colon T \to B$, we can argue as in the proof of \propositionref{prop:CHM-weak} to
show that \eqref{eq:CHM} is still satisfied (for a different choice of $\rho \in
\Char(T)$, perhaps).
With some additional work, one can prove that now $\chi(A, N, K) > 0$. Alternatively,
the same result can be obtained by the following more indirect method: as long as
$\chi(A, N, K) = 0$, we can repeat the argument above; since the dimension of $A$
goes down each time, we must eventually get to the point where $\chi(A, N, K) > 0$. This
completes the proof of \theoremref{thm:CHM}.
\end{proof}
\subsection{Proof of the main result}
As in \theoremref{thm:CHM-main}, let $(M, J) \in \HMC{T}{w}$ be a polarizable complex
Hodge module on a compact complex torus $T$. Using the decomposition by strict
support, we can assume without loss of generality that $(M, J)$ has strict support
equal to an irreducible analytic subvariety $X \subseteq T$. After translation, we
may assume moreover that $0 \in X$. Let $T' \subseteq T$ be the smallest subtorus of
$T$ containing $X$; by Kashiwara's equivalence, we have $(M, J) = i_{\ast} (M', J')$ for
some $(M', J') \in \HMC{T'}{w}$, where $i \colon T' \hookrightarrow T$ is the inclusion. Now
\theoremref{thm:CHM} gives us a morphism $q' \colon T' \to A'$ such that $(M', J')$
is isomorphic to the direct sum of pullbacks of polarizable complex Hodge modules
twisted by unitary local systems. Since $i^{-1} \colon \Char(T) \to \Char(T')$ is
surjective, the same is then true for $(M, J)$ with respect to the quotient mapping
$q \colon T \to T/\ker q'$. This proves \theoremref{thm:CHM-main}.
\begin{proof}[Proof of \corollaryref{cor:CHM-real}]
By considering the complexification
\[
(M \oplus M, J_M) \in \HMC{T}{w},
\]
we reduce the problem to the situation considered in \theoremref{thm:CHM-main}.
It remains to show that all the characters in \eqref{eq:decomposition} have finite
order in $\Char(T)$ if $M$ admits an integral structure. By \lemmaref{lem:integral-summand}, every summand in the
decomposition of $M$ by strict support still admits an integral structure, and so we
may assume without loss of generality that $M$ has strict support equal to $X
\subseteq T$ and that $0 \in X$. As before, we have $(M, J) = i_{\ast} (M', J')$, where $i
\colon T' \hookrightarrow T$ is the smallest subtorus of $T$ containing $X$; it is easy to see
that $M'$ again admits an integral structure. Now we apply the same argument as in
the proof of \theoremref{thm:CHM-main} to the finitely many simple factors of $(M,
J)$, noting that the characters $\rho \in \Char(T)$ that come up always have finite
order by \lemmaref{lem:VHS-torus} below.
\end{proof}
\begin{note}
As in the proof of \lemmaref{lem:twist}, it follows that $M \oplus M$ is isomorphic
to the direct sum of the polarizable real Hodge modules
\begin{equation} \label{eq:summand-j}
\ker \Bigl( q_j^{-1} J_j \otimes J_{\rho_j} + \id \Bigr)
\subseteq q_j^{-1} N_j \otimes_{\mathbb{R}} \mathcal{H}_{\rho_j}.
\end{equation}
Furthermore, one can show that for each $j = 1, \dotsc, n$, exactly one of two things
happens. (1) Either the object in \eqref{eq:summand-j} is simple, and therefore
occurs among the simple factors of $M$; in this case, the underlying regular
holonomic $\mathscr{D}$-module $\mathcal{M}$ will contain the two simple factors
\[
\Bigl( q^{\ast}_j \mathcal{N}_j' \otimes_{\shO_T} (L_j, \nabla_j) \Bigr)
\oplus \Bigl( q^{\ast}_j \mathcal{N}_j'' \otimes_{\shO_T} (L_j, \nabla_j)^{-1} \Bigr).
\]
(2) Or the object in \eqref{eq:summand-j} splits into two copies of a simple
polarizable real Hodge module, which also has to occur among the simple factors of $M$.
In this case, one can actually arrange that $(N_j, J_j)$ is real and that the
character $\rho_j$ takes values in $\{-1, +1\}$. The simple object in question is the
twist of $(N_j, J_j)$ by the polarizable real variation of Hodge structure of rank
one determined by $\rho_j$; moreover, $\mathcal{M}$ will contain $q^{\ast}_j
\mathcal{N}_j' \otimes_{\shO_T} (L_j, \nabla_j) \simeq q^{\ast}_j \mathcal{N}_j'' \otimes_{\shO_T} (L_j,
\nabla_j)^{-1}$ as a simple factor.
\end{note}
\subsection{A lemma about variations of Hodge structure}
The fundamental group of a compact complex torus is abelian, and so the local system
underlying a polarizable complex variation of Hodge structure is a direct sum of unitary
local systems of rank one; this is the content of the following elementary lemma \cite[Lemma~1.8]{Schnell-laz}.
\begin{lemma} \label{lem:VHS-torus}
Let $(\mathcal{H}, J)$ be a polarizable complex variation of Hodge structure on a compact complex
torus $T$. Then the local system $\varH_{\CC} = \varH_{\RR} \otimes_{\mathbb{R}} \mathbb{C}$ is isomorphic to
a direct sum of unitary local systems of rank one. If $\mathcal{H}$ admits an integral
structure, then each of these local systems of rank one has finite order.
\end{lemma}
\begin{proof}
According to \cite[\S1.12]{Deligne}, the underlying local system of a polarizable
complex variation of Hodge structure on a compact K\"ahler manifold is semi-simple; in
the case of a compact complex torus, it is therefore a direct sum of rank-one local
systems. The existence of a polarization implies that the individual local systems
are unitary \cite[Proposition~1.13]{Deligne}. Now suppose that $\mathcal{H}$ admits an
integral structure, and let $\mu \colon \pi_1(T, 0) \to \GL_n(\mathbb{Z})$ be the monodromy
representation. We already know that the complexification of $\mu$ is a
direct sum of unitary characters. Since $\mu$ is defined over $\mathbb{Z}$, the values of
each character are algebraic integers of absolute value one; their Galois conjugates occur
among the values of the other characters, and so also have absolute value one. By
Kronecker's theorem, they must therefore be roots of unity.
\end{proof}
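To illustrate the last step with a small example: the matrix
\[
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \in \GL_2(\mathbb{Z})
\]
has eigenvalues $\pm i$, which are indeed roots of unity. By contrast, a unitary number
such as $(3+4i)/5$ has absolute value one but is not an algebraic integer, and so cannot
occur as the value of a character coming from a representation defined over $\mathbb{Z}$.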
\subsection{Integral structure and points of finite order}
One can combine the decomposition in \corollaryref{cor:CHM-real} with known results
about Hodge modules on abelian varieties \cite{Schnell-laz} to prove the following
generalization of Wang's theorem.
\begin{corollary} \label{cor:finite-order}
If $M \in \HM{T}{w}$ admits an integral structure, then the sets
\[
S_m^i(T, M)
= \menge{\rho \in \Char(T)}{\dim H^i(T, M_{\RR} \otimes_{\mathbb{R}} \CC_{\rho}) \geq m}
\]
are finite unions of translates of linear subvarieties by points of finite order.
\end{corollary}
\begin{proof}
The result in question is known for abelian varieties: if $M \in \HM{A}{w}$
is a polarizable real Hodge module on an abelian variety, and if $M$ admits an
integral structure, then the sets $S_m^i(A, M)$
are finite unions of ``arithmetic subvarieties'' (= translates of linear subvarieties
by points of finite order). This is proved in \cite[Theorem~1.4]{Schnell-laz} for
polarizable rational Hodge modules, but the proof carries over unchanged to the case
of real coefficients. The same argument shows more generally that if the underlying
perverse sheaf $M_{\CC}$ of a polarizable real Hodge module $M \in \HM{A}{w}$ is
isomorphic to a direct factor in the complexification of some $E \in \mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_A)$,
then each $S_m^i(A, M)$ is a finite union of arithmetic subvarieties.
Now let us see how to extend this result to compact complex tori. Passing to the
underlying complex perverse sheaves in \corollaryref{cor:CHM-real}, we get
\[
M_{\CC} \simeq \bigoplus_{j=1}^n \bigl( q_j^{-1} N_{j, \mathbb{C}} \otimes_{\mathbb{C}}
\mathbb{C}_{\rho_j} \bigr);
\]
recall that $\Supp N_j$ is a projective subvariety of the complex torus $T_j$, and
that $\rho_j \in \Char(T)$ has finite order. In light of this decomposition and the
comments above, it is therefore enough to prove that each $N_{j, \mathbb{C}}$ is isomorphic
to a direct factor in the complexification of some object of $\mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_{T_j})$.
Let $E \in \mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_T)$ be some choice of integral structure on the real Hodge module
$M$; obviously $M_{\CC} \simeq E \otimes_{\mathbb{Z}} \mathbb{C}$. Let $r \geq 1$ be the order of
the point $\rho_j \in \Char(T)$, and denote by $[r] \colon T \to T$ the finite
morphism given by multiplication by $r$. We define
\[
E' = \mathbf{R} [r]_{\ast} \bigl( [r]^{-1} E \bigr) \in \mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_T)
\]
and observe that the complexification of $E'$ is isomorphic to the direct sum of $E
\otimes_{\mathbb{Z}} \CC_{\rho}$, where $\rho \in \Char(T)$ runs over the finite set of
characters whose order divides $r$. This set includes $\rho_j^{-1}$, and so $q_j^{-1}
N_{j, \mathbb{C}}$ is isomorphic to a direct factor of $E' \otimes_{\mathbb{Z}} \mathbb{C}$. Because $q_j
\colon T \to T_j$ has connected fibers, this implies that
\[
N_{j, \mathbb{C}} \simeq \mathcal{H}^{-\dim q_j} q_{j \ast} \bigl( q_j^{-1} N_{j, \mathbb{C}} \bigr)
\]
is isomorphic to a direct factor of
\[
\mathcal{H}^{-\dim q_j} q_{j \ast} \bigl( E' \otimes_{\mathbb{Z}} \mathbb{C} \bigr).
\]
As explained in \cite[\S1.2.2]{Schnell-laz}, this is again the complexification of a
constructible complex in $\mathrm{D}_{\mathit{c}}^{\mathit{b}}(\mathbb{Z}_{T_j})$, and so the proof is complete.
\end{proof}
\section{Generic vanishing theory}
Let $X$ be a compact K\"ahler manifold, and let $f \colon X \to T$ be a holomorphic
mapping to a compact complex torus. The main purpose of this chapter is to show that
the higher direct image sheaves $R^j f_{\ast} \omega_X$ have the same properties as in the
projective case (such as being GV-sheaves). As explained in the introduction, we do
not know how to obtain this using classical Hodge theory; this forces us to prove a
more general result for arbitrary polarizable complex Hodge modules.
\subsection{GV-sheaves and M-regular sheaves}
\label{par:GV-sheaves}
We begin by reviewing a few basic definitions. Let $T$ be a compact complex torus,
$\widehat{T} = \Pic^0(T)$ its dual, and $P$ the normalized Poincar\'e
bundle on the product $T \times \widehat T$. It induces an integral transform
\[
\mathbf{R} \Phi_P \colon \mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_T) \to \mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shf{O}_{\widehat T}), \quad
\mathbf{R} \Phi_P (\shf{F}) = \mathbf{R} {p_2}_*(p_1^* \shf{F} \otimes P),
\]
where $\mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_T)$ is the derived category of cohomologically bounded and coherent
complexes of $\shO_T$-modules. Likewise, we have $\mathbf{R} \Psi_P \colon
\mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shf{O}_{\widehat T}) \to \mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shf{O}_T)$ going in the opposite direction.
An argument analogous to Mukai's for abelian varieties shows that the Fourier-Mukai
equivalence holds in this case as well \cite[Theorem~2.1]{BBP}.
\begin{theorem}\label{mukai}
With the notations above, $\mathbf{R} \Phi_P$ and $\mathbf{R} \Psi_P$ are equivalences of
derived categories. More precisely, one has
\[
\mathbf{R} \Psi_P \circ \mathbf{R} \Phi_P \simeq (-1)_T^* [-\dim T] \quad \text{and} \quad
\mathbf{R} \Phi_P \circ \mathbf{R} \Psi_P \simeq (-1)_{\widehat T}^* [-\dim T].
\]
\end{theorem}
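To illustrate how the equivalence is used, consider the skyscraper sheaf $k(\hat{0})$ at
the origin of $\widehat T$: since the restriction of $P$ to $T \times \{\hat{0}\}$ is
trivial, one has $\mathbf{R} \Psi_P \bigl( k(\hat{0}) \bigr) \simeq \shO_T$; applying
$\mathbf{R} \Phi_P$ and the isomorphism $\mathbf{R} \Phi_P \circ \mathbf{R} \Psi_P \simeq
(-1)_{\widehat T}^* [-\dim T]$, and noting that the origin is fixed by $(-1)_{\widehat T}$,
we obtain
\[
\mathbf{R} \Phi_P (\shO_T) \simeq k(\hat{0}) \decal{-\dim T},
\]
exactly as in Mukai's original computation on abelian varieties.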
Given a coherent $\shO_T$-module $\shf{F}$ and an integer $m\ge 1$, we define
\[
S^i_m(T, \shf{F}) = \menge{L \in \Pic^0(T)}{\dim H^i(T, \shf{F} \otimes_{\shO_T} L) \ge m}.
\]
It is customary to denote
\[
S^i (T, \shf{F}) = S^i_1(T, \shf{F}) = \menge{L \in \Pic^0(T)}{H^i(T, \shf{F} \otimes_{\shO_T} L) \neq 0}.
\]
Recall the following definitions from \cite{PP3} and \cite{PP4} respectively.
\begin{definition}
A coherent $\shO_T$-module $\shf{F}$ is called a \define{GV-sheaf} if the inequality
\[
\codim_{\Pic^0(T)} S^i(T, \shf{F}) \geq i
\]
is satisfied for every integer $i \geq 0$. It is called \define{M-regular} if the
inequality
\[
\codim_{\Pic^0(T)} S^i(T, \shf{F}) \geq i+1
\]
is satisfied for every integer $i \geq 1$.
\end{definition}
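For example, the structure sheaf $\shO_T$ is a GV-sheaf but not M-regular: a nontrivial
$L \in \Pic^0(T)$ corresponds to a nontrivial unitary character, so that $H^i(T, L) = 0$
for every $i$, and therefore
\[
S^i(T, \shO_T) = \{\shO_T\} \quad \text{for } 0 \leq i \leq \dim T.
\]
Each of these sets has codimension $\dim T \geq i$ in $\Pic^0(T)$, which proves the GV
property; on the other hand, the inequality $\codim_{\Pic^0(T)} S^{\dim T}(T, \shO_T) \geq
\dim T + 1$ fails, and so $\shO_T$ is not M-regular.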
A number of local properties of integral transforms for complex manifolds, based only
on commutative algebra results, were proved in \cite{PP1,Popa}. For
instance, the following is a special case of \cite[Theorem~2.2]{PP1}.
\begin{theorem}\label{GV_WIT}
Let $\shf{F}$ be a coherent sheaf on a compact complex torus $T$.
Then the following statements are equivalent:
\begin{renumerate}
\item $\shf{F}$ is a GV-sheaf.
\item $R^i \Phi_P (\mathbf{R} \Delta \shf{F}) = 0$ for $i \neq \dim T$, where $\mathbf{R} \Delta \shf{F} :=
\mathbf{R} \mathcal{H}\hspace{-1pt}\mathit{om} (\shf{F}, \shf{O}_T)$.
\end{renumerate}
\end{theorem}
Note that this statement was inspired by work of Hacon \cite{Hacon} in the projective
setting. In the course of the proof of \theoremref{GV_WIT}, and also for some of
the results below, the following consequence of Grothendieck duality for compact
complex manifolds is needed; see the proof of \cite[Theorem~2.2]{PP1}, and especially
the references there.
\begin{equation}\label{duality}
\mathbf{R} \Phi_P (\shf{F}) \simeq
\mathbf{R} \Delta \bigl( \mathbf{R} \Phi_{P^{-1}} (\mathbf{R} \Delta \shf{F}) [\dim T] \bigr).
\end{equation}
In particular, if $\shf{F}$ is a GV-sheaf and we set $\hat{\shf{F}} := R^{\dim T} \Phi_{P^{-1}} (\mathbf{R} \Delta \shf{F})$,
then \theoremref{GV_WIT} and \eqref{duality} imply that
\begin{equation}\label{basic}
\mathbf{R} \Phi_P (\shf{F}) \simeq \mathbf{R} \mathcal{H}\hspace{-1pt}\mathit{om}(\hat{\shf{F}}, \shf{O}_{\widehat T}).
\end{equation}
As in \cite[Proposition~2.8]{PP2}, $\shf{F}$ is M-regular if and only if
$\hat{\shf{F}}$ is torsion-free.
The fact that \theoremref{mukai}, \theoremref{GV_WIT} and ($\ref{basic}$) hold for
arbitrary compact complex tori allows us to deduce important properties of
GV-sheaves in this setting. Besides these statements, the proofs only rely on local
commutative algebra and base change, and so are completely analogous to those for
abelian varieties; we will thus only indicate references for that case.
\begin{proposition} \label{sliding}
Let $\shf{F}$ be a GV-sheaf on $T$.
\begin{aenumerate}
\item One has $S^{\dim T} (T, \shf{F}) \subseteq \cdots \subseteq S^1 (T, \shf{F})
\subseteq S^0 (T, \shf{F}) \subseteq \widehat T$.
\item If $S^0(T, \shf{F})$ is empty, then $\shf{F} = 0$.
\item If an irreducible component $Z \subseteq S^0 (T, \shf{F})$ has codimension $k$ in
$\Pic^0(T)$, then $Z \subseteq S^k (T, \shf{F})$, and hence $\dim \Supp \shf{F} \geq k$.
\end{aenumerate}
\end{proposition}
\begin{proof}
For (a), see \cite[Proposition~3.14]{PP3}; for (b), see \cite[Lemma~1.12]{Pareschi};
for (c), see \cite[Lemma~1.8]{Pareschi}.
\end{proof}
\subsection{Higher direct images of dualizing sheaves}
Saito \cite{Saito-Kae} and Takegoshi \cite{Takegoshi} have extended to K\"ahler
manifolds many of the fundamental theorems on higher direct images of canonical
bundles proved by Koll\'ar for smooth projective varieties. The following theorem
summarizes some of the results in \cite[p.390--391]{Takegoshi} in the special case
that is needed for our purposes.
\begin{theorem}[Takegoshi] \label{takegoshi}
Let $f \colon X \to Y$ be a proper holomorphic mapping from a compact K\"ahler
manifold to a reduced and irreducible analytic space, and let $L \in \Pic^0(X)$ be a
holomorphic line bundle with trivial first Chern class.
\begin{aenumerate}
\item The Leray spectral sequence
\[
E^{p,q}_2 = H^p \bigl( Y, R^q f_{\ast}(\omega_X \otimes L) \bigr) \Longrightarrow
H^{p+q}(X, \omega_X \otimes L)
\]
degenerates at $E_2$.
\item If $f$ is surjective, then $R^q f_{\ast}(\omega_X \otimes L)$ is torsion free for every
$q\ge 0$; in particular, it vanishes for $q > \dim X - \dim Y$.
\end{aenumerate}
\end{theorem}
Saito \cite{Saito-Kae} obtained the same results in much greater generality, using the
theory of Hodge modules. In fact, his method also gives the splitting of the complex
$\mathbf{R} f_{\ast} \omega_X$ in the derived category, thus extending the main result of
\cite{Kollar} to all compact K\"ahler manifolds.
\begin{theorem}[Saito]
Keeping the assumptions of the previous theorem, one has
\[
\mathbf{R} f_{\ast} \omega_X \simeq \bigoplus_j \bigl( R^j f_{\ast} \omega_X \bigr) \decal{-j}
\]
in the derived category $\mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_Y)$.
\end{theorem}
\begin{proof}
Given \cite{Saito-Kae}, the proof in \cite{Saito-K} goes through under the assumption
that $X$ is a compact K\"ahler manifold.
\end{proof}
\subsection{Euler characteristic and M-regularity}
In this section, we relate the Euler characteristic of a simple polarizable complex
Hodge module on a compact complex torus $T$ to the M-regularity of the associated
graded object.
\begin{lemma} \label{lem:M-regular}
Let $(M, J) \in \HMC{T}{w}$ be a simple polarizable complex Hodge module on a compact
complex torus. If $\Supp M$ is projective and $\chi(T, M, J) > 0$, then the coherent
$\shO_T$-module $\gr_k^F \mathcal{M}'$ is M-regular for every $k \in \mathbb{Z}$.
\end{lemma}
\begin{proof}
Since $\Supp M$ is projective, it is contained in a translate of an abelian subvariety $A
\subseteq T$; because \lemmaref{lem:Kashiwara} holds for polarizable complex Hodge
modules, we may therefore assume without loss of generality that $T = A$ is an
abelian variety.
As usual, let $\mathcal{M} = \mathcal{M}' \oplus \mathcal{M}'' = \ker(J - i \cdot \id) \oplus \ker(J +
i \cdot \id)$ be the decomposition into eigenspaces. The summand $\mathcal{M}'$ is a simple
holonomic $\mathscr{D}$-module with positive Euler characteristic on an abelian variety,
and so \cite[Theorem~2.2 and Corollary~20.5]{Schnell-holo} show that
\begin{equation} \label{eq:CSL}
\menge{\rho \in \Char(A)}%
{H^i \bigl( A, \DR(\mathcal{M}') \otimes_{\mathbb{C}} \CC_{\rho} \bigr) \neq 0}
\end{equation}
is equal to $\Char(A)$ when $i = 0$, and is equal to a finite union of translates of
linear subvarieties of codimension $\geq 2i+2$ when $i \geq 1$.
We have a one-to-one correspondence between $\Pic^0(A)$ and the subgroup of unitary
characters in $\Char(A)$; it takes a unitary character $\rho \in \Char(A)$ to the
holomorphic line bundle $L_{\rho} = \CC_{\rho} \otimes_{\mathbb{C}} \mathscr{O}_A$. If $\rho \in \Char(A)$ is
unitary, the twist $(M, J) \otimes_{\mathbb{C}} \CC_{\rho}$ is still a polarizable complex Hodge
module by \lemmaref{lem:twist}, and so the complex computing its hypercohomology is
strict. It follows that
\[
H^i \bigl( A, \gr_k^F \DR(\mathcal{M}') \otimes_{\mathscr{O}_A} L_{\rho} \bigr)
\quad \text{is a subquotient of} \quad
H^i \bigl( A, \DR(\mathcal{M}') \otimes_{\mathbb{C}} \CC_{\rho} \bigr).
\]
If we identify $\Pic^0(A)$ with the subgroup of unitary characters, this means that
\[
\menge{L \in \Pic^0(A)}%
{H^i \bigl( A, \gr_k^F \DR(\mathcal{M}') \otimes_{\mathscr{O}_A} L \bigr) \neq 0}
\]
is contained in the intersection of \eqref{eq:CSL} and the subgroup of unitary
characters. When $i \geq 1$, this intersection is a finite union of translates of
subtori of codimension $\geq i+1$; it follows that
\[
\codim_{\Pic^0(A)} \menge{L \in \Pic^0(A)}%
{H^i \bigl( A, \gr_k^F \DR(\mathcal{M}') \otimes_{\mathscr{O}_A} L \bigr) \neq 0} \geq i+1.
\]
Since the cotangent bundle of $A$ is trivial, a simple induction on $k$ as in the
proof of \cite[Lemma~1]{PS} gives
\[
\codim_{\Pic^0(A)} \menge{L \in \Pic^0(A)}
{H^i \bigl( A, \gr_k^F \mathcal{M}' \otimes_{\mathscr{O}_A} L \bigr) \neq 0} \geq i+1,
\]
and so each $\gr_k^F \mathcal{M}'$ is indeed M-regular.
\end{proof}
\begin{note}
In fact, the result still holds without the assumption that $\Supp M$ is projective;
this is an easy consequence of the decomposition in \eqref{eq:decomposition}.
\end{note}
\subsection{Chen-Jiang decomposition and generic vanishing}
\label{par:ChenJiang}
Using the decomposition in \theoremref{thm:CHM-main} and the result of the previous
section, we can now prove the most general version of the generic vanishing theorem,
namely \theoremref{thm:Chen-Jiang} in the introduction.
\begin{proof}[Proof of \theoremref{thm:Chen-Jiang}]
We apply \theoremref{thm:CHM-main} to the complexification $(M \oplus M, J_M) \in
\HMC{T}{w}$. Passing to the associated graded in \eqref{eq:decomposition}, we obtain
a decomposition of the desired type with $\shf{F}_j = \gr_k^F \mathcal{N}_j'$ and $L_j =
\mathbb{C}_{\rho_j} \otimes_{\mathbb{C}} \shO_T$, where
\[
\mathcal{N}_j = \mathcal{N}_j' \oplus \mathcal{N}_j'' = \ker(J_j - i \cdot \id) \oplus
\ker(J_j + i \cdot \id)
\]
is as usual the decomposition into eigenspaces of $J_j \in \End(\mathcal{N}_j)$. Since
$\Supp N_j$ is projective and $\chi(T_j, N_j, J_j) > 0$, we conclude from
\lemmaref{lem:M-regular} that each coherent $\shf{O}_{T_j}$-module $\shf{F}_j$ is M-regular.
\end{proof}
\begin{corollary}\label{cor:MHM-GV}
If $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR}) \in \HM{T}{w}$, then for every $k \in \mathbb{Z}$,
the coherent $\shO_T$-module $\gr_k^F \mathcal{M}$ is a GV-sheaf.
\end{corollary}
\begin{proof}
This follows immediately from \theoremref{thm:Chen-Jiang} and the fact that if $p
\colon T \to T_0$ is a surjective homomorphism of complex tori and $\shf{G}$ is a GV-sheaf on
$T_0$, then $\shf{F} = p^{\ast} \shf{G}$ is a GV-sheaf on $T$. For this last statement and more
refined facts (for instance when $\shf{G}$ is $M$-regular), see e.g.
\cite[\S2]{ChenJiang}, especially Proposition 2.6. The arguments in \cite{ChenJiang}
are for abelian varieties, but given the remarks in \parref{par:GV-sheaves}, they
work equally well on compact complex tori.
\end{proof}
By specializing to the direct image of the canonical Hodge module $\mathbb{R}_X \decal{\dim
X}$ along a morphism $f \colon X \to T$, we are finally able to conclude that each
$R^j f_{\ast} \omega_X$ is a GV-sheaf. In fact, we have the more refined \theoremref{thm:direct_image};
it was first proved for smooth projective varieties of maximal Albanese dimension by Chen
and Jiang \cite[Theorem~1.2]{ChenJiang}, a result that was a source of inspiration for us.
\begin{proof}[Proof of \theoremref{thm:direct_image}]
Denote by $\mathbb{R}_X \decal{\dim X} \in \HM{X}{\dim X}$ the polarizable real Hodge module
corresponding to the constant real variation of Hodge structure of rank one and
weight zero on $X$. According to \cite[Theorem~3.1]{Saito-Kae}, each $\mathcal{H}^j
f_{\ast} \mathbb{R}_X \decal{\dim X}$ is a polarizable real Hodge module of weight $\dim X + j$
on $T$; it also admits an integral structure \cite[\S1.2.2]{Schnell-laz}. In the
decomposition by strict support, let $M$ be the summand with strict support $f(X)$;
note that $M$
still admits an integral structure by \lemmaref{lem:integral-summand}. Now $R^j f_{\ast}
\omega_X$ is the first nontrivial piece of the Hodge filtration on the underlying regular
holonomic $\mathscr{D}$-module \cite{Saito-K}, and so the result follows directly from
\theoremref{thm:Chen-Jiang} and \corollaryref{cor:MHM-GV}. For the ampleness
part in the statement, see \corollaryref{cor:MHM_positivity}.
\end{proof}
\begin{note}
Except for the assertion about finite order, \theoremref{thm:direct_image} still holds
for arbitrary coherent $\shO_T$-modules of the form
\[
R^j f_{\ast} ( \omega_X\otimes L)
\]
with $L \in \Pic^0(X)$. The point is that every such $L$ is the holomorphic line
bundle associated with a unitary character $\rho \in \Char(X)$; we can therefore
apply the same argument as above to the polarizable complex Hodge module $\CC_{\rho}
\decal{\dim X}$.
\end{note}
If the given morphism is generically finite over its image, we can say more.
\begin{corollary} \label{cor:gen-finite}
If $f \colon X \to T$ is generically finite over its image, then $S^0(T,
f_{\ast} \omega_X)$ is preserved by the involution $L \mapsto L^{-1}$ of $\Pic^0(T)$.
\end{corollary}
\begin{proof}
As before, we define $M = \mathcal{H}^0 f_{\ast} \mathbb{R}_X \decal{\dim X} \in \HM{T}{\dim X}$.
Recall from \corollaryref{cor:CHM-real} that we have a decomposition
\[
(M \oplus M, J_M) \simeq \bigoplus_{j=1}^n \bigl( q_j^{-1}(N_j, J_j)
\otimes_{\mathbb{C}} \mathbb{C}_{\rho_j} \bigr).
\]
Since $f$ is generically finite over its image, there is a dense Zariski-open subset
of $f(X)$ where $M$ is a variation of Hodge structure of type $(0,0)$; the
above decomposition shows that the same is true for $N_j$ on $(q_j \circ f)(X)$. If
we pass to the underlying regular holonomic $\mathscr{D}$-modules and remember
\lemmaref{lem:twist}, we see that
\[
\mathcal{M} \oplus \mathcal{M} \simeq \bigoplus_{j=1}^n
\Bigl( q^{\ast}_j \mathcal{N}_j' \otimes_{\shO_T} (L_j, \nabla_j) \Bigr) \oplus
\bigoplus_{j=1}^n \Bigl( q^{\ast}_j \mathcal{N}_j'' \otimes_{\shO_T} (L_j, \nabla_j)^{-1} \Bigr),
\]
where $(L_j, \nabla_j)$ is the flat bundle corresponding to the character $\rho_j$.
By looking at the first nontrivial step in the Hodge filtration on $\mathcal{M}$, we then get
\[
f_{\ast} \omega_X \oplus f_{\ast} \omega_X \simeq
\bigoplus_{j=1}^n \Bigl( q^{\ast}_j \shf{F}_j' \otimes_{\shO_T} L_j \Bigr)
\oplus \bigoplus_{j=1}^n \Bigl( q^{\ast}_j \shf{F}_j'' \otimes_{\shO_T} L_j^{-1} \Bigr),
\]
where $\shf{F}_j' = F_{p(M)} \mathcal{N}_j'$ and $\shf{F}_j'' = F_{p(M)} \mathcal{N}_j''$, and $p(M)$
is the smallest integer with the property that $F_p \mathcal{M} \neq 0$. Both sheaves are
torsion-free on $(q_j \circ f)(X)$, and can therefore be nonzero only when $\Supp N_j
= (q_j \circ f)(X)$; after re-indexing, we may assume that this holds exactly in the
range $1 \leq j \leq m$.
Now we reach the crucial point of the argument: the fact that $N_j$ is generically a
polarizable real variation of Hodge structure of type $(0,0)$ implies that $\shf{F}_j'$
and $\shf{F}_j''$ have the same rank at the generic point of $(q_j \circ f)(X)$. Indeed,
on a dense Zariski-open subset of $(q_j \circ f)(X)$, we have $\shf{F}_j' = \mathcal{N}_j'$
and $\shf{F}_j'' = \mathcal{N}_j''$, and complex conjugation with respect to the real
structure on $N_j$ interchanges the two factors.
Since $\shf{F}_j'$ and $\shf{F}_j''$ are M-regular by \lemmaref{lem:M-regular}, we have (for
$1 \leq j \leq m$)
\[
S^0(T, q^{\ast}_j \shf{F}_j' \otimes_{\shO_T} L_j) =
L_j^{-1} \otimes S^0(T_j, \shf{F}_j')
= L_j^{-1} \otimes \Pic^0(T_j),
\]
and similarly for $q^{\ast}_j \shf{F}_j'' \otimes_{\shO_T} L_j^{-1}$; to simplify the notation,
we identify $\Pic^0(T_j)$ with its image in $\Pic^0(T)$. The decomposition from above
now gives
\[
S^0(T, f_{\ast} \omega_X) = \bigcup_{j=1}^m \Bigl( L_j^{-1} \otimes \Pic^0(T_j) \Bigr)
\cup \bigcup_{j=1}^m \Bigl( L_j \otimes \Pic^0(T_j) \Bigr),
\]
and the right-hand side is clearly preserved by the involution $L \mapsto L^{-1}$.
\end{proof}
\subsection{Points of finite order on cohomology support loci}
Let $f \colon X \to T$ be a holomorphic mapping from a compact K\"ahler manifold to a
compact complex torus. Our goal in this section is to prove that the cohomology
support loci of the coherent $\shO_T$-modules $R^j f_{\ast} \omega_X$ are finite unions of
translates of subtori by points of finite order. We consider the refined cohomology
support loci
\[
S_m^i(T, R^j f_{\ast} \omega_X) = \menge{L \in \Pic^0(T)}%
{\dim H^i(T, R^j f_{\ast} \omega_X \otimes L) \geq m} \subseteq \Pic^0(T).
\]
The following result is well-known in the projective case.
\begin{corollary} \label{cor:Wang}
Every irreducible component of $S_m^i(T, R^j f_{\ast} \omega_X)$ is a translate of a subtorus of
$\Pic^0(T)$ by a point of finite order.
\end{corollary}
\begin{proof}
As in the proof of \theoremref{thm:direct_image} (in \parref{par:ChenJiang}), we let
$M \in \HM{T}{\dim X + j}$ be the summand with strict support $f(X)$ in the decomposition
by strict support of $\mathcal{H}^j f_{\ast} \mathbb{R}_X \decal{\dim X}$; then $M$ admits an integral
structure, and
\[
R^j f_{\ast} \omega_X \simeq F_{p(M)} \mathcal{M},
\]
where $p(M)$ again means the smallest integer such that $F_p \mathcal{M} \neq 0$.
Since $M$ still admits an integral structure by \lemmaref{lem:integral-summand}, the
result in \corollaryref{cor:finite-order} shows that the sets
\[
S_m^i(T, M) = \menge{\rho \in \Char(T)}%
{\dim H^i(T, M_{\RR} \otimes_{\mathbb{R}} \CC_{\rho}) \geq m}
\]
are finite unions of translates of linear subvarieties by points of finite order. As
in the proof of \lemmaref{lem:M-regular}, the strictness of the complex computing the
hypercohomology of $(M \oplus M, J_M) \otimes_{\mathbb{C}} \CC_{\rho}$ implies that
\[
\dim H^i(T, M_{\RR} \otimes_{\mathbb{R}} \CC_{\rho}) = \sum_{p \in \mathbb{Z}}
\dim H^i \bigl( T, \gr_p^F \DR(\mathcal{M}) \otimes_{\shO_T} L_{\rho} \bigr)
\]
for every unitary character $\rho \in \Char(T)$; here $L_{\rho} = \CC_{\rho}
\otimes_{\mathbb{C}} \shO_T$. Note that $\gr_p^F \DR(\mathcal{M})$ is acyclic for $p \gg 0$, and so
the sum on the right-hand side is actually finite. Intersecting $S_m^i(T, M)$ with
the subgroup of unitary characters, we see that each~set
\[
\Menge{L \in \Pic^0(T)}%
{\sum_{p \in \mathbb{Z}} \dim H^i \bigl( T, \gr_p^F \DR(\mathcal{M})
\otimes_{\shO_T} L \bigr) \geq m}
\]
is a finite union of translates of subtori by points of finite order. By a standard
argument \cite[p.~312]{Arapura}, it follows that the same is true for each of the
summands; in other words, for each $p \in \mathbb{Z}$, the set
\[
S_m^i \bigl( T, \gr_p^F \DR(\mathcal{M}) \bigr) \subseteq \Pic^0(T)
\]
is itself a finite union of translates of subtori by points of finite order. Since
\[
\gr_{p(M)}^F \DR(\mathcal{M}) = \omega_T \otimes F_{p(M)} \mathcal{M} \simeq R^j f_{\ast} \omega_X,
\]
we now obtain the assertion by specializing to $p = p(M)$.
\end{proof}
\begin{note}
Alternatively, one can deduce \corollaryref{cor:Wang} from Wang's theorem \cite{Wang}
about cohomology jump loci on compact K\"ahler manifolds, as follows. Wang shows that
the sets $S_m^{p,q}(X) = \menge{L \in \Pic^0(X)}{\dim H^q(X, \Omega_X^p
\otimes L) \geq m}$ are finite unions of translates of subtori by points of finite
order; in particular, this is true for $\omega_X = \Omega_X^{\dim X}$. Takegoshi's results
about higher direct images of $\omega_X$ in \theoremref{takegoshi} imply the
$E_2$-degeneration of the spectral sequence
\[
E_2^{i,j} = H^i \bigl( T, R^j f_{\ast} \omega_X \otimes L \bigr)
\Longrightarrow H^{i+j}(X, \omega_X \otimes f^{\ast} L)
\]
for every $L \in \Pic^0(T)$, which means that
\[
\dim H^q(X, \omega_X \otimes f^{\ast} L)
= \sum_{k+j=q} \dim H^k \bigl( T, R^j f_{\ast} \omega_X \otimes L \bigr).
\]
The assertion now follows from Wang's theorem by the same argument as above.
\end{note}
\section{Applications}
\subsection{Bimeromorphic characterization of tori}
Our main application of generic vanishing for higher direct images of dualizing
sheaves is an extension of the
Chen-Hacon birational characterization of abelian varieties \cite{CH1} to the
K\"ahler case.
\begin{theorem}\label{torus}
Let $X$ be a compact K\"ahler manifold with $P_1(X) = P_2 (X) = 1$ and $h^{1,0}(X) =
\dim X$. Then $X$ is bimeromorphic to a compact complex torus.
\end{theorem}
Throughout this section, we take $X$ to be a compact K\"ahler manifold, and denote by $f
\colon X \to T$ its Albanese mapping; by assumption, we have
\[
\dim T = h^{1,0}(X) = \dim X.
\]
We use the following standard notation, analogous to that in \parref{par:GV-sheaves}:
\[
S^i (X, \omega_X) = \menge{L \in \Pic^0(X)}{H^i (X, \omega_X \otimes L) \neq 0}.
\]
To simplify things, we shall identify $\Pic^0(X)$ and $\Pic^0(T)$ in what
follows. We begin by recalling a few well-known results.
\begin{lemma}\label{isolated}
If $P_1(X) = P_2(X) = 1$, then there cannot be any positive-dimensional analytic
subvariety $Z \subseteq \Pic^0 (X)$ such that both $Z$ and $Z^{-1}$ are contained in $S^0
(X, \omega_X)$. In particular, the origin must be an isolated point in $S^0 (X, \omega_X)$.
\end{lemma}
\begin{proof}
This result is due to Ein and Lazarsfeld \cite[Proposition~2.1]{EL}; they state it
only in the projective case, but their proof actually works without any changes on
arbitrary compact K\"ahler manifolds.
\end{proof}
\begin{lemma}\label{surjective}
Assume that $S^0 (X, \omega_X)$ contains isolated points. Then the Albanese map of $X$
is surjective.
\end{lemma}
\begin{proof}
By \theoremref{thm:direct_image} (for $j = 0$), $f_{\ast} \omega_X$ is a GV-sheaf.
\propositionref{sliding}
shows that any isolated point in $S^0 (T, f_{\ast} \omega_X) = S^0 (X, \omega_X)$ also
belongs to $S^{\dim T}(T, f_{\ast} \omega_X)$; but this is only possible if the support of
$f_{\ast} \omega_X$ has dimension at least $\dim T$.
\end{proof}
To prove \theoremref{torus}, we follow the general strategy introduced in
\cite[\S4]{Pareschi}, which in turn is inspired by \cite{EL,CH2}. The crucial new
ingredient is of course \theoremref{thm:direct_image}, which had only been known in the
projective case. Even in the projective case, however, the argument below is
substantially cleaner than the existing proofs; this is due to \corollaryref{cor:gen-finite}.
\begin{proof}[Proof of \theoremref{torus}]
The Albanese map $f \colon X \to T$ is surjective by \lemmaref{isolated} and
\lemmaref{surjective}; since $h^{1,0}(X) = \dim X$, this means that $f$ is
generically finite. To conclude the proof, we just have to argue that $f$ has degree
one; more precisely, we shall use \theoremref{thm:direct_image} to show that $f_{\ast} \omega_X
\simeq \shO_T$.
As a first step in this direction, let us prove that $\dim S^0(T, f_{\ast} \omega_X) = 0$. If
\[
S^0(T, f_{\ast} \omega_X) = S^0(X, \omega_X)
\]
had an irreducible component $Z$ of positive dimension, \corollaryref{cor:gen-finite}
would imply that $Z^{-1}$ is contained in $S^0(X, \omega_X)$ as well. As this would
contradict \lemmaref{isolated}, we conclude that $S^0(T, f_{\ast} \omega_X)$ is
zero-dimensional.
Now $f_{\ast} \omega_X$ is a GV-sheaf by \theoremref{thm:direct_image}, and so \propositionref{sliding}
shows that
\[
S^0(T, f_{\ast} \omega_X) = S^{\dim T}(T, f_{\ast} \omega_X).
\]
Since $f$ is generically finite, \theoremref{takegoshi} implies that $R^j f_{\ast} \omega_X = 0$ for $j > 0$, which
gives
\[
S^{\dim T}(T, f_{\ast} \omega_X) = S^{\dim T}(X, \omega_X)
= S^{\dim X}(X, \omega_X) = \{\shO_T\}.
\]
Putting everything together, we see that $S^0(T, f_{\ast} \omega_X) = \{\shO_T\}$.
We can now use the Chen-Jiang decomposition for $f_{\ast} \omega_X$ to get more information.
The decomposition in \theoremref{thm:direct_image} (for $j = 0$) implies that
\[
\{\shO_T\} = S^0(T, f_{\ast} \omega_X) = \bigcup_{k=1}^n L_k^{-1} \otimes \Pic^0(T_k),
\]
where we identify $\Pic^0(T_k)$ with its image in $\Pic^0(T)$. This equality forces
$f_{\ast} \omega_X$ to be a trivial bundle of rank $n$; but then
\[
n = \dim H^{\dim T}(T, f_{\ast} \omega_X) = \dim H^{\dim X}(X, \omega_X) = 1,
\]
and so $f_{\ast} \omega_X \simeq \shO_T$. The conclusion is that $f$ is generically
finite of degree one, and hence birational, as asserted by the theorem.
\end{proof}
\subsection{Connectedness of the fibers of the Albanese map}
As another application, one obtains the following analogue of an effective version of
Kawamata's theorem on the connectedness of the fibers of the Albanese map, proved by
Jiang \cite[Theorem 3.1]{jiang} in the projective setting. Note that the statement is
more general than \theoremref{torus}, but uses it in its proof.
\begin{theorem}\label{thm:jiang}
Let $X$ be a compact K\"ahler manifold with $P_1(X) = P_2 (X) = 1$. Then the Albanese map of $X$ is
surjective, with connected fibers.
\end{theorem}
\begin{proof}
The proof goes entirely along the lines of \cite{jiang}. We only indicate the necessary modifications in the K\"ahler case.
We have already seen that the Albanese map $f \colon X \to T$ is surjective. Consider
its Stein factorization
\[
\begin{tikzcd}[column sep=large]
X \dar{g} \drar{f} & \\
Y \rar{h} & T.
\end{tikzcd}
\]
Up to passing to a resolution of singularities and allowing $h$ to be generically
finite, we can assume that $Y$ is a compact complex manifold. Moreover, by
\cite[Th\'eor\`eme 3]{Varouchas}, after performing a further bimeromorphic
modification, we can assume that $Y$ is in fact compact K\"ahler. This does not
change the hypothesis $P_1(X) = P_2(X) = 1$.
The goal is to show that $Y$ is bimeromorphic to a torus, which is enough to
conclude. If one could prove that $P_1(Y) = P_2 (Y) = 1$, then
\theoremref{torus} would do the job. In fact, one can show precisely as in
\cite[Theorem 3.1]{jiang} that $H^0 (X, \omega_{X/Y}) \neq 0$, and consequently that
\[
P_m (Y) \le P_m (X) \quad \text{for all $m \geq 1$.}
\]
The proof of this statement needs the degeneration of the Leray spectral sequence for
$g_* \omega_X$, which follows from \theoremref{takegoshi}, and the fact that $f_*
\omega_X$ is a GV-sheaf, which follows from \theoremref{thm:direct_image}. Besides this, the
proof is purely Hodge-theoretic, and hence works equally well in the K\"ahler case.
\end{proof}
\subsection{Semi-positivity of higher direct images}
\label{par:semi-positivity}
In the projective case, GV-sheaves automatically come with positivity properties;
more precisely, on abelian varieties it was proved in \cite[Corollary~3.2]{Debarre}
that $M$-regular sheaves are ample, and in \cite[Theorem~4.1]{PP2} that GV-sheaves
are nef. Due to \theoremref{thm:Chen-Jiang} a stronger result in fact holds true,
for arbitrary graded quotients of Hodge modules on compact complex tori.
Recall that to a coherent sheaf $\shf{F}$ on a compact complex manifold one can
associate the analytic space $\mathbb{P} (\shf{F}) = \mathbb{P} \left( {\rm Sym}^\bullet \shf{F}
\right)$, with a natural mapping to $X$ and a line bundle $\shf{O}_{\mathbb{P}(\shf{F})} (1)$. If
$X$ is projective, the sheaf $\shf{F}$ is called \emph{ample} if the line bundle
$\shf{O}_{\mathbb{P}(\shf{F})} (1)$ is ample on $\mathbb{P}(\shf{F})$.
\begin{corollary}\label{cor:MHM_positivity}
Let $M = (\mathcal{M}, F_{\bullet} \mathcal{M}, M_{\RR})$ be a polarizable real Hodge
module on a compact complex torus $T$. Then, for each $k \in \mathbb{Z}$, the coherent
$\shO_T$-module $\gr_k^F \mathcal{M}$ admits a decomposition
\[
\gr_k^F \mathcal{M} \simeq \bigoplus_{j=1}^n
\bigl( q^{\ast}_j \shf{F}_j \otimes_{\shO_T} L_j \bigr),
\]
where $q_j \colon T \to T_j$ is a quotient torus, $\shf{F}_j$ is an
ample coherent $\shf{O}_{T_j}$-module whose support $\Supp \shf{F}_j$ is projective,
and $L_j \in \Pic^0(T)$.
\end{corollary}
\begin{proof}
By \theoremref{thm:Chen-Jiang} we have a decomposition as in the statement, where
each $\shf{F}_j$ is an $M$-regular sheaf on the abelian variety generated by its
support. But then \cite[Corollary~3.2]{Debarre} implies that each $\shf{F}_j$ is ample.
\end{proof}
The ampleness part in \theoremref{thm:direct_image} is then a consequence of the
proof in \parref{par:ChenJiang} and the statement above. It implies that higher
direct images of canonical bundles have a strong semi-positivity property
(corresponding to semi-ampleness in the projective setting). Even the following very
special consequence seems to go beyond what can be said for arbitrary holomorphic
mappings of compact K\"ahler manifolds (see e.g. \cite{MT} and the references
therein).
\begin{corollary} \label{cor:semi-pos}
Let $f: X \rightarrow T$ be a surjective holomorphic mapping from a compact K\"ahler
manifold to a complex torus. If $f$ is a submersion outside of a simple normal
crossings divisor on $T$, then each $R^i f_* \omega_X$ is locally free and admits a
smooth hermitian metric with semi-positive curvature (in the sense of Griffiths).
\end{corollary}
\begin{proof}
Note that if $f$ is surjective, then \theoremref{takegoshi} implies that $R^i f_*
\omega_X$ are all torsion free. If one assumes in addition that $f$ is a submersion
outside of a simple normal crossings divisor on $T$, then they are locally free; see
\cite[Theorem~V]{Takegoshi}. Because of the decomposition in
\theoremref{thm:direct_image}, it is therefore enough to show that an M-regular
locally free sheaf on an abelian variety always admits a smooth hermitian metric with
semi-positive curvature. But this is an immediate consequence of the fact that
M-regular sheaves are continuously globally generated \cite[Proposition~2.19]{PP1}.
\end{proof}
The existence of a metric with semi-positive curvature on a vector bundle $E$ implies
that the line bundle $\shf{O}_{\mathbb{P}(E)}(1)$ is nef, but is in general known to be a
strictly stronger condition. \corollaryref{cor:semi-pos} suggests the following
question.
\begin{problem}
Let $T$ be a compact complex torus. Suppose that a locally free sheaf $\shf{E}$ on $T$
admits a smooth hermitian metric with semi-positive curvature (in the sense of
Griffiths or Nakano). Does this imply the existence of a decomposition
\[
\shf{E} \simeq \bigoplus_{k=1}^n \bigl( q^{\ast}_k \shf{E}_k \otimes L_k \bigr)
\]
as in \theoremref{thm:direct_image}, in which each locally free sheaf $\shf{E}_k$
has a smooth hermitian metric with strictly positive curvature?
\end{problem}
\subsection{Leray filtration}
Let $f \colon X \to T$ be a holomorphic mapping from a compact K\"ahler manifold $X$
to a compact complex torus $T$. We use \theoremref{thm:direct_image} to describe the Leray
filtration on the cohomology of $\omega_X$, induced by the Leray spectral sequence
associated to $f$. Recall that, for each $k$, the Leray filtration on $H^k(X, \omega_X)$
is a decreasing filtration $L^{\bullet} H^k(X, \omega_X)$ with the property that
\[
\gr_L^i H^k(X, \omega_X) = H^i \bigl( T, R^{k-i} f_{\ast} \omega_X \bigr).
\]
On the other hand, one can define a natural decreasing filtration $F^{\bullet} H^k
(X, \omega_X)$ induced by the action of $H^1(T, \shf{O}_T)$, namely
\[
F^i H^k(X, \omega_X) = \Im \left( \bigwedge^i H^1(T, \shf{O}_T) \otimes H^{k-i} (X, \omega_X)
\to H^{k}(X, \omega_X)\right).
\]
It is obvious that the image of the cup product mapping
\begin{equation} \label{eq:cup-product}
H^1(T, \shf{O}_T) \otimes L^i H^k(X, \omega_X) \to H^{k+1}(X, \omega_X)
\end{equation}
is contained in the subspace $L^{i+1} H^{k+1}(X, \omega_X)$. This implies that
\[
F^i H^k (X, \omega_X) \subseteq L^i H^k (X, \omega_X)
\quad \text{for all $i \in \mathbb{Z}$.}
\]
This inclusion is actually an equality, as shown by the following result.
\begin{theorem}\label{leray}
The image of the mapping in \eqref{eq:cup-product} is equal to $L^{i+1} H^{k+1}(X,
\omega_X)$. Consequently, the two filtrations $L^{\bullet} H^k(X, \omega_X)$ and $F^{\bullet}
H^k(X, \omega_X)$ coincide.
\end{theorem}
\begin{proof}
By \cite[Theorem~A]{LPS}, the graded module
\[
Q_X^j = \bigoplus_{i=0}^{\dim T} H^i \bigl( T, R^j f_{\ast} \omega_X \bigr)
\]
over the exterior algebra on $H^1(T, \shf{O}_T)$ is $0$-regular, hence generated in
degree $0$. (Since each $R^j f_{\ast} \omega_X$ is a GV-sheaf by \theoremref{thm:direct_image}, the
proof in \cite{LPS} carries over to the case where $X$ is a compact K\"ahler
manifold.) This means that the cup product mappings
\[
\bigwedge^i H^1(T, \shf{O}_T) \otimes H^0 \bigl( T, R^j f_{\ast} \omega_X \bigr)
\to H^i \bigl( T, R^j f_{\ast} \omega_X \bigr)
\]
are surjective for all $i$ and $j$, which in turn implies that the mappings
$$H^1(T, \shf{O}_T) \otimes \gr_L^i H^k(X, \omega_X) \to \gr_L^{i+1} H^{k+1} (X, \omega_X)$$
are surjective for all $i$ and $k$. This implies the assertion by ascending induction.
\end{proof}
If we represent cohomology classes by smooth forms, Hodge conjugation and Serre
duality provide for each $k \geq 0$ a hermitian pairing
$$H^0 (X, \Omega_X^{n-k}) \times H^k (X, \omega_X) \rightarrow \mathbb{C}, \,\,\,\,(\alpha, \beta) \mapsto
\int_X \alpha \wedge \overline{\beta},$$
where $n = \dim X$. The Leray filtration on $H^k (X , \omega_X)$ therefore induces a filtration on
$H^0 (X, \Omega_X^{n-k})$; concretely, with a numerical convention which again gives us a decreasing
filtration with support in the range $0, \ldots, k$, we have
\[
L^i H^0 (X, \Omega_X^{n-k}) = \menge{\alpha \in H^0(X, \Omega_X^{n-k})}%
{\alpha \perp L^{k+1 - i} H^k (X, \omega_X)}.
\]
Using the description of the Leray filtration in \theoremref{leray}, and the elementary fact that
$$\int_X \alpha \wedge \overline{\theta\wedge \beta} = \int_X \alpha \wedge \overline{\theta} \wedge
\overline{\beta}$$
for all $\theta \in H^1 (X, \shf{O}_X)$, we can easily deduce that $L^i H^0 (X, \Omega_X^{n-k})$ consists of those
holomorphic $(n-k)$-forms whose wedge product with
$$\bigwedge^{k+1-i} H^0 (X, \Omega_X^1)$$
vanishes. In other words, for all $j$ we have:
\begin{corollary}\label{cor:Leray_forms}
The induced Leray filtration on $H^0 (X, \Omega_X^j)$ is given by
\[
L^i H^0 (X, \Omega_X^j) = \Menge{\alpha \in H^0(X, \Omega_X^j)}%
{\alpha \wedge \bigwedge^{n+1 - i - j} H^0 (X, \Omega_X^1) = 0}.
\]
\end{corollary}
\begin{remark}
It is precisely the fact that we do not know how to obtain this basic description of
the Leray filtration using standard Hodge theory that prevents us from giving a proof
of \theoremref{thm:direct_image} in the spirit of \cite{GL1},
and forces us to appeal to the theory of Hodge modules for the main results.
\end{remark}
\subsection*{Acknowledgements}
We thank Christopher Hacon, who first asked us about the behavior of higher direct
images in the K\"ahler setting some time ago, and with whom we have had numerous
fruitful discussions about generic vanishing over the years. We also thank Claude Sabbah
for advice about the definition of polarizable complex Hodge modules, and Jungkai
Chen, J\'anos Koll\'ar, Thomas Peternell, and Valentino Tosatti for useful discussions.
During the preparation of this paper CS has been partially supported by the NSF grant
DMS-1404947, MP by the NSF grant DMS-1405516, and GP by the MIUR PRIN project
``Geometry of Algebraic Varieties''.
\bibliographystyle{amsalpha}
\section{Introduction}
Unordered maps are a commonly used abstract data type, which stores key-value associations: You can \emph{search} for the value associated with a given key, \emph{insert} a new key-value pair association, or \emph{remove} a key-value pair association.
\emph{Ordered maps} also support predecessor and successor searches, which return the next or previous key-value pair, according to an ordering over keys.
Ordered maps are typically implemented with balanced search trees or skiplists, while unordered maps are typically implemented with hash maps and tries.
Hash maps and tries are the most popular implementation choices, because they are fast in the general case. Unfortunately, hash maps and tries also have pathological cases, where they are slower than balanced search trees:
\begin{itemize}
\item Hash maps with poor hash functions have $O(n)$ running time; specifically, operations on a hash map have an expected $O(n)$ running time if any bucket holds $b$ key-value pairs, where $b \propto n$.
For instance, if 1 \% of all key-value pairs hash to the same bucket regardless of the number of key-value pairs, then the operations have an expected $O(n)$ running time.
\item Tries are generally taller than balanced trees, especially when the keys have many significant bits; for instance, a binary trie storing the first $n$ powers of two will have height $n$, whereas a balanced tree storing the same keys will have height $O(\log n)$.
\end{itemize}
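To make the trie pathology concrete, the following sketch compares the depth a binary trie needs for the first $n$ powers of two against the height of a balanced tree over the same keys; \verb|trieHeight| and \verb|balancedHeight| are illustrative helpers of our own, not part of any implementation discussed here.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Depth a binary trie needs to distinguish keys up to maxKey:
// one level per significant bit of the largest key.
int trieHeight(uint64_t maxKey) {
    int bits = 0;
    while (maxKey != 0) { bits++; maxKey >>= 1; }
    return bits;
}

// Height of a balanced binary search tree over n keys.
int balancedHeight(uint64_t n) {
    return n <= 1 ? 1 : (int)std::ceil(std::log2((double)n)) + 1;
}
```

For the 20 keys $2^0, \ldots, 2^{19}$ the trie needs 20 levels, while a balanced tree needs only 6.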
To avoid the weaknesses of traditional ordered and unordered maps, we have developed BT-trees. BT-trees are ordered maps with similar performance to unordered maps, without pathological performance in corner cases. BT-trees are faster than traditional ordered maps, because they are designed for spatial locality, like B+trees, \emph{and} for highly efficient operation implementations, unlike B+trees.
Compared to B+trees, BT-trees have a weaker balancing invariant and a simpler leaf node representation, which together permit a highly efficient implementation.
\section{BT-trees}
BT-trees are external, multiway, balanced search trees, \textit{i.e.}, internal and leaf nodes have different representations, each internal node can have multiple children, and the tree has an $O(\log n)$ height.
BT-trees are intended to be used concurrently, by synchronizing with lock-elision:
In the common case, \verb|acquire(lock)| and \verb|release(lock)| respectively correspond to starting a transaction, and committing a transaction while checking that the lock is not held.
If a transaction fails 2 times in a row, we instead acquire and release the lock as a normal test-and-set lock.
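This fallback policy can be sketched as follows; \verb|ElisionLock|, \verb|tryTransaction|, and \verb|runElided| are hypothetical names, and \verb|tryTransaction| stands in for a hardware transaction attempt (e.g., via RTM intrinsics), which we do not model here.

```cpp
#include <cassert>
#include <functional>

struct ElisionLock { bool held = false; };  // simplified test-and-set lock

// Fallback policy: try the critical section transactionally; after 2
// consecutive transactional failures, take the lock non-speculatively.
// Returns true iff the fallback lock was used.
bool runElided(ElisionLock& lock,
               const std::function<bool()>& tryTransaction,
               const std::function<void()>& criticalSection) {
    for (int attempt = 0; attempt < 2; attempt++) {
        if (tryTransaction()) {   // transaction committed, lock was free
            criticalSection();
            return false;
        }
    }
    lock.held = true;             // normal test-and-set path
    criticalSection();
    lock.held = false;
    return true;
}
```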
Listing~\ref{lst:types} illustrates how BT-trees, key-value pairs, and
nodes are represented in pseudocode resembling C++. The classes
\verb|I| and \verb|L| represent internal and leaf nodes respectively,
while \verb|E| represents key-value pairs. Internal nodes have other
internal nodes or leaves as children. Leaf and internal nodes are
aligned to cache line boundaries by using the C++11 \verb|alignas|
keyword, and allocating with \verb|new|. Each leaf node can store up
to $L_C$ key-value pairs, and internal nodes have up to $I_C$
children, where $L_C, I_C \ge 6$. The lower bound node capacities of
$6$ ensure that we can split a full node into two nodes with at least
3 children or key-value pairs.
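The bound on the capacities can be checked with one line of arithmetic; the helper below is a hypothetical illustration, not part of the implementation.

```cpp
#include <cassert>

// Splitting a full node with C entries produces halves of ceil(C/2) and
// floor(C/2) entries; C >= 6 guarantees both halves hold at least 3.
int smallerHalfAfterSplit(int capacity) {
    return capacity / 2;
}
```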
\begin{figure}
\begin{lstlisting}[caption=Type definitions for BT-trees., label=lst:types]
class E<K, V> {K k; V v; }; // Key-value pairs
class alignas(64) L<K, V> { // Leaf nodes
E<K, V> e[L_C]; // Unordered key-value pairs
};
class alignas(64) I<K> { // Internal nodes
I* c[I_C]; // Pointers to children
int size; // Number of children
K k[I_C - 1]; // Internal node keys
};
class BT { // BT trees
int h; // The tree's height
I* r; // Pointer to the tree's root
Lock lock; // The tree's lock
};
\end{lstlisting}
\vspace*{-5mm}
\end{figure}
\begin{figure}
\begin{lstlisting}[caption=Remove operation. Insert and search operations have the same structure, label=lst:rOps]
bool remove(const K& k, V& res, BT* t) {
I* p, **pp; // Parent of leaf node
int ci; I* c; // Leaf node
while (1) { // 1. Find the leaf node
findNode(k, pp, p, ci, c, t);
// 2. Operate on the leaf node
switch(remL(k, res, (L*) c)) {
case SUCCESS:
release(t->lock); return true;
case FAILURE:
release(t->lock); return false;
case MERGE: // 3. Merge if near empty
mergeL(pp, p, ci, c, t);
release(t->lock); break;
case SPLIT: // 3. Split if near full
splitL(pp, p, ci, c, t);
release(t->lock); break;
}
}
}
\end{lstlisting}
\vspace*{-7mm}
\end{figure}
~\\
~\\
All BT-tree operations follow the same template, as illustrated by Listing~\ref{lst:rOps}:
\begin{enumerate}
\item Find the leaf node which may hold the key.
\item Perform the operation on the leaf node.
\item If the leaf node is full or almost empty, split or merge it and try again.
\end{enumerate}
Listing~\ref{lst:lOps} illustrates how we search, insert, and remove from leaf nodes.
BT-trees split full leaf nodes when inserting into them, and merge non-root leaf nodes if they only have 3 key-value pairs when removing from them.
The operations iterate over up to $L_C$ key-value pairs in very simple loops, which can easily be unrolled manually, or by a compiler. We found that unrolling \verb~srchL~ and \verb~insL~ is very beneficial, while \verb~remL~ benefits more from specializing the loops: once a match has been found, or the size is confirmed to be above 3, the loop no longer needs to look for matching key-value pairs or track the size of the leaf node.
\begin{figure}
\begin{lstlisting}[caption=Operations on leaf nodes, label=lst:lOps]
Res srchL(const K& k, L* l, V& res) {
for(int i = 0; i < L_C; i++) {
E<K, V> e = l->e[i]; // Look at all keys
if (e.k == k) { // If we have a match
res = e.v; // Return the matching value
return SUCCESS;
}
}
return FAILURE; // No matches in the leaf node
}
Res insL(const K& k, const V& v, L* l) {
bool unused = false;
int j; // Look at all keys
for(int i = 0; i < L_C; i++) {
E<K, V> e = l->e[i];
if (e.k == 0) {
unused = true;
j = i; // Remember unused key-value pairs
}
if (e.k == k) {
l->e[i] = {k, v}; // Replace any match
return SUCCESS;
}
}
if (unused) {
l->e[j] = {k, v}; // Otherwise, replace any
return SUCCESS; // empty key-value pair
}
return SPLIT; // Otherwise split the leaf node
}
Res remL(const K& k, V& res, L* l) {
bool match = false;
int m, n = 0; // Look at all keys
for(int i = 0; i < L_C; i++) {
E<K, V> e = l->e[i];
if(e.k != 0) {
n++; // Count used keys
}
if(e.k == k) {
m = i; // Remember matching key
res = e.v; // and value
match = true;
}
}
if(n <= 2 && !isRoot(l))
return MERGE; // Merge nearly empty nodes
if(match) {
l->e[m].k = 0; // Remove matching key
return SUCCESS;
}
return FAILURE; // No matching key
}
\end{lstlisting}
\vspace*{-7mm}
\end{figure}
Listing~\ref{lst:find} illustrates how we find the leaf node which may
hold a key \verb|k|: Preallocate nodes, begin the critical section,
and iteratively traverse from the root to the child which may
hold~\verb|k| until we reach a leaf node. The internal nodes are
traversed by performing a linear search over the keys in the internal
nodes. Any node with fewer than 3 children is merged, and any full
node is split. After balancing nodes, the operation ends the
transaction and restarts. In order to balance nodes we keep track of
the current node~\verb|c|, its parent~\verb|p|, and the pointer to the
parent, \verb|pp|.
\begin{figure}
\begin{lstlisting}[caption=Finding the leaf node which may hold the key $k$., label=lst:find]
void findNode(K k, I**& pp, I*& p, int& ci,
I*& c, BT* t) {
start: // Allocate nodes before transactions
ensureCapacity(6);
pp = &(t->r);
p = 0; // The root has no parent
acquire(t->lock);
c = t->r; // Start at the root
int h = t->h;
if (h == 0)
return; // The root is a leaf node
int size = c->size;
if (size == I_C) { // Split the root
splitRoot(pp, c, h, size, t);
release(t->lock); goto start;
}
p = c; ci = 0; // Traverse to child
while(p->k[ci] <= k && ++ci != size - 1) {}
c = p->c[ci];
while (--h > 0) {
size = c->size;
if(size == 2) { // Merge small nodes
mergeI(pp, p, ci, c, t);
release(t->lock); goto start;
}
if(size == I_C) { // Split full nodes
splitInternal(c, p, pp, ci, size, t);
release(t->lock); goto start;
}
pp = &p->c[ci]; // Traverse to child
p = c; ci = 0;
while(p->k[ci] <= k && ++ci != size - 1) {}
c = p->c[ci];
}
}
\end{lstlisting}
\vspace*{-7mm}
\end{figure}
Splitting and merging nodes is handled by the same balancing function
given different arguments. We split one node to produce two nodes
($in=1, out=2$), and we merge two nodes to produce one or two nodes
($in=2, out=1\vee out=2$). Merging produces one output node when the
two input nodes have a combined size less than or equal to $b =
\frac{2}{3} (C + 2)$, where $C$ is the capacity of the output node
type, that is $L_C$ or $I_C$. We chose between merging to one or
merging to two nodes in this way, because it maximizes the number of
operations required to bring the new nodes out of balance. It takes
at least $C-b$ operations to fill a merged node, and at least $b-2$
operations to reduce a merged node's size to $2$. Similarly it takes
at least $C-\frac{b}{2}$ operations to fill a split node, and at least
$\frac{b}{2}-2$ operations to reduce a split node's capacity to $2$.
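These bounds can be checked numerically; the sketch below rounds $b$ down to an integer, which is our own assumption (the text leaves the rounding unspecified), and the helper names are hypothetical.

```cpp
#include <cassert>

// Merge threshold from the text: two nodes are merged into one when their
// combined size is at most b = (2/3)(C + 2), where C is the node capacity.
int mergeThreshold(int capacity) {
    return 2 * (capacity + 2) / 3;   // integer arithmetic, rounds down
}

// Minimum number of operations before a freshly merged node (size <= b)
// becomes unbalanced again: it must either fill up (C - b inserts) or
// shrink to size 2 (b - 2 removals).
int opsToUnbalanceMerged(int capacity) {
    int b = mergeThreshold(capacity);
    int toFill = capacity - b;
    int toEmpty = b - 2;
    return toFill < toEmpty ? toFill : toEmpty;
}
```

For the node sizes used in the evaluation ($L_C = I_C = 32$) this gives $b = 22$, so a merged node survives at least 10 operations before rebalancing.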
Listing~\ref{lst:rebI} illustrates how internal nodes are balanced.
The key-value pairs of balanced leaf nodes are produced the same way
child pointers are produced for balanced internal nodes, except the
key-value pairs only have to be partially sorted: Copy the first $s =
\lceil ( c1.size + c2.size) / out \rceil$ elements from the unbalanced
node(s) to the first balanced node, \verb|n1|, and copy the remaining
elements to the second balanced node, \verb|n2|, if $out = 2$. The
keys of balanced internal nodes are produced differently, because they
have to respect the order of both the unbalanced nodes' keys and the key,
\verb|k|, which the parent of the previous nodes used to guide tree
searches: Order \verb|k| and the keys from the unbalanced nodes, and
copy the keys into the balanced nodes such that the first balanced
node receives the first $\lceil ( c1.size + c2.size) / out \rceil - 1$
keys, and the second balanced node receives the last $\lfloor (
c1.size + c2.size) / out \rfloor - 1$ keys. The $\lceil ( c1.size +
c2.size) / out \rceil - 1$'th key in the order is then inserted into
the parent of the balanced nodes, if $out=2$. The actual
implementation uses more complicated code than~Listing~\ref{lst:rebI}
to avoid copying the keys and child pointers into intermediate arrays,
but is functionally equivalent.
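The resulting node sizes can be summarized by a small helper; this is an illustration of the distribution described above, with hypothetical names, and it covers only the sizes, not the copying itself.

```cpp
#include <cassert>

struct Split { int firstChildren, firstKeys, secondChildren, secondKeys; };

// Distribute the children and keys of the unbalanced node(s) over `out`
// balanced nodes: the first node receives s = ceil(total/out) children and
// s-1 keys; the remainder goes to the second node when out == 2, and the
// s-1'th combined key is pushed up to the parent.
Split distribute(int totalChildren, int out) {
    int s = (totalChildren + out - 1) / out;   // ceiling division
    Split r;
    r.firstChildren = s;
    r.firstKeys = s - 1;
    r.secondChildren = (out == 2) ? totalChildren - s : 0;
    r.secondKeys = (out == 2) ? totalChildren - s - 1 : 0;
    return r;
}
```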
\begin{figure}
\begin{lstlisting}[caption={Balancing internal nodes.}, label={lst:rebI}]
void balanceI(I* c1, I* c2, I* p, I** pp,
int i, int in, int out, BT* t) {
I *n1, *n2, *p1; // The new nodes
int s = ceiling((c1->size + c2->size) / out);
auto ccomb = conc(c1->c, c2->c);
auto kcomb = in == 1 ? c1->k :
conc(c1->k, p->k[i], c2->k); // Gather the keys
// Fill the new nodes
memcpy(n1->c, ccomb, s * sizeof(void*));
memcpy(n2->c, &ccomb[s],
(c1->size + c2->size - s) * sizeof(void*));
memcpy(n1->k, kcomb, (s - 1) * sizeof(K));
memcpy(n2->k, &kcomb[s],
(c1->size + c2->size - s - 1) * sizeof(K));
p1->c[i] = n1; // Insert the balanced nodes
if(out == 2) {
p1->c[i + 1] = n2;
p1->k[i] = kcomb[s - 1]; // Key pushed to the parent
}
int pSize = 0;
if(p != 0) {
pSize = p->size;
... copy p@{\color{gray}\verb~'~}@s other children and keys
if(isRoot(p) && pSize == 2) {
t->h--; // Merging the root
}
} else {
t->h++; // Splitting the root
}
p1->size = pSize + out - in;
*pp = p1; // Replace the parent
dealloc(c1, c2, p);
}
\end{lstlisting}
\vspace*{-7mm}
\end{figure}
\section{Evaluation}
\begin{figure*}[tb]
\includegraphics[width=0.98\textwidth]{ops2}
\caption{Mean map throughput as a function of threads for 3 workloads and 3 key ranges.}
\label{fig:throughput2}
\end{figure*}
\begin{figure*}[tb]
\includegraphics[width=0.98\textwidth]{L1C_Load_Misses}
\caption{The mean number of L1 cache load misses per operation for BT-trees (solid black line) and Intel TBB \texttt{concurrent\_hash\_map} (dashed grey line).}
\label{fig:l1_load}
\end{figure*}
\begin{table}
\normalsize
\caption{Experimental machine}
\label{tab:machine}
\centering
\begin{tabular}{l|l}
Processor & Intel Xeon E3-1276 [email protected] \\
Processor specs & 4 cores, 8 threads \\
Processor specs(2) & 32KB L1D cache, 8 MB L3 cache \\
C++ Compiler & GCC 4.9.1 \\
Java Compiler/Runtime & Oracle Server JRE 1.8.0\_20 \\
Operating system & Ubuntu Server 14.04.1 LTS \\
Kernel & 3.17.0-031700-generic \\
libc & eglibc 2.19
\end{tabular}
\end{table}
\begin{table}
\normalsize
\caption{Evaluated ordered (O) and unordered (U) maps}
\label{tab:maps}
\centering
\begin{tabular}{l|l}
Data structure name (Ordering) & Details \\\hline
BT-trees (O) & Node sizes $L_C=I_C=32$\\
Chromatic6 (O)~\cite{brown2014general} & Available online~\cite{brown}\\
ConcurrentSkipListMap (O)~\cite{javaSkipList} & Java (v1.8.0\_20)\\
ConcurrentHashMap (U)~\cite{javaCHM} & Java (v1.8.0\_20)\\
TrieMap (U)~\cite{Prokopec:2012} & Scala-library (v2.11.2)\\
concurrent\_hash\_map (U)~\cite{tbb} & Intel TBB (v4.3\_20141023) \\
\end{tabular}
\end{table}
This section covers our evaluation of BT-trees.
The evaluation compares the throughput of several ordered and unordered maps in an established benchmark, which avoids the weaknesses of unordered maps.
First we describe the experimental setup, then we discuss the implications of the experiments design, and finally we show and discuss the results of the experiments.
We evaluated BT-trees and the other maps listed in Table~\ref{tab:maps}, on the experimental machine described in Table~\ref{tab:machine}, with the map benchmark available at: \url{http://www.cs.toronto.edu/~tabrown/chromatic/testharness.zip}.
The benchmark is written in Java, so we had to port it to C++.
In the benchmark, up to 8 threads repeatedly operate on one shared map for 5 seconds, after pre-filling the map with $n$ key-value pairs.
After the 5 seconds we recorded the number of operations performed on the map in the 5 seconds.
The maps used 32-bit keys, and the operations used random keys, uniformly sampled from 1 to $k$, where $k$ is either 100, 10,000, or 1,000,000.
We used 3 distributions of map operations:
\begin{enumerate}
\item \textit{Update}, with 50\% insertion, 50\% removal ($n = k / 2$);
\item \textit{Mixed}, with 70\% searches, 20\% insertion, and 10\% removal
($n = 2 k /3$); and
\item \textit{Constant}, with 100\% searches ($n = k$)
\end{enumerate}
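The pre-fill sizes $n$ above follow from viewing each key as a two-state chain whose presence is toggled by inserts and removes: the stationary probability of a key being present is the insert fraction divided by the sum of the insert and remove fractions. The sketch below is our own back-of-the-envelope check, not part of the benchmark.

```cpp
#include <cassert>

// Steady-state expected map size: each key is present with probability
// pIns / (pIns + pRem), so the expected size is k times that ratio.
int expectedSize(int k, double pIns, double pRem) {
    return (int)(k * pIns / (pIns + pRem) + 0.5);  // round to nearest
}
```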
The benchmark is designed to produce the highest possible throughput and contention for any given data structure size ($n$) and distribution of operations: The threads only generate keys and operate on the maps, unlike real applications, which do useful work between map operations.
The maps are pre-filled with $n$ key-value pairs to minimize fluctuations in the maps size, and therefore minimize the changes in operation throughput during the benchmark; $n$ is
the expected number of key-value pairs in a map after infinitely many
operations.
The benchmark's design favors hash maps, because the keys have a very dense distribution.
A dense key distribution implies that most common integer hash functions are perfect hash functions.
In particular, the hash functions of the hash maps in Table~\ref{tab:maps} are perfect hash functions even when truncated to the least significant $\log_2 (n)$ bits.
As a consequence, we expect the hash maps to have lower conflict rates, and higher throughput, than they would have for realistic inputs.
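This claim can be verified directly: for dense keys $1 \ldots n$ and a power-of-two bucket count, keeping only the low bits of the (identity) hash yields no collisions. The helper below is illustrative only.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// For dense keys 1..n and a power-of-two table of n buckets, truncating
// the (identity) hash to the low log2(n) bits maps each key to a
// distinct bucket, i.e. the truncated hash is perfect.
bool truncatedHashIsPerfect(uint32_t n) {      // n must be a power of two
    std::vector<bool> used(n, false);
    for (uint32_t k = 1; k <= n; k++) {
        uint32_t bucket = k & (n - 1);         // keep the low log2(n) bits
        if (used[bucket]) return false;        // collision found
        used[bucket] = true;
    }
    return true;
}
```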
Despite being somewhat unrealistic, the benchmark has advantages: it is relatively well known, it is useful as a stress test, and it is a best case evaluation of hash maps.
Figure~\ref{fig:throughput2} shows the throughput of each map implementations on the benchmark.
The relative single threaded throughput of the implementations follow the same trend for all distributions of operations and keyranges:
\verb~ConcurrentHashMap~ is always faster than BT-trees, \verb|TrieMap|, and
\verb~concurrent_hash_map~, which are usually faster than \verb|Chromatic6|, which in turn is usually faster than
\verb~ConcurrentSkipListMap~.
There are 2 deviations from the usual
trend: (1) \verb|Chromatic6| is faster than
\verb~concurrent_hash_map~ in the \textit{Constant} workload when $k=100$, and
(2) \verb~ConcurrentSkipListMap~ achieves higher performance than
\verb|Chromatic6| in the \textit{Update} workload when $k=100$.
The gap in performance between the traditional ordered maps and BT-trees / the unordered maps is largest for large data structures (large $k$).
The increasing gap is caused by two factors: (1) BT-trees and \verb|TrieMap| are more cache efficient than traditional ordered maps, and (2) hash maps have constant asymptotic running time, while search trees and skiplists have logarithmic asymptotic running time.
To illustrate the performance gap, BT-trees are 1.75, 2.84, and 4.71 times faster than \verb|Chromatic6| in the single threaded \textit{Update} workload for $k = 100$, 10,000, and 1,000,000, respectively.
In summary, BT-trees are slower than \verb~ConcurrentHashMap~, similar to \verb|TrieMap|, and
\verb~concurrent_hash_map~, and faster than the traditional ordered maps.
The relative performance of the map implementations is similar in parallel cases and the single threaded case.
Therefore we will focus on the gray area, the performance of BT-trees when compared to \verb|TrieMap| and \verb|concurrent_hash_map|.
\paragraph{BT-trees compared to \texttt{TrieMap}}
BT-trees are typically faster than \verb|TrieMap| in the \textit{Update} workloads, except when $k=100$, and slower in the \textit{Constant} workloads.
We believe that this is due to the relative cost of insert / remove operations compared to search operations:
Insert and remove operations in BT-trees are performed in place on leaf nodes, and have similar costs to searching, while the \verb|TrieMap| insert and remove operations use copy-on-write, which increases their cost relative to search operations.
\paragraph{BT-trees compared to \texttt{concurrent\_hash\_map}}
BT-trees and \verb|concurrent_hash_map| have similar performance, except when $k=100$.
BT-trees scale poorly to multiple threads in the \textit{Mixed} and \textit{Update} workloads when $k=100$, but still achieve higher throughput than \verb~ConcurrentSkipListMap~ and \verb|Chromatic6|.
BT-trees' poor scalability when $k=100$ is a side effect of using lock-elision; a side effect known as the Lemming effect~\cite{Dice:2009:techreport}.
Threads acquire the underlying lock when transactional executions of the critical section fails.
Acquiring the lock makes concurrent transactions more likely to fail:
Once a few transactions fail, many transactions will follow suit.
Meanwhile, \verb~concurrent_hash_map~ scales poorly in the \textit{Constant} workloads.
When $k=100$, \verb~concurrent_hash_map~ is the slowest
map in the \textit{Constant} workload. Figure~\ref{fig:l1_load} illustrates the cache performance of BT-trees and \verb~concurrent_hash_map~ in the \textit{Constant} workloads. When going from 1 thread to 8 threads in \textit{Constant} $k=100$, \verb|concurrent_hash_map| executes more instructions per operation, and
causes up to 2.3 L1 cache load misses per operation. By
comparison no other data structure we measured caused more than 0.01
L1 cache load misses per operation in the \textit{Constant} workload
with $k=100$. The TBB \verb~concurrent_hash_map~ scales poorly in
this case because it uses a read-write lock per hash bucket. Search
operations acquire and release read locks by executing a
\verb|fetchAndAdd| atomic instruction. The \verb|fetchAndAdd|
instructions, as well as any write instructions, invalidates the cache
lines of the other cores. By comparison the other maps' search
operations do not write to the data structures memory. The TBB
\verb~concurrent_hash_map~ is not significantly contended for larger
values of $k$, because then the hash map has more buckets, reducing
the risk of multiple threads searching adjacent buckets.
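A minimal model of such a per-bucket read lock, with hypothetical names, shows why even read-only searches write to shared memory: each \verb|lockRead| executes an atomic read-modify-write on the cache line holding the lock.

```cpp
#include <atomic>
#include <cassert>

// A minimal reader count in the style of a per-bucket read-write lock:
// even read-only lookups execute fetch_add / fetch_sub, i.e. writes to
// the shared cache line holding the lock, invalidating it in the caches
// of the other cores.
struct BucketLock {
    std::atomic<int> readers{0};
    void lockRead()   { readers.fetch_add(1, std::memory_order_acquire); }
    void unlockRead() { readers.fetch_sub(1, std::memory_order_release); }
};
```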
\section{Conclusion}
Traditional ordered maps are significantly slower than unordered maps, except in corner cases where the unordered maps have pathologically poor performance.
As an alternative, we present BT-trees, an ordered map, which has similar performance to unordered maps even in the best case scenario for unordered maps.
Specifically, BT-trees have similar performance to Intel TBB \verb|concurrent_hash_map| and Scala \verb|TrieMap|, but lower performance than Java \verb|ConcurrentHashMap|.
\bibliographystyle{ieeetr}
\section{Introduction}
Generation of versatile quantum networks is one of the key features towards efficient and scalable quantum information processing. Recently, their continuous variable implementation has raised a lot of interest \cite{Braunstein:2005wr}, in particular in optics, where practical preparation and measurement protocols do exist, both at the theoretical and experimental level. The most promising achievements have been demonstrated using independent squeezed resources and a linear optical network \cite{Su:2007ts,Yukawa:2008iu}. More recently, proposals have emerged where different degrees of freedom of a single beam are used as the nodes of the network, such as spatial modes \cite{Armstrong:2012tt}, frequency modes \cite{Pysher:2011hn,Chen:2014jx}, or even temporal modes \cite{Yokoyama:2013jp}. In all these realizations, a given experimental setup corresponds to one quantum optical network. However, the specific structure of a quantum network depends on the mode basis on which it is interrogated; thus, changing the detection system allows for on-demand network architecture. This has been applied in particular to ultra-fast optics \cite{Roslund:2013cb}, where pulse-shaped homodyne detection is used to reveal any quantum network. In order to combine the flexibility of this mode-dependent property with the simultaneous detection of all the modes, multi-pixel homodyne detection was introduced \cite{Armstrong:2012tt}, and it was shown that, combined with phase control and signal post-processing, it could be turned into a versatile source for quantum information processing~\cite{Ferrini:2013cr}.
Here we propose a scheme based on four-wave mixing (FWM) in warm rubidium vapors to generate flexible quantum networks efficiently. A single FWM process can generate strongly intensity-correlated twin beams \cite{McCormick:2007,Liu:2011,Qin:2012}, which has proved to be a promising candidate for quantum information processing and has many applications such as quantum entangled imaging \cite{Boyer:2008}, realization of stopped light \cite{Camacho:2009} and generation of high-purity narrow-bandwidth single photons \cite{MacRae:2012}. Recently, it has been reported that by cascading two FWM processes, tunable delay of EPR entangled states \cite{Marino:2009}, low-noise amplification of an entangled state \cite{Pooser:2009}, realization of a phase-sensitive nonlinear interferometer \cite{Jing:2011,Kong:2013}, quantum mutual information \cite{Clark:2014} and three quantum correlated beams with stronger quantum correlations \cite{Qin:2014} can be realized experimentally. Inspired by these previous works, we propose here to cascade several FWM processes, thereby turning this system into a controllable quantum network. We elaborate the theory of the optical quantum networks generated by cascading two and three FWM processes, calculating the covariance matrix and the eigenmodes of the processes from the Bloch-Messiah decomposition \cite{Braunstein:2005fn}. We then study how cluster states can be measured using phase-controlled homodyne detection and digital post-processing.
\section{Single FWM Process}
\begin{figure}[h]
\includegraphics[width=8cm]{principle.pdf}
\caption{(a) Energy level diagram for the FWM process. For experimental implementation the pump beam is tuned about 0.8 GHz to the blue of the D1 line of rubidium ($5S_{1/2}, F=2\rightarrow5P_{1/2}$, 795 nm) and the signal beam is red-detuned by about 3 GHz from the pump beam. The two-photon detuning is about 4 MHz. (b) A single FWM process. $\hat{a}_{s0}$ is the coherent input and $\hat{a}_{v0}$ is the vacuum input. $\hat{a}_{s1}$ is the amplified signal beam and $\hat{a}_{i1}$ is the generated idler beam.} \label{fig principle}
\end{figure}
A single FWM process in Rb vapor is shown in Fig. \ref{fig principle}, where an intense pump beam and a much weaker signal beam are crossed in the center of the Rb vapor cell at a slight angle. During the process, the signal beam is amplified and a beam called the idler beam is generated simultaneously. It propagates at the same pump-signal angle on the other side of the pump beam due to the phase-matching condition, with a frequency slightly shifted compared to that of the signal beam. The input-output relation of the single FWM process is given by:
\begin{eqnarray}
\begin{aligned}
\hat{a}_{s1} & = & G\hat{a}_{s0}+g\hat{a}_{v0}^{\dagger} \\
\hat{a}_{i1} & = & g\hat{a}_{s0}^{\dagger}+G\hat{a}_{v0}
\end{aligned}
\end{eqnarray}
where $G$ is the amplitude gain of the FWM process and $G^{2}-g^{2}=1$, $\hat{a}_{s0}$ is the coherent input and $\hat{a}_{v0}$ is the vacuum input. $\hat{a}_{s1}$ is the amplified signal beam and $\hat{a}_{i1}$ is the generated idler beam; see \cite{Boyd:1992} for details.
Defining the amplitude and phase quadrature operators $\hat{X}=\hat{a}+\hat{a}^{\dagger}$ and $\hat{P}=i(\hat{a}^{\dagger}-\hat{a})$, the input-output relation can be re-written as:
\begin{equation}\label{eq:FWM1}
\left(
\begin{array}{c}
\hat{X}_{\text{s1}} \\
\hat{X}_{\text{i1}}
\end{array}
\right)=\left(
\begin{array}{cc}
G & g \\
g & G
\end{array}
\right)\left(
\begin{array}{c}
\hat{X}_{\text{s0}} \\
\hat{X}_{\text{v0}}
\end{array}
\right)\end{equation}
\begin{equation}\label{eq:FWM2}
\left(
\begin{array}{c}
\hat{P}_{\text{s1}} \\
\hat{P}_{\text{i1}}
\end{array}
\right)=\left(
\begin{array}{cc}
G & -g \\
-g & G
\end{array}
\right)\left(
\begin{array}{c}
\hat{P}_{\text{s0}} \\
\hat{P}_{\text{v0}}
\end{array}
\right)\end{equation}
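As a numerical sanity check (ours, not part of the original derivation; it assumes NumPy), one can verify that the quadrature map of Eqs.~(\ref{eq:FWM1})--(\ref{eq:FWM2}) is symplectic, i.e. preserves the canonical commutation relations, as long as $G^2-g^2=1$:

```python
import numpy as np

G = 1.2
g = np.sqrt(G**2 - 1)              # enforces G^2 - g^2 = 1

A = np.array([[G, g], [g, G]])     # X-quadrature block
B = np.array([[G, -g], [-g, G]])   # P-quadrature block

# Full quadrature transformation in the ordering (X_s, X_i, P_s, P_i)
S = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])

# Symplectic form for that ordering; S preserves the commutation
# relations iff S J S^T = J
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])

assert np.allclose(S @ J @ S.T, J)
```

The same check applies verbatim to the cascaded transformations of the next section.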
We immediately see from this set of equations that the system does not couple the $X$ and $P$ quadratures of the fields, which can thus be treated independently. Furthermore, the input beams are vacuum or coherent states, and as the global transformation is symplectic the state remains Gaussian and can thus be fully characterized by its covariance matrix \cite{Braunstein:2005wr}. In our specific case, the covariance matrix is block diagonal:
\begin{equation}
C=\left(
\begin{array}{cc}
C_{XX} & 0 \\
0 & C_{PP}
\end{array}
\right)
\end{equation}
where, by definition, $C_{XX} =
\Big \langle \left(\begin{array}{c} \hat{X}_{\text{s1}} \\ \hat{X}_{\text{i1}} \end{array}\right)
\left(\begin{array}{c} \hat{X}_{\text{s1}} \\ \hat{X}_{\text{i1}} \end{array}\right)^T
\Big \rangle$, and the equivalent definition holds for $C_{PP}$. For coherent and vacuum input, the variances of input modes are normalized to one, and one obtains:
\begin{equation} \label{eq:CX}
{C_{XX}}=\left(\begin{array}{cc}
-1+2G^{2}& 2Gg\\
2Gg&-1+2G^{2} \end{array}
\right)
\end{equation}
and
\begin{equation} \label{eq:CP}
{C_{PP}}=\left(\begin{array}{cc}
-1+2G^{2}& -2Gg\\
-2Gg&-1+2G^{2} \end{array}
\right).
\end{equation}
${C}_{XX}$ and ${C}_{PP}$ are respectively the amplitude and phase quadrature parts of the covariance matrix of a single FWM process. The covariance matrix contains all the correlations between any two parties in the outputs. As the quantum state is pure, it is possible to diagonalize the covariance matrix to find the eigenmodes of the system, which are two uncorrelated squeezed modes, each one being a given linear combination of the output modes of the FWM process. In this pure case $C_{PP}$ is simply the inverse of $C_{XX}$, so they share the same eigenmodes with inverse eigenvalues. We find that the eigenvalues of the $C_{XX}$ matrix are $\eta_{a1}=(G-g)^{2}$, $\eta_{b1}=(G+g)^{2}$ and the corresponding eigenmodes are $\hat X_{a1} = \frac{1}{\sqrt{2}}(\hat X_{s1} - \hat X_{i1})$ and $\hat X_{b1} = \frac{1}{\sqrt{2}}(\hat X_{s1} + \hat X_{i1})$. The first eigenmode is amplitude squeezed, while the second one is phase squeezed, which is the well known signature that, in a single stage FWM process, signal and idler beams are EPR correlated \cite{Marino:2009}.
It is important to stress here that each eigenmode of the covariance matrix is squeezed independently, and that diagonalization of the covariance matrix corresponds to a basis change from the output basis of the FWM to the squeezing basis. Even if this basis change can be difficult to implement experimentally, as the output beams have different optical frequencies, it nevertheless remains a linear operation that reveals the underlying structure of the output state of the FWM process.
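These spectral statements can be checked with a short numerical sketch (ours, assuming NumPy): build the covariance blocks (\ref{eq:CX})--(\ref{eq:CP}) and diagonalize.

```python
import numpy as np

G = 1.2
g = np.sqrt(G**2 - 1)

U_X = np.array([[G, g], [g, G]])
U_P = np.array([[G, -g], [-g, G]])

# Covariance blocks for coherent/vacuum inputs (unit input variances)
C_XX = U_X @ U_X.T
C_PP = U_P @ U_P.T

eigvals, eigvecs = np.linalg.eigh(C_XX)   # ascending eigenvalues

# Squeezed / antisqueezed eigenvalues (G - g)^2 and (G + g)^2
assert np.allclose(eigvals, [(G - g)**2, (G + g)**2])
# Purity: C_PP is the inverse of C_XX
assert np.allclose(C_PP, np.linalg.inv(C_XX))
# The squeezed eigenmode is (X_s1 - X_i1)/sqrt(2)
assert np.allclose(np.abs(eigvecs[:, 0]), 1 / np.sqrt(2))
```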
\section{Cascaded FWM Processes}
The above procedure can be readily applied to the more interesting multimode case, when one considers multiple FWM processes generating more than two output beams. We study here three-mode asymmetrical and four-mode symmetrical structures, whose input-output relations are derived by successively applying the matrix corresponding to the single FWM process of Eqs. (\ref{eq:FWM1}) and (\ref{eq:FWM2}).
\subsection{Asymmetrical structure: Double FWM Case}
\begin{figure}[h]
\includegraphics[width=8cm]{3modeFWM.pdf}
\caption{Double stage structure of FWM Rb system. $\hat{a}_{s0}$ is the coherent input and $\hat{a}_{v0}$ is the vacuum input for the first FWM process. $\hat{a}_{s1}$ is the amplified signal beam and $\hat{a}_{i1}$ is the generated idler beam from the first FWM process. $\hat{a}_{v1}$ is the vacuum input for the second FWM process. $\hat{a}_{s2}$ is the generated signal beam and $\hat{a}_{i2}$ is the amplified idler beam from the second FWM process.} \label{figdouble}
\end{figure}
We first consider the case where two FWM processes are cascaded. Without loss of generality, we take the idler beam from the first FWM process as the seed for the second FWM process, as described in Fig. \ref{figdouble}. The corresponding unitary transformation can be directly derived and written as:
\begin{eqnarray}
\begin{aligned}
\left(
\begin{array}{c}
\hat{X}_{\text{s1}} \\
\hat{X}_{\text{i2}} \\
\hat{X}_{\text{s2}}
\end{array}
\right)& = U_{X_{3mode}}\left(
\begin{array}{c}
\hat{X}_{\text{s0}} \\
\hat{X}_{\text{v0}} \\
\hat{X}_{\text{v1}}
\end{array}
\right)& \\
\left(
\begin{array}{c}
\hat{P}_{\text{s1}} \\
\hat{P}_{\text{i2}} \\
\hat{P}_{\text{s2}}
\end{array}
\right)&=U_{P_{3mode}}\left(
\begin{array}{c}
\hat{P}_{\text{s0}} \\
\hat{P}_{\text{v0}} \\
\hat{P}_{\text{v1}}
\end{array}
\right)&
\end{aligned}
\end{eqnarray}
where
\begin{eqnarray}
\begin{aligned}
U_{X_{3mode}}&=\left(
\begin{array}{ccc}
G_1 & g_1 & 0 \\
g_1 G_2 & G_1 G_2 & g_2 \\
g_1 g_2 & g_2 G_1 & G_2
\end{array}
\right)&\\
U_{P_{3mode}}&=\left(
\begin{array}{ccc}
G_1 & -g_1 & 0 \\
-g_1 G_2 & G_1 G_2 & -g_2 \\
g_1 g_2 & -g_2 G_1 & G_2
\end{array}
\right)&
\end{aligned}
\end{eqnarray}
Using the same procedure as for Eqs. (\ref{eq:CX}) and (\ref{eq:CP}) we can get the covariance matrix of the double stage FWM. It is still block diagonal, and for coherent or vacuum input states each block is given by:
\begin{equation}
C_{X_{3mode}}=U_{X_{3mode}}U_{X_{3mode}}^T
\end{equation}
\begin{equation}
C_{P_{3mode}}=U_{P_{3mode}}U_{P_{3mode}}^T\end{equation}
We can now evaluate the eigenvalues and eigenmodes of these matrices. For the $X$ quadrature, the eigenvalues of $C_{X_{3mode}}$ are:
\begin{equation}
\begin{aligned}
\eta_{a3}&=1&\\
\eta_{b3}&=-1+2\text{G}_{1}^2 \text{G}_{2}^2-2 \sqrt{\text{G}_{1}^2 \text{G}_{2}^2 \left(-1+\text{G}_{1}^2 \text{G}_{2}^2\right)}&\\
\eta_{c3}&=-1+2\text{G}_{1}^2 \text{G}_{2}^2+2\sqrt{\text{G}_{1}^2 \text{G}_{2}^2 \left(-1+\text{G}_{1}^2 \text{G}_{2}^2\right)}&
\end{aligned}
\end{equation}
Remarkably, one sees that one of the eigenvalues is equal to one, meaning that the system is composed of only two squeezed modes and one vacuum mode. This property extends when the system is generalized to the $n$-cell case in the same asymmetrical way: there is always one vacuum mode. As expected, we also note that squeezing increases with gain, that eigenmodes 2 and 3 have the same squeezing but on different quadratures, and that both gains play equivalent roles and can be interchanged. The results for three different values of the gain, in the specific case where both processes share the same gain ($G_1 = G_2$), are shown in Fig. \ref{threemodeFWM}. We also show the shapes of the eigenmodes, i.e. their decomposition on the FWM output mode basis. The vacuum eigenmode appears to be composed only of modes 1 and 3 (i.e. $\hat a_{s1}$ and $\hat a_{s2}$), and tends to mode 1 when the gain goes to infinity. This can be surprising, but it only reflects the fact that the noise of this mode becomes negligible compared to the two others when the gain increases.
\begin{figure}[h]
\includegraphics[width=8.5cm]{eigenmodes3modefig.pdf}
\caption{Eigenmodes of the asymmetrical FWM cascade, decomposed in the FWM output mode basis, for three different gain values. For each graph, the bars represent the relative weight of modes $\hat a_{s1},\ \hat a_{i2},\ \hat a_{s2}$, respectively. Below are given the noise variances $\eta_{a3}$, $\eta_{b3}$ and $\eta_{c3}$ of the corresponding $\hat X$ quadrature. The state being pure, we see that eigenmode 3 shares the same squeezing as eigenmode 2 but on the phase quadrature.}
\label{threemodeFWM}
\end{figure}
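A quick numerical check of the three-mode spectrum (a sketch of ours, assuming NumPy), confirming the unit eigenvalue and the analytic expressions above:

```python
import numpy as np

G1, G2 = 1.2, 1.2
g1, g2 = np.sqrt(G1**2 - 1), np.sqrt(G2**2 - 1)

U_X3 = np.array([
    [G1,      g1,      0.0],
    [g1 * G2, G1 * G2, g2 ],
    [g1 * g2, g2 * G1, G2 ],
])

C_X3 = U_X3 @ U_X3.T
eigvals = np.sort(np.linalg.eigvalsh(C_X3))

P = (G1 * G2)**2
expected = np.sort([1.0,
                    -1 + 2 * P - 2 * np.sqrt(P * (P - 1)),
                    -1 + 2 * P + 2 * np.sqrt(P * (P - 1))])
assert np.allclose(eigvals, expected)   # one eigenmode stays at vacuum noise
```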
\subsection{Symmetrical structure: Triple FWM Case}
\begin{figure}[h]
\includegraphics[width=8.5cm]{4modeFWM.pdf}
\caption{Symmetrical structure of FWM Rb system. $\hat{a}_{s0}$ is the coherent input and $\hat{a}_{v0}$ is the vacuum input for the first FWM process. $\hat{a}_{s1}$ is the amplified signal beam and $\hat{a}_{i1}$ is the generated idler beam from the first FWM process. $\hat{a}_{v1}$ and $\hat{a}_{v2}$ are the vacuum inputs for the second and third FWM processes. $\hat{a}_{s2}$ is the generated signal beam and $\hat{a}_{i2}$ is the amplified idler beam from the second FWM process. $\hat{a}_{s3}$ is the amplified signal beam and $\hat{a}_{i3}$ is the generated idler beam from the third FWM process.}\label{figsymstage}
\end{figure}
We consider now the case of three cascaded FWM processes, where signal and idler of the first cell are used to seed each of the two other FWM processes, as shown in Fig. \ref{figsymstage}. For simplicity, we assume that all three FWM processes have the same gain value $G$. The evolution equations can be directly derived and lead to:
\begin{eqnarray}
\begin{aligned}
\left(
\begin{array}{c}
\hat{X}_{\text{s3}} \\
\hat{X}_{\text{i2}} \\
\hat{X}_{\text{s2}} \\
\hat{X}_{\text{i3}}
\end{array}
\right)&=U_{X_{4mode}}\left(
\begin{array}{c}
\hat{X}_{\text{s0}} \\
\hat{X}_{\text{v0}} \\
\hat{X}_{\text{v1}} \\
\hat{X}_{\text{v2}}
\end{array}
\right)&\\
\left(
\begin{array}{c}
\hat{P}_{\text{s3}} \\
\hat{P}_{\text{i2}} \\
\hat{P}_{\text{s2}} \\
\hat{P}_{\text{i3}}
\end{array}
\right)&=U_{P_{4mode}}\left(
\begin{array}{c}
\hat{P}_{\text{s0}} \\
\hat{P}_{\text{v0}} \\
\hat{P}_{\text{v1}} \\
\hat{P}_{\text{v2}}
\end{array}
\right)&
\end{aligned}
\end{eqnarray}
where,
\begin{eqnarray}
\begin{aligned}
U_{X_{4mode}}&=\left(
\begin{array}{cccc}
G^2 & g G & 0 & g \\
g G & G^2 & g & 0 \\
g^2 & g G & G & 0 \\
g G & g^2 & 0 & G
\end{array}
\right)&\\
U_{P_{4mode}}&=\left(
\begin{array}{cccc}
G^2 & -g G & 0 & -g \\
-g G & G^2 & -g & 0 \\
g^2 & -g G & G & 0 \\
-g G & g^2 & 0 & G
\end{array}
\right)&
\end{aligned}
\end{eqnarray}
No simple analytic expression of the eigenvalues can be given here, but for instance when $G=1.2$, we find for the $X$ quadrature the following levels of squeezing: $\{-9\,\text{dB},-3.6\,\text{dB},3.6\,\text{dB},9\,\text{dB}\}$ (and opposite signs in the $P$ quadrature). This system is indeed composed of four independent squeezed modes, with two different squeezing values. Fig. \ref{fourSqzFWM} represents, as in the previous case, the mode shapes for three different values of the gain. As the gain goes to infinity, we see that they tend to a perfectly symmetric decomposition, meaning that the output basis of the FWM then becomes strongly entangled.
\begin{figure}[h]
\includegraphics[width=9cm]{eigenmodesof4modefig.pdf}
\caption{Eigenmodes of the symmetrical 4-mode FWM cascade, decomposed in the FWM output modes basis, for three different gain values. For each graph, the bars represent the relative weight of modes $\hat a_{s3},\ \hat a_{i2},\ \hat a_{s2},\ \hat a_{i3}$, respectively. Below are given the noise variances of the corresponding $\hat X$ quadrature.}
\label{fourSqzFWM}
\end{figure}
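The quoted squeezing levels can be reproduced numerically (a sketch of ours, assuming NumPy): diagonalize $C_{X_{4mode}}=U_{X_{4mode}}U_{X_{4mode}}^T$ at $G=1.2$ and convert the eigenvalues to dB.

```python
import numpy as np

G = 1.2
g = np.sqrt(G**2 - 1)

U_X4 = np.array([
    [G**2, g*G,  0.0, g  ],
    [g*G,  G**2, g,   0.0],
    [g**2, g*G,  G,   0.0],
    [g*G,  g**2, 0.0, G  ],
])

eigvals = np.sort(np.linalg.eigvalsh(U_X4 @ U_X4.T))
dB = 10 * np.log10(eigvals)

# Pure state: squeezed/antisqueezed eigenvalues come in reciprocal pairs
assert np.allclose(eigvals[0] * eigvals[3], 1)
assert np.allclose(eigvals[1] * eigvals[2], 1)
# Approximately {-9, -3.6, 3.6, 9} dB at G = 1.2, as quoted in the text
assert np.allclose(dB, [-9.0, -3.6, 3.6, 9.0], atol=0.05)
```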
\section{Cluster States}
We have shown in the previous section that the output states of different FWM processes were entangled states, whose underlying mode structure could be exactly calculated. We study here whether these outputs can be manipulated in order to generate cluster states, which are states of interest for quantum information processing.
A cluster state is a specific multimode entangled state, defined through an adjacency matrix $V$ \cite{vanLoock:2010kt}. Let us call $\hat X_i^C$ and $\hat P_i^C$ the quadrature operators for the mode $\hat a_i^C $. The nullifier operators of the N-mode cluster states are defined by:
\begin{equation} \label{eq:nullifier}
\hat{\delta}_i = \left( \hat{P}_{i}^{C} - \sum_{j} V_{ij} \cdot \hat{X}_{j}^{C} \right),
\end{equation}
Theoretically, a state is considered a cluster state of the adjacency matrix $V$ if and only if the variance of each nullifier approaches zero as the squeezing of the input modes approaches infinity, assuming that the cluster is built from a set of independently squeezed modes. Experimentally, one compares the variance of each nullifier to the corresponding standard quantum limit.
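As a concrete illustration (ours, not in the text), the single-FWM output of Sec.~II already satisfies this definition for the two-mode graph $V_{12}=V_{21}=1$, after a $-\pi/2$ phase-space rotation of the idler mode ($\hat X_2 = -\hat P_{i1}$, $\hat P_2 = \hat X_{i1}$):

```latex
\hat{\delta}_1 = \hat{P}_{s1} - \hat{X}_2 = \hat{P}_{s1} + \hat{P}_{i1},
\qquad
\hat{\delta}_2 = \hat{P}_2 - \hat{X}_{s1} = \hat{X}_{i1} - \hat{X}_{s1},
```

and from Eqs.~(\ref{eq:CX})--(\ref{eq:CP}) both variances equal $2(G-g)^2$ (shot-noise limit $2$ in these units), which vanishes as $G\to\infty$.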
It turns out that the output states of the FWM processes, as we have calculated in the previous sections, do not directly satisfy the cluster state criteria. However, it is still possible to derive cluster states when one can control the quadratures detected on each output mode (i.e. setting the phase of the homodyne detection local oscillator) and digitally post-process the data, as explained in \cite{Ferrini:2013cr}. To apply this theory to the present case, we first model the entangled state that one can produce with FWM, homodyne detection and post-processing, following the scheme of Fig. \ref{FWMnetwork}. To match the input of traditional cluster generation, we call $\hat a^\textrm{sqz}_i$ independent modes squeezed on the $P$ quadrature, with the squeezing values of the modeled FWM process (i.e. as displayed in Figs. \ref{threemodeFWM} and \ref{fourSqzFWM} for instance). Then we introduce the $U_{FWM}$ matrix so that $U_{FWM} \hat{\vec a}^\textrm{sqz}$, where $\hat{\vec a}^\textrm{sqz} = (\hat a^\textrm{sqz}_1, \hat a^\textrm{sqz}_2, \ldots)^T$, corresponds to the annihilation operators of the output modes of a given experimental setup. One can write:
\begin{equation}
U_{FWM}=U_{0}P_{sqz}
\end{equation}
where $P_{sqz}$ is a diagonal matrix which rotates the squeezing quadratures so that they match the results of the previous sections, and $U_0$ is a basis change from the squeezing basis to the output basis of the FWM setup, where homodyne detection is performed. With this convention, $U_0$ can be directly linked to the basis change matrices calculated in previous sections. Indeed, if for a given FWM process we call $D = \mathrm{diag}(\eta_1, \eta_2, \ldots)$ the diagonal matrix composed of the eigenvalues of the process, then by definition the covariance matrix can be decomposed as $C_{Xnmodes} = U_0 D U_0^T$.
Then, the total transformation can be written as:
\begin{equation}
U_{total}=O_{post}P_{homo}U_{FWM}
\end{equation}
where $P_{homo}$ is a diagonal matrix that sets the quadrature measured by each homodyne detection, and $O_{post}$ is an orthogonal matrix describing post-processing by computer on the photocurrents measured by the homodyne detections.
We now compare this transformation to a given cluster state matrix $U_V$. Traditionally, $U_V$ is a matrix that moves from $P$-squeezed modes to cluster-state modes, with $V$ the cluster adjacency matrix \cite{vanLoock2007}. Thus, the system is equivalent to a cluster state if one can find experimental parameters such that:
\begin{equation}
U_{V}=O_{post}P_{homo}U_{0}P_{sqz}
\end{equation}
In practice, it is possible to act on the gains of the different FWM processes, the local oscillators phases $P_{homo}$, and the post-processing operations $O_{post}$ to make the system achieve the transformation $U_V$ of the clusters state. According to \cite{Ferrini:2013cr}, defining $U'_V=U_{V}R^\dagger$ with $R=U_{0}P_{sqz}$, this problem has a solution if and only if $U_{V}^{'T}U_{V}^{'}$ is a diagonal matrix. Equivalently, if and only if one can write:
\begin{equation}\label{eq:criteria}
P_{homo}^2=U_{V}^{'T}U_{V}^{'}.\end{equation}
In that case, one finds that $O_{post}$ is given by:
\begin{equation}
O_{post}=U'_{V}P^{-1}_{homo}.
\end{equation}
Using this formalism, it is thus possible to exploit the entanglement naturally generated by the cascaded FWM processes in order to generate cluster states. We will see in the following how it is possible to optimize the different experimental parameters to achieve some specific clusters.
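The feasibility test above lends itself to a direct numerical implementation. The following sketch (ours, assuming NumPy; the helper name \texttt{cluster\_feasible} and the toy matrices are hypothetical, not from the text) checks the diagonality criterion (\ref{eq:criteria}) and, when it holds, extracts $P_{homo}$ and $O_{post}$; the example is engineered so that the criterion is satisfied by construction:

```python
import numpy as np

def cluster_feasible(U_V, R, tol=1e-9):
    """With U'_V = U_V R^dagger, a solution exists iff U'^T U' is
    diagonal; if so, read off P_homo and O_post (hypothetical helper)."""
    U_p = U_V @ R.conj().T
    M = U_p.T @ U_p                        # plain transpose, as in the text
    if np.max(np.abs(M - np.diag(np.diag(M)))) > tol:
        return None                        # no homodyne phases make it work
    P_homo = np.diag(np.sqrt(np.diag(M).astype(complex)))
    O_post = U_p @ np.linalg.inv(P_homo)
    return P_homo, O_post

# Toy example engineered to satisfy the criterion: U_V = P R with P a phase
# matrix, so that U'_V = P and U'^T U' = P^2 is diagonal by construction.
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # stand-in for U_0 P_sqz
P = np.diag(np.exp(1j * np.array([0.3, 1.1, 1.5])))
U_V = P @ R

P_homo, O_post = cluster_feasible(U_V, R)
assert np.allclose(P_homo, P)                  # recovered homodyne phases
assert np.allclose(O_post, np.eye(3))          # trivial post-processing here
```

In a realistic run the criterion fails for a generic $U_V$, which is precisely why the search over orthogonal matrices $O$ described in the next section is needed.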
\section{Optimizations and Solutions}
For a given cluster state specified by its adjacency matrix $V$, one can directly check whether, using proper phases for homodyne detection ($P_{homo}$) and post-processing with a computer ($O_{post}$), it is possible to realize the cluster state $U_V$. Furthermore, one can demonstrate that if $U_V$ is a unitary matrix that leads to a cluster defined by $V$, then for any orthogonal matrix $O$, $U_V O$ leads to the same cluster state \cite{NewGiulia}. Thus, it is possible to run a search algorithm to find an $O$ matrix that allows us to satisfy our criteria of cluster generation. In practice, and as this is a numerical calculation, we never find the exact equality in equation (\ref{eq:criteria}); thus we run an evolutionary algorithm \cite{Jonathan} leading to the matrix closest to a diagonal one, then keep only the diagonal terms (re-normalized to one) to define the $P_{homo}$ matrix, and finally calculate the values of the nullifiers. This is the optimization procedure that is applied to find the results below.
\begin{figure}[h]
\includegraphics[width=8cm]{FWMnetworkFIG.pdf}
\caption{Quantum networks can be constructed by applying phase controlled homodyne detections and post-processing the signals of the FWM outputs.}\label{FWMnetwork}
\end{figure}
\subsection{Three-mode Cascaded FWM}
We start with the three-mode cascaded FWM process, which we have demonstrated is composed of only two squeezed modes and one vacuum mode. There are only two possible cluster graphs in that case, and as an example we study here only the possibility of generating a linear cluster state. The corresponding $U_V$ matrix can be found in \cite{Yukawa:2008iu}. We choose gain values $G_1=G_2=1.2$ as they give realistic experimental squeezing values. Performing the optimization with an evolutionary algorithm, we find solutions for the three-mode linear cluster state (matrix values given in the appendix). The normalized nullifiers are $\{0.22,0.16,0.94\}$, all below the shot noise limit, meaning that the 3-mode linear cluster state can be generated by the structure of the FWM. But there is no feasible solution when $G_1=G_2=2$, or for higher values of the gain. This can be surprising, but is directly linked to the mode structure at the output of the asymmetrical FWM, where one eigenmode is vacuum and gets closer to the first mode as the gain increases, making it impossible to transfer it into a cluster state by post-processing. The nullifier values are summarized in Table \ref{threemodenullifier}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|} \hline
FWM gain & nullifier 1 &nullifier 2 & nullifier 3\\ \hline
G=1.2 & 0.16 & 0.22 & 0.94 \\ \hline
G=1.5 & 0.06 & 0.11 & 0.93 \\ \hline
G=2 & 0.18 & 0.22 & \cellcolor[gray]{0.8} 1.09\\ \hline
\end{tabular}
\caption{Normalized variances of the 3-mode linear cluster state nullifiers, for different values of the gain.}
\label{threemodenullifier}
\end{table}
\subsection{Four-mode Cascaded FWM}
In the case of four-mode symmetric cascaded FWM, there are several possible graphs of cluster states. We first focus here on the linear one, whose $U_V$ matrix can also be found in \cite{Yukawa:2008iu}. Using our optimization strategy, we calculate the best possible nullifiers for different values of the gain, as shown in Table \ref{fourmodenullifier}. We see a completely different situation from the three-mode case. As the state impinging on the detectors is already an entangled state, it can be turned into a cluster state with phase controlled homodyne detection and post-processing more efficiently. In particular, we see that the values of the nullifiers follow roughly those of the squeezing values.
The same procedure can be applied to other cluster shapes; for instance, we tested square and T-shape clusters, which showed a very different behavior: in these cases, the evolution of the nullifier values is not monotonic with $G$, and there is an optimal gain for each shape. Other shapes could be tested, or other types of clusters such as weighted graphs \cite{Menicucci2011}. Hence, this system is readily applicable to quantum information processing. One should stress, however, that in order to exhibit cluster statistics it is necessary to precisely control the phase of the local oscillator in each homodyne detection, which can be accomplished for instance with digital locking electronics. Otherwise, it is also possible to constrain the optimization routine to a certain range of possible homodyne detection phases, and obtain solutions under these constraints.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
FWM gain &nullifier 1 &nullifier 2 & nullifier 3 & nullifier 4\\ \hline
G=1.2 & 0.13 & 0.44 & 0.13 & 0.44 \\ \hline
G=1.5 & 0.04 & 0.25 & 0.04 & 0.25 \\ \hline
G=2 & 0.02 & 0.13 & 0.02 & 0.13\\ \hline
\end{tabular}
\caption{Normalized variances of the 4-mode linear cluster state nullifiers, for different values of the gain.}
\label{fourmodenullifier}
\end{table}
\section{Summary}
In summary, we theoretically proposed to cascade two and three FWM processes to generate three-mode and four-mode cluster states respectively. The three-mode cluster state generation is sensitive to the gain values of the FWM processes. We considered the specific situation where the two FWM processes share the same gain value and found that when the gain is below a certain value, we can construct the three-mode cluster state, but the intrinsic two-mode structure of the system prevents the generation of good clusters. On the contrary, in the four-mode case, we found that for a wide range of gain values, when the three FWM processes share the same gain value, different graphs of four-mode cluster states can be constructed. Thus, we expect that by cascading more FWM processes, multimode cluster states with different graphs can be constructed, and this scheme for realizing versatile quantum networks promises potential applications in quantum information processing.
\acknowledgments
This work is supported by the European Research Council starting grant Frecquam and the French National Research Agency project Comb. Y.C. recognizes the China Scholarship Council. J. J. acknowledge the support from NSFC (Nos. 11374104 and 10974057), the SRFDP (20130076110011), the Program for Eastern Scholar at Shanghai Institutions of Higher Learning, the Program for New Century Excellent Talents in University (NCET-10-0383), the Shu Guang project (11SG26), the Shanghai Pujiang Program (09PJ1404400). X. Xu thanks the National Natural Science Foundation of China (Grant No. 11134003) and Shanghai Excellent Academic Leaders Program of China (Grant No. 12XD1402400).
A Traveling Salesman wants to visit all vertices of a graph $G=(V,E)$, starting from his home $s\in V$, and -- since it is Friday -- ending his tour at his week-end residence, $t\in V$.
Given the nonnegative valued length function $c:E\longrightarrow \mathbb{R}_+$, he is looking for a shortest $\{s,t\}$-tour, that is, one of smallest possible (total) length.
The Traveling Salesman Problem (TSP) is usually understood as the $s=t$ particular case of the defined problem, where in addition every vertex is visited exactly once. This ``minimum length Hamiltonian circuit'' problem is one of the most prominent problems of combinatorial optimization. Besides being \NP-hard even for very special graphs or lengths \cite{GarJT76}, even the best up-to-date methods of operations research and the most powerful computers programmed by the brightest hackers fail to solve reasonable-size problems exactly.
On the other hand, some implementations provide solutions only a few percent away from the optimum on some large ``real-life'' instances. A condition on the length function that certainly helps both in theory and practice is the triangle inequality. A nonnegative function on the edges that satisfies this inequality is called a {\em metric} function. The special case of the TSP where $G$ is a complete graph and $c$ is a metric is called the {\em metric TSP}. For a thoughtful and entertaining account of the difficulties and successes of the TSP, see Bill Cook's book \cite{Coo12}.
If $c$ is not necessarily a metric function, the TSP is hopeless in general: it is \NP-hard not only to solve but also to approximate, even for quite particular lengths, since the Hamiltonian cycle problem in $3$-regular graphs is \NP-hard \cite{GarJT76}. The practical context also makes it natural to suppose that $c$ is a metric.
\medskip
A {\em $\rho$-approximation algorithm} for a minimization problem is a polynomial-time algorithm that computes a solution
of value at most $\rho$ times the optimum, where $\rho\in\mathbb{R}$, $\rho\ge 1$. The {\em guarantee} or {\em ratio} of the approximation is $\rho$.
\medskip
The first trace of allowing $s$ and $t$ to be different is Hoogeveen's article \cite{Hoo91}, providing a Christofides-type $5/3$-approximation algorithm, again in the metric case. There had been no improvement until An, Kleinberg and Shmoys \cite{AKS12} improved this ratio to $\frac{1+\sqrt{5}}{2}<1.618034$ with a simple algorithm, an ingenious new framework for the analysis, but a technically involved realization.
The algorithm first determines an optimum $x^*$ of the fractional relaxation; writing $x^*$ as a convex combination of spanning trees and applying Christofides' heuristic for each, it outputs the best of the arising tours. For the TSP problem $x^*/2$ dominates any possible parity correction, as Wolsey \cite{Wol80} observed, but this is not true if $s\ne t$. However, \cite{AKS12} manages to perturb $x^*/2$, differently for each spanning tree of the constructed convex combination, with small average increase of the length.
We adopt this algorithm and this global framework for the analysis, and develop new tools that essentially change its realization and shortcut the most involved parts. This results in a simpler analysis guaranteeing a solution within $8/5$ times the optimum.
\medskip
We did not require that the Traveling Salesman visit each vertex exactly once; our problem statement requires only that {\em every vertex is visited at least once.} This version has been introduced by Cornu\'ejols, Fonlupt and Naddef \cite{CFN85} and was called the {\em graphical TSP}. In other words, this version asks for the ``shortest spanning Eulerian subgraph'', and puts forward an associated polyhedron and its integrality properties, characterized in terms of excluded minors.
This version has many advantages: while the metric TSP is defined on the complete graph, the graphical problem can be sparse, since an edge which is not a shortest path between its endpoints can be deleted; however, it is equivalent to the metric TSP (see Tours below); the length function $c$ does not have to satisfy the triangle inequality; this version has an unweighted special case, asking for the minimum size of a spanning Eulerian subgraph.
The term ``graphic'' or ``graph-TSP'' has eventually been taken over by this all-$1$ special case, which we do not investigate here; we avoid these three terms altogether, as they are used in too diversified a way, departing from the habits for other problems. For comparison, let us only note the guaranteed ratios for the cardinality versions of the problems: $3/2$ for the minimum cardinality of a spanning connected subgraph with two given odd-degree vertices, and $7/5$ if all vertices are of even degree \cite{SV12}.
\section{Notation, Terminology and Preliminaries}
The set of non-negative real numbers is denoted by $\mathbb{R_+}$, $\mathbb{Q}$ denotes the set of rational numbers. We fix the notation $G=(V,E)$ for the input graph. For $X\subseteq V$ we write $\delta(X)$ for the set of edges with exactly one endpoint in $X$. If $w:E\longrightarrow \mathbb{R}$ and $A\subseteq E$, then we use the standard notation $w(A):=\sum_{e\in A} w(e)$.
\medskip\noindent
{\bf Tours}: For a graph $G=(V,E)$ and $T\subseteq V$ with $|T|$ even, a {\em $T$-join} in $G$ is a set $F\subseteq E$
such that $T=\{v\in V: \hbox{$|\delta(v)\cap F|$ is odd}\}.$ For $(G,T)$, where $G$ is connected, it is well-known and easy to see that a $T$-join exists if and only if $|T|$ is even \cite{LPL}, \cite{VYGENyellow}.
A {\em $T$-tour} $(T\subseteq V)$ of $G=(V,E)$ is a set $F\subseteq 2E$ such that
\begin{itemize}
\item[(i)]$F$ is a $T$-join of $2G$,
\item[(ii)] $(V, F)$ is a connected multigraph,
\end{itemize}
where $2E$ is the multiset consisting of the edge-set $E$, and the multiplicity of each edge is $2$; we then denote $2G:=(V,2E)$. It is not false to think about $2G$ as $G$ with a parallel copy added to each edge, but we find the multiset terminology better, since it allows for instance to keep the length function and its notation $c:E\longrightarrow \mathbb{R}_+$, or in the polyhedral descriptions to allow variables to take the value $2$ without increasing the number of variables; the length of a multi-subset will be the sum of the lengths of the edges multiplied by their multiplicities, with obvious, unchanged terms or notations: for instance the size of a multiset is the sum of its multiplicities; $\chi_A$ is the multiplicity vector of $A$; $x(A)$ is the scalar product of $x$ with the multiplicity vector of $A$; a {\em subset of a multiset} $A$ is a multiset with multiplicities smaller than or equal to the corresponding multiplicities of $A$, etc.
A {\em tour} is a $T$-tour with $T=\emptyset$.
When $(G,T)$ or $(G,T,c)$ are given, we always assume, without repeating it, that $G$ is a connected graph, $|T|$ is even, and $c:E\longrightarrow \mathbb{R_+}$. The latter will be called the {\em length} function, and $c(A)$ $(A\subseteq E)$ is the length of $A$.
The {\em $T$-tour problem (TTP)} is to minimize the length of a $T$-tour for $(G,T,c)$ as input. The subject of this work is the TTP for an arbitrary length function.
If $F\subseteq E$, we denote by $T_F$ the set of vertices incident to an odd number of edges in $F$; if $F$ is a spanning tree, $F(T)$ denotes the {\em unique $T$-join of $F$.}
The {\em sum} of two (or more) multisets is a multiset whose multiplicities are the sums of the two corresponding multiplicities. If $X,Y\subseteq E$, $X+Y\subseteq 2E$ and $(V,X+Y)$ is a multigraph. Given $(G,T)$, $F\subseteq E$ such that $(V,F)$ is connected, and a $T_F\triangle T$-join $J_F$, the multiset $F + J_F$ is a $T$-tour;
the notation ``$\triangle$'' stands for the {\em symmetric difference} (mod~$2$ sum of sets).
In \cite{SV12} $T$-tours were introduced under the term {\em connected $T$-joins}. (This first name may be confusing, since $T$-joins have only $0$ or $1$ multiplicities.) Even if the main target remains $|T|\le 2$, the arguments concerning this case often lead to problems with larger $T$.
By ``Euler's theorem'' a subgraph of $2G$ is a tour or $\{s,t\}$-tour if and only if
its edges can be ordered to form a closed ``walk'' or a walk from $s$ to $t$,
that visits every vertex of $G$ at least once, and uses every edge as many times as its multiplicity.
\medskip
For the TTP, a $2$-approximation algorithm is trivial by taking a minimum cost spanning tree $F$
and doubling the edges of a $T_F\triangle T$-join of $F$, that is, of $F(T_F\triangle T)$.
For $T=\emptyset$, Christofides \cite{Chr76} proposed determining first a minimum length spanning tree $F$ to assure connectivity, and then to add to it a shortest $T_F$-join. The obvious approximation guarantee $3/2$ of this algorithm has not been improved ever since. A {\em Christofides type algorithm} for general $T$ {\em adds a shortest $T_F\triangle T$-join instead.}
For $T=\{s,t\}$ $(s,t\in V)$ this has been proved to guarantee a ratio of $5/3$ by Hoogeveen \cite{Hoo91} and improved by An, Kleinberg and Shmoys \cite{AKS12}. Hoogeveen's approach and ratio can obviously be extended to $T$-tours for arbitrary $T$, providing the same guarantee with a Christofides type algorithm and proof \cite[Introduction]{SV12}. In Section~\ref{sec:Results} we show an ``even more Christofides type'' proof, relevant for our improved ratio $8/5$ (see the Proposition). Cheriyan, Friggstad and Gao \cite{C12} provided the first ratio better than $5/3$ for arbitrary $T$, by extending the analysis of \cite{AKS12} with extra work, different for $|T|\ge 4$, leading to the ratio $13/8=1.625$.
\medskip
{\em Minimizing the length of a tour or $\{s,t\}$-tour is equivalent to the metric TSP problem or its path version} (with all degrees $2$ except $s$ and $t$ of degree $1$, that is, a shortest Hamiltonian circuit or path). Indeed, any length function of a connected graph can be replaced by a function on the complete graph with lengths equal to the lengths of shortest paths (metric completion): then a tour or an $\{s,t\}$-tour can be ``shortcut'' to a sequence of edges with all inner degrees equal to $2$. Conversely, if in the metric completion we have a shortest Hamiltonian circuit or path we can replace the edges by paths and get a tour or $\{s,t\}$-tour.
Given $(G,T,c)$, the minimum length of a $T$-join in $G$ is denoted by $\tau(G,T,c)$. A {\em $T$-cut} is a cut $\delta(X)$ such that $|X\cap T|$ is odd. It is easy to see that a $T$-join and a $T$-cut meet in an odd number of edges. If in addition $c$ is integer, the maximum number of $T$-cuts so that every edge is contained in at most $c$ of them is denoted by $\nu(G,T,c)$. By a theorem of Edmonds and Johnson \cite{EdmJ73}, \cite{LPL} $\tau(G,T,c)=\nu(G,T,2c)/2$, and a minimum length $T$-join can be determined in polynomial time. These are useful for an intuition, even if we only use the weaker Theorem~\ref{thm:polyhedron} below. For an introduction and more about different aspects of $T$-joins, see
\cite{LPL}, \cite{SCHRIJVERyellow}, \cite{Fra11}, \cite{VYGENyellow}.
\medskip\noindent
{\bf Linear Relaxation}: We adopt the polyhedral background and notations of \cite{SV12}.
Let $G=(V,E)$ be a graph. For a partition $\Wscr$ of $V$ we introduce the notation
$$\delta(\Wscr) \ := \ \bigcup_{W\in\Wscr} \delta (W),$$
that is, $\delta(\Wscr)$ is the set of edges that have their two endpoints in different classes of $\Wscr$.
Let $G$ be a connected graph, and $T\subseteq V$ with $|T|$ even.
\begin{eqnarray*}
P(G,T) &\! := \!& \{x\in\mathbb{R}^{E}\!: \
x(\delta(W)) \ge 2 \mbox{ for all } \emptyset\not=W\subset V \mbox{ with } |W\cap T| \hbox{ even,}\\
&& \hspace{2.3cm} x(\delta(\Wscr)) \ge |\Wscr| - 1 \mbox{ for all partitions $\Wscr$ of $V$,}\\
&&\hspace{2.3cm} 0\le x(e)\le 2 \hbox{ for all $e\in E$} \Bigr\} .
\end{eqnarray*}
Denote by $\opt(G,T,c)$ the length of the shortest $T$-tour for input $(G,T,c)$. Let
$x^*\in P(G,T)$ minimize $c^\top x$ on $P(G,T)$.
\medskip\noindent
{\bf Fact}: Given $(G,T,c)$, $\opt(G,T,c)\ge \min_{x\in P(G,T)}c^\top x=c^\top x^*$ .
\medskip
Indeed, if $F$ is a $T$-tour, $\chi_F$ satisfies the defining inequalities of $P(G,T)$.
The following theorem is essentially the same as Schrijver \cite[page 863, Corollary 50.8]{SCHRIJVERyellow}.
\begin{theorem}\label{thm:polytop}
Let $x\in\mathbb{Q}^E$ satisfy the inequalities
\begin{eqnarray*}
& \hspace{0cm} x(\delta(\Wscr)) \ge |\Wscr| - 1 \mbox{ for all partitions $\Wscr$ of $V$,}\\
&\hspace{-1cm} 0\le x(e)\le 2 \hspace{0.2cm}\hbox{for all $e\in E$}.
\end{eqnarray*}
Then there exist a set $\Fscr_{>0}$ of spanning trees with $|\Fscr_{>0}|\le |E|$, and coefficients $\lambda_F\in\mathbb{R}$, $\lambda_F>0$ $(F\in\Fscr_{>0})$, so that
\[\sum_{F\in\Fscr_{>0}}\lambda_F=1, \qquad x\ge \sum_{F\in\Fscr_{>0}}\lambda_F\chi_F,\]
and for given $x$ as input, $\Fscr_{>0}$, $\lambda_F$ $(F\in\Fscr_{>0})$ can be computed in polynomial time.\end{theorem}
\prove Let $x$ satisfy the given inequalities. If $(2\ge) x(e)>1$ $(e\in E)$, introduce an edge $e'$ parallel to $e$, and define $x'(e'):=x(e)-1$, $x'(e):=1$, and $x'(e):=x(e)$ if $x(e)\le 1$. Note that the constraints are satisfied for $x'$, and $x'\le \underline 1$. Apply Fulkerson's theorem \cite{F70} (see \cite[page 863, Corollary 50.8]{SCHRIJVERyellow}) on the blocking polyhedron of spanning trees: $x'$ then dominates a convex combination of (incidence vectors of) spanning trees; replacing $e'$ by $e$ in each spanning tree containing $e'$, and then applying Carath\'eodory's theorem, we get the assertion. The statement on polynomial solvability follows from Edmonds' matroid partition theorem \cite{Edm70}, or the ellipsoid method \cite{GLS}.
\endproof
Note that the inequalities in Theorem~\ref{thm:polytop} form a subset of those that define $P(G,T)$. In particular, any optimal solution $x^*\in P(G,T)$ for input $(G,T,c)$ satisfies the conditions of the theorem. Fix $\Fscr_{>0}$, $\lambda_F$ provided by the theorem for $x^*$, that is, \[\sum_{F\in\Fscr_{>0}}\lambda_F\chi_F\le x^*.\]
{\em We fix the input $(G,T,c)$ and keep the definitions $x^*$, $\Fscr_{>0}$, $\lambda_F$ until the end of the paper.}
\medskip
It would be possible to keep the context of \cite{AKS12} for $s\ne t$, where metrics in complete graphs are kept and only Hamiltonian paths are considered (so the condition $x(\delta(v))=2$ if $v\ne s$, $v\ne t$ is added), or the corresponding generalization in \cite{C12} for $T\ne\emptyset$. However, we find it more comfortable to have in mind only $(G,T,c)$, where $c$ is the given function, which is not necessarily a metric, and $G$ is the original graph, which is not necessarily the complete graph, and without having a restriction on $T$. The paper can be read with either definition in mind though, the only difference being that we use $\sum_{F\in\Fscr_{>0}}\lambda_F\chi_F\le x^*$ without requiring the (here irrelevant) equality to hold.
The reader can also substitute $T=\{s,t\}$ $(s,t\in V$ with $s=t$ allowed, meaning $T=\emptyset)$ for easier reading; none of the relevant features of the proofs will disappear.
\bigskip
Last, we state a well-known analogous theorem of Edmonds and Johnson for the blocking polyhedron of $T'$-joins in the form we will use it. (The notation $T$ is now fixed for our input $(G,T,c)$, and the theorem will be applied for several different $T'$ in the same graph.)
\begin{theorem}\label{thm:polyhedron} {\rm \cite{EdmJ73}, (cf. \cite{LPL}, \cite{SCHRIJVERyellow})}
Given $(G,T',c)$, ($T'\subseteq V$, $|T'|$ even, $c:E\longrightarrow\mathbb{R_+})$, let
\[Q_+(G,T') :=\{x\in\mathbb{R}^{E}\!: x(C) \ge 1 \mbox{ for each $T'$-cut $C$,}\,\, x(e)\ge 0 \hspace{0.2cm}\hbox{for all $e\in E$}\}.\]
A shortest $T'$-join can be found in polynomial time, and if $x\in Q_+(G,T')$, $\tau(G,T',c)\le c^\top x$.
\end{theorem}
\medskip\noindent{\bf The guarantee of Christofides' algorithm for $T$-tours}
\smallskip
We finish the introduction to the $T$-tour problem with a proof of the $5/3$-approximation ratio for Christofides' algorithm. Watch the partition of the edges of a spanning tree into a $T$-join -- if $T=\{s,t\}$, an $\{s,t\}$-path -- and the rest of the tree in this proof! For $\{s,t\}$-paths this ratio was first proved by Hoogeveen \cite{Hoo91} slightly differently (see for $T$-tours the Introduction of \cite{SV12}), and in \cite{fivethird} in a similar way, as pointed out to me by David Shmoys.
\medskip\noindent
{\bf Proposition}: {\em Let $F$ be an arbitrary $c$-minimum spanning tree. Then
$\tau(G,T_F\triangle T,c)\le \frac{2}{3}\opt(G,T,c).$}
\smallskip
\prove $\{F(T), F\setminus F(T)\}$ is a partition of $F$ into a $T$-join and a $T\triangle T_F$-join (see Figure~\ref{fig:tree}). The shortest $T$-tour $K$ contains a $T_F$-join $F'$ by connectivity, so $\{F', K\setminus F'\}$ is a partition of $K$ into a $T_F$-join and a $T_F\triangle T$-join.
If either $c(F\setminus F(T))\le \frac{2}{3}c(F)$ or $c(K\setminus F')\le \frac{2}{3}c(K),$ then we are done, since both
are $T\triangle T_F$-joins. If neither hold, then we use the $T\triangle T_F$-join $F(T)\triangle F'$. Since $c(F(T))\le \frac{1}{3}c(F)\le \frac{1}{3}\opt(G,T,c)$ and $c(F')\le \frac{1}{3}c(K)=\frac{1}{3}\opt(G,T,c)$, we have $c(F(T)\triangle F')\le c(F(T)) + c(F')\le \frac{2}{3}\opt(G,T,c).$ \endproof
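The case analysis of this proof reduces to an elementary inequality: writing $x$ for $c(F(T))/c(F)$ and $y$ for $c(F')/c(K)$, the three candidate $T\triangle T_F$-joins have length at most $(1-x)$, $(1-y)$ and $(x+y)$ times $\opt(G,T,c)$, and $\min\{1-x,1-y,x+y\}\le 2/3$ for all $x,y\in[0,1]$. The following grid check of this last inequality is a verification sketch for the reader, not part of the paper:

```python
# Grid check: min{1-x, 1-y, x+y} <= 2/3 on [0,1]^2, with equality at x = y = 1/3.
# Here x and y stand for c(F(T))/c(F) and c(F')/c(K) in the proof above.
def worst_join_ratio(x, y):
    return min(1.0 - x, 1.0 - y, x + y)

n = 400
grid = [i / n for i in range(n + 1)]
max_ratio = max(worst_join_ratio(x, y) for x in grid for y in grid)
# max_ratio equals 2/3 up to the grid resolution
```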
In the next section we exploit this simple argument in a more advanced context (see Proposition and its Corollary) that anticipates the proof of the main result.
\section{Results}\label{sec:Results}
\medskip
In this section we introduce the ``language'' of the paper, random sampling, which has proved to be helpful for numerous problems. The ancestor of the method for the TSP can be viewed to be Wolsey's proof \cite{Wol80} of $\opt(G,\emptyset,c)\le 3/2\, c^\top x^*$, improved recently in the cardinality case by \cite{GhaSS11}, \cite{MomS11}, \cite{Muc12}, and for $T$-tours by \cite{AKS12}, \cite{C12}. Our use of probabilities here is only notational though, but an elegant notation does really help. In the second half of this section we state and prove the key lemmas.
The random sampling framework has been used by An, Kleinberg and Shmoys for TSP paths in a simple and original way with surprising success \cite{AKS12}. Readers familiar with \cite{AKS12} may find helpful the explanations in Section~\ref{sec:connections} about the relation of the new results to this framework. In this section watch the new ideas contributed by the present work: the separation of $x^*$ into $p^*$ and $q^*$, and a further decomposition of $p^*$.
\noindent
\medskip
The coefficient $\lambda_F$ of each spanning tree $F\in\Fscr_{>0}$ in the convex combination dominated by $x^*$ (see Theorem~\ref{thm:polytop}) will be {\em interpreted as a probability distribution of a random variable $\Fscr$, $$\Pr (\Fscr =F):= \lambda_F$$ whose values are spanning trees of $G$, and}
\[\Fscr_{>0}=\{ F\subseteq E : \hbox{$F$ spanning tree of $G$, }\Pr(\Fscr=F)>0\}.\]
The notations for spanning trees will also be used for random variables whose values are spanning trees. For instance $\Fscr (T)$ denotes the random variable whose value is $F(T)$ precisely when $\Fscr=F$. Another example is $\chi_\Fscr$, a random variable whose value is $\chi_F$ when $\Fscr=F$. Similarly, $T_\Fscr$ is a random variable whose value for $\Fscr=F$ is $T_F:=\{v\in V: \hbox{$|\delta(v)\cap F|$ is odd}\}.$
\smallskip
We use now the probability notation for defining two vectors that will be extensively used:
\smallskip
$p^*(e):=\Pr(e \in \Fscr (T))$;\quad $q^*(e):=\Pr(e \in \Fscr\setminus \Fscr (T))$ $(e\in E).$
(These are short notations for the sum of $\lambda_F$ for spanning trees $F$ with $e \in F(T)$ or $e \in F\setminus F(T)$, respectively.)
\medskip\noindent
{\bf Fact}: $E[\chi_{\Fscr(T)}]=p^*$, $E[\chi_{\Fscr\setminus \Fscr(T)}]=q^*$, $E[\chi_\Fscr]=p^*+q^*\le x^*$. \,\,\,{\bf Proof}: Apply Theorem~\ref{thm:polytop}. \endproof
\smallskip
Let us familiarize ourselves with the introduced vectors $p^*$, $q^*$ by sharpening the Proposition at the end of the preceding section using the minimum objective value of the fractional relaxation. This is irrelevant for the proofs in the sequel, but shows the intuition behind using $p^*$ and $q^*$.
\medskip\noindent
{\bf Proposition}: {\em For each $T'\subseteq V$, $|T'|$ even,
$\frac{1}{2}(x^*+p^*)\in Q_+(G,T')$.}
\medskip
Let $\Qscr := \{\hbox {$Q$ is a cut: } x^*(Q) < 2 \}$. The assertion is that $p^*$ repairs the deficit of each $Q\in\Qscr$.
\medskip
\prove If $C$ is a cut, $C\notin\Qscr$, then $x^*(C)\ge 2$, so $\frac{1}{2}(x^*(C) + p^*(C))\ge \frac{1}{2}x^*(C)\ge 1$. If $C\in\Qscr$:
$x^*(C)+p^*(C)\ge E[\chi_{\Fscr}](C)+ E[\chi_{\Fscr(T)}](C)\ge 2$, since the event $|C\cap\Fscr|=1$ implies that the unique edge of $C\cap\Fscr$ is also contained in $\Fscr (T)$. (The $T$-cut $C$ intersects every $T$-join.)
\endproof
\begin{figure}[t]
\vskip - 1.3cm
\includegraphics{figur1eps}
\caption{\footnotesize One of many: \rouge{\bf $T_F\triangle T$-joins}, in $F$ (left), minimum in $G$ (right), \rouge{$J_F$}; $T:=\{s,t\}$. } \label{fig:tree}
\end{figure}
\medskip\noindent
{\bf Corollary}: {\em
$\displaystyle E[\tau(G,T_\Fscr\triangle T,c)]\le \min\{c^\top x^*- \frac{c^\top q^*}{2}, c^\top q^*\}\le \frac{2}{3}c^\top x^*$.}
\prove Apply the Proposition and Theorem~\ref{thm:polyhedron} to get $\tau(G,T',c)\le c^\top \frac{1}{2}(x^*+p^*).$ Applying this to $T'=T_F\triangle T$ $(F\in\Fscr_{>0})$, and then substituting $\frac{1}{2}(x^*+p^*)\le x^* - \frac{1}{2}q^*$ (by the Fact), and finally taking the mean value: $E[\tau(G,T_\Fscr\triangle T,c)]\le c^\top x^*- \frac{c^\top q^*}{2}$.
On the other hand, since $\Fscr \setminus \Fscr (T)$ is a $T_\Fscr\triangle T$-join,
$E[\tau(G,T_\Fscr\triangle T,c)]\le E[c(\Fscr \setminus \Fscr (T))]= c^\top q^*$.
The minimum of our two linear bounds takes its maximum value at $c^\top q^*= \frac{2}{3} c^\top x^*$.
\endproof
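The balancing at the end of this proof is elementary: normalizing $c^\top x^*=1$ and writing $q$ for $c^\top q^*$, the two upper bounds are $1-q/2$ and $q$, and their pointwise minimum is largest at the crossing point $q=2/3$. A small numerical confirmation (a sketch for the reader, not part of the paper):

```python
# The Corollary's two bounds, normalized by c^T x* = 1; q plays the role of c^T q*.
def min_of_bounds(q):
    return min(1.0 - q / 2.0, q)

qs = [i / 10000.0 for i in range(10001)]       # grid on [0, 1]
best_q = max(qs, key=min_of_bounds)
best_value = min_of_bounds(best_q)
# best_q and best_value both equal 2/3 up to the grid resolution
```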
\medskip
While $c^\top q^*$ (the second upper bound of the corollary) is the {\em mean value} of the length of the parity correcting $\Fscr \setminus \Fscr(T)$, $\frac{1}{2}(x^*+p^*)$ (of the first bound) is in $Q_+(G,T')$ {\em for all} $T'\subseteq V$, $|T'|$ even. This ``for all'' is a superfluous luxury! Indeed, it is not very economic to add $p^*$ for all $F\in\Fscr_{>0}$, when a smaller vector, adapted to $F$ (see below) is enough!
The reader may find it helpful to have a look at Figure~\ref{fig:tree} for these remarks, for the following algorithm and for the subsequent arguments and theorem.
\medskip
\noindent
{\bf Best of Many Christofides Algorithm \cite{AKS12}:} Input $(G,T,c)$.
Determine $x^*$ \cite{GLS} using \cite{BC87}, see \cite{SV12}. (Recall: $x^*$ is an optimal solution of $\min_{x\in P(G,T)}c^\top x$.)
Determine $\Fscr_{>0}$. (See Theorem~\ref{thm:polytop} and its proof.)
Determine the best {\em parity correction} for each $F\in \Fscr_{>0}$, i.e. a shortest $T_F \triangle T$-join $J_F$ \cite{EdmJ73}, \cite{VYGENyellow}.
Output that $F+J_F$ $(F\in \Fscr_{>0})$ for which $c(F + J_F)$ is minimum.
\bigskip
When $T=\emptyset$ $(s=t)$ Wolsey \cite{Wol80} observed that $x^*/2\in Q_+(G,T)$ and then by Theorem~\ref{thm:polyhedron} parity correction costs at most $c^\top x^*/2$, so Christofides's tour is at most $3/2$ times $c^\top x^*$; in \cite{AKS12}, \cite{C12} and here this analysis is refined for paths and in general for $T$-tours.
\medskip
Define $\displaystyle R:= \min_{F\in \Fscr_{>0}} \frac {c(F)+ \tau(G,T_F\triangle T,c)}{c^\top x^*}\le \frac{E[c(\Fscr)+\tau(G,T_\Fscr\triangle T,c)]}{c^\top x^*}\le 1+E[\frac{\tau(G,T_\Fscr\triangle T,c)}{c^\top x^*}].$
Ratios of tour lengths versus $c^\top x^*$ may be better than $R$, since Christofides' way of choosing a spanning tree and adding parity correction is not the only way for constructing tours. For instance M\"omke and Svensson \cite{MomS11} get better results for some problems by starting from larger graphs than trees and deleting some edges instead of adding them for parity correction. However, here we are starting with trees and correct their parity by adding edges for deducing the ratio $R\le 8/5$ through the following theorem, the main result of the paper:
\begin{theorem}\label{thm:main}
$\displaystyle E[\tau(G,T_\Fscr\triangle T,c)]\le \frac{3}{5}c^\top x^*$
\end{theorem}
Recall $\Qscr := \{\hbox {$Q$ is a cut: } x^*(Q) < 2 \}.$ {\em Every $Q\in \Qscr$ is a $T$-cut}, since non-$T$-cuts $C$ are required to have $x(C)\ge 2$ in the definition of $P(G,T)$. In \cite{AKS12} it is proved that the vertex-sets defining $\Qscr$ form a chain if $|T|=2$; in \cite{C12} they are proved to form a laminar family for general $T$. We do not use these properties, but we need the following simple but crucial observation from \cite{AKS12}:
\begin{lemma}\label{lem:probound}
If $C$ is a cut, then $\Pr (|C\cap \Fscr | \ge 2)\le x^*(C) - 1,$ $\Pr (|C\cap \Fscr | =1)\ge 2 - x^*(C)$. Moreover if $C\in\Qscr$, then the event $|C\cap \Fscr | =1$ implies that $C$ is not a $T_\Fscr \triangle T$-cut.
\end{lemma}
\prove If $C$ is a cut of $G$, $x^*(C) \ge E[|C \cap \Fscr|] \ge \Pr (|C\cap \Fscr | = 1) + 2 \Pr (|C\cap \Fscr | \ge 2),$ where
$\Pr (|C\cap \Fscr | = 1) + \Pr (|C\cap \Fscr | \ge 2)=1,$ so the inequalities follow for an arbitrary cut. The last statement also follows, since $C\in\Qscr$ implies that $C$ is a $T$-cut, and on the event $|C\cap \Fscr | =1$ it is also a $T_\Fscr$-cut --by degree counting--, so it is not a $T_\Fscr \triangle T$-cut, as claimed.
\endproof
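The counting in this proof is a Markov-type bound: for any integer-valued random variable $N\ge 1$, from $E[N]\ge \Pr(N=1)+2\Pr(N\ge 2)$ and $\Pr(N=1)+\Pr(N\ge 2)=1$ one gets $\Pr(N\ge 2)\le E[N]-1$ and $\Pr(N=1)\ge 2-E[N]$. A tiny illustration with a made-up distribution for $N=|C\cap\Fscr|$ (the numbers are hypothetical, not from the paper):

```python
# Hypothetical distribution of N = |C ∩ F|; N >= 1 since F is a spanning tree,
# so every cut contains at least one of its edges.
dist = {1: 0.55, 2: 0.30, 3: 0.15}

mean = sum(k * p for k, p in dist.items())      # E[N]; note x*(C) >= E[N]
pr_eq_1 = dist[1]                               # Pr(N = 1)
pr_ge_2 = sum(p for k, p in dist.items() if k >= 2)

# The lemma's two inequalities, with x*(C) replaced by its lower bound E[N]:
assert pr_ge_2 <= mean - 1 + 1e-12
assert pr_eq_1 >= 2 - mean - 1e-12
```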
It is for cuts $C\in\Qscr$ that this lemma provides relevant information. An, Kleinberg and Shmoys \cite{AKS12} need and prove more about $\Qscr$, their main technical tool \cite[Lemma~3]{AKS12} is actually a linear programming fact about this family which is the more difficult half of their proof. Cheriyan, Friggstad, Gao \cite{C12} generalize these properties. The following two lemmas provide a natural simple alternative to this approach, inherent in the problem:
\begin{lemma}\label{lem:disjoint}
If $C_1\ne C_2$ are cuts of $G$, $e\in E$, then the events $\{e\}=C_1\cap \Fscr$ and $\{e\}=C_2\cap \Fscr$ are disjoint, and if they are $T$-cuts, these events are included in
the event $e\in \Fscr (T)$.
\end{lemma}
The statement is true for arbitrary cuts $C_1, C_2$, but it will be applied only for $C_1, C_2\in\Qscr.$
\medskip
\prove Indeed, $\{e\}=C_1\cap F$ for some $F\in\Fscr_{>0}$ means that $e$ is the unique edge of $F$ in $C_1$, so $C_1$ is the set of edges of $G$ joining the two components of $F\setminus \{e\}$. If $C_1\ne C_2$, then the events that $\Fscr\setminus \{e\}$ defines $C_1$ and that it defines $C_2$ exclude one another.
Moreover, if say $C_1$ is a $T$-cut, then it has a common edge with every $T$-join, so in the event $\{e\}=C_1\cap \Fscr$ we have $e\in \Fscr(T)$, proving the last statement.
\endproof
For all $Q\in\Qscr$ and $e\in E$ define $x^Q(e):=\Pr (\{e\}=Q\cap \Fscr)$. In linear terms $x^Q\in\mathbb{R}^E$ is equivalently defined as
$$x^Q:=\sum_{F\in \Fscr_{>0}, |Q\cap F|=1} \lambda_F\chi_{Q\cap F}.$$
\begin{lemma}\label{lem:xC}
Outside $Q$, $x^Q$ is $0$. Moreover, $1^\top x^Q=x^Q(Q)\ge 2 - x^*(Q)$, and
$$\sum_{Q\in\Qscr} x^Q \le p^*.$$
\end{lemma}
\prove If $e\notin Q$, then $e\notin Q\cap F$ for all $F\in \Fscr_{>0}$, so $x^Q(e):=\Pr (\{e\}=Q\cap \Fscr)=0.$
Now
$$1^\top x^Q:=\sum_{F\in \Fscr_{>0}, |Q\cap F|=1} \lambda_F1^\top\chi_{Q\cap F}=\sum_{F\in \Fscr_{>0}, |Q\cap F|=1} \lambda_F1=\Pr (|Q\cap \Fscr | =1),$$
so the first inequality follows now from Lemma~\ref{lem:probound}. To see the second inequality note that for each $e\in E,$
$$\sum_{Q\in\Qscr} x^Q(e) = \sum_{Q\in\Qscr}\quad\sum_{F\in \Fscr_{>0}, Q\cap F=\{e\}} \lambda_F = \sum_{Q\in\Qscr}\Pr (Q\cap\Fscr=\{e\}),$$
and by Lemma~\ref{lem:disjoint} this is at most $\Pr(e\in \Fscr (T))=p^*(e).$
\endproof
\section{Proof}\label{sec:proof}
In this section we prove the promised approximation ratio (Theorem~\ref{thm:main}). As in \cite{AKS12}, we want to complete the random variable $\beta x^* + (1-2\beta)\chi_\Fscr$, $1/3< \beta < 1/2$, to one that is in $Q_+(G,T_\Fscr\triangle T)$, by {\em adding a random variable}. The length expectation of what we get then is an upper bound for the price $\tau(G,T_\Fscr\triangle T,c)$ of parity correction, by Theorem~\ref{thm:polyhedron}. The difficulty is to estimate the length expectation of the added random variable in terms of $c^\top x^*$.
Why just the form $\beta x^* + (1-2\beta)\chi_\Fscr\,$? We follow \cite{AKS12} here: for all cuts $C\notin\Qscr$, that is, if $x^*(C)\ge 2$, we have then $\beta x^*(C) + (1-2\beta)\chi_\Fscr (C)\ge 2\beta + 1 - 2\beta =1$. By this choice it is sufficient to add correcting vectors to $T_\Fscr\triangle T$-cuts in $\Qscr$, and we do not know of any alternative for this.
Why just in the interval $1/3< \beta< 1/2$? We need $1-2\beta\ge 0$, and $\beta\le 1/3$ would make the approximation ratio at least $5/3$.
\medskip
For any cut $C$ we call the random variable $\max \{0, 1-(\beta x^*(C) + (1 - 2\beta)|C\cap\Fscr|)\}$ the {\em deficit} of $C$ for $\beta$, unless $C\in \Qscr$, $|C\cap\Fscr|=1$, when we define the deficit to be $0$ (see Lemma~\ref{lem:probound}).
\begin{lemma}\label{lem:def} The deficit of a $T_\Fscr\triangle T$-cut $C$ for $\beta$ $(\beta\in(1/3,1/2))$ is constantly $0$, unless $C\in\Qscr$ and $|C\cap \Fscr|\ge 2$, and when it is positive, it is never larger than \[4\beta - 1 - \beta x^*(C).\]
\end{lemma}
Note that this value can be negative, but then the deficit of $C$ is constantly $0$.
\medskip
\prove
If $C\notin\Qscr$, then $x^*(C)\ge 2$, and we saw three paragraphs above that the deficit of $C$ for $\beta$ is $0$. If $C\in\Qscr$ then $C$ is a $T$-cut; if in addition $|C\cap \Fscr|=1$, then $C$ is also a $T_\Fscr$-cut, so it is not a $T_\Fscr\triangle T$-cut (Lemma~\ref{lem:probound}), and the deficit is defined to be $0$.
We proved: if $C$ is a $T_\Fscr\triangle T$-cut and the deficit of $C$ for $\beta$ is not $0$, then $|C\cap \Fscr|\ge 2$. Substituting this inequality to the deficit: $1-(\beta x^*(C) + (1 - 2\beta)|C\cap\Fscr|)\le 4\beta -1 - \beta x^*(C).$
\endproof
Let $ f^Q(\beta):=\max \left\{0,\frac{ 4\beta - 1 - \beta x^*(Q)}{2- x^*(Q)} \right\}$, and $s^F(\beta):=\sum_{Q\in\Qscr, |Q\cap F|\ge 2} f^Q(\beta)x^Q$.
\begin{lemma}\label{lem:sure} $\beta x^* + (1-2\beta)\chi_\Fscr + s^\Fscr (\beta) \in Q_+(G,T_\Fscr\triangle T)$ is the sure event for all $\beta\in(1/3,1/2).$
\end{lemma}
\prove By Lemma~\ref{lem:xC}, $x^Q(Q)\ge 2 - x^*(Q)$, so $f^Q(\beta)x^Q(Q)\ge 4\beta - 1 - \beta x^*(Q)$ by substituting the above definition of $f^Q(\beta)$. On the other hand, by Lemma~\ref{lem:def}, the deficit of a $T_\Fscr\triangle T$-cut, if positive at all, is at most $4\beta - 1 - \beta x^*(Q).$
\endproof
\noindent{\bf Theorem~\ref{thm:main}}\quad
$\displaystyle E [\tau(G,T_\Fscr\triangle T, c) ] \le \frac{3}{5} \, c^\top x^*.$
\begin{figure}[t]
\vskip -1.3cm
\includegraphics{figur2eps}
\caption{\footnotesize The approximation guarantee cannot be improved below 3/2. This example is essentially the same as the more complicated one in \cite[Figure~3]{SV12} providing the same lower bound for a more powerful algorithm in the cardinality case. $|V|=2k, \opt(G,T, \underline 1)=c^\top x^*=2k-1$ (left). Best of Many Christofides output (right): $3k-2$ if $\Fscr_{>0}$ consists of the thick (red) tree and its central symmetric image. There are more potential spanning trees for $\Fscr_{>0}$, but $\tau(G,T_F\triangle T,\underline 1)\ge k-2$ for each, so $c(F+J_F)\ge 3k-3$ for each $F$ and any $T_F\triangle T$-join $J_F$.}
\label{fig:threehalves}
\end{figure}
Figure~\ref{fig:threehalves} shows that this bound cannot be decreased below $1/2 \, c^\top x^*$.
\smallskip
\prove Fix $\beta$, $1/3< \beta< 1/2$.
\smallskip\noindent{\bf Claim~1}:
$E[\tau(G,T_\Fscr\triangle T,c)]\le (1-\beta)c^\top x^* + c^\top E[s^\Fscr (\beta)]$ for all $1/3\le \beta\le 1/2$.
\medskip
By Lemma~\ref{lem:sure},
$\beta x^* + (1-2\beta)\chi_\Fscr + s^\Fscr (\beta)\in Q_+(G,T_\Fscr\triangle T)$ is the sure event, so
$\tau(G,T_\Fscr\triangle T,c)\le c^\top(\beta x^* + (1-2\beta)\chi_\Fscr + s^\Fscr (\beta))$ also always holds. Taking the expectation of both sides and applying $E[c^\top(\beta x^* + (1-2\beta)\chi_\Fscr)]\le(1-\beta)c^\top x^*$ (Fact of Section~\ref{sec:Results}), the Claim is proved.
\smallskip\noindent{\bf Claim~2}: For each $Q\in\Qscr$, $\displaystyle \Pr(|Q\cap \Fscr|\ge 2)f^Q(\beta)\le \frac{\beta\omega (3-\frac{1}{\beta} -\omega)}{1-\omega},$ where {\small $0\le \omega=1 - \sqrt{\frac{1}{\beta} - 2}<1$.}
By Lemma~\ref{lem:probound}, $\Pr(|Q\cap \Fscr|\ge 2)\displaystyle f^Q(\beta)\le (x^*(Q)-1)f^Q(\beta)\le\max_{Q\in\Qscr}(x^*(Q)-1)\frac{ 4\beta - 1 - \beta x^*(Q)}{2- x^*(Q)}.$
Substitute $\omega:=x^*(Q)-1$. Then the quantity to maximize becomes the function of $\omega$ in the claim. This function takes its maximum at the given value of $\omega$, and if $1/3\le\beta<1/2$ then $0\le \omega <1$, proving the Claim.
To be concise, denote $f(\beta):=\frac{\beta\omega (3-\frac{1}{\beta} -\omega)}{1-\omega},$ where $\omega=1 - \sqrt{\frac{1}{\beta} - 2}$.
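The maximization in Claim~2 can be double-checked numerically: for fixed $\beta$, the function $g(\omega)=\beta\omega(3-\frac{1}{\beta}-\omega)/(1-\omega)$ on $[0,1)$ is indeed largest at $\omega=1-\sqrt{1/\beta-2}$. The following sketch is a verification aid, not part of the proof:

```python
import math

def g(beta, omega):
    # The quantity maximized in Claim 2, after substituting omega = x*(Q) - 1.
    return beta * omega * (3.0 - 1.0 / beta - omega) / (1.0 - omega)

def f(beta):
    # The claimed maximum value f(beta), attained at omega = 1 - sqrt(1/beta - 2).
    omega = 1.0 - math.sqrt(1.0 / beta - 2.0)
    return g(beta, omega)

for beta in (0.35, 0.40, 4.0 / 9.0, 0.48):
    grid_max = max(g(beta, i / 100000.0) for i in range(99999))
    assert grid_max <= f(beta) + 1e-9           # the closed form is an upper bound
    assert grid_max >= f(beta) - 1e-6           # ...and the grid gets close to it
# At beta = 4/9 one gets omega = 1/2 and f(beta) = 1/9, as used below.
```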
\medskip\noindent{\bf Claim~3}: $\displaystyle E[s^\Fscr (\beta)]\le f(\beta) p^*.$
\smallskip
$\displaystyle E[s^\Fscr (\beta)]= \sum_{F\in\Fscr_{>0}}\Pr(\Fscr=F)\sum_{Q\in\Qscr, |Q\cap F|\ge 2} f^Q(\beta)x^Q=\sum_{Q\in\Qscr}\Pr(|Q\cap\Fscr|\ge 2)f^Q(\beta)x^Q\le$\newline
$\displaystyle\le f(\beta) \sum_{Q\in\Qscr}x^Q,$ by Claim~2. Finally, substituting $\sum_{Q\in\Qscr} x^Q \le p^*$ (Lemma~\ref{lem:xC}) we get the claim.
\medskip
Now we are ready to finish the proof of the theorem. By Claim~1 and Claim~3, we have:
\[ E[\tau(G,T_\Fscr\triangle T,c)]\le (1-\beta)c^\top x^* + f(\beta) c^\top p^*, \]
where for all $\varepsilon\in\mathbb{R}, \varepsilon>0$, either $c^\top p^*\le (\frac{1}{2}-\varepsilon)c^\top x^*$, or $c^\top q^*\le (\frac{1}{2}+\varepsilon)c^\top x^*$ because $p^*+q^*\le x^*$ (Fact of Section~\ref{sec:Results}). So -- using the Fact again --, if the latter case holds we have:
\[E[\tau(G,T_\Fscr\triangle T,c)]\le E[c(\Fscr\setminus \Fscr(T))]=c^\top q^*\le (\frac{1}{2}+\varepsilon)c^\top x^*,\]
and if the first case holds we can substitute $c^\top p^*\le (\frac{1}{2}-\varepsilon)c^\top x^*$ to the result we got before:
\[ E[\tau(G,T_\Fscr\triangle T,c)]\le (1-\beta)c^\top x^* + (\frac{1}{2}-\varepsilon)f(\beta)c^\top x^*.\]
We got two upper bounds for $E[\tau(G,T_\Fscr\triangle T,c)]$, both having, for any fixed $\beta$, linear functions of $\varepsilon$ as coefficients of $c^\top x^*$. The minimum of the two functions has its maximum at $\varepsilon=\displaystyle \frac{1}{2} - \frac{\beta}{f(\beta)+1}$ which, as a function of $\beta$, has a unique minimum at $\beta=4/9$ (and then $\omega=1/2$, $f(\beta)=1/9)$, with minimum value $\varepsilon=1/10$.
\endproof
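The final optimization over $\beta$ can also be confirmed numerically. The following sketch (a check for the reader, not part of the proof) minimizes $\varepsilon(\beta)=\frac{1}{2}-\frac{\beta}{f(\beta)+1}$ over a grid in $(1/3,1/2)$ and recovers the minimum $1/10$ at $\beta=4/9$:

```python
import math

def f(beta):
    # f(beta) from Claim 2, with omega = 1 - sqrt(1/beta - 2).
    omega = 1.0 - math.sqrt(1.0 / beta - 2.0)
    return beta * omega * (3.0 - 1.0 / beta - omega) / (1.0 - omega)

def eps(beta):
    # The crossing point of the two upper bounds in the last step of the proof.
    return 0.5 - beta / (f(beta) + 1.0)

betas = [1.0 / 3.0 + k / 600000.0 for k in range(1, 100000)]  # grid in (1/3, 1/2)
best_beta = min(betas, key=eps)
# best_beta equals 4/9 and eps(best_beta) equals 1/10, up to the grid resolution;
# together with the bound (1/2 + eps) c^T x*, this gives the ratio 3/5.
```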
\section{Connections}\label{sec:connections}
Finally, we explain the connection of the results to their immediate predecessor, to some variants and to some open questions.
\noindent {\bf 5.1} First, we explain the content of this work in terms of An, Kleinberg and Shmoys \cite{AKS12}:
Replace the $\hat f_{U_i}^*$ provided by \cite[Lemma~3]{AKS12} -- whose existence is proved with linear-programming and network flow methods -- by the vector $x^Q$, $x^Q(e):=\Pr (\{e\}=Q\cap \Fscr)$, defined just above Lemma~\ref{lem:xC}. That lemma provides alternative simple properties for $x^Q$ that turn out to be more advantageous than those of $\hat f_{U_i}^*$, and are moreover easy to prove.
The result of this change is that the maximum possible deficit $\beta\omega (\tau -\omega)$ of $T'$-cuts for a tentative `$T'$-join dominator', where $\omega=\tau/2$ (the place of the maximum) in \cite{AKS12},
is replaced by $\displaystyle\frac{\beta\omega (\tau -\omega)}{1-\omega}$, where $\omega=1 - \sqrt{\frac{1}{\beta} - 2}$ (the new place of the maximum), see Claim 2 of the proof.
Another advantage is due to the fact that the new vectors sum up to a vector of length smaller than $c^\top x^*$ -- actually at most $c^\top x^*/2$ -- being in fact dominated by $p^*$ (Lemma~\ref{lem:xC}), where $c^\top p^*<(\frac{1}{2}-\varepsilon)c^\top x^*$ unless $c^\top q^*<(\frac{1}{2}+\varepsilon)c^\top x^*$ (Fact in Section~\ref{sec:Results}).
Despite these advantages, I cannot compare $\hat f_{U_i}^*$ and $x^Q$ directly. Therefore it seemed reasonable to hope that combining the two may further improve the bound $8/5$. Figure~\ref{fig:computation} is the Wolfram Alpha output showing that this is not the case.
If the coefficient of the sum of the $\hat f_{U_i}^*$ is $y$ -- this is the only single number that determines the extent of acting as \cite{AKS12} did -- our formulas in Section~\ref{sec:proof} are revised as follows. In Lemma~\ref{lem:def} the upper bound becomes $4\beta - 1 - \beta x^*(Q) - y$; then, replacing Claim~1 and redoing Claim~3 accordingly (cf.\ the conclusion of these in the two lines following the proof of Claim~3), and replacing $f^Q(\beta)$, $f(\beta)$ by the two-variable functions $f^Q(\beta,y)$, $f(\beta,y)$, we get:
$$E[\tau(G,T_\Fscr\triangle T,c)]\le (1-\beta +y)c^\top x^* + c^\top E[s^\Fscr (\beta)]\le (1-\beta +y)c^\top x^* + f(\beta,y)c^\top p^*,$$ with $\displaystyle \Pr(|Q\cap \Fscr|\ge 2)f^Q(\beta,y)\le \frac{\beta\omega (3-\frac{1}{\beta} -\omega) - y}{1-\omega}=:f(\beta,y)$, where $\omega=\sqrt{\frac{1}{\beta} - 2+\frac{y}{\beta}}$. We get now $\varepsilon=\displaystyle \frac{1}{2} - \frac{\beta - y}{f(\beta,y)+1}.$
This is the function minimized in Figure~\ref{fig:computation}. According to Wolfram Alpha the minimum is reached for $y=0$, and then the result only confirms the hand computations of Section~\ref{sec:proof}.
\begin{figure}[!h]
\vskip -1cm
\includegraphics[bb=28 498 461 571]{figur3eps}
\caption{\footnotesize Mixing the performance of our analysis with that of An, Kleinberg, Shmoys \cite{AKS12}, optimally.} \label{fig:computation}
\end{figure}
I was not able to exclude by hand that the minimum of this two-variable function for $\varepsilon$ could be smaller than $1/10$. Thanks to Louis Esperet and Nicolas Catusse for a pointer and first guidance to Wolfram Alpha, which provided the answer of Figure~\ref{fig:computation}, and to Sebastian Pokutta, who has double-checked the computations with Mathematica. Of course, besides $\hat f_{U_i}^*$ and $x^Q$ there may be many other vectors to combine, and other possibilities for improvement.
\medskip\noindent
{\bf 5.2}
The results of the paper have obvious corollaries according to reductions of variants of the TSP to the TSP path problem, as a black box, for instance:
For the {\em clustered traveling salesman} problem \cite{fivethird}, in which vertices of pairwise disjoint sets have to be visited consecutively, the updated performance guarantee, when the number of clusters is a constant, is $8/5$; substituting our results into \cite{AKS12}, we get that the {\em prize-collecting $s$-$t$ path TSP problem} is $1.94837$-approximable.
\medskip\noindent
{\bf 5.3} Some of the questions that arise may be easier than the famous questions of the field:
Could the results of \cite{SV12} {\em $3/2$-approximating minimum size $T$-tours or $7/5$-approximating tours} be reached {\em with the Best of Many Christofides} algorithm? Could the methods make the so far rigid bound of $3/2$ move down, at least for {\em shortest $2$-edge-connected multigraphs}?
\subsection*{Acknowledgment}
\small Many thanks to the organizers and participants of the Carg\`ese Workshop of Combinatorial Optimization devoted to the TSP problem, for their time and interest, furthermore to Corinna, Jens, Kenjiro, Marcin and Zoli Szigeti for their comments on this manuscript. I am highly indebted to Joseph Cheriyan, Zoli Kir\'aly and David Shmoys for their prompt and pertinent opinions before my presentation, to R.~Ravi and Attila Bern\'ath, for their continuous interest and wise suggestions.
\medskip
Thanks are also due to an anonymous pickpocket for a free day I could spend at Orly Airport, and to Easyjet for a delayed flight followed by a night I could spend at Saint Exup\'ery Airport. This research began, thanks to their accidental, but helpful, day and night contributions.
\section{Introduction}
In the 1960s H.~Davenport popularized the following problem, motivated by an application in algebraic number theory.
Let $G$ be an additive finite abelian group. Determine the smallest integer $\ell$ such that each sequence over $G$ of length at least $\ell$ has a non-empty subsequence the sum of whose terms equals $0 \in G$.
This integer is now called Davenport's constant of $G$, denoted $\Do(G)$.
We refer to the recent survey article \cite{gaogeroldingersurvey}, the lecture notes \cite{geroldinger_lecturenotes}, the monographs \cite{geroldingerhalterkochBOOK}, in particular Chapters 5 to 7, and \cite{taovuBOOK}, in particular Chapter 9, for detailed information on and applications of Davenport's constant, e.g., in investigations of the arithmetic of maximal orders of algebraic number fields.
Parallel to the problem of determining Davenport's constant, a direct problem, the associated inverse problem, i.e., the problem of determining the structure of the longest sequences that do not have a subsequence with sum zero, was intensely investigated as well.
On the one hand, solutions to the inverse problem are relevant in the above mentioned applications as well, and on the other hand, inverse results for one type of group can be applied in investigations of the direct problem for other, more complicated, types of groups
(see, e.g., \cite{bhowmik3}).
In this paper, we investigate the inverse problem associated to Davenport's constant for general finite abelian groups of rank two, complementing the investigations of the first paper in this series \cite{WAS20} that focused on groups of the form $C_m^2$, i.e., the direct sum of two cyclic groups of order $m$. To put this in context, we recall that the value of Davenport's constant for groups of rank two is well-known (cf.~Theorem \ref{thm_dir} and the references there); moreover, for cyclic groups, answers to both the direct and the inverse problem are well-known (cf.~Theorems \ref{thm_dir} and \ref{thm_invcyc} and see, e.g., \cite{savchevchen07,yuan07} for refinements), whereas, for groups of rank at least three, both the direct and the inverse problem are in general wide open (see, e.g., \cite{bhowmik,WAS_c222n} for results in special cases).
For groups of the form $C_m^2$ there is a well-known and well-supported conjecture regarding the answer to the inverse problem (see Definition \ref{def_B} for details).
For groups of the form $C_2 \oplus C_{2n}$ and $C_3 \oplus C_{3n}$ the inverse problem was solved in \cite[Section 3]{gaogeroldinger02} and \cite{chensavchev07}, respectively, and in \cite[Section 8]{gaogeroldinger99} and \cite{girard08} partial results in the general case were obtained.
Here we solve, \emph{assuming} the above mentioned conjecture for groups of the form $C_m^2$ is true, the inverse problem for general groups of rank two (see Theorem \ref{thm_new}).
In our proof, we use direct and inverse results for cyclic groups and groups of the form $C_m^2$, which we recall in Subsection \ref{sub_known}, that we combine by using the Inductive Method (cf.~\cite[Section 5.7]{geroldingerhalterkochBOOK}).
\section{Notation and terminology}
\label{sec_not}
We recall some standard notation and terminology (we follow \cite{gaogeroldingersurvey} and \cite{geroldingerhalterkochBOOK}).
We denote by $\mathbb{Z}$ the set of integers, and by $\mathbb{N}$ and $\mathbb{N}_0$ the positive and non-negative integers, respectively.
For $a,b \in \mathbb{Z}$, we denote by $[a,b]=\{z \in \mathbb{Z} \colon a\le z\le b\}$, the interval of integers.
For $k \in \mathbb{Z}$ and $m \in \mathbb{N}$, we denote by $[k]_m$ the integer in $[0, m-1]$ that is congruent to $k$ modulo $m$.
Let $G$ denote an additively written finite abelian group. (Throughout, we use additive notation for abelian groups.)
For a subset $G_0 \subset G$, we denote by $\langle G_0\rangle$ the subgroup generated by $G_0$.
We call elements $e_1, \dots, e_r \in G\setminus \{0\}$ independent
if $\sum_{i=1}^rm_ie_i=0$ with $m_i \in \mathbb{Z}$ implies that $m_ie_i=0$ for each $i \in [1,r]$.
We call a subset of $G$ a basis if it generates $G$ and its elements are independent.
For $n \in \mathbb{N}$, we denote by $C_n$ a cyclic group of order $n$.
For each finite abelian group $G$, there exist uniquely determined $1< n_1 \mid \dots \mid n_r$ such that
$G\cong C_{n_1}\oplus \dots \oplus C_{n_r}$; we refer to $r$ as the rank of $G$ and to $\exp(G)=n_r$ as the exponent of $G$.
We denote by $\mathcal{F}(G)$ the, multiplicatively written, free abelian monoid over $G$, that is,
the monoid of all formal commutative products
\[S=\prod_{g\in G} g^{\vo_g(S)}\]
with $\vo_g(S)\in \mathbb{N}_0$.
We call such an element $S$ a sequence over $G$.
We refer to $\vo_g(S)$ as the multiplicity of $g$ in $S$. Moreover, $\s(S)=\sum_{g \in G} \vo_g(S)g\in G$ is called the sum of $S$,
$|S|=\sum_{g \in G} \vo_g(S)\in \mathbb{N}_0$ the length of $S$, and $\supp(S) = \{g \in G\colon \vo_g(S) > 0\}\subset G$ the support of $S$.
We denote the unit element of $\mathcal{F}(G)$ by $1$ and call it the empty sequence.
If $T \in \mathcal{F}(G)$ and $T \mid S$ (in $\mathcal{F}(G)$), then we call $T$ a subsequence of $S$; we say that it is a proper subsequence if $1\neq T \neq S$.
Moreover, we denote by $T^{-1}S$ its co-divisor, i.e., the unique sequence $R$ with $RT=S$.
If $\s(S)=0$, then we call $S$ a zero-sum sequence (zss, for short), and if $\s(T)\neq 0$ for each $1 \neq T \mid S$, then we say that $S$ is zero-sum free.
We call a zss a minimal zss (mzss, for short) if it is non-empty and has no proper subsequence with sum zero.
Using the notation recalled above, the definition of Davenport's constant can be given as follows.
For a finite abelian group $G$, let $\ell\in \mathbb{N}$ be minimal with the property that each $S\in \mathcal{F}(G)$ with $|S| \ge \ell$ has a subsequence $1\neq T\mid S$ such that $\s(T)=0$; then $\Do(G)=\ell$.
It is a simple and well-known fact that $\Do(G)$ is the maximal length of a mzss over $G$ and that each zero-sum free sequence of length $\Do(G)-1$ over $G$, i.e., a sequence appearing in the inverse problem associated to $\Do(G)$, is a subsequence of a mzss of length $\Do(G)$.
Since it has technical advantages, we thus in fact investigate the structure of mzss of maximal length (ml-mzss, for short) instead of zero-sum free sequences of length $\Do(G)-1$.
Each map $f: G \to G'$ between finite abelian groups extends uniquely to a monoid homomorphism $\mathcal{F}(G) \to \mathcal{F}(G')$, which we denote by $f$ as well.
If $f$ is a group homomorphism, then $\s(f(S))= f(\s(S))$ for each $S \in \mathcal{F}(G)$.
\section{Formulation of result}
\label{sec_res}
In this section we recall the conjecture mentioned in the introduction and formulate our result.
\begin{definition}
\label{def_B}
Let $m \in \mathbb{N}$. The group $C_m^2$ is said to have
Property \textbf{B} if each ml-mzss equals $g^{\exp(G)-1}T$ for some $g\in C_m^2$ and $T \in \mathcal{F}(C_m^2)$.
\end{definition}
Property \textbf{B} was introduced by W.~Gao and A.~Geroldinger \cite{gaogeroldinger99,gaogeroldinger03a}.
It is conjectured that for each $m\in \mathbb{N}$ the group $C_m^2$ has Property \textbf{B} (see the just mentioned papers and, e.g., \cite[Conjecture 4.5]{gaogeroldingersurvey}).
We recall some results on this conjecture.
By a very recent result (see \cite{gaogeroldingergryn}, and \cite{gaogeroldinger03a} for an earlier partial result) it is known that to establish Property \textbf{B} for $C_m^2$ for each $m \in \mathbb{N}$, it suffices to establish it for $C_p^2$ for each prime $p$.
Moreover, Property \textbf{B} is known to hold for $C_m^2$ for $m \le 28$ (see \cite{bhowmik2} and \cite{gaogeroldinger03a} for $m \le 7$).
For further recent results towards establishing Property \textbf{B} see \cite{WAS15,WAS20,bhowmik2}.
As indicated in the introduction, we characterize ml-mzss for finite abelian groups of rank two, under the assumption that a certain subgroup of the group has Property \textbf{B}.
\begin{theorem}
\label{thm_new}
Let $G$ be a finite abelian group of rank two, say, $G\cong C_m \oplus C_{mn}$ with $m, n \in \mathbb{N}$ and $m \ge 2$.
The following sequences are minimal zero-sum sequences of maximal length.
\begin{enumerate}
\item \( S = e_j^{\ord e_j-1} \prod_{i=1}^{\ord e_k} (-x_ie_j+e_k)\)
where $\{e_1,e_2\}$ is a basis of $G$ with $\ord e_2= mn$, $\{j,k\}=\{1,2\}$, and $x_i \in \mathbb{N}_0$ with $\sum_{i=1}^{\ord e_k}x_i \equiv -1 \pmod{\ord e_j}$.
\item
\(S=g_1^{s m -1} \prod_{i=1}^{(n+1-s)m} (-x_ig_1 + g_2)\)
where $s \in [1,n]$, $\{g_1,g_2\}$ is a generating set of $G$ with $\ord g_2=mn$ and, in case $s\neq 1$, $mg_1=mg_2$, and
$x_i \in \mathbb{N}_0$ with $\sum_{i=1}^{(n+1-s)m}x_i = m -1$.
\end{enumerate}
If $C_m^2$ has Property \textbf{B}, then all minimal zero-sum sequences of maximal length over $G$ are of this form.
\end{theorem}
The case $G \cong C_m^2$, i.e.\ $n=1$, of this result is well-known and included for completeness only (see, e.g., \cite[Theorem 5.8.7]{geroldingerhalterkochBOOK}); in particular, note that (2) is redundant for $n=1$.
This result can be combined with the above mentioned results on Property \textbf{B} to yield unconditional results for special types of groups. We do not formulate these explicitly and only point out that, since $C_2^2$ and $C_3^2$ have Property \textbf{B} (cf.\ above), the results on $C_2\oplus C_{2n}$ and $C_3 \oplus C_{3n}$ mentioned in the introduction can be obtained in this way.
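For a concrete illustration of Theorem \ref{thm_new}, take $G= C_2 \oplus C_4$, i.e., $m=n=2$, with a basis $\{e_1,e_2\}$ where $\ord e_1=2$ and $\ord e_2=4$. Choosing $j=1$, $k=2$ and $(x_1,x_2,x_3,x_4)=(1,0,0,0)$ in (1), so that $\sum_{i=1}^{4}x_i=1\equiv -1 \pmod{2}$, yields
\[S= e_1\,(e_1+e_2)\,e_2^{3},\]
a sequence of length $m+mn-1=5=\Do(G)$ with $\s(S)=2e_1+4e_2=0$; checking the few proper subsequences directly confirms that none of them is a non-empty zss, so $S$ is indeed a minimal zero-sum sequence of maximal length.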
\section{Proof of the result}
In this section we give the proof of Theorem \ref{thm_new}. First, we recall some results that we use in the proof. Then, we give the actual argument.
\subsection{Known results}
\label{sub_known}
The value of $\Do(G)$ for $G$ a group of rank two, i.e., the answer to the direct problem, is well-known (see \cite{olson69_2,vanemdeboas69}).
\begin{theorem}
\label{thm_dir}
Let $m,n\in \mathbb{N}$. Then $\Do(C_m \oplus C_{mn})=m + mn - 1$.
\end{theorem}
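For instance, Theorem \ref{thm_dir} yields
\[\Do(C_2\oplus C_2)=2+2-1=3 \quad\text{and}\quad \Do(C_3\oplus C_6)=3+6-1=8,\]
whereas for cyclic groups one has $\Do(C_n)=n$ (cf.~Theorem \ref{thm_invcyc}).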
Next, we recall some results on sequences over cyclic groups.
Namely, the solution to the inverse problem associated to Davenport's constant for cyclic groups, a simple special case of \cite{boveyetal},
and the Theorem of Erd{\H o}s--Ginzburg--Ziv~\cite{erdosginzetal61}.
\begin{theorem}
\label{thm_invcyc}
Let $n \in \mathbb{N}$ and $S \in \mathcal{F}(C_n)$.
\begin{enumerate}
\item $S$ is a ml-mzss if and only if $S=e^{n}$ for some $e\in C_n$ with $\langle e \rangle=C_n$.
\item If $|S|\ge 2n-1$, then there exists some $T \mid S$ with $|T|=n$ and $\s(T)=0$.
\end{enumerate}
\end{theorem}
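Both parts of Theorem \ref{thm_invcyc} are sharp: for a generating element $e$ of $C_n$, the sequence $e^{n-1}$ is zero-sum free, and the sequence
\[0^{n-1}e^{n-1}\]
of length $2n-2$ has no zero-sum subsequence of length $n$, since such a subsequence would have to contain $b$ copies of $e$ with $1\le b \le n-1$, and $be\neq 0$ for these $b$.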
The following result is a main tool in the proof of Theorem \ref{thm_new}. It was obtained in \cite[Proposition 4.1, Theorem 7.1]{gaogeroldinger03a}; note that the additional assumption in the original version (regarding the existence of zss of length $m$ and $2m$) can now be dropped, since by \cite[Theorem 6.5]{gaogeroldingersurvey} it is known to be fulfilled for each $m \in \mathbb{N}$; also note that the second type of sequence requires $t \ge 3$.
\begin{theorem}
\label{thm_tm-1}
Let $m,t \in \mathbb{N}$ with $m\ge 2$ and $t \ge 2$. Suppose that $C_m^2$ has Property \textbf{B}. Let $S\in \mathcal{F}(C_m^2)$ be a zss of length $tm-1$
that cannot be written as the product of $t$ non-empty zss.
Then for some basis $\{f_1, f_2\}$ of $C_m^2$,
\[S= f_1^{sm-1}\prod_{i=1}^{(t-s)m}(a_if_1+f_2)\]
with $s\in [1,t-1]$ and $a_i \in [0, m-1]$ where $\sum_{i=1}^{(t-s)m} a_i \equiv 1 \pmod{m}$, or
\[S= f_1^{s_1m}f_2^{s_2 m-1} (bf_1 + f_2)^{s_3 m-1} (bf_1 + 2f_2)\]
with $s_i\in \mathbb{N}$ such that $s_1+s_2+s_3= t$ and $b \in [1,m-1]$ such that $\gcd\{b, m\}=1$.
\end{theorem}
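To illustrate the first alternative in the smallest case, let $m=t=2$. Then $s=1$ and, e.g., with $a_1=1$, $a_2=0$ we obtain the sequence
\[f_1(f_1+f_2)f_2\]
over $C_2^2$, a zss of length $tm-1=3$ that, being a mzss, cannot be written as the product of two non-empty zss.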
\subsection{Proof of Theorem \ref{thm_new}}
We start by establishing that all the sequences are indeed ml-mzss.
Since each sequence has length $m+mn-1$, by Theorem \ref{thm_dir} it suffices to show that they are mzss.
It is readily seen that $\s(S)=0$, thus it remains to show minimality. Let $1\neq T\mid S$ be a zss. We assert that $T=S$.
If $S$ is as given in (1), then it suffices to note that $e_j^{\ord e_j-1}$ is zero-sum free, thus $(-x_ie_j+e_k)\mid T$ for some $i\in [1, \ord e_k] $ and this implies $\prod_{i=1}^{\ord e_k} (-x_ie_j+e_k)\mid T$, which implies $S=T$.
Suppose $S$ is as given in (2). We first note that $ag_1\in \langle g_2 \rangle$ if and only if $m\mid a$. Let $v \in \mathbb{N}_0$ and $I \subset [1, (n+1-s)m]$ such that $T=g_1^v\prod_{i \in I}(-x_ig_1 + g_2)$. Since $\s(T)=0$ and by the above observation, it follows that $m \mid (v-\sum_{i \in I}x_i)$, say $mb=v-\sum_{i \in I}x_i$, where $b \in [0, s-1]$. Furthermore, we get $mbg_1+|I|g_2=0$.
If $s=1$, then $b=0$, implying that $|I|=mn$ and $v=m-1$, that is $S=T$.
If $s>1$, we have $mg_1= mg_2$, thus $mn \mid |I|+mb$ and indeed $mn= |I|+mb$. Now $mn= |I|+mb$ implies $I= [1, (n+1-s)m]$ and $b=s-1$, that is $S=T$.
Thus, the sequences are mzss.
Now, we show that if $C_m^2$ has Property \textbf{B}, then each ml-mzss is of this form.
As already mentioned, the case $n=1$ is well-known (cf.~Theorem \ref{thm_tm-1}).
We thus assume $n \ge 2$, that is $G\cong C_{m}\oplus C_{mn}$ with $m\ge 2$ and $n \ge 2$. Furthermore, let $H = \{mg \colon g \in G\} \cong C_n$ and let $\varphi : G \to G/H$ be the canonical map; we have $G/H \cong C_m^2$. We apply the Inductive Method, as in \cite[Section 8]{gaogeroldinger99}, with the exact sequence
\[0 \to H \hookrightarrow G \overset{\varphi}{\to} G/H \to 0.\]
Let $S \in \mathcal{F}(G)$ be a ml-mzss.
First, we assert that $\varphi(S)$ cannot be written as the product of $n+1$ non-empty zss, in order to apply Theorem \ref{thm_tm-1}.
Suppose this is possible, say $\varphi(S)= \prod_{i=1}^{n+1}\varphi(S_i)$ with non-empty zss $\varphi(S_i)$.
Then $\prod_{i=1}^{n+1}\s(S_i)\in \mathcal{F}(H)$ has a proper subsequence that is a zss, yielding a proper subsequence of $S$ that is a zss.
Thus, by Theorem \ref{thm_tm-1} there exists a basis $\{f_1, f_2\}$ of $C_m^2$ such that
\begin{equation}
\label{eq_struc1}
\varphi(S)= f_1^{sm-1}\prod_{i=1}^{(n+1-s)m}(a_if_1+f_2)\end{equation}
with $s\in [1,n]$, $a_i \in [0, m-1]$, and $\sum_{i=1}^{(n+1-s)m} a_i \equiv 1 \pmod{m}$ or
\begin{equation}
\label{eq_struc2}
\varphi(S)= f_1^{s_1m}f_2^{s_2 m-1} (bf_1 + f_2)^{s_3 m-1} (bf_1 + 2f_2)
\end{equation}
with $s_i\in \mathbb{N}$ such that $s_1+s_2+s_3= n+1$ and $b \in [1,m-1]$ such that $\gcd(b, m)=1$.
We distinguish two cases, depending on which of the two structures $\varphi(S)$ has.
\medskip
Case 1: $\varphi(S)$ is of the form given in \eqref{eq_struc1}. Moreover, we assume the basis $\{f_1,f_2\}$ is chosen in such a way that $s$ is maximal.
Furthermore, let $\psi: G/H \to \langle f_1\rangle$ denote the projection with respect to $G/H= \langle f_1 \rangle \oplus \langle f_2 \rangle$.
Let $S=FT$ such that $\varphi(F)= f_1^{sm-1}$ and $T= \prod_{i=1}^{(n+1-s)m} h_i$ such that $\varphi(h_i)=a_if_1+f_2$.
We call a factorization $T= S_0S_1 \dots S_{n-s}$ admissible if $\s(\varphi(S_i))=0$ and $|S_i|=m$ for $i \in [1, n-s]$ (then $\s(\varphi(S_0))=f_1$ and $|S_0|=m$). Since for a sequence $T'\mid T$ of length $m$ the conditions $\s(\varphi(T'))=0$ and $\s(\psi(\varphi(T')))=0$ are equivalent, the existence of admissible factorizations follows using Theorem \ref{thm_invcyc}.
Let $T= S_0S_1 \dots S_{n-s}$ be an admissible factorization such that $|\supp(S_0)|$ is maximal (among all admissible factorizations of $T$).
Moreover, let $F = F_0 F_1 \dots F_{s-1}$ with $|F_0| = m-1$ and $ |F_i| = m $ for $i \in [1,s-1]$.
Then $\s(\varphi(F_i))=0$ for $i \in [1, s-1]$, $\s(\varphi(S_i))=0$ for $i \in [1, n-s]$, and $\s(\varphi(S_0F_0))=0$.
Thus, $\s(S_0F_0)\prod_{i=1}^{s-1} \s(F_i)\prod_{i=1}^{n-s} \s(S_i)$ is a sequence over $H$, and it is a mzss.
Since its length is $n$, it follows by Theorem \ref{thm_invcyc} that there exists some generating element $e \in H$ such that this sequence is equal to $e^n$.
We show that $|\supp(F)|=1$. We assume to the contrary that there exist distinct $g,g'\in \supp(F)$.
First, suppose $s\ge 2$. We may assume $g\mid F_i$ and $g'\mid F_j$ for distinct $i,j\in [0, s-1]$.
Now we consider $F_i'=g^{-1}g'F_i$ and $F_j'=g'^{-1}gF_j$ and $F_k'=F_k$ for $k \notin \{i,j\}$.
As above, we get that $\s(S_0F_0')\prod_{i=1}^{s-1} \s(F_i')\prod_{i=1}^{n-s} \s(S_i)$ is a ml-mzss over $H$ and thus equal to $\bar{e}^n$ for some generating element $\bar{e}\in H$; indeed $e=\bar{e}$, since at most two elements of the sequence are changed and, for $n=2$, there is only one generating element of $H$.
Thus, $\s(F_i)=\s(F_i')= \s(F_i)+g'-g$, a contradiction.
Second, suppose $s=1$. It follows that $m \ge 3$, since for $m=2$ we have $|F|=1$.
We consider $S_0S_j$ for some $j \in [1, n-1]$. Let $S_0S_j= T'T''$ with $|T'|=|T''|=m$.
Since $\s(\varphi(T')) + \s(\varphi(T''))= f_1$ and $\s(\varphi(T')), \s(\varphi(T'')) \in \langle f_1 \rangle$ it follows that there exists some $a \in [0, m-1]$ such that $\s(\varphi(T'))= (a+1)f_1$ and $\s(\varphi(T''))= -af_1$.
Let $F_0=F'F''$ with $|F'|= m-(a+1)$ and $|F''|=a$.
We note that $\s(T'F')\s(T''F'')\prod_{i=1, i \neq j}^{n-1} \s(S_i)$ is a ml-mzss over $H$ and again it follows that it is equal to $e^n$ (with the same element $e$ as above).
If both $F'$ and $F''$ are non-empty, we may assume $g\mid F'$ and $g'\mid F''$, to obtain a contradiction as above.
Thus, it remains to investigate whether there exists a factorization $S_0S_j= T'T''$ with $|T'|=|T''|=m$ such that
$\{\s(\varphi(T')),\s(\varphi(T''))\}\neq \{0, f_1\}$.
We observe that such a factorization exists except if $\varphi(S_0 S_j) = (bf_1+ f_2)^{2m-1}(cf_1+f_2)$ (note that $\varphi(S_0 S_j)=(bf_1+ f_2)^{2m}$ is impossible, since $\s(\varphi(S_0 S_j))\neq 0$).
Thus, if such a factorization does not exist, for each $j \in [1,n-1]$, then $\varphi(T)=(bf_1+ f_2)^{mn-1}(cf_1+f_2)$. Since $\s(\varphi(T))=f_1$, we get $cf_1=(b+1)f_1$.
Thus, with respect to the basis consisting of $\bar{f}_1=bf_1+ f_2$ and $\bar{f}_2=f_1$, we have $\varphi(S)=\bar{f}_1^{mn-1}\bar{f}_2^{m-1}(\bar{f}_1+\bar{f}_2)$, contradicting the assumption that the basis $\{f_1, f_2\}$ maximizes $s$.
Therefore, we have $|\supp(F)|=1$ and thus
\[S = g_1^{sm - 1} T\]
for some $g_1 \in G$.
First, we consider the case $s= n$.
We have $\ord g_1 = mn$ and thus $G = \langle g_1 \rangle \oplus H_2$ where $H_2 \subset G$ is a cyclic group of order $m$.
Let $\pi:G \to H_2$ denote the projection with respect to $G=\langle g_1 \rangle \oplus H_2$.
We observe that $\pi(\prod_{i=1}^m h_i) \in \mathcal{F}(H_2)$ is a mzss and consequently it is equal to $g_2^m$ for some generating element $g_2$ of $H_2$. We note that $\{g_1, g_2\}$ is a basis of $G$. Thus, $S$ is of the form given in (1).
Thus, we may assume $s<n$.
Next, we show that if $\varphi(h_j)=\varphi(h_k)$ for $j,k \in [1,(n-s+1)m]$, then $h_j=h_k$.
Since $|h_j^{-1}T|= (n-s+1 -2)m + 2m-1$ and using again the projection $\psi$ introduced above and Theorem \ref{thm_dir}, it follows that there exists
an admissible factorization $T=S_0'S_1'\dots S_{n-s}'$ with $h_j \mid S_0'$.
Let $\ell \in [1,n-s]$ such that $h_jh_k \mid S_0'S_{\ell}'$.
Let $S_0'S_{\ell}'= T_j'T_k'$ such that $h_j \mid T_j'$, $h_k \mid T_k'$ and $|T_j'|= |T_k'|=m$.
Similarly as above, it follows that $\s(\varphi(T_j'))=(a'+1)f_1$ and $\s(\varphi(T_k'))=-a'f_1$ for some $a'\in [0,m-1]$.
We note that $\s(T_j'g_1^{m-a'-1})\s(T_k'g_1^{a'})\s(g_1^m)^{s-1}\prod_{i=1, i \neq \ell}^{n-s} \s(S_i')$ is a ml-mzss over $H$ and thus equal to $e'^n$ for a generating element $e'\in H$.
Similarly as above, it follows that $\s(h_j^{-1}h_kT_j'g_1^{m-a'-1})=e'$. Thus, $h_j=h_k$.
Consequently, we have \[S=g_1^{sm-1}\prod_{x \in [0,m-1]} k_x^{v_x}\]
with $\varphi(k_x)=xf_1+f_2$ for $x\in [0,m-1]$ and suitable $v_x \in \mathbb{N}_0$.
In the following we show that $S$ is of the form given in (2) or $\ord g_1=m$.
At the end we show that if $\ord g_1=m$, then $S$ is of the form given in (1).
We start with the following assertion.
\medskip
\noindent
\textbf{Assertion:}
Let $T= \bar{S_0}\bar{S_1} \dots \bar{S}_{n-s}$ be an admissible factorization.
Let $k_x \mid \bar{S}_0$ and let $k_y\mid \bar{S}_i$ for some $i \in[1, n-s]$. If $x< y$, then $k_y-k_x= (y-x)g_1$ and if
$x> y$, then $k_y - k_x= (y-x)g_1 +mg_1$.
\noindent
\emph{Proof of Assertion:}
We note that $\s(\varphi(\bar{S}_0k_x^{-1}k_y))=(-x+y+1)f_1$ and $\s(\varphi(\bar{S}_ik_y^{-1}k_x))=(-y+x)f_1$. Thus,
we have $\s(\varphi(\bar{S}_0k_x^{-1}k_yg_1^{[x-y-1]_m}))=0= \s(\varphi(\bar{S}_ik_y^{-1}k_xg_1^{[y-x]_m}))$.
We observe that $[x-y-1]_m+[y-x]_m= m-1$. Thus, similarly as above, $\s(\bar{S}_i)= \s(\bar{S}_ik_y^{-1}k_xg_1^{[y-x]_m})$. Thus, $k_y= k_x+[y-x]_m g_1$.
Consequently, if $x< y$, then $k_y-k_x= (y-x)g_1$ and if
$x> y$, then $k_y -k_x= (y-x)g_1 +mg_1$, proving the assertion.
\medskip
First, we show that $\supp(S_0^{-1}T)\subset \supp(S_0)$ or $\ord g_1= m$.
We assume that there exists some $i \in [1, n-s]$ and some $k_t \mid S_i$ such that $k_t \nmid S_0$ and show that this implies $\ord g_1=m$.
The sequence $k_t^{-1}S_iS_0$ has length $2m-1$.
Thus, as above, there exists a subsequence $S_i''\mid k_t^{-1}S_iS_0$ such that $\s(\varphi(S_i''))=0$ and $|S_i''|=m$. Let $S_0''= S_i''^{-1}S_iS_0$ and $S_j''=S_j$ for $j \notin \{0, i\}$.
We get that $S_0''S_1''\dots S_{n-s}''$ is an admissible factorization of $T$.
Since $k_t\mid S_0''$ and $k_t \nmid S_0$ and $|\supp(S_0)|$ is maximal (by assumption), there exists some $k_u\mid S_0$ such that $k_u \nmid S_0''$ and thus $k_u \mid S_i''$. Clearly $k_u \neq k_t$ and thus $t\neq u$.
We apply the Assertion twice.
First, to $k_u \mid S_0$ and $k_t \mid S_i$.
If $u< t$, then $k_t - k_u = (t-u)g_1$ and if
$u> t$, then $k_t - k_u = (t-u)g_1 + mg_1$.
Second, to $k_t \mid S_0''$ and $k_u \mid S_i''$.
If $t< u$, then $k_u - k_t = (u-t)g_1$ and if
$t> u$, then $k_u - k_t = (u-t)g_1 + mg_1$.
Thus, if $u< t$, then $k_t-k_u= (t-u)g_1$ and $k_u - k_t= (u-t)g_1 +mg_1$. Adding these two equations, we get $mg_1=0$ and $\ord g_1= m$.
And, if $u> t$, then $k_t - k_u= (t-u)g_1 +mg_1$ and $k_u-k_t= (u-t)g_1$, again yielding $\ord g_1= m$.
Second, we show that $|\supp(S_0^{-1}T)|=1$ or $\ord g_1=m$.
We assume that $|\supp(S_0^{-1}T)|\ge 2$, say it contains elements $k_u, k_t$ with $t> u$.
Let $i, j \in [1, n-s]$, not necessarily distinct, such that $k_u\mid S_i$ and $k_t \mid S_j$.
By the above argument we may assume that $\supp(S_0^{-1}T) \subset \supp(S_0)$.
We apply the Assertion with $k_t \mid S_0$ and $k_u\mid S_i$, to obtain $k_u -k_t= (u-t)g_1 +mg_1$.
And, we apply the Assertion with $k_u \mid S_0$ and $k_t \mid S_j$ to obtain $k_t-k_u= (t-u)g_1$. Thus, we obtain $mg_1=0$.
Consequently, we have $T=k_u^{(n-s)m}S_0$ and $k_u \mid S_0$ for some $u\in [0, m-1]$ or $\ord g_1 = m$.
We assume that $T=k_u^{(n-s)m}S_0$ and $k_u \mid S_0$ for some $u\in [0, m-1]$.
Since $n-s \ge 1$ and $\s(k_u^m)=e$, it follows that $\ord k_u= mn$. Let $f_2'=\varphi(k_u)= uf_1+f_2$. It follows that $\{f_1, f_2'\}$ is a basis of $G/H$.
If, for $x\in [0, m-1]$, an element $h \in \supp (T)=\supp(S_0)$ exists with $\varphi(h)= -xf_1+f_2'$ (as shown above there is at most one such element), then we denote it by $k_{-x}'$. In particular, $k_u=k_0'$.
For each $k_{-x}'\in \supp(S_0)$, similarly as above, $\s(k_0'^{m}) = \s(k_0'^{m-1}k_{-x}'g_1^x)$. Thus, we have $k_0'= k_{-x}' + x g_1$.
Let $x_i \in [0,m-1]$ such that $S_0=\prod_{i=1}^m k_{-x_i}'$. We know that $\sum_{i=1}^m (-x_if_1)=f_1$, i.e., $\sum_{i=1}^m x_i\equiv m-1 \pmod{m}$.
We show that $\sum_{i=1}^m x_i = m-1$ or $\ord g_1=m$. Assume the former does not hold, and let $\ell$ be maximal such that $\sum_{i=1}^{\ell}x_i= c< m $.
We observe that $\s(k_0'^{m-\ell}(\prod_{i=1}^{\ell}k_{-x_i}')g_{1}^{c})= \s(k_0'^{m-\ell-1}(\prod_{i=1}^{\ell+1}k_{-x_i}')g_{1}^{[c+x_{\ell+1}]_m})$. By the choice of $\ell$ it follows that $[c+x_{\ell+1}]_m= c+x_{\ell+1}-m$.
Thus, $k_0'+ c g_1= k_{-x_{\ell +1}}'+ (c+x_{\ell+1}-m)g_1$ and
$k_0'= k_{-x_{\ell +1}}'+ (x_{\ell+1}-m)g_1$, which implies $mg_1=0$.
So, $\ord g_1= m$, or, with $g_2=k_0'$, $S$ is of the form
\[S= g_1^{sm-1}g_2^{(n-s)m}\prod_{i=1}^m ( - x_{i}g_1+g_2)\]
with $x_i \in [0, m-1]$ and $\sum_{i=1}^m x_i=m-1$.
Clearly, $\{g_1, g_2\}$ is a generating set of $G$.
Moreover, we know that if $s \ge 2$, then $\s(g_1^m)= e=\s(g_2^m)$.
Thus, $S$ is of the form given in (2).
Finally, suppose $\ord g_1= m$.
We have $s=1$. Let $\omega: G\to G / \langle g_1 \rangle$ denote the canonical map.
The sequence $\omega(\prod_{i=1}^{mn} h_i)$ is a mzss. Thus, by Theorems \ref{thm_dir} and \ref{thm_invcyc}, $G / \langle g_1 \rangle$ is a cyclic group of order $mn$ and $\omega(\prod_{i=1}^{mn} h_i)= \omega(g_2)^{mn}$ for some $g_2\in G$, and $\ord g_2=mn$. Thus, $\{g_1,g_2\}$ is a basis of $G$ and $S$ has the form given in (1).
\medskip
Case 2: $\varphi(S)$ is of the form given in \eqref{eq_struc2}. If $m=2$, then $bf_1+2f_2= f_1$ and the sequence $\varphi(S)$ is also of the form given in \eqref{eq_struc1}. Thus, we assume $m \ge 3$. Moreover, we note that with respect to the basis $f'_1 = f_1$ and $f'_2 = bf_1 + f_2$, we have $f_2= (m-b)f'_1+f'_2$ and $bf_1 + 2f_2= (m-b)f'_1 + 2f'_2$. Thus, we may assume that $b<m/2$.
Let $S=FT$ with $\varphi(T)= f_1^{m}(bf_1 + f_2)^{ m-1}f_2^{m-1}(bf_1 + 2f_2)$.
We note that $F=\prod_{i=1}^{n-2}F_i$ with $\varphi(F_i) \in \{f_1^m, f_2^m, (bf_1 + f_2)^m\}$ for each $i \in [1, n-2]$.
Suppose $T=T_1T_2$ such that $\s(\varphi(T_i))=0$ and $T_i \neq 1$ for $i \in [1,2]$.
Then $\s(T_1)\s(T_2)\prod_{i=1}^{n-2}\s(F_i)$ is a ml-mzss over $H$ and thus equal to $e^n$ for some generating element $e$ of $H$.
It follows that for each factorization $T=T_1T_2$ with $\s(\varphi(T_i))=0$ and $T_i \neq 1$ for $i \in [1,2]$, we have $\s(T_1)=\s(T_2)=e$.
Let $T_1'\mid T$ such that $\varphi(T_1')= f_1^b (bf_1 + f_2)^{ m-1} f_2$ and let $T_2'=T_1'^{-1}T$.
Suppose that, for some $i \in [1,2]$, there exist distinct elements $g_i,g_i'\in \supp(S)$ such that $\varphi(g_i)=\varphi(g_i')=f_i$.
We may assume that $g_i \mid T_1'$ and $g_i'\mid T_2'$.
It follows that $\s(g_i^{-1}g_i'T_1')=e=\s(T_1')$, a contradiction.
Thus, $\varphi^{-1}(f_i)\cap \supp(S)= \{g_i\}$ for $i \in [1,2]$.
Now, let $T_1'' \mid T$ such that $\varphi(T_1'')= f_1^{2b}(bf_1 + f_2)^{ m-2}f_2^2$ and $T_2''=T_1''^{-1}T$.
We can argue in the same way that $\varphi^{-1}(bf_1+f_2)\cap \supp(S)= \{k_b\}$.
Finally, let $k\mid S$ such that $\varphi(k)=bf_1 + 2f_2$.
It follows that
\[ S = g_1^{s_1m}g_2^{s_2m-1} k_b^{s_3 m-1} k.\]
We note that $\ord g_1= mn$ and that $\{g_1, g_2\}$ is a generating set of $G$.
Since, as above, $\s(k_b^{m-2}kg_1^b)=e= \s(k_b^{m-1}kg_2^{m-1})$, it follows that $bg_1=k_b +(m-1)g_2$.
Moreover, $\s(g_1^{2b}k_b^{m-2}g_2^2)= e= \s(g_1^{b}k_b^{m-1}g_2)$ implies that $bg_1+g_2= k_b$. Thus, $mg_2=0$ and $\{g_2, g_1\}$ is a basis of $G$. Moreover, we have $s_2= 1$.
Additionally, we get $k+(m-1)g_2= bg_1+g_2$, i.e., $k =bg_1+2g_2 $.
We observe that the projection to $\langle g_1 \rangle$ (with respect to $G=\langle g_1 \rangle \oplus \langle g_2 \rangle$) of the sequence $g_2^{-(m-1)}S$, i.e., the sequence $g_1^{s_1m}(bg_1)^{s_3m}$, is a mzss. Since $s_1+s_3=n$, this implies $b=1$.
Thus, $S$ is of the form given in (1).
\section*{Acknowledgment}
The author would like to thank the referee for suggestions and corrections, and A.~Geroldinger and D.~Grynkiewicz for valuable discussions related to this paper.
\section{Introduction}
The state of an $n$-dimensional system is represented by an $n \times n$ density operator, which needs $n^{2}-1$ real independent parameters for its complete specification. Any density operator in its eigenbasis can be represented as $\rho= \sum _{i=1}^{n} p_{i} \hat{P_{i}}$, where $p_{i}= Tr(\rho \hat{P}_{i})$ are the fractional populations. Thus in an eigenbasis, measurement of the projection operators $\hat{P_{i}}$ yields $n-1$ independent probabilities corresponding to the diagonal elements of $\rho$. The remaining $n^2-1-(n-1)=n^{2}-n$ parameters, corresponding to the off-diagonal elements, can only be accessed by expressing $\rho$ in additional bases; in total, no fewer than $n+1$ bases are required.
An optimal measurement strategy thus corresponds to a judicious choice of exactly $n+1$ bases, referred to as Mutually Unbiased Bases (MUBs)\cite{Schwing, woot}, such that measurement performed in each of these bases yields unique, non-redundant information about the system.
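As a check of this counting for $n=2$: a qubit state $\rho= \frac{1}{2}(\mathbb{1}+\vec{r}\cdot\vec{\sigma})$ is specified by the $n^{2}-1=3$ real components of the Bloch vector $\vec{r}$, and each of the $n+1=3$ mutually unbiased bases contributes $n-1=1$ independent probability, so that all $(n+1)(n-1)= n^{2}-1$ parameters are recovered.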
For a given system, the practical utility of MUBs is dictated by their existence, and their physical realisation in a laboratory. The question of their existence for arbitrary $n$-dimensional systems has been extensively studied, and has been answered in the affirmative when $n$ is a prime or a power of a prime\cite{Ivano, woot}. For such spaces there always exists a complete set of $n+1$ MUBs. In particular, when $n=2^{d}$ for some positive integer $d$, it is possible to find a partitioning of the $d$-qubit Pauli operators into $n+1$ disjoint maximal commuting classes, where each class consists of a maximally commuting set of $n-1$ operators \cite{band, zeii}. The corresponding MUBs are the simultaneous eigenbases of the $n+1$ commuting classes. More generally, in the case of prime power dimensions, several approaches are available to construct a complete set of $n+1$ MUBs (e.g. the Heisenberg-Weyl group method\cite{band}; using finite field theory \cite{woot, tdurt}; using generalized angular momentum operators\cite{kibler}). However, the existence of a complete set of MUBs for general finite-dimensional Hilbert spaces remains an open question\cite{zan, gra, weig, beng}.
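For example (one standard choice of such a partitioning), for $d=2$ qubits, i.e., $n=4$, the fifteen non-identity two-qubit Pauli operators split into $n+1=5$ disjoint classes of $n-1=3$ mutually commuting operators:
\[\{X\otimes\mathbb{1},\, \mathbb{1}\otimes X,\, X\otimes X\},\quad \{Y\otimes\mathbb{1},\, \mathbb{1}\otimes Y,\, Y\otimes Y\},\quad \{Z\otimes\mathbb{1},\, \mathbb{1}\otimes Z,\, Z\otimes Z\},\]
\[\{X\otimes Y,\, Y\otimes Z,\, Z\otimes X\},\quad \{X\otimes Z,\, Y\otimes X,\, Z\otimes Y\},\]
and the five simultaneous eigenbases of these classes form a complete set of MUBs in dimension $4$.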
When the $n+1$ sets of MUBs are known to exist, their construction and physical realization have been achieved for very specific applications such as quantum state tomography \cite{fili, fern, adam}, the Mean King's problem \cite{eng, ara, hay}, quantum cryptography \cite{bou, lin}, quantum error correction \cite{cal}, entanglement detection \cite{ent}, and quantum coding and the discrete Wigner function \cite{gib, bj}. Several experimental techniques to implement the complete set of MUBs in photonic systems have been investigated \cite{lima, adam}.
Some recent works have focussed on generalising the notion of MUBs \cite{amir}. However, for systems for which they are known to exist, construction of optimal measurement operators based on MUBs is critical for their wider applicability. A general construction mechanism of such operators, to our knowledge, is unavailable in the literature. Our focus in this paper is to fill this void. Specifically, we consider spin systems for which MUBs are known to exist, and:
\begin{enumerate}
\item provide a general method to construct optimal measurement operators based on MUBs that are mutually disjoint and maximally commuting;
\item identify the physical observables to which they correspond;
\item demonstrate how they can be physically realised.
\end{enumerate}
In order to concretize ideas we eschew a general exposition for arbitrary systems, and instead consider specific spin systems for which physically realizable, optimal measurement operators based on MUBs are hitherto unavailable. In particular, we construct an orthonormal set of operators based on MUBs for spin-1, spin-3/2 and spin-2 systems. We achieve this based on a construction mechanism that extends the Stern-Gerlach setup for spin-1/2 systems. Such an extension enables us to identify the corresponding physical observables, as with the spin-1/2 case. We demonstrate how the operators can be physically realised for spin-1 and spin-3/2 cases. \emph{From an operational perspective, a key feature of the construction is that it naturally classifies the operators into mutually disjoint subsets, members of which commute enabling simultaneous measurements, resulting in an optimal measurement strategy}. The circumscription of the proposed methodology to spin systems is mainly in the interest of exposition. Examination of the method of construction will reveal that it is general enough to be applicable to higher-order spin systems and to arbitrary non-spin systems of finite dimension for which MUBs are known to exist. \\
\section{Spin-1/2 density matrix}
Our technique is closely related to the case involving a spin-1/2 density matrix. It is instructive first to review this case along with the appropriate definitions. In a finite dimensional Hilbert space $H_{d}$, orthonormal bases $A= \{|a_{0}\rangle, |a_{1}\rangle,\ldots, |a_{d-1}\rangle\}$ and $B=\{|b_{0}\rangle, |b_{1}\rangle,\ldots, |b_{d-1}\rangle\}$ are said to be mutually unbiased if $|\langle a_{i}|b_{j}\rangle|= d^{-1/2}$, for every $i,j= 0,1,\ldots,d-1$.
For a spin-1/2 density matrix parameterized as $\rho=\frac{1}{2}(I_2+\sigma_xp_x+\sigma_yp_y+\sigma_zp_z)$, it is well known that the eigenbases of Pauli operators $\sigma_x, \sigma_y$ and $\sigma_z$ are the MUBs given by,
\begin{align*}
B_{1}&= \{ |0 \rangle, |1 \rangle \},\\
B_{2}&= \left\{ \frac{1}{\sqrt {2}} (|0 \rangle + |1 \rangle ), \frac{1}{\sqrt {2}} (|0 \rangle -|1 \rangle )\right\},\\
B_{3}&= \left\{ \frac{1}{\sqrt {2}} (|0 \rangle + i |1 \rangle ), \frac{1}{\sqrt {2}} (|0 \rangle -i|1 \rangle )\right\},
\end{align*}
where $|0 \rangle= \left(
\begin{array}{c}
1\\
0\\
\end{array}
\right)$ and $|1\rangle= \left(
\begin{array}{c}
0\\
1\\
\end{array}
\right)$.\\ \\
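As a quick numerical illustration (a stdlib-Python sketch, not part of the original text), one can check that the three bases above are pairwise mutually unbiased with $d=2$:

```python
import math

s = 1 / math.sqrt(2)

# Columns of each matrix are the basis vectors of B1, B2, B3.
B1 = [[1, 0], [0, 1]]
B2 = [[s, s], [s, -s]]
B3 = [[s, s], [s * 1j, -s * 1j]]

def col(B, j):
    return [row[j] for row in B]

def overlap(a, b):
    # |<a|b>| for two column vectors
    return abs(sum(x.conjugate() * y for x, y in zip(a, b)))

target = 1 / math.sqrt(2)  # d^{-1/2} with d = 2
for X, Y in [(B1, B2), (B1, B3), (B2, B3)]:
    for i in range(2):
        for j in range(2):
            assert abs(overlap(col(X, i), col(Y, j)) - target) < 1e-12
print("B1, B2, B3 are pairwise mutually unbiased")
```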
Optimal measurement operators based on $B_i,i=1,2,3$ can be constructed and physically realized with the Stern--Gerlach apparatus. A detailed analysis of the Stern--Gerlach experiment and its implications are extensively discussed in the literature \cite{swift, home, tekin} and references therein. In this experiment, a particle with magnetic moment $\vec{\mu}$ passes through the inhomogeneous magnetic field $\vec{B}$. The potential energy associated with the particle is $\hat{\mathcal{H}}= - \vec{\mu}\cdot \vec{B}$, where the magnetic moment $\vec{\mu}$ is proportional to spin. Thus when the magnetic field is oriented along the $z$-direction, one can measure the expectation value of $\sigma_{z}$. In terms of the projection operators $\hat{P_{1}}$ and ${\hat{P_{2}}}$ constructed from the two eigenvectors of the operator $\sigma_z$, it is easy to see that $\sigma_{z}= \hat{P_{1}}- \hat{P_{2}}.$
The unitary matrix
{\small
\begin{align*}
U = \frac{1}{\sqrt{2}} \left(
\begin{array}{cc}
1 & 1 \\
1 & -1 \\
\end{array}\right),
\end{align*}
}
transforms $B_{1}$ to $B_{2}$. The observable $\sigma_x$ can be measured using the same apparatus if its diagonal (eigen) basis is of the same form as $\sigma_z$, necessitating that $\sigma_{x}= \hat{P'_{1}}-\hat{P'_{2}}$, where $\hat{P'_{i}}, i=1,2$ are the projection operators of $\sigma_x$. Experimentally this can be implemented by applying magnetic field in $x$-direction. In a similar manner from a unitary transformation of $B_1$ to $B_3$ we obtain $\sigma_{y}= \hat{P''_1}- \hat{P''_2}$ for appropriate projection operators, resulting in three measurements that constitute a complete set of parameters characterizing the spin-1/2 density matrix.
A key observation which we profitably exploit in the sequel is that the Pauli operators are linear combinations of projection matrices constructed from different basis vectors spanning a two-dimensional Hilbert space, related through unitary transformations that relate the basis sets of the MUBs.
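This observation can be made concrete in a few lines of stdlib Python (an illustrative sketch): $\sigma_z = \hat{P}_1 - \hat{P}_2$ in the canonical basis, and conjugating by the unitary $U$ relating $B_1$ to $B_2$ produces $\sigma_x$, which has the same coefficient pattern in the rotated projectors.

```python
import math

s = 1 / math.sqrt(2)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

P1 = [[1, 0], [0, 0]]   # |0><0|
P2 = [[0, 0], [0, 1]]   # |1><1|
sigma_z = [[P1[i][j] - P2[i][j] for j in range(2)] for i in range(2)]
assert sigma_z == [[1, 0], [0, -1]]

U = [[s, s], [s, -s]]   # unitary mapping B1 to B2
sigma_x = matmul(matmul(U, sigma_z), dagger(U))
# up to rounding, sigma_x = [[0, 1], [1, 0]], i.e. P1' - P2' in the rotated basis
ref = [[0, 1], [1, 0]]
assert all(abs(sigma_x[i][j] - ref[i][j]) < 1e-12 for i in range(2) for j in range(2))
```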
\section{Spin-1 density matrix}
Simultaneous measurement of a complete set of commuting operators is equivalent to the measurement of a single nondegenerate operator by means of a maximal or complete quantum test \cite{peres}. In some cases, generalization of the Stern--Gerlach experiment for spin $>1/2$ is possible by using electric multipole fields along with multipole magnetic fields \cite{lamb}. Measurements of spin-1 systems require an electric quadrupole field in addition to a dipole magnetic one \cite{swift}.
To perform such measurements one requires four observables whose eigenstates are mutually unbiased; this, however, is not possible for spin components. Thus one cannot easily generalize the spin-1/2 Stern--Gerlach experiment, as a larger number of parameters is needed.
A spin-1 density matrix $\rho$ is characterized by eight independent parameters, and can be expanded using eight orthonormal (excluding identity) operators in infinitely many ways. Extending the program used for the spin-1/2 case requires a representation of the $3 \times 3$ density matrix $\rho$ in a matrix basis that mimics the role played by the Pauli operators. Since the natural choice of the spin-$j$ Hamiltonian requires a multipole expansion, we choose the spherical tensor representation of the spin density matrix due to Fano \cite{fano}. The density matrix for any spin-$j$ system is given by $\rho= \frac{1}{2j+1}\sum_{k,q} t^{k}_{q} {\tau^{k}_{q}}^{\dagger}$, where the irreducible spherical tensors $\tau^{k}_{q}$ are the $k^{th}$ degree polynomials constructed out of the spin operators $\vec{J}= (J_{x}, J_{y}, J_{z})$ (see Appendix for a detailed description of the representation). For a spin-1/2 system, $\sigma_{z}= \tau^{1}_{0}$. As with the SU(3) generators (e.g., the generalised Gell-Mann matrices), the spherical tensor operators for the spin-1 density matrix include two diagonal matrices $\tau^1_0$ and $\tau^2_0$ which play the role of the diagonal $\sigma_z$.
In summary, optimal measurement of a spin-1 system can be achieved via MUBs consisting of four basis sets, each of which yields two commuting operators constructed from its three projection operators.
\subsection{Construction of maximally commuting orthogonal operators}
From the discussion above, and guided by the fact that there are two diagonal matrices amongst the spherical tensors, in order to extend the technique from the spin-1/2 case, we construct an orthonormal basis matrix set consisting of four sets of operators each containing two commuting operators, which enables simultaneous measurement using a single experimental setup.
Analogous to the spin-1/2 case where the Pauli operators $\sigma_x, \sigma_y$ and $\sigma_z$ are linear combinations of the projection operators, we define eight operators, comprising an orthonormal set, as linear combinations of the projection operators arising from the MUB. The coefficients of the linear combinations are chosen in a manner that ensures that the eight operators constitute the requisite maximally commuting orthogonal set.
For a $3 \times 3$ spin-1 density matrix, from each of the four basis sets comprising an MUB $\{\{\psi_i\},\{\phi_j\}, \{\theta_k\}, \{\xi_l\}, i,j,k,l= 1, 2, 3\}$, three projection operators can be constructed. In the canonical $|jm\rangle$ basis $\{\psi_1,\psi_2,\psi_3\}$ with projection operators $\hat{P}_i,i= 1, 2 , 3$, define first the commuting operators $\hat{\alpha_{1}}= \sum_{i} r_{i} \hat{P_{i}}$ and $\hat{\alpha_{2}}= \sum_{i} s_{i} \hat{P_{i}}$ for coefficient vectors $\vec{r}=(r_1,r_2,r_3)$ and $\vec{s}=(s_1,s_2,s_3)$. The two operators are orthogonal if $\vec{r}\cdot\vec{s}=\sum_{i} r_{i}s_{i}= 0$, since
\begin{align*}
Tr(\hat{\alpha}_1\hat{\alpha}_2) &=
\sum_{i,j}r_{i} s_{j} Tr(\hat{P}_{i} \hat{P}_{j}) \\
&=\sum_{i,j} r_{i}s_{j} \delta_{ij}
=\sum_{i} r_{i}s_{i}.
\end{align*}
Furthermore, for the operators to be traceless, we require $\sum \limits_{i}r_{i}= \sum \limits_{i}s_{i}=0.$
Guided by the spin-1/2 case, we demand that the next set of operators has the same form as that of the angular momentum basis $|jm \rangle $. That is, we impose the condition that $\hat{\alpha}_3$ and $\hat{\alpha}_4$ be defined with projection operators constructed with $\{\phi_i,i=1,2,3\}$ using the same coefficient vectors $\vec{r}$ and $\vec{s}$. Consequently, for a set $\hat{P}'_i,i=1,2,3$ of projection operators obtained from a second basis $\{\phi_1,\phi_2,\phi_3\}$, we define $\hat{\alpha_{3}}= \sum_{i} r_{i} \hat{P}'_{i}$ and $\hat{\alpha_{4}}= \sum_{i} s_{i} \hat{P'}_{i}$. The orthogonality requirement amongst the $\hat{\alpha}_i$ implies that, for the unitary $U$ connecting the two bases,
\begin{align*}
Tr\Big(\sum_{i} r_{i} \hat{P}_{i} \sum_{j} r_{j} \hat{P}'_{j}\Big)
&= Tr\Big(\sum_{i,j} r_{i} r_{j}\, \hat{P}_{i}\, U \hat{P}_{j} U^{\dagger}\Big) \\
&= Tr\Big(\sum_{i,j} r_{i} r_{j}\,|\psi_{i} \rangle \langle \psi_{i}| U |\psi_{j} \rangle \langle \psi_{j} | U^{\dagger}\Big) \\
&= \sum_{i,j} r_{i} r_{j}\, u_{ij} u^{*}_{ij}
= \sum_{i,j} r_{i} r_{j}\, |\langle \psi_{i}|\phi_{j} \rangle|^{2}=0 ,
\end{align*}
and, since $|\langle \psi_{i}|\phi_{j} \rangle|^{2}= 1/3$ for mutually unbiased bases, the requirement reduces to $\sum_{i,j} r_{i} r_{j}=0$, which already holds because $\sum_{i} r_{i}= 0$. In similar fashion we have $\sum_{i,j} s_{i} s_{j} =0$ and $\sum_{i,j} r_{i} s_{j}= 0$. Using the same coefficient vectors $\vec{r}$ and $\vec{s}$, we can continue in a similar manner to suitably define $\hat{\alpha}_5$ and $\hat{\alpha}_6$ using $\{\theta_k,k=1,2,3\}$, and $\hat{\alpha}_7$ and $\hat{\alpha}_8$ using $\{\xi_l,l=1,2,3\}$. \\
\subsection{Physical interpretation}
The exact form of the MUBs $\{\{\psi_i\},\{\phi_j\}, \{\theta_k\}, \{\xi_l\}, i, j, k, l=1,2,3\}$ was given by Bandyopadhyay et al. \cite{band}; they are of the form
{\small
\begin{align*}
B'_{1} = \left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{array}\right),
\hspace{1mm}
B'_{2} = \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & \omega^{2} & \omega\\
1 & \omega & \omega^{2} \\
\end{array}\right),
\end{align*}
}
{\small
\begin{align*}
B'_{3} = \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1 & 1 & 1 \\
\omega & 1 & \omega^{2} \\
1 & \omega & \omega^{2} \\
\end{array}\right),
\hspace{1mm}
B'_{4} = \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1 & 1 & 1 \\
\omega^{2} & \omega & 1 \\
1 & \omega & \omega^{2} \\
\end{array}\right),
\end{align*}
}
where columns of $B'_1, B'_2, B'_3$ and $B'_4$ are $\{\psi_i\}, \{\phi_j\}, \{\theta_k\}$ and $\{\xi_l\}$ respectively, and $\omega= e^{2\pi i/3}$.
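The unbiasedness of the four bases above can be verified numerically; the following stdlib-Python sketch (illustrative only) checks $|\langle \cdot|\cdot \rangle| = 1/\sqrt{3}$ for all pairs of vectors drawn from distinct bases:

```python
import cmath, math

w = cmath.exp(2j * cmath.pi / 3)
n = 1 / math.sqrt(3)

# Rows typed from B'_1 ... B'_4; columns are the basis vectors.
B1p = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B2p = [[n, n, n], [n, n * w**2, n * w], [n, n * w, n * w**2]]
B3p = [[n, n, n], [n * w, n, n * w**2], [n, n * w, n * w**2]]
B4p = [[n, n, n], [n * w**2, n * w, n], [n, n * w, n * w**2]]

def col(B, j):
    return [row[j] for row in B]

def overlap(a, b):
    return abs(sum(x.conjugate() * y for x, y in zip(a, b)))

bases = [B1p, B2p, B3p, B4p]
for a in range(4):
    for b in range(a + 1, 4):
        for i in range(3):
            for j in range(3):
                assert abs(overlap(col(bases[a], i), col(bases[b], j)) - n) < 1e-12
print("the four bases are pairwise mutually unbiased")
```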
We now identify the physical observables corresponding to the operators $\hat{\alpha}_i,i=1,\ldots,8$ and discuss their implementation. Consider first $\hat{\alpha}_1$ and $\hat{\alpha}_2$. Choosing one component of $\vec{r}=(r_1,r_2,r_3)$ to vanish (say $r_2=0$), the conditions $\sum_{i} r_{i}= \sum_{i} s_{i}=0$ together with $\sum_{i} r_{i} s_{i}= 0$ reduce the choice of vectors to $\vec{r}= (r, 0, -r)$ and $\vec{s}= (s, -2s, s)$, up to an arbitrary permutation.
Examining the Fano representation of the density matrix, we see that $\tau^{1}_{0}= \sqrt{\frac{3}{2}} J_{z}$ and $\tau^{2}_{0}= \frac{3J^{2}_{z}-J^{2}}{\sqrt{2}}$, whose expectation values determine two of the expansion coefficients of the density matrix in this representation. Explicit forms of $\tau^{1}_{0}$ and $\tau^{2}_{0}$ are given by,
{\small
\begin{align*}
\tau^{1}_{0}=& {\sqrt{\frac{3}{2}}} \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1\\
\end{array} \right),
\hspace*{1mm}
\tau^{2}_{0}= \frac{1}{\sqrt 2} \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & -2 & 0\\
0 & 0 & 1\\
\end{array} \right),
\end{align*}}
\noindent where expectation values of $ \tau^{1}_{0}$ and $ \tau^{2}_{0}$ are respectively associated with the first and second order moments of $J_{z}$, and hence constitute experimentally measurable parameters. Thus we choose $\hat{\alpha_{1}}$ to be $\tau^{1}_{0}$ and $\hat{\alpha_{2}}$ to be $\tau^{2}_{0}$. In other words, $\vec{r}={\sqrt{\frac{3}{2}}}(1, 0, -1)$ and $\vec{s}= \frac{1}{\sqrt{2}}(1, -2, 1)$.
Now the first set of commuting operators in terms of projection operators associated with $B'_{1}$ is given by,
\begin{equation*}
\hat{ {\alpha_{1}}}= {\sqrt{\frac{3}{2}}} (\hat{P_{1}}-\hat{P_{3}}), \ \ \ \hat{{\alpha_{2}}}= \frac{1}{\sqrt 2} (\hat{P_{1}}-2\hat{P_{2}}+\hat{P_{3}}) .
\end{equation*}
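In the canonical basis the projectors are diagonal, so the requirements on $\hat{\alpha}_1, \hat{\alpha}_2$ reduce to conditions on the coefficient vectors alone. A stdlib-Python sketch (illustrative) confirming tracelessness and the normalization $Tr(\hat{\alpha}_i\hat{\alpha}_j)=3\,\delta_{ij}$ used in the paper's convention:

```python
import math

r = [math.sqrt(3 / 2), 0, -math.sqrt(3 / 2)]                 # alpha_1 coefficients
s = [1 / math.sqrt(2), -2 / math.sqrt(2), 1 / math.sqrt(2)]  # alpha_2 coefficients

# Diagonal projectors: traces reduce to dot products of coefficient vectors.
assert abs(sum(r)) < 1e-12 and abs(sum(s)) < 1e-12           # traceless
assert abs(sum(ri * si for ri, si in zip(r, s))) < 1e-12     # Tr(a1 a2) = 0
assert abs(sum(ri * ri for ri in r) - 3) < 1e-12             # Tr(a1^2) = 3
assert abs(sum(si * si for si in s) - 3) < 1e-12             # Tr(a2^2) = 3
```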
The bases $B'_{1}$ and $B'_{2}$ are connected by the Fourier transformation $U'$ \cite{pawel} given by,
{\small
\begin{align*}
U' = \frac{1}{\sqrt{3}} \left(
\begin{array}{ccc}
1 & 1 & 1\\
1 & \omega^{2} & \omega \\
1 & \omega & \omega^{2} \\
\end{array}\right).
\end{align*}
}
\noindent
Thus $\hat{\alpha_{3}}$ and $\hat{\alpha_{4}}$ can be written as $\hat{\alpha_{3}}= U' \hat{\alpha_{1}} U'^{\dagger}$ and $\hat{\alpha_{4}}= U' \hat{\alpha_{2}} U'^{\dagger}$.
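A short stdlib-Python check (illustrative, not part of the derivation) that conjugating $\hat{\alpha}_1, \hat{\alpha}_2$ by the Fourier matrix $U'$ yields operators orthogonal to the first pair while preserving the normalization $Tr=3$:

```python
import cmath, math

w = cmath.exp(2j * cmath.pi / 3)
n = 1 / math.sqrt(3)
U = [[n, n, n], [n, n * w**2, n * w], [n, n * w, n * w**2]]  # Fourier matrix U'

a1 = [[math.sqrt(3 / 2), 0, 0], [0, 0, 0], [0, 0, -math.sqrt(3 / 2)]]
a2 = [[1 / math.sqrt(2), 0, 0], [0, -2 / math.sqrt(2), 0], [0, 0, 1 / math.sqrt(2)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

def hs(A, B):  # Hilbert-Schmidt inner product Tr(A^dagger B)
    return sum(A[j][i].conjugate() * B[j][i] for i in range(3) for j in range(3))

a3 = matmul(matmul(U, a1), dagger(U))
a4 = matmul(matmul(U, a2), dagger(U))

for X in (a3, a4):
    for Y in (a1, a2):
        assert abs(hs(X, Y)) < 1e-12       # disjoint classes are orthogonal
assert abs(hs(a3, a3) - 3) < 1e-12         # normalization Tr = 3 preserved
assert abs(hs(a3, a4)) < 1e-12
```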
In a similar manner, transition from $B'_{2}$ to $B'_{3}$ can be obtained by one-axis twisting, $e^{-iS^{2}_{z}t}$ \cite{kit} for $t=2\pi/3$ and from $B'_{2}$ to $B'_{4}$ for $t=4\pi/3$.
The orthonormal set of commuting observables $\hat{\alpha_{i}}$, $i= 1, \ldots,8$ is given by,
\begin{align*}
\hat{\alpha_{1}}&= {\sqrt{\frac{3}{2}}}(\hat{P_{1}}-\hat{P_{3}}), \quad \hat{\alpha_{2}}= \frac{1}{\sqrt 2} (\hat{P_{1}}-2\hat{P_{2}}+\hat{P_{3}}) ,\\
\hat{\alpha_{3}}&= {\sqrt{\frac{3}{2}}}(\hat{P'_{1}}-\hat{P'_{3}}) , \quad \hat{\alpha_{4}}= \frac{1}{\sqrt 2} ( \hat{P'_{1}}-2\hat{P'_{2}}+\hat{P'_{3}}) ,\\
\hat{\alpha_{5}}&= {\sqrt{\frac{3}{2}}}(\hat{P''_{1}}-\hat{P''_{3}}) , \quad \hat{\alpha_{6}}= \frac{1}{\sqrt 2} ( \hat{P''_{1}}-2\hat{P''_{2}}+\hat{P''_{3}}) ,\\
\hat{\alpha_{7}}&= {\sqrt{\frac{3}{2}}}(\hat{P'''_{1}}-\hat{P'''_{3}}) , \quad \hat{\alpha_{8}}= \frac{1}{\sqrt 2} (\hat{P'''_{1}}-2\hat{P'''_{2}}+\hat{P'''_{3}}),
\end{align*}
where projection operators $\hat{P_{i}}, \hat{P'_{i}}, \hat{P''_{i}}, \hat{P'''_{i}}$ ( $i=1,2,3$) are respectively associated with the bases $B'_{1}, B'_{2}, B'_{3}, B'_{4}$. The new orthonormal operator basis is explicitly given by,
{\small
\begin{align*}
\hat{{\alpha}_{1}}&= {\sqrt{\frac{3}{2}}} \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1\\
\end{array} \right),
\qquad
\hat{\alpha_{2}}= \frac{1}{\sqrt 2} \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & -2 & 0\\
0 & 0 & 1\\
\end{array} \right),\\
\hat{\alpha_{3}}&= {\frac{1}{\sqrt{2}}} \left(\begin{array}{ccc}
0 & -i\omega & i\omega^{2}\\
i\omega^{2} & 0 & -i\omega\\
-i\omega & i\omega^{2} & 0\\
\end{array} \right),
\quad
\hat{\alpha_{4}}= \frac{1}{\sqrt 2}\left(\begin{array}{ccc}
0 & -\omega & -\omega^{2}\\
-\omega^{2} & 0 & -\omega\\
-\omega & -\omega^{2} & 0\\
\end{array} \right),\\
\hat{\alpha_{5}}&={\frac{1}{\sqrt{2}}} \left(\begin{array}{ccc}
0 & -i & i\omega^{2} \\
i & 0 & -i \omega^{2} \\
-i \omega & i \omega & 0\\
\end{array} \right),
\quad
\hat{\alpha_{6}}=\frac{1}{\sqrt 2} \left(\begin{array}{ccc}
0 & -1 & -\omega^{2}\\
-1 & 0 & -\omega^{2} \\
-\omega & -\omega & 0\\
\end{array} \right),\\
\hat{\alpha_{7}}& ={\frac{1}{\sqrt{2}}} \left(\begin{array}{ccc}
0 & -i \omega^{2} & i\omega^{2} \\
i \omega & 0 & -i\\
-i \omega & i & 0\\
\end{array} \right),
\quad
\hat{\alpha_{8}}= {\frac{1}{\sqrt{2}}} \left(\begin{array}{ccc}
0 & -\omega^{2} & -\omega^{2}\\
-\omega & 0 & -1\\
-\omega & -1 & 0\\
\end{array} \right).
\end{align*}
}
\noindent
The new operator basis provides an expansion of $\rho$:
\begin{equation*}
\rho= \frac{1}{3}(I + \sum \limits_{i=1}^{8} a_{i} \hat{\alpha_{i}}),
\end{equation*}
where $a_{i}= Tr(\rho \hat{\alpha_{i}})$. The expansion based on the operators constructed using the MUBs in a certain sense constitutes an optimal measurement strategy---complete state determination amounts to determining the $a_i$, which is optimally done using the maximally commuting orthogonal operators $\hat{\alpha}_i, i=1,\ldots,8$.
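The completeness of this expansion can be verified numerically. The stdlib-Python sketch below (illustrative; the pure test state is an arbitrary choice of ours) builds all eight operators from the four bases using the coefficient vectors $\vec{r}$ and $\vec{s}$, and reconstructs a sample density matrix from the coefficients $a_i$:

```python
import cmath, math

w = cmath.exp(2j * cmath.pi / 3)
n = 1 / math.sqrt(3)
bases = [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[n, n, n], [n, n * w**2, n * w], [n, n * w, n * w**2]],
    [[n, n, n], [n * w, n, n * w**2], [n, n * w, n * w**2]],
    [[n, n, n], [n * w**2, n * w, n], [n, n * w, n * w**2]],
]
r = [math.sqrt(3 / 2), 0, -math.sqrt(3 / 2)]
s = [1 / math.sqrt(2), -2 / math.sqrt(2), 1 / math.sqrt(2)]

def proj(v):  # |v><v|
    return [[v[i] * v[j].conjugate() for j in range(3)] for i in range(3)]

def combo(B, c):  # sum_i c_i |b_i><b_i| from the columns of B
    cols = [[row[j] for row in B] for j in range(3)]
    P = [proj(v) for v in cols]
    return [[sum(c[k] * P[k][i][j] for k in range(3)) for j in range(3)] for i in range(3)]

alphas = [combo(B, c) for B in bases for c in (r, s)]   # eight operators

# Sample pure state rho = |psi><psi| (arbitrary test vector).
v = [1, 2, 3j]
norm = math.sqrt(sum(abs(x) ** 2 for x in v))
psi = [x / norm for x in v]
rho = proj(psi)

def tr_prod(A, B):  # Tr(A B) for 3x3 matrices
    return sum(A[i][k] * B[k][i] for i in range(3) for k in range(3))

a = [tr_prod(rho, al).real for al in alphas]
recon = [[(1 if i == j else 0) / 3 + sum(a[m] * alphas[m][i][j] for m in range(8)) / 3
          for j in range(3)] for i in range(3)]
assert all(abs(recon[i][j] - rho[i][j]) < 1e-10 for i in range(3) for j in range(3))
print("rho reconstructed exactly from the eight MUB observables")
```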
\subsection{Physical realization} In addition to the dipole magnetic field in the Stern--Gerlach apparatus, if one applies an external electric quadrupole field, the resulting Hamiltonian in the multipole expansion is given by,
\begin{equation*}
\hat{\mathcal{H}}= \sum \limits_{k=0}^{2} \sum \limits_{q= -k}^{+k} h^{k}_{q} {\tau^{k}_{q}}^{\dagger}.
\end{equation*}
When the electric quadrupole field with asymmetry parameter $\eta=0$ is along the $z$-axis of the Principal Axes Frame (PAF) of the quadrupole tensor and the dipole magnetic field is oriented along the same $z$-axis \cite{swarna}, $\hat{\mathcal{H}}$ takes the form
\begin{equation*}
\hat{\mathcal{H}}= \sum \limits_{k=0}^{2} h^{k}_{0} \tau^{k}_{0}.
\end{equation*}
In this experimental setup, one can measure the expectation values of $\hat{\alpha_{1}}$ and $\hat{\alpha_{2}}$. Implementation of the unitary transformations, namely the Fourier transform and one-axis twisting, in the lab leads to the measurement of all the observables $\hat{\alpha_{3}},\hat{\alpha}_4,\ldots,\hat{\alpha_{8}}$.\\
\section{Construction for a spin-3/2 system}
For a spin-3/2 system, the MUBs comprise five basis sets given by,
{\small
\begin{align*}
B''_{1} = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array}\right),
\qquad
B''_{2} = \frac{1}{2} \left(
\begin{array}{cccc}
1 & 1 & 1 &1 \\
1 & -1 & 1& -1 \\
1& 1& -1 &-1\\
1& -1& -1 & 1\\
\end{array}\right),
\qquad
B''_{3} = \frac{1}{2} \left(
\begin{array}{cccc}
1 & 1 & 1 & 1\\
i & -i & i & -i \\
i & i & -i &-i \\
-1 & 1 & 1 & -1\\
\end{array}\right),
\end{align*}
}
{\small
\begin{align*}
B''_{4} = \frac{1}{2} \left(
\begin{array}{cccc}
1 & 1&1 & 1\\
i & -i & i& -i \\
1 & 1 & -1 & -1\\
-i & i & i&-i\\
\end{array}\right),
\qquad
B''_{5} = \frac{1}{2} \left(
\begin{array}{cccc}
1 & 1 & 1 & 1 \\
1 & -1& 1& -1\\
i & i & -i & -i \\
-i & i & i & -i\\
\end{array}\right) .
\end{align*}
}
Thus we construct five sets of mutually disjoint, maximally commuting operators $\hat{\beta}_{i}$, $i= 1, 2, \ldots,15$. The Fano expansion of the spin-3/2 density matrix contains three diagonal operators $\tau^{1}_{0}$, $\tau^{2}_{0}$ and $\tau^{3}_{0}$ in the $|3/2 \ m \rangle$ basis, where $m= -3/2, \ldots,+3/2$. Along the lines of what was done for the spin-1 case, we choose $\hat{\beta_{1}}$, $\hat{\beta_{2}}$ and $\hat{\beta_{3}}$ to be $\tau^{1}_{0}$, $\tau^{2}_{0}$, $\tau^{3}_{0}$, given by,
{\small
\begin{align*}
\tau^{1}_{0} = \hat{\beta_{1}} = \frac{1}{\sqrt{5}} \left(
\begin{array}{cccc}
3 & 0 & 0 & 0 \\
0 & 1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -3\\
\end{array}\right),
\qquad
\tau^{2}_{0}= \hat{\beta_{2}} = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & 1\\
\end{array}\right),
\end{align*}
}
{\small
\begin{align*}
\tau^{3}_{0}= \hat{\beta_{3}} = \frac{1}{\sqrt{5}} \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -3 & 0 & 0\\
0 & 0 & 3 & 0\\
0 & 0 & 0 & -1\\
\end{array}\right),
\end{align*}
}
\noindent
where $\tau^{3}_{0}= \frac{1}{3\sqrt{5}} [4J_{z}^{3}- (J_{z} J^{2}_{x}+ J^{2}_{x} J_{z}+ J_{x} J_{z} J_{x}) - (J_{z} J^{2}_{y}+J^{2}_{y} J_{z}+J_{y} J_{z} J_{y})]$.
In terms of projection operators obtained from the canonical basis,
\begin{eqnarray*}
\hat{\beta_{1}}= 1/{\sqrt{5}} (3\hat{P_{1}}+\hat{P_{2}}-\hat{P_{3}}-3\hat{P_{4}}), \\
\hat{\beta_{2}}= \hat{P_{1}}-\hat{P_{2}}-\hat{P_{3}}+\hat{P_{4}}, \\
\hat{\beta_{3}}= 1/{\sqrt{5}} (\hat{P_{1}}-3\hat{P_{2}}+ 3\hat{P_{3}}-\hat{P_{4}}).
\end{eqnarray*}\\
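A quick stdlib-Python check (illustrative) that these diagonal combinations are traceless and orthonormal in the convention $Tr(\hat{\beta}_i\hat{\beta}_j)=4\,\delta_{ij}$:

```python
import math

c = 1 / math.sqrt(5)
beta1 = [3 * c, c, -c, -3 * c]   # diagonal of beta_1
beta2 = [1, -1, -1, 1]           # diagonal of beta_2
beta3 = [c, -3 * c, 3 * c, -c]   # diagonal of beta_3

betas = [beta1, beta2, beta3]
for b in betas:
    assert abs(sum(b)) < 1e-12                            # traceless
for i in range(3):
    for j in range(3):
        dot = sum(x * y for x, y in zip(betas[i], betas[j]))
        assert abs(dot - (4 if i == j else 0)) < 1e-12    # Tr = 4 delta_ij
```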
Similarly, the remaining four sets of operators, each containing three commuting operators, are constructed from their respective projection operators employing the same linear combinations as above.\\
Thus the spin-3/2 density matrix can be expanded in the new basis as
\begin{equation*}
\rho= \frac{1}{4}(I + \sum \limits_{i=1}^{15} b_{i} \hat{\beta_{i}}).
\end{equation*}
With the suitable application of a quadrupole electric field and dipole and octupole magnetic fields, one can measure $\hat{\beta_{1}}$, $\hat{\beta_{2}}$ and $\hat{\beta_{3}}$. As the unitary transformations connecting the different MUB sets are known, implementation of these transformations results in the measurement of the rest of the observables.
\section{Construction for a spin-2 system}
For a spin-2 system, with $\omega=e^{2 \pi i/5}$, the six sets of MUBs are given by
{\small
\begin{align*}
B_{1} &= \left(
\begin{array}{ccccc}
1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1\\
\end{array}\right),
\qquad \quad
B_{2} = \frac{1}{\sqrt{5}} \left(
\begin{array}{ccccc}
1 & 1 & 1 & 1 & 1 \\
1 & \omega & {\omega}^{2} & {\omega}^{3} & {\omega}^{4}\\
1 & {\omega}^{2} & {\omega}^{4} & {\omega} & {\omega}^{3} \\
1 & {\omega}^{3} & {\omega} & {\omega}^{4} & {\omega}^{2} \\
1 & {\omega}^{4} & {\omega}^{3} &{\omega}^{2} & {\omega} \\
\end{array}\right),
\\
B_{3}& = \frac{1}{\sqrt{5}} \left(
\begin{array}{ccccc}
1 & 1 & 1 & 1 & 1 \\
\omega & \omega^{2} & \omega^{3} & \omega^{4}& 1\\
{\omega}^{4} & \omega& \omega^{3} & 1 & \omega^{2} \\
{\omega}^{4} & \omega^{2} & 1 & \omega^{3} & \omega \\
{\omega} & 1 & \omega^{4} & \omega^{3} & \omega^{2}\\
\end{array}\right),
\quad
B_{4} = \frac{1}{\sqrt{5}} \left(
\begin{array}{ccccc}
1 & 1& 1 & 1 & 1\\
\omega^{2} & \omega^{3}& \omega^{4} & 1 & \omega\\
{\omega}^{3} & 1&\omega^{2} & \omega^{4} & \omega \\
{\omega}^{3} & \omega & \omega^{4} & \omega^{2} & 1\\
{\omega}^{2} & \omega & 1 & \omega^{4} & \omega^{3}\\
\end{array}\right),
\\
B_{5} &= \frac{1}{\sqrt{5}} \left(
\begin{array}{ccccc}
1 & 1 & 1& 1& 1\\
\omega^{3} & \omega^{4}& 1& \omega& \omega^{2} \\
{\omega}^{2} & \omega^{4}&\omega & \omega^{3}& 1\\
{\omega}^{2} & 1 & \omega^{3} & \omega & \omega^{4}\\
{\omega}^{3} &\omega^{2} & \omega & 1 & \omega^{4} \\
\end{array}\right),
\quad
B_{6} = \frac{1}{\sqrt{5}} \left(
\begin{array}{ccccc}
1 & 1& 1& 1& 1 \\
\omega^{4} & 1 & {\omega} & {\omega}^{2}& {\omega}^{3}\\
{\omega}& {\omega}^{3} & 1 & {\omega}^{2}& {\omega}^{4}\\
{\omega}&{\omega}^{4} &{\omega}^{2} & 1 & {\omega}^{3}\\
{\omega}^{4}& {\omega}^{3}& {\omega}^{2}& {\omega} & 1\\
\end{array}\right).
\end{align*}
}
In this case, the four diagonal Fano spherical tensors, in terms of the projection operators, are given by,
\begin{eqnarray*}
\tau^{1}_{0}= \hat{\gamma_{1}}= {\frac{1}{\sqrt{2}}} (2\hat{P_{1}}+\hat{P_{2}}-\hat{P_{4}}-2\hat{P_{5}}), \\
\tau^{2}_{0}= \hat{\gamma_{2}}= {\sqrt{\frac{5}{14}}}(2\hat{P_{1}}-\hat{P_{2}}-2\hat{P_{3}}-\hat{P_{4}}+2\hat{P_{5}}), \\
\tau^{3}_{0}= \hat{\gamma_{3}}= {\frac{1}{\sqrt{2}}}(\hat{P_{1}}-2\hat{P_{2}}+2\hat{P_{4}}-\hat{P_{5}}), \\
\tau^{4}_{0}= \hat{\gamma_{4}}= {\frac{1}{\sqrt{14}}} (\hat{P_{1}}-4\hat{P_{2}}+6\hat{P_{3}}-4\hat{P_{4}}+\hat{P_{5}}).
\end{eqnarray*}
In a similar fashion the remaining five sets of commuting operators can be obtained by applying the unitary transformations connecting the angular momentum basis to the rest of the MUBs. Consequently, the spin-2 density matrix can now be expressed as
\begin{equation*}
\rho= \frac{1}{5}(I + \sum \limits_{i=1}^{24} c_{i} \hat{\gamma_{i}}).
\end{equation*}
\section{Concluding remarks}
We have provided a mechanism to construct mutually disjoint, maximally commuting operators for dimensions where MUBs are known to exist. Since these operators are maximally commuting, measurements with them correspond to optimal determination of the parameters characterizing the density matrix of the state of a system.
Our construction rests on a key observation that the Pauli operators used in a Stern-Gerlach experiment for spin-1/2 particles are linear combinations of projection operators constructed from different MUBs. Leveraging this observation, we construct Pauli-like operators with eigenbases that are MUBs for spin-1, spin-3/2 and spin-2 systems. For prime and prime power dimensions $d$ (where $d= 2j+1$), using the fact that there always exists a complete set of $d+1$ MUBs, we have constructed $d+1$ sets of mutually disjoint operators, containing $d-1$ commuting operators in each set in the following manner:
\begin{enumerate}
\item Consider the first set of MUBs as the canonical basis.
\item Consider an orthonormal set of $d^{2}$ matrices, with $d$ diagonal matrices which include the identity $I$.
For example, if the angular momentum basis is used as the canonical basis, then the diagonal matrices can be identified as the Fano spherical tensor operators $\tau^{k}_{0}$ with matrix elements $\langle jm |\tau^{k}_{0} | jm \rangle= \sqrt{2k+1}\, C(jkj; m0m)$, where $k=0,\ldots, d-1$ and $C(jkj; m0m)$ are the Clebsch--Gordan coefficients.
\item Express each diagonal matrix (excluding identity) as a linear combination of projection operators of the canonical basis.
\item Identify the unitary transformations $U_{1}, U_{2}, \ldots, U_{d}$ that connect the first set of MUBs with the rest of MUB states.
\item Implement $U_{i} \tau^{k}_{0} U^{\dagger}_{i}$, where $i= 1, \ldots, d$ and $k=1, \ldots, d-1$, to generate the complete set of mutually disjoint, maximally commuting operators.
\end{enumerate}
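The five steps above can be sketched end-to-end for an odd prime dimension. The stdlib-Python illustration below uses $d=5$; the quadratic-phase bases are the standard odd-prime MUB construction, and the cosine/sine coefficient vectors are a convenient stand-in (our simplifying assumption) for the spherical-tensor coefficients. Operators within a class share an eigenbasis and therefore commute by construction.

```python
import cmath, math

d = 5  # any odd prime works in this construction
w = cmath.exp(2j * cmath.pi / d)
nrm = 1 / math.sqrt(d)

# Steps 1 and 4: canonical basis plus d quadratic-phase bases
# (components w^{a m^2 + b m} / sqrt(d)), a complete MUB set for odd prime d.
def basis(a):
    return [[nrm * w ** ((a * m * m + b * m) % d) for b in range(d)] for m in range(d)]

bases = [[[1 if m == b else 0 for b in range(d)] for m in range(d)]]
bases += [basis(a) for a in range(d)]

# Steps 2 and 3: d - 1 real, traceless, mutually orthogonal coefficient vectors
# (cosine/sine combinations orthogonal to the all-ones vector), normalized to d.
coeffs = []
for k in range(1, (d + 1) // 2):
    coeffs.append([math.sqrt(2) * math.cos(2 * math.pi * k * m / d) for m in range(d)])
    coeffs.append([math.sqrt(2) * math.sin(2 * math.pi * k * m / d) for m in range(d)])

def operator(B, c):  # sum_i c_i |b_i><b_i| built from the columns of B
    cols = [[row[j] for row in B] for j in range(d)]
    return [[sum(c[k] * cols[k][i] * cols[k][j].conjugate() for k in range(d))
             for j in range(d)] for i in range(d)]

# Step 5: (d + 1) classes of (d - 1) commuting operators each.
ops = [[operator(B, c) for c in coeffs] for B in bases]
flat = [O for cls in ops for O in cls]
assert len(flat) == (d + 1) * (d - 1) == d * d - 1

def hs(A, B):  # Tr(A^dagger B)
    return sum(A[j][i].conjugate() * B[j][i] for i in range(d) for j in range(d))

for i, A in enumerate(flat):
    for j, B in enumerate(flat):
        target = d if i == j else 0
        assert abs(hs(A, B) - target) < 1e-9   # orthonormal: Tr = d delta_ij
print("constructed", len(flat), "mutually disjoint, maximally commuting operators")
```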
Inspection of our method reveals that the main requirement for extension to higher-order spin systems and arbitrary density matrices representing non-spin systems is that the MUBs are known to exist and are available. For non-spin systems, expansion of the density matrix $\rho$ in an operator basis different from the spherical tensors can be considered. Physical realization then amounts to the identification of a suitable Hamiltonian that plays a role analogous to the multipole fields used in spin-$j$ systems.
Spin-5/2 is an intriguing case as it corresponds to the lowest composite dimension $d=6$ that is not a power of a prime for which the existence of a complete set of MUBs is yet to be established. It has been conjectured by Zauner \cite{zan} that for $d=6$ the maximum number of MUB sets is three. If the conjecture is true, then one cannot construct seven sets of mutually disjoint, maximally commuting set of operators.
There have been numerous attempts to detect entanglement/correlations by using a minimal number of experimentally viable local measurements. Recent works \cite{chris, bin} show that MUBs, as well as Mutually Unbiased Measurements (MUMs), can be efficiently used to detect entanglement in bipartite, multipartite and higher-dimensional systems. In principle, our method of constructing mutually disjoint, maximally commuting sets of operators may be harnessed to detect entanglement, since the eigenbases of our operators are MUBs. Since the constructed operators are maximally commuting, the detection mechanism would require a minimal number of measurements. This work will be taken up elsewhere.
\section*{Appendix}
The density matrix for a spin-$j$ system can be represented as
\begin{equation*}
\rho(\mathbf{J}):=\rho := \frac{1}{(2j+1)} \sum_{k=0}^{2j}\sum_{q=-k}^k t^k_q \tau^{k^\dagger}_q,
\end{equation*}
where $\tau^k_q$ are irreducible tensor operators of rank $k$ in the $(2j+1)$-dimensional spin space, with projection $q$ along the axis of quantization in real 3-dimensional space. The matrix elements of $\tau^{k}_{q}$ are
\begin{equation*}
\langle jm' |\tau^{k}_{q}(\vec{J}) | jm \rangle= [k] C(jkj; mqm'),
\end{equation*}
where $C(jkj;mqm')$ are the Clebsch--Gordan coefficients and $[k]=\sqrt{2k+1}$. $\tau^{k}_{q}$s satisfy orthogonality and symmetry relations,
\begin{equation*}
Tr({\tau^{k^{\dagger}}_{q}\tau^{k^{'}}_{q^{'}}})= (2j+1)\,\delta_{kk^{'}} \delta_{qq^{'}}, \quad \tau^{k^{\dagger}}_{q} = (-1)^{q}\tau^{k}_{-q},
\end{equation*}
where the normalization has been chosen so as to be in agreement with Madison convention.
The Fano statistical tensors or the spherical tensor parameters $t^k_q$ parametrize the density matrix $\rho$ as expectation values of $\tau^k_q$: $Tr(\rho\tau^k_q)=t^k_q$. Because of hermiticity of $\rho$, ${t^{k}_{q}}^{*}= (-1)^{q} t^{k}_{-q}$.
The importance of the irreducible spherical tensor operators lies in the fact that they can be constructed as symmetrized products of the angular momentum operators $\mathbf{J}$ following the well-known Weyl construction,
\begin{equation*}
\tau_{q}^{k}(\mathbf{J}) = \mathcal {N}_{kj}\,(\mathbf{J}\cdot \vec{\bf{\nabla}})^k \,r^{k} \,{Y}^{k}_{q}(\hat{r}),
\end{equation*}
where
\begin{equation*}
\mathcal {N}_{kj}= {\frac{2^{k}}{k!}}{\sqrt{\frac{4\pi(2j-k)!(2j+1)}{(2j+k+1)!}}},
\end{equation*}
are the normalization factors and ${Y}^{k}_{q}(\hat {r})$ are the spherical harmonics. The tensor operators are traceless but not Hermitian, and cannot in general be identified with generators of $SU(N)$. The operators $\tau^{k}_{0}$, however, are physical observables with a direct physical interpretation: their expectation values correspond to the statistical moments and are thus measurable physical quantities.
\section*{Acknowledgements}
HSS thanks the Department of Science and Technology (DST), India for the grant of an INSPIRE Fellowship. KB's research is partially supported by NSF DMS 1613054 and NIH R01 CA214955.
\vspace*{-10pt}
\section*{References}
\chapter{Background}
In this chapter we compare StarGAN against recent facial expression studies conducted with different approaches. These are the baseline models that perform image-to-image translation \cite{kollias8,kollias9} across different domains, and we discuss how StarGAN improves upon them.
We describe the four baseline models below:
\begin{itemize}
\item \textbf{DIAT}\cite{li2016deep} or Deep identity-aware transfer uses an adversarial loss to learn the mapping from i $\in$ I to j $\in$ J, where i and j are face images in two different domains I and J, respectively.
This method adds a regularization term on the mapping as $\|i- F(G(i))\|_{1}$ to preserve identity features of the source image, where F is a feature extractor pretrained on a face recognition task.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{background/fig/DIAT.png}
\caption{General Overview of a DIAT model}
\label{fig:diat}
\end{figure}
\item \textbf{CycleGAN}\cite{zhu2017unpaired} uses an adversarial loss to learn the mapping between two different domains I and J. This method regularizes the mapping via cycle consistency losses, $\|i-(G_{JI}(G_{IJ}(i)))\|_{1}$ and $\|j-(G_{IJ}(G_{JI}(j)))\|_{1}$.
This method requires a separate generator and discriminator for every ordered pair of domains, i.e., $n(n-1)$ generator-discriminator pairs for $n$ different domains. So, in our case with 7 expression domains, we would need 42 generator-discriminator pairs.
\item \textbf{IcGAN}\cite{perarnau2016invertible} or Invertible Conditional GAN basically combines an encoder with a cGAN \cite{mirza2014conditional} model. cGAN learns the mapping $G :$ \textit{\{z,c\}} $\longrightarrow i$ that generates an image i conditioned on both the latent vector z and the conditional vector c. On top of that the IcGAN introduces the encoder to learn the inverse mappings of cGAN, $E_{z}:i\longrightarrow z$ and $E_{c}:i\longrightarrow c$. This allows IcGAN to synthesis images by only changing the conditional vector and preserving the latent vector.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{background/fig/IcGAN.png}
\caption{An IcGAN model consisting of Encoder and a conditional GAN generator}
\label{fig:icgan}
\end{figure}
\item \textbf{DiscoGAN}\cite{kim2017learning} is the foundational baseline for StarGAN, as DiscoGAN introduces cross-domain relations. DiscoGAN learns the relations between different unpaired domains, assigning a different label value to each domain.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{background/fig/DiscoGAN.png}
\caption{General Overview of a DiscoGAN consisting of Discriminator(D) and Generator(G)}
\label{fig:discogan}
\end{figure}
\end{itemize}
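The cycle-consistency idea used by CycleGAN above can be illustrated with a short, self-contained sketch (not part of the project code): two toy linear "generators" that invert each other yield a near-zero reconstruction loss $\|i-(G_{JI}(G_{IJ}(i)))\|_{1}$, while a mismatched pair does not.

```python
import numpy as np

def cycle_consistency_loss(x, g_ij, g_ji):
    """L1 cycle loss ||x - G_JI(G_IJ(x))||_1 for a batch of flattened images."""
    reconstructed = g_ji(g_ij(x))
    return np.mean(np.abs(x - reconstructed))

# Toy "generators": linear maps that are exact inverses of each other,
# so the cycle loss should be numerically zero.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
g_ij = lambda x: x @ A.T
g_ji = lambda x: x @ np.linalg.inv(A).T

x = np.array([[1.0, 2.0], [3.0, 4.0]])
loss = cycle_consistency_loss(x, g_ij, g_ji)  # ~0 for perfect inverses
```

Replacing the inverse map with, e.g., the identity leaves a strictly positive loss, which is exactly the signal CycleGAN uses to regularize its mappings.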
\chapter{Conclusion}
In conclusion, the dataset is successfully classified into the 7 basic expressions based on its valence arousal scores.\\
We first preprocess the data using face detection, alignment and annotation, after which we divide it into a training and a testing set. We then feed the dataset into the StarGAN model, which trains for 20000 iterations over around 250K frames/images. Finally, we test the pretrained model on some sample subjects and examine how accurately the images are generated according to the valence arousal scores of each emotion \cite{kollias10}.\\
The existing model can be further improved by increasing the number of iterations, which can produce even better accuracy on the test set.\\ We can also increase the number of subjects in the training dataset. For example, we used around 500 different identities for training; this number can be increased further during dataset collection and preprocessing.
\chapter{Method Implementation and Results}
In this chapter we apply the existing StarGAN model to the dataset we generated. We train the discriminator and generator with the images and finally test the trained model on images from our dataset.
\section{Model Architecture}
We use the model architecture specified in the StarGAN paper\cite{choi2018stargan} and train the generator and the discriminator accordingly.\\
For the generator network, we use instance normalization\cite{ulyanov2016instance}
in all layers except the last output layer. For the discriminator network, we use Leaky ReLU\cite{xu2015empirical} with a negative slope of
0.01.\\
The generator network architecture is given below:
\begin{figure}[H]
\centering
\includegraphics[width=1.1\textwidth]{eval/fig/GenArch.JPG}
\caption{Generator Network Architecture}
\label{fig:genArch}
\end{figure}
\pagebreak
The discriminator network architecture is given below:
\begin{figure}[H]
\centering
\includegraphics[width=1.1\textwidth]{eval/fig/DiscArch.JPG}
\caption{Discriminator Network Architecture}
\label{fig:discArch}
\end{figure}
\vspace{-20mm}
\section{Training the dataset}
For the \textbf{Generator}, the total number of parameters is 8,436,800.\\
For the \textbf{Discriminator}, the total number of parameters is 44,735,424.\\
The models are trained using Adam \cite{kingma2014adam} with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$. We perform one generator update after five discriminator updates, as in Wasserstein GANs\cite{gulrajani2017improved}. The batch size is set to 16 during training.\\
To produce higher quality images and improve the training process we generate the adversarial loss with the Wasserstein GANs\cite{gulrajani2017improved} objective with gradient penalty\cite{arjovsky2017wasserstein} which can be defined as:
\begin{equation}
\begin{split}
L_{adv} = \E_{x}[D_{src}(x)]-\E_{x,c}[D_{src}(G(x,c))]\\
- \lambda_{gp}\E_{\hat{x}}[(\| \nabla_{\hat{x}}D_{src}(\hat{x})\|_{2} - 1)^{2}]
\end{split}
\end{equation}
where $\hat{x}$ is sampled uniformly along a straight line between a pair of a real and a generated image.\\
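As a numeric illustration of the objective above (not the actual training code), the following sketch evaluates the three loss terms with a linear critic $D_{src}(x) = w^{\top}x$, for which the gradient $\nabla_{\hat{x}} D_{src}(\hat{x})$ is simply $w$; all sample data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
lambda_gp = 10.0          # gradient-penalty weight, as in WGAN-GP
w = np.array([0.6, 0.8])  # linear critic D_src(x) = w @ x, so grad_x D_src = w

def d_src(x):
    return x @ w

real = rng.normal(1.0, 0.1, size=(16, 2))   # stand-in "real" samples
fake = rng.normal(0.0, 0.1, size=(16, 2))   # stand-in generator output G(x, c)

# Interpolate uniformly between paired real and fake points (the x-hat samples).
eps = rng.uniform(size=(16, 1))
x_hat = eps * real + (1 - eps) * fake

# For a linear critic the gradient at every x_hat is just w, so the
# gradient penalty reduces to (||w||_2 - 1)^2.
grad_norm = np.linalg.norm(w)
penalty = lambda_gp * np.mean((grad_norm - 1.0) ** 2)

loss_adv = d_src(real).mean() - d_src(fake).mean() - penalty
```

With $\|w\|_2 = 1$ the penalty vanishes, which is precisely the soft constraint the gradient-penalty term enforces on the critic.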
After training the entire dataset we get the final training samples as illustrated in the following page:
\pagebreak
\begin{center}
\begin{tabular}{ c c c c c c c c}
Input & Angry & Disgust & Fear & Happy & Neutral & Sad & Surprised
\end{tabular}
\end{center}
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{eval/fig/20000-images.jpg}
\caption{Training Samples depicting the 7 different expressions}
\label{fig:train}
\end{figure}
\section{Testing the dataset}
After training on the entire dataset and generating the 7 different expressions for the images, we finally test the trained model on a few subjects to check its accuracy. We split the dataset into a training and a testing set of 90 percent and 10 percent, respectively.\\
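The 90/10 split can be sketched as follows; the frame file names here are placeholders, not the actual dataset paths.

```python
import random

def train_test_split(items, test_fraction=0.1, seed=42):
    """Shuffle and split a list of frame paths into train/test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_test = max(1, int(len(items) * test_fraction))
    return items[n_test:], items[:n_test]

frames = [f"frame_{i:05d}.jpg" for i in range(1000)]  # placeholder file names
train, test = train_test_split(frames)
# len(train) == 900, len(test) == 100, with no frame in both sets
```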
For subject 1
\vspace{-60mm}
\begin{center}
\begin{tabular}{ c c c c c c c c}
Input & Angry & Disgust & Fear & Happy & Neutral & Sad & Surprised
\end{tabular}
\end{center}
\vspace{-87mm}
\begin{figure}[H]
\centering
\includegraphics[width=0.80\textwidth]{eval/fig/Test1.png}
\caption{Test Results for Subject 1}
\label{fig:rwoman}
\end{figure}
\pagebreak
For subject 2
\begin{center}
\begin{tabular}{ c c c c c c c c}
Input & Angry & Disgust & Fear & Happy & Neutral & Sad & Surprised
\end{tabular}
\end{center}
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[width=0.80\textwidth]{eval/fig/Test2.png}
\caption{Test Results for Subject 2}
\label{fig:rwoman2}
\end{figure}
As the results show, the trained model readily generates the 7 basic expressions for a given input image.\\
\textbf{Evaluation of the Discriminator} after 20000 iterations is:\\
Iteration [20000/20000],\\
D/loss\textunderscore real: -69.1152, D/loss\textunderscore fake: 22.4376, D/loss\textunderscore cls: 0.0002, D/loss\textunderscore gp: 1.2612,\\
\textbf{Evaluation of the Generator} after 20000 iterations is:\\
Iteration [20000/20000],\\
G/loss\textunderscore fake: -25.4440, G/loss \textunderscore rec: 0.1537, G/loss \textunderscore cls: 1.2868.
\chapter{Introduction}
We will be using a type of Generative Adversarial Network (GAN) \cite{goodfellow2014generative}, StarGAN \cite{choi2018stargan}, in this project for emotion recognition using valence arousal scores. It is a multi-task GAN which uses the generator to produce fake images and the discriminator to distinguish real from fake images, along with emotion recognition on the basis of attributes, i.e. the valence arousal score in our ISO. Emotion recognition has applications in many fields such as medicine \cite{tagaris1,tagaris2,kollias13} and marketing.
\section{Generative Adversarial Networks(GAN)}
A GAN is a form of unsupervised learning that simultaneously trains two models: a generative model G that captures the data distribution by using a latent noise \cite{raftopoulos2018beneficial} vector, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G \cite{goodfellow2014generative}. The training procedure is such that G maximizes the probability of D making a mistake. This framework corresponds to a minimax two-player game: the two networks contest with each other in a zero-sum game framework.\\
D is trained to maximize the probability of assigning the correct label to both training examples and samples from G, while G is simultaneously trained to minimize $\log(1 - D(G(z)))$.
So the minimax game between D and G, with value function $V(D,G)$, can be written as:
\begin{equation}
\min_{G} \max_{D} V(D,G) = \E_{x\sim p_{data}(x)}[\log(D(x))] + \E_{z\sim p(z)}[\log(1-D(G(z)))]
\end{equation}
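The value function can be evaluated numerically on toy discriminator outputs; this illustrative sketch also checks the well-known equilibrium value $V = -\log 4$, reached when $D(x) = 1/2$ everywhere.

```python
import numpy as np

def value_function(d_real, d_fake):
    """Empirical V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator: high scores on real data, low on fakes.
confident = value_function(np.array([0.9, 0.95]), np.array([0.05, 0.1]))

# At the theoretical equilibrium D(x) = 1/2 everywhere, V = -log 4.
equilibrium = value_function(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

A better discriminator drives $V$ up, while the generator pushes it back down toward the equilibrium value, matching the zero-sum dynamic described above.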
A typical GAN model looks like this:
\begin{figure}[H]
\centering
\includegraphics[width=0.60\textwidth]{introduction/fig/GAN.png}
\caption{General Overview of a GAN consisting of Discriminator(D) and Generator(G)}
\label{fig:gan}
\end{figure}
In the adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the
data distribution. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is
analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.
\section{StarGAN}
StarGAN \cite{choi2018stargan} is a type of GAN which solves the problem of multi-domain image-to-image translation. Existing approaches lose robustness when applied to multiple domains, since a different network must be created for each pair of image domains; with StarGAN, a single network handles all domains.
The task of image-to-image translation is to change a
given aspect of a given image to another, e.g., changing
the facial expression of a person from neutral to happy.\\
The emotion dataset we have created has 7 different basic expressions which are the 7 labels which we will use \cite{kollias11,kollias12}.\\
The basic structure of the StarGAN can be shown as below:
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{introduction/fig/StarGAN_Overview.PNG}
\caption{General Overview of a StarGAN consisting of Discriminator(D) and Generator(G)}
\label{fig:stargan}
\end{figure}
As \textbf{attribute} we have annotated the videos in the dataset based on their valence and arousal score.\\
As \textbf{domain} we have the different videos pertaining to the same emotion which shares the same attribute value i.e., angry, sad etc.\\
\\
\textbf{Conditional} GAN-based image generation has been actively studied.
Prior studies have provided both the discriminator and generator with class information in order to generate samples conditioned on the class \cite{mirza2014conditional},\cite{odena2016semi}. This idea of conditional image generation has been successfully applied to domain transfer\cite{kim2017learning}, which is a building block for this project.\\
Recent works have achieved significant results in the area of image-to-image translation\cite{kim2017learning},\cite{isola2017image}. For instance, pix2pix \cite{isola2017image} learns this task in a supervised manner using conditional GANs\cite{mirza2014conditional}. It combines an adversarial loss with an L1 penalty loss, and thus requires paired data samples.\\
\\
The basic idea in the StarGAN approach is to train a single generator G that learns mappings among multiple domains, in our case the 7 basic expressions. To achieve this, we need to train G
to translate \cite{goudelis2013exploring} an input image $x$ to an output image $y$ conditioned
on the target domain label $c$, $G(x,c) \longrightarrow y$.
We randomly generate the target domain label $c$, i.e. from the valence or arousal score, so that the generator learns to flexibly translate the input image. An auxiliary classifier\cite{odena2017conditional} was introduced in the StarGAN approach that allows a single discriminator to control multiple domains. Thus, the discriminator produces probability distributions over both source labels and domain labels, $D: x \longrightarrow \{D_{src}(x),D_{cls}(x)\}$.
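The dual-headed discriminator output $\{D_{src}(x), D_{cls}(x)\}$ can be sketched as follows; the linear maps and feature dimensions here are illustrative stand-ins for the actual convolutional network.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def discriminator(x, w_src, w_cls):
    """Toy D: x -> (D_src(x), D_cls(x)).
    D_src is a real-valued source score (real vs. fake);
    D_cls is a probability distribution over the 7 expression domains."""
    d_source = float(x @ w_src)
    d_class = softmax(x @ w_cls)
    return d_source, d_class

x = rng.normal(size=64)            # flattened image features (illustrative)
w_src = rng.normal(size=64)        # source-score head
w_cls = rng.normal(size=(64, 7))   # one column per expression label

score, class_probs = discriminator(x, w_src, w_cls)
```

The key point is that one network produces both outputs, which is what lets a single discriminator supervise all 7 expression domains at once.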
\chapter{Collecting and Preprocessing the Dataset}
In this chapter we describe how the dataset was collected manually from internet sources as in \cite{kollias15}. Face detection was then performed on the videos and the detected faces were aligned, to make training the StarGAN easier, resulting in individual frames/images. After that, each frame/image was annotated based on its valence arousal score \cite{kollias4,kollias5}.\\
Overall, around 484 videos were collected, spanning more or less equally among the 7 different expressions, which resulted in $\sim$250K frames/images.
\section{Collection of Dataset}
We used the website \href{https://www.shutterstock.com/}{https://www.shutterstock.com/} to manually collect the videos required for this ISO.\\
The following expressions, along with \cite{doulamis1999interactive,simou2008image,simou2007fire} their synonyms, were searched exhaustively throughout the website, and the resulting videos were divided into 7 different folders:\\
\\
\textbf{happy} : glad, pleased, delighted, cheerful, ecstatic, joyful, thrilled, upbeat, overjoyed, excited, amused, astonished.\\
\textbf{neutral} : serene, calm, at ease, relaxed, inactive, indifferent, cool, uninvolved.\\
\textbf{angry} : enraged, annoyed, frustrated, furious, heated, irritated, outraged, resentful, fierce, hateful, ill-tempered, mad, infuriated, wrathful.\\
\textbf{disgust} : antipathy, dislike, loathing, sickness.\\
\textbf{fear} : horror, terror, scare, panic, nightmare, phobia, tremor, fright, creeps, dread, jitters, cold feet, cold sweat.\\
\textbf{sad} : depressed, miserable, sorry, pessimistic, bitter, heartbroken, mournful, melancholy, sorrowful, down, blue, gloomy.\\
\textbf{surprise} : awe, amazement, curious, revelation, precipitation, suddenness, unforeseen, unexpected.\\
\\
Using these searches we developed the dataset folder and hence the number of videos/identities per emotion is :\\
\begin{itemize}
\item Happy - 76 videos.
\item Neutral - 75 videos.
\item Sad - 71 videos.
\item Surprised - 70 videos.
\item Angry - 72 videos.
\item Disgust - 60 videos.
\item Fear - 60 videos.
\end{itemize}
Total Number of videos/identities: 484 videos/identities.
\\
Total number of frames: 250K approx.
\section{Preprocessing the dataset}
In this section we preprocess the dataset using detection and alignment and then annotate the frames/images based on their valence arousal score.
\subsection{Face Detection and Alignment}
The MTCNN\cite{zhang2016joint}, or Multi-Task Cascaded Convolutional Neural Network \cite{kollias6}, was used for face detection, and the five facial landmarks it predicts were used for the alignment of faces.\\
This entire pipeline of Facial detection and alignment can be explained in the algorithm below:
\begin{algorithm}[H]
\caption{Facial Detection and Alignment}\label{alg:mtcnn}
\begin{algorithmic}[1]
\State We use a fully Convolutional Network\cite{dai2016r}, here called Proposal Network (P-Net), to obtain the candidate windows and their bounding box regression vectors in a similar manner as in \cite{farfade2015multi}. Then we use the estimated bounding box regression vectors to calibrate the candidates. After that, we use non-maximum suppression (NMS)\cite{neubeck2006efficient} to merge highly overlapped candidates.
\State All the candidates are then fed to another CNN, called the Refine Network (R-Net), which further rejects a large number of false candidates, performs calibration with bounding box regression, and NMS candidates merge.
\State This step is similar to the second step, but here we attempt to describe the face in more detail. In particular, the network outputs five facial landmark positions, which are then used to align the faces.
\end{algorithmic}
\end{algorithm}
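The non-maximum suppression (NMS) step used throughout the pipeline can be sketched as a plain greedy procedure over scored boxes (an illustrative reimplementation, not the MTCNN authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box, drop candidates overlapping it
    by more than `threshold`, and repeat on the survivors."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the two overlapping windows merge into one
```

Here the first two candidate windows overlap heavily (IoU $\approx 0.68$), so only the higher-scoring one survives, while the distant third window is kept.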
The three steps are also illustrated as a flow diagram on the following page:
\begin{figure}[H]
\centering
\includegraphics[width=0.60\textwidth]{preprocessing/fig/MTCNN.png}
\caption{The three step pipeline for Face Detection and Alignment}
\label{fig:mtcnn}
\end{figure}
\pagebreak
\subsection{Annotating the dataset}
In this subsection the valence arousal score was given to each frame/image based on its positive or negative range of emotions, as in \cite{kollias1,kollias2,kollias3}. We have defined the valence arousal score in the two dimensional space\cite{schubert1999measuring,kollias7,kollias14} with values ranging from -1 to +1 for valence and arousal each.\\
The following figure illustrates the metrics which we used to measure the extent of a particular emotion and whether the emotion is positive or negative.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{preprocessing/fig/Valence-arousal.png}
\caption{Valence and Arousal metrics in 2D space}
\label{fig:va}
\end{figure}
\begin{flushleft}
\textbf{Valence score} measures whether the emotion of a particular person is positive or negative.\\
\end{flushleft}
\textbf{Arousal score} measures how calm or excited the person is, i.e. the intensity or extent of the particular emotion.\\
\\
If we consider an example:\\
For an angry image, the valence score is $\sim -0.57$ and the arousal score is $\sim 0.63$,\\ where the negative sign on the valence score indicates a negative emotion, and the high arousal score means he/she is very angry, with a lot of excitement or energy on his/her face.
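The quadrant logic of the valence-arousal space can be sketched as below; note this coarse mapping is purely illustrative, since the actual labels in this work come from manual per-frame annotation.

```python
def va_quadrant(valence, arousal):
    """Coarse quadrant of the 2D valence-arousal space (illustrative only;
    the real labels in this work come from manual per-frame annotation)."""
    if valence >= 0 and arousal >= 0:
        return "positive / high energy"   # e.g. happy, surprised
    if valence < 0 and arousal >= 0:
        return "negative / high energy"   # e.g. angry, fear
    if valence < 0 and arousal < 0:
        return "negative / low energy"    # e.g. sad
    return "positive / low energy"        # e.g. calm, relaxed

label = va_quadrant(-0.57, 0.63)  # the "angry" example from the text
```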
\\
Hence, this concludes the preprocessing of the dataset: we collected a brand new dataset and preprocessed it using detection, alignment and annotation.
\fontsize{12pt}{14.4pt}\selectfont{\textbf{1. Introduction\\}}\normalsize
Environmental stressors are among the main causes of power system disturbances worldwide \cite{panteli2015influence}. High temperatures, for instance, can limit the transfer capability of transmission lines by increasing energy losses and line sagging \cite{panteli2015influence}. Rising temperatures also disrupt demand patterns, as electricity demand has been shown to be positively correlated with extreme temperatures during summertime \cite{miller2008climate}. Preparing power systems for adverse weather is a challenging task since the frequency of these events is increasing, but their precise occurrence in time and space is not known ahead of time. Increases in power system robustness and reliability are among the most common long-term methods to guarantee the operation of the network during extreme circumstances. Generation Expansion Planning (GEP) and Transmission Expansion Planning (TEP) have been proposed as possible pathways to adapt the system to new conditions \cite{li2014electric}. The TEP problem seeks to reinforce the transmission network and provide a stable supply even under worst-case scenarios. TEP has both economic and engineering reliability objectives; this makes the problem a complex case of multi-objective optimization \cite{garcia2016dynamic}. A common approach to solving the TEP problem is the use of the DC optimal power flow (DCOPF) approximation. In this formulation, both economic dispatch and optimal power flow are considered in modeling the problem. The DC formulation offers a good approximation of the AC power flow for planning purposes, especially since it is faster and easier to solve \cite{wood2013power}. The impacts of climate change, and specifically rising ambient temperature, have been studied from the generation and demand perspective in generation and transmission expansion problems \cite{hejeejo2017probabilistic, sathaye2013estimating, mcfarland2015impacts}. 
TEP has also been solved using a decentralized approach in which the electricity network was divided into regions to account for differences in demand and supply sides \cite{hariyanto2009decentralized, de2008transmission}. On the other hand, the effects of rising temperatures on transmission lines themselves were analyzed and estimated to account for capacity reduction in \cite{bartos2016impacts}.
This study proposes a DCOPF formulation with discrete transmission decisions and regional temperature considerations to solve the transmission expansion problem. The main contribution is the inclusion of a capacity reduction factor for the transmission lines, and the division of the electricity network by considering different climate regions to serve as a bridge between those studies solving TEP and those studying the impact of climate change on transmission networks. The analysis also includes a case study of the Arizona transmission network, calculating the results for 16 different discrete scenarios to account for differences in ambient temperature expectations across regions determined from an analysis of historical data.
\pagestyle{mainpage}
\fontsize{12pt}{14.4pt}\selectfont{\textbf{2. Methodology\\}}\normalsize
For the purpose of this paper, the methodology first characterizes temperature-driven planning scenarios. This involves estimating lower and upper bounds for ambient temperature, dividing Arizona into climate regions, and determining the implications of rising temperature on the capacity of transmission lines. These data are used as inputs for the featured TEP model, which considers two types of transmission investment options: expansion via new lines and expanding the capacity of existing lines. Data collection was performed using ambient temperature historical registers provided by the National Oceanographic and Atmospheric Administration (NOAA) and its National Centers for Environmental Information \cite{ncei}. This consists of 70 years' worth of data in daily increments, reporting maximum, minimum, and average daily temperature for selected locations across the state of Arizona.
\fontsize{10pt}{12pt}\selectfont{\textbf{2.1 Definition of Climate Regions\\}}\normalsize
The climate regions utilized in this work are based on the level II ecoregions defined by the Environmental Protection Agency; note the explanation for the development of these regions is provided in \cite{omernik2014ecoregions}. Arizona was then divided into four major regions, with each county being assigned to a region based on the ecoregion which comprises the largest area within that county. Figures \ref{fig:trans_net}-\ref{fig:county} demonstrate the transmission network, climate regions and corresponding county designations respectively.
\begin{figure}[!htb]
\minipage{0.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{TransmissionMap.png}
\caption{Transmission Network}
\label{fig:trans_net}
\endminipage\hfill
\minipage{0.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{EcoregionsMap.png}
\caption{EPA Designated Ecoregions}
\endminipage\hfill
\minipage{0.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{CountyRegions.png}
\caption{County Region Definitions}
\label{fig:county}
\endminipage\hfill
\end{figure}\vspace{5mm}
\fontsize{10pt}{12pt}\selectfont{\textbf{2.2 Estimation of Temperature Bounds\\}}\normalsize
Datasets incorporating daily temperature records for the period 1950-2019 were downloaded from NOAA for four representative urban centers (Phoenix, Tucson, Flagstaff, and Douglas). The 10 highest temperatures of each of these years were averaged to avoid outliers, and then plotted to analyze the behavior of maximum temperatures over time. The time series also provided enough data to perform regression analysis on the trend for peak annual temperature. This value corresponds to the expected maximum temperature for a given year. The highest maximum temperature per year was determined by selecting the data points above the mean trend line; these values and their associated years were used as inputs to another regression analysis, which provided a linear equation for the overall maximum temperature trend. Both equations serve as a linear estimation for the future trend (30 years ahead), following the approach used in \cite{garcia2015multi}. These values were validated against the expected increase predicted by NOAA, which is often used as the reference forecast temperature in other studies. NOAA forecasts temperature increases under low and high $CO_{2}$ emissions scenarios. The low emissions scenario projects increases in temperature of 2.7°F, 3.6°F, and 4.7°F by 2035, 2055 and 2085, respectively. The expected values under high emissions are 3°F, 4.8°F, and 8°F for the same years \cite{Hayhoe2017}. By comparison, the projected trends computed from our regression analysis for Phoenix, AZ, suggest an increase of approximately 2.6°F by 2035, 3.6°F by 2055 and 5.1°F by 2085. These align closely with the intervals for the low and high emissions scenarios provided by NOAA, hence validating their suitability for this study.
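The regression step can be sketched with a degree-1 polynomial fit; the temperature series below is synthetic, generated only to stand in for the NOAA data (an upward trend of roughly 0.03°F/year plus noise).

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for the NOAA series: averaged top-10 annual maxima
# rising ~0.031 F/year, plus year-to-year noise.
years = np.arange(1950, 2020)
temps = 108.0 + 0.031 * (years - 1950) + rng.normal(0.0, 0.8, years.size)

# Linear regression on the annual series (degree-1 polynomial fit).
slope, intercept = np.polyfit(years, temps, 1)

def projected(year):
    """Evaluate the fitted linear trend at a future year."""
    return slope * year + intercept

increase_by_2035 = projected(2035) - projected(2019)
```

The same fit, applied separately to the above-trend peaks, yields the second (overall maximum) trend line described in the text.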
\fontsize{10pt}{12pt}\selectfont{\textbf{2.3 Impact of Temperature on Transmission Line Capacity\\}}\normalsize
The main consequence of rising temperatures on bulk power systems is the reduction in both transmission and generation capacity and the corresponding increase in demand \cite{bartos2016impacts, bartos2015impacts}. Considering the scope of this study, the estimation of the impact of rising temperatures is focused on electricity transmission. Following the Institute of Electrical and Electronics Engineers (IEEE) Standard 738-2006, the formula to calculate the reduction in the ampacity of the transmission lines also used in \cite{bartos2016impacts,abdalla2013weather,michiorri2015forecasting,jiang2016dispatching}, will be applied in this paper. The formula, derived from the energy balance equation, is as follows:
\begin{equation}
I=\sqrt{\frac{\pi \cdot \overline{h} \cdot D \cdot (T_{cond} - T_{amb}) + \pi \cdot \epsilon \cdot \sigma \cdot D \cdot (T_{cond}^4 - T_{amb}^4) - \delta \cdot D \cdot a_{s}}{R(T_{cond})}} \label{eqn:ampacity}
\end{equation}
where $I$ is the fractional multiplier to the rated capacity of the conductor (its ampacity), $\overline{h}$ is the average heat transfer coefficient, $D$ is the diameter of the conductor, and $T_{cond}$ and $T_{amb}$ are the average conductor temperature and ambient temperature, respectively. This first set of terms corresponds to losses due to convection. The second set of terms, corresponding to the loss due to radiation, includes $\epsilon$, the emissivity of the conductor surface, and $\sigma$, the Stefan-Boltzmann constant, together with the diameter and the conductor and ambient temperatures. The last set of terms accounts for the heat gain from solar radiation, where $\delta$ is the maximum solar radiation and $a_{s}$ is the absorptivity of the conductor surface. The dividing term $R$ corresponds to the AC resistance of the conductor evaluated at the conductor temperature $T_{cond}$.
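Equation \eqref{eqn:ampacity} can be evaluated directly; the conductor parameters below are illustrative placeholders rather than the values used in the study, chosen only to show how a 5°C rise in ambient temperature erodes transfer capacity.

```python
import math

def ampacity(t_amb, t_cond=75.0, d=0.0281, h_bar=24.0, eps=0.7,
             alpha_s=0.8, delta=1000.0, r=8.7e-5):
    """Steady-state ampacity from the energy-balance form of Eq. (1).
    Parameter values are illustrative, not from the paper: temperatures
    in C (converted to K for radiation), d in m, delta in W/m^2, r in ohm/m."""
    sigma = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
    tc_k, ta_k = t_cond + 273.15, t_amb + 273.15
    q_conv = math.pi * h_bar * d * (t_cond - t_amb)      # convective loss
    q_rad = math.pi * eps * sigma * d * (tc_k**4 - ta_k**4)  # radiative loss
    q_sun = delta * d * alpha_s                          # solar heat gain
    return math.sqrt((q_conv + q_rad - q_sun) / r)

i_40 = ampacity(40.0)
i_45 = ampacity(45.0)
reduction = 1.0 - i_45 / i_40  # fractional capacity lost per 5 C of warming
```

The ratio $I(T_{amb}^{new})/I(T_{amb}^{base})$ computed this way is exactly the kind of per-region derating captured by the coefficient $\eta_r$ in the optimization model.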
\fontsize{12pt}{14.4pt}\selectfont{\textbf{3. Model Formulation\\}}\normalsize
TEP with the DCOPF approximation can be formulated as a disjunctive, mixed-integer linear model. The objective \eqref{eqn:htls_obj} is to minimize the joint cost of generation and investment in both new lines and capacity expansion on existing lines. The full model is as follows.
\small
\begin{flalign}
& \hspace{0pt}\displaystyle \min \sum_{(i,j) \in \Omega} \left( c_{ij}y_{ij} + h_{ij}z_{ij} \right) + \sum_{n \in B} \sigma c_n g_n & \label{eqn:htls_obj} \\
& s.t. \hspace{0pt}\displaystyle \mathrlap{\sum_{(n,i)\in\Omega} \left(P_{ni}^{0} + P_{ni} \right) - \sum_{(i,n)\in\Omega} \left(P_{in}^{0} + P_{in} \right) + g_{n} = \gamma_r d_{n}} &\forall n \in r, r \in R \label{eqn:htls_balance} \\
& \hspace{0pt}\displaystyle - \eta_r \left( \overline{P}^0_{ij} - \overline{P}^1_{ij} z_{ij} \right) \leq P_{ij}^{0} \leq \eta_r \left( \overline{P}^0_{ij} + \overline{P}^1_{ij} z_{ij} \right) &\forall(i,j) \in \Omega \setminus \Psi, r \in R \label{eqn:htls_capacity} \\
& \hspace{0pt}\displaystyle - \eta_r \left(\overline{P}_{ij} y_{ij}\right) \leq P_{ij} \leq \eta_r \left( \overline{P}_{ij} y_{ij}\right) &\forall(i,j) \in \Psi, r \in R \label{eqn:htls_tep_capacity} \\
& \hspace{0pt}\displaystyle \frac{-1}{b_{ij}}P_{ij}^{0} - (\theta_{i}-\theta_{j})= 0 &\forall(i,j) \in \Omega \setminus \Psi \label{eqn:htls_flow} \\
& \hspace{0pt}\displaystyle \mathrlap{ -M_{ij}(1-y_{ij}) \leq \frac{-1}{b_{ij}}P_{ij}-(\theta_{i}-\theta_{j}) \leq M_{ij}(1-y_{ij})} &\forall(i,j) \in \Psi \label{eqn:htls_tep_flow} \\
& \hspace{0pt}\displaystyle 0 \leq g_{n} \leq \overline{g}_n &\forall n \in B & \label{eqn:htls_gen} \\
& \hspace{0pt}\displaystyle -\overline{\theta} \leq \theta_{i}-\theta_{j} \leq \overline{\theta} &\forall (i,j) \in \Omega \label{eqn:htls_busangle} \\
& \hspace{0pt}\displaystyle z_{ij} + y_{ij} \leq 1 &\forall (i,j) \in \Phi \label{eqn:htls_mutual}\\
& \hspace{0pt}\displaystyle y_{ij} \in \left\{ 0,1 \right\} \;\forall(i,j) \in \Psi, \quad z_{ij} \in \left\{ 0,1 \right\} \;\forall(i,j) \in \Omega \setminus \Psi
\end{flalign}\normalsize
Throughout, $B$ is the set of all buses, $R$ is the set of regions, $\Omega$ is the set of all lines $(i,j)$, $\Psi$ is the set of candidate lines and $\Phi$ the set of expandable lines. Constraint \eqref{eqn:htls_balance} enforces flow balance of $P_{ij}$, load $d_n$, and generation $g_n$, at each node $n$ in the transmission network. The demand is scaled by the factor $\gamma_r$, which is set at the upper and lower limits (for high and low temperature increases, respectively) of predicted demand increase in \cite{bartos2016impacts}. Constraints \eqref{eqn:htls_capacity} and \eqref{eqn:htls_tep_capacity} represent capacity limits $\overline{P}_{ij}$ on existing lines and candidate lines respectively. The coefficient $\eta_r$ is the scaling factor induced by \eqref{eqn:ampacity} to account for per-region temperature change; specifically, $\eta_r$ is the value of $I$ obtained by substituting in the ambient temperature predicted for each region. Constraints \eqref{eqn:htls_flow} and \eqref{eqn:htls_tep_flow} relate adjacent bus angles to power flow on the transmission line connecting them. Constraint \eqref{eqn:htls_mutual} states that only existing lines can have their capacity expanded, whereas candidate lines can be built but not then expanded. The remaining constraints represent domain limits on decision variables.
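The role of the binary build decision $y_{ij}$ can be illustrated on a two-bus toy instance (a transport-model simplification that drops the bus-angle constraints, with made-up costs and capacities): enumerating the binary choice shows when paying the investment cost beats dispatching expensive local generation.

```python
# Two-bus toy: cheap generator at bus 1, expensive one at bus 2,
# 80 MW of demand at bus 2. All numbers are illustrative.
demand = 80.0
gen = {"g1": {"cap": 100.0, "cost": 20.0},   # $/MW at bus 1
       "g2": {"cap": 100.0, "cost": 90.0}}   # $/MW at bus 2
existing_cap = 50.0       # MW, existing corridor 1 -> 2
candidate_cap = 50.0      # MW added if the candidate line is built
build_cost = 1000.0       # annualized investment cost

def dispatch_cost(line_cap):
    """Cheapest dispatch meeting demand at bus 2 (transport model:
    flow limits only, no bus-angle constraints)."""
    from_g1 = min(line_cap, gen["g1"]["cap"], demand)  # import cheap power
    from_g2 = demand - from_g1                         # cover rest locally
    if from_g2 > gen["g2"]["cap"]:
        return float("inf")  # infeasible dispatch
    return from_g1 * gen["g1"]["cost"] + from_g2 * gen["g2"]["cost"]

# Enumerate the binary build decision y and keep the cheapest total cost.
best = min(
    ((y, y * build_cost + dispatch_cost(existing_cap + y * candidate_cap))
     for y in (0, 1)),
    key=lambda t: t[1])
# best == (1, 2600.0): building the line (1000 + 80*20) beats
# not building it (50*20 + 30*90 = 3700).
```

The full model replaces this enumeration with a MILP solver and reintroduces the angle-based flow physics via the big-M disjunctions above.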
\fontsize{10pt}{12pt}\selectfont{\textbf{3.1 Valid Inequalities\\}}\normalsize
TEP is known to be NP-Hard \cite{latorre}. As such, a set of valid inequalities adapted from \cite{Skolfield2019} are incorporated into the solution algorithm to reduce computational time. The model could not be solved to optimality within 24 hours without the valid inequalities; it was solved for all instances under this time limit after their inclusion. The full statement of the valid inequalities is given by
\small
\begin{align*}
\vert \theta_n - \theta_m \vert \leq CR^r\left(\rho \setminus \left(i,j\right)\right) + \left( \overline{CR^r(\rho)} - CR^r\left(\rho \setminus \left(i,j\right)\right) \right) \left( N_e(\rho) - \sum_{(l,k) \in \rho \cap \Psi} y_{lk} \right) +
\left( {CR^r(\rho)} - CR^r\left(\rho \setminus \left(i,j\right)\right) \right) z_{ij}.
\end{align*}\normalsize
Here, $CR^r$ is the sum of the capacity-reactance product of each line in the path $\rho$, using the expanded capacity for all lines that can be reconductored. $N_e(\rho)$ counts the number of candidate lines on $\rho$.
\fontsize{12pt}{14.4pt}\selectfont{\textbf{4. Instance Generation\\}}\normalsize
The case study presented in this paper is an approximation of the transmission grid contained within the borders of Arizona; the basic description of this network is provided by the U.S. Energy Information Administration. Transmission lines rated at 69kV and above are included in the network as are all power plants rated to produce two or more MW/Hr. Additional corridors for transmission expansion are approximated by connecting high generation areas to high demand areas and connecting substations with few adjacent lines to more dense areas of the grid. All listed substations are used as buses to connect transmission lines and meet aggregated demand for nearby areas. Hourly demand for the test instance is based on peak summer demand for the full state. As the state level is the smallest resolution data available for load, this value is disaggregated by assigning load to substations according to the relative population in the nearest census block. Generation costs are approximated by the total statewide costs of generating each category of plant (natural gas, coal, petroleum, hydro, wind, and solar), divided according to the rated MW of the corresponding plant. Transmission expansion and capacity expansion costs are approximated based on the rated voltage and length of lines, using cost estimates from \cite{MISO}.
\fontsize{12pt}{14.4pt}\selectfont{\textbf{5. Results\\}}\normalsize
A MILP approximation of the AZ transmission network is solved for 16 scenarios: each climate zone of the network is projected to have either a large or small increase (e.g., 2--5$^{\circ}$F for Tucson) in summer peak temperature independent from each other zone. These scenarios are encoded by an ordered quadruplet in which the temperature increase of region $i$ is indicated by an H in position $i$ if it is a large increase, and an L in the same position if a small increase is projected. This model is solved in Gurobi 9.0.2, and the non-zero expansion variables (and their associated objective costs) are tallied for each scenario. The results of these experiments are summarized in Table \ref{tab:my-table}, with costs annualized.
\begin{table}[!hbtp]
\caption{Per Annum Projected Cost}
\label{tab:my-table}
\begin{tabularx}{\textwidth}{|Y|Y|Y|Y|Y|Y|Y|Y|} \hline
\textbf{Scenario} & \textbf{New Lines Built} & \textbf{Cap. Exp. Built} & \textbf{New Line Cost} & \textbf{Cap. Exp. Cost} & \textbf{Total Exp. Cost} & \textbf{\;\;\;Gen.\newline Cost} & \textbf{\;\;Total\newline Cost} \\\hline\hline
L,L,L,L & 80 & 17 & \$ 14.40B & \$ 2.79B & \$ 17.19B & \$ 7.07B & \$ 24.26B \\
L,L,L,H & 83 & 19 & \$ 14.94B & \$ 3.15B & \$ 18.09B & \$ 6.39B & \$ 24.48B \\
L,L,H,L & 81 & 23 & \$ 14.58B & \$ 3.96B & \$ 18.54B & \$ 11.27B & \$ 29.81B \\
L,H,L,L & 89 & 17 & \$ 16.02B & \$ 2.85B & \$ 18.87B & \$ 6.06B & \$ 24.93B \\
H,L,L,L & 76 & 18 & \$ 13.68B & \$ 2.94B & \$ 16.62B & \$ 7.47B & \$ 24.09B \\
L,L,H,H & 83 & 25 & \$ 14.94B & \$ 4.24B & \$ 19.18B & \$ 10.97B & \$ 30.15B \\
L,H,L,H & 84 & 18 & \$ 15.12B & \$ 3.00B & \$ 18.12B & \$ 6.77B & \$ 24.89B \\
H,L,L,H & 85 & 20 & \$ 15.30B & \$ 3.24B & \$ 18.54B & \$ 6.00B & \$ 24.54B \\
L,H,H,L & 70 & 24 & \$ 12.60B & \$ 4.09B & \$ 16.69B & \$ 13.21B & \$ 29.90B \\
H,L,H,L & 88 & 24 & \$ 15.84B & \$ 4.09B & \$ 19.93B & \$ 10.22B & \$ 30.15B \\
H,H,L,L & 81 & 21 & \$ 14.58B & \$ 3.41B & \$ 17.99B & \$ 6.98B & \$ 24.97B \\
L,H,H,H & 83 & 22 & \$ 14.94B & \$ 3.73B & \$ 18.67B & \$ 11.46B & \$ 30.13B \\
H,L,H,H & 85 & 22 & \$ 15.30B & \$ 3.79B & \$ 19.09B & \$ 10.70B & \$ 29.79B \\
H,H,L,H & 83 & 18 & \$ 14.94B & \$ 3.00B & \$ 17.94B & \$ 7.32B & \$ 25.26B \\
H,H,H,L & 89 & 22 & \$ 16.02B & \$ 3.73B & \$ 19.75B & \$ 10.16B & \$ 29.91B \\
H,H,H,H & 78 & 22 & \$ 14.04B & \$ 3.73B & \$ 17.77B & \$ 12.50B & \$ 30.24B \\ \hline
\end{tabularx}
\end{table}
Total costs increase by nearly 25\% when comparing the scenario in which all regions experience small temperature gains to the scenario in which all regions experience large temperature gains. Since the difference in temperature changes for all regions is less than 3\%, this represents a superlinear relationship between temperature increases and associated power system costs. Furthermore, larger increases in temperatures have different effects depending on the regions in which they occur. For example, when regions 1 and 4 experience large temperature increases, the associated generation costs are much lower than when either region 2 or 3 experiences larger temperature increases. Since regions 2 and 3 contain Phoenix and Tucson respectively, the most populous metropolitan areas in the state by large margins, this is consistent with expectations. Generation costs associated with higher temperatures in these regions are nearly twice as much as those associated with higher temperatures in the less populous regions 1 and 4.
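The headline comparison can be reproduced directly from the totals in Table \ref{tab:my-table} (values in billions of dollars, taken as printed):

```python
# Annualized total costs (in $B) from Table 1 for the two extreme scenarios.
total_cost = {"L,L,L,L": 24.26, "H,H,H,H": 30.24}

increase = (total_cost["H,H,H,H"] - total_cost["L,L,L,L"]) / total_cost["L,L,L,L"]
print(f"relative increase: {increase:.1%}")  # ~24.6%
```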
Comparing the relative volume and costs associated with new lines rather than capacity expansion reveals further features of this network. When region 3 (containing Tucson) experiences larger temperature increases, more lines are reinforced with higher capacity than in any other region. This suggests that Tucson has a robust infrastructure in place when considering transmission lines and generation, so the relatively less expensive option of expanding the capacity of existing lines can accommodate a large amount of increased demand in the region. In contrast, when region 2 (containing Phoenix) experiences larger temperature increases, new lines are built at a higher rate (and correspondingly, cost) than for any other region. The increase in demand associated with higher temperatures in this region does not cause generation costs to increase very much, but it does require a large number of new lines to be built to meet this demand. This is consistent with a less robust transmission infrastructure in this region compared to region 3, but better access to current and large sources of generation.
\fontsize{12pt}{14.4pt}\selectfont{\textbf{6. Conclusions\\}}\normalsize
Significant temperature increases are expected by mid century: the question is the magnitude and distribution of these increases. This study considers several scenarios of temperature increase, distributed across both magnitude and location, and their effects on state-level transmission networks. A test case is built from U.S. Government provided data and publicly available data on the existing generation and transmission assets within Arizona; temperature scenarios are similarly designed based on historical data and designated ecoregions. The optimal expected annual cost of operating the power network, as well as the costs associated solely with generation, transmission expansion, and capacity expansion, are projected. Based on the distribution of these costs, it can be seen that regardless of the nature of temperature increases, a significant investment in transmission infrastructure is required by mid century. Furthermore, conditions associated with the existing network will cause the total cost to depend -- to varying degrees -- on transmission expansion, capacity expansion, or generation. Large increases in temperatures over urban areas correspond to even larger increases in generation costs in those regions. The largest overall cost differences are associated with the degree of temperature increase in these urban regions. There are also network features which are distinct to each such region that dictate whether future demand can be met primarily with less expensive capacity expansion via reconductoring or require significantly more costly transmission expansion to build new overhead lines. These results suggest that further analysis is worth performing, including cost changes due to generation or substation investments. The demand projections are also limited in scope to a fixed percentage increase based on the regional temperature changes. However, the current work demonstrates that such analysis is valuable.
Further work is necessary, including an analysis of some subnetworks of the Arizona power system with more varied scenarios, in order to fully understand what network features suggest investment in certain classes of transmission asset. The joint optimization of both generation and transmission, especially with significant state- and nationally-mandated plans to expand renewable generation, also remains a question of interest.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{Introduction}
The universal low-energy few-body dynamics of two-species compounds is of much
interest both for atomic and many-body physics.
In this respect, the study of the three-body energy spectrum gives insight
into the role of triatomic molecules and few-body scattering.
The area of applications includes the investigation of multi-component
ultra-cold quantum gases, e.~g.,
binary Fermi-Bose~\cite{Ospelkaus06,Karpiuk05} and
Fermi~\cite{Shin06,Chevy06,Iskin06} mixtures and of impurities embedded in
a quantum gas~\cite{Cucchietti06,Kalas06}, which are presently under thorough
experimental and theoretical study.
In addition, one should mention the reactions with negative atomic and
molecular ions~\cite{Penkov99,Jensen03}.
The universal isotopic dependence of the three-body energy spectrum was
multiply discussed~\cite{Efimov73,Ovchinnikov79,Li06,DIncao06,Shermatov03},
nevertheless, the main objective was the description of Efimov's spectrum.
Recently, the infinite number of the $1^+$ bound states was
predicted~\cite{Macek06} for three identical fermions with the resonant
$p$-wave interaction.
Concerning the low-energy scattering, one should mention a two-hump structure
in the isotopic dependence of the three-body recombination rate of
two-component fermions~\cite{Petrov03,Petrov05a,Kartavtsev07} and
the two-component model for the three-body recombination near the Feshbach
resonance~\cite{Kartavtsev02}.
The main aim of the paper is a comprehensive description of the finite
three-body rotational-vibrational spectrum in the zero-range limit of
the interaction between different particles.
Both qualitative and numerical results are obtained by using the solution
of hyper-radial equations (HREs)~\cite{Macek68,Kartavtsev99,Kartavtsev06}.
The detailed study of the bound states and scattering problems for the total
angular momentum $L = 1$ was presented in~\cite{Kartavtsev07}.
\section{Outline of the approach}
\label{approach}
Particle 1 of mass $m_1$ and two identical particles 2 and 3 of mass $m$ are
described by using the scaled Jacobi variables ${\mathbf x} =
\sqrt{2\mu}\left({\mathbf r}_2 - {\mathbf r}_1\right),\
{\mathbf y} = \sqrt{2\tilde\mu}[{\mathbf r}_3 -
(m_1 {\mathbf r}_1 + m {\mathbf r}_2)/(m_1 + m)]$ and the corresponding
hyper-spherical variables $x = \rho\cos\alpha$, $y = \rho\sin\alpha$,
$\hat{\mathbf x} = {\mathbf x}/x$, and $\hat{\mathbf y} = {\mathbf y}/y$,
where ${\mathbf r}_i$ is the position vector of the $i$th particle and
$\mu = m m_1/(m + m_1)$ and $\tilde{\mu} = m (m + m_1)/(m_1 + 2m)$ are
the reduced masses.
In the universal low-energy limit, only the s-wave interaction between different particles is taken into account, since the s-wave interaction is forbidden between two identical fermions and is strongly suppressed between two heavy bosons in states of $L > 0$.
The two-body interaction is defined by imposing the boundary condition
at the zero inter-particle distance, which depends on a single parameter,
e.~g., the two-body scattering length $a$~\cite{Kartavtsev07}.
This type of interaction is known in the literature as the zero-range
potential~\cite{Demkov88}, the Fermi~\cite{Wodkiewicz91} or
Fermi-Huang~\cite{Idziaszek06} pseudo-potential, and an equivalent
approach is used in the momentum-space representation~\cite{Braaten03}.
The units $\hbar = 2\mu = |a| = 1$ are used throughout; thus, the binding
energy becomes a universal function of the mass ratio $m/m_1$
and the rotational-vibrational quantum numbers $L$ and $N$.
In view of the wave-function symmetry under permutation of identical
particles, a sum of two interactions between different particles is expressed
by a single boundary condition at the zero distance between particles $1$ and
$2$,
\begin{eqnarray}
\label{bch}
\lim_{\alpha\rightarrow \pi/2}\left[ \frac{\partial }{\partial\alpha} -
\tan\alpha - \rho \frac{a}{|a|} \right]\Psi = 0 \ .
\end{eqnarray}
The problem under study is conveniently treated by using the expansion of
the properly symmetrized wave function,
\begin{equation}
\label{Psi} \Psi = (1 + S\widehat{P})
\frac{ Y_{LM}(\hat{\mathbf y})}{\rho^{5/2}\sin 2\alpha} \sum_{n = 1}^{\infty}
f_n(\rho)\varphi_n^L(\alpha, \rho) \ ,
\end{equation}
which leads to the hyper-radial equations for the functions
$f_n(\rho)$~\cite{Kartavtsev07}.
Here $\widehat{P}$ denotes permutation of the identical particles 2 and 3,
$S = 1$ and $S = -1$ if these particles are bosons and fermions, respectively,
$Y_{LM}(\hat{\mathbf y})$ is the spherical function.
The action of $\widehat{P}$ on the angular variables in the limit
$\alpha \to \pi/2$ is given by $\widehat{P}Y_{LM}(\hat{\mathbf y}) \to
(-1)^L Y_{LM}(\hat{\mathbf y})$ and $\widehat{P}\alpha \to \omega$, where
$\omega = \arcsin (1+ m_1/m)^{-1}$.
The functions $\varphi_n^L(\alpha, \rho)$ in the expansion~(\ref{Psi}) are
the solutions of the equation on a hypersphere (at fixed $\rho$),
\begin{equation}
\label{eqonhyp1}
\left[\frac{\partial^2}{\partial \alpha^2} - \frac{L(L + 1)}{\sin^2\alpha}
+ \gamma^2_n(\rho)\right]\varphi_n^L(\alpha,\rho) = 0 \ ,
\end{equation}
complemented by the boundary conditions $\varphi_n^L(0, \rho) = 0$ and
\begin{equation}
\label{bconhyp}
\lim_{\alpha\rightarrow \pi/2}
\left(\frac{\partial}{\partial\alpha} - \rho \frac{a}{|a|} \right)
\varphi_n^L(\alpha, \rho) = S(-)^L\frac{2}{\sin 2\omega}
\varphi_n^L(\omega, \rho) \ ,
\end{equation}
where a set of discrete eigenvalues $\gamma_n^2(\rho)$ plays the role of
the effective channel potentials in a system of the hyper-radial
equations~\cite{Kartavtsev07}.
The functions satisfying Eq.~(\ref{eqonhyp1}) and the zero boundary condition
are straightforwardly expressed~\cite{Bateman53} via the Legendre function
\begin{eqnarray}
\label{varphi}
\varphi_n^L(\alpha, \rho) = \sqrt{\sin\alpha}
Q_{\gamma_n(\rho) - 1/2}^{L + 1/2} (\cos\alpha ) \equiv
\phi_{L, \gamma_n(\rho)}(\alpha)\ .
\end{eqnarray}
The functions $ \phi_{L, \gamma}(\alpha)$ are odd functions on both variables
$\gamma$ and $\alpha$ satisfying the recurrent relations
$\sin\alpha\ \phi_{L + 1, \gamma}(\alpha) = (\gamma - L - 1)\cos\alpha \
\phi_{L, \gamma}(\alpha) - (\gamma + L)\phi_{L, \gamma - 1}(\alpha )$,
which follow from those for the Legendre functions.
It is convenient to write $\phi_{L, \gamma}(\alpha) =
A_{L, \gamma}(\cot\alpha)\sin\gamma\alpha +
B_{L, \gamma}(\cot\alpha)\cos\gamma\alpha$, where $A_{L, \gamma}(x)$ and
$B_{L, \gamma}(x)$ are simple polynomials on $\gamma$ and $x$, which are
explicitly given for few lowest $L$ by $A_{0, \gamma}(x) = 1$,
$B_{0,\gamma}(x) = 0$, $A_{1, \gamma}(x) = -x$, $B_{1,\gamma}(x) = \gamma$,
$A_{2, \gamma}(x) = 1 - \gamma^2 + 3x^2$, $B_{2,\gamma}(x) = -3\gamma x$,
$A_{3, \gamma}(x) = 3x(2\gamma^2 - 3 - 5x^2)$, and $B_{3,\gamma}(x) =
\gamma (15x^2 + 4 - \gamma^2)$.
Substituting~(\ref{varphi}) into the boundary condition~(\ref{bconhyp}) and
using the identity $\phi_{L+1, \gamma}(\pi/2) = \frac{\partial
\phi_{L, \gamma}(\alpha)}{\partial\alpha} \Big|_{\alpha=\pi/2}$
one comes to the transcendental equation for $\gamma_n^2(\rho)$,
\begin{eqnarray}
\label{transeq}
\rho \frac{a}{|a|}\ \phi_{L, \gamma} (\pi/2) = \phi_{L + 1, \gamma} (\pi/2)
- \frac{2 S (-)^L}{\sin 2\omega}\phi_{L, \gamma}(\omega) \ .
\end{eqnarray}
The attractive lowest effective potential determined by $\gamma_1^2(\rho)$
plays the dominant role for the binding-energy and low-energy-scattering
calculations, while the effective potentials in the upper channels for
$n \ge 2$ contain the repulsive term $\gamma_n^2(\rho)/\rho^2$ and are of
minor importance.
Thus, a fairly good description will be obtained by using the one-channel
approximation for the total wave function~(\ref{Psi}) where the first-channel
radial function satisfies the equation~\cite{Kartavtsev07}
\begin{equation}
\label{system1}
\left[\frac{d^2}{d \rho^2} - \frac{\gamma_1^2(\rho) - 1/4}{\rho^2} + E \right]
f_1(\rho) = 0 \ .
\end{equation}
Note that the diagonal coupling term is omitted in Eq.~(\ref{system1}), which
does not affect the final conclusions and leads to the calculation of a lower
bound for the exact three-body energy.
Our calculations~\cite{Kartavtsev07} show that the one-channel approximation
provides better than a few percent overall accuracy for the binding energy.
The most discussed feature~\cite{Efimov73,Ovchinnikov79,Li06,DIncao06} of
the three-body system under consideration is the infinite number of the bound
states for small $L$ and large $m/m_1$ (more precisely, for the finite
interaction radius $r_0$ the number of states unrestrictedly increases with
increasing $|a|/r_0$).
As the effective potential in~(\ref{system1}) is approximately given by
$(\gamma_1^2(0) - 1/4)/\rho^2$ at small $\rho$, the number of vibrational
states is finite (infinite) if $\gamma_1^2(0) > 0$ ($\gamma_1^2(0) < 0$).
According to Eq.~(\ref{transeq}), $\gamma_1^2(0)$ decreases with increasing
$m/m_1$ and becomes zero at the critical value $(m/m_1)_{cL}$.
Thus, one can define the step-like function $L_c(m/m_1)$, which increases by
unity at the points $(m/m_1)_{cL}$, so that the number of vibrational states
is infinite for $L < L_c(m/m_1)$ and finite for $L \ge L_c(m/m_1)$.
Solving Eq.~(\ref{transeq}) at $\gamma_1 \to 0$ and $\rho \to 0$, one obtains
the exact values $(m/m_1)_{cL}$, which approximately equal $13.6069657$,
$38.6301583$, $75.9944943$, $125.764635$, and $187.958355$ for $L = 1 - 5$.
Originally, the dependence $L_c(m/m_1)$ was discussed in~\cite{Efimov73}.
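The first of these critical values can be reproduced numerically. In the $\gamma_1 \to 0$, $\rho \to 0$ limit, Eq.~(\ref{transeq}) for $L = 1$ reduces to $(2/\sin 2\omega)(1 - \omega\cot\omega) = \pi/2$ (this reduction is our own algebra, not spelled out in the text); a stdlib-only sketch:

```python
import math

# gamma -> 0, rho -> 0 limit of the eigenvalue equation for L = 1:
# (2 / sin 2w) * (1 - w * cot w) = pi / 2, with sin w = (m/m1) / (1 + m/m1).
def f(w):
    return (2.0 / math.sin(2.0 * w)) * (1.0 - w / math.tan(w)) - math.pi / 2

# Simple bisection on (0.1, 1.5); f changes sign exactly once there.
lo, hi = 0.1, 1.5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
w = 0.5 * (lo + hi)

mass_ratio = math.sin(w) / (1.0 - math.sin(w))
print(round(mass_ratio, 5))  # ~13.60697, matching (m/m1)_c1
```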
Analyzing the eigenvalue equation~(\ref{transeq}) one concludes that for
$a > 0$ and $S(-)^L = -1$ the effective potential exceeds the threshold energy
$E = -1$, $\gamma_1^2(\rho)/\rho^2 > -1$, therefore, the bound states only
exist if either two identical particles are bosons and $L$ is even or two
identical particles are fermions and $L$ is odd.
Furthermore, one obtains the trivial answer if $a < 0$ and $L \ge L_c(m/m_1)$,
for which $\gamma_1^2(\rho) > 0$ and there are no three-body bound states.
\section{Numerical results}
\label{bound}
The mass-ratio dependence of the binding energies $\varepsilon_{L N}(m/m_1)$
for $L \ge L_c(m/m_1)$ and $a > 0$ is determined numerically by seeking
the square-integrable solutions to Eq.~(\ref{system1}).
Mostly, the properties of the energy spectrum are similar to those for
$L = 1$, which were carefully discussed in~\cite{Kartavtsev07}.
For given $L$, there is a critical value of $m/m_1$ at which the first
bound state arises; in other words, there are no three-body bound states for
$L \ge L_b(m/m_1)$, where the step-like function $L_b(m/m_1)$ undergoes
unity jumps at those critical values.
Furthermore, each bound state arises at a particular value of $m/m_1$,
appearing as a narrow resonance just below that value.
For the mass ratio near these values, the binding energies and resonance
positions depend linearly and the resonance widths depend quadratically on
the mass-ratio excess.
Exactly at these values one obtains the threshold bound states, whose wave
functions are square-integrable with a power fall-off at large distances.
A set of these values of $m/m_1$ (more precisely, the lower bounds for them)
is obtained numerically and presented in Table~\ref{tab1}.
With increasing $m/m_1$, the binding energies monotonically increase reaching
the finite values (shown in Table~\ref{tab1}) at $(m/m_1)_{cL}$; just below
$(m/m_1)_{cL}$ they follow the square-root dependence on the difference
$m/m_1 - (m/m_1)_{cL}$.
Correspondingly, the number of the vibrational states increases with
increasing $m/m_1$ taking the finite number $N_{max}$ at $(m/m_1)_{cL}$ and
jumping to infinity beyond $(m/m_1)_{cL}$; in the present calculations
$N_{max} = L + 1$ for $L \le 9$ and $N_{max} = L + 2$ for $10 \le L \le 12$.
\begin{table}[htb]
\caption{Upper part: Mass ratios for which the $N$th bound state of the total
angular momentum $L$ arises.
Lower part: Binding energies $\varepsilon_{L N}$ for the mass ratio fixed at
$(m/m_1)_{cL}$. }
\label{tab1}
\begin{tabular}{lccccc}
$N$ & $L = 1$ & $L = 2$ & $L = 3$ & $L = 4$ & $L = 5$ \\
\hline
1 & 7.9300 & 22.342 & 42.981 & 69.885 & 103.06 \\
2 & 12.789 & 31.285 & 55.766 & 86.420 & 123.31 \\
3 & - & 37.657 & 67.012 & 101.92 & 142.82 \\
4 & - & - & 74.670 & 115.08 & 160.64 \\
5 & - & - & - & 123.94 & 175.48 \\
6 & - & - & - & - & 185.51 \\
\hline
1 & 5.906 & 12.68 & 22.59 & 35.59 & 52.16 \\
2 & 1.147 & 1.850 & 2.942 & 4.392 & 6.216 \\
3 & - & 1.076 & 1.417 & 1.920 & 2.566 \\
4 & - & - & 1.057 & 1.273 & 1.584 \\
5 & - & - & - & 1.049 & 1.206 \\
6 & - & - & - & - & 1.045 \\
\hline
\end{tabular}
\end{table}
\section{Universal description of the spectrum}
\label{largeL}
A comprehensive description of the spectrum is obtained by using
the large-$L$ (correspondingly, large-$m/m_1$) asymptotic expression for
the binding energies $\varepsilon_{L N}(m/m_1)$.
Taking the quasi-classical solution of Eq.~(\ref{eqonhyp1}) satisfying
the zero boundary condition,
\begin{eqnarray}
\label{qcphi}
\phi_{L, i\kappa}(\alpha) = \exp\left(\kappa \arccos\frac{x \cos\alpha}
{\sqrt{1 + x^2}}\right) \left(\frac{\sqrt{1 + x^2 \sin^2\alpha} - \cos\alpha}
{\sqrt{1 + x^2 \sin^2\alpha} + \cos\alpha}\right)^{L/2 + 1/4} \ ,
\end{eqnarray}
where $\gamma_1 = i\kappa$ and $x = \kappa/(L + 1/2)$,
one writes the eigenvalue equation~(\ref{transeq}) in the form,
\begin{eqnarray}
\label{qceigenv}
\frac{\rho }{L + 1/2} = \sqrt{1 + x^2} -
\frac{2 \exp\left(\kappa \arcsin\frac{x \cos\omega}{\sqrt{1 + x^2}}\right)}
{(L + 1/2)\sin 2\omega} \left(\frac{\sqrt{1 + x^2 \sin^2\omega} - \cos\omega}
{\sqrt{1 + x^2 \sin^2\omega} + \cos\omega}\right)^{L/2 + 1/4} \ .
\end{eqnarray}
In the limit of large $L$ and $m/m_1$ the eigenvalue equation~(\ref{qceigenv})
reduces to
\begin{eqnarray}
\label{adeigenv}
\rho\cos\omega = u - e^{-u} \ ,
\end{eqnarray}
where $u = \cos\omega\sqrt{\kappa^2 + (L + 1/2)^2}$.
Notice that taking the limit $\kappa \to 0$ and $\rho \to 0$
in~(\ref{adeigenv}) one immediately obtains the relation
$\cos\omega_{cL} = u_0/(L + 1/2)$, where $\sin\omega_{cL} =
(m/m_1)_{cL}/[1 + (m/m_1)_{cL}]$ and $u_0 \approx 0.567143$ is the root
of the equation $u = e^{-u}$; as a result, one finds the asymptotic dependence
$(m/m_1)_{cL} \approx 6.2179(L + 1/2)^2$ and the inverse relation
$L_c(m/m_1) \approx (u_0 \sqrt{2m/m_1} - 1)/2 \approx
0.40103\sqrt{m/m_1} - 1/2$.
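The numerical constants quoted here all follow from the single root $u_0$ of $u = e^{-u}$; a stdlib-only sketch that recovers them and compares the asymptotic estimate with the exact $L = 5$ value given above:

```python
import math

# Root of u = exp(-u) by bisection (u0 equals the Lambert W(1) constant).
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid - math.exp(-mid) < 0.0:
        lo = mid
    else:
        hi = mid
u0 = 0.5 * (lo + hi)

coeff = 2.0 / u0**2           # (m/m1)_cL ~ coeff * (L + 1/2)^2
slope = u0 / math.sqrt(2.0)   # L_c(m/m1) ~ slope * sqrt(m/m1) - 1/2
print(round(u0, 6), round(coeff, 4), round(slope, 5))
# 0.567143 6.2179 0.40103

# Asymptotic estimate vs the exact critical mass ratio for L = 5:
print(round(coeff * 5.5**2, 1))  # ~188.1, vs the exact 187.958355
```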
Now the asymptotic dependence $\varepsilon_{L N}(m/m_1)$
for large $L$ and $m/m_1$ can be obtained by the quasi-classical solution
of~(\ref{system1}) with $\gamma_1^2(\rho) = (L + 1/2)^2 -
[u(\rho)/\cos\omega]^2$ and $u(\rho)$ determined by~(\ref{adeigenv}),
\begin{equation}
\label{qcint}
\displaystyle\int_{u_-}^{u_+} du \frac{1 + e^{-u}}{u -e^{-u}}
\sqrt{u^2 - \varepsilon_{L N} (u - e^{-u})^2 - L(L + 1)\cos^2\omega} =
\pi (N - 1/2)\cos\omega \ ,
\end{equation}
where $u_-$ and $u_+$ are zeros of the integrand.
Following Eq.~(\ref{qcint}), one expects to express the binding energies via
the universal function $\varepsilon_{L N}(m/m_1) = {\cal E}(\xi, \eta)$ of two
scaled variables $\xi = \displaystyle\frac{N - 1/2}{\sqrt{L(L + 1)}}$ and
$\eta = \displaystyle\sqrt{\frac{m}{m_1 L(L + 1)}}$.
This two-parameter dependence is confirmed by the numerical calculations
(up to $m/m_1 \sim 700$), which reveal that the calculated energies for
$L > 2$ lie on a smooth surface as shown in Fig.~\ref{figen_univ}.
Even for the smallest $L = 1, 2$ the calculated energies are in good agreement
with the two-parameter dependence showing only a slight deviation from
the surface.
\begin{figure}[hbt]
\includegraphics[width = .8\textwidth]{ensurfp3a.eps}
{\caption{Universal dependence of the bound-state energy $E$ on the scaled
variables $\xi = \displaystyle\frac{N - 1/2}{\sqrt{L(L + 1)}}$ and
$\eta = \displaystyle\sqrt{\frac{m}{m_1L(L + 1)}}$.
The calculated values for $L = 3, \dots, 12$ are plotted by symbols.
The surface boundary and its projection on the $\xi - \eta$ plane are shown by
solid lines.
\label{figen_univ}}}
\end{figure}
The variables $\xi$ and $\eta$ take values within the area limited by
the line $\xi = 0$, the line $\eta = \eta_{max} \approx \sqrt{2}/u_0
\approx 2.493574$ stemming from the condition $L \ge L_c(m/m_1)$ for
a finite number of bound states, and the line ${\cal E}(\xi, \eta) = 1$
expressing the condition that the bound states arise at the two-body threshold.
As shown in Fig.~\ref{figen_univ}, the smallest value $\eta = \eta_{min}$
is at $\xi = 0$, which corresponds to the condition of arising of the first
bound state in the large-$L$ limit.
To find it one requires that $u_+ = u_- \equiv u_b$ at $\varepsilon_{L N} = 1$
in Eq.~(\ref{adeigenv}), which leads to $\eta_{min} \approx
\sqrt{2/(u_b^2 - 1)} \approx 1.775452$, where $u_b \approx 1.278465$ is
the root of the equation $u = 1 + e^{-u}$.
This gives the asymptotic dependence for arising of the first bound state,
$L_b \approx \eta_{min}^{-1}\sqrt{m/m_1} - 1/2 \approx 0.563237
\sqrt{m/m_1} - 1/2$.
At the line $\eta = \eta_{max}$ the variable $\xi$ takes its largest value
$\xi_{max}$, which determines the large-$L$ dependence of the number of
the vibrational states $N_{max}$ for a given $L$.
The calculation of the quasi-classical integral~(\ref{qcint}) gives
$u_- = u_0$, $u_+ \approx 2.872849$,
$\xi_{max} = \displaystyle\frac{1}{\pi u_0}\int_{u_-}^{u_+}
\frac{1 + e^{-u}}{u e^u - 1} \sqrt{e^u(2u - u_0^2 e^u) - 1}\ du
\approx 1.099839 $, and the large-$L$ estimate
$N_{max} = \xi_{max} \sqrt{L(L + 1)} + 1/2$.
Taking the entire part of this expression, one can predict that the dependence
$N_{max} = L + 1$ for $L < 10$ changes to $N_{max} = L + 2$ at $L = 10$, which
is in agreement with the numerical result.
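The integer estimate can be tabulated directly using the quoted $\xi_{max} \approx 1.099839$; a minimal sketch:

```python
import math

XI_MAX = 1.099839  # quasi-classical value quoted above

def n_max(L):
    # Entire part of xi_max * sqrt(L(L+1)) + 1/2.
    return math.floor(XI_MAX * math.sqrt(L * (L + 1)) + 0.5)

for L in range(3, 13):
    print(L, n_max(L), n_max(L) - L)
# n_max - L stays at 1 through L = 9 and jumps to 2 at L = 10,
# in agreement with the numerical result quoted in Section 3.
```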
The universal surface ${\cal E}(\xi, \eta )$ is bound by three lines, which
are described by fitting the calculated energies for $L \ge 3$ to simple
dependencies plotted in Fig.~\ref{figen_univ}.
As a result, the line defined by ${\cal E}(\xi, \eta ) = 1$ is fairly well
fitted to $\eta = (\eta_{min} + a\xi)[1 - c \xi (\xi - \xi_{max})]$, where
$a = (\eta_{max} - \eta_{min})/\xi_{max} \approx 0.652933$ is fixed by
the evident condition ${\cal E}(\xi_{max}, \eta_{max}) = 1$ and the only
fitted parameter is $c = 0.1865$.
Furthermore, the analysis shows that the next boundary line defined by
$\eta = \eta_{max}$ is described by ${\cal E}^{-1/2}(\xi, \eta_{max}) =
a_1 \xi(1 - a_2 \xi) [1 - c_1 \xi(\xi - \xi_{max})]$, where
$a_1 = 1 + u_0 \approx 1.56714$ is fixed by the asymptotic behaviour of
the integral~(\ref{qcint}) at $L \to \infty$ and $\eta = \eta_{max}$,
$a_2 \approx 0.38171$ is fixed by the condition
$a_1 \xi_{max}(1 - a_2 \xi_{max}) = 1$, and the only fitted parameter is
$c_1 = 0.1881$.
In particular, at the critical mass ratio the binding energy of the deep
states in the limit of large $L$ is described by
${\cal E}(\xi, \eta_{max}) \to (a_1 \xi)^{-2}$, i.~e.,
$\varepsilon_{N L}[(m/m_1)_{cL}] = \frac{L(L + 1)}{(N - 1/2)^2(1 + u_0)^2}$.
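For the deepest ($N = 1$) level, this large-$L$ asymptotic formula can be checked against Table~\ref{tab1}; at $L = 5$ it already reproduces the computed value $52.16$ to within roughly 10\% (a rough check only, since the formula holds asymptotically and for deep states):

```python
import math

U0 = 0.5671433  # root of u = exp(-u)

def eps_deep(L, N):
    # Large-L asymptotic binding energy at the critical mass ratio.
    return L * (L + 1) / ((N - 0.5) ** 2 * (1.0 + U0) ** 2)

approx = eps_deep(5, 1)
print(round(approx, 2))  # ~48.86, vs 52.16 from Table 1
```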
The third boundary line at $\xi \to 0$ is described by the dependence
${\cal E}^{-1/2}(0, \eta) = a_3 \sqrt{\eta_{max} - \eta}
[1 + c_2 (\eta - \eta_{min})]$, where $a_3 = 1/\sqrt{\eta_{max} - \eta_{min}}
\approx 1.18$ is fixed by ${\cal E}(0, \eta_{min}) = 1$ and the only fitted
parameter is $c_2 = 0.3992$.
\section{Conclusion}
\label{Conclusion}
The presented results complemented by the accurate calculations for
$L = 1$~\cite{Kartavtsev07} provide in the universal low-energy limit
a comprehensive description of the rotational-vibrational spectrum of three
two-species particles with the short-range interactions.
Essentially, all the binding energies are described by means of the universal
function ${\cal E}(\xi, \eta )$ for those $L_c(m/m_1) \le L \le L_b(m/m_1)$
which correspond to the finite number of vibrational states.
One expects that the universal picture should be observed in the limit
$|a| \to \infty$, e.~g., if the potential is tuned to produce the loosely
bound two-body state as discussed in~\cite{Blume05,Kartavtsev06}.
It is of interest to discuss briefly the effect of the finite, though small
enough interaction radius $r_0 \ll a$.
For $L < L_c(m/m_1)$ Efimov's infinite energy spectrum is extremely sensitive
to the interaction radius $r_0$ and to the interaction in the vicinity of
the triple-collision point, whereas for $L\ge L_c(m/m_1)$ the binding energies
depend smoothly on the interaction parameters provided $r_0 \ll a$.
For this reason, one expects not an abrupt transition from the finite to
the infinite number of bound states at $L = L_c(m/m_1)$ but a smeared-out
dependence for any finite value of $r_0/a$.
It is worthwhile to mention that arising of the three-body bound states
with increasing mass ratio is intrinsically connected with the oscillating
behaviour of the $2 + 1$ elastic-scattering cross section and the three-body
recombination rate.
In particular, for $L = 1$ it was shown in~\cite{Kartavtsev07} that two
interference maxima of the scattering amplitudes are related to the arising of
two three-body bound states.
Analogously, the dependence of the scattering amplitudes on the mass ratio for
higher $L$ would exhibit a number of interference maxima related to the
arising of up to $N_{max} = 1.099839 \sqrt{L(L + 1)} + 1/2$ bound states.
Concerning possible observations of molecules containing two heavy and one
light particle in the higher rotational states, one should mention
the ultra-cold mixtures of $^{87}\mathrm{Sr}$ with lithium isotopes
\cite{Kartavtsev07} and mixtures of cesium with either lithium or helium.
In particular, for $^{133}\mathrm{Cs}$ and $^6\mathrm{Li}$ the mass ratio
$m/m_1 \approx 22.17$ is just below the value $m/m_1 = 22.34$ at which
the $L = 2$ bound state arises and $m/m_1 \approx 33.25$ for
$^{133}\mathrm{Cs}$ and $^4\mathrm{He}$ is above the value $m/m_1 = 31.29$,
which corresponds to arising of the second $L = 2$ bound state.
Also, a complicated rotational-vibrational spectrum and significant
interference effects are expected for the negatively charged atomic and
molecular ions for which the typical total angular momentum up to $L \sim 100$
becomes important due to the large mass ratio.
\section{Introduction}
Very High Energy (VHE) $\gamma$-ray astronomy \cite{ong} is still
in its infancy. Operating in the energy
range from $\sim$\,100 GeV to 30 TeV and beyond, this subfield of
astronomy represents an exciting,
relatively unsampled region of the electromagnetic spectrum, and a
tremendous challenge. To develop more
fully this field needs an instrument capable of performing
continuous systematic sky surveys and
detecting transient sources on short timescales without
a priori knowledge of their location. These primary
science goals require a telescope with a wide field of view
and high duty cycle, excellent
source location and background rejection capabilities -
an instrument that complements both existing and future ground-
and space-based $\gamma$-ray telescopes.
To be viable, VHE astronomy must overcome a number of fundamental
difficulties. Since the flux of VHE photons is small, telescopes
with large collecting areas ($>10^3\,\mathrm{m}^2$)
are required to obtain statistically significant photon samples;
telescopes of this size can, currently,
only be located on the Earth's surface. However,
VHE photons do not readily penetrate the $\sim\,28$
radiation lengths of the Earth's atmosphere
($1030\,\mathrm{g}/\mathrm{cm}^2$ thick at sea-level) but
instead interact with air molecules to produce secondary
particle cascades, or extensive air showers.
Another difficulty of VHE astronomy is the large background
of hadronic air showers, induced by
cosmic-ray primaries (primarily protons), that cannot be vetoed.
In this paper we describe the conceptual design of an instrument
that builds upon traditional extensive air shower methods;
however, unlike typical extensive air shower arrays the detector
design utilizes unique imaging capabilities
and fast timing to identify (and reject) hadronic cosmic-ray
backgrounds and achieve excellent angular
resolution, both of which lead to improved sensitivity.
In the following sections we briefly motivate the need for such
an instrument
(Section 2), discuss in detail telescope design parameters with
emphasis on their optimization (Section 3),
describe the conceptual design of a VHE telescope and the
simulations used in this study (Section 4), and
evaluate the capabilities of such a detector in terms of source
sensitivity (Section 5). Finally, the results
of this study are summarized and compared to both current and
future VHE telescopes.
\section{Motivation}
VHE $\gamma$-ray astronomy has evolved dramatically in the
last decade with the initial detections of steady
and transient sources, galactic and extragalactic sources.
To date 7 VHE $\gamma$-ray sources have been
unambiguously detected
\cite{crab,mrk421,mrk501,1ES2344,p1706,vela,sn1006};
this contrasts dramatically with
the number of sources detected in the more traditional
regime of $\gamma$-ray astronomy at energies below
$\sim$\,20 GeV. The EGRET instrument aboard the
Compton Gamma-Ray Observatory, for example, has detected
pulsars, supernova remnants, gamma-ray bursts, active
galactic nuclei (AGN), and approximately 50 unidentified
sources in the 100 MeV-20 GeV range \cite{agn1,agn2}.
The power-law spectra of many EGRET sources show no sign
of an energy cutoff, suggesting that they may be observable
at VHE energies.
The 4 Galactic VHE objects, all supernova remnants, appear
to have $\gamma$-ray emission that is constant in both
intensity and spectrum. The 3 extragalactic VHE sources are
AGN of the blazar class. Although AGN have been
detected during both quiescent and flaring states, it is the
latter that produce the most statistically
significant detections. During these flaring states the
VHE $\gamma$-ray flux has been
observed to be as much as 10 times that of the Crab Nebula,
the standard candle of TeV astronomy \cite{hegra_flare}.
Although the TeV sources detected to date have been observed
seasonally since their initial detection, long-term continuous
monitoring has never been possible, nor has there ever
been a systematic survey of the VHE sky. This is
primarily due to the fact that all VHE source detections to
date have been obtained with air-Cherenkov telescopes.
Because they are optical instruments, air-Cherenkov telescopes
only operate on dark, clear, moonless nights, giving a
$\sim\,5-10\,\%$ duty cycle for observations; these
telescopes also have relatively narrow fields of view
($\sim\,10^{-2}\,\mathrm{sr}$). Although they are likely
to remain unsurpassed in sensitivity for detailed
source observations these telescopes have limited usefulness
as transient monitors and would require over a
century to complete a systematic sky survey.
The identification of additional VHE sources would contribute
to our understanding of a range of unsolved
astrophysical problems such as the origin of cosmic rays,
the cosmological infrared background, and the nature
of supermassive black holes. Unfortunately, the field of VHE
astronomy is data-starved; new instruments
capable of providing continuous observations and all-sky
monitoring with a sensitivity approaching that
of the air-Cherenkov telescopes are therefore required. A VHE
telescope with a wide field of view and high duty
cycle could also serve as a high-energy early warning system,
notifying space- and ground-based instruments of
transient detections quickly for detailed multi-wavelength
follow-up observations. Its operation should coincide
with the launch of next-generation space-based instruments
such as GLAST \cite{glast}.
\section{Figure of Merit Parameters}
A conceptualized figure of merit is used to identify the
relevant telescope design parameters. This
figure of merit, also called the signal-to-noise ratio,
can be written as
\begin{equation}
\left(\frac{signal}{noise}\right) \propto
\frac{R_\gamma\,Q\,\sqrt{A_{eff}\,T}}{\sigma_\theta}
\label{equation1}
\end{equation}
\begin{table}
\caption{\label{table1} Figure of merit parameter definitions.}
\vskip 0.5cm
\small
\centerline{
\begin{tabular}{|l l l|}
\hline
{\em Parameter} & {\em Units} & {\em Definition} \\
\hline
\hline
$A_{eff}$ & $\mathrm{m}^2$ & (effective) detector area \\
$T$ & sec & exposure \\
$\sigma_\theta$ & $^o$ & angular resolution \\
$R_\gamma$ & - & $\gamma$/hadron
relative trigger efficiency \\
$Q$ & - & $\gamma$/hadron
identification efficiency \\
\hline
\end{tabular}}
\end{table}
\normalsize
where the various parameters are defined in
Table\,\ref{table1}. Ultimately, source sensitivity is
the combination of these design parameters.
Although a more quantitative
form of the figure of merit is used to estimate the
performance of the conceptual telescope design (see
Equation\,\ref{equation4}), we use Equation\,\ref{equation1}
to address specific design requirements.
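To make the interplay of these parameters concrete, Equation\,\ref{equation1} can be evaluated numerically. The two configurations below are hypothetical, with parameter values chosen only to illustrate how improvements in $R_\gamma$, $Q$, and $\sigma_\theta$ combine:

```python
import math

def figure_of_merit(r_gamma, q, a_eff, t, sigma_theta):
    """Relative signal/noise from Equation 1 (proportionality only,
    so the absolute scale is meaningless)."""
    return r_gamma * q * math.sqrt(a_eff * t) / sigma_theta

# Two hypothetical configurations with equal area and exposure:
# a high-altitude imaging detector vs. a conventional low-altitude array.
high_alt = figure_of_merit(r_gamma=1.5, q=1.6, a_eff=150.0**2,
                           t=1.0, sigma_theta=0.5)
low_alt = figure_of_merit(r_gamma=0.8, q=1.0, a_eff=150.0**2,
                          t=1.0, sigma_theta=1.0)
gain = high_alt / low_alt  # factor-of-6 sensitivity gain in this example
```

Note the scaling: halving $\sigma_\theta$ buys as much sensitivity as quadrupling $A_{eff}$ or $T$, which is why the design discussion below concentrates on angular resolution and background rejection rather than raw area.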
\subsection{$R_\gamma$}
\label{sec_alt}
Air showers induced by primary particles in the
100\,GeV to 10\,TeV range
reach their maximum particle\footnote{Throughout the rest
of the paper the generic term ``particles'' will
refer to $\gamma, e^{\pm}, \mu^{\pm}$, and hadrons unless
otherwise noted.} number typically at altitudes
between 10 and 15\,km above sea level (a.s.l.).
An earth-bound detector therefore samples the cascade at
a very late stage of its
development, when the number of shower particles
has already dropped by an
order of magnitude from its value at shower maximum.
Figure\,\ref{longi} shows the result of computer
simulations\footnote {Here and in the following analysis,
the CORSIKA 5.61 \cite{corsika} code is used for air-shower
simulation in the atmosphere. It is briefly
described in the next section.} of the longitudinal profile
of air showers induced in the Earth's
atmosphere by proton and $\gamma$-primaries with fixed
energies and zenith angles
$0^{\mathrm o}\le\theta\le45^{\mathrm o}$. The small number
of particles reaching a detector altitude of 2500\,m
places severe limits on observations at such altitudes.
In addition, the number of particles in proton
showers actually exceeds the number of particles in
$\gamma$-showers at low altitudes (Figure\,\ref{r_gamma}).
This implies that the trigger probability, and thus the
effective area, of the detector is larger for
proton than for $\gamma$-showers, an unfavorable situation
which leads to an $R_{\gamma}$ (the ratio of
$\gamma$-ray to proton trigger efficiency) less than 1.
At 4500\,m, however, the mean number of particles
exceeds the number at 2500\,m by almost an order of magnitude
at all energies. Therefore, a telescope location
at an altitude $\geq$\,4000\,m is important for an air
shower array operating at VHE energies, not only
because the larger number of particles lowers the
energy threshold, but also because the relative trigger
probabilities provide intrinsic $\gamma$/hadron-separation
at higher altitudes.
\begin{figure}
\epsfig{file=rmiller_fig1.eps,width=14.0cm}
\caption{\label{longi}
Mean number of particles
($\gamma, e^{\pm}, \mu^{\pm}$, hadrons) vs. altitude for proton-
and $\gamma$-induced air showers with primary
energies 100\,GeV, 500\,GeV, 1\,TeV, and 10\,TeV.
The low energy cutoff of the particle kinetic energy is
100\,keV ($\gamma, e^{\pm}$), 0.1\,GeV ($\mu^{\pm}$),
and 0.3\,GeV (hadrons).}
\end{figure}
\begin{figure}
\epsfig{file=rmiller_fig2.eps,width=14.0cm}
\caption{\label{r_gamma}
Ratio of particle numbers in $\gamma$- and proton-induced
showers vs. altitude.}
\end{figure}
\begin{figure}
\epsfig{file=rmiller_fig3.eps,width=14.0cm}
\caption{\label{n_muon}
For a $150\times 150\,\mathrm{m}^2$ detector area and
cores randomly distributed over the detector area,
(a) shows the mean number of $\mu^{\pm}$ in proton-induced showers
as a function of the energy of the primary particle,
and (b) shows the fraction $f$ of proton showers
without $\mu^{\pm}$ as a function of the energy of the primary
particle. The solid line is the actual value
of $f$, the dashed line is the expected value assuming the
number of $\mu^{\pm}$ follows a
Poisson distribution.}
\end{figure}
\subsection{$Q$}
\begin{figure}
\epsfig{file=rmiller_fig4.eps,width=14.0cm}
\caption{\label{wavelet} Shower image (spatial particle
distribution reaching ground level) for a typical
TeV $\gamma$- and proton shower (top). Event image after
convolution with ``Urban Sombrero'' smoothing function
(middle), and after significance thresholding (bottom).
(0,0) is the center of the detector.}
\end{figure}
The rate of VHE $\gamma$-ray induced showers is significantly
smaller than those produced by
hadronic cosmic-rays\footnote{At 1 TeV the ratio of
proton- to $\gamma$-induced showers
from the Crab Nebula is approximately 10$^4$, assuming an
angular resolution of 0.5 degrees.}.
Therefore, rejecting this hadronic background, and thereby improving
the signal-to-noise ratio, is crucial to
the success of any VHE $\gamma$-ray telescope. The effectiveness
of a background rejection technique is typically expressed as a
quality factor $Q$ defined as
\begin{equation}
Q = \frac{\epsilon_\gamma}{\sqrt{1-\epsilon_p}}
\label{equation2}
\end{equation}
where $\epsilon_\gamma$ and $\epsilon_p$ are the efficiencies
for {\em identifying} $\gamma$-induced and
proton-induced showers, respectively. Traditional extensive
air-shower experiments have addressed
$\gamma$/hadron-separation (i.\,e. background rejection)
by identifying the penetrating particle
component of air showers (see e.\,g. \cite{hegra_gh,casa_gh}),
particularly muons. Although valid at
energies exceeding 50 TeV, the number of muons detectable
by a telescope of realistic effective area is
small at TeV energies (see Figure\,\ref{n_muon}\,(a)).
In addition, the $N_{\mu}$-distribution deviates
from a Poisson distribution, implying that the fraction of
proton showers {\em without} any muon is larger than
$e^{-\overline{N}_{\mu}}$ (Figure\,\ref{n_muon}\,(b)).
Relying on muon detection for an effective
$\gamma$/hadron-separation requires efficient muon detection
over a large area. A fine-grained absorption
calorimeter to detect muons and perform air shower calorimetry
can, in principle, lead to an effective rejection
factor; however, the costs associated with such a detector
are prohibitive.
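For orientation, Equation\,\ref{equation2} and the Poisson expectation discussed above are trivial to evaluate; the efficiencies and mean muon count below are illustrative numbers, not simulation results:

```python
import math

def quality_factor(eps_gamma, eps_p):
    """Q = eps_gamma / sqrt(1 - eps_p), Equation 2."""
    return eps_gamma / math.sqrt(1.0 - eps_p)

# Hypothetical cut: keeps 90% of gamma showers while identifying
# (and rejecting) 75% of proton showers.
q = quality_factor(eps_gamma=0.90, eps_p=0.75)

# Poisson expectation for the fraction of proton showers containing
# no muons, for an illustrative mean muon count of 1.0; the simulated
# fraction (Figure 3(b)) lies above this value.
f_no_muon = math.exp(-1.0)
```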
In contrast to air-shower experiments, imaging air-Cherenkov
telescopes have achieved quality factors
$Q>$\,7 by performing a {\em shape analysis} on the observed
image \cite{whipple_q}. Non-uniformity of hadronic
images arises from the development of localized regions of
high particle density generated by small
sub-showers. Although some of the background rejection
capability of air-Cherenkov telescopes is a result of their
angular resolution, rejection of hadronic events by
identifying the differences between $\gamma$- and
proton-induced images considerably increases source sensitivity.
Although air-Cherenkov telescopes image the shower as it
(primarily) appears at shower maximum, these
differences should also be evident in the particle distributions
reaching the ground. Figure\,\ref{wavelet}\,(top) shows
the particle distributions reaching ground level for
typical TeV $\gamma$-ray and proton-induced
showers. This figure illustrates the key differences:
the spatial distribution of particles in $\gamma$-ray
showers tends to be compact and smooth, while in proton
showers the distributions are clustered and uneven.
Mapping the spatial distribution of shower particles
(imaging), and identifying/quantifying shower features
such as these should yield improved telescope sensitivity.
\subsection{$\sigma_\theta$}
Shower particles reach the ground as a thin disk of diameter
approximately 100\,m. To first order, the disk
can be approximated as a plane defined by the arrival times
of the leading shower front particles.
The initiating primary's direction is assumed to be
perpendicular to this plane. Ultimately, the accuracy with which
the primary particle's direction can be determined is related
to the accuracy and total number of the relative arrival
time measurements of the shower particles,
\begin{equation}
\sigma_{\theta} \propto \frac{\sigma_{t}}{\sqrt{\rho}}~,
\end{equation}
where $\sigma_t$ is the time resolution and $\rho$
is the density of independent detector elements
sampling the shower front. The telescope must, therefore,
be composed of elements that have fast timing $\sigma_t$
and a minimum of cross-talk since this can affect the
shower front arrival time
determinations. Once the detector area is larger than the
typical lateral extent of air showers, thus providing
an optimal lever arm, the angular resolution can be further
improved by increasing the sampling density.
To achieve ``shower limited'' resolution, individual
detector elements should have a time response
no larger than the fluctuations inherent in shower particle
arrival times ($\leq10$\,ns, see
Figure\,\ref{converter}\,(c)); on the other hand, there is no
gain if $\sigma_t$ is significantly smaller than the
shower front fluctuations.
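A one-line numerical check of this scaling (the proportionality constant $k$ is arbitrary and set to 1 here purely for illustration):

```python
import math

def angular_resolution(sigma_t, density, k=1.0):
    """sigma_theta = k * sigma_t / sqrt(rho); k is an unspecified
    proportionality constant, set to 1 for illustration."""
    return k * sigma_t / math.sqrt(density)

# Quadrupling the sampling density (at fixed timing resolution)
# halves the angular resolution.
base = angular_resolution(sigma_t=1.0, density=1.0)
dense = angular_resolution(sigma_t=1.0, density=4.0)
```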
In practice fitting the shower plane is complicated by
the fact that the shower particles undergo
multiple scattering as they propagate to the ground leading
to a curvature of the shower front. This scattering
delays the particle arrival time by $\mathcal{O}$(ns)/100\,m,
however the actual magnitude of curvature is a function
of the particle's distance from the core. Determination
of the core position, and the subsequent application of a
{\em curvature correction} considerably improves the angular
resolution by returning the lateral particle
distribution to a plane which can then be reconstructed.
Core location accuracy can be improved by increasing
the sampling density of detector elements and the overall
size of the detector itself.
\section{Conceptual Design}
To summarize the previous sections, an all-sky VHE telescope
should satisfy the following design considerations:
\begin{itemize}
\item{$\sim\,100\,\%$ duty cycle ($T$)}
\item{large effective area ($A_{eff}$)}
\item{high altitude ($>$\,4000\,m)}
\item{high sampling density}
\item{fast timing}
\item{imaging capability}
\end{itemize}
In the sections that follow, we study how a pixellated
{\em scintillator-based} large-area
detector with 100$\%$ active sampling performs as an
all-sky monitor and survey instrument. Scintillator is
used since it can provide excellent time resolution and has
high sensitivity to charged particles, ultimately
leading to improvements in angular resolution, energy threshold,
and background rejection. To reduce detector
cross-talk, improve timing, and enhance the imaging
capabilities, the detector should be segmented into
optically isolated pixels. This type of detector is easier
to construct, operate, and maintain compared to
other large-area instruments such as water- or gas-based
telescopes, an advantage since the high-altitude
constraint is likely to limit potential telescope sites
to remote locations.
Many of the design goals are most effectively achieved by
maximizing the number of detected air-shower
particles. As discussed in Section\,\ref{sec_alt}, detector
altitude is of primary importance; however, at the energies
of interest here only about $10\,\%$
(Figure\,\ref{converter}\,(a,b)) of the particles reaching
the detector level are charged. Thus, the number of detected
particles can be increased dramatically by improving the
sensitivity to the $\gamma$-ray component of showers.
A converter material (e.\,g. lead)
on top of the scintillator converts photons into charged
particles via Compton scattering and pair production,
and, in addition, reduces the time spread of the shower
front by filtering low energy particles
which tend to trail the prompt shower front
(Figure\,\ref{converter}\,(c)) and thus
deteriorate the angular resolution.
Figure\,\ref{converter}\,(d) shows the charged particle
gain expected as a function of the converter thickness
for lead, tin, and iron converters.
The maximum gain is for a lead converter at $\sim$\,2
radiation lengths
($1\,\mathrm{r.\,l.}=0.56\,\mathrm{cm}$), but the gain
function is rather steep below 1\,r.\,l. and flattens
above. Because of the spectrum of secondary $\gamma$-rays
reaching the detector, pair production is the
dominant process contributing to the charged particle gain
(Figure\,\ref{converter}\,(d)).
\begin{figure}
\epsfig{file=rmiller_fig5.eps,width=14.0cm}
\caption{\label{converter}
Mean number vs. primary energy (a) and energy distribution
(b) of secondary $\gamma$'s and $e^{\pm}$
reaching 4500\,m observation altitude. (c) Integral
shower particle arrival time distribution
for particles within 40\,m distance to the core and various
cuts on the particle energy (100\,keV, 1\,MeV, 10\,MeV).
(d) Charged particle gain as a function of the converter
thickness for lead, tin, and iron converters.}
\end{figure}
Techniques for reading out the light produced in
scintillator-based detector elements have progressed
in recent years with the development of large-area sampling
calorimeters. Of particular interest
is the work by the CDF collaboration on scintillating
tiles \cite{bodek}; this technique utilizes fibers,
doped with a wavelength shifter and embedded directly in
the scintillator, to absorb the scintillation light
and subsequently re-emit it at a longer wavelength.
This re-emitted light is then coupled to photomultiplier
tubes either directly or using a separate clear fiber-optic
light guide. This highly efficient configuration
is ideal for detecting minimum ionizing particles (MIPs),
and produces 4 photoelectrons/MIP on average
in a 5\,mm-thick scintillator tile.
Using an array of tile/fiber detector elements one can now
consider a large-area detector that counts
particles and is $\sim\,100\,\%$ active, and it is this
paradigm that we discuss in more detail in the
following sections. It should be noted that a
scintillator-based air-shower detector is not a new idea;
however, the use of the efficient tile/fiber configuration
in a detector whose physical area is fully active
pushes the traditional concept of an air-shower array
to the extreme.
\subsection{Air Shower and Detector Simulation}
The backbone of a conceptual design study is the simulation
code. Here the complete simulation of
the detector response to air showers is done in two steps:
1) initial interaction of the primary particle
(both $\gamma$-ray and proton primaries) with the atmosphere
and the subsequent development
of the air shower, and 2) detector response to air-shower
particles reaching the detector level.
The CORSIKA \cite{corsika} air shower simulation code,
developed by the KASCADE \cite{kascade} group,
provides a sophisticated simulation of the shower development
in the Earth's atmosphere. In CORSIKA,
electromagnetic interactions are simulated using the
EGS\,4 \cite{egs} code. For hadronic interactions,
several options are available. A detailed study of the hadronic
part and comparisons to existing data has
been carried out by the CORSIKA group and is documented in
\cite{corsika}. For the simulations discussed here,
we use the VENUS \cite{venus} code for high energy hadronic
interactions and GHEISHA \cite{gheisha} to treat low energy
($\le\,80\,{\mathrm GeV}$) hadronic interactions.
The simulation of the detector itself is based on the
GEANT \cite{geant321} package. The light yield of
0.5\,cm tile/fiber assemblies has been studied in detail
in \cite{bodek} and \cite{barbaro}, and we adopt
an average light yield of 4 photoelectrons per minimum
ionizing particle. This includes attenuation losses
in the optical fibers and the efficiency of the photomultiplier.
Simulation parameters are summarized in
Table\,\ref{tab_sim}. It should be noted that wavelength
dependencies of the fiber attenuation length and
of the photomultiplier quantum efficiency have not been included.
\begin{table}
\caption{\label{tab_sim} Basic parameters of the shower and
detector simulation.}
\vskip 0.5cm
\small
\centerline{
\begin{tabular}{|l|l|}
\hline
zenith angle range & $0^{\mathrm o}\le\theta\le45^{\mathrm o}$ \\
lower kinetic energy cuts & 0.1\,MeV ($e^{\pm},\,\gamma$) \\
& 0.1\,GeV ($\mu^{\pm}$) \\
& 0.3\,GeV (hadrons) \\
scintillator thickness & 0.5\,cm \\
lead converter thickness & 0.5\,cm \\
PMT transit time spread & 1\,ns (FWHM) \\
average light yield/MIP & 4 photoelectrons \\
\hline
\end{tabular}}
\end{table}
\normalsize
As shown in Section\,\ref{sec_alt} only a detector at an
altitude above 4000\,m can be expected to give the desired
performance; however, we study the effect of three
detector altitudes, 2500\,m
($764.3\,{\mathrm g}\,{\mathrm cm}^{-2}$), 3500\,m
($673.3\,{\mathrm g}\,{\mathrm cm}^{-2}$), and
4500\,m ($591.0\,{\mathrm g}\,{\mathrm cm}^{-2}$), for the
purpose of completeness. These altitudes span
the range of both existing and planned all-sky VHE telescopes
such as the Milagro detector \cite{milagro} near Los Alamos,
New Mexico (2630\,m a.s.l.), and the ARGO-YBJ \cite{argo} detector
proposed for the Yanbajing Cosmic Ray Laboratory in Tibet
(4300\,m a.s.l.).
\section{Detector Performance}
In the remainder of this paper we study the expected performance
of a detector based on the conceptual design
discussed above. Although the canonical design is a detector
with a geometric area of $150\times 150\,\mathrm{m}^{2}$, a detector
design incorporating a $200\times 200\,\mathrm{m}^{2}$ area has
also been analyzed in order to understand how telescope
performance scales with area. Pixellation is achieved
by covering the physical area of the detector with a mosaic
of 5\,mm thick scintillator tiles each covering
an area of $1\times 1\,\mathrm{m}^{2}$.
\subsection{Energy Threshold}
The energy threshold of air-shower detectors is not well-defined.
The trigger probability for a shower
induced by a primary of fixed energy is not a step-function but
instead rises rather slowly due
to fluctuations in the first interaction height, shower
development, core positions, and incident angles.
Figure\,\ref{energy}\,(a) shows the trigger probability as a
function of the primary $\gamma$-ray energy
for three trigger conditions.
Typically, the primary energy where the trigger probability
reaches either $10\,\%$ or $50\,\%$ is defined as the
energy threshold (see Figure\,\ref{energy}\,(a)).
A large fraction of air showers that fulfill the trigger
condition will have lower energies since
VHE source spectra appear to be power-laws, $E^{-\alpha}$. A
more meaningful indication of the energy threshold
is then the {\it median} energy $E_{med}$; however, this
measure depends on the spectral index of the source.
For a source with spectral index $\alpha=2.49$ (Crab),
Figure\,\ref{energy}\,(b, top) shows $E_{med}$ as a
function of detector altitude (for fixed detector size).
The median energy increases as altitude decreases
since the number of particles reaching the detector level
is reduced at lower altitudes. For a detector at
4500\,m a.s.l., $E_{med}$ is about 500\,GeV after imposing
a 40 pixel trigger criterion.
$E_{med}$ is not a strong function of the detector size
(Figure\,\ref{energy}\,(b, bottom)). It is
also noteworthy that a larger pixel size of
$2\times 2\,\mathrm{m}^2$ instead of $1\times 1\,\mathrm{m}^2$
only slightly increases $E_{med}$; due to the lateral extent
of air showers and the large average distances
between particles, nearly $95\,\%$ of all showers with more
than 50 $1\,\mathrm{m}^2$-pixels also have more
than 50 $4\,\mathrm{m}^2$-pixels.
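The dependence of $E_{med}$ on the source spectrum can be reproduced with a toy Monte Carlo. The trigger curve below is a hypothetical smooth sigmoid, not the simulated efficiency of Figure\,\ref{energy}\,(a); only the qualitative behaviour, a slowly rising trigger probability folded with a falling power law, carries over:

```python
import math
import random

random.seed(1)

ALPHA = 2.49                # Crab-like differential spectral index
E_MIN, E_MAX = 0.05, 30.0   # TeV, roughly the simulated energy range

def sample_power_law():
    """Draw E from dN/dE ~ E^-ALPHA via inverse-CDF sampling."""
    a = 1.0 - ALPHA
    u = random.random()
    return (E_MIN**a + u * (E_MAX**a - E_MIN**a)) ** (1.0 / a)

def trigger_prob(e_tev, e50=0.4, width=0.6):
    """Hypothetical trigger curve: 50% efficiency at e50 TeV,
    rising slowly in log-energy."""
    return 1.0 / (1.0 + math.exp(-math.log(e_tev / e50) / width))

def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

thrown = [sample_power_law() for _ in range(100_000)]
triggered = [e for e in thrown if random.random() < trigger_prob(e)]

e_med_all = median(thrown)        # median of the raw power law
e_med_trig = median(triggered)    # 'median energy' threshold measure
```

Because the trigger folds a rising efficiency with a steeply falling spectrum, the median energy of triggered events sits well above the median of the raw spectrum but well below the energy at which the trigger saturates.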
\begin{figure}
\epsfig{file=rmiller_fig6.eps,width=14.0cm}
\caption{\label{energy}
(a) Trigger efficiency as a function of the primary
particles' energy for three trigger conditions
(10, 40, 400 pixels).
(b) Median energy of detected $\gamma$-showers as a function
of the trigger condition (number
of pixels) for three detector altitudes (top) and as a
function of the detector size for
a fixed altitude (4500\,m) (bottom).}
\end{figure}
\subsection{Background Rejection and Core Location}
\label{sec_bg}
Due to its pixellation and $100\,\%$ active area, the telescope
described here can provide true images of
the spatial distribution of secondary particles reaching the
detector. Image analysis can take many forms;
the method of wavelet transforms\,\cite{kaiser} is well
suited for identifying and extracting localized
image features. To identify localized high-density regions
of particles an image analysis technique that utilizes digital
filters is used; the procedure is briefly summarized below
while details are given in \cite{miller}.
Proton- and $\gamma$-induced showers can be identified by
counting the number of ``hot spots'', or peaks, in a
shower image in an automated, unbiased way. These peaks are
due to small sub-showers created by secondary particles and
are more prevalent in hadronic showers than in $\gamma$-induced
showers. To begin, the shower image (i.\,e. the
spatial distribution of detected secondary particles) is
convolved with a function that smooths the image over a
predefined region or length scale
(see Figure\,\ref{wavelet}\,(middle)).
The smoothing function used in this
analysis is the so-called ``Urban Sombrero''\footnote{Also
known as the ``Mexican Hat'' function.} function:
\begin{equation}
g\left(\frac{r}{a}\right)
= \left(2 - \frac{r^2}{a^2}\right) e^{-\frac{r^2}{2\,a^2}}
\end{equation}
where $r$ is the radial distance between the origin of
the region being smoothed and an image pixel, and $a$ is the
length scale over which the image is to be smoothed.
This function is well suited for this analysis since it is a
localized function having zero mean; therefore, image
features analyzed at multiple scales $a$ will maintain their
location in image space. A peak's maximum amplitude is found
on length scale $a$ corresponding to the actual
spatial extent of the ``hot spot.''
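A minimal implementation of this smoothing step might look as follows; the brute-force convolution below operates on a toy pixel map and is meant only to illustrate the kernel, not the optimized analysis of \cite{miller}:

```python
import math

def urban_sombrero(r, a):
    """g(r/a) = (2 - r^2/a^2) * exp(-r^2 / (2 a^2)); zero mean in 2D."""
    x = (r * r) / (a * a)
    return (2.0 - x) * math.exp(-0.5 * x)

def smooth(image, a, radius):
    """Convolve a 2D pixel map (list of lists) with the kernel.
    Brute force, fine for a demonstration."""
    n, m = len(image), len(image[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < m:
                        acc += image[ii][jj] * urban_sombrero(
                            math.hypot(di, dj), a)
            out[i][j] = acc
    return out

# Toy 11x11 image with a single localized 'hot spot' in the centre;
# the smoothed image peaks at the hot-spot location.
img = [[0.0] * 11 for _ in range(11)]
img[5][5] = 1.0
sm = smooth(img, a=2.0, radius=6)
```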
Many peaks exist in these images; the key is to tag
statistically significant peaks. To do this the probability
distribution of peak amplitudes must be derived from a random
distribution of pixels. Using $2\times10^4$ events,
each with a random spatial particle distribution,
the probability of observing a given amplitude is computed. This
is done for events with different pixel multiplicities and
for different scale sizes. Results using only a
single scale size of 8\,m are presented here. This scale size
represents the optimum for the ensemble of showers;
scale size dependence as a function of pixel multiplicity is
studied in \cite{miller}. In order to identify
statistically significant peaks, a threshold is
applied to the smoothed image: peaks are eliminated if their
amplitude is more probable than $6.3\times10^{-5}$,
corresponding to a significance of less than $4\,\sigma$; the value
of the threshold is chosen to maximize the background
rejection. Figure\,\ref{wavelet}\,(bottom) shows the result
of thresholding. After applying thresholding
the number of significant peaks is counted; if the number of
peaks exceeds the mean number of peaks expected from
$\gamma$-induced showers then the event is tagged as
a ``proton-like'' shower and rejected. The number of
expected peaks is energy dependent starting at 1 peak
(i.\,e. the shower core), on average, for $\gamma$-showers
with less than 100 pixels and increasing with pixel
multiplicity; proton-induced showers show a similar
behavior except
that the number of peaks rises faster with the number of
pixels (see Figure\,\ref{peaks}).
\begin{figure}
\epsfig{file=rmiller_fig7.eps,width=14.0cm}
\caption{\label{peaks}
(a) Number of significant peaks for $\gamma-$ (top) and
proton showers (bottom) as a function of pixel multiplicity.
A random probability of $<\,6.3\times10^{-5}$ is used to
define a significant peak.
(b) Mean number of significant peaks for $\gamma-$ and
proton showers as a function of pixel multiplicity.}
\end{figure}
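Assuming the quoted probability refers to the two-sided tail of a Gaussian, the correspondence between the $6.3\times10^{-5}$ threshold and $4\,\sigma$ can be checked directly:

```python
import math

def two_sided_tail(n_sigma):
    """Probability of a Gaussian fluctuation beyond +/- n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2.0))

p4 = two_sided_tail(4.0)  # approximately 6.3e-5
```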
Additional background rejection may be possible by using
information such as the spatial distribution of peaks and
the actual shape of individual peak regions. This is
currently being investigated. The ability to map
and analyze the spatial distributions of air shower particles
implies that, in conjunction with analysis
techniques such as the one described here, the large cosmic-ray
induced backgrounds can be suppressed thereby
improving the sensitivity of a ground-based air-shower array;
quantitative results on the use of this image analysis technique
are described below and summarized in Figure\,\ref{quality}.
Image analysis can also be used to identify and locate the
shower core. Here the core is identified as the
peak with the largest amplitude; this is reasonable assuming
that typically the core represents a relatively
large region of high particle density.
Figure\,\ref{angle_res}\,(a) shows the accuracy of the core fit;
other methods are less accurate and more susceptible
to detector edge effects
and local particle density fluctuations.
The core location is used to correct for the curvature of the
shower front and to veto events with cores outside the active
detector area. Rejecting ``external'' events is beneficial
as both angular resolution and background rejection
capability are worse for events with cores off the detector.
We define the outer 10\,m of the detector as a veto ring
and restrict the analysis to events with
fitted cores inside the remaining fiducial area
($130\times 130\,\mathrm{m}^{2}$ or
$180\times 180\,\mathrm{m}^{2}$ for the
$200\times 200\,\mathrm{m}^{2}$ design). This cut
identifies and keeps $94\,\%$ of the $\gamma$-showers
with {\em true} cores within the fiducial area while
vetoing $64\,\%$ of events with cores outside. It is
important to note that the non-vetoed events are generally
of higher quality (better angular resolution,
improved $\gamma$/hadron-separation). In addition, $R_{\gamma}$ is
smaller for external events than for internal ones
due to the larger lateral spread of particles in proton
showers; thus the veto cut actually improves overall
sensitivity even though the total effective area is decreased.
\begin{figure}
\epsfig{file=rmiller_fig8.eps,width=14.0cm}
\caption{\label{angle_res}
(a) Mean distance between reconstructed and true shower
core location.
(b) Mean angle between reconstructed and true shower direction for
a detector without lead converter and with 0.5\,cm and
1.0\,cm lead.}
\end{figure}
\subsection{Angular Resolution}
The shower direction is reconstructed using an iterative
procedure that fits a plane to the arrival times of the pixels and
minimizes $\chi^{2}$.
Before fitting, the reconstructed core position
(see Section\,\ref{sec_bg}) is used
to apply a curvature correction to the shower front.
In the fit, the pixels are weighted with $w(p)=1/\sigma^{2}(p)$,
where $\sigma(p)$ is the RMS of the time residuals
$t_{pixel}-t_{fit}$ for pixels with $p$ photoelectrons,
and $t_{fit}$ is the expected time from the previous iteration.
In order to minimize the effect of large time fluctuations
in the shower particle arrival times, we reject pixels with times
$t_{pixel}$ with
$\left|t_{pixel}-t_{fit}\right|\ge\,10\,\mathrm{ns}$.
In addition, only pixels within 80\,m distance to the
shower core are included in the fit.
Figure\,\ref{angle_res}\,(b) shows the mean difference
between the fitted and the true shower direction as a
function of the number of pixels for a
detector with and without 0.5\,cm and 1.0\,cm of lead,
again indicating the benefits of the
converter. The angular resolution does not improve considerably
when the converter thickness is increased from 0.5\,cm
to 1.0\,cm, thus 0.5\,cm is a reasonable
compromise considering the tradeoff between cost and performance.
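The reconstruction procedure can be sketched as follows. The fit below is a simplified, unweighted variant (uniform pixel weights, no curvature correction, synthetic hit times), intended only to demonstrate the iterative plane fit with the 10\,ns residual cut:

```python
import numpy as np

C_NS = 0.299792458  # speed of light in m/ns

def fit_plane(xs, ys, ts, cut_ns=10.0, n_iter=3):
    """Iterative least-squares fit of t = t0 + p*x + q*y to pixel
    arrival times; pixels with |t - t_fit| >= cut_ns relative to the
    previous iteration are excluded from the next pass."""
    mask = np.ones(ts.size, dtype=bool)
    coef = np.zeros(3)
    for _ in range(n_iter):
        design = np.column_stack(
            [np.ones(mask.sum()), xs[mask], ys[mask]])
        coef, *_ = np.linalg.lstsq(design, ts[mask], rcond=None)
        resid = ts - (coef[0] + coef[1] * xs + coef[2] * ys)
        mask = np.abs(resid) < cut_ns
    return coef  # (t0, p, q); the front normal follows from (p, q)

# Synthetic event: plane front from 10 deg zenith (azimuth 0) with
# 1 ns Gaussian timing jitter and one 25 ns late outlier.
rng = np.random.default_rng(0)
xs = rng.uniform(-75.0, 75.0, 200)
ys = rng.uniform(-75.0, 75.0, 200)
ts = (np.sin(np.radians(10.0)) / C_NS) * xs + rng.normal(0.0, 1.0, 200)
ts[0] += 25.0  # late hit, should be rejected by the residual cut

t0, p, q = fit_plane(xs, ys, ts)
theta_rec = np.degrees(np.arcsin(np.hypot(p, q) * C_NS))
```

With 200 hits, a 1\,ns timing jitter, and a 150\,m lever arm, the toy fit recovers the 10$^{\mathrm o}$ zenith angle to a small fraction of a degree despite the outlier.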
\subsection{Sensitivity}
\begin{figure}
\epsfig{file=rmiller_fig9.eps,width=14.0cm}
\caption{\label{sourcebin}
(a) Significance for a one day observation of a
Crab-like source for three trigger conditions
(40, 120, 400 pixels) as a function of the source bin size.
(b) Energy distribution of the
detected $\gamma$-showers for the three trigger conditions.
The source location is
$\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$,
where $\delta$ is the source declination and $\lambda$
is the latitude of the detector site.}
\end{figure}
\begin{figure}
\epsfig{file=rmiller_fig10.eps,width=14.0cm}
\caption{\label{quality}
(a) Significance/day for a Crab-like source as a function
of the trigger condition with and without
$\gamma$/hadron-separation for the $150\times 150\,\mathrm{m}^{2}$
prototype. (b) Quality factor as a function of the
trigger condition. (c) Sensitivity as a function of the source
position $\left|\delta-\lambda\right|$.}
\end{figure}
The ultimate characteristic of a detector is its sensitivity
to a known standard candle.
In this section, the methods described so far are
combined to estimate the overall
point source sensitivity of a pixellated scintillation detector.
As indicated in Equation\,\ref{equation1}, the sensitivity
of an air shower array depends on its
angular resolution $\sigma_{\theta}$, its effective area
$A_{eff}$, the trigger probabilities
for source and background showers, and the quality factor of the
$\gamma$/hadron-separation. However, as most of the parameters
are functions of the primary energy, the sensitivity depends on the
spectrum of the cosmic ray background and the spectrum of the
source itself. The significance $S$ therefore has to be
calculated using
\begin{equation}
S=\frac{\int A_{eff}^{\gamma}(E)~\epsilon_{\gamma}(E)
~J_{\gamma}(E)~{\mathrm d}E~~f_{\gamma}~T}
{\sqrt{\int A_{eff}^{p}(E)~(1\,-\,\epsilon_{p}(E))
~J_{p}(E)~{\mathrm d}E~~\Delta\Omega~T}}
\label{equation4}
\end{equation}
where $J_{\gamma}$ and $J_{p}$ are the photon and proton
energy spectrum, and
$f_{\gamma}$ is the fraction of $\gamma$-showers
fitted within the solid angle bin
$\Delta\Omega=2\,\pi\,(1-\mathrm{cos}\,\theta)$.
Other parameters have their standard meaning.
The Crab Nebula is commonly treated as a standard candle
in $\gamma$-ray astronomy; this allows
the sensitivity of different telescopes to be compared.
The differential spectrum of the Crab
at TeV energies has been measured by the
Whipple collaboration \cite{whipple_crab}:
\begin{equation}
J_{\gamma}(E) = (3.20\,\pm0.17\,\pm0.6)\times10^{-7}\,
E_{\mathrm{TeV}}^{-2.49\,\pm0.06\,\pm0.04}\,
\mathrm{m}^{-2}\,\mathrm{s}^{-1}\,\mathrm{TeV}^{-1}.
\label{crab_rate}
\end{equation}
The sensitivity of the detector to a Crab-like source
can be estimated using Equation\,\ref{equation4} and
the differential proton background flux measured by the
JACEE balloon experiment \cite{jacee}:
\begin{equation}
\frac{dJ_{p}(E)}{d\Omega}=(1.11^{+0.08}_{-0.06})\times10^{-1}\,
E_{\mathrm{TeV}}^{-2.80\,\pm0.04}\,
\mathrm{m}^{-2}\,\mathrm{sr}^{-1}\,\mathrm{s}^{-1}\,
\mathrm{TeV}^{-1}.
\label{bg_rate}
\end{equation}
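Equation\,\ref{equation4}, together with these two spectra, can be evaluated numerically. The following Python sketch carries out the integration for illustrative, constant effective areas and efficiencies; these flat response functions are placeholders chosen for the example, not the simulated, energy-dependent detector response used in the paper.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids version-dependent numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def j_gamma(e_tev):
    """Whipple Crab photon spectrum [m^-2 s^-1 TeV^-1]."""
    return 3.20e-7 * e_tev**-2.49

def dj_p_domega(e_tev):
    """JACEE proton spectrum [m^-2 sr^-1 s^-1 TeV^-1]."""
    return 1.11e-1 * e_tev**-2.80

def significance(a_gamma, a_p, eps_gamma, eps_p, f_gamma,
                 theta_bin_deg, t_sec, e_lo=0.05, e_hi=30.0, n=4000):
    """Significance S of Equation (4): power-law source over isotropic background."""
    e = np.logspace(np.log10(e_lo), np.log10(e_hi), n)
    d_omega = 2.0 * np.pi * (1.0 - np.cos(np.radians(theta_bin_deg)))
    signal = _trapz(a_gamma(e) * eps_gamma(e) * j_gamma(e), e) * f_gamma * t_sec
    background = (_trapz(a_p(e) * (1.0 - eps_p(e)) * dj_p_domega(e), e)
                  * d_omega * t_sec)
    return signal / np.sqrt(background)

# Toy response: flat 150 x 150 m^2 area and constant efficiencies.
area = lambda e: 150.0 * 150.0 * np.ones_like(e)
s = significance(area, area,
                 eps_gamma=lambda e: 0.5 * np.ones_like(e),
                 eps_p=lambda e: 0.9 * np.ones_like(e),
                 f_gamma=0.7, theta_bin_deg=0.4, t_sec=6.0 * 3600.0)
```

The structure mirrors the equation term by term; only the response functions need replacing with tabulated simulation results to reproduce the sensitivities quoted below.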
Calculating $S$ using Equation\,\ref{equation4} is not
straightforward since $\epsilon_{\gamma}$
is not a constant, but rather a function of the event size
and core position and, therefore, angular reconstruction accuracy.
This equation can be solved, however, by a Monte Carlo
approach: Using the Crab and cosmic-ray proton spectral
indices, a pool of simulated $\gamma$- and proton showers
($\cal{O}$($10^{6}$) events of each particle type at
each altitude) is generated with energies from 50\,GeV
to 30\,TeV and with zenith angles
$0^{\mathrm o}\le\theta\le45^{\mathrm o}$.
A full Julian day source transit can be simulated for a given
source bin size and declination
by randomly choosing $\gamma$-ray and proton-induced showers
from the simulated shower pools at rates
given by Equations\,\ref{crab_rate} and~\ref{bg_rate}. The only
constraint imposed on the events is that
they have the same zenith angle as the source bin at the
given time. The showers are then fully reconstructed,
trigger and core location veto cuts, as well as
$\gamma$/hadron-separation cuts, are applied; $\gamma$-showers
are required to fall into the source bin. This procedure
produces distributions of pixel multiplicity, core
position, etc. reflecting instrumental resolutions and responses.
Because the angular resolution varies with the number of
pixels, the optimal source bin size also varies.
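The event-sampling step of this Monte Carlo procedure, drawing shower energies distributed as the two power laws, can be sketched with inverse-transform sampling. The spectral indices and energy range below come from the text; the sample size and random seed are arbitrary choices for illustration.

```python
import numpy as np

def sample_power_law(rng, index, e_lo, e_hi, n):
    """Draw n energies from dN/dE ~ E**(-index) on [e_lo, e_hi] via inverse CDF."""
    a = 1.0 - index                # exponent of the integrated spectrum
    u = rng.random(n)
    return (e_lo**a + u * (e_hi**a - e_lo**a))**(1.0 / a)

rng = np.random.default_rng(1)
# Energies in TeV, 50 GeV to 30 TeV as in the simulated shower pools.
e_gamma = sample_power_law(rng, 2.49, 0.05, 30.0, 100_000)   # Crab-like photons
e_proton = sample_power_law(rng, 2.80, 0.05, 30.0, 100_000)  # background protons
```

The steeper proton spectrum yields systematically lower energies than the photon sample; in the full procedure each drawn event is then passed through the shower reconstruction and the trigger, core-veto, and $\gamma$/hadron-separation cuts.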
Figure\,\ref{sourcebin}\,(a) shows the significance for a full-day
observation of a Crab-like source with
$\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$,
where $\delta$ is the source declination and $\lambda$
is the latitude of the detector site,
as a function of the source bin size for three trigger conditions.
As the number of pixels increases, the optimal source bin size
decreases from $0.6^{\mathrm{o}}$ (40 pixels) to
$0.2^{\mathrm{o}}$ (400 pixels).
Figure\,\ref{sourcebin}\,(b) shows
how the energy distribution of detected $\gamma$-showers
changes with trigger condition. The
median energy for a 40 pixel trigger is 600\,GeV,
with a substantial fraction of showers having energies
below 200\,GeV. For a 120 pixel trigger, $E_{med}$ is 1\,TeV.
For a 1 day Crab-like source transit, Figure\,\ref{quality}\,(a)
shows how the significance varies as a
function of the trigger condition with and without
$\gamma$/hadron-separation. If no $\gamma$/hadron-separation
is applied the sensitivity increases and then falls above
500 pixels because of the finite size of the detector.
However, as shown in Section\,\ref{sec_bg}, above 500 pixels
the quality factor of the
$\gamma$/hadron-separation counterbalances the loss of area.
Figure\,\ref{quality}\,(b) shows the quality factor
derived solely from the ratio of source significance with
and without separation. As expected, $Q$ increases
dramatically with pixel number, leading to significances
well above $3\,\sigma$ per day for energies above
several TeV.
The sensitivity also depends on the declination of the source.
Results quoted so far refer to sources
$\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$.
Figure\,\ref{quality}\,(c) shows how the
expected significance per source day transit changes with
the source declination $\delta$.
\begin{table}[ht]
\caption{\label{table2} Significances $S$ for a 1 day
observation of a Crab-like source with
$\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$
for different altitudes and trigger conditions. $E_{med}$ is the
median energy of the detected source particles, values
in parentheses denote significances after
$\gamma$/hadron separation.}
\vskip 0.5cm
\small
\centerline{
\begin{tabular}{|c|c|| c c | c || c c | c |}
\hline
& & \multicolumn{3}{|c||}{$150\times 150\,\mathrm{m}^{2}$}
& \multicolumn{3}{|c|}{$200\times 200\,\mathrm{m}^{2}$} \\
\hline
altitude & trigger & \multicolumn{2}{|c|}{$S \left[\sigma\right]$}
& log($E_{med}^{\mathrm{GeV}})$
& \multicolumn{2}{|c|}{$S \left[\sigma\right]$}
& log($E_{med}^{\mathrm{GeV}})$ \\
\hline
\hline
4500\,m & 40 & 1.3 & (1.3) & 2.8 & 1.8 & (1.8) & 2.8 \\
& 1000 & 1.6 & (2.5) & 3.8 & 1.9 & (2.7) & 3.8 \\
\hline
3500\,m & 40 & 0.9 & (0.9) & 3.1 & 1.2 & (1.2) & 3.0 \\
& 1000 & 0.8 & (1.2) & 4.1 & 1.3 & (1.7) & 4.0 \\
\hline
2500\,m & 40 & 0.4 & (0.4) & 3.3 & 0.6 & (0.6) & 3.2 \\
& 1000 & 0.5 & (0.6) & 4.3 & 0.8 & (1.1) & 4.2 \\
\hline
\end{tabular}}
\end{table}
\normalsize
\begin{table}[ht]
\caption{\label{table3} Expected rates [kHz] for a
$150\times 150\,\mathrm{m}^2$ detector at
different altitudes. Cores are randomly distributed
over $300\times 300\,\mathrm{m}^{2}$ and no core
veto cut is applied.}
\vskip 0.5cm
\small
\centerline{
\begin{tabular}{|l||c|c|c|}
\hline
trigger & 4500\,m & 3500\,m & 2500\,m \\
\hline
\hline
10 & 34.5 & 18.9 & 10.5 \\
40 & 6.7 & 3.6 & 2.1 \\
100 & 1.8 & 1.0 & 0.6 \\
400 & 0.2 & 0.2 & 0.1 \\
1000 & 0.05 & 0.04 & 0.02 \\
\hline
\end{tabular}}
\end{table}
\normalsize
Table\,\ref{table2} summarizes the dependence of the
detector performance on the size and the altitude
of the detector. Significance scales with $\sqrt{A_{eff}}$
as expected from Equation\,\ref{equation1}. Detector
altitude, however, is more critical. Although at 2500\,m
$10\,\sigma$ detections of a steady Crab-like source
per year are possible, only at altitudes of 4000\,m and above is
the sensitivity sufficient to detect
statistically significant daily variations of source emission.
It is noteworthy that for the canonical design at 4500\,m,
a trigger condition as low as 10 pixels still produces
$1.2\,\sigma$ per day at median energies of about 280\,GeV
corresponding to an event rate of 34.5\,kHz.
Predicted event rates for different trigger conditions at
various altitudes are summarized in Table\,\ref{table3}.
Sustained event rates below $\sim$\,10\,kHz are achievable
with off-the-shelf data acquisition electronics; higher
rates may also be possible. The event rates estimated here
are relatively low compared to the rate of
single cosmic-ray muons; because of the optical isolation and
low cross-talk between individual detector elements,
single muons are unlikely to trigger the detector even with
a low pixel multiplicity trigger condition.
\section{Conclusion}
To fully develop the field of VHE $\gamma$-ray astronomy
a new instrument is required - one capable of
continuously monitoring the sky for VHE transient emission
and performing a sensitive systematic survey for
steady sources. To achieve these goals such an instrument
must have a wide field of view, $\sim\,100\,\%$ duty
cycle, a low energy threshold, and background rejection
capabilities. Combining these features we have shown that a
detector composed of individual scintillator-based pixels
and 100$\%$ active area provides high sensitivity
at energies from 100\,GeV to beyond 10\,TeV. Detailed
simulations indicate that a source with the intensity
of the Crab Nebula would be observed with an energy dependent
significance exceeding $\sim\,3\,\sigma$/day.
AGN flares, or other transient phenomena, could be detected
on timescales $\ll$1 day depending on their
intensity - providing a true VHE transient all-sky monitor.
A conservative estimate of the
sensitivity of a detector like the one described here
(the PIXIE telescope) is shown in Figure\,\ref{sensi_comp}
compared to current and future ground- and space-based
experiments. The plot shows the sensitivities for both 50
hour and 1 year source exposures, relevant for transient and
quiescent sources, respectively. A detector based on
the PIXIE design improves upon first-generation detector
concepts, such as Milagro, in two principal ways: fast
timing and spatial mapping of air shower particles.
The sensitivity of a sky map produced by this detector in
1 year reaches the flux sensitivity of current air-Cherenkov
telescopes (for a 50 hour exposure), making the detection of
AGN in their quiescent states possible.
\begin{figure}
\epsfig{file=rmiller_fig11.eps,width=14.0cm}
\caption{\label{sensi_comp} Predicted sensitivity of some
proposed and operational
ground-based telescopes. The dashed and dotted lines show
the predicted sensitivity of the telescope described
here (PIXIE) at an altitude $>4000$\,m. The numbers are based
on a $5\,\sigma$ detection for the given exposure
on a single source. EGRET and GLAST sensitivities are for
1 month of all-sky survey. The ARGO sensitivity is
taken from \cite{argo}, all others from \cite{glast_proposal}.
Information required to extrapolate
the ARGO sensitivity to higher energies is not given
in \cite{argo}.}
\end{figure}
The cost for a detector based on the conceptual design
outlined is estimated conservatively at between
\$\,500 and \$\,1000 per pixel; the cost of scintillator
is the dominant factor. A proposal to perform a detailed
detector design study (evaluation of detector
materials, data acquisition prototyping, and investigation
of construction techniques) leading to a final
design is currently pending.
Although unlikely to surpass the sensitivity of
air-Cherenkov telescopes for detailed single source
observations, a sensitive all-sky VHE telescope could
continuously monitor the observable sky at VHE energies.
In summary, non-optical ground-based VHE astronomy {\em is}
viable, and the development of an all-sky VHE telescope with
sensitivity approaching that of the existing narrow field of
view air-Cherenkov telescopes will contribute to
the continuing evolution of VHE astronomy.
\begin{ack}
We thank the authors of CORSIKA for providing us with the
simulation code; we also acknowledge
D.G. Coyne, C.M. Hoffman, J.M. Ryan, and D.A. Williams
for their useful comments. This research is
supported in part by the U.S. Department of Energy Office
of High Energy Physics, the U.S. Department of Energy
Office of Nuclear Physics, the University of California (RSM),
and the National Science Foundation (SW).
\end{ack}
\section{Introduction}\label{intro}
The light element beryllium (Be) has very special origins. The
primordial Be abundance (on the order of $N\rmn{(Be/H)}=10^{-17}$,
\citealt{thomas1994}) is negligible as predicted by the standard Big
Bang Nucleosynthesis. Be cannot be produced by nuclear fusion in
stellar interiors; on the contrary, it is destroyed by
this process. It was first proposed by \citet*{reeves1970} that Be
can be created by spallation reactions between galactic cosmic rays
(GCRs) and the CNO nuclei in the interstellar medium (ISM). This
model predicts a quadratic relation between the abundances of Be
and O (i.e., a slope of 2 in the logarithmic plane), assuming that the CNO
abundance is proportional to the cumulative number of Type II
supernovae (SNe\,II) and the cosmic ray flux is proportional to the
instantaneous rate of SNe\,II. However, recent observational results
(e.g., \citealt{gilmore1992, boesgaard1993,molaro1997}) showed a
linear relation between Be and O abundances, which indicates that Be
may be produced in a primary process instead of the standard
secondary GCRs process. \citet{duncan1997} suggested that Be can be
produced in a reverse spallation of C and O nuclei onto protons and
$\alpha$-particles. This process will lead to a linear dependence of
Be on O abundances. The results from the latest big survey by
\citet[hereafter B99]{boesgaard1999} showed that the slope of the Be
vs. O trend in logarithmic scale is about 1.5, which makes the Be
production scenario more complicated and confusing. In addition, the
exact slope of the Be vs. O relationship depends on which oxygen
indicator is used (see discussion in Sect.~\ref{oxygen}).
If Be is produced in a primary process and the cosmic rays were
transported globally across the early Galaxy, Be abundances should
show a very small scatter at a given time. This makes the Be
abundance an ideal cosmic chronometer \citep{suzuki2001}.
\citet{pasquini2004,pasquini2007} found that Be abundances in
globular clusters NGC\,6397 and NGC\,6752 are very similar to that
of the field stars with the same Fe abundances. Furthermore, the
derived ages from Be abundances based on the model of
\citet{valle2002} are in excellent agreement with the ages
determined by main-sequence fitting. They suggested that Be is
produced in primary spallation of cosmic rays acting on a Galactic
scale, and therefore can be used as a cosmochronometer. However, B99
and \citet{boesgaard2006} found strong evidences for intrinsic
spread of Be abundances at a given metallicity, which may indicate
that there is also local enrichment of Be in the Galaxy. But
interestingly, \citet{pasquini2005} found that stars belonging
to the accretion and dissipative populations (see
\citealt{gratton2003} for the exact definitions for these two
kinematical classes) are neatly separated in the [O/Fe] vs.
$\log$(Be/H) diagram, and especially, the accretion component shows
a large scatter in the [O/Fe] vs. $\log$(Be/H) diagram. They
proposed that most of the scatter in the Be vs. Fe (O) trend may be
attributed to the intrinsic spread of Fe and O abundances (probably
due to the inhomogeneous enrichment in Fe and O of the protogalactic
gas), rather than Be.
In this work, we present Be abundances of 25 metal-poor stars, for
most of which the Be abundances are derived for the first time.
Oxygen abundances are also determined from both \mbox{[O\,{\sc i}]}
forbidden line and \mbox{O\,{\sc i}} triplet lines to investigate
the chemical evolution of Be with O in the Galaxy. In Sect.~\ref{od}
we briefly describe the observations and data reduction. The adopted
model atmosphere and stellar parameters are discussed in
Sect.~\ref{ms}. Sect.~\ref{au} deals with the abundance
determinations and uncertainties. Sect.~\ref{rd} presents the
results and discussions, and the conclusions are given in the last
section.
\section{Observations and data reduction}\label{od}
Our analysis is based on high-resolution, high
signal-to-noise ratio spectra of 25 metal-poor main-sequence and
subgiant stars from the archive database of observations obtained
with UVES, the Ultraviolet and Visual Echelle Spectrograph
\citep{dekker2000} at the ESO VLT 8\,m Kueyen telescope. The spectra
were obtained during two observation runs: April 8--12, 2000 and April
10--12, 2001 (Programme ID 65.L-0507 and 67.D-0439), both with standard
Dichroic {\#}1 setting in the blue and red arms. The blue arm spectra
ranged from 3050 to 3850\,{\AA} with a resolution of 48\,000, while
the red arm spectra ranged from 4800 to 6800\,{\AA} with a resolution
of 55\,000.
The spectra were reduced using the standard {\sc eso midas} package.
The reduction procedure included location of the echelle orders, wavelength
calibration, background subtraction, flat-field correction, order
extraction, and continuum normalization.
\section{Model atmospheres and stellar parameters}\label{ms}
\begin{table}
\centering
\caption{Stellar parameters adopted in the analysis.}
\label{parameter}
\begin{tabular}{lccccc}\hline
Star & $T_{\rmn{eff}}$ & $\log g$ & [Fe/H] & $\xi$ & Mass \\
& K & cgs & dex & km\,s$^{-1}$ & $\mathcal{M}_{\sun}$ \\
\hline
HD\,76932 & 5890 & 4.12 & $-0.89$ & 1.2 & 0.91 \\
HD\,97320 & 6030 & 4.22 & $-1.20$ & 1.3 & 0.81 \\
HD\,97916 & 6350 & 4.11 & $-0.88$ & 1.5 & 1.03 \\
HD\,103723 & 6005 & 4.23 & $-0.82$ & 1.3 & 0.87 \\
HD\,106038 & 5990 & 4.43 & $-1.30$ & 1.2 & 0.81 \\
HD\,111980 & 5850 & 3.94 & $-1.11$ & 1.2 & 0.83 \\
HD\,113679 & 5740 & 3.94 & $-0.70$ & 1.2 & 0.94 \\
HD\,121004 & 5720 & 4.40 & $-0.73$ & 1.1 & 0.80 \\
HD\,122196 & 5975 & 3.85 & $-1.74$ & 1.5 & 0.81 \\
HD\,126681 & 5595 & 4.53 & $-1.17$ & 0.7 & 0.71 \\
HD\,132475 & 5705 & 3.79 & $-1.50$ & 1.4 & 0.88 \\
HD\,140283 & 5725 & 3.68 & $-2.41$ & 1.5 & 0.79 \\
HD\,160617 & 5940 & 3.80 & $-1.78$ & 1.5 & 0.86 \\
HD\,166913 & 6050 & 4.13 & $-1.55$ & 1.3 & 0.77 \\
HD\,175179 & 5780 & 4.18 & $-0.74$ & 1.0 & 0.85 \\
HD\,188510 & 5480 & 4.42 & $-1.67$ & 0.8 & 0.55 \\
HD\,189558 & 5670 & 3.83 & $-1.15$ & 1.2 & 0.95 \\
HD\,195633 & 6000 & 3.86 & $-0.64$ & 1.4 & 1.11 \\
HD\,205650 & 5815 & 4.52 & $-1.13$ & 1.0 & 0.77 \\
HD\,298986 & 6085 & 4.26 & $-1.33$ & 1.3 & 0.81 \\
CD\,$-$30$\degr$18140 & 6195 & 4.15 & $-1.87$ & 1.5 & 0.79 \\
CD\,$-$57$\degr$1633 & 5915 & 4.23 & $-0.91$ & 1.2 & 0.84 \\
G\,013-009 & 6270 & 3.91 & $-2.28$ & 1.5 & 0.80 \\
G\,020-024 & 6190 & 3.90 & $-1.92$ & 1.5 & 0.83 \\
G\,183-011 & 6190 & 4.09 & $-2.08$ & 1.5 & 0.77 \\
\hline
\end{tabular}
\end{table}
We adopted the one dimensional line-blanketed local thermodynamic
equilibrium (LTE) atmospheric model MAFAGS \citep{fuhrmann1997} in
our analysis. This model utilizes the \citet{kurucz1992} ODFs but
rescales the iron abundance by $-0.16$\,dex to match the improved
solar iron abundance of $\log\varepsilon\rmn{(Fe)}=7.51$
\citep{anders1989}. Individual models for each star were computed
with $\alpha$-enhancement of 0.4\,dex if $\rmn{[Fe/H]}<-0.6$ and
with the mixing length parameter $l/H_{\rmn{p}}=0.5$ to get
consistent temperatures from the Balmer lines.
The effective temperatures were derived by fitting the wings of
H$\alpha$ and H$\beta$ lines, and then averaged.
\citet{nissen2002} studied oxygen abundances of a large sample
stars, which includes all the objects investigated in this work
(actually our sample is a subset of those employed by
\citealt{nissen2002}). They determined the effective temperatures
from the $b-y$ and $V-K$ colour indices based on the infrared flux
method (IRFM) calibrations of \citet*{alonso1996}. As shown by
Fig.~\ref{para_comp}a, the agreement between the two sets of
temperatures is good for most of the stars, with a mean difference
of $15\pm89$\,K. For the star G\,020-024, \citet{nissen2002} gave
a temperature of 6440\,K, which is 250\,K higher than ours.
Recently, \citet{asplund2006} derived a temperature of 6247\,K for
G\,020-024 based on H$\alpha$ profile fitting, which is very close
to ours. \citet{nissen2002} likely overestimated the reddening of
this star \citep{asplund2006}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig1.eps}
\caption{Comparison of effective temperature, surface gravity and
iron abundance with \citet{nissen2002}.}
\label{para_comp}
\end{figure*}
The surface gravities were determined from the fundamental relation
\begin{equation}
\log\frac{g}{g_{\sun}}=\log\frac{\mathcal{M}}{\mathcal{M}_{\sun}}
+4\log\frac{T_{\rmn{eff}}}{T_{\rmn{eff},\sun}}+0.4(M_{\rmn{bol}}-M_{\rmn{bol},\sun})
\end{equation}
and
\begin{equation}
M_{\rmn{bol}}=V+BC+5\log\pi+5
\end{equation}
where $\mathcal{M}$ is the stellar mass, $M_{\rmn{bol}}$ is the
absolute bolometric magnitude, $V$ is the visual magnitude, $BC$
is the bolometric correction, and $\pi$ is the parallax.
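These two relations translate directly into code. A minimal sketch, assuming the usual solar reference values $T_{\rmn{eff},\sun}=5777$\,K, $M_{\rmn{bol},\sun}=4.74$ and $\log g_{\sun}=4.44$ (which are not quoted in the text):

```python
import math

# Solar reference values (assumed here; not stated in the text).
TEFF_SUN = 5777.0   # K
MBOL_SUN = 4.74     # mag
LOGG_SUN = 4.44     # cgs

def log_g(mass_msun, teff_k, v_mag, bc_mag, parallax_arcsec):
    """Surface gravity from Equations (1) and (2)."""
    # Equation (2): absolute bolometric magnitude from V, BC and parallax.
    m_bol = v_mag + bc_mag + 5.0 * math.log10(parallax_arcsec) + 5.0
    # Equation (1): fundamental relation for log g.
    return (LOGG_SUN
            + math.log10(mass_msun)
            + 4.0 * math.log10(teff_k / TEFF_SUN)
            + 0.4 * (m_bol - MBOL_SUN))
```

As a consistency check, a solar twin at 10\,pc (parallax 0.1\,arcsec, $V+BC=4.74$) recovers $\log g = 4.44$; the 0.4 factor likewise implies that a 0.61\,mag change in $M_{\rmn{V}}$ shifts $\log g$ by about 0.24\,dex, as discussed for G\,020-024 below.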
The absolute visual magnitudes were derived directly from the
\emph{Hipparcos} parallaxes \citep{esa1997} when available with a
relative error smaller than 30\,per cent. For two stars
G\,020-024 and G\,183-011, the parallax uncertainties are
larger than 40\,per cent, so only photometric absolute visual
magnitudes could be adopted. For G\,183-011 we adopted
$M_{\rmn{V,phot}}=4.08$ from \citet{nissen2002}, while for
G\,020-024 a large difference exists between the result of
\citet{nissen2002} ($M_{\rmn{V,phot}}=4.33$) and that of
\citet{asplund2006} ($M_{\rmn{V,phot}}=3.72$). We adopted the
latter value because \citet{asplund2006} used the spectroscopic
H$\alpha$ index which provides more precise estimate of
interstellar reddening excess than the photometric H$\beta$ index
employed by \citet{nissen2002}. The bolometric correction was
taken from \citet*{alonso1995} and the stellar mass was estimated
by comparing its position in the
$\log(L/L_{\sun})$-$\log T_{\rmn{eff}}$ diagram with the
evolutionary tracks of \citet*{yi2003}. The final $\log g$ values
are given in Table~\ref{parameter}. Our results are
$0.03\pm0.11$\,dex lower than those of \citet{nissen2002} on
average. Good agreement holds for the majority of the stars except
G\,020-024, for which a difference of 0.43\,dex exists. This
is mainly due to the very different absolute visual magnitude
adopted by \citet{nissen2002} and us as discussed above. The
difference in $M_{\rmn{V}}$ (0.61\,mag) alone will introduce a
difference of 0.24\,dex to the surface gravity according to
Equation (1).
\begin{table}
\centering
\caption{Fe {\sc ii} lines used to determine the iron abundances.}
\label{feii}
\begin{tabular}{cccc}\hline
Wavelength & $E_{\scriptsize\textrm{low}}$ & $\log gf$ & $\log C_6$ \\
\AA & eV & & \\
\hline
4993.350 & 2.79 & $-3.73$ & $-32.18$ \\
5100.664 & 2.79 & $-4.18$ & $-31.78$ \\
5197.575 & 3.22 & $-2.27$ & $-31.89$ \\
5234.631 & 3.21 & $-2.21$ & $-31.89$ \\
5325.560 & 3.21 & $-3.21$ & $-32.19$ \\
5425.257 & 3.19 & $-3.27$ & $-32.19$ \\
6084.110 & 3.19 & $-3.84$ & $-32.19$ \\
6149.250 & 3.87 & $-2.76$ & $-32.18$ \\
6247.560 & 3.87 & $-2.33$ & $-32.18$ \\
6416.928 & 3.87 & $-2.67$ & $-32.18$ \\
6432.680 & 2.88 & $-3.61$ & $-32.11$ \\
6456.383 & 3.89 & $-2.09$ & $-32.18$ \\
\hline
\end{tabular}
\end{table}
Iron abundances were determined from 12 unblended \mbox{Fe\,{\sc
ii}} lines with spectra synthesis method. We adopted the differential
$\log gf$ values with respect to
$\log\varepsilon\rmn{(Fe)}_{\sun}=7.51$ from \citet*{korn2003} and
the van der Waals damping constants from \citet{anstee1991,anstee1995}.
Our final iron abundances are in excellent agreement with those of
\citet{nissen2002}, who also derived the iron abundances from
\mbox{Fe\,{\sc ii} lines}. The mean difference is only
$-0.02\pm0.05$\,dex. The microturbulent velocities were determined by
requiring that the derived [Fe/H] values be independent of the equivalent widths.
The typical error for our effective temperature is about $\pm80$\,K.
The uncertainty of parallax contributes most to the error of the
surface gravity. The typical relative error of $\pm$15\,per cent in
parallax corresponds to an error of $\pm$0.13\,dex. In addition,
the estimated error of $\pm$0.05\,$\mathcal{M}_{\sun}$ in stellar
mass translates to an error of $\pm$0.02\,dex, while errors of
$\pm$80\,K in effective temperature and $\pm$0.05\,mag in $BC$ each
leads to an uncertainty of $\pm$0.02\,dex. So the total error of
$\log g$ is about $\pm0.15$\,dex. It is already noted that the iron
abundance is insensitive to the effective temperature. The
uncertainty of [Fe/H] is dominated by the error of surface gravity.
A typical error of $\pm$0.15\,dex in $\log g$ results in an error
of about $\pm$0.07\,dex in [Fe/H]. Combined with the line-to-line
scatter of about $\pm$0.03\,dex, the total error of [Fe/H] is about
$\pm$0.08\,dex. And the error for the microturbulent velocity is
estimated to be about $\pm0.2$\,km\,s$^{-1}$.
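The error budget above combines several independent terms. A short sketch of the combination, using the numbers quoted in the text and assuming the terms are independent and add in quadrature (the text does not state the combination rule explicitly):

```python
import math

def quad_sum(*errors):
    """Combine independent error terms in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# log g: parallax, stellar mass, T_eff and BC contributions (dex).
# Quadrature gives ~0.13; the quoted total of about 0.15 allows for rounding.
err_logg = quad_sum(0.13, 0.02, 0.02, 0.02)

# [Fe/H]: gravity-induced term plus line-to-line scatter (dex).
err_feh = quad_sum(0.07, 0.03)   # ~0.08, matching the quoted total
```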
\section{Abundances and uncertainties}\label{au}
\subsection{Oxygen}\label{oxygen}
As Be is mainly produced by the spallation of
CNO nuclei, oxygen abundances can be a preferred alternative to iron
in investigating the galactic evolution of Be. It is also important
to know the O content when determining Be abundances, because there
are many OH lines around the \mbox{Be\,{\sc ii}} doublet. Therefore,
the oxygen abundances were investigated first.
There are several indicators for oxygen abundance: the ultraviolet
OH, the \mbox{[O\,{\sc i}]} 6300 and 6363\,{\AA}, the infrared
\mbox{O\,{\sc i}} 7774\,{\AA} triplet and the infrared
vibration-rotation OH lines. However, OH molecules and \mbox{O\,{\sc
i}} atoms in the lower state of the 7774\,{\AA} transitions are
minority species compared to the total number of oxygen atoms, and are thus
very sensitive to the adopted stellar parameters, such as the
effective temperature and surface gravity. Moreover, line formations
are far from LTE for either the ultraviolet OH
\citep{hinkle1975,asplund2001} or the \mbox{O\,{\sc i}} 7774\,{\AA}
triplet lines \citep{kiselman2001}. In contrast, \mbox{[O\,{\sc i}]}
lines are formed very close to LTE and nearly all the oxygen atoms
in the photosphere of dwarf and giant stars are in the ground
configuration, which provides the lower and upper levels of the
forbidden lines \citep{nissen2002}. Therefore, it is believed that
the \mbox{[O\,{\sc i}]} line is the most reliable indicator for
oxygen abundances, but the difficulty is that the \mbox{[O\,{\sc i}]}
lines are very weak in dwarf and subgiant stars.
High resolution and high signal-to-noise ratio spectra covering the
infrared \mbox{O\,{\sc i}} triplet for 10 stars were available from
the archived VLT/UVES spectra database. We re-reduced the spectra and
measured the equivalent widths. Since for six of these ten stars
O abundances from the \mbox{O\,{\sc i}} triplet were previously
derived by other authors, we compare our measurements with those
found in literature in Fig.~\ref{ew_comp}. It can be seen that the
agreement is good on the whole. For the remaining 15 stars, we collected
their \mbox{O\,{\sc i}} 7774\,{\AA} triplet equivalent widths
directly from the literature. Oxygen abundances were computed with the
$\log gf=0.369$, 0.223, and 0.002 from \citet*{wiese1996} in LTE
first. Then non-LTE corrections were applied according to the
results of \cite{takeda2003}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig2.eps}
\caption{Comparison of \mbox{O\,{\sc i}} triplet equivalent widths between
this work and the literature (\emph{circles}: \citealt{jonsell2005};
\emph{diamonds}: \citealt{nissen1997}; \emph{squares}: \citealt{boesgaard1993}).}
\label{ew_comp}
\end{figure}
In addition, \citet{nissen2002} measured the equivalent widths of
\mbox{[O\,{\sc i}]} 6300\,{\AA} line for 18 main-sequence and
subgiant stars, of which 15 stars are included in our sample. They
performed the measurement in a very careful fashion, including
removing the possible blending telluric O$_2$ and H$_2$O lines with
the observed spectra of rapidly rotating B-type stars and
subtracting the equivalent width of the blending \mbox{Ni\,{\sc i}}
line at 6300.339\,{\AA}. The typical error for the equivalent width
of \mbox{[O\,{\sc i}]} 6300\,{\AA} line was estimated to be about
only $\pm0.3$\,m{\AA}. For these 15 stars, we also determined their
oxygen abundances with the equivalent widths of \mbox{[O\,{\sc i}]}
6300\,{\AA} line from \citet{nissen2002} using the accurate
oscillator strength $\log gf=-9.72$ from \citet*{allendeprieto2001}.
Finally, we derived the oxygen abundances with \mbox{O\,{\sc i}}
7774\,{\AA} triplet for all the 25 sample stars and with
\mbox{[O\,{\sc i}]} 6300\,{\AA} line for 15 stars, which are given
in Table~\ref{abun} (the reference solar O abundance is
$\log\varepsilon\rmn{(O)}=8.77$ computed from \mbox{[O\,{\sc i}]}
6300\,{\AA} line using the equivalent width of 4.1\,m{\AA} from
\citealt{nissen2002}).
Oxygen abundance from the weak \mbox{[O\,{\sc i}]} 6300\,{\AA} line
is not sensitive to stellar parameters. Its uncertainty is
dominated by the error in equivalent width. Normally, a typical
error of $\pm$0.3\,m{\AA} in equivalent width corresponds on average
to an error of $\pm$0.1\,dex in oxygen abundance. For the infrared
\mbox{O\,{\sc i}} triplet, errors of $\pm$80\,K in effective
temperature and $\pm$0.15\,dex in gravity each translates to an
error of $\pm$0.05\,dex in oxygen abundance. The uncertainties in
iron abundance and microturbulence have almost no effect on [O/Fe].
A typical error of $\pm$3\,m{\AA} in equivalent width corresponds to
an error of $\pm$0.04\,dex. In total, the error of [O/Fe] from
\mbox{O\,{\sc i}} triplet is around $\pm$0.08\,dex. The uncertainty
stated above is a random one, and the systematic error can be much
higher, as can be seen from the differences of O abundances between
\mbox{[O\,{\sc i}]} forbidden line and \mbox{O\,{\sc i}} triplet
lines discussed below.
As there are 15 stars with oxygen abundances from both
\mbox{[O\,{\sc i}]} 6300\,{\AA} and \mbox{O\,{\sc i}} 7774\,{\AA}
lines, it is interesting to investigate whether these two
indicators give consistent oxygen abundances.
Fig.~\ref{6300_7774_comp} shows the differences of O abundances based
on \mbox{[O\,{\sc i}]} 6300\,{\AA} and \mbox{O\,{\sc i}} 7774\,{\AA}
lines. We find that, on average, O abundances from \mbox{O\,{\sc i}}
7774\,{\AA} lines with NLTE corrections are $0.14\pm0.10$\,dex higher
than those from \mbox{[O\,{\sc i}]} 6300\,{\AA} line for the 15 sample
stars, which means that these two indicators are not consistent in our
study. \citet{nissen2002} found that the mean difference between O
abundance from the forbidden and permitted lines is only 0.03\,dex
for five of their stars, and they concluded that these two
indicators produce consistent O abundances. As a test, we reanalysed
the O abundances of the \citet{nissen2002} sample using their stellar
parameters and equivalent widths. The only differences are the
adopted model atmospheres and the NLTE corrections for the
\mbox{O\,{\sc i}} triplet lines. We found that, for the forbidden
line, our abundances are almost the same as those from
\citet{nissen2002}, but for the triplet lines, our mean LTE O
abundance is about 0.06\,dex larger than that of \citet{nissen2002},
and the NLTE correction for the triplet lines from \citet{takeda2003}
is 0.06\,dex lower than theirs. These two factors lead to a total
difference of 0.12\,dex for the permitted lines. Therefore, the
different model atmospheres (MAFAGS vs. MARCS) and NLTE corrections
are responsible for the differences. Recently,
\citet{garciaperez2006} determined O abundances for 13 subgiant stars
with \mbox{[O\,{\sc i}]}, \mbox{O\,{\sc i}} and OH lines. They
followed exactly the same method as \citet{nissen2002}, but their
results showed that O abundances based on \mbox{O\,{\sc i}} triplet
are on average $0.19\pm0.22$\,dex higher than that from the forbidden
line.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig3.eps}
\caption{\label{6300_7774_comp}Comparison of O abundances from
\mbox{[O\,{\sc i}]} 6300\,{\AA} line with those from \mbox{O\,{\sc
i}} 7774\,{\AA} lines.}
\end{figure}
\subsection{Beryllium}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig4.eps}
\caption{Spectral synthesis of the \mbox{Be\,{\sc ii}} doublet region for the KPNO Solar Flux Atlas data.}
\label{be_sun}
\end{figure}
Beryllium abundances were derived by spectra synthesis of the
\mbox{Be\,{\sc ii}} 3130\,{\AA} resonance doublet region using the
{\sc idl/fortran siu} software package of Reetz (1993). It is well
known that this spectral region is rich with atomic and molecular
lines for solar-type stars, which results in substantial line
absorption and a deficit of continuum. We firstly computed the
synthetic solar spectrum around \mbox{Be\,{\sc ii}} doublet region
based on the line list carefully compiled and tested by
\citet{primas1997}, and then compared them with the integrated solar
flux atlas of \citet{kurucz1984}. Some changes were made in order to
make the theoretical solar spectrum match the \citet{kurucz1984}
solar flux atlas best. The major change made to the
\citet{primas1997} line list is that we increased the $\log gf$ of
the \mbox{Mn\,{\sc ii}} 3131.017\,{\AA} line by 1.72\,dex instead of
adding a predicted \mbox{Fe\,{\sc i}} line at 3131.043\,{\AA}.
Similar adjustment was adopted by \citet*{king1997}. Based on this
adjusted line list, we reproduced the solar flux atlas best with
$\rmn{A(Be)}=1.12$, which is in good agreement with the result of
$\rmn{A(Be)}=1.15\pm0.20$ derived by \citet*{chmielewski1975}.
\citet{balachandran1998} found that the standard UV
continuous opacity of the Sun needs to be multiplied by a factor of
1.6 in order to get consistent oxygen abundances from the UV and IR
OH lines. With the increased UV continuous opacity, they determined
the solar Be abundance to be 1.40, which is very close to the
meteoritic value 1.42. \citet{bell2001} proposed later that the
`missing' UV opacity could be accounted for by the \mbox{Fe\,{\sc
i}} bound-free transitions. However, there is as yet no confirmed
evidence for this, and one should keep the `missing' UV opacity
problem in mind. If it does exist, the Be vs. Fe (O) trend of this
work and all the previous work based on the standard UV opacity might
change.
In addition to some strong OH lines, a strong \mbox{Ti\,{\sc ii}}
line at 3130.810\,{\AA} is also present in the \mbox{Be\,{\sc ii}}
region. In order to minimize its effect on the beryllium abundance
as well as to provide a constraint on the location of continuum, we
derived the Ti abundances for our sample stars from the
\mbox{Ti\,{\sc i}} 5866.461, 6258.110, and 6261.106\,{\AA} lines.
The oscillator strengths for these lines were differentially adjusted
to produce the solar Ti abundance $\log\varepsilon\rmn{(Ti)}=4.94$.
For five very metal-poor stars with $\rmn{[Fe/H]}<-1.8$, the
\mbox{Ti\,{\sc i}} lines are too weak to be detected, thus a common
value of $\rmn{[Ti/Fe]}=0.35$ \citep{magain1989} was adopted. As a
matter of fact, due to the metal deficiency of these stars, the line
blending and continuum normalization were much less problematic than
for solar-type stars. The abundances of other elements, which are less
critical for the Be abundance determination, were adopted by scaling
the solar composition. Beryllium abundances were
then determined by varying the value of Be to best fit the observed
line profiles. It was reported by \citet*{garcialopez1995} and
\citet{kiselman1996} that non-LTE effects for \mbox{Be\,{\sc ii}}
doublet are insignificant, normally smaller than 0.1\,dex. We took
the weaker 3131.066\,{\AA} line as the primary abundance indicator
because it is less blended than the stronger 3130.421\,{\AA}
component.
The uncertainties of Be abundances were estimated from the errors of
stellar parameters and pseudo-continuum location. An error of
$\pm$0.15\,dex in surface gravity implies uncertainties of
$\pm$(0.06--0.09)\,dex, while an uncertainty of $\pm$0.08\,dex in
[Fe/H] corresponds to an error of $\pm$0.06\,dex. Errors due to
effective temperature and microturbulence are always within
$\pm$0.04\,dex in total. The error in continuum location was
estimated to be less than five per cent in the worst case, which
results in an error of $\pm$0.15\,dex at most. The final errors for
each star are given in Table~\ref{abun}.
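Assuming the individual error contributions above are independent (an assumption made explicit here, since the combination rule is not stated in the text), they add in quadrature; for the worst case,
\[
\sigma_{\rmn{A(Be)}}\simeq\sqrt{0.09^2+0.06^2+0.04^2+0.15^2}\approx 0.19\,\rmn{dex},
\]
which matches the largest value of $\sigma$(Be) listed in Table~\ref{abun}.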
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig5.eps}
\caption{\label{be_sample} Spectrum synthesis for six representative
stars. The dots are the observational data, the solid line is the
best-fit synthesis, and the dotted lines are synthetic spectra with
Be abundances of $\pm0.2$\,dex relative to the best fit.}
\end{figure*}
\begin{table*}
\begin{minipage}{150mm}
\caption{Abundances and population membership.}
\label{abun}
\begin{tabular}{lcccccrrcc}\hline
Star & [Fe/H] & \multicolumn{3}{c}{[O/H]} & [Ti/Fe] & A(Li) & A(Be) & $\sigma$(Be) & Pop.$^b$ \\
& & 6300 & 7774 LTE$^a$ & 7774 $n$-LTE & & & & & \\
\hline
HD\,76932 & $-0.89$ & $-0.46$ & $-0.24^{(5)}$ & $-0.36$ & 0.24 & 2.00 & $ 0.73 $ & 0.14 & 0 \\
HD\,97320 & $-1.20$ & $-0.85$ & $-0.56^{(6)}$ & $-0.66$ & 0.21 & 2.32 & $ 0.43 $ & 0.17 & 0 \\
HD\,97916 & $-0.88$ & $\cdots$ & $-0.06^{(1)}$ & $-0.27$ & 0.23 & $<1.23$ & $ <-0.76$ & $\cdots$ & 1 \\
HD\,103723 & $-0.82$ & $-0.68$ & $-0.33^{(6)}$ & $-0.44$ & 0.15 & 2.22 & $ 0.51 $ & 0.17 & 1 \\
HD\,106038 & $-1.30$ & $\cdots$ & $-0.62^{(3)}$ & $-0.70$ & 0.21 & 2.55 & $ 1.37 $ & 0.12 & 1 \\
HD\,111980 & $-1.11$ & $-0.76$ & $-0.38^{(6)}$ & $-0.50$ & 0.29 & 2.31 & $ 0.67 $ & 0.13 & 1 \\
HD\,113679 & $-0.70$ & $-0.43$ & $-0.15^{(6)}$ & $-0.28$ & 0.32 & 2.05 & $ 0.87 $ & 0.12 & 1 \\
HD\,121004 & $-0.73$ & $-0.42$ & $-0.19^{(2)}$ & $-0.27$ & 0.28 & $<1.18$ & $ 0.94 $ & 0.15 & 1 \\
HD\,122196 & $-1.74$ & $\cdots$ & $-1.07^{(6)}$ & $-1.17$ & 0.28 & 2.28 & $-0.51 $ & 0.14 & 0 \\
HD\,126681 & $-1.17$ & $-0.72$ & $-0.65^{(4)}$ & $-0.70$ & 0.30 & 1.48 & $ 0.90 $ & 0.12 & 0 \\
HD\,132475 & $-1.50$ & $-1.09$ & $-0.82^{(4)}$ & $-0.91$ & 0.27 & 2.23 & $ 0.62 $ & 0.13 & 1 \\
HD\,140283 & $-2.41$ & $-1.61$ & $-1.65^{(3)}$ & $-1.73$ & $\cdots$ & 2.16 & $-0.94 $ & 0.14 & 1 \\
HD\,160617 & $-1.78$ & $-1.34$ & $-1.24^{(3)}$ & $-1.33$ & 0.23 & 2.25 & $-0.41 $ & 0.12 & 1 \\
HD\,166913 & $-1.55$ & $-1.16$ & $-0.81^{(4)}$ & $-0.90$ & 0.30 & 2.32 & $ 0.27 $ & 0.14 & 0 \\
HD\,175179 & $-0.74$ & $\cdots$ & $-0.13^{(6)}$ & $-0.24$ & 0.32 & $<0.87$ & $ 0.73 $ & 0.15 & 0 \\
HD\,188510 & $-1.67$ & $\cdots$ & $-0.98^{(5)}$ & $-1.02$ & 0.31 & 1.48 & $-0.25 $ & 0.13 & 0 \\
HD\,189558 & $-1.15$ & $-0.73$ & $-0.54^{(1)}$ & $-0.64$ & 0.25 & 2.24 & $ 0.64 $ & 0.14 & 0 \\
HD\,195633 & $-0.64$ & $-0.55$ & $-0.16^{(6)}$ & $-0.34$ & 0.06 & 2.25 & $ 0.53 $ & 0.18 & 2 \\
HD\,205650 & $-1.13$ & $-0.69$ & $-0.48^{(4)}$ & $-0.54$ & 0.21 & 1.70 & $ 0.51 $ & 0.19 & 0 \\
HD\,298986 & $-1.33$ & $-0.93$ & $-0.70^{(6)}$ & $-0.79$ & 0.15 & 2.26 & $-0.04 $ & 0.12 & 1 \\
CD\,$-$30$\degr$18140 & $-1.87$ & $\cdots$ & $-1.09^{(3)}$ & $-1.18$ & $\cdots$ & 2.21 & $-0.35 $ & 0.15 & 1 \\
CD\,$-$57$\degr$1633 & $-0.91$ & $\cdots$ & $-0.51^{(6)}$ & $-0.60$ & 0.01 & 2.15 & $ 0.31 $ & 0.18 & 1 \\
G\,013-009 & $-2.28$ & $\cdots$ & $-1.54^{(3)}$ & $-1.65$ & $\cdots$ & 2.21 & $-0.84 $ & 0.13 & 1 \\
G\,020-024 & $-1.92$ & $\cdots$ & $-1.19^{(3)}$ & $-1.30$ & $\cdots$ & 2.19 & $-0.72 $ & 0.17 & 1 \\
G\,183-011 & $-2.08$ & $\cdots$ & $-1.27^{(6)}$ & $-1.36$ & $\cdots$ & 2.21 & $-0.61 $ & 0.14 & 1 \\
\hline
\end{tabular}
$^a$~Sources of equivalent width for O {\sc i} triplet: (1) \citet{cavallo1997}, (2) \citet{nissen1997}, (3) \citet{nissen2002},
(4) \citet{gratton2003}, (5) \citet{jonsell2005}, (6) measured from archival UVES spectra (Programme ID 65.L-0507).
$^b$~Population membership: 0 -- dissipative component; 1 -- accretion component; 2 -- thin disc.
\end{minipage}
\end{table*}
Be abundances for several stars of our sample have been published
by other authors and they are summarized in Table~\ref{be_compare}.
We can see that, though relatively large scatter
exists, the Be abundances from different studies agree within the
uncertainties for the majority of the stars. The exceptions are
HD\,160617 and HD\,189558. Our Be abundance of HD\,160617 is
0.5\,dex higher than that of \citet{molaro1997}. This difference
is mostly due to the different stellar parameters adopted by
\citet{molaro1997} and us. Their effective temperature and surface
gravity are 276\,K and 0.51\,dex lower than ours, respectively,
which result in a much lower Be abundance. Another exception is
HD\,189558, where a difference of 0.37\,dex in Be abundance exists
between the result of \citet{boesgaard1993} and ours. The slight
differences in the adopted stellar parameters could not produce
such a large difference. \citet{rebolo1988} derived a Be
abundance of $\log N\rmn{(Be/H)}=-12.0\pm0.4$\,dex for this star
with similar stellar parameters. It is 1\,dex lower than the result
of \citet{boesgaard1993}. We noted that \citet{boesgaard1993}
determined Be abundances by measuring the equivalent width of
\mbox{Be\,{\sc ii}} doublet, which is very sensitive to the location
of the continuum. It is probable that they overestimated the
continuum. Moreover, both the spectra of \citet{molaro1997}
and \citet{boesgaard1993} were obtained with 3.6\,m telescopes. The
signal-to-noise ratios around the \mbox{Be\,{\sc ii}} region of their
spectra are much lower than in this work.
\begin{table*}
\begin{minipage}{151mm}
\caption{Comparison of Be abundances with literature values.}
\label{be_compare}
\begin{tabular}{lcccccc}\hline
Star & (1) & (2) & (3) & (4) & (5) & This work \\
\hline
HD\,76932 & $-11.04\pm0.11$ & $-11.45\pm0.18$ & $-11.21\pm0.21$ & $-11.17\pm0.05$ & $\cdots$ & $-11.27\pm0.14$ \\
HD\,132475 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $-11.43\pm0.12$ & $-11.38\pm0.13$ \\
HD\,140283 & $-12.78\pm0.14$ & $-13.07\pm0.20$ & $-12.91\pm0.17$ & $-13.08\pm0.09$ & $\cdots$ & $-12.94\pm0.14$ \\
HD\,160617 & $\cdots$ & $\cdots$ & $-12.90\pm0.27$ & $\cdots$ & $\cdots$ & $-12.41\pm0.12$ \\
HD\,166913 & $\cdots$ & $\cdots$ & $-11.77\pm0.15$ & $\cdots$ & $\cdots$ & $-11.73\pm0.14$ \\
HD\,189558 & $-10.99\pm0.15$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $-11.36\pm0.14$ \\
HD\,195633 & $-11.21\pm0.07$ & $\cdots$ & $\cdots$ & $\cdots$ & $-11.34\pm0.11$ & $-11.47\pm0.18$ \\
\hline
\end{tabular}
References: (1) \citet{boesgaard1993}, (2) \citet{garcialopez1995}, (3) \citet{molaro1997}, (4) B99, (5) \citet{boesgaard2006}.
\end{minipage}
\end{table*}
\subsection{Lithium}
It is well known that beryllium can be destroyed in stars by fusion
reactions at a relatively low temperature (about 3.5 million K). In
order to avoid contamination of our sample by depletion
processes in stars, we also determined the $^7$Li abundances
for our sample stars. Because the destruction temperature for $^7$Li
is lower than that of Be, if $^7$Li is not depleted, Be should not
be depleted either\footnote{For subgiant stars, Li and Be
can be diluted due to the enlargement of the convection zone. In
this case, Li and Be will be diluted by the same percentage.
Nevertheless, Be cannot be depleted more than Li in any case.}.
We adopted the oscillator strengths from the NIST data base, namely
$\log gf=0.002$ and $-0.299$ for the \mbox{Li\,{\sc i}} 6707.76 and
6707.91\,{\AA} lines, respectively. Collisional broadening
parameters describing van der Waals interaction with hydrogen atoms
were taken from \citet*{barklem1998} (see \citealt{shi2007} for
details). Li abundances were derived by spectral synthesis in LTE.
Results from \citet{asplund2006} and \citet{shi2007} showed that
non-LTE effects for \mbox{Li\,{\sc i}} resonance lines are
insignificant. The typical error for our Li abundance was estimated
to be about 0.1\,dex.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig6.eps}
\caption{\label{li_fe}Li abundances as a function of [Fe/H]. The
filled circles are the determined Li abundances, while the inverse
triangles represent upper limits.}
\end{figure}
\section{Results and discussion}\label{rd}
\subsection{Be vs. Fe and Be vs. O relation}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig7.eps}
\caption{\label{be_fe}Be abundances against [Fe/H] (\emph{filled circles}:
stars without $^{7}$Li depletion; \emph{filled squares}: stars with depleted
$^{7}$Li; \emph{filled inverse triangle}: upper limit Be abundance for
HD\,97916; \emph{filled star}: solar meteoritic Be abundance; \emph{open
circles}: data from B99). The solid line is the best
linear fit (not including HD\,106038, HD\,97916, HD\,132475 and
HD\,126681), with a slope of 1.1, for our Be vs. Fe trend, while the
dotted line represents the best fit to the B99 data.}
\end{figure}
Encouraged by the agreement between our Be abundances and those in
the literature, we now turn to investigate the chemical evolution of
beryllium in the Galaxy. As we have mentioned before, it is first
necessary to investigate whether some of our stars are depleted in Be.
Fig.~\ref{li_fe} displays the $^7$Li abundances as a function of
metallicity. We can see that six stars in our sample are obviously
depleted in $^7$Li and another two seem to be slightly depleted,
while the star HD\,106038 has an exceptionally high $^7$Li abundance,
about 0.3\,dex higher than the Spite plateau (it also has an
abnormally high Be abundance, see discussion in Sect.~\ref{spec} for
this star). Among those stars with depleted $^7$Li, only one
(HD\,97916, denoted by filled inverse triangle in Fig.~\ref{be_fe}
and Fig.~\ref{be_o}) is also seriously depleted in Be, while the rest
(denoted by filled squares in Fig.~\ref{be_fe} and Fig.~\ref{be_o})
seem to have normal Be abundances at their metallicities.
Excluding HD\,106038 and HD\,97916 (abnormally high Be abundance and
seriously depleted in Be, respectively; these two stars will not be
included in the analysis of the Be vs. O trend either), the relation
between Be and Fe abundances can be well represented by the linear fit
\[
\log N\rmn{(Be/H)}=(1.15\pm0.07)\,\rmn{[Fe/H]}-(10.24\pm0.10)
\]
One may note that HD\,132475 and HD\,126681 seem to deviate from the
trend far beyond the uncertainties. If we exclude these two stars
(see discussion in Sect.~\ref{spec}), the relationship will be
\[
\log N\rmn{(Be/H)}=(1.10\pm0.07)\,\rmn{[Fe/H]}-(10.37\pm0.10)
\]
Our result is in reasonable agreement with the result
$\log N\rmn{(Be/H)}=(0.96\pm0.04)\,\rmn{[Fe/H]}-(10.59\pm0.03)$
of B99, considering the relatively smaller
metallicity range of our sample. The overall increase of Be with Fe
suggests that Be is enriched globally in the Galaxy.
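As a rough reproducibility check (not part of the original analysis), the fit above can be recomputed from the [Fe/H] and $\rmn{A(Be)}$ values of Table~\ref{abun}, with $\log N\rmn{(Be/H)}=\rmn{A(Be)}-12$ and the four stars named above excluded. The sketch below uses a plain unweighted least-squares fit; the published coefficients may include weighting by $\sigma$(Be), so agreement is expected only within the quoted uncertainties.

```python
# [Fe/H] and A(Be) pairs for the 21 stars retained in the fit (Table 1),
# i.e. excluding HD 97916, HD 106038, HD 132475 and HD 126681.
data = [
    (-0.89, 0.73), (-1.20, 0.43), (-0.82, 0.51), (-1.11, 0.67),
    (-0.70, 0.87), (-0.73, 0.94), (-1.74, -0.51), (-2.41, -0.94),
    (-1.78, -0.41), (-1.55, 0.27), (-0.74, 0.73), (-1.67, -0.25),
    (-1.15, 0.64), (-0.64, 0.53), (-1.13, 0.51), (-1.33, -0.04),
    (-1.87, -0.35), (-0.91, 0.31), (-2.28, -0.84), (-1.92, -0.72),
    (-2.08, -0.61),
]

# Unweighted least-squares slope and intercept.
n = len(data)
xm = sum(x for x, _ in data) / n
ym = sum(y for _, y in data) / n
sxx = sum((x - xm) ** 2 for x, _ in data)
sxy = sum((x - xm) * (y - ym) for x, y in data)

slope = sxy / sxx
# Convert the intercept from A(Be) to log N(Be/H) = A(Be) - 12.
intercept = ym - slope * xm - 12.0

print(f"log N(Be/H) = {slope:.2f} [Fe/H] + ({intercept:.2f})")
```

An unweighted fit of this kind yields a slope and intercept consistent, within the quoted uncertainties, with the relation $(1.10\pm0.07)\,\rmn{[Fe/H]}-(10.37\pm0.10)$ given above.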
As the yield of Be is believed to be directly correlated with the
CNO nuclei, it is more meaningful to investigate the relationship
between Be and O abundances. Due to the inconsistent oxygen
abundances based on the forbidden and triplet lines in our
study, it is necessary to investigate their relations with Be
abundance separately. Fig.~\ref{be_o} shows the trend of Be with
O abundances. Again, the Be abundances increase linearly with
increasing [O/H] both for the forbidden line (though with relatively
large scatter partly due to the small sample number) and triplet
lines. The relationships are best represented by
\begin{eqnarray*}
\log N\rmn{(Be/H)}=(1.55\pm0.17)\,\rmn{[O/H]}-(10.29\pm0.15) & [\rmn{O}\,\rmn{\scriptstyle{I}}] \\
\log N\rmn{(Be/H)}=(1.36\pm0.09)\,\rmn{[O/H]}-(10.69\pm0.08) & \rmn{O}\,\rmn{\scriptstyle{I}}
\end{eqnarray*}
If we exclude HD\,132475 and HD\,126681, the relationships will be
\begin{eqnarray*}
\log N\rmn{(Be/H)}=(1.49\pm0.16)\,\rmn{[O/H]}-(10.42\pm0.15) & [\rmn{O}\,\rmn{\scriptstyle{I}}] \\
\log N\rmn{(Be/H)}=(1.30\pm0.08)\,\rmn{[O/H]}-(10.81\pm0.08) & \rmn{O}\,\rmn{\scriptstyle{I}}
\end{eqnarray*}
Our result based on \mbox{O\,{\sc i}} triplet lines is slightly
flatter than the result
$\log N\rmn{(Be/H)}=(1.45\pm0.04)\,\rmn{[O/H]}-(10.69\pm0.04)$
of B99. This can be partly due to our smaller
[Fe/H] range as mentioned above. Besides, the O abundances of
B99 were averaged from the UV OH and infrared
\mbox{O\,{\sc i}} triplet lines. They argued that such a result
(a slope of roughly 1.5 for Be vs. O) is consistent with neither
the secondary process nor the primary process. However, the
secondary process combined with some chemical evolution effects, such
as an outflow of mass from the halo, indicates that there would be
a quadratic relation only at the very lowest metallicities and a
progressive shallowing of the slope to disc metallicities, for
example a slope of 1.5 between $\rmn{[Fe/H]}=-2$ and $-1$ (B99).
So they suggested that this process is most consistent with their
results.
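Schematically (a standard argument, made explicit here), if Be is produced by accelerated CNO nuclei impinging on ambient protons (`primary'), its yield traces the oxygen abundance directly, whereas spallation of ambient CNO by cosmic-ray protons (`secondary') scales with both the cosmic-ray flux and the ambient metallicity:
\[
\rmn{primary:}\quad N(\rmn{Be})\propto N(\rmn{O}), \qquad
\rmn{secondary:}\quad N(\rmn{Be})\propto N(\rmn{O})^2,
\]
corresponding to slopes of 1 and 2, respectively, in the $\log N\rmn{(Be/H)}$ vs. [O/H] plane.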
As discussed above, the \mbox{[O\,{\sc i}]} forbidden line is the most
reliable O abundance indicator owing to its insensitivity to the
adopted stellar parameters as well as to non-LTE effects. Our
oxygen abundances from the \mbox{[O\,{\sc i}]} 6300\,{\AA} line produce
a `moderate' slope of 1.49 for the Be vs. O trend. However,
\citet{nissen2002} studied the effects of 3D model atmospheres on
the derived O abundance from \mbox{[O\,{\sc i}]} forbidden lines.
They found that O abundances based on \mbox{[O\,{\sc i}]} will
decrease if 3D models are applied. In particular, the size of the
decrease grows with decreasing metallicity (see fig.~6 and
table~6 of \citealt{nissen2002}). In contrast, 3D effects on the
\mbox{Be\,{\sc ii}} doublet are negligible according to
\citet{primas2000}. This means that the slope will be closer to unity
than our present result when 3D effects are considered for the
\mbox{[O\,{\sc i}]}
forbidden lines. This implies that the Be production scenario is
probably a primary process.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig8.eps}
\caption{\label{be_o}Be abundances as a function of [O/H]. a)
Results from \mbox{[O\,{\sc i}]} forbidden lines. b) Results from
\mbox{O\,{\sc i}} permitted lines. The symbols and lines have the same
meaning as Fig.~\ref{be_fe}. Remember that O abundances of
B99, for both of the two panels, are the mean
abundances based on UV OH and infrared \mbox{O\,{\sc i}} triplet lines.}
\end{figure*}
\subsection{Special stars: hints on the Be production scenario}\label{spec}
HD\,106038 is very special for its exceptional overabundance of Li
and Be, as mentioned before. Its Li abundance is 0.3\,dex higher than
the Li plateau, while its Be abundance is about 1.2\,dex higher than
that of normal stars with the same metallicity. The Be abundance of
this star is even similar to the solar meteoritic value. Such a Be-rich
star is extremely rare. \citet{asplund2006} derived $\rmn{A(Li)}=2.49$
and \citet{smiljanic2008} reported a Be abundance of
$\log N\rmn{(Be/H)}=-10.60$ for this star, both in good agreement
with our results. In addition to Li and Be, \citet{nissen1997} showed
that this star also has obviously enhanced abundances of Si, Ni, Y,
and Ba. Based on its special abundance pattern, \citet{smiljanic2008}
suggested that HD\,106038 is most probably formed in the vicinity of
a hypernova (HNe).
In addition to HD\,106038, another two stars, namely HD\,132475 and
HD\,126681, seem to stand out distinctly from the Be vs. Fe and Be
vs. O trends. Their Be abundances are about 0.6 and 0.5\,dex higher,
respectively, than that of most stars with the same metallicities.
\citet{boesgaard2006} also found an abnormally high Be abundance
for HD\,132475 (0.5\,dex above the normal stars at its metallicity).
Their sample included another star, BD$+$23$\degr$3912, which has
atmospheric parameters very similar to those of HD\,132475 but a very
different Be abundance. In fact, BD$+$23$\degr$3912 has a Be abundance
matching perfectly the linear Be vs. Fe relation. Together with
another star, HD\,94028, which also shows an excess Be abundance as
found by B99,
\citet{boesgaard2006} concluded that dispersion in Be abundances
does exist at a given metallicity, which implies a local enrichment
of Be in the Galaxy.
However, \citet{pasquini2005} proposed that such dispersion
could be mostly attributed to the scatter of Fe and O abundances,
rather than Be, as we have mentioned in Sect.~\ref{intro}. Following
\citet{pasquini2005}, we calculated the space velocities using the
method presented by \citet{johnson1987}, and determined the orbital
parameters based on the Galactic mass model of \citet{allen1991} for
our sample stars. Input parameters, such as radial velocities,
parallaxes and proper motions were obtained from the SIMBAD
database. According to the criteria of \citet{gratton2003}, fifteen
stars in our sample belong to the accretion component, nine stars
belong to the dissipative component and one star belongs to the thin
disc.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig9.eps}
\caption{\label{kin}a) [O/Fe] vs. [Fe/H] based on \mbox{O\,{\sc i}}
triplet lines. b) [O/Fe] vs. [Fe/H] based on \mbox{[O\,{\sc i}]}
forbidden line. c) [O/Fe] vs. $\log N\rmn{(Be/H)}$ based on
\mbox{O\,{\sc i}} triplet lines. d) [O/Fe] vs. $\log N\rmn{(Be/H)}$
based on \mbox{[O\,{\sc i}]} forbidden line. \emph{Open squares}
represent the accretion component, \emph{filled squares} represent
the dissipative component, and the \emph{asterisk} represents the
only thin disc star HD\,195633 in our sample. The open square with
a \emph{left pointing arrow} represents the upper limit Be abundance
for HD\,97916.}
\end{figure*}
Fig.~\ref{kin} shows [O/Fe] vs. [Fe/H] and [O/Fe] vs. $\log
N\rmn{(Be/H)}$ for our sample stars, based on the results from both
\mbox{[O\,{\sc i}]} forbidden line and \mbox{O\,{\sc i}} triplet
lines. One may find that no clear separation exists between the two
components in the [O/Fe] vs. [Fe/H] diagram, though the accretion
component shows a relatively larger scatter than the dissipative
component. However, in the [O/Fe] vs. $\log N\rmn{(Be/H)}$ diagram,
the two populations are distinctly different, and especially, the
accretion component shows a much larger scatter compared to the
dissipative component. Our results agree well with the findings of
\citet{pasquini2005}. They proposed that such results support the
idea that the formation of the two components took place under
significantly different conditions: an inhomogeneous, rapidly
evolving `halo phase' for the accretion component, and a more
chemically homogeneous, slowly evolving `thick disc phase' for the
dissipative component. The large scatter of the accretion component
in the [O/Fe] vs. $\log N\rmn{(Be/H)}$ diagram may reflect the
inhomogeneous enrichment in oxygen and iron of the halo gas. We note
that, for our Be-rich stars, HD\,106038 and HD\,132475 belong to the
accretion component, while HD\,126681 belongs to the dissipative
component. Another Be-rich star HD\,94028, first discovered by B99
and later confirmed by \citet{boesgaard2006}, is also classified as
a dissipative component star according to the definitions of
\citet{gratton2003}. While the deviation of the accretion component
stars could be due to the inhomogeneous enrichment in Fe and O of
the halo gas, HD\,126681 and HD\,94028 cannot be interpreted in this
way. However, one should keep in mind that stellar kinematics is
only of statistical meaning in describing the Galactic populations.
It has been shown in many previous works that substantial overlap
exists between the halo and thick disc stars. Therefore, it is
dangerous to attribute an individual star to one stellar population
and draw a firm conclusion accordingly. Moreover, we did not find
any distinct differences in the abundance pattern between
HD\,126681/HD\,94028 and typical halo stars. So the possibility that
dispersion in Be vs. Fe and Be vs. O trend originates from the
inhomogeneous enrichment in Fe and O of the protogalactic gas cannot
be excluded.
As an alternative, the scatter in Be can be explained by
the so-called superbubble (SB) model (\citealt{higdon1998,
parizot1999,parizot2000b,ramaty2000}). The basic idea of the SB
model is that repeated SNe occurring in an OB association can
generate a superbubble, in which the CNO nuclei (ejected by SNe),
mixed with some ambient, metal-poor material, are accelerated onto
the metal-deficient material in the supershell and at the surface of
the adjacent molecular cloud, where they are broken up into lighter
nuclei such as Li, Be and B. The produced light elements are then mixed with
other SNe ejecta as well as the ambient, metal-poor gas. However,
such a mixing cannot be perfect, and new stars can form before all
the massive stars explode or the induced light-element production
occurs. As a result, scatter in the abundances of light elements for
a given SB may occur \citep{parizot2000a}. \citet{boesgaard2006}
noted that Na, Mg, Si, Y, Zr, and Ba abundances of HD\,132475 are
typically 0.2\,dex above the mean values of the other stars at that
metallicity according to the results of
\citet{fulbright2000,fulbright2002}. We also noted that Y and Ba
abundances of HD\,126681 are roughly 0.2 and 0.15\,dex higher,
respectively, than the mean values of the remaining sample as found
by \citet{nissen1997}, and the $\alpha$-elements of this star are in
the upper range of their sample. Na and $\alpha$-elements are
typical ejecta of SNe\,II, and Y, Zr, and Ba, though not very
efficiently, can also be produced by the $r$-process in SNe\,II. It
is probable that stars like HD\,132475 and HD\,126681 were formed
from material that was enriched in light elements and SNe\,II ejecta
but did not undergo much dilution in SBs. In
addition, the SB model also predicts a primary process for the Be
production, which is consistent with our Be vs. O trend.
Apart from the possibilities stated above, some other
scenarios for the overabundance of Be can be excluded. It is
unlikely that the Be-rich stars were accreted from the satellite
systems of our Galaxy. \citet*{shetrone2001} and \citet{tolstoy2003}
found that abundance patterns among the dwarf spheroidal (dSph)
galaxies stars are remarkably uniform. We note that
$\alpha$-elements and Y abundances of the dSph stars are obviously
lower than those of the Be-rich stars. The possibility that the
overabundance of Be in our Be-rich stars could be due to the accretion
of a planet or planetesimal debris can also be excluded. If the excess
Be in our Be-rich stars were accreted from material with a composition
similar to that of chondritic meteorites, then the mass of the accreted
iron would be even larger than the total mass of iron in the surface
convective zone of the star, which is certainly impossible.
\section{Conclusions}\label{con}
We have derived Be abundances for 25 main sequence and subgiant stars
spanning the range $-2.5$ $<$ [Fe/H] $<$ $-0.5$. Relations between Be
and Fe as well as Be and O are investigated. The Be vs. Fe trend can
be well represented by a linear relation with a slope of 1.1. This
result is in good agreement with that of B99, and
suggests that Be is enriched globally in the Galaxy, as proposed by
\citet{suzuki2001}. Our Be abundances increase linearly with
increasing [O/H] based on both the \mbox{[O\,{\sc i}]} 6300\,{\AA}
and \mbox{O\,{\sc i}} triplet lines, but with slightly different
slopes. O abundances based on the \mbox{O\,{\sc i}} triplet give a slope
of 1.30 between [Be/H] and [O/H]. This is a little flatter than the
result of B99, which may be partly due to the different
metallicity range. The most reliable O abundance indicator, the
\mbox{[O\,{\sc i}]} forbidden line, gives a steeper relationship (a
slope of 1.49). However, this slope will decrease if 3D effects are
taken into account, according to the results of \citet{nissen2002},
which means that the production of Be is probably a primary
process.
Moreover, we find strong evidence for an intrinsic dispersion
of Be abundances at a given metallicity. The special abundance
pattern of HD\,106038, especially its exceptionally high Be
abundance, can be interpreted most consistently only if the material
which formed HD\,106038 was contaminated by the nucleosynthetic
products of a HNe \citep{smiljanic2008}. The deviations of
HD\,132475 and HD\,126681 from the general Be vs. Fe and Be vs. O
trend can be interpreted by the SB model. However, the possibility
that such dispersion originates from the inhomogeneous enrichment in
Fe and O of the protogalactic gas cannot be excluded.
\section*{Acknowledgments}
We thank the referee, Luca Pasquini, for constructive suggestions
and comments. This work is supported by the National Natural Science
Foundation of China under grants Nos. 10433010, 10521001 and 10778626.
It has made use of the SIMBAD database, operated at CDS, Strasbourg,
France.
\section{\@startsection{section}{1}%
\z@{-.7\linespacing\@plus -\linespacing}{.5\linespacing}%
{\normalfont\scshape\centering}}
\def\subsection{\@startsection{subsection}{2}%
\z@{-.5\linespacing\@plus -.7\linespacing}{.5em}%
{\normalfont\bfseries\mathversion{bold}}}
\makeatother
\allowdisplaybreaks
\numberwithin{equation}{section}
\newcommand {\id}{\mathrm{id}}
\newcommand {\Osc}{\mathrm{Osc}}
\newcommand {\rme}{\mathrm e}
\newcommand {\bbC}{\mathbb C}
\newcommand {\bbD}{\mathbb D}
\newcommand {\bbL}{\mathbb L}
\newcommand {\bbM}{\mathbb M}
\newcommand {\bbN}{\mathbb N}
\newcommand {\bbO}{\mathbb O}
\newcommand {\bbP}{\mathbb P}
\newcommand {\bbQ}{\mathbb Q}
\newcommand {\bbT}{\mathbb T}
\newcommand {\bbX}{\mathbb X}
\newcommand {\bbY}{\mathbb Y}
\newcommand {\bbZ}{\mathbb Z}
\newcommand {\calA}{\mathcal A}
\newcommand {\calB}{\mathcal B}
\newcommand {\calC}{\mathcal C}
\newcommand {\calD}{\mathcal D}
\newcommand {\calE}{\mathcal E}
\newcommand {\calH}{\mathcal H}
\newcommand {\calK}{\mathcal K}
\newcommand {\calL}{\mathcal L}
\newcommand {\calM}{\mathcal M}
\newcommand {\calN}{\mathcal N}
\newcommand {\calP}{\mathcal P}
\newcommand {\calQ}{\mathcal Q}
\newcommand {\calR}{\mathcal R}
\newcommand {\calT}{\mathcal T}
\newcommand {\calX}{\mathcal X}
\newcommand {\calY}{\mathcal Y}
\newcommand {\calZ}{\mathcal Z}
\newcommand {\mbar}[3]{\hskip #2 \overline{\hskip -#2 #1 \hskip -#3} \hskip #3}
\newcommand {\oM}{\mbar{M}{.2em}{.1em}}
\newcommand {\oQ}{\mbar{Q}{.07em}{.07em}}
\newcommand {\oT}{\mbar{T}{.07em}{.03em}}
\newcommand {\oV}{\mbar{V}{.07em}{.07em}}
\newcommand {\oW}{\mbar{W}{.03em}{.03em}}
\newcommand {\orho}{\mbar{\rho}{.03em}{.03em}}
\newcommand {\ovarphi}{\mbar{\varphi}{.07em}{.07em}}
\newcommand {\obbL}{\mbar{\mathbb L}{.0em}{.1em}}
\newcommand {\obbM}{\mbar{\mathbb M}{.05em}{.1em}}
\newcommand {\obbT}{\mbar{\mathbb T}{.05em}{.05em}}
\newcommand {\obbQ}{\mbar{\mathbb Q}{.05em}{.05em}}
\newcommand {\ocalL}{\mbar{\mathcal L}{.07em}{.07em}}
\newcommand {\ocalM}{\mbar{\mathcal M}{.07em}{.07em}}
\newcommand {\ocalQ}{\mbar{\mathcal Q}{.07em}{.07em}}
\newcommand {\ocalT}{\mbar{\mathcal T}{.03em}{.03em}}
\newcommand {\gothh}{\mathfrak h}
\newcommand {\gothg}{\mathfrak g}
\newcommand {\gothgl}{\mathfrak{gl}}
\newcommand {\gothsl}{\mathfrak{sl}}
\newcommand {\lsliii}{\mathcal L(\mathfrak{sl}_3)}
\newcommand {\tlsliii}{\widetilde{\mathcal L}(\mathfrak{sl}_3)}
\newcommand {\uqbp}{\mathrm U_q(\mathfrak b_+)}
\newcommand {\uqbm}{\mathrm U_q(\mathfrak b_-)}
\newcommand {\uqnp}{\mathrm U_q(\mathfrak n_+)}
\newcommand {\uqnm}{\mathrm U_q(\mathfrak n_-)}
\newcommand {\uqgliii}{\mathrm U_q(\mathfrak{gl}_3)}
\newcommand {\uqsliii}{\mathrm U_q(\mathfrak{sl}_3)}
\newcommand {\uqhlsliii}{\mathrm U_q(\hat{\mathcal L}(\mathfrak{sl}_3))}
\newcommand {\uqlslii}{\mathrm U_q(\mathcal L(\mathfrak{sl}_2))}
\newcommand {\uqlsliii}{\mathrm U_q(\mathcal L(\mathfrak{sl}_3))}
\newcommand {\uqtlsliii}{\mathrm U_q(\widetilde{\mathcal L}(\mathfrak{sl}_3))}
\DeclareMathOperator {\End}{End}
\DeclareMathOperator {\im}{im}
\DeclareMathOperator {\sgn}{sgn}
\DeclareMathOperator {\tr}{tr}
\title{Quantum groups and functional relations for higher rank}
\author[H. Boos]{Hermann Boos}
\address{Fachbereich C -- Physik, Bergische Universit\"at Wuppertal, 42097 Wuppertal, Germany}
\email{[email protected]}
\author[F. G\"ohmann]{Frank G\"ohmann}
\address{Fachbereich C -- Physik, Bergische Universit\"at Wuppertal, 42097 Wuppertal, Germany}
\email{[email protected]}
\author[A. Kl\"umper]{Andreas Kl\"umper}
\address{Fachbereich C -- Physik, Bergische Universit\"at Wuppertal, 42097 Wuppertal, Germany}
\email{[email protected]}
\author[Kh. S. Nirov]{\vskip .2em Khazret S. Nirov}
\address{Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Ave 7a, 117312 Moscow, Russia}
\curraddr{Fachbereich C -- Physik, Bergische Universit\"at Wuppertal, 42097 Wuppertal, Germany}
\email{[email protected]}
\author[A. V. Razumov]{Alexander V. Razumov}
\address{Institute for High Energy Physics, 142281 Protvino, Moscow region, Russia}
\email{[email protected]}
\begin{document}
\addtolength {\jot}{3pt}
\begin{abstract}
A detailed construction of the universal integrability objects related to the integrable systems associated with the quantum group $\uqlsliii$ is given. The full proof of the functional relations in the form independent of the representation of the quantum group on the quantum space is presented. The case of the general gradation and general twisting is treated. The specialization of the universal functional relations to the case when the quantum space is the state space of a discrete spin chain is described.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The method of functional relations was proposed by Baxter to solve statistical models whether or not they can be treated with the help of the Bethe ansatz, see, for example, \cite{Bax82, Bax04}. It appears that its main ingredients, transfer matrices and $Q$-operators, are essential not only for the integration of the corresponding quantum statistical models in the sense of calculating the partition function in the thermodynamic limit. One of the remarkable recent applications is their usage in the construction of the fermionic basis \cite{BooJimMiwSmiTak07, BooJimMiwSmiTak09, JimMiwSmi09, BooJimMiwSmi10} for the observables of the XXZ spin chain.
It seems that the most productive, although not comprehensive, modern approach to the theory of quantum integrable systems is the approach based on the concept of quantum group invented by Drinfeld and Jimbo \cite{Dri87, Jim85}. In accordance with this approach, the transfer matrices and $Q$-operators are constructed by choosing first the representations for the factors of the tensor product of two copies of the underlying quantum group. Then these representations are applied to the universal $R$-matrix, and finally the trace over one of the representation spaces is taken. Here the functional relations are consequences of the properties of the used representations of the quantum group. For the first time, it was conceived by Bazhanov, Lukyanov and Zamolodchikov \cite{BazLukZam96, BazLukZam97, BazLukZam99}.
Following the physical tradition, we call the representation space corresponding to the first factor of the tensor product the auxiliary space, and the representation space corresponding to the second factor the quantum space. The representation on the auxiliary space defines the integrability object, while the representation on the quantum space defines the concrete physical integrable system. It appears convenient for our purposes to fix the representation for the first factor only, see, for example, \cite{AntFei97, RosWes02, BazTsu08, BooGoeKluNirRaz14a, BooGoeKluNirRaz13}. We use for the objects obtained in such a way the term ``universal''. The relations obtained for them can be used for any physical model related to the quantum group under consideration.
In the papers \cite{BazLukZam96, BazLukZam97, BazLukZam99} the case of the quantum group $\uqlslii$ was considered and as the quantum space the state space of a conformally invariant two dimensional field theory was taken. We reconsidered the case of this quantum group in the papers \cite{BooGoeKluNirRaz14a, BooGoeKluNirRaz13} where we obtained the functional relations in the universal form and then chose as the quantum space the state space of the XXZ-spin chain.
A two dimensional field theory with extended conformal symmetry related to the quantum group $\uqlsliii$ was analysed in the paper \cite{BazHibKho02}. Here as the quantum space the corresponding state space of the quantum continuous field theory under consideration was taken again. In the present paper we define for the case of the quantum group $\uqlsliii$ universal integrability objects and derive the corresponding functional relations. Here to define the integrability objects we use the general gradation and general twisting. However, the main difference from \cite{BazHibKho02} is that we give a new and detailed proof of
the functional relations. In the paper \cite{BazHibKho02} proofs are often skipped, or given in schematic form. This was one of the reasons for writing our paper. Another reason was the desire to have the specialization of the universal functional relations to the case when the quantum space is the state space of a discrete spin chain.
Below we use the notation
\begin{equation*}
\kappa_q = q - q^{-1}
\end{equation*}
so the definition of the $q$-deformed number can be written as
\begin{equation*}
[\nu]_q = \frac{q^\nu - q^{- \nu}}{q - q^{-1}} = \kappa_q^{-1}(q^\nu - q^{-\nu}), \qquad \nu \in \bbC.
\end{equation*}
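In particular,
\begin{equation*}
[0]_q = 0, \qquad [1]_q = 1, \qquad [2]_q = q + q^{-1},
\end{equation*}
and in the limit $q \to 1$ one has $[\nu]_q \to \nu$.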
We denote by $\calL(\gothg)$ the loop Lie algebra of a finite dimensional simple Lie algebra $\gothg$ and by $\widetilde \calL(\gothg)$ its standard central extension, see, for example, the book by Kac \cite{Kac90}. The symbol~$\bbN$ means the set of natural numbers and the symbol $\bbZ_+$ the set of non-negative integers.
To construct integrability objects one uses spectral parameters. They are introduced by defining a $\bbZ$-gradation of the quantum group. In the case under consideration a $\bbZ$-gradation of $\uqlsliii$ is determined by three integers $s_0$, $s_1$ and $s_2$. We use the notation $s = s_0 + s_1 + s_2$ and denote by $r_s$ some fixed $s$th root of $-1$, so that $(r_s)^s = -1$.
\section{Integrability objects}
In this paper we consider integrable systems related to the quantum group $\uqlsliii$. Depending on the sense of the ``deformation parameter'' $q$, there are at least three definitions of a quantum group. According to the first definition, $q = \exp \hbar$, where $\hbar$ is an indeterminate, according to the second one, $q$ is an indeterminate, and according to the third one, $q = \exp \hbar$, where $\hbar$ is a complex number. In the first case a quantum group is a $\bbC[[\hbar]]$-algebra, in the second case a $\bbC(q)$-algebra, and in the third case it is just a complex algebra. We define the quantum group as a $\bbC$-algebra, see, for example, the books \cite{JimMiw95, EtiFreKir98}.
To construct representations of the quantum group $\uqlsliii$ we use Jimbo's homomorphism from $\uqlsliii$ to the quantum group $\uqgliii$. Therefore, we first recall the definition of $\uqgliii$ and then discuss $\uqlsliii$.
\subsection{\texorpdfstring{Quantum group $\uqgliii$}{Quantum group Uq(gl3)}} \label{ss:qguqslii}
Denote by $\gothg$ the standard Cartan subalgebra of the Lie algebra $\gothgl_3$ and by $G_i = E_{ii}$, $i = 1, 2, 3$, the elements forming the standard basis of $\gothg$.\footnote{We use the usual notation $E_{ij}$ for the matrix units.} The root system of $\gothgl_3$ relative to $\gothg$ is generated by the simple roots $\alpha_i \in \gothg^*$, $i = 1, 2$, given by the relations
\begin{equation}
\alpha_j(G_i) = c_{ij}, \label{alphah}
\end{equation}
where
\begin{equation*}
(c_{ij}) = \left( \begin{array}{rr}
1 & 0 \\
-1 & 1 \\
0 & -1
\end{array} \right).
\end{equation*}
The Lie algebra $\gothsl_3$ is a subalgebra of $\gothgl_3$, and the standard Cartan subalgebra $\gothh$ of $\gothsl_3$ is a subalgebra of $\gothg$. Here the standard Cartan generators $H_i$, $i = 1, 2$, of $\gothsl_3$ are
\begin{equation}
H_1 = G_1 - G_2, \qquad H_2 = G_2 - G_3, \label{hk}
\end{equation}
and we have
\begin{equation*}
\alpha_j(H_i) = a_{ij},
\end{equation*}
where
\begin{equation}
(a_{ij}) = \left( \begin{array}{rr}
2 & -1 \\
-1 & 2
\end{array} \right) \label{cm}
\end{equation}
is the Cartan matrix of $\gothsl_3$.
Let $\hbar$ be a complex number and $q = \exp \hbar$. We define the quantum group $\uqgliii$ as a unital associative $\bbC$-algebra generated by the elements $E_i$, $F_i$, $i = 1, 2$, and $q^X$, $X \in \gothg$, with the relations\footnote{It is necessary to assume that $q^2 \ne 1$. In fact, we assume that $q$ is not a root of unity.}
\begin{gather}
q^0 = 1, \qquad q^{X_1} q^{X_2} = q^{X_1 + X_2}, \label{xx} \\
q^X E_i \, q^{-X} = q^{\alpha_i(X)} E_i, \qquad q^X F_i \, q^{-X} = q^{- \alpha_i(X)} F_i, \label{xexf} \\
[E_i, F_j] = \kappa_q^{-1} \delta_{ij} \, (q^{H_i} - q^{-H_i}) \label{ef}
\end{gather}
satisfied for any $i$ and $j$, and the Serre relations
\begin{equation}
E_i^2 E_j^{} - [2]_q E_i^{} E_j^{} E_i^{} + E_j^{} E_i^2 = 0, \qquad
F_i^2 F_j^{} - [2]_q F_i^{} F_j^{} F_i^{} + F_j^{} F_i^2 = 0 \label{sr}
\end{equation}
satisfied for any distinct $i$ and $j$. Note that $q^X$ is just a convenient notation. There are no elements of $\uqgliii$ corresponding to the elements of $\gothg$. In fact, this notation means a set of elements of $\uqgliii$ parametrized by $\gothg$. It is convenient to use the notations
\begin{equation*}
q^{X + \nu} = q^\nu q^X
\end{equation*}
and
\begin{equation}
[X + \nu]_q = \kappa_q^{-1} \, (q^{X + \nu}- q^{ -X -\nu}) = \kappa_q^{-1} \, (q^{\nu} q^X - q^{-\nu} q^{-X}) \label{xnq}
\end{equation}
for any $X \in \gothg$ and $\nu \in \bbC$. Here equation (\ref{ef}) takes the form
\begin{equation*}
[E_i, F_j] = \delta_{ij} [H_i]_q.
\end{equation*}
Similar notations are used below for the case of the quantum groups $\uqtlsliii$ and $\uqlsliii$.
With respect to the properly defined coproduct, counit and antipode the quantum group $\uqgliii$ is a Hopf algebra.
Looking at (\ref{xexf}) one can say that the generators $E_i$ and $F_i$ are related to the roots $\alpha_i$ and $-\alpha_i$ respectively. Define the elements related to the roots $\alpha_1 + \alpha_2$ and $-(\alpha_1 + \alpha_2)$ as
\begin{equation}
E_3 = E_1 E_2 - q^{-1} E_2 E_1, \qquad F_3 = F_2 F_1 - q \, F_1 F_2. \label{e3f3}
\end{equation}
The Serre relations (\ref{sr}) give
\begin{align}
& E_3 E_1 = q^{-1} E_1 E_3, && E_3 E_2 = q \, E_2 E_3, \label{e3e} \\
& F_3 F_1 = q^{-1} F_1 F_3, && F_3 F_2 = q \, F_2 F_3. \label{f3f}
\end{align}
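For example, the first relation in (\ref{e3e}) can be verified directly. Writing the first Serre relation in (\ref{sr}) as $E_2 E_1^2 = [2]_q E_1 E_2 E_1 - E_1^2 E_2$, we obtain
\begin{equation*}
E_3 E_1 = E_1 E_2 E_1 - q^{-1} E_2 E_1^2 = q^{-1} E_1^2 E_2 - q^{-2} E_1 E_2 E_1 = q^{-1} E_1 E_3.
\end{equation*}
The remaining relations are verified in a similar way.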
One can also verify that
\begin{equation*}
[E_3, F_3] = [H_1 + H_2]_q,
\end{equation*}
and, besides,
\begin{align}
& [E_1, F_3] = - q \, F_2 \, q^{H_1}, && [E_2, F_3] = F_1 \, q^{- H_2}, \label{f3e} \\
& [E_3, F_1] = - E_2 \, q^{- H_1}, && [E_3, F_2] = q^{-1} E_1 \, q^{H_2}. \label{e3f}
\end{align}
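To verify, for example, the second relation in (\ref{f3e}), note that $[E_2, F_1] = 0$ and $q^{\pm H_2} F_1 = q^{\pm 1} F_1 \, q^{\pm H_2}$, so that
\begin{equation*}
[E_2, F_3] = [H_2]_q \, F_1 - q \, F_1 [H_2]_q = \kappa_q^{-1} F_1 \, (q - q^{-1}) \, q^{- H_2} = F_1 \, q^{- H_2}.
\end{equation*}
The remaining commutators are computed similarly.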
Using the above relations, one can find explicit expressions for the action of the generators of $\uqgliii$ on Verma $\uqgliii$-modules, see appendix \ref{a:vmr}.
\subsection{\texorpdfstring{Quantum group $\uqlsliii$}{Quantum group Uq(L(sl3))}}
\subsubsection{Definition}
We start with the quantum group $\uqtlsliii$. Recall that the Cartan subalgebra of $\tlsliii$ is
\begin{equation*}
\widetilde \gothh = \gothh \oplus \bbC c,
\end{equation*}
where $\gothh = \bbC H_1 \oplus \bbC H_2$ is the standard Cartan subalgebra of $\gothsl_3$ and $c$ is the central element \cite{Kac90}. Define the Cartan elements
\begin{equation*}
h_0 = c - H_1 - H_2, \qquad h_1 = H_1, \qquad h_2 = H_2,
\end{equation*}
so that one has
\begin{equation}
c = h_0 + h_1 + h_2 \label{c}
\end{equation}
and
\begin{equation*}
\widetilde \gothh = \bbC h_0 \oplus \bbC h_1 \oplus \bbC h_2.
\end{equation*}
The simple roots $\alpha_i \in \widetilde{\gothh}^*$, $i = 0$, $1$, $2$, are given by the equation
\begin{equation*}
\alpha_j(h_i) = \tilde a_{ij},
\end{equation*}
where
\begin{equation*}
(\tilde a_{ij}) = \left(\begin{array}{rrr}
2 & -1 & -1 \\
-1 & 2 & -1 \\
-1 & -1 & 2
\end{array} \right)
\end{equation*}
is the Cartan matrix of the Lie algebra $\tlsliii$.
As before, let $\hbar$ be a complex number and $q = \exp \hbar$. The quantum group $\uqtlsliii$ is a unital associative $\bbC$-algebra generated by the elements $e_i$, $f_i$, $i = 0, 1, 2$, and $q^x$, $x
\in \widetilde \gothh$, with the relations
\begin{gather}
q^0 = 1, \qquad q^{x_1} q^{x_2} = q^{x_1 + x_2}, \label{lxx} \\
q^x e_i q^{-x} = q^{\alpha_i(x)} e_i, \qquad q^x f_i q^{-x} = q^{-\alpha_i(x)} f_i, \\
[e_i, f_j] = \delta_{ij} [h_i]_q
\end{gather}
satisfied for all $i$ and $j$, and the Serre relations
\begin{equation}
e_i^2 e_j^{\mathstrut} - [2]_q e_i^{\mathstrut} e_j^{\mathstrut} e_i^{\mathstrut}
+ e_j^{\mathstrut} e_i^2 = 0, \qquad f_i^2 f_j^{\mathstrut} - [2]_q f_i^{\mathstrut} f_j^{\mathstrut} f_i^{\mathstrut} + f_j^{\mathstrut} f_i^2 = 0 \label{lsr}
\end{equation}
satisfied for all distinct $i$ and $j$.
The quantum group $\uqtlsliii$ is a Hopf algebra with the comultiplication $\Delta$, the antipode $S$, and the counit $\varepsilon$ defined by the
relations
\begin{gather}
\Delta(q^x) = q^x \otimes q^x, \qquad \Delta(e_i) = e_i \otimes 1 + q^{- h_i} \otimes e_i, \qquad \Delta(f_i) = f_i \otimes q^{h_i} + 1 \otimes f_i, \label{cmul} \\
S(q^x) = q^{- x}, \qquad S(e_i) = - q^{h_i} e_i, \qquad S(f_i) = - f_i \, q^{- h_i}, \\
\varepsilon(q^x) = 1, \qquad \varepsilon(e_i) = 0, \qquad \varepsilon(f_i) = 0. \label{cu}
\end{gather}
The quantum group $\uqlsliii$ can be defined as the quotient algebra of $\uqtlsliii$ by the two-sided ideal generated by the elements of the form $q^{\nu c} - 1$, $\nu \in \bbC^\times$. In terms of generators and relations the quantum group $\uqlsliii$ is a $\bbC$-algebra generated by the elements $e_i$, $f_i$, $i = 0, 1, 2$, and $q^x$, $x \in \widetilde{\gothh}$, with relations (\ref{lxx})--(\ref{lsr}) and an additional relation
\begin{equation}
q^{\nu c} = 1, \label{qh0h1}
\end{equation}
where $\nu \in \bbC^\times$. It is a Hopf algebra with the Hopf structure defined by (\ref{cmul})--(\ref{cu}). One of the reasons to use the quantum group $\uqlsliii$ instead of $\uqtlsliii$ is that in the case of $\uqtlsliii$ we have no expression for the universal $R$-matrix.
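Note that the elements $q^{\nu c}$ are central in $\uqtlsliii$, so that the above quotient is well defined as a Hopf algebra. Indeed, since every column of the Cartan matrix $(\tilde a_{ij})$ sums to zero, we have $\alpha_j(c) = \tilde a_{0 j} + \tilde a_{1 j} + \tilde a_{2 j} = 0$, and therefore
\begin{equation*}
q^{\nu c} e_j \, q^{- \nu c} = q^{\nu \alpha_j(c)} e_j = e_j, \qquad q^{\nu c} f_j \, q^{- \nu c} = q^{- \nu \alpha_j(c)} f_j = f_j.
\end{equation*}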
\subsubsection{Universal $R$-matrix}
As any Hopf algebra the quantum group $\uqlsliii$ has another comultiplication called the opposite comultiplication. It can be defined explicitly by the equations
\begin{gather}
\Delta^{\mathrm{op}}(q^x) = q^x \otimes q^x, \label{doqx} \\
\Delta^{\mathrm{op}}(e_i) = e_i \otimes q^{- h_i} + 1 \otimes e_i, \qquad \Delta^{\mathrm{op}}(f_i) = f_i \otimes 1 + q^{h_i} \otimes f_i. \label{doefi}
\end{gather}
When the quantum group $\uqlsliii$ is defined as a $\bbC[[\hbar]]$-algebra it is a quasitriangular Hopf algebra. It means that there exists an element $\calR \in \uqlsliii \otimes \uqlsliii$, called the universal $R$-matrix, such that
\begin{equation*}
\Delta^{\mathrm{op}}(a) = \calR \, \Delta(a) \, \calR^{-1}
\end{equation*}
for all $a \in \uqlsliii$, and\footnote{For the explanation of the notation see, for example, the book \cite{ChaPre94} or the papers \cite{BooGoeKluNirRaz10, BooGoeKluNirRaz14a}.}
\begin{equation}
(\Delta \otimes \id) (\calR) = \calR^{13} \calR^{23}, \qquad (\id \otimes \Delta) (\calR) = \calR^{13} \calR^{12}. \label{urm}
\end{equation}
The expression for the universal $R$-matrix of $\uqlsliii$ considered as a $\bbC[[\hbar]]$-algebra can be constructed using the procedure proposed by Khoroshkin and Tolstoy \cite{TolKho92}. Note that here the universal $R$-matrix is an element of $\uqbp \otimes \uqbm$, where $\uqbp$ is the Borel subalgebra of $\uqlsliii$ generated by $e_i$, $i = 0, 1, 2$, and $q^x$, $x \in \widetilde{\gothh}$, and $\uqbm$ is the Borel subalgebra of $\uqlsliii$ generated by $f_i$, $i = 0, 1, 2$, and $q^x$, $x \in \widetilde{\gothh}$.
In fact, one can use the expression for the universal $R$-matrix from the paper \cite{TolKho92} also for the case of the quantum group $\uqlsliii$ defined as a $\bbC$-algebra, keeping in mind that in this case the quantum group is quasitriangular only in some restricted sense. Namely, all the relations involving the universal $R$-matrix should be considered as valid only for the weight representations of $\uqlsliii$, see in this respect the paper \cite{Tan92} and the discussion below.
Recall that a representation $\varphi$ of $\uqlsliii$ on the vector space $V$ is a weight representation if
\begin{equation*}
V = \bigoplus_{\lambda \in \widetilde \gothh^*} V_\lambda,
\end{equation*}
where
\begin{equation*}
V_\lambda = \{v \in V \mid q^x v = q^{\lambda(x)} v \mbox{ for any } x \in \widetilde \gothh \}.
\end{equation*}
Taking into account relations (\ref{qh0h1}) and (\ref{c}), we conclude that $V_\lambda \ne \{0\}$ only if
\begin{equation}
\lambda(h_0 + h_1 + h_2) = 0. \label{lambdah0h1h2}
\end{equation}
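Indeed, if $v$ is a non-zero element of $V_\lambda$, then relations (\ref{qh0h1}) and (\ref{c}) give
\begin{equation*}
v = q^{\nu c} v = q^{\nu \lambda(h_0 + h_1 + h_2)} v
\end{equation*}
for all $\nu \in \bbC^\times$, which is possible only if (\ref{lambdah0h1h2}) is satisfied.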
Let $\varphi_1$ and $\varphi_2$ be weight representations of $\uqlsliii$ on the vector spaces $V_1$ and $V_2$ with the weight decompositions
\begin{equation*}
V_1 = \bigoplus_{\lambda \in \widetilde \gothh^*} (V_1)_\lambda, \qquad V_2 = \bigoplus_{\lambda \in \widetilde \gothh^*} (V_2)_\lambda.
\end{equation*}
In the tensor product $V_1 \otimes V_2$ the role of the universal $R$-matrix is played by the operator
\begin{equation}
\calR_{\varphi_1, \varphi_2} = (\varphi_1 \otimes \varphi_2)(\calB) \, \calK_{\varphi_1, \varphi_2}. \label{rpipi}
\end{equation}
Here $\calB$ is an element of $\uqnp \otimes \uqnm$, where $\uqnp$ and $\uqnm$ are the subalgebras of $\uqlsliii$ generated by $e_i$, $i = 0, 1, 2$, and $f_i$, $i = 0, 1, 2$, respectively. The operator $\calK_{\varphi_1, \varphi_2}$ acts on a vector $v \in (V_1)_{\lambda_1} \otimes (V_2)_{\lambda_2}$ in accordance with the equation
\begin{equation}
\calK_{\varphi_1, \varphi_2} \, v = q^{\sum_{i, j = 1}^2 b_{i j} \lambda_1(h_i) \lambda_2(h_j)} \, v, \label{kpipii}
\end{equation}
where
\begin{equation*}
(b_{ij}) = \frac{1}{3} \left( \begin{array}{cc}
2 & 1 \\
1 & 2
\end{array} \right)
\end{equation*}
is the inverse matrix of the Cartan matrix (\ref{cm}) of the Lie algebra $\gothsl_3$. It follows from (\ref{lambdah0h1h2}) that (\ref{kpipii}) can be written in a simpler form
\begin{equation}
\calK_{\varphi_1, \varphi_2} \, v = q^{\sum_{i = 0}^2 \lambda_1(h_i) \lambda_2(h_i) / 3} \, v. \label{kpipi}
\end{equation}
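Indeed, relation (\ref{lambdah0h1h2}) gives $\lambda_a(h_0) = - \lambda_a(h_1) - \lambda_a(h_2)$ for $a = 1$, $2$, so that
\begin{align*}
\sum_{i = 0}^2 \lambda_1(h_i) \lambda_2(h_i) & = 2 \lambda_1(h_1) \lambda_2(h_1) + \lambda_1(h_1) \lambda_2(h_2) + \lambda_1(h_2) \lambda_2(h_1) + 2 \lambda_1(h_2) \lambda_2(h_2) \\
& = 3 \sum_{i, j = 1}^2 b_{ij} \lambda_1(h_i) \lambda_2(h_j).
\end{align*}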
\subsection{\texorpdfstring{$R$-operators}{R-operators}}
To obtain $R$-operators, or, as they are also called, $R$-matrices, one uses one and the same finite dimensional representation for both factors of the tensor product of the two copies of the quantum group. We do not use in this paper the explicit form of the $R$-operators for the quantum group $\uqlsliii$. The corresponding calculations for this case and for some other quantum groups can be found in the papers \cite{KhoTol92, LevSoiStu93, ZhaGou94, BraGouZhaDel94, BraGouZha95, BooGoeKluNirRaz10, BooGoeKluNirRaz11}.
\subsection{Universal monodromy and universal transfer operators}
\subsubsection{General remarks} \label{ss:gr}
To construct universal monodromy and transfer operators we endow $\uqlsliii$ with a $\bbZ$-gradation, see, for example, \cite{BooGoeKluNirRaz14a, BooGoeKluNirRaz13}. The usual way to do it is as follows.
Given $\zeta \in \bbC^\times$, we define an automorphism $\Gamma_\zeta$ of $\uqlsliii$ by its action on the generators of $\uqlsliii$ as
\begin{equation*}
\Gamma_\zeta(q^x) = q^x, \qquad \Gamma_\zeta(e_i) = \zeta^{s_i} e_i, \qquad \Gamma_\zeta(f_i) = \zeta^{-s_i} f_i,
\end{equation*}
where $s_i$ are arbitrary integers. The family of automorphisms $\Gamma_\zeta$, $\zeta \in \bbC^\times$, generates the $\bbZ$-gradation with the grading subspaces
\begin{equation*}
\uqlsliii_m = \{ a \in \uqlsliii \mid \Gamma_\zeta(a) = \zeta^m a \}.
\end{equation*}
Taking into account (\ref{cmul}) we see that
\begin{equation}
(\Gamma_\zeta \otimes \Gamma_\zeta) \circ \Delta = \Delta \circ \Gamma_\zeta. \label{ggd}
\end{equation}
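Indeed, using (\ref{cmul}) we obtain for the generators
\begin{equation*}
(\Gamma_\zeta \otimes \Gamma_\zeta)(\Delta(e_i)) = \zeta^{s_i} e_i \otimes 1 + q^{- h_i} \otimes \zeta^{s_i} e_i = \Delta(\Gamma_\zeta(e_i)),
\end{equation*}
and similarly for $f_i$ and $q^x$.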
It also follows from the explicit form of the universal $R$-matrix obtained with the help of the Tolstoy--Khoroshkin construction, see \cite{TolKho92, BooGoeKluNirRaz10}, that for any $\zeta \in \bbC^\times$ we have
\begin{equation}
(\Gamma_\zeta \otimes \Gamma_\zeta)(\calR) = \calR. \label{ggr}
\end{equation}
Following the physical tradition, we call $\zeta$ the spectral parameter.
If $\pi$ is a representation of $\uqlsliii$, then for any $\zeta \in \bbC^\times$ the mapping $\pi \circ \Gamma_\zeta$ is also a representation of $\uqlsliii$. Below, for any homomorphism $\varphi$ from $\uqlsliii$ to some algebra we use the notation
\begin{equation}
\varphi_\zeta = \varphi \circ \Gamma_\zeta. \label{vpz}
\end{equation}
If $V$ is a $\uqlsliii$-module corresponding to the representation $\pi$, we denote by $V_\zeta$ the $\uqlsliii$-module corresponding to the representation $\pi_\zeta$. Certainly, as vector spaces $V$ and $V_\zeta$ coincide.
Now let $\pi$ be a representation of the quantum group $\uqlsliii$ on a vector space $V$. The universal monodromy operator $\calM_\pi(\zeta)$ corresponding to the representation $\pi$ is defined by the relation
\begin{equation*}
\calM_\pi(\zeta) = (\pi_\zeta \otimes \id)(\calR).
\end{equation*}
It is clear that $\calM_\pi(\zeta)$ is an element of $\End(V) \otimes \uqlsliii$.
Universal monodromy operators are auxiliary objects needed for the construction of universal transfer operators. The universal transfer operator $\calT_\pi(\zeta)$ corresponding to the universal monodromy operator $\calM_\pi(\zeta)$ is defined as
\begin{equation*}
\calT_\pi(\zeta) = (\tr \otimes \id)(\calM_\pi(\zeta) (\pi_\zeta(t) \otimes 1)) = ((\tr \circ \pi_\zeta) \otimes \id)(\calR (t \otimes 1)),
\end{equation*}
where $t$ is a group-like element of $\uqlsliii$ called a twist element. Note that $\calT_\pi(\zeta)$ is an element of $\uqlsliii$. An important property of the universal transfer operators $\calT_\pi(\zeta)$ is that they commute for all representations $\pi$ and all values of $\zeta$. They also commute with all generators $q^x$, $x \in \widetilde \gothh$, see, for example, our papers \cite{BooGoeKluNirRaz14a, BooGoeKluNirRaz13}.
As we noted above, to construct representations of $\uqlsliii$ we are going to use Jimbo's homomorphism. It is a homomorphism $\varphi: \uqlsliii \to \uqgliii$ defined by the equations\footnote{Recall that the Cartan generators of $\gothsl_3$ are related to those of $\gothgl_3$ by relation (\ref{hk}).}
\begin{align}
& \varphi(q^{\nu h_0}) = q^{\nu(G_3 - G_1)}, && \varphi(q^{\nu h_1}) = q^{\nu (G_1 - G_2)}, && \varphi(q^{\nu h_2}) = q^{\nu (G_2 - G_3)}, \label{jha} \\
& \varphi(e_0) = F_3 \, q^{- G_1 - G_3}, && \varphi(e_1) = E_1, && \varphi(e_2) = E_2, \\
& \varphi(f_0) = E_3 \, q^{G_1 + G_3} , && \varphi(f_1) = F_1, && \varphi(f_2) = F_2, \label{jhc}
\end{align}
see the paper \cite{Jim86a}. If $\pi$ is a representation of $\uqgliii$, then $\pi \circ \varphi$ is a representation of $\uqlsliii$. Define the universal monodromy operator
\begin{equation*}
\calM_\varphi(\zeta) = (\varphi_\zeta \otimes \id)(\calR)
\end{equation*}
being an element of $\uqgliii \otimes \uqlsliii$. It is evident that
\begin{equation*}
\calM_{\pi \circ \varphi}(\zeta) = (\pi \otimes \id)(\calM_\varphi(\zeta)) = ((\pi \circ \varphi_\zeta) \otimes \id)(\calR).
\end{equation*}
For the corresponding transfer operator we have
\begin{equation*}
\calT_{\pi \circ \varphi}(\zeta) = (\tr \otimes \id)(\calM_{\pi \circ \varphi}(\zeta)((\pi \circ \varphi_\zeta)(t) \otimes 1)) = ((\tr \circ \pi \circ \varphi_\zeta) \otimes \id)(\calR (t \otimes 1)).
\end{equation*}
Introduce the notation
\begin{equation*}
\tr_\pi = \tr \circ \pi.
\end{equation*}
Note that $\tr_\pi$ is a trace on $\uqgliii$. This means that it is a linear mapping from $\uqgliii$ to $\bbC$ satisfying the cyclicity condition
\begin{equation*}
\tr_\pi(a b) = \tr_\pi(b a)
\end{equation*}
for any $a, b \in\uqgliii$. One can write
\begin{equation*}
\calT_{\pi \circ \varphi}(\zeta) = (\tr \otimes \id)(\calM_{\pi \circ \varphi}(\zeta) ((\pi \circ \varphi _\zeta)(t) \otimes 1)) = (\tr_\pi \otimes \id)(\calM_\varphi(\zeta) (\varphi_\zeta(t) \otimes 1)).
\end{equation*}
Thus, to obtain the universal transfer operators $\calT_{\pi \circ \varphi}(\zeta)$, one can use different universal monodromy operators $\calM_{\pi \circ \varphi}(\zeta)$ corresponding to different representations $\pi$, or use one and the same universal monodromy operator $\calM_\varphi(\zeta)$ but different traces $\tr_\pi$ corresponding to different representations $\pi$.
\subsubsection{More universal monodromy operators}
Additional universal monodromy operators can be defined with the help of automorphisms of $\uqlsliii$. There are two special automorphisms of $\uqlsliii$. The first one is defined by the relations
\begin{align}
& \sigma(e_0) = e_1, && \sigma(e_1) = e_2, && \sigma(e_2) = e_0, \label{sigmae} \\*
& \sigma(f_0) = f_1, && \sigma(f_1) = f_2, && \sigma(f_2) = f_0, \\*
& \sigma(q^{\nu h_0}) = q^{\nu h_1}, && \sigma(q^{\nu h_1}) = q^{\nu h_2}, && \sigma(q^{\nu h_2}) = q^{\nu h_0}, \label{sigmah}
\end{align}
and the second one is given by
\begin{align}
& \tau(e_0) = e_0, && \tau(e_1) = e_2, && \tau(e_2) = e_1, \label{taue} \\
& \tau(f_0) = f_0, && \tau(f_1) = f_2, && \tau(f_2) = f_1, \\
& \tau(q^{\nu h_0}) = q^{\nu h_0}, && \tau(q^{\nu h_1}) = q^{\nu h_2}, && \tau(q^{\nu h_2}) = q^{\nu h_1}. \label{tauh}
\end{align}
These automorphisms generate a subgroup of the automorphism group of $\uqlsliii$ isomorphic to the dihedral group $\mathrm D_3$. Using the automorphisms $\sigma$ and $\tau$, we define two families of homomorphisms from $\uqlsliii$ to $\uqgliii$ generalizing Jimbo's homomorphism as
\begin{equation*}
\varphi_i = \varphi \circ \sigma^{- i + 1}, \qquad \ovarphi{}'_i = \varphi \circ \tau \circ \sigma^{- i + 1},
\end{equation*}
and the corresponding universal monodromy operators as
\begin{equation*}
\calM_i(\zeta) = ((\varphi_i)_\zeta \otimes \id)(\calR), \qquad \ocalM{}'_i(\zeta) = ((\ovarphi{}'_i)_\zeta \otimes \id)(\calR).
\end{equation*}
The prime means that the corresponding homomorphisms and the objects related to them will be redefined below so that the functional relations take a simpler form. Below we often define objects with the help of the powers of the automorphism $\sigma$, and different powers are marked by the values of the corresponding index. If the index is omitted, the object is taken for the index value $1$; in particular, $\varphi = \varphi_1$. Since $\sigma^3$ is the identity automorphism of $\uqlsliii$, we have
\begin{equation*}
\calM_{i + 3}(\zeta) = \calM_i(\zeta), \qquad \ocalM{}'_{i + 3}(\zeta) = \ocalM{}'_i(\zeta).
\end{equation*}
Therefore, there are only six different universal monodromy operators of this kind.
It follows from (\ref{cmul}) that
\begin{equation*}
(\sigma \otimes \sigma) \circ \Delta = \Delta \circ \sigma.
\end{equation*}
Similarly, (\ref{doqx}) and (\ref{doefi}) give
\begin{equation*}
(\sigma \otimes \sigma) \circ \Delta^{\mathrm{op}} = \Delta^{\mathrm{op}} \circ \sigma.
\end{equation*}
Using the definition of the universal $R$-matrix (\ref{urm}), we obtain the equation
\begin{equation*}
((\sigma \otimes \sigma)(\calR)) \Delta(\sigma(a)) ((\sigma \otimes \sigma)(\calR))^{-1} = \Delta^{\mathrm{op}}(\sigma(a)).
\end{equation*}
Taking into account the uniqueness theorem for the universal $R$-matrix \cite{KhoTol92}, we conclude that
\begin{equation}
(\sigma \otimes \sigma)(\calR) = \calR. \label{ssr}
\end{equation}
Using this relation, it is not difficult to demonstrate that
\begin{equation}
\calM_{i + 1}(\zeta) = (\id \otimes \sigma)(\calM_i(\zeta))|_{s \to \sigma(s)}, \qquad \ocalM{}'_{i + 1}(\zeta) = (\id \otimes \sigma)(\ocalM{}'_i(\zeta))|_{s \to \sigma(s)}, \label{mtom}
\end{equation}
where $s \to \sigma(s)$ stands for
\begin{equation*}
s_0 \to s_1, \ s_1 \to s_2, \ s_2 \to s_0.
\end{equation*}
One can also show that
\begin{equation}
(\tau \otimes \tau)(\calR) = \calR. \label{ttr}
\end{equation}
This relation, together with the equation
\begin{equation}
\sigma \circ \tau \circ \sigma = \tau, \label{sts}
\end{equation}
gives
\begin{equation}
\ocalM{}'_i(\zeta) = (\id \otimes \tau)(\calM_{- i + 2}(\zeta))|_{s \to \tau(s)}, \label{bmtom}
\end{equation}
where $s \to \tau(s)$ stands for
\begin{equation*}
s_0 \to s_0, \ s_1 \to s_2, \ s_2 \to s_1,
\end{equation*}
and we take into account that $\tau^2 = \id$.
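Relation (\ref{sts}) is easily verified on the generators. For example,
\begin{equation*}
(\sigma \circ \tau \circ \sigma)(e_2) = (\sigma \circ \tau)(e_0) = \sigma(e_0) = e_1 = \tau(e_2),
\end{equation*}
and the action on the remaining generators is checked in the same way.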
Starting with the infinite dimensional representation $\widetilde \pi^{\lambda}$ of the quantum group $\uqgliii$ described in appendix \ref{a:vmr}, we define the infinite dimensional representations
\begin{equation*}
\widetilde \varphi_i^{\lambda} = \widetilde \pi^{\lambda} \circ \varphi_i, \qquad \widetilde{\ovarphi}{}'^\lambda_i = \widetilde \pi^{\lambda} \circ \ovarphi{}'_i
\end{equation*}
of the quantum group $\uqlsliii$. Slightly abusing notation we denote the corresponding $\uqlsliii$-modules by $\widetilde V_i^\lambda$ and $\widetilde{\oV}{}'^\lambda_i$. We define two families of universal monodromy operators:
\begin{equation*}
\widetilde \calM_i^\lambda(\zeta) = ((\widetilde \varphi_i^\lambda)_\zeta \otimes \id)(\calR), \qquad \widetilde \ocalM{}'^\lambda_i(\zeta) = ((\widetilde {\ovarphi}{}'^\lambda_i)_\zeta \otimes \id)(\calR).
\end{equation*}
In the same way one defines two families of universal monodromy operators associated with the finite dimensional representation $\pi^\lambda$ of $\uqgliii$:
\begin{equation*}
\calM_i^\lambda(\zeta) = ((\varphi_i^\lambda)_\zeta \otimes \id)(\calR), \qquad \ocalM{}'^\lambda_i(\zeta) = ((\ovarphi{}'^\lambda_i)_\zeta \otimes \id)(\calR),
\end{equation*}
where
\begin{equation*}
\varphi{}_i^\lambda = \pi^\lambda \circ \varphi_i, \qquad \ovarphi{}'^\lambda_i = \pi^\lambda \circ \ovarphi{}'_i.
\end{equation*}
The newly defined universal monodromy operators satisfy relations similar to (\ref{mtom}) and (\ref{bmtom}).
\subsubsection{Universal transfer operators} \label{sss:uto}
Recall that a universal transfer operator is constructed by taking the trace over the auxiliary space. In the case of an infinite dimensional representation there is a problem of convergence which can be solved with the help of a nontrivial twist element. We use a twist element of the form
\begin{equation}
t = q^{(\phi_0 h_0 + \phi_1 h_1 + \phi_2 h_2) / 3}, \label{t}
\end{equation}
where $\phi_0$, $\phi_1$ and $\phi_2$ are complex numbers. Taking into account (\ref{qh0h1}) and (\ref{c}), we assume that
\begin{equation*}
\phi_0 + \phi_1 + \phi_2 = 0.
\end{equation*}
We define two families of universal transfer operators associated with the infinite dimensional representations $\widetilde \pi^\lambda$ of $\uqgliii$ as
\begin{align*}
& \widetilde \calT_i^\lambda(\zeta) = (\tr \otimes \id)(\widetilde \calM_i^\lambda(\zeta) ((\widetilde \varphi_i^\lambda)_\zeta(t) \otimes 1)) = (\widetilde \tr^\lambda \otimes \id)(\calM_i(\zeta) ((\varphi_i)_\zeta(t) \otimes 1)), \\
& \widetilde \ocalT{}'^\lambda_i(\zeta) = (\tr \otimes \id)(\widetilde \ocalM{}'^\lambda_i(\zeta) ((\widetilde{\ovarphi}{}'^\lambda_i)_\zeta(t) \otimes 1)) = (\widetilde \tr^\lambda \otimes \id)(\ocalM{}'_i(\zeta) ((\ovarphi{}'_i)_\zeta(t) \otimes 1)),
\end{align*}
where
\begin{equation*}
\widetilde \tr{}^\lambda = \tr \circ \widetilde \pi^\lambda,
\end{equation*}
and two families of universal transfer operators associated with the finite dimensional representations $\pi^\lambda$ of $\uqgliii$ as
\begin{align*}
& \calT_i^\lambda(\zeta) = (\tr \otimes \id)(\calM_i^\lambda(\zeta) ((\varphi_i^\lambda)_\zeta(t) \otimes 1)) = (\tr^\lambda \otimes \id)(\calM_i(\zeta) ((\varphi_i)_\zeta(t) \otimes 1)), \\
& \ocalT{}'^\lambda_i(\zeta) = (\tr \otimes \id)(\ocalM{}'^\lambda_i(\zeta) ((\ovarphi{}'^\lambda_i)_\zeta(t) \otimes 1)) = (\tr^\lambda \otimes \id)(\ocalM{}'_i(\zeta) ((\ovarphi{}'_i)_\zeta(t) \otimes 1)),
\end{align*}
where
\begin{equation*}
\tr^\lambda = \tr \circ \pi^\lambda.
\end{equation*}
Note that the mappings $\widetilde \tr{}^\lambda$ and $\tr^\lambda$ are traces on the algebra $\uqgliii$.
Let us discuss the dependence of the universal transfer operators on the spectral parameter $\zeta$. Consider, for example, the universal transfer operator $\widetilde \calT^\lambda(\zeta)$. From the structure of the universal $R$-matrix it follows that the dependence on $\zeta$ is determined by the dependence on $\zeta$ of the elements of the form $\varphi_\zeta(a)$, where $a \in \uqnp$. Any such element is a linear combination of monomials, each of which is a product of $E_1$, $E_2$, $F_3$ and $q^X$ for some $X \in \gothg$. Let $A$ be such a monomial. We have
\begin{equation*}
q^{H_1} A q^{- H_1} = q^{2 n_1 - n_2 - n_3} A, \qquad q^{H_2} A q^{- H_2} = q^{- n_1 +2 n_2 - n_3} A,
\end{equation*}
where $n_1$, $n_2$ and $n_3$ are the numbers of $E_1$, $E_2$ and $F_3$ in $A$. Hence $\widetilde \tr{}^\lambda(A)$ can be non-zero only if
\begin{equation*}
n_1 = n_2 = n_3 = n.
\end{equation*}
Each $E_1$ enters $A$ with the factor $\zeta^{s_1}$, each $E_2$ with the factor $\zeta^{s_2}$, and each $F_3$ with the factor $\zeta^{s_0}$. Thus, for a monomial with non-zero trace we have a dependence on $\zeta$ of the form $\zeta^{n s}$. Therefore, the universal transfer operator $\widetilde \calT^\lambda(\zeta)$ depends on $\zeta$ only via $\zeta^s$, where, as before, $s = s_0 + s_1 + s_2$. The same is evidently true for all other universal transfer operators defined above. Using this fact, we obtain from (\ref{mtom}) and (\ref{bmtom}) the relations
\begin{gather}
\widetilde \calT{}^\lambda_{i + 1}(\zeta) = \sigma(\widetilde \calT^\lambda_i(\zeta))|_{\phi \to \sigma(\phi)}, \qquad
\widetilde{\ocalT}{}'^\lambda_{i + 1}(\zeta) = \sigma(\widetilde{\ocalT}{}'^\lambda_i(\zeta))|_{\phi \to \sigma(\phi)}, \label{tst} \\
\widetilde{\ocalT}{}'^\lambda_i(\zeta) = \tau(\widetilde{\calT}{}^\lambda_{- i + 2}(\zeta))|_{\phi \to \tau(\phi)}, \label{ttt}
\end{gather}
where $\phi \to \sigma(\phi)$ stands for
\begin{equation*}
\phi_0 \to \phi_1, \quad \phi_1 \to \phi_2, \quad \phi_2 \to \phi_0
\end{equation*}
and $\phi \to \tau(\phi)$ for
\begin{equation*}
\phi_0 \to \phi_0, \quad \phi_1 \to \phi_2, \quad \phi_2 \to \phi_1.
\end{equation*}
Similarly, for the universal transfer operators corresponding to the finite dimensional representations $\pi^\lambda$ we have
\begin{gather}
\calT{}^\lambda_{i + 1}(\zeta) = \sigma(\calT^\lambda_i(\zeta))|_{\phi \to \sigma(\phi)}, \qquad \ocalT{}'^\lambda_{i + 1}(\zeta) = \sigma(\ocalT{}'^\lambda_i(\zeta))|_{\phi \to \sigma(\phi)}, \label{tsta} \\
\ocalT{}'^\lambda_i(\zeta) = \tau(\calT{}^\lambda_{- i + 2}(\zeta))|_{\phi \to \tau(\phi)}.
\end{gather}
\subsubsection{Application of BGG resolution}
Recall first some definitions and properties of the necessary objects. We have denoted the standard Cartan subalgebra of the Lie algebra $\gothgl_3$ by $\gothg$. The Weyl group $W$ of the root system of $\gothgl_3$ is generated by the reflections $r_i: \gothg^* \to \gothg^*$, $i = 1, 2$, defined by the equation
\begin{equation*}
r_i(\lambda) = \lambda - \lambda(H_i) \alpha_i.
\end{equation*}
The minimal number of generators $r_i$ needed to represent an element $w \in W$ is called the length of $w$ and is denoted by $\ell(w)$. By convention, the identity element has length $0$.
Let $\{\gamma_i\}$ be the basis of $\gothg^*$ dual to the standard basis $\{G_i\}$ of $\gothg$. Using (\ref{alphah}), it is easy to see that
\begin{equation*}
\alpha_1 = \gamma_1 - \gamma_2, \qquad \alpha_2 = \gamma_2 - \gamma_3.
\end{equation*}
A direct calculation shows that
\begin{align*}
r_1 (\gamma_1) & = \gamma_2, & r_1 (\gamma_2) & = \gamma_1, & r_1(\gamma_3) & = \gamma_3, \\
r_2 (\gamma_1) & = \gamma_1, & r_2 (\gamma_2) & = \gamma_3, & r_2(\gamma_3) & = \gamma_2.
\end{align*}
Identifying an element of $\gothg^*$ with the ordered set of its components with respect to the basis $\{\gamma_i\}$, we see that the reflection $r_1$ transposes the first and second components, and $r_2$ transposes the second and third components. One can verify that the whole Weyl group $W$ can be identified with the symmetric group $\mathrm S_3$. Here $(-1)^{\ell(w)}$ is evidently the sign of the permutation corresponding to the element $w \in W$. The order of $W$ is equal to six. There is one element of length $0$, two elements of length $1$, two elements of length $2$, and one element of length $3$.
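Explicitly, written as reduced words in the generators, the elements of $W$ are
\begin{equation*}
\ell = 0: \ e, \qquad \ell = 1: \ r_1, \ r_2, \qquad \ell = 2: \ r_1 r_2, \ r_2 r_1, \qquad \ell = 3: \ r_1 r_2 r_1 = r_2 r_1 r_2.
\end{equation*}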
Consider now a finite dimensional $\uqgliii$-module $V^\lambda$, see appendix \ref{a:vmr}. Introduce the following direct sums of infinite dimensional $\uqgliii$-modules:
\begin{equation*}
U_k = \bigoplus_{\substack{w \in W \\ \ell(w) = k}} \widetilde V^{w \cdot \lambda},
\end{equation*}
where $w \cdot \lambda$ means the affine action of $w$ defined as
\begin{equation*}
w \cdot \lambda = w(\lambda + \rho) - \rho
\end{equation*}
with $\rho = \alpha_1 + \alpha_2$. The quantum version of the Bernstein--Gelfand--Gelfand (BGG) resolution \cite{Ros91} for the quantum group $\uqgliii$ is the following exact sequence of $\uqgliii$-modules and $\uqgliii$-homomorphisms:
\begin{equation*}
0 \longrightarrow U_3 \overset{\varphi_3}{\longrightarrow} U_2 \overset{\varphi_2}{\longrightarrow} U_1 \overset{\varphi_1}{\longrightarrow} U_0 \overset{\varphi_0}{\longrightarrow} U_{-1} \longrightarrow 0,
\end{equation*}
where $U_{-1} = V^\lambda$.
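For example, writing $\lambda = (\lambda_1, \lambda_2, \lambda_3)$ with respect to the basis $\{\gamma_i\}$ and noting that $\rho = \gamma_1 - \gamma_3$, one finds for the generators of $W$
\begin{equation*}
r_1 \cdot \lambda = (\lambda_2 - 1, \lambda_1 + 1, \lambda_3), \qquad r_2 \cdot \lambda = (\lambda_1, \lambda_3 - 1, \lambda_2 + 1),
\end{equation*}
so that, in particular, $U_1 = \widetilde V^{(\lambda_2 - 1, \lambda_1 + 1, \lambda_3)} \oplus \widetilde V^{(\lambda_1, \lambda_3 - 1, \lambda_2 + 1)}$.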
Let $\rho_k$ be the representation of $\uqgliii$ corresponding to the $\uqgliii$-module $U_k$. Note that $\rho_{-1} = \pi^\lambda$. The subspaces $\ker \varphi_k$ and $\im \varphi_k$ are $\uqgliii$-submodules of $U_k$ and $U_{k - 1}$ respectively. For each $k = 0, 1, 2, 3$ we have
\begin{equation*}
\tr \circ \rho_k = \tr \circ \rho_k|_{\ker \varphi_k} + \tr \circ \rho_{k - 1} |_{\im \varphi_k}
\end{equation*}
and
\begin{equation*}
\im \varphi_k = \ker \varphi_{k - 1}.
\end{equation*}
Hence,
\begin{equation*}
\sum_{k = 0}^3 (-1)^k \tr \circ \rho_k = \tr \circ \rho_{-1}|_{\im \varphi_0} - \tr \circ \rho_3|_{\ker \varphi_3}.
\end{equation*}
Finally, taking into account that
\begin{equation*}
\im \varphi_0 = V^\lambda, \qquad \ker \varphi_3 = 0,
\end{equation*}
we obtain
\begin{equation*}
\sum_{k = 0}^3 (-1)^k \tr \circ \rho_k = \tr \circ \pi^\lambda = \tr^\lambda.
\end{equation*}
From the definition of $U_k$ it follows that
\begin{equation*}
\sum_{k = 0}^3 (-1)^k \tr \circ \rho_k = \sum_{w \in W} (-1)^{\ell(w)} \widetilde \tr{}^{\, w \cdot \lambda}.
\end{equation*}
Thus, we have
\begin{equation*}
\tr^\lambda = \sum_{w \in W} (-1)^{\ell(w)} \widetilde \tr{}^{\, w \cdot \lambda}.
\end{equation*}
This gives
\begin{equation*}
\calT^\lambda_i(\zeta) = \sum_{w \in W} (-1)^{\ell(w)} \widetilde \calT^{\, w \cdot \lambda}_i(\zeta), \qquad \ocalT{}'^\lambda_i(\zeta) = \sum_{w \in W} (-1)^{\ell(w)} \widetilde \ocalT{}'^{\, w \cdot \lambda}_i(\zeta).
\end{equation*}
Using the identification of $W$ with $\mathrm S_3$ described above, we write
\begin{equation}
\calT^\lambda_i(\zeta) = \sum_{p \in \mathrm S_3} \sgn(p) \, \widetilde \calT^{\, p(\lambda + \rho) - \rho}_i(\zeta), \qquad \ocalT{}'^\lambda_i(\zeta) = \sum_{p \in \mathrm S_3} \sgn(p) \, \widetilde \ocalT{}'^{\, p(\lambda + \rho) - \rho}_i(\zeta), \label{twtt}
\end{equation}
where $\sgn(p)$ is the sign of the permutation $p$, and $p$ acts on an element of $\gothg^*$ appropriately permuting its components with respect to the basis $\{\gamma_i\}$.
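For example, writing out the first of relations (\ref{twtt}) explicitly with $\lambda + \rho = (\lambda_1 + 1, \lambda_2, \lambda_3 - 1)$, one obtains
\begin{multline*}
\calT^{(\lambda_1, \lambda_2, \lambda_3)}_i(\zeta) = \widetilde \calT^{\, (\lambda_1, \lambda_2, \lambda_3)}_i(\zeta) - \widetilde \calT^{\, (\lambda_2 - 1, \lambda_1 + 1, \lambda_3)}_i(\zeta) - \widetilde \calT^{\, (\lambda_1, \lambda_3 - 1, \lambda_2 + 1)}_i(\zeta) \\
+ \widetilde \calT^{\, (\lambda_2 - 1, \lambda_3 - 1, \lambda_1 + 2)}_i(\zeta) + \widetilde \calT^{\, (\lambda_3 - 2, \lambda_1 + 1, \lambda_2 + 1)}_i(\zeta) - \widetilde \calT^{\, (\lambda_3 - 2, \lambda_2, \lambda_1 + 2)}_i(\zeta).
\end{multline*}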
\subsection{\texorpdfstring{Universal $L$-operators and universal $Q$-operators}{Universal L-operators and universal Q-operators}}
\subsubsection{General remarks}
It follows from (\ref{rpipi}) and (\ref{kpipi}) that to construct universal monodromy operators and transfer operators it suffices to have a representation of the Borel subalgebra $\uqbp$. This observation is used to construct universal $L$-operators and $Q$-operators. In contrast to the case of universal transfer operators, here we use representations of $\uqbp$ which cannot be obtained by restriction of representations of $\uqlsliii$, or, equivalently, representations of $\uqbp$ which cannot be extended to representations of $\uqlsliii$. It is clear that, to obtain interesting functional relations, the representations used to construct universal $L$-operators and $Q$-operators should be related in some way to the representations used to construct universal monodromy operators and transfer operators.
In general, a universal $L$-operator associated with a representation $\rho$ of $\uqbp$ is defined as
\begin{equation*}
\calL_\rho(\zeta) = (\rho_\zeta \otimes \id)(\calR).
\end{equation*}
As the universal monodromy operators are auxiliary objects needed for the construction of the universal transfer operators, the universal $L$-operators are needed for the construction of the universal $Q$-operators. The universal $Q$-operator $\calQ_\rho(\zeta)$ corresponding to the universal $L$-operator $\calL_\rho(\zeta)$ is defined as
\begin{equation*}
\calQ_\rho(\zeta) = (\tr \otimes \id)(\calL_\rho(\zeta) (\rho_\zeta(t) \otimes 1)) = ((\tr \circ \rho_\zeta) \otimes \id)(\calR (t \otimes 1)),
\end{equation*}
where $t$ is a twist element.
\subsubsection{Basic representation} \label{sss:br}
We begin the construction of universal $L$-operators and $Q$-ope\-ra\-tors by defining the basic representation of $\uqbp$. The starting point is the family of representations $\widetilde \varphi^\lambda$ of $\uqlsliii$ described above.
First define the notion of a shifted representation. Let $\xi$ be an element of $\widetilde \gothh^*$ satisfying the equation
\begin{equation*}
\xi(h_0 + h_1 + h_2) = 0.
\end{equation*}
If $\rho$ is a representation of $\uqbp$, then the representation $\rho[\xi]$ defined by the relations
\begin{equation*}
\rho[\xi](e_i) = \rho(e_i), \qquad \rho[\xi](q^x) = q^{\xi(x)} \rho (q^x)
\end{equation*}
is a representation of $\uqbp$ called a shifted representation. If $W$ is a $\uqbp$-module corresponding to the representation $\rho$, then $W[\xi]$ denotes the $\uqbp$-module corresponding to the representation $\rho[\xi]$.
Consider the restriction of the representation $\widetilde \varphi^\lambda$ to $\uqbp$ and a shifted representation $\widetilde \varphi^\lambda[\xi]$ of $\uqbp$. One can show that for $\xi \ne 0$ this representation cannot be extended to a representation of $\uqlsliii$ and we can use it to construct a universal $Q$-operator. However, it follows from (\ref{rpipi}) and (\ref{kpipi}) that the universal $Q$-operator defined with the help of the representation $\widetilde \varphi^\lambda[\xi]$ is connected with the universal transfer operator defined with the help of the representation $\widetilde \varphi^\lambda$ by the relation
\begin{equation*}
\calQ_{\widetilde \varphi^\lambda[\xi]}(\zeta) = \calT_{\widetilde \varphi^\lambda}(\zeta) \, q^{\sum_{i = 0}^2 \xi(h^{\mathstrut}_i) h'_i / 3},
\end{equation*}
where
\begin{equation*}
h'_i = h_i + \phi_i.
\end{equation*}
Here we assume that the twist element is of the form (\ref{t}). We see that the use of shifted representations gives nothing essentially new.
Consider again the restriction of the representation $(\widetilde \varphi^\lambda)_\zeta$ to $\uqbp$. We have for this representation\footnote{Recall that $\mu_1 = \lambda_1 - \lambda_2$ and $\mu_2 = \lambda_2 - \lambda_3$.}
\begin{align}
& q^{\nu h_0} v_n = q^{\nu(- \mu_1 - \mu_2 + n_1 + 2 n_2 + n_3)} v_n, \label{ah0vn} \\*
& q^{\nu h_1} v_n = q^{\nu(\mu_1 - 2 n_1 - n_2 + n_3)} v_n, \\*
& q^{\nu h_2} v_n = q^{\nu(\mu_2 + n_1 - n_2 - 2 n_3)} v_n, \label{ah2vn} \\
& e_0 v_n = \zeta^{s_0} q^{- \lambda_1 - \lambda_3 - n_3} v_{n + \varepsilon_2}, \label{ae0vn} \\*
& e_1 v_n = \zeta^{s_1} \bigl( [\mu_1 - n_1 - n_2 + n_3 + 1]_q [n_1]_q v_{n - \varepsilon_1} - q^{\mu_1 - n_2 + n_3 + 2} [n_2]_q v_{n - \varepsilon_2 + \varepsilon_3} \bigr), \\*
& e_2 v_n = \zeta^{s_2} \bigl( [\mu_2 - n_3 + 1]_q [n_3]_q v_{n - \varepsilon_3} + q^{- \mu_2 + 2 n_3} [n_2]_q v_{n + \varepsilon_1 - \varepsilon_2} \bigr), \label{ae2vn}
\end{align}
where the basis vectors $v_n$ are defined by (\ref{vn}). Let us try to go to the limit $\mu_1, \mu_2 \to \infty$. Looking at relations (\ref{ah0vn})--(\ref{ae2vn}), we see that we cannot perform this limit directly. Therefore, we consider first a shifted representation $(\widetilde \varphi^\lambda)_\zeta[\xi]$ with $\xi$ defined by the relations
\begin{equation*}
\xi(h_0) = \mu_1 + \mu_2, \qquad \xi(h_1) = - \mu_1, \qquad \xi(h_2) = - \mu_2.
\end{equation*}
Then we introduce a new basis
\begin{equation*}
w_n = c_1^{n_1 + n_2} c_2^{n_2 + n_3} v_n,
\end{equation*}
where
\begin{equation*}
c_1 = q^{- \mu_1 - 1 - 2 (\lambda_3 - 1) s_1 / s}, \qquad c_2 = q^{- \mu_2 - 1 - 2 (\lambda_3 - 1) s_2 / s}.
\end{equation*}
Now relations (\ref{ah0vn})--(\ref{ah2vn}) take the form
\begin{gather}
q^{\nu h_0} w_n = q^{\nu(n_1 + 2 n_2 + n_3)} w_n, \label{bh01wn} \qquad q^{\nu h_1} w_n = q^{\nu(- 2 n_1 - n_2 + n_3)} w_n, \\*
q^{\nu h_2} w_n = q^{\nu(n_1 - n_2 - 2 n_3)} w_n, \label{bh2wn}
\end{gather}
and instead of relations (\ref{ae0vn})--(\ref{ae2vn}) we have
\begin{align}
e_0 w_n &= \widetilde \zeta^{s_0} q^{- n_3} w_{n + \varepsilon_2}, \label{be0wn} \\*
e_1 w_n &= \widetilde \zeta^{s_1} \bigl( \kappa_q^{-1} (q^{- n_1 - n_2 + n_3} - q^{- 2 \mu_1 + n_1 + n_2 - n_3 - 2}) [n_1]_q w_{n - \varepsilon_1} \notag \\*
& \hspace{18em} {} - q^{- n_2 + n_3 + 1}[n_2]_q w_{n - \varepsilon_2 + \varepsilon_3} \bigr), \\
e_2 w_n &= \widetilde \zeta^{s_2} \bigl( \kappa_q^{-1} (q^{- n_3} - q^{- 2 \mu_2 + n_3 - 2}) [n_3]_q w_{n - \varepsilon_3} + q^{- 2 \mu_2 + 2 n_3 - 1} [n_2]_q w_{n + \varepsilon_1 - \varepsilon_2} \bigr), \label{be3wn}
\end{align}
where
\begin{equation*}
\widetilde \zeta = q^{- 2 (\lambda_3 - 1) / s} \zeta.
\end{equation*}
It is now possible to consider the limit $\mu_1, \mu_2 \to \infty$. Relations (\ref{bh01wn}) and (\ref{bh2wn}) retain their form, while (\ref{be0wn})--(\ref{be3wn}) become
\begin{align*}
e_0 w_n &= \widetilde \zeta^{s_0} q^{- n_3} w_{n + \varepsilon_2}, \\*
e_1 w_n &= \widetilde \zeta^{s_1} \bigl( \kappa_q^{-1} q^{- n_1 - n_2 + n_3} [n_1]_q w_{n - \varepsilon_1} - q^{- n_2 + n_3 + 1} [n_2]_q w_{n - \varepsilon_2 + \varepsilon_3} \bigr), \\*
e_2 w_n &= \widetilde \zeta^{s_2} \kappa_q^{-1} q^{- n_3} [n_3]_q w_{n - \varepsilon_3}.
\end{align*}
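Here the limit is understood in the sense that $q^{- 2 \mu_1} \to 0$ and $q^{- 2 \mu_2} \to 0$, so that, for instance, the coefficient of $w_{n - \varepsilon_3}$ in the expression for $e_2 w_n$ behaves as
\begin{equation*}
\kappa_q^{-1} \bigl( q^{- n_3} - q^{- 2 \mu_2 + n_3 - 2} \bigr) [n_3]_q \longrightarrow \kappa_q^{-1} q^{- n_3} [n_3]_q.
\end{equation*}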
Denote by $\rho''$ the representation of $\uqbp$ defined by the relations
\begin{align*}
q^{\nu h_0} v_n &= q^{\nu(n_1 + 2 n_2 + n_3)} v_n, \\
q^{\nu h_1} v_n &= q^{\nu(- 2 n_1 - n_2 + n_3)} v_n, \\
q^{\nu h_2} v_n &= q^{\nu(n_1 - n_2 - 2 n_3)} v_n, \\
e_0 v_n &= q^{- n_3} v_{n + \varepsilon_2}, \\*
e_1 v_n &= \bigl( \kappa_q^{-1} q^{- n_1 - n_2 + n_3} [n_1]_q v_{n - \varepsilon_1} - q^{- n_2 + n_3 + 1} [n_2]_q v_{n - \varepsilon_2 + \varepsilon_3} \bigr), \\*
e_2 v_n &= \kappa_q^{-1} q^{- n_3} [n_3]_q v_{n - \varepsilon_3},
\end{align*}
and by $W''$ the corresponding $\uqbp$-module. It is clear that if we define the universal $Q$-operator $\calQ''(\zeta)$ by the relation
\begin{equation*}
\calQ''(\zeta) = ((\tr \circ (\rho'')_\zeta) \otimes \id)(\calR (t \otimes 1)),
\end{equation*}
then we have
\begin{equation*}
\calQ''(\zeta) = \lim_{\mu_1, \mu_2 \to \infty} \left( \widetilde \calT^{(\mu_1 + \mu_2, \mu_2, 0)}(q^{- 2 / s} \zeta) \, q^{((\mu_1 + \mu_2) h'_0 - \mu_1 h'_1 - \mu_2 h'_2) / 3} \right).
\end{equation*}
We use the double prime because we will redefine the $Q$-operators twice. It follows from the above relation that the universal $Q$-operators $\calQ''(\zeta)$ commute for all values of $\zeta$. In addition, they commute with all universal transfer operators $\calT^\lambda_i(\zeta)$, $\ocalT^\lambda_i(\zeta)$ and with all generators $q^x$, $x \in \widetilde \gothh$, see, for example, our papers \cite{BooGoeKluNirRaz14a, BooGoeKluNirRaz13}.
The representation $\rho''$ is not irreducible, and the same is certainly true for the representation $(\rho'')_\zeta$. Indeed, for any $\zeta \in \bbC^\times$ there is a filtration
\begin{equation*}
\{0\} = ((W'')_\zeta)_{-1} \subset ((W'')_\zeta)_0 \subset ((W'')_\zeta)_1 \subset \cdots
\end{equation*}
formed by the $\uqbp$-submodules
\begin{equation*}
((W'')_\zeta)_k = \bigoplus_{n_1 = 0}^k \bigoplus_{n_2, n_3 = 0}^\infty \bbC \, v_n.
\end{equation*}
Denote by $\rho'$ the representation of $\uqbp$ defined by the relations
\begin{gather}
q^{\nu h_0} v_n = q^{\nu(2 n_1 + n_2)} v_n, \quad
q^{\nu h_1} v_n = q^{\nu(- n_1 + n_2)} v_n, \quad
q^{\nu h_2} v_n = q^{\nu(- n_1 - 2 n_2)} v_n, \label{qhv} \\
e_0 v_n = q^{- n_2} v_{n + \varepsilon_1}, \label{eva} \\*
e_1 v_n = - q^{- n_1 + n_2 + 1} [n_1]_q v_{n - \varepsilon_1 + \varepsilon_2}, \qquad
e_2 v_n = \kappa_q^{-1} q^{- n_2} [n_2]_q v_{n - \varepsilon_2}, \label{evb}
\end{gather}
and by $W'$ the corresponding $\uqbp$-module. Here $n$ stands for a pair $(n_1, n_2) \in \bbZ_+^2$. It is easy to see that
\begin{equation*}
((W'')_\zeta)_k / ((W'')_\zeta)_{k - 1} \cong (W'[\xi_k])_\zeta,
\end{equation*}
where
\begin{equation*}
\xi_k(h_0) = k, \qquad \xi_k(h_1) = - 2 k, \qquad \xi_k(h_2) = k.
\end{equation*}
Hence, the universal $Q$-operator $\calQ'(\zeta)$ defined by the relation
\begin{equation*}
\calQ'(\zeta) = ((\tr \circ (\rho')_\zeta) \otimes \id)(\calR (t \otimes 1)),
\end{equation*}
is related to the universal $Q$-operator $\calQ''(\zeta)$ as
\begin{equation*}
\calQ''(\zeta) = \sum_{k = 0}^\infty \calQ'(\zeta) \, q^{k (h'_0 - 2 h'_1 + h'_2) / 3} = \calQ'(\zeta) \bigl( 1 - q^{(h'_0 - 2 h'_1 + h'_2) / 3} \bigr)^{-1}.
\end{equation*}
Here we use the fact that
\begin{equation}
\calQ_{\rho[\xi]}(\zeta) = \calQ_\rho(\zeta) \, q^{\sum_{i = 0}^2 \xi(h^{\mathstrut}_i) h'_i / 3} \label{qshq}
\end{equation}
for any representation $\rho$ of $\uqbp$ and any $\xi \in \widetilde \gothh^*$.
We use the representation $\rho'$ as the basic representation for the construction of all necessary universal $L$-operators and $Q$-operators. In fact, it is an asymptotic, or prefundamental, representation of $\uqbp$, see the papers \cite{HerJim12, FreHer13}.
\subsubsection{Interpretation in terms of $q$-oscillators}
It is useful to give an interpretation of relations (\ref{qhv})--(\ref{evb}) in terms of $q$-oscillators. Let us recall the necessary definitions, see, for example, the book \cite{KliSch97}.
Let $\hbar$ be a complex number and $q = \exp \hbar$.\footnote{We again assume that $q$ is not a root of unity.} The $q$-oscillator algebra $\Osc_q$ is a unital associative $\bbC$-algebra with generators $b^\dagger$, $b$, $q^{\nu N}$, $\nu \in \bbC$, and relations
\begin{gather*}
q^0 = 1, \qquad q^{\nu_1 N} q^{\nu_2 N} = q^{(\nu_1 + \nu_2)N}, \\
q^{\nu N} b^\dagger q^{-\nu N} = q^\nu b^\dagger, \qquad q^{\nu N} b q^{-\nu N} = q^{-\nu} b, \\
b^\dagger b = [N]_q, \qquad b b^\dagger = [N + 1]_q,
\end{gather*}
where we use notation similar to (\ref{xnq}). It is easy to see that the monomials $(b^\dagger)^{k + 1} q^{\nu N}$, $b^{k + 1} q^{\nu N}$
and $q^{\nu N}$ for $k \in \bbZ_+$ and $\nu \in \bbC$ form a basis of $\Osc_q$.
Two representations of $\Osc_q$ are interesting for us. First, let $W^{\scriptscriptstyle +}$ be a free vector space generated by the set $\{ v_0, v_1, \ldots \}$. One can show that the relations
\begin{gather*}
q^{\nu N} v_n = q^{\nu n} v_n, \\*
b^\dagger v_n = v_{n + 1}, \qquad b \, v_n = [n]_q v_{n - 1},
\end{gather*}
where we assume that $v_{-1} = 0$, endow $W^{\scriptscriptstyle +}$ with the structure of an $\Osc_q$-module. We denote the corresponding representation of the algebra $\Osc_q$ by $\chi^{\scriptscriptstyle +}$. Further, let $W^{\scriptscriptstyle -}$ be a free vector space generated again by the set $\{ v_0, v_1, \ldots \}$. The relations
\begin{gather*}
q^{\nu N} v_n = q^{- \nu (n + 1)} v_n, \\
b \, v_n = v_{n + 1}, \qquad b^\dagger v_n = - [n]_q v_{n - 1},
\end{gather*}
where we again assume that $v_{-1} = 0$, endow the vector space $W^{\scriptscriptstyle -}$ with the structure of an $\Osc_q$-module. We denote the corresponding representation of $\Osc_q$ by $\chi^{\scriptscriptstyle -}$.
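One can check directly that these relations are consistent with the defining relations of $\Osc_q$. For example, in the representation $\chi^{\scriptscriptstyle -}$ we have
\begin{equation*}
b^\dagger b \, v_n = - [n + 1]_q v_n = [N]_q v_n, \qquad b \, b^\dagger v_n = - [n]_q v_n = [N + 1]_q v_n,
\end{equation*}
since $q^{\nu N} v_n = q^{- \nu (n + 1)} v_n$ implies $[N]_q v_n = - [n + 1]_q v_n$ and $[N + 1]_q v_n = - [n]_q v_n$.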
Consider the algebra $\Osc_q \otimes \Osc_q$. As usual, define
\begin{gather*}
b^{}_1 = b \otimes 1, \qquad b^\dagger_1 = b^\dagger \otimes 1, \qquad b^{}_2 = 1 \otimes b, \qquad b^\dagger_2 = 1 \otimes b^\dagger, \\
q^{\nu N_1} = q^{\nu N} \otimes 1, \qquad q^{\nu N_2} = 1 \otimes q^{\nu N},
\end{gather*}
and denote
\begin{equation*}
q^{\nu_1 N_1 + \nu_2 N_2 + \nu} = q^\nu q^{\nu_1 N_1} q^{\nu_2 N_2}.
\end{equation*}
Assume that the generators of $\Osc_q \otimes \Osc_q$ act on the module $W'$ defined by equations (\ref{qhv})--(\ref{evb}) as on the module $W^{\scriptscriptstyle +} \otimes W^{\scriptscriptstyle +}$. This allows us to write (\ref{qhv})--(\ref{evb}) as
\begin{align*}
& q^{\nu h_0} v_n = q^{\nu(2 N_1 + N_2)} v_n, &&
q^{\nu h_1} v_n = q^{\nu(- N_1 + N_2)} v_n, &&
q^{\nu h_2} v_n = q^{\nu(- N_1 - 2 N_2)} v_n, \\*
& e_0 v_n = b^\dagger_1 q^{- N_2} v_n, && e_1 v_n = - b_1^{\mathstrut} b_2^\dagger q^{- N_1 + N_2 + 1} v_n, && e_2 v_n = \kappa_q^{-1} b_2^{\mathstrut} q^{- N_2} v_n.
\end{align*}
These equations suggest defining a homomorphism $\rho: \uqbp \to \Osc_q \otimes \Osc_q$ by
\begin{align}
& \rho(q^{\nu h_0}) = q^{\nu(2 N_1 + N_2)}, && \rho(q^{\nu h_1}) = q^{\nu(- N_1 + N_2)}, &&
\rho(q^{\nu h_2}) = q^{\nu(- N_1 - 2 N_2)}, \label{rhoh} \\*
& \rho(e_0) = b^\dagger_1 q^{- N_2}, && \rho(e_1) = - b_1^{\mathstrut} b_2^\dagger q^{- N_1 + N_2 + 1}, && \rho(e_2) = \kappa_q^{-1} b_2^{\mathstrut} q^{- N_2}. \label{rhoe}
\end{align}
Using this homomorphism, we can write for the representation $\rho'$ the equation
\begin{equation*}
\rho' = (\chi^{\scriptscriptstyle +} \otimes \chi^{\scriptscriptstyle +}) \circ \rho.
\end{equation*}
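As a consistency check, acting on $v_n = v_{n_1} \otimes v_{n_2} \in W^{\scriptscriptstyle +} \otimes W^{\scriptscriptstyle +}$ we have, for example,
\begin{equation*}
(\chi^{\scriptscriptstyle +} \otimes \chi^{\scriptscriptstyle +})(\rho(e_1)) \, v_n = - (\chi^{\scriptscriptstyle +} \otimes \chi^{\scriptscriptstyle +})(b_1^{\mathstrut} b_2^\dagger q^{- N_1 + N_2 + 1}) \, v_n = - q^{- n_1 + n_2 + 1} [n_1]_q \, v_{n - \varepsilon_1 + \varepsilon_2},
\end{equation*}
in agreement with (\ref{evb}).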
Define the universal $L$-operator\footnote{The prime here and below means that the corresponding universal $L$-operators will be used to define primed universal $Q$-operators.}
\begin{equation*}
\calL'_\rho(\zeta) = (\rho_\zeta \otimes \id)(\calR)
\end{equation*}
which is an element of $(\Osc_q \otimes \Osc_q) \otimes \uqlsliii$. Then, if $\chi$ is a representation of $\Osc_q \otimes \Osc_q$, we have the universal $L$-operator
\begin{equation*}
\calL'_{\chi \circ \rho}(\zeta) = (\chi \otimes \id)(\calL'_\rho(\zeta)) = ((\chi \circ \rho_\zeta) \otimes \id)(\calR),
\end{equation*}
and the corresponding universal $Q$-operator is
\begin{equation*}
\calQ'_{\chi \circ \rho}(\zeta) = ((\tr \circ \chi \circ \rho_\zeta) \otimes \id)(\calR (t \otimes 1)).
\end{equation*}
One can write
\begin{equation*}
\calQ'_{\chi \circ \rho}(\zeta) = (\tr \otimes \id)(\calL'_{\chi \circ \rho}(\zeta) ((\chi \circ \rho_\zeta)(t) \otimes 1)) = (\tr_\chi \otimes \id)(\calL'_\rho(\zeta) (\rho_\zeta(t) \otimes 1)),
\end{equation*}
where $\tr_\chi = \tr \circ \chi$. Thus, to obtain the universal $Q$-operators $\calQ'_{\chi \circ \rho}(\zeta)$, one can use different $L$-operators $\calL'_{\chi \circ \rho}(\zeta)$ corresponding to different representations $\chi$, or use one and the same universal $L$-operator $\calL'_\rho(\zeta)$ but different traces $\tr_\chi$ corresponding to different representations $\chi$.
\subsubsection{More universal $L$-operators and $Q$-operators}
Using the homomorphism $\rho$ defined by (\ref{rhoh})--(\ref{rhoe}) and the automorphisms $\sigma$ and $\tau$ defined by (\ref{sigmae})--(\ref{sigmah}) and (\ref{taue})--(\ref{tauh}), we define the homomorphisms
\begin{equation*}
\rho_i = \rho \circ \sigma^{- i}, \qquad \orho_i = \rho \circ \tau \circ \sigma^{- i + 1},
\end{equation*}
where $i = 1, 2, 3$, and the universal $L$-operators
\begin{equation*}
\calL'_i(\zeta) = ((\rho_i)_\zeta \otimes \id)(\calR), \qquad \ocalL{}'_i(\zeta) = ((\orho_i)_\zeta \otimes \id)(\calR).
\end{equation*}
These universal $L$-operators are again elements of $(\Osc_q \otimes \Osc_q) \otimes \uqlsliii$. In the same way as for the universal monodromy operators, using relations (\ref{ssr}), (\ref{ttr}) and (\ref{sts}), we obtain the equations
\begin{gather}
\calL'_{i + 1}(\zeta) = (\id \otimes \sigma)(\calL'_i(\zeta))|_{s \to \sigma(s)}, \qquad
\ocalL{}'_{i + 1}(\zeta) = (\id \otimes \sigma)(\ocalL{}'_i(\zeta))|_{s \to \sigma(s)}, \label{lipo} \\
\ocalL{}'_i(\zeta) = (\id \otimes \tau)(\calL{}'_{- i + 1}(\zeta))|_{s \to \tau(s)}. \label{bli}
\end{gather}
Two standard representations $\chi^{\scriptscriptstyle +}$ and $\chi^{\scriptscriptstyle -}$ of the algebra $\Osc_q$ generate two traces on $\Osc_q$. We denote
\begin{equation*}
\tr^{\scriptscriptstyle +} = \tr \circ \chi^{\scriptscriptstyle +}, \qquad \tr^{\scriptscriptstyle -} = \tr \circ \chi^{\scriptscriptstyle -}.
\end{equation*}
We see that
\begin{equation*}
\tr^{\scriptscriptstyle +} ((b^\dagger)^{k + 1} q^{\nu N}) = 0, \qquad \tr^{\scriptscriptstyle +} (b^{k + 1} q^{\nu N}) = 0,
\end{equation*}
and that
\begin{equation*}
\tr^{\scriptscriptstyle +}(q^{\nu N}) = (1 - q^\nu)^{-1}
\end{equation*}
for $|q| < 1$. For $|q| > 1$ we define the trace $\tr^{\scriptscriptstyle +}$ by analytic continuation. Since the monomials $(b^\dagger)^{k + 1} q^{\nu N}$, $b^{k + 1} q^{\nu N}$ and $q^{\nu N}$ for $k \in \bbZ_+$ and $\nu \in \bbC$ form a basis of $\Osc_q$, the above relations are enough to determine the trace of any element of $\Osc_q$. It appears that
\begin{equation}
\tr^{\scriptscriptstyle -} = - \tr^{\scriptscriptstyle +}. \label{trtr}
\end{equation}
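Indeed, both traces vanish on the basis monomials containing $b^\dagger$ or $b$, while
\begin{equation*}
\tr^{\scriptscriptstyle -}(q^{\nu N}) = \sum_{n = 0}^\infty q^{- \nu (n + 1)} = \frac{q^{- \nu}}{1 - q^{- \nu}} = - (1 - q^\nu)^{-1} = - \tr^{\scriptscriptstyle +}(q^{\nu N}),
\end{equation*}
where the sum converges for $|q^\nu| > 1$; the comparison with $\tr^{\scriptscriptstyle +}$ is then understood via analytic continuation.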
Fixing the representations, $\chi^{\scriptscriptstyle +}$ or $\chi^{\scriptscriptstyle -}$, for the factors of the tensor product $\Osc_q \otimes \Osc_q$, we obtain a trace on the algebra $\Osc_q \otimes \Osc_q$. We use for such traces the convention
\begin{equation*}
\tr^{\epsilon_1 \epsilon_2} = \tr \circ (\chi^{\epsilon_1} \otimes \chi^{\epsilon_2}),
\end{equation*}
where $\epsilon_1, \epsilon_2 = +, -$. It follows from (\ref{trtr}) that different traces $\tr^{\epsilon_1 \epsilon_2}$ differ at most by a sign. Therefore, since $\sigma^3 = \id$, there are only six essentially different universal $Q$-operators which can be obtained from the universal $L$-operators $\calL'_i(\zeta)$ and $\ocalL{}'_i(\zeta)$. We use the following definition
\begin{gather*}
\calQ'_i(\zeta) = \varsigma^{\epsilon_1 \epsilon_2} (\tr^{\epsilon_1 \epsilon_2} \otimes \id) (\calL'_i(\zeta) ((\rho_i)_\zeta(t) \otimes 1)), \\
\ocalQ'_i(\zeta) = \varsigma^{\epsilon_1 \epsilon_2} (\tr^{\epsilon_1 \epsilon_2} \otimes \id) (\ocalL{}'_i(\zeta) ((\orho_i)_\zeta(t) \otimes 1)),
\end{gather*}
where
\begin{equation*}
\varsigma^{\scriptscriptstyle ++} = 1, \qquad \varsigma^{\scriptscriptstyle +-} = -1, \qquad \varsigma^{\scriptscriptstyle -+} = -1, \qquad \varsigma^{\scriptscriptstyle --} = 1.
\end{equation*}
Certainly, we can use the same trace but different representations and define the universal $Q$-operators as
\begin{gather*}
\calQ'_i(\zeta) = \varsigma^{\epsilon_1 \epsilon_2} ((\tr \circ (\rho^{\epsilon_1 \epsilon_2}_i)_\zeta) \otimes \id)(\calR (t \otimes 1)), \\
\ocalQ'_i(\zeta) = \varsigma^{\epsilon_1 \epsilon_2} ((\tr \circ (\orho_i^{\epsilon_1 \epsilon_2})_\zeta) \otimes \id)(\calR (t \otimes 1)),
\end{gather*}
where
\begin{equation*}
\rho_i^{\epsilon_1 \epsilon_2} = (\chi^{\epsilon_1} \otimes \chi^{\epsilon_2}) \circ \rho_i, \qquad \orho_i^{\epsilon_1 \epsilon_2} = (\chi^{\epsilon_1} \otimes \chi^{\epsilon_2}) \circ \orho_i
\end{equation*}
are representations of $\uqbp$. We denote by $W^{\epsilon_1 \epsilon_2}$ the corresponding $\uqbp$-mo\-dules.
As in the case of the universal transfer operators, one can demonstrate that the universal $Q$-operators $\calQ'_i(\zeta)$ and $\ocalQ{}'_i(\zeta)$ depend on $\zeta$ only via $\zeta^s$. Therefore, equations (\ref{lipo}) and (\ref{bli}) lead to the relations
\begin{gather}
\calQ'_{i + 1}(\zeta) = \sigma(\calQ'_i(\zeta))|_{\phi \to \sigma(\phi)}, \qquad \ocalQ{}'_{i + 1}(\zeta) = \sigma(\ocalQ{}'_i(\zeta))|_{\phi \to \sigma(\phi)}, \label{qsq} \\
\ocalQ{}'_i(\zeta) = \tau(\calQ{}'_{- i + 1}(\zeta))|_{\phi \to \tau(\phi)}. \label{bqtq}
\end{gather}
As follows from section \ref{sss:br}, all universal $Q$-operators $\calQ'_i(\zeta)$ and $\ocalQ{}'_i(\zeta)$ can be obtained as limits of the corresponding universal transfer operators. Therefore, they commute for all values of $i$ and $\zeta$. They also commute with all universal transfer operators $\calT^\lambda_i(\zeta)$, $\ocalT^\lambda_i(\zeta)$ and with all generators $q^x$, $x \in \widetilde \gothh$.
\section{Functional relations}
\subsection{\texorpdfstring{Product of two universal $Q$-operators}{Product of two universal Q-operators}}
The functional relations are equations for products of transfer operators and $Q$-operators. Consider the product of two universal $Q$-operators. In fact, one can start with any pair of the universal $Q$-operators; the functional relations for the other pairs are simple consequences of the functional relations for the initial pair. It appears that the pair $\calQ'_3(\zeta_3)$ and $\calQ'_2(\zeta_2)$ is the most convenient choice. To simplify the proof, we use for the definition of $\calQ'_3(\zeta_3)$ the tensor product of two copies of the representation $\chi^{\scriptscriptstyle +}$, and for the definition of $\calQ'_2(\zeta_2)$ the tensor product of the representations $\chi^{\scriptscriptstyle -}$ and $\chi^{\scriptscriptstyle +}$. Specifically, we use the equations
\begin{align}
& \calQ'_3(\zeta_3) = ((\tr \circ (\rho^{\scriptscriptstyle ++}_3)_{\zeta_3}) \otimes \id)(\calR (t \otimes 1)), \label{qp3} \\
& \calQ'_2(\zeta_2) = - ((\tr \circ (\rho^{\scriptscriptstyle -+}_2)_{\zeta_2}) \otimes \id)(\calR (t \otimes 1)). \label{qp2}
\end{align}
One can show that
\begin{equation*}
\calQ'_3(\zeta_3) \calQ'_2(\zeta_2) = - ((\tr \circ ((\rho^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes_\Delta (\rho^{\scriptscriptstyle -+}_2)_{\zeta_2})) \otimes \id) (\calR (t \otimes 1)),
\end{equation*}
where\footnote{We use the notation $\otimes_\Delta$ to distinguish between the tensor product of representations and the usual tensor product of mappings.}
\begin{equation*}
(\rho^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes_\Delta (\rho^{\scriptscriptstyle -+}_2)_{\zeta_2} = ((\rho^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (\rho^{\scriptscriptstyle -+}_2)_{\zeta_2})\circ \Delta,
\end{equation*}
see, for example, our paper~\cite{BooGoeKluNirRaz14a}.
It is demonstrated in appendix \ref{a:tprhorho} that the $\uqbp$-module $(W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2}$ has a basis $\{w^k_n\}$ such that
\begin{align*}
q^{\nu h_0} w^k_n &= q^{\nu (n_1 + 2 n_2 + n_3 + k + 1)} w^k_n, \\*
q^{\nu h_1} w^k_n &= q^{\nu (- 2 n_1 - n_2 + n_3 + k + 1)} w^k_n, \\*
q^{\nu h_2} w^k_n &= q^{\nu (n_1 - n_2 - 2 n_3 - 2 k - 2)} w^k_n, \\
e_0 w_n^k &= q^{- n_1} w_{n + \varepsilon_2}^k, \\
e_1 w^k_n &= - q [n_2]_q w^k_{n - \varepsilon_2 + \varepsilon_3} + \zeta_2^s \kappa_q^{-1} q^{- n_1 + 2 n_3 + 1} [n_1]_q w^k_{n - \varepsilon_1} \\*
& \hspace{14em} {} + \zeta_2^{s_2} \kappa_q \, q^{- n_1 + 2 n_3 + 2} [n_1]_q [k]_q w^{k - 1}_{n -\varepsilon_1 + \varepsilon_3}, \\
e_2 w^k_n &= (q^{- n_3} \zeta_3^s - q^{n_3} \zeta_2^s) \kappa_q^{-1} q^{n_1 - n_2 - 1} [n_3]_q w^k_{n - \varepsilon_3} \\*
& \hspace{8em} {} + q^{n_1 - n_2 + 1} [n_2]_q w^k_{n + \varepsilon_1 - \varepsilon_2} - \zeta_2^{s_2} q^{n_1 - n_2 + 2 n_3}[k]_q w^{k - 1}_n.
\end{align*}
Here $k \in \bbZ_+$ and $n$ stands for a triple $(n_1, n_2, n_3) \in \bbZ_+^3$. We see that there is an increasing filtration of $\uqbp$-submodules
\begin{multline*}
\{0\} = ((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_{-1} \subset ((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_{0} \\ \subset ((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_{1} \subset \cdots,
\end{multline*}
where
\begin{equation*}
((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_k = \bigoplus_{\ell = 0}^ k \bigoplus_{n_1, n_2, n_3 = 0}^\infty \bbC w_n^\ell.
\end{equation*}
It is worth noting here that
\begin{equation*}
\bigcup_{k = - 1}^\infty ((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_k = (W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2}.
\end{equation*}
Define now a $\uqbp$-module $(W_{32})_{\zeta_3, \zeta_2}$ as the free vector space generated by the set $\{v_n\}$, $n = (n_1, n_2, n_3) \in \bbZ_+^3$, with the following action of the generators
\begin{align*}
q^{\nu h_0} v_n &= q^{\nu (n_1 + 2 n_2 + n_3 + 1)} v_n, \\*
q^{\nu h_1} v_n &= q^{\nu (- 2 n_1 - n_2 + n_3 + 1)} v_n, \\*
q^{\nu h_2} v_n &= q^{\nu (n_1 - n_2 - 2 n_3 - 2)} v_n, \\
e_0 v_n &= q^{- n_1} v_{n + \varepsilon_2}, \\*
e_1 v_n &= - q [n_2]_q v_{n - \varepsilon_2 + \varepsilon_3} + \zeta_2^s \kappa_q^{-1} q^{- n_1 + 2 n_3 + 1} [n_1]_q v_{n - \varepsilon_1}, \\*
e_2 v_n &= (q^{- n_3} \zeta_3^s - q^{n_3} \zeta_2^s) \kappa_q^{-1} q^{n_1 - n_2 - 1} [n_3]_q v_{n - \varepsilon_3} + q^{n_1 - n_2 + 1} [n_2]_q v_{n + \varepsilon_1 - \varepsilon_2},
\end{align*}
and denote by $(\rho_{32})_{\zeta_3, \zeta_2}$ the corresponding representation of $\uqbp$. It is clear that
\begin{equation*}
((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_k / ((W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2})_{k - 1} \cong (W_{32})_{\zeta_3, \zeta_2}[\xi_k],
\end{equation*}
where the elements $\xi_k \in \widetilde \gothh^*$ are defined by the relations
\begin{equation*}
\xi_k(h_0) = k, \qquad \xi_k(h_1) = k, \qquad \xi_k(h_2) = - 2 k.
\end{equation*}
Hence, we have
\begin{equation*}
\calQ'_3(\zeta_3) \calQ'_2(\zeta_2) = - \sum_{k = 0}^\infty ((\tr \circ (\rho_{32})_{\zeta_3, \zeta_2}[\xi_k]) \otimes \id)(\calR(t \otimes 1)),
\end{equation*}
and a relation similar to (\ref{qshq}) gives
\begin{multline}
\calQ'_3(\zeta_3) \calQ'_2(\zeta_2) = - ((\tr \circ (\rho_{32})_{\zeta_3, \zeta_2}) \otimes \id)(\calR(t \otimes 1)) (1 - q^{(h'_0 + h'_1 - 2 h'_2)/3})^{-1} \\
= - ((\tr \circ (\rho_{32})_{\zeta_3, \zeta_2}) \otimes \id)(\calR(t \otimes 1)) (1 - q^{- h'_2})^{-1}. \label{q1q2}
\end{multline}
The last equation follows from the relation
\begin{equation*}
h'_0 + h'_1 + h'_2 = 0.
\end{equation*}
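Indeed, this relation implies
\begin{equation*}
h'_0 + h'_1 - 2 h'_2 = (h'_0 + h'_1 + h'_2) - 3 h'_2 = - 3 h'_2,
\end{equation*}
so that $q^{(h'_0 + h'_1 - 2 h'_2)/3} = q^{- h'_2}$.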
When $\zeta_3 = q^{2/s} \zeta_2 = \zeta$ the $\uqbp$-module $(W_{32})_{\zeta_3, \zeta_2} = (W_{32})_{\zeta, q^{-2/s} \zeta}$ has a $\uqbp$-submodule formed by the basis vectors $v_n$ with $n_3 > 0$. This submodule is isomorphic to the shifted $\uqbp$-module $(W_{32})_{q^{-2/s} \zeta, \zeta}[\xi]$, where
\begin{equation}
\xi(h_0) = 1, \qquad \xi(h_1) = 1, \qquad \xi(h_2) = - 2. \label{xi}
\end{equation}
It is evident that the quotient module $(W_{32})_{\zeta, q^{-2/s} \zeta} / (W_{32})_{q^{-2/s} \zeta, \zeta}$ is isomorphic to the $\uqbp$-module defined by the relations
\begin{gather*}
q^{\nu h_0} v_n = q^{\nu (n_1 + 2 n_2 + 1)} v_n, \quad q^{\nu h_1} v_n = q^{\nu (- 2 n_1 - n_2 + 1)} v_n, \quad
q^{\nu h_2} v_n = q^{\nu (n_1 - n_2 - 2)} v_n, \\
e_0 v_n = q^{- n_1} v_{n + \varepsilon_2}, \\
e_1 v_n = \zeta^s \kappa_q^{-1} q^{- n_1 - 1} [n_1]_q v_{n - \varepsilon_1}, \quad
e_2 v_n = q^{n_1 - n_2 + 1} [n_2]_q v_{n + \varepsilon_1 - \varepsilon_2},
\end{gather*}
where $n$ denotes the pair $(n_1, n_2) \in \bbZ_+^2$. Introduce a new basis formed by the vectors
\begin{equation*}
w_{(n_1, n_2)} = c_1^{n_1} c_2^{n_2} v_{(n_2, n_1)},
\end{equation*}
where
\begin{equation*}
c_1 = r_s^{- s_0} q^{s_0/s} \zeta^{-s_0}, \qquad c_2 = r_s^{s_1} q^{1 - s_1/s} \zeta^{-s_0 - s_2}.
\end{equation*}
For this basis we have
\begin{gather*}
q^{\nu h_0} w_n = q^{\nu (2 n_1 + n_2 + 1)} w_n, \quad q^{\nu h_1} w_n = q^{\nu (- n_1 - 2 n_2 + 1)} w_n, \quad
q^{\nu h_2} w_n = q^{\nu (- n_1 + n_2 - 2)} w_n, \\
e_0 w_n = \widetilde \zeta^{s_0} q^{- n_2} w_{n + \varepsilon_1}, \\*
e_1 w_n = \widetilde \zeta^{s_1} \kappa_q^{-1} q^{- n_2} [n_2]_q w_{n - \varepsilon_2}, \quad
e_2 w_n = - \widetilde \zeta^{s_2} q^{- n_1 + n_2 + 1} [n_1]_q w_{n - \varepsilon_1 + \varepsilon_2},
\end{gather*}
where\footnote{Recall that we denote by $r_s$ some fixed $s$th root of $-1$.}
\begin{equation*}
\widetilde \zeta = r_s q^{- 1/s} \zeta.
\end{equation*}
Thus, the quotient module $(W_{32})_{\zeta, q^{-2/s} \zeta} / (W_{32})_{q^{-2/s} \zeta, \zeta}$ is isomorphic to the $\uqbp$-module $(\oW_1)_{\widetilde \zeta}[\xi]$, where $\xi$ is again given by relations (\ref{xi}). We see that
\begin{multline*}
((\tr \circ (\rho_{32})_{\zeta, q^{-2/s} \zeta}) \otimes \id)(\calR(t \otimes 1))
= ((\tr \circ (\rho_{32})_{q^{-2/s} \zeta, \zeta}[\xi]) \otimes \id)(\calR(t \otimes 1)) \\+ ((\tr \circ (\orho_1)_{r_s q^{-1/s} \zeta}[\xi]) \otimes \id)(\calR(t \otimes 1)).
\end{multline*}
Remembering equation (\ref{q1q2}), we obtain\footnote{Recall that our universal $Q$-operators depend on $\zeta$ via $\zeta^s$. Therefore, the expression $\ocalQ{}'_1(r_s \zeta)$ does not depend on the choice of $r_s$. Recall also that ${\orho}_1 = \rho \circ \tau$.}
\begin{equation*}
\calQ'_3(q^{1/s} \zeta) \calQ'_2(q^{-1/s} \zeta) - \calQ'_3(q^{-1/s} \zeta) \calQ'_2(q^{1/s} \zeta) q^{- h'_2} = {} - \ocalQ'_1(r_s \zeta) \, q^{- h'_2} (1 - q^{- h'_2})^{-1}.
\end{equation*}
The relations for other pairs of the universal $Q$-operators $\calQ{}'_i(\zeta)$ can be obtained from this relation with the help of equation (\ref{qsq}). Further, using (\ref{bqtq}), we come to the equation
\begin{equation*}
\ocalQ'_1(q^{1/s} \zeta) \ocalQ'_2(q^{-1/s} \zeta) - \ocalQ'_1(q^{-1/s} \zeta) \ocalQ'_2(q^{1/s} \zeta) q^{- h'_1} = {} - \calQ'_3(r_s \zeta) \, q^{- h'_1} (1 - q^{- h'_1})^{-1}.
\end{equation*}
The relations for other pairs of the universal $Q$-operators $\ocalQ{}'_i(\zeta)$ can be obtained from this relation with the help of equation (\ref{qsq}).
Now we redefine the universal $Q$-operators as
\begin{equation}
\calQ_i(\zeta) = \zeta^{\calD_i} \, \calQ'_i(\zeta), \qquad \ocalQ_i(\zeta) = \zeta^{- \calD_i} \, \ocalQ'_i(r_s \zeta), \label{rdq}
\end{equation}
where
\begin{gather*}
\calD_1 = (h'_0 - h'_1) \, s / 6, \qquad \calD_2 = (h'_1 - h'_2) \, s / 6, \qquad \calD_3 = (h'_2 - h'_0) \, s / 6.
\end{gather*}
It is worth noting here that
\begin{equation*}
\calD_1 + \calD_2 + \calD_3 = 0.
\end{equation*}
The commutativity properties of the universal $Q$-operators $\calQ_i(\zeta)$, $\ocalQ_i(\zeta)$ are the same as of $\calQ'_i(\zeta)$, $\ocalQ'_i(\zeta)$. After the above redefinition, the functional relations for the pairs of the universal $Q$-operators take the universal determinant form
\begin{align*}
\calC_i \ocalQ_i(\zeta) & = \calQ_j(q^{1/s} \zeta) \calQ_k(q^{- 1/s} \zeta) - \calQ_j(q^{-1/s} \zeta) \calQ_k(q^{1/s} \zeta), \\
\calC_i \calQ_i(\zeta) & = \ocalQ_j(q^{- 1/s} \zeta) \ocalQ_k(q^{1/s} \zeta) - \ocalQ_j(q^{1/s} \zeta) \ocalQ_k(q^{- 1/s} \zeta),
\end{align*}
where
\begin{equation*}
\calC_i = q^{- \calD_i / s} (q^{2 \calD_j / s} - q^{2 \calD_k / s})^{-1},
\end{equation*}
and $(i,\, j, \, k)$ is a cyclic permutation of $(1, \, 2, \, 3)$.
It is easy to see that
\begin{equation*}
\calD_{i+1} = \sigma(\calD_i)|_{\phi \to \sigma(\phi)}.
\end{equation*}
Therefore, one has
\begin{equation}
\calC_{i+1} = \sigma(\calC_i)|_{\phi \to \sigma(\phi)}. \label{cipo}
\end{equation}
Here and below we assume that the definitions of $\calC_i$ and $\calD_i$ are extended to arbitrary integer values of the index $i$ periodically with period $3$.
It follows from relation (\ref{cipo}) and equation (\ref{qsq}) that
\begin{equation}
\calQ_{i + 1}(\zeta) = \sigma(\calQ_i(\zeta))|_{\phi \to \sigma(\phi)}, \qquad \ocalQ{}_{i + 1}(\zeta) = \sigma(\ocalQ{}_i(\zeta))|_{\phi \to \sigma(\phi)}. \label{qsqa}
\end{equation}
One can also verify that
\begin{equation*}
\calD_i = - \tau(\calD_{- i + 1})|_{\phi \to \tau(\phi)}
\end{equation*}
and
\begin{equation}
\calC_i = \tau(\calC_{- i + 1})|_{\phi \to \tau(\phi)}. \label{ctc}
\end{equation}
This relation, together with equation (\ref{bqtq}), leads to
\begin{equation}
\ocalQ{}_i(\zeta) = \tau(\calQ_{- i + 1}(r_s \zeta))|_{\phi \to \tau(\phi)}. \label{qtq}
\end{equation}
\subsection{\texorpdfstring{Product of three universal $Q$-operators}{Product of three universal Q-operators}} \label{s:ptqo}
A convenient way to analyse the product of three universal $Q$-operators is to start with the product of $\calQ'_3(\zeta_3)$, $\calQ'_2(\zeta_2)$ and $\calQ'_1(\zeta_1)$. To construct $\calQ'_3(\zeta_3)$ and $\calQ'_2(\zeta_2)$ we use (\ref{qp3}) and (\ref{qp2}), and for $\calQ'_1(\zeta_1)$ the equation
\begin{equation*}
\calQ'_1(\zeta_1) = ((\tr \circ (\rho^{\scriptscriptstyle --}_1)_{\zeta_1}) \otimes \id)(\calR (t \otimes 1)).
\end{equation*}
Similarly to the case of two universal $Q$-operators, one can show~\cite{BooGoeKluNirRaz14a} that
\begin{equation*}
\calQ'_3(\zeta_3) \calQ'_2(\zeta_2) \calQ'_1(\zeta_1) = - ((\tr \circ ((\rho^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes_\Delta (\rho^{\scriptscriptstyle -+}_2)_{\zeta_2} \otimes_\Delta (\rho^{\scriptscriptstyle --}_1)_{\zeta_1})) \otimes \id) (\calR (t \otimes 1)).
\end{equation*}
Hence, we have to analyse the tensor product of the representations $(\rho^{\scriptscriptstyle ++}_3)_{\zeta_3}$, $(\rho^{\scriptscriptstyle -+}_2)_{\zeta_2}$ and $(\rho^{\scriptscriptstyle --}_1)_{\zeta_1}$.
It is demonstrated in appendix \ref{a:tprhorhorho} that the $\uqbp$-module
\begin{equation*}
W_{\zeta_3, \zeta_2, \zeta_1} = (W^{\scriptscriptstyle ++}_3)_{\zeta_3} \otimes (W^{\scriptscriptstyle -+}_2)_{\zeta_2} \otimes (W^{\scriptscriptstyle --}_1)_{\zeta_1}
\end{equation*}
has a basis $\{w^k_n\}$ such that
\begin{align}
q^{\nu h_0} w^k_n & = q^{\nu(n_1 + 2 n_2 + n_3 + k_1 + k_2 + 2 k_3 + 4)} w^k_n, \label{th0v} \\*
q^{\nu h_1} w^k_n & = q^{\nu(-2 n_1 - n_2 + n_3 + k_1 - 2 k_2 - k_3 - 2)} w^k_n, \\*
q^{\nu h_2} w^k_n & = q^{\nu(n_1 - n_2 - 2 n_3 - 2 k_1 + k_2 - k_3 - 2)} w^k_n, \\
e_0 w^k_n & = \zeta^{s_0} q^{- \lambda_1 - \lambda_3 - n_3} w^k_{n + \varepsilon_2}, \label{te0v} \\
e_1 w^k_n & = \zeta^{s_1 - s} \kappa_q^{-1} q^{\lambda_1 + \lambda_2}(q^{- n_1 - n_2 + n_3 + 1} \zeta_2^s - q^{n_1 + n_2 - n_3 + 1} \zeta_1^s) [n_1]_q w^k_{n - \varepsilon_1} \notag \\*
& - \zeta^{s_1} q^{\mu_1 - n_2 + n_3 + 2} [n_2]_q w^k_{n - \varepsilon_2 + \varepsilon_3} - \zeta_1^{s_1} q^{2 n_1 + n_2 - n_3 - k_1 + k_3} [k_2]_q w^{k - \varepsilon_2}_n \notag \\
& - \zeta^{s_1 - s_2} \zeta_1^{s_2} \kappa_q q^{\mu_1 + \mu_2 - 2 n_2 + n_3 + 2 k_1 + k_2 - k_3 + 4} [n_1]_q [k_3]_q w^{k + \varepsilon_2 - \varepsilon_3}_{n - \varepsilon_1 + \varepsilon_3} \notag \\
& + \zeta^{s_1 - s_2} \zeta_2^{s_2} \kappa_q q^{\mu_1 + \mu_2 - 2 n_2 + n_3 + 1} [n_1]_q [k_1]_q w^{k - \varepsilon_1}_{n - \varepsilon_1 + \varepsilon_3} \notag \\
& - \zeta^{-s_2} \zeta_1^{s_1 + s_2} \kappa_q q^{\mu_2 + 2 n_1 + n_2 - n_3 + k_1 + 1} [n_1]_q [k_3]_q w^{k - \varepsilon_3}_{n - \varepsilon_1 + \varepsilon_2}, \\
e_2 w^k_n & = \zeta^{s_2 - s} \kappa_q^{-1} q^{\lambda_2 + \lambda_3} (q^{- n_3 - 1} \zeta_3^s - q^{n_3 - 1} \zeta_2^s) [n_3]_q w^k_{n - \varepsilon_3} \notag \\
& + \zeta^{s_2} q^{- \mu_2 + 2 n_3} [n_2]_q w^k_{n + \varepsilon_1 - \varepsilon_2} - \zeta_2^{s_2} q^{n_1 - n_2 + 2 n_3} [k_1]_q w^{k - \varepsilon_1}_n \notag \\
& + \zeta_1^{s_2} q^{n_1 - n_2 + 2 n_3 + 2 k_1 + k_2 - k_3 + 3} [k_3]_q w^{k + \varepsilon_2 - \varepsilon_3}_n. \label{te2v}
\end{align}
Here $n$ and $k$ denote triples $(n_1, n_2, n_3)$ and $(k_1, k_2, k_3)$ of non-negative integers. Introduce for the triples $k$ the colexicographic total order, assuming that $k' \le k$ if and only if $k'_3 < k_3$, or $k'_3 = k_3$ and $k'_2 < k_2$, or $k'_3 = k_3$, $k'_2 = k_2$ and $k'_1 \le k_1$. We see that there is an increasing filtration on $W_{\zeta_3, \zeta_2, \zeta_1}$ formed by the $\uqbp$-submodules
\begin{equation*}
(W_{\zeta_3, \zeta_2, \zeta_1})_k = \bigoplus_{\ell \le k} \bigoplus_n \bbC w_n^\ell.
\end{equation*}
Putting
\begin{equation*}
\zeta_1 = q^{- 2(\lambda_1 + 1) / s} \zeta, \qquad \zeta_2 = q^{- 2 \lambda_2 / s} \zeta, \qquad \zeta_3 = q^{- 2(\lambda_3 - 1) / s} \zeta,
\end{equation*}
one discovers that in this case
\begin{equation*}
(W_{\zeta_3, \zeta_2, \zeta_1})_k / \bigcup_{\ell < k} (W_{\zeta_3, \zeta_2, \zeta_1})_\ell \cong (\widetilde V^\lambda)_\zeta[\xi_k],
\end{equation*}
where
\begin{gather*}
\xi_k(h_0) = \mu_1 + \mu_2 + k_1 + k_2 + 2 k_3 + 4, \\
\xi_k(h_1) = - \mu_1 + k_1 - 2 k_2 - k_3 - 2, \qquad \xi_k(h_2) = - \mu_2 - 2 k_1 + k_2 - k_3 - 2.
\end{gather*}
Now, simple calculations lead to the following result:
\begin{equation}
\calC \, \widetilde \calT^\lambda(\zeta) = \calQ_1(q^{- 2 (\lambda_1 + 1)/s} \zeta) \calQ_2(q^{- 2 \lambda_2/s} \zeta) \calQ_3(q^{- 2 (\lambda_3 - 1)/s} \zeta), \label{ctt}
\end{equation}
where
\begin{equation*}
\calC = \calC_1 \calC_2 \calC_3 = (q^{2\calD_1/s} - q^{2\calD_2/s})^{-1} (q^{2\calD_2/s} - q^{2\calD_3/s})^{-1} (q^{2\calD_3/s} - q^{2\calD_1/s})^{-1}.
\end{equation*}
It is instructive to rewrite (\ref{ctt}) as
\begin{equation}
\calC \, \widetilde \calT^\lambda(\zeta) = \calQ_1(q^{- 2 (\lambda + \rho)_1/s} \zeta) \calQ_2(q^{- 2 (\lambda + \rho)_2/s} \zeta) \calQ_3(q^{- 2 (\lambda + \rho)_3/s} \zeta), \label{tqqq}
\end{equation}
where $\rho = \alpha_1 + \alpha_2$.
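Comparing (\ref{ctt}) with (\ref{tqqq}) term by term, one reads off the components of $\rho$:
\begin{equation*}
\rho_1 = 1, \qquad \rho_2 = 0, \qquad \rho_3 = - 1,
\end{equation*}
so that $(\lambda + \rho)_1 = \lambda_1 + 1$, $(\lambda + \rho)_2 = \lambda_2$ and $(\lambda + \rho)_3 = \lambda_3 - 1$.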
It follows from equation (\ref{ctc}) that
\begin{equation*}
\calC = \tau(\calC)|_{\phi \to \tau(\phi)}.
\end{equation*}
Using this relation, equation (\ref{ttt}) and equation (\ref{qtq}), we find
\begin{equation*}
\calC \, \widetilde{\ocalT}{}'^{(- \lambda_3, \, - \lambda_2, \, - \lambda_1)}(r_s \zeta) = \ocalQ_1(q^{2 (\lambda + \rho)_1/s} \zeta) \ocalQ_2(q^{2 (\lambda + \rho)_2/s} \zeta) \ocalQ_3(q^{2 (\lambda + \rho)_3/s} \zeta).
\end{equation*}
Hence, if we define
\begin{equation*}
\widetilde{\ocalT}{}^{(\lambda_1, \, \lambda_2, \, \lambda_3)}_i(\zeta) = \widetilde{\ocalT}{}'^{(- \lambda_3, \, - \lambda_2, \, - \lambda_1)}_i(r_s \zeta),
\end{equation*}
we obtain
\begin{equation}
\calC \, \widetilde{\ocalT}{}^\lambda(\zeta) = \ocalQ_1(q^{2 (\lambda + \rho)_1/s} \zeta) \ocalQ_2(q^{2 (\lambda + \rho)_2/s} \zeta) \ocalQ_3(q^{2 (\lambda + \rho)_3/s} \zeta). \label{btbqbqbq}
\end{equation}
Consider an automorphism of $\uqgliii$, similar to the automorphism $\tau$ of $\uqlsliii$, which we also denote by $\tau$. This automorphism is given by the equations
\begin{align}
& \tau(q^{\nu G_1}) = q^{- \nu G_3}, && \tau(q^{\nu G_2}) = q^{- \nu G_2}, && \tau(q^{\nu G_3}) = q^{- \nu G_1}, \label{tga} \\
& \tau(E_1) = E_2, && \tau(E_2) = E_1, &
& \tau(F_1) = F_2, && \tau(F_2) = F_1. \label{tgb}
\end{align}
It is easy to understand that there are isomorphisms of representations
\begin{equation*}
\widetilde \pi^{(\lambda_1, \, \lambda_2, \, \lambda_3)} \circ \tau \cong \widetilde \pi^{(- \lambda_3, \, - \lambda_2, \, - \lambda_1)}, \qquad \pi^{(\lambda_1, \, \lambda_2, \, \lambda_3)} \circ \tau \cong \pi^{(- \lambda_3, \, - \lambda_2, \, - \lambda_1)}.
\end{equation*}
Now, defining the universal monodromy operators $\ocalM_i(\zeta)$ and $\widetilde{\ocalM}{}_i^\lambda(\zeta)$ as
\begin{equation}
\ocalM_i(\zeta) = ((\ovarphi_i)_{r_s \zeta} \otimes \id)(\calR), \qquad \widetilde{\ocalM}{}_i^\lambda(\zeta) = ((\widetilde{\ovarphi}{}_i^\lambda)_{r_s \zeta} \otimes \id)(\calR), \label{bmiz}
\end{equation}
where\footnote{Here the first appearance of $\tau$ means the automorphism of $\uqgliii$ defined by equations (\ref{tga}), (\ref{tgb}), and the second one the automorphism of $\uqlsliii$ defined by (\ref{taue})--(\ref{tauh}).}
\begin{equation*}
\ovarphi_i = \tau \circ \varphi \circ \tau \circ \sigma^{- i + 1}
\end{equation*}
and $\widetilde{\ovarphi}{}_i^\lambda = \widetilde \pi^\lambda \circ \ovarphi_i$, we see that
\begin{equation*}
\widetilde \ocalT{}^\lambda_i(\zeta) = (\tr^\lambda \otimes \id)(\ocalM{}_i(\zeta) (({\ovarphi}_i)_{r_s \zeta}(t) \otimes 1)) = (\tr \otimes \id)(\widetilde \ocalM{}^\lambda_i(\zeta) ((\widetilde {\ovarphi}_i^\lambda)_{r_s \zeta}(t) \otimes 1)).
\end{equation*}
For the finite dimensional representations $\pi^\lambda$ we define
\begin{equation*}
\ocalT{}^{(\lambda_1, \, \lambda_2, \, \lambda_3)}_i(\zeta) = \ocalT{}'^{(- \lambda_3, \, - \lambda_2, \, - \lambda_1)}_i(r_s \zeta).
\end{equation*}
These universal transfer operators are expressed via the universal monodromy operators $\ocalM_i(\zeta)$ or
\begin{equation*}
\ocalM{}_i^\lambda(\zeta) = ((\ovarphi{}_i^\lambda)_{r_s \zeta} \otimes \id)(\calR),
\end{equation*}
where $\ovarphi{}_i^\lambda = \pi^\lambda \circ \ovarphi_i$, as
\begin{equation*}
\ocalT{}^\lambda_i(\zeta) = (\tr^\lambda \otimes \id)(\ocalM{}_i(\zeta) ((\ovarphi_i)_{r_s \zeta}(t) \otimes 1)) = (\tr \otimes \id)(\ocalM{}^\lambda_i(\zeta) ((\ovarphi_i^\lambda)_{r_s \zeta}(t) \otimes 1)).
\end{equation*}
More explicitly, one can write
\begin{align*}
& \widetilde{\ocalT}{}_i^\lambda(\zeta) = ((\widetilde \tr{}^\lambda \circ (\ovarphi_i)_{r_s \zeta}) \otimes \id)(\calR (t \otimes 1)) = ((\tr \circ (\widetilde {\ovarphi}{}_i^\lambda)_{r_s \zeta}) \otimes \id)(\calR (t \otimes 1)), \\
& \ocalT{}_i^\lambda(\zeta) = ((\tr{}^\lambda \circ (\ovarphi_i)_{r_s \zeta}) \otimes \id)(\calR (t \otimes 1)) = ((\tr \circ (\ovarphi{}_i^\lambda)_{r_s \zeta}) \otimes \id)(\calR (t \otimes 1)).
\end{align*}
Using relation (\ref{twtt}), which follows from the Bernstein--Gelfand--Gelfand resolution, we come to the determinant representations
\begin{align}
& \calC \, \calT^\lambda(\zeta) = \det \left( \calQ_i(q^{- 2 (\lambda + \rho)_j / s} \zeta) \right)_{i,\, j = 1,2,3}, \label{tdq1} \\
& \calC \, \ocalT^\lambda(\zeta) = \det \left( \ocalQ_i(q^{2 (\lambda + \rho)_j /s } \zeta) \right)_{i, \, j = 1,2,3}. \label{tdq2}
\end{align}
Note that if any two of the components of $\lambda + \rho$ coincide, then $\calT^\lambda(\zeta) = 0$ and $\ocalT^\lambda(\zeta) = 0$.
We use relations (\ref{tdq1}) and (\ref{tdq2}) to define $\calT^\lambda(\zeta)$ and $\ocalT^\lambda(\zeta)$ for arbitrary $\lambda \in \gothg^*$. It is useful to keep in mind that with such a definition
\begin{equation}
\calT^{p(\lambda + \rho) - \rho}(\zeta) = \sgn(p) \calT^\lambda(\zeta), \qquad \ocalT^{p(\lambda + \rho) - \rho}(\zeta) = \sgn(p) \ocalT^\lambda(\zeta) \label{**}
\end{equation}
for any permutation $p \in \mathrm S_3$.
In fact, the universal transfer operators $\widetilde \calT_i^\lambda(\zeta)$ and $\widetilde \ocalT{}_i^\lambda(\zeta)$ are not independent. In particular, using (\ref{tst}), one obtains from (\ref{tqqq}) and (\ref{btbqbqbq}) the equations
\begin{align*}
& \widetilde \calT_2^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta) = \widetilde \calT_1^{(\lambda_3 - 2, \, \lambda_1 + 1, \, \lambda_2 + 1)}(\zeta), && \widetilde \calT_3^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta) = \widetilde \calT_1^{(\lambda_2 - 1, \, \lambda_3 - 1, \, \lambda_1 + 2)}(\zeta), \\
& \widetilde{\ocalT}{}_2^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta) = \widetilde{\ocalT}{}_1^{(\lambda_3 - 2, \, \lambda_1 + 1, \, \lambda_2 + 1)}(\zeta), && \widetilde{ \ocalT}{}_3^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta) = \widetilde{\ocalT}{}_1^{(\lambda_2 - 1, \, \lambda_3 - 1, \, \lambda_1 + 2)}(\zeta).
\end{align*}
The same is true for the universal transfer operators $\calT^\lambda_i(\zeta)$ and $\ocalT{}^\lambda_i(\zeta)$. Therefore, we use below only the universal transfer operators $\widetilde \calT^\lambda(\zeta)$, $\widetilde{\ocalT}{}^\lambda(\zeta)$ and $\calT^\lambda(\zeta)$, $\ocalT{}^\lambda(\zeta)$. For the operators $\widetilde \calT^\lambda(\zeta)$ and $\widetilde{\ocalT}{}^\lambda(\zeta)$ we also have
\begin{align*}
& \widetilde \calT^{(\lambda_1 + \nu, \, \lambda_2 + \nu, \, \lambda_3 + \nu)}(q^{2 \nu / s} \zeta) = \widetilde \calT^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta), \\
& \widetilde \ocalT{}^{(\lambda_1 + \nu, \, \lambda_2 + \nu, \, \lambda_3 + \nu)}(q^{- 2 \nu / s} \zeta) = \widetilde \ocalT{}^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta),
\end{align*}
where $\nu$ is an arbitrary complex number. The same relations are valid for the operators $\calT^\lambda(\zeta)$ and $\ocalT{}^\lambda(\zeta)$:
\begin{align}
& \calT^{(\lambda_1 + \nu, \, \lambda_2 + \nu, \, \lambda_3 + \nu)}(q^{2 \nu / s} \zeta) = \calT^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta), \label{tls} \\
& \ocalT{}^{(\lambda_1 + \nu, \, \lambda_2 + \nu, \, \lambda_3 + \nu)}(q^{- 2 \nu / s} \zeta) = \ocalT{}^{(\lambda_1, \, \lambda_2, \, \lambda_3)}(\zeta). \label{btls}
\end{align}
\subsection{\texorpdfstring{$TQ$- and $TT$-relations}{TQ- and TT-relations}}
Let $\lambda_j$, $j = 1, \ldots, 4$, be arbitrary complex numbers. Define three four-dimensional row vectors $\calP_i(\zeta)$, $i = 1, 2, 3$, by the equation
\begin{equation*}
(\calP_i(\zeta))_j = \calQ_i(q^{-2 \lambda_j / s} \zeta), \qquad j = 1, \ldots, 4,
\end{equation*}
and construct three four-by-four matrices whose first three rows are the vectors $\calP_i(\zeta)$ and whose last row is the vector $\calP_k(\zeta)$, where $k$ is $1$, $2$ or $3$. The determinant of any of these matrices certainly vanishes. Expanding it over the last row and taking into account (\ref{tdq1}), written as
\begin{equation}
\calC \, \calT^{\lambda - \rho}(\zeta) = \det \left( \calQ_i(q^{- 2 \lambda_j / s} \zeta) \right)_{i,\, j = 1,2,3}, \label{ctlmr}
\end{equation}
we obtain the equation
\begin{multline*}
\calT^{(\lambda_1 - 1, \, \lambda_2, \, \lambda_3 + 1)}(\zeta) \calQ_k(q^{- 2 \lambda_4 / s} \zeta) - \calT^{(\lambda_1 - 1, \, \lambda_2, \, \lambda_4 + 1)}(\zeta) \calQ_k(q^{- 2 \lambda_3 / s} \zeta) \\
+ \calT^{(\lambda_1 - 1, \, \lambda_3, \, \lambda_4 + 1)}(\zeta) \calQ_k(q^{- 2 \lambda_2 / s} \zeta) - \calT^{(\lambda_2 - 1, \, \lambda_3, \, \lambda_4 + 1)}(\zeta) \calQ_k(q^{- 2 \lambda_1 / s} \zeta) = 0.
\end{multline*}
We call this equation the universal $TQ$-relation. Assuming that
\begin{equation*}
\lambda_1 = 2, \qquad \lambda_2 = 1, \qquad \lambda_3 = 0, \qquad \lambda_4 = -1,
\end{equation*}
we obtain the relation
\begin{multline}
\calT^{(1, \, 1, \, 1)}(\zeta) \calQ_k(q^{2 / s} \zeta) - \calT^{(1, \, 1, \, 0)}(\zeta) \calQ_k(\zeta) \\
+ \calT^{(1, \, 0, \, 0)}(\zeta) \calQ_k(q^{- 2 / s} \zeta) - \calT^{(0, \, 0, \, 0)}(\zeta) \calQ_k(q^{- 4 / s} \zeta) = 0. \label{tqi}
\end{multline}
It follows from the structure of the universal $R$-matrix, see, for example, the paper \cite{TolKho92}, that $\calT^{(0, \, 0, \, 0)}(\zeta) = 1$. Therefore, as follows from (\ref{tls}), we have
\begin{equation*}
\calT^{(\nu, \, \nu, \, \nu)}(\zeta) = 1
\end{equation*}
for any $\nu \in \bbC$. This property leads to a simpler form of (\ref{tqi}):
\begin{equation}
\calQ_k(q^{2 / s} \zeta) - \calT^{(1, \, 1, \, 0)}(\zeta) \calQ_k(\zeta)
+ \calT^{(1, \, 0, \, 0)}(\zeta) \calQ_k(q^{- 2 / s} \zeta) - \calQ_k(q^{- 4 / s} \zeta) = 0. \label{tq}
\end{equation}
In a similar way we obtain
\begin{equation}
\ocalQ_k(q^{- 2 / s} \zeta) - \ocalT^{(1, \, 1, \, 0)}(\zeta) \ocalQ_k(\zeta) + \ocalT^{(1, \, 0, \, 0)}(\zeta) \ocalQ_k(q^{2 / s} \zeta) - \ocalQ_k(q^{4 / s} \zeta) = 0. \label{btq}
\end{equation}
Equations (\ref{tq}) and (\ref{btq}) are analogues of Baxter's famous $TQ$-relations for the case under consideration. They were first obtained by Pronko and Stroganov from the nested Bethe ansatz equations \cite{ProStr00}.
It is possible to obtain relations containing only $\calT^{(1, \, 0, \, 0)}(\zeta)$ or $\calT^{(1, \, 1, \, 0)}(\zeta)$, or their barred analogues. Here we follow the paper \cite{BazFraLukMenSta11}. First, recall the Jacobi identity for determinants, see, for example, the book \cite{Hir04}. Let $D$ be the determinant of some square matrix. Denote by $D \left[ \begin{array}{c} i \\ j \end{array} \right]$ the determinant of the same matrix with the $i$th row and the $j$th column removed, and by $D \left[ \begin{array}{cc} i & k \\ j & \ell \end{array} \right]$ the determinant of that matrix with the $i$th and $k$th rows and the $j$th and $\ell$th columns removed. The Jacobi identity reads
\begin{equation}
D \left[ \begin{array}{cc} i & j \\ i & j \end{array} \right] D = D \left[ \begin{array}{c} i \\ i \end{array} \right] D \left[ \begin{array}{c} j \\ j \end{array} \right] - D \left[ \begin{array}{c} i \\ j \end{array} \right] D \left[ \begin{array}{c} j \\ i \end{array} \right]. \label{ji}
\end{equation}
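The Jacobi identity (\ref{ji}) is easy to verify numerically. The following short script, given only as an illustration (the helper function \texttt{minor} and the use of NumPy are ours, not part of the derivation), checks it for a random $4 \times 4$ matrix with $i = 1$ and $j = 4$, the case used below:

```python
import numpy as np

def minor(m, rows, cols):
    # Determinant of m with the listed rows and columns removed.
    keep_r = [r for r in range(m.shape[0]) if r not in rows]
    keep_c = [c for c in range(m.shape[1]) if c not in cols]
    return np.linalg.det(m[np.ix_(keep_r, keep_c)])

rng = np.random.default_rng(seed=1)
M = rng.standard_normal((4, 4))

# Jacobi identity with i = 1, j = 4 (0-based indices 0 and 3):
# D[i j; i j] * D = D[i; i] * D[j; j] - D[i; j] * D[j; i]
i, j = 0, 3
lhs = minor(M, [i, j], [i, j]) * np.linalg.det(M)
rhs = minor(M, [i], [i]) * minor(M, [j], [j]) - minor(M, [i], [j]) * minor(M, [j], [i])
print(abs(lhs - rhs) < 1e-10)  # True
```

The same check passes for any other choice of the pair $i \ne j$.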
For $k = 1, 2, 3$ define
\begin{equation*}
\calH^k(\zeta_1, \ldots, \zeta_k) = \det (\calQ_i(\zeta_j))_{i, j = 1, \ldots, k} \, ,
\end{equation*}
and assume that $\calH^0 = 1$. Introduce the matrix
\begin{equation}
\left( \begin{array}{cccc}
0 & 1 & 0 & 0 \\
\calQ_1(\zeta_1) & \calQ_1(\zeta_2) & \calQ_1(\zeta_3) & \calQ_1(\zeta_4) \\
\calQ_2(\zeta_1) & \calQ_2(\zeta_2) & \calQ_2(\zeta_3) & \calQ_2(\zeta_4) \\
\calQ_3(\zeta_1) & \calQ_3(\zeta_2) & \calQ_3(\zeta_3) & \calQ_3(\zeta_4)
\end{array}
\right), \label{mq}
\end{equation}
and apply to it the Jacobi identity (\ref{ji}) with $i = 1$ and $j = 4$. This gives the relation
\begin{equation}
\calH^3(\zeta_1, \zeta_3, \zeta_4) \calH^2(\zeta_2, \zeta_3) = \calH^3(\zeta_2, \zeta_3, \zeta_4) \calH^2(\zeta_1, \zeta_3) + \calH^3(\zeta_1, \zeta_2, \zeta_3) \calH^2(\zeta_3, \zeta_4). \label{jia}
\end{equation}
Now apply the Jacobi identity (\ref{ji}) with $i = 1$ and $j = 3$ to the matrix obtained from the matrix (\ref{mq}) by removing the last row and the last column. We come to the relation
\begin{equation*}
\calH^2(\zeta_1, \zeta_3) \calH^1(\zeta_2) = \calH^2(\zeta_2, \zeta_3) \calH^1(\zeta_1) + \calH^2(\zeta_1, \zeta_2) \calH^1(\zeta_3).
\end{equation*}
Using this equation to eliminate $\calH^2(\zeta_1, \zeta_3)$ from (\ref{jia}), we obtain
\begin{multline*}
\calH^3(\zeta_1, \zeta_3, \zeta_4) \calH^2(\zeta_2, \zeta_3) \calH^1(\zeta_2) = \calH^3(\zeta_2, \zeta_3, \zeta_4) \calH^2(\zeta_2, \zeta_3) \calH^1(\zeta_1) \\
+ \calH^3(\zeta_2, \zeta_3, \zeta_4) \calH^2(\zeta_1, \zeta_2) \calH^1(\zeta_3) + \calH^3(\zeta_1, \zeta_2, \zeta_3) \calH^2(\zeta_3, \zeta_4) \calH^1(\zeta_2).
\end{multline*}
Put in the above equation
\begin{equation*}
\zeta_1 = q^{- 4/s} \zeta, \qquad \zeta_2 = q^{- 2/s} \zeta, \qquad \zeta_3 = \zeta, \qquad \zeta_4 = q^{2/s} \zeta,
\end{equation*}
and take into account that
\begin{gather*}
\calH^3(q^{- 2(\lambda_1 + 1)/s} \zeta, \, q^{- 2\lambda_2/s} \zeta, \, q^{- 2(\lambda_3 - 1)/s} \zeta) = \calC \calT^\lambda(\zeta), \\
\calH^2(q^{-1/s} \zeta, \, q^{1/s} \zeta) = - \calC_3 \ocalQ_3(\zeta), \qquad \calH^1(\zeta) = \calQ_1(\zeta).
\end{gather*}
The resulting equation is
\begin{multline*}
\calT^{(1,0,0)}(\zeta) \calQ_1(q^{- 2/s}\zeta) {\ocalQ}_3(q^{- 1 / s} \zeta) = \calQ_1(q^{-4/s}\zeta) {\ocalQ}_3(q^{- 1 /s} \zeta) \\[.3em]
+ \calQ_1(\zeta) {\ocalQ}_3(q^{- 3/s}\zeta) + \calQ_1(q^{- 2/s}\zeta) {\ocalQ}_3(q^{1/s}\zeta).
\end{multline*}
In fact, one can demonstrate that
\begin{multline}
\calT^{(1,0,0)}(\zeta) \calQ_i(q^{- 2/s}\zeta) {\ocalQ}_j(q^{- 1 / s} \zeta) = \calQ_i(q^{-4/s}\zeta) {\ocalQ}_j(q^{- 1 /s} \zeta) \\[.3em]
+ \calQ_i(\zeta) {\ocalQ}_j(q^{- 3/s}\zeta) + \calQ_i(q^{- 2/s}\zeta) {\ocalQ}_j(q^{1/s}\zeta) \label{t100qq}
\end{multline}
for all $i \ne j$. Here one also uses (\ref{tsta}) and (\ref{qsqa}). Similarly, one obtains
\begin{multline}
\calT^{(1,1,0)}(\zeta) \calQ_i(\zeta) {\ocalQ}_j(q^{-1/s}\zeta)
= \calQ_i(q^{2/s}\zeta) {\ocalQ}_j(q^{-1/s}\zeta) \\[.3em] + \calQ_i(q^{-2/s}\zeta) {\ocalQ}_j(q^{1/s}\zeta) + \calQ_i(\zeta)
{\ocalQ}_j(q^{-3/s}\zeta) \label{t110qq}
\end{multline}
again for all $i \ne j$.
Proceed now to the $TT$-relations. Define the row vector $\calP_4(\zeta)$ by the equation
\begin{equation*}
(\calP_4(\zeta))_j = \det \left( \begin{array}{ccc}
\calQ_1 (q^{-2 \lambda_j / s} \zeta) & \calQ_1 (q^{-2 \lambda_5 / s} \zeta) & \calQ_1 (q^{-2 \lambda_6 / s} \zeta) \\
\calQ_2 (q^{-2 \lambda_j / s} \zeta) & \calQ_2 (q^{-2 \lambda_5 / s} \zeta) & \calQ_2 (q^{-2 \lambda_6 / s} \zeta) \\
\calQ_3 (q^{-2 \lambda_j / s} \zeta) & \calQ_3 (q^{-2 \lambda_5 / s} \zeta) & \calQ_3 (q^{-2 \lambda_6 / s} \zeta) \\
\end{array} \right), \qquad j = 1, \ldots, 4,
\end{equation*}
where $\lambda_5$ and $\lambda_6$ are two additional complex numbers. It is easy to see that this vector is a linear combination of the vectors $\calP_1(\zeta)$, $\calP_2(\zeta)$ and $\calP_3(\zeta)$. Therefore, the determinant of the four-by-four matrix constructed from the vectors $\calP_i(\zeta)$, $i = 1, \ldots, 4$, vanishes. Expanding it over the last row and again taking into account (\ref{ctlmr}), we come to the equation
\begin{multline}
\calT^{(\lambda_1 - 1, \, \lambda_2, \, \lambda_3 + 1)}(\zeta) \calT^{(\lambda_4 - 1, \, \lambda_5, \, \lambda_6 + 1)}(\zeta) \\- \calT^{(\lambda_1 - 1, \, \lambda_2, \, \lambda_4 + 1)}(\zeta) \calT^{(\lambda_3 - 1, \, \lambda_5, \, \lambda_6 + 1)}(\zeta)
+ \calT^{(\lambda_1 - 1, \, \lambda_3, \, \lambda_4 + 1)}(\zeta) \calT^{(\lambda_2 - 1, \, \lambda_5, \, \lambda_6 + 1)}(\zeta) \\- \calT^{(\lambda_2 - 1, \, \lambda_3, \, \lambda_4 + 1)}(\zeta) \calT^{(\lambda_1 - 1, \, \lambda_5, \, \lambda_6 + 1)}(\zeta) = 0, \label{utt}
\end{multline}
which we call the universal $TT$-relation. Putting in (\ref{utt})
\begin{equation*}
\lambda_1 = \ell + 2, \qquad \lambda_2 = \ell + 1, \qquad \lambda_3 = 1, \qquad \lambda_4 = 0, \qquad \lambda_5 = 0, \qquad \lambda_6 = -1,
\end{equation*}
where $\ell$ is a positive integer, we obtain
\begin{equation}
\calT^{(\ell - 1, \, 0, \, 0)}(q^{-2/s} \zeta) \calT^{(\ell + 1, \, 0, \, 0)}(\zeta) = \calT^{(\ell, \, 0, \, 0)}(q^{-2/s} \zeta) \calT^{(\ell, \, 0, \, 0)}(\zeta) - \calT^{(\ell, \, \ell, \, 0)}(q^{-2/s} \zeta). \label{fr1}
\end{equation}
On the other hand, the substitution
\begin{equation*}
\lambda_1 = \ell + 2, \qquad \lambda_2 = \ell, \qquad \lambda_3 = 0, \qquad \lambda_4 = \ell + 1, \qquad \lambda_5 = \ell + 1, \qquad \lambda_6 = -1,
\end{equation*}
where again $\ell$ is a positive integer, gives
\begin{equation}
\calT^{(\ell - 1, \, \ell - 1, \, 0)}(q^{-2/s} \zeta) \calT^{(\ell + 1, \, \ell + 1, \, 0)}(\zeta) = \calT^{(\ell, \, \ell, \, 0)}(q^{-2/s} \zeta) \calT^{(\ell, \, \ell, \, 0)}(\zeta) - \calT^{(\ell, \, 0, \, 0)}(\zeta). \label{fr2}
\end{equation}
Here the first relation of (\ref{**}) is used. Equations (\ref{fr1}) and (\ref{fr2}) are usually called the fusion relations, see \cite{KluPea92, KunNakSuz94, BazHibKho02}. They allow one to express the universal transfer operators $\calT^{(\ell, \, 0, \, 0)}(\zeta)$ and $\calT^{(\ell, \, \ell, \, 0)}(\zeta)$ via $\calT^{(1, \, 0, \, 0)}(\zeta)$ and $\calT^{(1, \, 1, \, 0)}(\zeta)$.
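For example, setting $\ell = 1$ in (\ref{fr1}) and using $\calT^{(0, \, 0, \, 0)}(\zeta) = 1$, we obtain the first step of this recursion explicitly:
\begin{equation*}
\calT^{(2, \, 0, \, 0)}(\zeta) = \calT^{(1, \, 0, \, 0)}(q^{-2/s} \zeta) \calT^{(1, \, 0, \, 0)}(\zeta) - \calT^{(1, \, 1, \, 0)}(q^{-2/s} \zeta).
\end{equation*}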
Further, for two positive integers $\ell_1$ and $\ell_2$, such that $\ell_1 \ge \ell_2$, putting in (\ref{utt})
\begin{equation*}
\lambda_1 = \ell_1 + 2, \qquad \lambda_2 = \ell_2 + 1, \qquad \lambda_3 = 1, \qquad \lambda_4 = 0, \qquad \lambda_5 = 0, \qquad \lambda_6 = -1,
\end{equation*}
we obtain
\begin{equation}
\calT^{(\ell_1, \, \ell_2, \, 0)}(\zeta) = \calT^{(\ell_1, \, 0, \, 0)}(\zeta) \calT^{(\ell_2, \, 0, \, 0)}(q^{2/s} \zeta) - \calT^{(\ell_1 + 1, \, 0, \, 0)}(q^{2/s} \zeta) \calT^{(\ell_2 - 1, \, 0, \, 0)}(\zeta). \label{fr3}
\end{equation}
Certainly, relation (\ref{fr1}) is a particular case of (\ref{fr3}). Thus, we see that $\calT^{(\ell_1, \, \ell_2, \, 0)}(\zeta)$ can be expressed via $\calT^{(\ell, \, 0, \, 0)}(\zeta)$ and, therefore, via $\calT^{(1, \, 0, \, 0)}(\zeta)$ and $\calT^{(1, \, 1, \, 0)}(\zeta)$. The explicit expression is given by the quantum Jacobi--Trudi identity \cite{BazRes90, Che87, BazHibKho02}
\begin{equation}
\calT^{(\ell_1, \, \ell_2, \, 0)}(\zeta) = \det \left( \calE_{\ell^t_i - i + j}(q^{- 2(j - 1)/s} \zeta) \right)_{1 \le i, \, j \le \ell_1}, \label{jt}
\end{equation}
where
\begin{equation*}
\calE_0(\zeta) = \calE_3(\zeta) = 1, \qquad \calE_1(\zeta) = \calT^{(1, \, 0, \, 0)}(\zeta), \qquad \calE_2(\zeta) = \calT^{(1, \, 1, \, 0)}(\zeta),
\end{equation*}
and $\calE_k(\zeta) = 0$ for $k < 0$ and $k > 3$. The numbers $\ell^t_i$ are defined as\footnote{One associates with the numbers $\ell_1$ and $\ell_2$ the Young diagram with the rows of length $\ell_1$ and $\ell_2$. The numbers $\ell^t_i$ describe the rows of the transposed diagram.}
\begin{equation*}
\ell^t_i = 2, \quad 1 \le i \le \ell_2, \qquad \ell^t_i = 1, \quad \ell_2 < i \le \ell_1.
\end{equation*}
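To illustrate identity (\ref{jt}), take $\ell_1 = 2$ and $\ell_2 = 1$, so that $\ell^t_1 = 2$ and $\ell^t_2 = 1$. In this case (\ref{jt}) gives
\begin{equation*}
\calT^{(2, \, 1, \, 0)}(\zeta) = \det \left( \begin{array}{cc}
\calE_2(\zeta) & \calE_3(q^{- 2/s} \zeta) \\
\calE_0(\zeta) & \calE_1(q^{- 2/s} \zeta)
\end{array} \right) = \calT^{(1, \, 1, \, 0)}(\zeta) \, \calT^{(1, \, 0, \, 0)}(q^{- 2/s} \zeta) - 1.
\end{equation*}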
To prove identity (\ref{jt}) we put in (\ref{utt})
\begin{equation*}
\lambda_1 = 2, \qquad \lambda_2 = 1, \qquad \lambda_3 = 0, \qquad \lambda_4 = -1, \qquad \lambda_5 = \ell_1 + 1, \qquad \lambda_6 = \ell_2,
\end{equation*}
use the first relation of (\ref{**}), and come to the relation
\begin{multline}
\calT^{(\ell_1, \, \ell_2, \, 0)}(\zeta) = \calT^{(1, \, 1, \, 0)}(\zeta) \calT^{(\ell_1 - 1, \, \ell_2 - 1, \, 0)}(q^{-2/s} \zeta) \\
- \calT^{(1, \, 0, \, 0)}(\zeta) \calT^{(\ell_1 - 2, \, \ell_2 - 2, \, 0)}(q^{-4/s} \zeta) + \calT^{(\ell_1 - 3, \, \ell_2 - 3, \, 0)}(q^{-6/s} \zeta). \label{pjt1}
\end{multline}
It is worth writing a few particular cases of this relation in the form
\begin{align}
\calT^{(\ell, \, 2, \, 0)}(\zeta) & = \calT^{(1, \, 1, \, 0)}(\zeta) \calT^{(\ell - 1, \, 1, \, 0)}(q^{-2/s} \zeta) - \calT^{(1, \, 0, \, 0)}(\zeta) \calT^{(\ell - 2, \, 0, \, 0)}(q^{-4/s} \zeta), \label{pjt2} \\
\calT^{(\ell, \, 1, \, 0)}(\zeta) & = \calT^{(1, \, 1, \, 0)}(\zeta) \calT^{(\ell - 1, \, 0, \, 0)}(q^{-2/s} \zeta) - \calT^{(\ell - 2, \, 0, \, 0)}(q^{-4/s} \zeta), \label{pjt3} \\
\calT^{(\ell, \, 0, \, 0)}(\zeta) & = \calT^{(1, \, 0, \, 0)}(\zeta) \calT^{(\ell - 1, \, 0, \, 0)}(q^{-2/s} \zeta) - \calT^{(\ell - 1, \, 1, \, 0)}(q^{-2/s} \zeta). \label{pjt4}
\end{align}
Here we again use the first relation of (\ref{**}). Like (\ref{fr1})--(\ref{fr3}), relations (\ref{pjt1})--(\ref{pjt4}) allow us to express $\calT^{(\ell_1, \, \ell_2, \, 0)}(\zeta)$ via $\calT^{(1, \, 0, \, 0)}(\zeta)$ and $\calT^{(1, \, 1, \, 0)}(\zeta)$. It is not difficult to see that (\ref{jt}) satisfies (\ref{pjt1})--(\ref{pjt4}). Thus, identity (\ref{jt}) holds.
The $TT$-relations for the barred universal transfer operators are obtained by the change $q$ to $q^{-1}$ in the relations for the unbarred ones.
One last remark is in order. It is clear that the action of the mappings $\varphi^{(\lambda_1, \, \lambda_2, \, \lambda_3)}$ and $\ovarphi^{(\lambda_1, \, \lambda_2, \, \lambda_3)}$ on the generators of $\uqbp$ is the same, except for their action on the generator $e_0$. Using equations (\ref{pi100a})--(\ref{pi100d}) and (\ref{pi110a})--(\ref{pi110d}), we obtain the following relations
\begin{equation*}
(\ovarphi^{(1, \, 0, \, 0)})_\zeta(e_0) = - q^3 (\varphi^{(1, \, 0, \, 0)})_\zeta(e_0), \qquad (\ovarphi^{(1, \, 1, \, 0)})_\zeta(e_0) = - q (\varphi^{(1, \, 1, \, 0)})_\zeta(e_0).
\end{equation*}
Taking into account the discussion given at the end of section \ref{sss:uto}, we see that
\begin{equation}
\ocalT^{(1,\, 0, \, 0)}(\zeta) = \calT^{(1, \, 0, \, 0)}(q^{3/s} \zeta), \qquad \ocalT^{(1, \, 1, \, 0)}(\zeta) = \calT^{(1, \, 1, \, 0)}(q^{1/s} \zeta). \label{octct}
\end{equation}
Using these equations and the quantum Jacobi--Trudi identity (\ref{jt}) one can easily prove that
\begin{equation}
\ocalT^{(\ell, \, 0, \, 0)}(\zeta) = \calT^{(\ell, \, 0, \, 0)}(q^{(2 \ell + 1)/s} \zeta), \qquad \ocalT^{(\ell, \, \ell, \, 0)}(\zeta) = \calT^{(\ell, \, \ell, \, 0)}(q^{(2 \ell - 1)/s} \zeta). \label{bttot}
\end{equation}
The quantum Jacobi--Trudi identity for $\ocalT^{(\ell_1, \, \ell_2, \, 0)}(\zeta)$ and equation (\ref{octct}) allow us to express $\ocalT^{(\ell_1, \, \ell_2, \, 0)}(\zeta)$ via $\calT^{(1,\, 0, \, 0)}(\zeta)$ and $\calT^{(1,\, 1, \, 0)}(\zeta)$.
\section{Spin chain}
\subsection{Integrability objects}
Let us start with a systematization of our zoo of universal integrability objects. There are two types of them. An object of the first type is constructed using some homomorphism from $\uqlsliii$ to some algebra $A$ and the corresponding power of the automorphism $\sigma$. These are the universal monodromy operators and the universal $L$-operators. We denote a general representative of this type as $\calX_i(\zeta)$. It is an element of $A \otimes \uqlsliii$. More concretely, we have a family of homomorphisms $\omega_i$ such that
\begin{equation*}
\omega_{i + 1} = \omega_i \circ \sigma^{-1},
\end{equation*}
and define $\calX_i(\zeta)$ as
\begin{equation*}
\calX_i(\zeta) = ((\omega_i)_\zeta \otimes \id)(\calR).
\end{equation*}
The objects of the second type are constructed from the objects of the first type by using some trace $\tr_A$ on the algebra $A$. These are the universal transfer operators and the universal $Q$-operators. We denote a general representative of the second type as $\calY_i(\zeta)$. It is an element of $\uqlsliii$. We assume that
\begin{equation}
\calY_i(\zeta) = (\tr_A \otimes \id)(\calX_i(\zeta) ((\omega_i)_\zeta(t) \otimes 1)), \label{cycx}
\end{equation}
where $t \in \uqlsliii$ is a twist element. In fact, due to the redefinition (\ref{rdq}), this is not quite the case for the connection between the universal $L$-operators and the universal $Q$-operators. However, the necessary modification of (\ref{cycx}) and of the further relations for the corresponding integrability objects can be performed easily, see also the discussion below.
A concrete integrable model is defined by a choice of a representation for the second factor of the tensor product $\uqlsliii \otimes \uqlsliii$. For the case of a spin chain one fixes a finite dimensional representation $\psi$ on the vector space $U$ and defines the operators
\begin{gather}
X_i(\zeta | \eta_1, \ldots, \eta_n) = (\id \otimes (\psi_{\eta_1} \otimes_{\Delta^{\mathrm{op}}} \cdots \otimes_{\Delta^{\mathrm{op}}} \psi_{\eta_n}))(\calX_i(\zeta)), \label{dxi} \\
Y_i(\zeta | \eta_1, \ldots, \eta_n) =(\psi_{\eta_1} \otimes_{\Delta^{\mathrm{op}}} \cdots \otimes_{\Delta^{\mathrm{op}}} \psi_{\eta_n})(\calY_i(\zeta)), \label{dyi}
\end{gather}
associated with a chain of length $n$. Here, $\eta_1$, $\ldots$, $\eta_n$ are the spectral parameters associated with the sites of the chain. It is clear that the operator $X_i(\zeta | \eta_1, \ldots, \eta_n)$ is an element of $A \otimes \End(U)^{\otimes n}$ and $Y_i(\zeta | \eta_1, \ldots, \eta_n)$ is an element of $\End(U)^{\otimes n}$. It follows from (\ref{cycx}) that
\begin{equation*}
Y_i(\zeta | \eta_1, \ldots, \eta_n) = (\tr_A \otimes \id)(X_i(\zeta | \eta_1, \ldots, \eta_n) ((\omega_i)_\zeta(t) \otimes 1)).
\end{equation*}
For all cases, except the case of $Q$-operators $Q_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\oQ_i(\zeta | \eta_1, \ldots, \eta_n)$, using equations (\ref{ggd}) and (\ref{ggr}), one can demonstrate that
\begin{equation}
X_i(\zeta \nu | \eta_1 \nu, \ldots, \eta_n \nu) = X_i(\zeta | \eta_1, \ldots, \eta_n), \qquad Y_i(\zeta \nu | \eta_1 \nu, \ldots, \eta_n \nu) = Y_i(\zeta | \eta_1, \ldots, \eta_n) \label{xinu}
\end{equation}
for any $\nu \in \bbC^\times$. In particular, for the operators
\begin{align*}
& Q'_i(\zeta | \eta_1, \ldots, \eta_n) = (\psi_{\eta_1} \otimes_{\Delta^{\mathrm{op}}} \cdots \otimes_{\Delta^{\mathrm{op}}} \psi_{\eta_n})(\calQ'_i(\zeta)), \\
& \oQ'_i(\zeta | \eta_1, \ldots, \eta_n) = (\psi_{\eta_1} \otimes_{\Delta^{\mathrm{op}}} \cdots \otimes_{\Delta^{\mathrm{op}}} \psi_{\eta_n})(\ocalQ'_i(\zeta))
\end{align*}
we have
\begin{equation*}
Q'_i(\zeta \nu | \eta_1 \nu, \ldots, \eta_n \nu) = Q'_i(\zeta | \eta_1, \ldots, \eta_n), \qquad \oQ{}'_i(\zeta \nu | \eta_1 \nu, \ldots, \eta_n \nu) = \oQ{}'_i(\zeta | \eta_1, \ldots, \eta_n).
\end{equation*}
However, if we define the $Q$-operators $Q_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\oQ_i(\zeta | \eta_1, \ldots, \eta_n)$ with the help of equation (\ref{dyi}), we will not obtain objects satisfying the second relation of (\ref{xinu}). Nevertheless, the corresponding functional relations take their simplest form precisely in terms of the $Q$-operators $Q_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\oQ_i(\zeta | \eta_1, \ldots, \eta_n)$.
Now introduce the matrix
\begin{equation*}
\bbX_i(\zeta | \eta_1, \ldots, \eta_n) = ((\bbX_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n}),
\end{equation*}
where $(\bbX_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n}$ are the elements of $A$ defined by the equation
\begin{equation*}
X_i(\zeta | \eta_1, \ldots, \eta_n) = \sum_{k_1, \ldots, k_n = 1}^3 \sum_{\ell_1, \ldots, \ell_n = 1}^3 (\bbX_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n} \otimes (E_{k_1 \ell_1} \otimes \cdots \otimes E_{k_n \ell_n}).
\end{equation*}
Here $E_{k \ell}$ are the basis elements of $\End(U)$ associated with some basis of $U$, see, for example, appendix \ref{a:dr}. Similarly, we define the matrix
\begin{equation*}
\bbY_i(\zeta | \eta_1, \ldots, \eta_n) = ((\bbY_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n}),
\end{equation*}
where $(\bbY_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n}$ are the complex numbers defined by the equation
\begin{equation*}
Y_i(\zeta | \eta_1, \ldots, \eta_n) = \sum_{k_1, \ldots, k_n = 1}^3 \sum_{\ell_1, \ldots, \ell_n = 1}^3 (\bbY_i(\zeta | \eta_1, \ldots, \eta_n))_{k_1 \ldots k_n | \ell_1 \ldots \ell_n} E_{k_1 \ell_1} \otimes \cdots \otimes E_{k_n \ell_n}.
\end{equation*}
For the one-site case we introduce the matrices $\bbX(\zeta)$ and $\bbY(\zeta)$ depending on one spectral parameter. It is clear that the matrices $\bbX_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\bbY_i(\zeta | \eta_1, \ldots, \eta_n)$ contain the same information as the operators $X_i(\zeta | \eta_1, \ldots, \eta_n)$ and $Y_i(\zeta | \eta_1, \ldots, \eta_n)$. However, for concrete physical applications it is more convenient to work with the matrices.
Below we take as $\psi$ the representation $\varphi^{(1, \, 0, \, 0)}$. The universal integrability objects $\calX_i(\zeta)$ satisfy the following relation
\begin{equation*}
\calX_{i + 1}(\zeta) = (\id \otimes \sigma)(\calX_i(\zeta))|_{s \to \sigma(s)}.
\end{equation*}
Hence, we have
\begin{equation*}
X_{i + 1}(\zeta) = (\id \otimes (\varphi^{(1, \, 0, \, 0)} \circ \sigma))(\calX_i(\zeta))|_{s \to \sigma(s)}.
\end{equation*}
Using equations (\ref{vp100a})--(\ref{vp100d}), we conclude that for any $a \in \uqlsliii$ one obtains
\begin{equation*}
(\varphi^{(1,\, 0, \, 0)} \circ \sigma)(a) = O \bigl( \varphi^{(1, \, 0, \,0)}(a) \bigr) O^{-1},
\end{equation*}
where the endomorphism $O$ has the matrix form
\begin{equation*}
\bbO = \left( \begin{array}{ccc}
0 & 0 & q \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array} \right).
\end{equation*}
Thus, the relation
\begin{equation}
\bbX_{i + 1}(\zeta) = \bbO \, \bbX_i(\zeta) \bbO^{-1} \, |_{s \to \sigma(s)} \label{xipo}
\end{equation}
is valid. For a chain of arbitrary length we obtain
\begin{equation*}
\bbX_{i + 1}(\zeta | \eta_1, \ldots, \eta_n) = (\underbrace{\bbO \times \cdots \times \bbO}_n) \, \bbX_i(\zeta | \eta_1, \ldots, \eta_n) (\underbrace{\bbO \times \cdots \times \bbO}_n)^{-1} \, |_{s \to \sigma(s)}.
\end{equation*}
In the same way we determine that
\begin{equation*}
\bbY_{i + 1}(\zeta | \eta_1, \ldots, \eta_n) = (\underbrace{\bbO \times \cdots \times \bbO}_n) \, \bbY_i(\zeta | \eta_1, \ldots, \eta_n) (\underbrace{\bbO \times \cdots \times \bbO}_n)^{-1} \, |_{\phi \to \sigma(\phi)}.
\end{equation*}
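Since the matrix $\bbO$ is given explicitly, one can verify directly (a small consistency check of our own, not taken from the text) that $\bbO^3 = q \, \bbI$. Hence conjugation by $\bbO$ has order three, as one would expect if the similarity transformation (\ref{xipo}) realizes an order-three symmetry of the representation.

```python
from fractions import Fraction

q = Fraction(5, 7)  # any nonzero value of q works; Fraction keeps the check exact

# the matrix bbO from the text
O = [[0, 0, q],
     [1, 0, 0],
     [0, 1, 0]]

def matmul(A, B):
    # plain 3x3 matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

O3 = matmul(matmul(O, O), O)
print(O3)  # q times the 3x3 identity matrix
```

The same conclusion holds for a symbolic $q$, since only the entries $0$, $1$ and $q$ ever enter the products.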
The explicit form of the matrices $\bbM(\zeta)$ and $\obbM{}'_i(\zeta)$ was found in the paper \cite{Raz13}.\footnote{The matrix $\obbM{}'_i(\zeta)$ in \cite{Raz13} is denoted as $\obbM{}_i(\zeta)$.} As follows from (\ref{bmiz}), the monodromy operator $\obbM{}_i(\zeta)$ can be obtained from $\obbM{}'_i(\zeta)$ with the help of the relation
\begin{equation*}
\obbM{}_i(\zeta) = \tau(\obbM{}'_i(r_s \zeta)),
\end{equation*}
where the automorphism $\tau$ of $\uqgliii$ defined by equations (\ref{tga}) and (\ref{tgb}) is applied to the matrix entries.
There is another method to find the expression for $\obbM{}_i(\zeta)$. Using equation (\ref{bmtom}), we see that
\begin{equation*}
\oM{}'_i(\zeta) = ((\varphi_{- i + 2})_\zeta \otimes \ovarphi'^{(1, \, 0, \, 0)})(\calR)|_{s \to \tau(s)}.
\end{equation*}
Then, taking into account (\ref{oeqd}), we have
\begin{equation*}
\oM{}'_i(\zeta) = (1 \otimes P)(((\varphi_{- i + 2})_{r_s^{-3} q^{-3/s} \zeta} \otimes \varphi^{(1, \, 0, \, 0) * S^{-1}})(\calR))(1 \otimes P^{-1})|_{s \to \tau(s)},
\end{equation*}
and equation (\ref{mvdp}) gives
\begin{equation*}
\obbM{}'_i(\zeta) = \bbP ((\bbM_{- i + 2}(r_s^{-3} q^{- 3/s} \zeta))^{-1})^t \bbP^{-1} |_{s \to \tau(s)},
\end{equation*}
where the matrix $\bbP$ is given by equation (\ref{bbp}).
The explicit expressions for various $L$-operators were obtained in the paper \cite{BooGoeKluNirRaz10}. However, in the present paper we use a different definition of the $q$-oscillators. Therefore, we give here the following expressions for two $L$-operators:
\begin{align}
& \bbL'(\zeta) = \rme^{f_3(\zeta^s)} \notag \\
& \hspace{1.5em} \times \left( \begin{array}{ccc}
q^{N_1 + N_2} - \zeta^s q^{- N_1 - N_2 - 2} & \zeta^{s - s_1} b_1 q^{- 2 N_1 - N_2} & \zeta^{s -s_1 - s_2} b_2 q^{- 2 N_2 + 1} \\[.3em]
\zeta^{s_1} \kappa_q b_1^\dagger q^{N_1} & q^{- N_1} & 0 \\[.3em]
\zeta^{s_1 + s_2} \kappa_q b_2^\dagger q^{- N_1 + N_2 - 1} & - \zeta^{s_2} \kappa_q b_1 b_2^\dagger q^{- 2 N_1 + N_2 + 1} & q^{- N_2}
\end{array} \right), \label{bbl} \\
& \obbL{}'(\zeta) = \rme^{f_3(- q \zeta^s) + f_3(- q^{-1} \zeta^s)} \notag \\
& \hspace{3.em} \times \left( \begin{array}{ccc}
q^{ - N_1 - N_2} & - \zeta^{s - s_1} \kappa_q b_2^\dagger q^{N_2 + 1} & \zeta^{s - s_1 - s_2} \kappa_q b_1^\dagger q^{N_1 - N_2 + 1} \\[.3em]
\zeta^{s_1} b_2 q^{- N_1 - 2 N_2} & q^{N_2} + \zeta^s q^{- N_2 - 1} & \zeta^{s - s_2} \kappa_q b_1^\dagger b_2 q^{N_1 - 2 N_2 + 1} \\[.3em]
- \zeta^{s_1 + s_2} b_1 q^{-2 N_1} & - \zeta^{s_2} \kappa_q b_1 b_2^\dagger q^{- N_1 + 2 N_2 + 1} & q^{N_1} + \zeta^s q^{- N_1 - 1}
\end{array} \right), \label{obbl}
\end{align}
where the function
\begin{equation*}
f_3(\zeta) = \sum_{k = 1}^\infty \frac{1}{q^{2k} + 1 + q^{-2k}} \, \frac{\zeta^k}{k}
\end{equation*}
satisfies the defining relation
\begin{equation*}
f_3(q^2\zeta) + f_3(\zeta) + f_3(q^{-2}\zeta) = - \log(1 - \zeta).
\end{equation*}
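The series for $f_3$ and its defining relation can be cross-checked numerically (our own sanity check, not part of the text): summing the three shifted series makes the denominators $q^{2k} + 1 + q^{-2k}$ cancel term by term, leaving the expansion of $-\log(1 - \zeta)$.

```python
import math

def f3(z, q, kmax=400):
    # truncated power series for f_3; the sample values below lie well inside
    # the domain of convergence of all three shifted series
    return sum(z**k / (k * (q**(2*k) + 1 + q**(-2*k))) for k in range(1, kmax + 1))

q, z = 0.7, 0.3
lhs = f3(q**2 * z, q) + f3(z, q) + f3(q**(-2) * z, q)
rhs = -math.log(1 - z)
print(lhs, rhs)  # the two values agree up to rounding errors
```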
The expressions for all operators $\bbL'_i(\zeta)$ and $\obbL{}'_i(\zeta)$ can be obtained with the help of equation (\ref{xipo}). In the same way as for the monodromy matrices, one obtains
\begin{equation*}
\obbL{}'_i(\zeta) = \bbP ((\bbL'_{- i + 1}(r_s^{-3} q^{- 3/s} \zeta))^{-1})^t \bbP^{-1} |_{s \to \tau(s)}.
\end{equation*}
\subsection{Functional relations}
For the transfer operators corresponding to the finite dimensional representations $\pi^\lambda$ of $\uqgliii$ on the auxiliary space we have the expressions
\begin{gather*}
T_i^\lambda(\zeta | \eta_1, \ldots, \eta_n) = \tr^\lambda (M_i(\zeta | \eta_1, \ldots, \eta_n) ((\varphi_i)_\zeta(t) \otimes 1)), \\
\oT_i^\lambda(\zeta | \eta_1, \ldots, \eta_n) = \tr^\lambda (\oM_i(\zeta | \eta_1, \ldots, \eta_n) ((\ovarphi_i)_{r_s \zeta}(t) \otimes 1)).
\end{gather*}
One can demonstrate, in particular, that
\begin{gather*}
\bbT^\lambda(\zeta | \eta_1, \ldots, \eta_n) = \tr^\lambda ((\bbM(\zeta \eta_1^{-1}) \boxtimes \cdots \boxtimes \bbM(\zeta \eta_n^{-1})) \, q^{ - \Phi_1 G_1 - \Phi_2 G_2 - \Phi_3 G_3}), \\
\obbT^\lambda(\zeta | \eta_1, \ldots, \eta_n) = \tr^\lambda ((\obbM(\zeta \eta_1^{-1}) \boxtimes \cdots \boxtimes \obbM(\zeta \eta_n^{-1})) \, q^{- \Phi_1 G_1 - \Phi_2 G_2 - \Phi_3 G_3}),
\end{gather*}
where $\boxtimes$ denotes the natural generalization of the Kronecker product to the case of matrices with noncommuting entries, see, for example, the paper \cite{BooGoeKluNirRaz14a}, and
\begin{equation*}
\Phi_1 = (\phi_0 - \phi_1)/3, \qquad \Phi_2 = (\phi_1 - \phi_2)/3, \qquad \Phi_3 = (\phi_2 - \phi_0)/3.
\end{equation*}
Similar expressions can be written for the transfer operators corresponding to the infinite dimensional representations $\widetilde \pi^\lambda$ of $\uqgliii$.
Further, we obtain
\begin{align*}
& \bbQ_i(\zeta | \eta_1, \ldots, \eta_n) = \zeta^{s \Phi_i / 2} \tr^{\scriptscriptstyle ++} \big((\zeta^{\bbD_i}\bbL'_i(\zeta \eta_1^{-1}) \boxtimes \cdots \boxtimes \zeta^{\bbD_i} \bbL'_i(\zeta \eta_n^{-1})) \rho_i(t) \big), \\
& \obbQ_i(\zeta | \eta_1, \ldots, \eta_n) = \zeta^{- s \Phi_i / 2} \tr^{\scriptscriptstyle ++} \big((\zeta^{-\bbD_i}\obbL'_i(r_s \zeta \eta_1^{-1}) \boxtimes \cdots \boxtimes \zeta^{-\bbD_i} \obbL'_i(r_s \zeta \eta_n^{-1 })) \orho_i(t) \big),
\end{align*}
where
\begin{gather}
\zeta^{\bbD_1} = \left( \begin{array}{ccc}
\zeta^{-s/3} \\
& \zeta^{s/6} \\
& & \zeta^{s/6}
\end{array} \right), \qquad \zeta^{\bbD_2} = \left( \begin{array}{ccc}
\zeta^{s/6} \\
& \zeta^{- s/3} \\
& & \zeta^{s/6}
\end{array} \right), \label{zda} \\
\zeta^{\bbD_3} = \left( \begin{array}{ccc}
\zeta^{s/6} \\
& \zeta^{s/6} \\
& & \zeta^{- s/3}
\end{array} \right) \label{zdb}
\end{gather}
and $\zeta^{- \bbD_i}$ is the inverse of $\zeta^{\bbD_i}$. It is easy to determine that
\begin{gather*}
\rho_1(t) = q^{- (\Phi_1 - \Phi_2) N_1 + (\Phi_3 - \Phi_1) N_2}, \qquad \rho_2(t) = q^{- (\Phi_2 - \Phi_3) N_1 + (\Phi_1 - \Phi_2) N_2}, \\*
\rho_3(t) = q^{- (\Phi_3 - \Phi_1) N_1 + (\Phi_2 - \Phi_3) N_2}, \\
\orho_1(t) = q^{(\Phi_1 - \Phi_3) N_1 - (\Phi_2 - \Phi_1) N_2}, \qquad \orho_2(t) = q^{(\Phi_2 - \Phi_1) N_1 - (\Phi_3 - \Phi_2) N_2}, \\*
\orho_3(t) = q^{(\Phi_3 - \Phi_2) N_1 - (\Phi_1 - \Phi_3) N_2}.
\end{gather*}
The functional relations for integrability objects corresponding to some choice of the representation of $\uqlsliii$ on the quantum space are easily obtained from the functional relations for the corresponding universal integrability objects by a simple substitution. However, it is important for applications to have integrability objects with a simple analytic structure with respect to the spectral parameter $\zeta$. This is not the case for the objects introduced above. Therefore, we introduce the transfer operators $\bbT^{\mathrm p \lambda}_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\obbT^{\mathrm p \lambda}_i(\zeta | \eta_1, \ldots, \eta_n)$, which are Laurent polynomials in $\zeta^{s / 2}$ and are related to the transfer operators $\bbT^\lambda_i(\zeta | \eta_1, \ldots, \eta_n)$ and $\obbT{}^\lambda_i(\zeta | \eta_1, \ldots, \eta_n)$ as
\begin{align*}
& \bbT^\lambda_i(\zeta | \eta_1, \ldots, \eta_n) = q^{- (\lambda_1 + \lambda_2 + \lambda_3) n / 3} \prod_{k = 1}^n (\zeta \eta_k^{-1})^{s / 2} \\*
& \hspace{4em} {} \times \prod_{k = 1}^n \rme^{f_3(q^{- 2(\lambda_1 + 1)} (\zeta \eta_k^{-1})^s) + f_3(q^{- 2\lambda_2} (\zeta \eta_k^{-1})^s) + f_3(q^{- 2(\lambda_3 - 1)} (\zeta \eta_k^{-1})^s)} \, \bbT^{\mathrm p \lambda}_i(\zeta | \eta_1, \ldots, \eta_n), \\
& \obbT{}_i^\lambda(\zeta | \eta_1, \ldots, \eta_n) = q^{2(\lambda_1 + \lambda_2 + \lambda_3) n / 3} \prod_{k = 1}^n (\zeta \eta_k^{-1})^s \\*
& \hspace{2em} {} \times \prod_{k = 1}^n \rme^{f_3(q^{2(\lambda_1 + 1) - 1} (\zeta \eta_k^{-1})^s) + f_3(q^{2\lambda_2 - 1} (\zeta \eta_k^{-1})^s) + f_3(q^{2(\lambda_3 - 1) - 1} (\zeta \eta_k^{-1})^s)} \\*
& \hspace{4em} {} \times \prod_{k = 1}^n \rme^{f_3(q^{2(\lambda_1 + 1) + 1} (\zeta \eta_k^{-1})^s) + f_3(q^{2\lambda_2 + 1} (\zeta \eta_k^{-1})^s) + f_3(q^{2(\lambda_3 - 1) + 1} (\zeta \eta_k^{-1})^s)} \obbT{}_i^{\mathrm p \lambda}(\zeta | \eta_1, \ldots, \eta_n).
\end{align*}
To prove that these are indeed Laurent polynomials, one can use the explicit expressions for the monodromy operators $\bbM(\zeta)$ and $\obbM'(\zeta)$ from the paper \cite{Raz13}.
Similarly, we introduce the $Q$-operators $\bbQ^{\mathrm p}_i (\zeta | \eta_1, \ldots, \eta_n)$ and $\obbQ^{\mathrm p}_i (\zeta | \eta_1, \ldots, \eta_n)$, which are Laurent polynomials in $\zeta^{s / 2}$ and are related to the initial $Q$-operators $\bbQ_i (\zeta | \eta_1, \ldots, \eta_n)$ and $\obbQ_i (\zeta | \eta_1, \ldots, \eta_n)$ as
\begin{align*}
& \bbQ_i(\zeta | \eta_1, \ldots, \eta_n) = \zeta^{s \Phi_i / 2 + n s / 6} \prod_{k = 1}^n \rme^{f_3((\zeta \eta_k^{-1})^s)} \, \bbQ^{\mathrm p}_i (\zeta | \eta_1, \ldots, \eta_n), \\
& \obbQ{}_i(\zeta | \eta_1, \ldots, \eta_n) = \zeta^{- s \Phi_i / 2 + n s / 3} \prod_{k = 1}^n \rme^{f_3(q^{-1} (\zeta \eta_k^{-1})^s) + f_3(q (\zeta \eta_k^{-1})^s)} \, \obbQ{}^{\mathrm p}_i (\zeta | \eta_1, \ldots, \eta_n).
\end{align*}
The required polynomiality follows from the explicit form (\ref{bbl}) and (\ref{obbl}) of the $L$-opera\-tors $\bbL'(\zeta)$ and $\obbL'(\zeta)$ and the explicit form (\ref{zda}) and (\ref{zdb}) of the matrices $\zeta^{\bbD_i}$.
In terms of polynomial objects the $TQ$-relation (\ref{tq}) takes the form
\begin{multline*}
q^{\Phi_i} \prod_{k = 1}^n b(q (\zeta \eta_k^{-1})^{- s / 2}) \, \bbQ_i^{\mathrm p} (q^{2 / s} \zeta) \\- \bbT^{\mathrm p (1, \, 1, \, 0)}(\zeta) \, \bbQ_i^{\mathrm p} (\zeta)
+ q^{- \Phi_i} \bbT^{\mathrm p (1, \, 0, \, 0)}(\zeta) \, \bbQ_i^{\mathrm p} (q^{- 2 / s} \zeta) \\ - q^{- 2 \Phi_i} \prod_{k = 1}^n b((\zeta \eta_k^{-1})^{- s / 2}) \, \bbQ_i^{\mathrm p} (q^{- 4 / s} \zeta) = 0,
\end{multline*}
while instead of (\ref{btq}) we have
\begin{multline*}
q^{\Phi_i} \prod_{k = 1}^n \big[ b(q^{-3/2}(\zeta \eta_k^{-1})^{- s / 2}) b(q^{-1/2}(\zeta \eta_k^{-1})^{- s / 2} ) \big] \, \obbQ{}_i^{\mathrm p} (q^{- 2 / s} \zeta) \\
- \obbT^{\mathrm p (1, \, 1, \, 0)}(\zeta) \, \obbQ{}_i^{\mathrm p} (\zeta)
+ q^{- \Phi_i} \obbT^{\mathrm p (1, \, 0, \, 0)}(\zeta) \, \obbQ_i^{\mathrm p} (q^{2 / s} \zeta) \\ - q^{- 2 \Phi_i} \prod_{k = 1}^n \big[ b(q^{- 1/2} (\zeta \eta_k^{-1})^{- s / 2}) b(q^{1/2} (\zeta \eta_k^{-1})^{- s / 2}) \big] \, \obbQ_i^{\mathrm p} (q^{4 / s} \zeta) = 0.
\end{multline*}
Here and below we use the notation
\begin{equation*}
b(\zeta) = \zeta - \zeta^{-1},
\end{equation*}
and skip the explicit dependence of the transfer operators and $Q$-operators on the spectral parameters $\eta_1$, $\ldots$, $\eta_n$. For relations (\ref{t100qq}) and (\ref{t110qq}) we obtain
\begin{align*}
& \bbT^{\mathrm p(1,0,0)}(\zeta) \bbQ^{\mathrm p}_i(q^{-2/s} \zeta) \obbQ{}^{\mathrm p}_j(q^{-1/s} \zeta) \\
& \hspace{6em} {} = q^{-\Phi_i} \prod_{k = 1}^n b((\zeta \eta_k^{-1})^{-s/2})
\bbQ^{\mathrm p}_i(q^{-4/s} \zeta) \obbQ{}^{\mathrm p}_j(q^{-1/s} \zeta) \\
& \hspace{6em} {} + q^{\Phi_i + \Phi_j} \prod_{k = 1}^n b((\zeta \eta_k^{-1})^{-s/2}) \bbQ^{\mathrm p}_i(\zeta) \obbQ{}^{\mathrm p}_j(q^{-3/s} \zeta) \\
& \hspace{15em} {} + q^{-\Phi_j} \prod_{k = 1}^n b(q(\zeta \eta_k^{-1})^{-s/2}) \bbQ^{\mathrm p}_i(q^{-2/s} \zeta) \obbQ{}^{\mathrm p}_j(q^{1/s} \zeta)
\end{align*}
and
\begin{align*}
& \bbT^{\mathrm p(1,1,0)}(\zeta) \bbQ^{\mathrm p}_i(\zeta) \obbQ{}^{\mathrm p}_j(q^{-1/s}\zeta) \\
& \hspace{7em} {} = q^{\Phi_i} \prod_{k=1}^n b(q(\zeta\eta_k^{-1})^{-s/2})
\bbQ^{\mathrm p}_i(q^{2/s}\zeta) \obbQ{}^{\mathrm p}_j(q^{-1/s}\zeta) \\
& \hspace{7em} {} + q^{-\Phi_i-\Phi_j}
\prod_{k=1}^n b(q(\zeta\eta_k^{-1})^{-s/2})
\bbQ^{\mathrm p}_i(q^{-2/s}\zeta) \obbQ{}^{\mathrm p}_j(q^{1/s}\zeta) \\
& \hspace{17em} {} + q^{\Phi_j}
\prod_{k=1}^n b((\zeta\eta_k^{-1})^{-s/2})
\bbQ^{\mathrm p}_i(\zeta) \obbQ{}^{\mathrm p}_j(q^{-3/s}\zeta).
\end{align*}
The $TT$-relation (\ref{fr3}) and its barred analogue take the form
\begin{align*}
& \prod_{k = 1}^n b(q^{-1} (\zeta \eta_k^{-1})^{- s / 2}) \bbT^{\mathrm p (\ell_1, \, \ell_2, \, 0)}(\zeta) \\*
& \hspace{6em} {} = \bbT^{\mathrm p (\ell_1, \, 0, \, 0)}(\zeta) \bbT^{\mathrm p (\ell_2, \, 0, \, 0)}(q^{2/s} \zeta) - \bbT^{\mathrm p (\ell_1 + 1, \, 0, \, 0)}(q^{2/s} \zeta) \bbT^{\mathrm p (\ell_2 - 1, \, 0, \, 0)}(\zeta), \\
&\prod_{k = 1}^n b(q^{3/2} (\zeta \eta_k^{-1})^{- s / 2}) b(q^{1/2} (\zeta \eta_k^{-1})^{- s / 2}) \obbT^{\mathrm p (\ell_1, \, \ell_2, \, 0)}(\zeta) \\
& \hspace{5em} = \obbT^{\mathrm p (\ell_1, \, 0, \, 0)}(\zeta) \obbT^{\mathrm p (\ell_2, \, 0, \, 0)}(q^{- 2/s} \zeta) - \obbT^{\mathrm p (\ell_1 + 1, \, 0, \, 0)}(q^{- 2/s} \zeta) \obbT^{\mathrm p (\ell_2 - 1, \, 0, \, 0)}(\zeta).
\end{align*}
As a final example, we give the expressions for the $TT$-relation (\ref{fr2}) and its barred analogue:
\begin{align*}
& \bbT^{\mathrm p (\ell - 1, \, \ell - 1, \, 0)}(q^{-2/s} \zeta) \bbT^{\mathrm p (\ell + 1, \, \ell + 1, \, 0)}(\zeta) = \bbT^{\mathrm p (\ell, \, \ell, \, 0)}(q^{-2/s} \zeta) \bbT^{\mathrm p (\ell, \, \ell, \, 0)}(\zeta) \\*
& \hspace{20em} {} - \prod_{k = 1}^n b(q^{\ell + 1} (\zeta \eta_k^{-1})^{- s/2}) \bbT^{\mathrm p (\ell, \, 0, \, 0)}(\zeta), \\
& \obbT^{\mathrm p (\ell - 1, \, \ell - 1, \, 0)}(q^{2/s} \zeta) \obbT^{\mathrm p (\ell + 1, \, \ell + 1, \, 0)}(\zeta) = \obbT^{\mathrm p (\ell, \, \ell, \, 0)}(q^{2/s} \zeta) \obbT^{\mathrm p (\ell, \, \ell, \, 0)}(\zeta) \\*
& \hspace{9em} - \prod_{k = 1}^n b(q^{- \ell - 3/2} (\zeta \eta_k^{-1})^{- s/2}) b(q^{- \ell - 1/2} (\zeta \eta_k^{-1})^{- s/2}) \obbT^{\mathrm p (\ell, \, 0, \, 0)}(\zeta).
\end{align*}
It is not difficult to obtain the expressions for all other functional relations in terms of polynomial objects. It is also worth noting that relations (\ref{bttot}) take the form
\begin{align*}
& \obbT^{(\ell, \, 0, \, 0)}(\zeta) = \prod_{k = 1}^n b(q^{1/2} (\zeta \eta_k^{-1})^{-s/2}) \bbT^{(\ell, \, 0, \, 0)}(q^{(2 \ell + 1)/s} \zeta), \\
& \obbT^{(\ell, \, \ell, \, 0)}(\zeta) = \prod_{k = 1}^n b(q^{- (2 \ell + 1)/2} (\zeta \eta_k^{-1})^{-s/2}) \bbT^{(\ell, \, \ell, \, 0)}(q^{(2 \ell - 1)/s} \zeta).
\end{align*}
\section{Conclusions}
We gave a detailed construction of the universal integrability objects related to the integrable systems associated with the quantum group $\uqlsliii$. The full proof of the functional relations in a form independent of the representation of the quantum group on the quantum space was presented. Generalizations to the case of the quantum group $\mathrm{U}_q(\mathcal L(\mathfrak{sl}_N))$ and the quantum supergroup $\mathrm{U}_q(\mathcal L(\mathfrak{gl}_{N | M}))$ are discussed in the papers \cite{Koj08} and \cite{Tsu13b}, respectively, though mostly without proofs. However, the proofs for the case of the quantum supergroup $\mathrm{U}_q(\mathcal L(\mathfrak{sl}_{2 | 1}))$ are given in \cite{BazTsu08}. A discussion of the functional relations for the systems related to $\uqlslii$ with $q$ being a root of unity can be found in \cite{Kor05a} and references therein.
Note that, in fact, we define and enumerate the $Q$-operators with the help of the elements of the automorphism group of the Dynkin diagram of the affine Lie algebra $\widehat{\calL}(\mathfrak{sl}_3)$. A similar approach is used for the case of the integrable systems related to the quantum group $\mathrm{U}_q(\mathcal L(\mathfrak{sl}_N))$ in the paper \cite{Koj08}. In the paper \cite{Tsu10} Tsuboi suggested defining and enumerating $Q$-operators with the help of Hasse diagrams. This approach was then used in \cite{BazFraLukMenSta11, FraLukMenSta11, KazLeuTsu12, AleKazLeuTsuZab13, Tsu13a, FraLukMenSta13, Tsu13b}.\footnote{We are sorry for the inevitable incompleteness of the list.} We will not discuss here which is the ``right'' way, and only note that for the case considered in the present paper both approaches are equivalent.
The next remark is on $TT$-relations. Usually one considers only three-term fusion relations of the type (\ref{fr1}) and (\ref{fr2}), see, for example, \cite{KluPea92, KunNakSuz94}. We demonstrated that in our case the four-term $TT$-relations of the type (\ref{pjt1}) are also useful, in particular, to prove the quantum Jacobi--Trudi identity. It is evident from our construction that in the case of $\mathrm{U}_q(\mathcal L(\mathfrak{sl}_N))$ one has in general $(N+1)$-term $TT$-relations.
Among other approaches to the derivation of the $TQ$-relations for the ``deformed'' case we would like to mention the approach based on the factorization of the Yang--Baxter operators, see \cite{ChiDerKarKir13} and references therein, and the approach based on the concept of $q$-cha\-rac\-ters~\cite{FreRes99} developed in the paper \cite{FreHer13}. It would be interesting to generalize the approach based on formulas for group characters \cite{KazLeuTsu12, AleKazLeuTsuZab13} to the ``deformed'' case.
\vskip .5em
{\em Acknowledgements.\/} This work was supported in part by the DFG grant KL \hbox{645/10-1} and by the Volkswagen Foundation. Kh.S.N. and A.V.R. were supported in part by the RFBR grants \#~13-01-00217 and \#~14-01-91335.
\section{IV. CONCLUSION}
In summary, we have systematically investigated the evolution of electronic states in the band structure of La:BaSnO$_\text{3}$ films at different La doping levels. A close connection between the transport and the spectroscopic characteristics is demonstrated. In particular, increasing the carrier concentration in the conduction band by doping is observed to significantly affect the core and valence band spectra. The Sn~3d core line shape presents a pronounced asymmetry variation with the carrier density, and is fitted following the plasmon model applicable to metallic systems. Scans around the valence band spectra allowed the detection of the occupied states in the conduction bands. It is determined that surface contamination could potentially induce surface carrier accumulation, as supported by the increase in the intensity of the CBM detected on the surface exposed to contamination. This study presents a detailed characterization of the chemical composition of the near-surface region of La:BaSnO$_\text{3}$, and it provides a better picture of the interplay between the doping concentration, electronic band structure and transport properties of epitaxial La:BaSnO$_\text{3}$ films.
\\ \\
\indent B. P. Doyle, A. P. Nono Tchiomo, A. R. E. Prinsloo and E. Carleschi acknowledge funding support from the National Research Foundation (NRF) of South Africa under Grant Nos. 93205, 90698, 99030, and 111985. W. Sigle and P. van Aken acknowledge funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 823717 -ESTEEM3.
\section{Introduction}
A flexible and versatile way to model dependence is via copulas. A
fundamental tool for inference is the empirical copula, which basically
is equal to the empirical distribution function of the sample of
multivariate ranks, rescaled to the unit interval. The asymptotic
behavior of the empirical copula process was studied in, amongst
others, Stute \cite{stute1984}, G{\"a}nssler and Stute
\cite{gaensslerstute1987},
Chapter~5, van~der Vaart and Wellner \cite{vandervaartwellner1996}, page~389,
Tsukahara \cite{tsukahara2000,tsukahara2005},
Fermanian \textit{et al.} \cite{fermanianradulovicwegkamp2004},
Ghoudi and R{\'e}millard \cite{ghoudiremillard2004}, and
van~der Vaart and Wellner~\cite{vandervaartwellner2007}. Weak
convergence is shown typically for copulas that are continuously
differentiable on the closed hypercube, and rates of convergence of
certain remainder terms have been established for copulas that are
twice continuously differentiable on the closed hypercube.
Unfortunately, for many (even most) popular copula families, even the
first-order partial derivatives of the copula fail to be continuous at
some boundary points of the hypercube.
\begin{example}[(Tail dependence)]\label{extaildep}
Let $C$ be a bivariate copula with first-order partial derivatives $\dot
{C}_1$ and $\dot{C}_2$ and positive lower tail dependence coefficient
$\lambda= \lim_{u \downarrow0} C(u, u)/u > 0$. On the one hand, $\dot
{C}_1(u, 0) = 0$ for all $u \in[0, 1]$ by the fact that $C(u, 0) = 0$
for all $u \in[0, 1]$. On the other hand, $\dot{C}_1(0, v) = \lim_{u
\downarrow0} C(u, v)/u \ge\lambda> 0$ for all $v \in(0,1]$. It
follows that $\dot{C}_1$ cannot be continuous at the point $(0, 0)$;
similarly for~$\dot{C}_2$. For copulas with a positive upper tail
dependence coefficient, the first-order partial derivatives cannot be
continuous at the point $(1, 1)$.
\end{example}
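For a concrete numerical illustration of Example~\ref{extaildep} (our addition; the text does not single out a family), consider the Clayton copula $C(u, v) = (u^{-\theta} + v^{-\theta} - 1)^{-1/\theta}$ with $\theta = 2$, for which $\lambda = 2^{-1/\theta} > 0$. Along $v = 0$ one has $\dot{C}_1(u, 0) = 0$ identically, whereas the closed-form derivative tends to $1$ along any line $v = \mathrm{const} > 0$ and to $2^{-(1+\theta)/\theta}$ along the diagonal, so no continuous extension to $(0, 0)$ exists.

```python
theta = 2.0  # Clayton parameter; lower tail dependence coefficient is 2**(-1/theta)

def dC_du(u, v):
    # closed-form partial derivative of C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta)
    return u**(-theta - 1) * (u**(-theta) + v**(-theta) - 1)**(-1/theta - 1)

for u in (1e-2, 1e-4, 1e-6):
    print(dC_du(u, 0.5), dC_du(u, u))
# the first column approaches 1, the second approaches 2**(-3/2) ~ 0.3536,
# while the derivative vanishes on the axis v = 0 because C(u, 0) = 0 there
```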
Likewise, for the Gaussian copula with non-zero correlation parameter
$\rho$, the first-order partial derivatives fail to be continuous at
the points $(0, 0)$ and $(1, 1)$ if $\rho> 0$ and at the points $(0,
1)$ and $(1, 0)$ if $\rho< 0$; see also Example~\ref{exgaussian}
below. As a consequence, the cited results on the empirical copula
process do not apply to such copulas. This problem has been largely
ignored in the literature, and unjustified calls to the above results
abound. A~notable exception is the paper by Omelka, Gijbels, and Veraverbeke~\cite
{omelkagijbelsveraverbeke2009}. On page~3031 of that paper, it is
claimed that weak convergence of the empirical copula process still
holds if the first-order partial derivatives are continuous at $[0,
1]^2 \setminus\{(0, 0), (0, 1), (1, 0), (1, 1)\}$.
It is the aim of this paper to remedy the situation by showing that the
earlier cited results on the empirical copula process actually do hold
under a much less restrictive assumption, including indeed many copula
families that were hitherto excluded. The assumption is non-restrictive
in the sense that it is needed anyway to ensure that the candidate
limiting process exists and has continuous trajectories. The results
are stated and proved in general dimensions. When specialized to the
bivariate case, the condition is substantially weaker still than the
above-mentioned condition in Omelka, Gijbels, and Veraverbeke \cite{omelkagijbelsveraverbeke2009}.
Let $F$ be a $d$-variate cumulative distribution function (c.d.f.) with
continuous margins $F_1, \ldots, F_d$ and copula $C$, that is, $F(x) =
C(F_1(x_1), \ldots, F_d(x_d))$ for $x \in\reals^d$. Let $X_1, \ldots,
X_n$ be independent random vectors with common distribution $F$, where
$X_i = (X_{i1}, \ldots, X_{id})$. The empirical copula was defined in
Deheuvels \cite{deheuvels1979} as
\begin{equation}
\label{eempcop}
C_n(u) = F_n ( F_{n1}^{-1}(u_1), \ldots, F_{nd}^{-1}(u_d) ),\qquad
u \in[0, 1]^d,
\end{equation}
where $F_n$ and $F_{nj}$ are the empirical joint and marginal cdfs of
the sample and where $F_{nj}^{-1}$ is the marginal quantile function of
the $j$th coordinate sample; see Section~\ref{secpreliminaries} below
for details. The empirical copula $C_n$ is invariant under monotone
increasing transformations on the data, so it depends on the data only
through the ranks. Indeed, up to a difference of order $1/n$, the
empirical copula can be seen as the empirical c.d.f. of the sample of
normalized ranks, as, for instance, in R{\"u}schendorf \cite{ruschendorf1976}. For
convenience, the definition in equation~\eqref{eempcop} will be
employed throughout the paper.
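For concreteness, the rank-based version of the empirical copula, which agrees with the definition in equation~\eqref{eempcop} up to a difference of order $1/n$, can be sketched as follows (a minimal illustration of our own, not taken from the paper; the function and variable names are ours):

```python
import numpy as np

def empirical_copula(X, u):
    """Rank-based empirical copula evaluated at u in [0, 1]^d for an (n, d) sample X."""
    n, d = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1  # columnwise ranks 1..n
    return np.mean(np.all(ranks / n <= np.asarray(u), axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # independent margins: true copula C(u) = u_1 * u_2
print(empirical_copula(X, [0.5, 0.5]))  # close to C(0.5, 0.5) = 0.25
```

The estimate fluctuates around the true copula value at the usual $n^{-1/2}$ rate, which is precisely the scaling used in the empirical copula process.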
The empirical copula process is defined by
\begin{equation}
\label{eCCn}
\CC_n = \sqrt{n} (C_n - C),
\end{equation}
to be seen as a random function on $[0, 1]^d$. We are essentially
interested in the asymptotic distribution of $\CC_n$ in the space $\ell
^\infty([0, 1]^d)$ of bounded functions from $[0, 1]^d$ into $\reals$
equipped with the topology of uniform convergence. Weak convergence is
to be understood in the sense used in the monograph by van~der Vaart and Wellner \cite
{vandervaartwellner1996}, in particular their Definition~1.3.3.
Although the empirical copula is itself a rather crude estimator of
$C$, it plays a crucial role in more sophisticated inference procedures
on $C$, much in the same way as the empirical c.d.f. $F_n$ is a
fundamental object for creating and understanding inference procedures
on $F$ or parameters thereof. For instance, the empirical copula is a
basic building block when estimating copula densities (Chen and Huang \cite
{chenhuang2007}, Omelka, Gijbels and Veraverbeke~\cite{omelkagijbelsveraverbeke2009}) or dependence
measures and functions (Schmid \textit{et~al.} \cite{schmidetal2010},
Genest and Segers \cite{genestsegers2010}), for testing for independence (Genest and R{\'e}millard~\cite
{genestremillard2004}, Genest, Quessy and R{\'e}millard~\cite{genestquessyremillard2007},
Kojadinovic and Holmes \cite{kojadinovicholmes2009}), for testing for shape constraints (Denuit and Scaillet \cite
{denuitscaillet2004}, Scaillet~\cite{scaillet2005}, Kojadinovic and Yan~\cite{kojadinovicyan2010}), for
resampling (R\'emillard and Scaillet \cite{remillardscaillet2009}, B{\"u}cher and Dette \cite{bucherdette2010}), and
so forth.
After some preliminaries in Section~\ref{secpreliminaries}, the
principal result of the paper is given in Section~\ref{secempproc},
stating weak convergence of the empirical copula process under the
condition that for every $j \in\{1, \ldots, d\}$, the $j$th
first-order partial derivative $\dot{C}_j$ exists and is continuous on
the set $\{ u \in[0, 1]^d \dvt 0 < u_j < 1 \}$. The condition is
non-restrictive in the sense that it is necessary for the candidate
limiting process to exist and have continuous trajectories. Moreover,
the resampling method based on the multiplier central limit theorem
proposed in R\'emillard and Scaillet \cite{remillardscaillet2009} is shown to be valid under
the same condition. Section~\ref{secstute} provides a refinement of
the main result: under certain bounds on the second-order partial
derivatives that allow for explosive behavior near the boundaries, the
almost sure error bound on the remainder term in Stute \cite{stute1984} and
Tsukahara \cite{tsukahara2005} can be entirely recovered. The result hinges on
an exponential inequality for a certain oscillation modulus of the
multivariate empirical process detailed in the \hyperref[app]{Appendix}; the inequality
is a generalization of a similar inequality in Einmahl \cite{einmahl1987} and
was communicated by Hideatsu Tsukahara. Section~\ref{secexamples}
concludes the paper with a number of examples of copulas that do or do
not verify certain sets of conditions.
\section{Preliminaries}
\label{secpreliminaries}
Let $X_i = (X_{i1}, \ldots, X_{id})$, $i \in\{1, 2, \ldots\}$, be
independent random vectors with common c.d.f.~$F$ whose margins $F_1,
\ldots, F_d$ are continuous and whose copula is denoted by~$C$. Define
$U_{ij} = F_j(X_{ij})$ for $i \in\{1, \ldots, n\}$ and $j \in\{1,
\ldots, d\}$. The random vectors $U_i = (U_{i1}, \ldots, U_{id})$
constitute an i.i.d. sample from $C$. Consider the following empirical
distribution functions: for $x \in\reals^d$ and for $u \in[0, 1]^d$,
\begin{eqnarray*}
F_n(x) &= &\frac{1}{n} \sum_{i=1}^n \ind_{(-\infty, x]}(X_i), \qquad
F_{nj}(x_j) = \frac{1}{n} \sum_{i=1}^n \ind_{(-\infty, x_j]}(X_{ij}),
\\
G_n(u) &=& \frac{1}{n} \sum_{i=1}^n \ind_{[0, u]}(U_i),\qquad
G_{nj}(u_j) = \frac{1}{n} \sum_{i=1}^n \ind_{[0, u_j]}(U_{ij}).
\end{eqnarray*}
Here, order relations on vectors are to be interpreted componentwise,
and $\ind_A(x)$ is equal to $1$ or~$0$ according to whether $x$ is an
element of $A$ or not. Let $X_{1:n,j} < \cdots< X_{n:n,j}$ and
$U_{1:n,j} < \cdots< U_{n:n,j}$ be the vectors of ascending order
statistics of the $j$th coordinate samples $X_{1j}, \ldots, X_{nj}$ and
$U_{1j}, \ldots, U_{nj}$, respectively. The marginal quantile functions
associated to $F_{nj}$ and $G_{nj}$ are
\begin{eqnarray*}
F_{nj}^{-1}(u_j)
&= &\inf\{ x \in\reals\dvt F_{nj}(x) \ge u_j \} \\
&=&
\cases{
X_{k:n,j}, &\quad $\mbox{if $(k-1)/n < u_j \le k/n$,}$\vspace*{2pt} \cr
-\infty,& \quad$\mbox{if $u_j = 0$;}$
}
\\
G_{nj}^{-1}(u_j)
&=& \inf\{ u \in[0, 1] \dvt G_{nj}(u) \ge u_j \} \\
&=&
\cases{
U_{k:n,j}, & \quad$\mbox{if $(k-1)/n < u_j \le k/n$,}$\vspace*{2pt}\cr
0, & \quad$\mbox{if $u_j = 0$.}$}
\end{eqnarray*}
Some thought shows that $X_{ij} \le F_{nj}^{-1}(u_j)$ if and only if
$U_{ij} \le G_{nj}^{-1}(u_j)$, for all $i \in\{1, \ldots, n\}$, $j \in
\{1, \ldots, d\}$ and $u_j \in[0, 1]$. It follows that the empirical
copula in equation~\eqref{eempcop} is given by
\[
C_n(u)
= G_n ( G_{n1}^{-1}(u_1), \ldots, G_{nd}^{-1}(u_d) ).
\]
In particular, without loss of generality we can work directly with the
sample $U_1, \ldots, U_n$ from~$C$.
The empirical processes associated to the empirical distribution
functions $G_n$ and $G_{nj}$ are given by
\begin{eqnarray}
\label{ealphan}
\alpha_n(u) = \sqrt{n} \bigl( G_n(u) - C(u) \bigr), \qquad
\alpha_{nj}(u_j) = \sqrt{n} \bigl( G_{nj}(u_j) - u_j \bigr),
\end{eqnarray}
for $u \in[0, 1]^d$ and $u_j \in[0, 1]$. Note that $\alpha_{nj}(0) =
\alpha_{nj}(1) = 0$ almost surely. We have
\[
\alpha_n \dto\alpha\qquad (n \to\infty)
\]
in $\ell^\infty([0, 1]^d)$, the arrow `$\dto$' denoting weak
convergence as in Definition~1.3.3 in van~der Vaart and Wellner \cite{vandervaartwellner1996}.
The limit process $\alpha$ is a $C$-Brownian bridge, that is, a tight
Gaussian process, centered and with covariance function
\[
\cov ( \alpha(u), \alpha(v) ) = C ( u \wedge v ) - C(u) C(v),
\]
for $u, v \in[0, 1]^d$; here $u \wedge v = (\min(u_1, v_1), \ldots,
\min(u_d, v_d))$. Tightness of the process~$\alpha$ and continuity of
its mean and covariance functions imply the existence of a version of~$\alpha$ with continuous trajectories. Without loss of generality, we
assume henceforth that~$\alpha$ is such a version.
For $j \in\{1, \ldots, d\}$, let $e_j$ be the $j$th coordinate vector
in $\reals^d$. For $u \in[0, 1]^d$ such that $0 < u_j < 1$, let
\[
\dot{C}_j(u) = \lim_{h \to0} \frac{C(u + he_j) - C(u)}{h},
\]
be the $j$th first-order partial derivative of $C$, provided it exists.
\begin{condition}
\label{cdiffC}
For each $j \in\{1, \ldots, d\}$, the $j$th first-order partial
derivative $\dot{C}_j$ exists and is continuous on the set $V_{d,j} :=
\{ u \in[0, 1]^d \dvt 0 < u_j < 1 \}$.
\end{condition}
Henceforth, assume Condition~\ref{cdiffC} holds. To facilitate
notation, we will extend the domain of $\dot{C}_j$ to the whole of $[0,
1]^d$ by setting
\begin{equation}
\label{eextend}
\dot{C}_j(u) =
\cases{
\displaystyle\limsup_{h \downarrow0} \frac{C(u + he_j)}{h}, & \quad$\mbox
{if $u \in[0, 1]^d$, $u_j = 0$,}$\vspace*{2pt}\cr
\displaystyle\limsup_{h \downarrow0} \frac{C(u) - C(u - he_j)}{h}, &\quad
$\mbox{if $u \in[0, 1]^d$, $u_j = 1$.}$}
\end{equation}
In this way, $\dot{C}_j$ is defined everywhere on $[0, 1]^d$, takes
values in $[0, 1]$ (because $|C(u) - C(v)| \le\sum_{j=1}^d |u_j -
v_j|$), and is continuous on the set $V_{d,j}$, by virtue of
Condition~\ref{cdiffC}. Also note that $\dot{C}_j(u) = 0$ as soon as
$u_i = 0$ for some $i \ne j$.
\section{Weak convergence}
\label{secempproc}
In Proposition~\ref{pempproc}, Condition~\ref{cdiffC} is shown to be
sufficient for the weak convergence of the empirical copula process $\CC
_n$. In contrast to earlier results, Condition~\ref{cdiffC} does not
require existence or continuity of the partial derivatives on certain
boundaries. Although the improvement is seemingly small, it
dramatically enlarges the set of copulas to which it applies; see
Section~\ref{secexamples}. Similarly, the unconditional multiplier
central limit theorem for the empirical copula process based on
estimated first-order partial derivatives continues to hold
(Proposition~\ref{pmclt}). This result is useful as a justification of
certain resampling procedures that serve to compute critical values for
test statistics based on the empirical copula in case of a composite
null hypothesis, for instance, in the context of goodness-of-fit
testing as in~Kojadinovic and Yan \cite{kojadinovicyan2010}.
Assume first that the first-order partial derivatives $\dot{C}_j$ exist
and are continuous throughout the closed hypercube $[0, 1]^d$. For $u
\in[0, 1]^d$, define
\begin{equation}
\label{eCC}
\CC(u) = \alpha(u) - \sum_{j=1}^d \dot{C}_j(u) \alpha_j(u_j),
\end{equation}
where $\alpha_j(u_j) = \alpha(1, \ldots, 1, u_j, 1, \ldots, 1)$, the
variable $u_j$ appearing at the $j$th entry. By continuity of $\dot
{C}_j$ throughout $[0, 1]^d$, the trajectories of $\CC$ are continuous.
From Fermanian \textit{et al.} \cite{fermanianradulovicwegkamp2004} and Tsukahara \cite{tsukahara2005}
we learn that $\CC_n \dto\CC$ as $n \to\infty$ in the space $\ell
^\infty([0, 1]^d)$.
The structure of the limit process $\CC$ in equation~\eqref{eCC} can
be understood as follows. The first term, $\alpha(u)$, would be there
even if the true margins $F_j$ were used rather than their empirical
counterparts $F_{nj}$. The terms $- \dot{C}_j(u) \alpha_j(u_j)$
encode the impact of not knowing the true quantiles $F_j^{-1}(u_j)$ and
having to replace them by the empirical quantiles $F_{nj}^{-1}(u_j)$.
The minus sign comes from the Bahadur--Kiefer result stating that $\sqrt
{n}(G_{nj}^{-1}(u_j) - u_j)$ is asymptotically indistinguishable from
$- \sqrt{n} (G_{nj}(u_j) - u_j)$; see, for instance, Shorack and Wellner \cite
{shorackwellner1986}, Chapter~15. The partial derivative $\dot
{C}_j(u)$ quantifies the sensitivity of $C$ with respect to small
deviations in the $j$th margin.
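The Bahadur--Kiefer approximation invoked here is easy to observe in simulation: for a univariate uniform sample, the quantile process $\sqrt{n} (G_{nj}^{-1}(u_j) - u_j)$ and the negated empirical process $-\alpha_{nj}(u_j)$ nearly coincide. A minimal sketch (NumPy assumed; the sample size and evaluation grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
U = np.sort(rng.uniform(size=n))          # one uniform coordinate sample, sorted

grid = np.linspace(0.01, 0.99, 99)
# empirical process alpha_n(u) = sqrt(n) (G_n(u) - u)
alpha = np.sqrt(n) * (np.searchsorted(U, grid, side="right") / n - grid)
# quantile process sqrt(n) (G_n^{-1}(u) - u), with G_n^{-1}(u) = U_{k:n}, k = ceil(n u)
quant = np.sqrt(n) * (U[np.ceil(n * grid).astype(int) - 1] - grid)

# Bahadur--Kiefer: the sum quant + alpha is an order of magnitude smaller
# than either process on its own
print(np.max(np.abs(quant + alpha)), np.max(np.abs(alpha)))
```

The maximal discrepancy on the grid is of the order $n^{-1/4}$ up to logarithmic factors, whereas the processes themselves are of order one.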
Now consider the same process $\CC$ as in equation~\eqref{eCC} but
under Condition~\ref{cdiffC} and with the domain of the partial
derivatives extended to $[0, 1]^d$ as in equation~\eqref{eextend}.
Since the trajectories of~$\alpha$ are continuous and since $\alpha
_j(0) = \alpha_j(1) = 0$ for each $j \in\{1, \ldots, d\}$, the
trajectories of $\CC$ are continuous, even though $\dot{C}_j$ may fail
to be continuous at points $u \in[0, 1]^d$, such that $u_j \in\{0, 1\}
$. The process $\CC$ is the weak limit in $\ell^\infty([0, 1]^d)$ of
the sequence of processes
\begin{equation}
\label{eCCntilde}
\tilde{\CC}_n(u) = \alpha_n(u) - \sum_{j=1}^d \dot{C}_j(u) \alpha
_{nj}(u_j), \qquad u \in[0, 1]^d.
\end{equation}
The reason is that the map from $\ell^\infty([0, 1]^d)$ into itself
that sends a function $f$ to $f - \sum_{j = 1}^d \dot{C}_j \pi
_j(f)$, where $(\pi_j(f))(u) = f(1, \ldots, 1, u_j, 1, \ldots, 1)$, is
linear and bounded.
\begin{proposition}
\label{pempproc}
If Condition~\ref{cdiffC} holds, then, with $\tilde{\CC}_n$ as in
equation~\eqref{eCCntilde},
\[
\sup_{u \in[0, 1]^d} | \CC_n(u) - \tilde{\CC}_n(u) | \pto0\qquad
(n \to\infty).
\]
As a consequence, in $\ell^\infty([0, 1]^d)$,
\[
\CC_n \dto\CC\qquad (n \to\infty).
\]
\end{proposition}
\begin{pf}
It suffices to show the first statement of the proposition. For $u \in
[0, 1]^d$, put
\[
R_n(u) = | \CC_n(u) - \tilde{\CC}_n(u) |, \qquad u \in[0, 1]^d.
\]
If $u_j = 0$ for some $j \in\{1, \ldots, d\}$, then obviously $\CC
_n(u) = \tilde{\CC}_n(u) = 0$, so $R_n(u) = 0$ as well. The vector of
marginal empirical quantiles is denoted by
\begin{equation}
\label{evn}
v_n(u) = ( G_{n1}^{-1}(u_1), \ldots, G_{nd}^{-1}(u_d) ),
\qquad u \in[0, 1]^d.
\end{equation}
We have
\begin{eqnarray}
\label{edecomp1}
\CC_n(u) &= &\sqrt{n} \bigl( C_n(u) - C(u) \bigr) \nonumber\\
&= &\sqrt{n}
\{
G_n ( v_n(u) ) - C ( v_n(u) )
\}
+ \sqrt{n}
\{
C ( v_n(u) ) - C(u)
\} \\
&=& \alpha_n ( v_n(u) )
+ \sqrt{n}
\{
C ( v_n(u) ) - C(u)
\}.\nonumber
\end{eqnarray}
Since $\alpha_n$ converges weakly in $\ell^\infty([0, 1]^d)$ to a
$C$-Brownian bridge $\alpha$, whose trajectories are continuous, the
sequence $(\alpha_n)_n$ is asymptotically uniformly equicontinuous; see
Theorem~1.5.7 and Addendum~1.5.8 in van~der Vaart and Wellner \cite{vandervaartwellner1996}. As
$\sup_{u_j \in[0, 1]} | G_{nj}^{-1}(u_j) - u_j | \to0$ almost surely,
it follows that
\[
\sup_{u \in[0, 1]^d}
|
\alpha_n ( v_n(u) )
- \alpha_n(u)
|
\pto0 \qquad (n \to\infty).
\]
Fix $u \in[0, 1]^d$. Put $w(t) = u + t \{ v_n(u) - u \}$ and $f(t) =
C(w(t))$ for $t \in[0, 1]$. If $u \in(0, 1]^d$, then $v_n(u) \in(0,
1)^d$, and therefore $w(t) \in(0, 1)^d$ for all $t \in(0, 1]$, as
well. By Condition~\ref{cdiffC}, the function $f$ is continuous on
$[0, 1]$ and continuously differentiable on $(0, 1)$. By the mean value
theorem, there exists $t^* = t_n(u) \in(0, 1)$ such that $f(1) - f(0)
= f'(t^*)$, yielding
\begin{equation}
\label{edecomp2}
\sqrt{n} \{ C ( v_n(u) ) - C(u) \}
= \sum_{j=1}^d \dot{C}_j ( w(t^*) ) \sqrt{n} \bigl(
G_{nj}^{-1}(u_j) - u_j \bigr).
\end{equation}
If one or more of the components of $u$ are zero, then the above
display remains true as well, no matter how $t^* \in(0, 1)$ is
defined, because both sides of the equation are equal to zero. In
particular, if $u_k = 0$ for some $k \in\{1, \ldots, d\}$, then the
$k$th term on the right-hand side vanishes because $G_{nk}^{-1}(0) = 0$
whereas the terms with index $j \ne k$ vanish because the $k$th
component of the vector $w(t^*)$ is zero, and thus the first-order
partial derivatives $\dot{C}_j$ vanish at this point.
It is known since Kiefer \cite{kiefer1970} that
\[
\sup_{u_j \in[0, 1]} \bigl| \sqrt{n} \bigl( G_{nj}^{-1}(u_j) - u_j
\bigr) + \alpha_{nj}(u_j) \bigr| \pto0\qquad (n \to\infty).
\]
Since $0 \le\dot{C}_j \le1$, we find
\[
\sup_{u \in[0, 1]^d}
\Biggl|
\sqrt{n} \{ C ( v_n(u) ) - C(u) \}
+ \sum_{j=1}^d \dot{C}_j \bigl( u + t^* \{v_n(u) - u\} \bigr) \alpha_{nj}(u_j)
\Biggr|
\pto0
\]
as $n \to\infty$. It remains to be shown that
\[
\sup_{u \in[0, 1]^d} D_{nj}(u) \pto0\qquad (n \to\infty)
\]
for all $j \in\{1, \ldots, d\}$, where
\begin{equation}
\label{eDnj}
D_{nj}(u) = \bigl| \dot{C}_j \bigl( u + t^* \{v_n(u) - u\} \bigr) - \dot
{C}_j(u) \bigr| | \alpha_{nj}(u_j) |.
\end{equation}
Fix $\eps> 0$ and $\delta\in(0, 1/2)$. Split the supremum over $u
\in[0, 1]^d$ according to the cases $u_j \in[\delta, 1 - \delta]$ on
the one hand and $u_j \in[0, \delta) \cup(1-\delta, 1]$ on the other
hand. We have
\begin{eqnarray*}
\Pr\Bigl( \sup_{u \in[0, 1]^d} D_{nj}(u) > \eps\Bigr)
&\le&\Pr\Bigl( \sup_{u \in[0, 1]^d, u_j \in[\delta, 1-\delta]}
D_{nj}(u) > \eps\Bigr) \\
&&{}+ \Pr\Bigl( \sup_{u \in[0, 1]^d, u_j \notin[\delta, 1-\delta]}
D_{nj}(u) > \eps\Bigr).
\end{eqnarray*}
Since $\sup_{u \in[0, 1]^d} |v_n(u) - u| \to0$ almost surely, since
$\dot{C}_j$ is uniformly continuous on $\{ u \in[0, 1]^d\dvt \delta/2 \le
u_j \le1 - \delta/2 \}$, and since the sequence $\sup_{u_j \in[0,
1]} |\alpha_{nj}(u_j)|$ is bounded in probability, the first
probability on the right-hand side of the previous display converges to
zero. As $|x - y| \le1$ whenever $x, y \in[0, 1]$ and since $0 \le
\dot{C}_j(w) \le1$ for all $w \in[0, 1]^d$, the second probability on
the right-hand side of the previous display is bounded by\looseness=1
\[
\Pr\Bigl( \sup_{u_j \in[0, \delta) \cup(1-\delta, 1]} | \alpha
_{nj}(u_j) | > \eps\Bigr).
\]\looseness=0
By the portmanteau lemma, the $\limsup$ of this probability as $n \to
\infty$ is bounded by
\[
\Pr\Bigl( \sup_{u_j \in[0, \delta) \cup(1-\delta, 1]} | \alpha
_{j}(u_j) | \ge\eps\Bigr).
\]
The process $\alpha_j$ being a standard Brownian bridge, the above
probability can be made smaller than an arbitrarily chosen $\eta> 0$
by choosing $\delta$ sufficiently small. We find
\[
\limsup_{n \to\infty} \Pr\Bigl( \sup_{u \in[0, 1]^d} D_{nj}(u) >
\eps\Bigr) \le\eta.
\]
As $\eta$ was arbitrary, the claim is proven.\vspace*{3pt}
\end{pf}
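For the bivariate independence copula $C(u) = u_1 u_2$, with $\dot{C}_1(u) = u_2$ and $\dot{C}_2(u) = u_1$, all ingredients of $\CC_n$ and $\tilde{\CC}_n$ are explicit, so the negligibility asserted in Proposition~\ref{pempproc} can be watched directly in a small Monte Carlo experiment. In the following sketch (NumPy assumed; names are ours), $C_n$ is computed from the normalized ranks, which agrees with the definition up to $\mathrm{O}(1/n)$:

```python
import numpy as np

def sup_diff(n, rng):
    """Grid supremum of |C_n(u) - tilde{C}_n(u)| for the bivariate
    independence copula C(u1, u2) = u1 * u2."""
    U = rng.uniform(size=(n, 2))
    # pseudo-observations: normalized ranks
    V = (np.argsort(np.argsort(U, axis=0), axis=0) + 1) / n
    g = np.linspace(0.05, 0.95, 19)
    worst = 0.0
    for u1 in g:
        for u2 in g:
            CCn = np.sqrt(n) * (np.mean((V[:, 0] <= u1) & (V[:, 1] <= u2)) - u1 * u2)
            alpha = np.sqrt(n) * (np.mean((U[:, 0] <= u1) & (U[:, 1] <= u2)) - u1 * u2)
            a1 = np.sqrt(n) * (np.mean(U[:, 0] <= u1) - u1)
            a2 = np.sqrt(n) * (np.mean(U[:, 1] <= u2) - u2)
            tilde = alpha - u2 * a1 - u1 * a2     # dotC_1 = u2, dotC_2 = u1
            worst = max(worst, abs(CCn - tilde))
    return worst

rng = np.random.default_rng(3)
for n in (100, 1000, 10000):
    print(n, sup_diff(n, rng))   # shrinks with n, consistent with the proposition
```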
An alternative to the direct proof above is to invoke the functional
delta method as in Fermanian \textit{et al.} \cite{fermanianradulovicwegkamp2004}. Required
then is a generalization of Lemma~2 in the cited paper asserting
Hadamard differentiability of a certain functional under Condition~\ref{cdiffC}.
This program is carried out for the bivariate case in B\"{u}cher \cite{buecher2011},
Lemma~2.6.
For purposes of hypothesis testing or confidence interval construction,
resampling procedures are often required; see the references in the
introduction. In Fermanian \textit{et al.} \cite{fermanianradulovicwegkamp2004}, a bootstrap
procedure for the empirical copula process is proposed, whereas in R\'emillard and Scaillet \cite
{remillardscaillet2009}, a method based on the multiplier central
limit theorem is employed. Yet another method is proposed in B{\"u}cher and Dette \cite
{bucherdette2010}. In the latter paper, the finite-sample properties
of all these methods are compared in a simulation study, and the
multiplier approach by R\'emillard and Scaillet \cite{remillardscaillet2009} is found to be
best overall. Although the latter approach requires estimation of the
first-order partial derivatives, it remains valid under Condition~\ref
{cdiffC}, allowing for discontinuities on the boundaries.\looseness=1
Let $\xi_1, \xi_2, \ldots$ be an i.i.d. sequence of random variables,
independent of the random vectors $X_1, X_2, \ldots,$ and with zero
mean, unit variance, and such that $\int_0^\infty\sqrt{\Pr(|\xi_1| >
x)} \,\diff x < \infty$. Define
\begin{equation}
\label{ealphanprime}
\alpha_n'(u)
= \frac{1}{\sqrt{n}}
\sum_{i=1}^n \xi_i
\bigl(
\ind\{ X_{i1} \le F_{n1}^{-1}(u_1), \ldots, X_{id} \le
F_{nd}^{-1}(u_d) \} - C_n(u)
\bigr).
\end{equation}
In $ ( \ell^{\infty}([0, 1]^d) )^2$, we have by Lemma~A.1 in
R\'emillard and Scaillet \cite{remillardscaillet2009},
\begin{equation}
\label{emcltalpha}
(\alpha_n, \alpha_n') \dto(\alpha, \alpha')\qquad (n \to\infty),
\end{equation}
where $\alpha'$ is an independent copy of $\alpha$. Further, let $\hat
{\dot{C}}_{nj}(u)$ be an estimator of $\dot{C}_j(u)$; for instance,
apply finite differencing to the empirical copula at a spacing
proportional to $n^{-1/2}$ as in R\'emillard and Scaillet \cite{remillardscaillet2009}. Define
\begin{equation}
\label{eCCnprime}
\CC_n'(u)
= \alpha_n'(u) - \sum_{j=1}^d \hat{\dot{C}}_{nj}(u) \alpha_{nj}'(u_j),
\end{equation}
where $\alpha_{nj}'(u_j) = \alpha_n'(1, \ldots, 1, u_j, 1, \ldots, 1)$,
the variable $u_j$ appearing at the $j$th coordinate.
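In the bivariate case, a multiplier replicate of $\CC_n'$ in equation~\eqref{eCCnprime} may be generated as in the following sketch (NumPy assumed; names are ours). Standard normal multipliers satisfy the moment condition, $\hat{\dot{C}}_{nj}$ is the finite-difference estimator with spacing of order $n^{-1/2}$, and the indicator in equation~\eqref{ealphanprime} is evaluated through the normalized ranks:

```python
import numpy as np

def multiplier_replicate(U, grid, rng):
    """One replicate C_n'(u), evaluated at each point u of `grid`."""
    n, d = U.shape
    V = (np.argsort(np.argsort(U, axis=0), axis=0) + 1) / n   # normalized ranks
    xi = rng.normal(size=n)                                   # multipliers xi_i
    h = 1.0 / np.sqrt(n)                                      # spacing of order n^{-1/2}

    def Cn(u):
        return np.mean(np.all(V <= u, axis=1))

    def alpha_prime(u):
        ind = np.all(V <= u, axis=1)
        return np.sum(xi * (ind - Cn(u))) / np.sqrt(n)

    out = []
    for u in grid:
        val = alpha_prime(u)
        for j in range(d):
            lo, hi = max(u[j] - h, 0.0), min(u[j] + h, 1.0)
            ulo, uhi = u.copy(), u.copy()
            ulo[j], uhi[j] = lo, hi
            # finite-difference estimate of dotC_j(u); it is uniformly
            # bounded, as required in Proposition pmclt
            Cdot_hat = (Cn(uhi) - Cn(ulo)) / (hi - lo)
            uj = np.ones(d)
            uj[j] = u[j]
            val -= Cdot_hat * alpha_prime(uj)                 # alpha'_{nj}(u_j)
        out.append(val)
    return np.array(out)

rng = np.random.default_rng(4)
U = rng.uniform(size=(500, 2))         # a sample from the independence copula
grid = [np.array([a, b]) for a in (0.25, 0.5, 0.75) for b in (0.25, 0.5, 0.75)]
reps = np.array([multiplier_replicate(U, grid, rng) for _ in range(200)])
print(reps.std(axis=0))                # pointwise spread of the replicates
```

The empirical standard deviations of the replicates approximate the pointwise standard deviations of the limit $\CC$, which is how critical values are obtained in practice.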
\begin{proposition}
\label{pmclt}
Assume Condition~\ref{cdiffC}. If there exists a constant $K$ such
that $|\hat{\dot{C}}_{nj}(u)| \le K$ for all $n, j, u$, and if
\begin{equation}
\label{ediffCestim}
\sup_{u \in[0, 1]^d : u_j \in[\delta, 1-\delta]} | \hat{\dot
{C}}_{nj}(u) - \dot{C}_j(u) | \pto0
\qquad (n \to\infty)
\end{equation}
for all $\delta\in(0, 1/2)$ and all $j \in\{1, \ldots, d\}$, then in
$ ( \ell^{\infty}([0, 1]^d) )^2$, we have
\[
(\CC_n, \CC_n') \dto(\CC, \CC')\qquad (n \to\infty),
\]
where $\CC'$ is an independent copy of $\CC$.
\end{proposition}
\begin{pf}
Recall the process $\alpha_n'$ in equation~\eqref{ealphanprime}, and define
\[
\tilde{\CC}_n'(u) = \alpha_n'(u) - \sum_{j=1}^d \dot{C}_j(u) \alpha
_{nj}'(u_j), \qquad u \in[0, 1]^d.
\]
The difference with the process $\CC_n'$ in equation~\eqref{eCCnprime}
is that the true partial derivatives of $C$ are used rather than the
estimated ones. By Proposition~\ref{pempproc} and equation~\eqref
{emcltalpha}, we have
\[
(\CC_n, \tilde{\CC}_n') \dto(\CC, \CC')\qquad (n \to\infty)
\]
in $ ( \ell^{\infty}([0, 1]^d) )^2$. Moreover,
\[
| \CC_n'(u) - \tilde{\CC}_n'(u) |
\le\sum_{j=1}^d | \hat{\dot{C}}_{nj}(u) - \dot{C}_j(u) |
| \alpha_{nj}'(u_j) |.
\]
It suffices to show that each of the $d$ terms on the right-hand side
converges to $0$ in probability, uniformly in $u \in[0, 1]^d$. The
argument is similar to the one at the end of the proof of
Proposition~\ref{pempproc}. Pick $\delta\in(0, 1/2)$, and split the
supremum according to the cases $u_j \in[\delta, 1-\delta]$ and $u_j
\in[0, \delta) \cup(1-\delta, 1]$. For the first case, use
equation~\eqref{ediffCestim} together with tightness of $\alpha
_{nj}'$. For the second case, use the assumed uniform boundedness of
the partial derivative estimators and the fact that the limit process
$\alpha_j'$ is a standard Brownian bridge, having continuous
trajectories and vanishing at $0$ and $1$.
\end{pf}
\section{Almost sure rate}
\label{secstute}
Recall the empirical copula process $\CC_n$ in equation~\eqref{eCCn}
together with its approximation~$\tilde{\CC}_n$ in equation~\eqref
{eCCntilde}. If the second-order partial derivatives of $C$ exist and
are continuous on $[0, 1]^d$, then the original result by Stute \cite
{stute1984}, proved in detail in Tsukahara~\cite{tsukahara2000}, reinforces the
first claim of Proposition~\ref{pempproc} to
\begin{eqnarray}
\label{estute}
&&\sup_{u \in[0, 1]^d} |\CC_n(u) - \tilde{\CC}_n(u)|
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\quad= \mathrm{O} ( n^{-1/4} (\log n)^{1/2} (\log\log n)^{1/4} )\qquad
(n \to\infty) \mbox{ almost surely.}
\end{eqnarray}
For many copulas, however, the second-order partial derivatives explode
near certain parts of the boundaries. The question then is how this
affects the above rate. Recall $V_{d,j} = \{ u \in[0, 1]^d \dvt 0 < u_j <
1 \}$ for $j \in\{1, \ldots, d\}$.
\begin{condition}
\label{cdiffCK}
For every $i, j \in\{1, \ldots, d\}$, the second-order partial
derivative $\ddot{C}_{ij}$ is defined and continuous on the set
$V_{d,i} \cap V_{d,j}$, and there exists a constant $K > 0$ such that
\[
|\ddot{C}_{ij}(u)| \le K \min\biggl( \frac{1}{u_i(1-u_i)} , \frac
{1}{u_j(1-u_j)} \biggr), \qquad u \in V_{d,i} \cap V_{d,j}.
\]
\end{condition}
Condition~\ref{cdiffCK} holds, for instance, for absolutely continuous
bivariate Gaussian copulas and for bivariate extreme-value copulas
whose Pickands dependence functions are twice continuously
differentiable and satisfy a certain bound; see Section~\ref{secexamples}.
Under Condition~\ref{cdiffCK}, the rate in equation~\eqref{estute}
can be entirely recovered. The following proposition has benefited from
a suggestion of John H.J. Einmahl leading to an improvement of a
result in an earlier version of the paper claiming a slightly slower
rate. Furthermore, part of the proof is an adaptation due to Hideatsu
Tsukahara of the end of the proof of Theorem~4.1 in Tsukahara \cite
{tsukahara2000}, upon which the present result is based.
\begin{proposition}
\label{pempprocrate}
If Conditions~\ref{cdiffC} and~\ref{cdiffCK} are verified, then
equation~\eqref{estute} holds.
\end{proposition}
\begin{pf}
Combining equations~\eqref{edecomp1} and \eqref{edecomp2} in the
proof of Proposition~\ref{pempproc} yields
\[
\CC_n(u) = \alpha_n (v_n(u) ) + \sum_{j=1}^d \dot{C}_j (
w(t^*) ) \sqrt{n} \bigl( G_{nj}^{-1}(u_j) - u_j \bigr),
\qquad u \in[0, 1]^d,
\]
with $\alpha_n$ the ordinary multivariate empirical process in
equation~\eqref{ealphan}, $v_n(u)$ the vector of marginal empirical
quantiles in equation~\eqref{evn}, and $w(t^*) = u + t^* \{ v_n(u) - u
\}$ a certain point on the line segment between $u$ and $v_n(u)$ with
local coordinate $t^* = t_n(u) \in(0, 1)$. In view of the definition
of $\tilde{\CC}_n(u)$ in equation~\eqref{eCCntilde}, it follows that
\[
\sup_{u \in[0, 1]^d} |\CC_n(u) - \tilde{\CC}_n(u)| \le\I_n + \II_n +
\III_n,
\]
where
\begin{eqnarray*}
\I_n &=& \sup_{u \in[0, 1]^d} | \alpha_n ( v_n(u) ) -
\alpha_n(u) |, \\
\II_n &= &\sum_{j=1}^d \sup_{u \in[0, 1]^d} \bigl| \sqrt{n} \bigl(
G_{nj}^{-1}(u_j) - u_j \bigr) + \alpha_{nj}(u_j) \bigr|, \\
\III_n &=& \sum_{j=1}^d \sup_{u \in[0, 1]^d} D_{nj}(u),
\end{eqnarray*}
with $D_{nj}(u)$ as defined in equation~\eqref{eDnj}. By Kiefer \cite
{kiefer1970}, the term $\II_n$ is $\mathrm{O} ( n^{-1/4} (\log n)^{1/2}
\times (\log\log n)^{1/4} )$ as $n \to\infty$, almost surely. It
suffices to show that the same almost sure rate is valid for $\I_n$ and
$\III_n$, too.
\textit{The term $\I_n$.}
The argument is adapted from the final part of the proof of Theorem~4.1
in Tsukahara \cite{tsukahara2000}, and its essence was kindly provided by
Hideatsu Tsukahara. We have
\[
\I_n \le M_n (A_n), \qquad A_n = \max_{j \in\{1, \ldots, d\}} \sup_{u_j
\in[0, 1]} | G_{nj}^{-1}(u_j) - u_j |,
\]
where $M_n(a)$ is the oscillation modulus of the multivariate empirical
process $\alpha_n$ defined in equation~\eqref{eoscillation}. We will
employ the exponential inequality for $\Pr\{ M_n(a) \ge\lambda\}$
stated in Proposition~\ref{poscillation}, which generalizes
Inequality~3.5 in Einmahl \cite{einmahl1987}. Set $a_n = n^{-1/2} (\log\log
n)^{1/2}$. By the Chung--Smirnov law of the iterated logarithm for
empirical distribution functions (see, e.g., Shorack and Wellner \cite
{shorackwellner1986}, page 504),
\begin{eqnarray}
\label{eLIL}
\limsup_{n \to\infty} \frac{1}{a_n} \sup_{u_j \in[0, 1]} |
G_{nj}^{-1}(u_j) - u_j |
&= &\limsup_{n \to\infty} \frac{1}{a_n} \sup_{v_j \in[0, 1]} | v_j -
G_{nj}(v_j) |
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&= &1/\sqrt{2}\qquad \mbox{almost surely}.
\end{eqnarray}
Choose $\lambda_n = 2 K_2^{-1/2} n^{-1/4} (\log n)^{1/2} (\log\log
n)^{1/4}$ for $K_2$ as in Proposition~\ref{poscillation}. Since
$\lambda_n / (n^{1/2} a_n) \to0$ as $n \to\infty$, and since the
function $\psi$ in equation~\eqref{epsi} below is decreasing with $\psi
(0) = 1$, it follows that $\psi(\lambda_n / (n^{1/2} a_n)) \ge1/2$ for
sufficiently large $n$. Furthermore, we have
\[
\sum_{n \ge2} \frac{1}{a_n} \exp\biggl( - \frac{K_2 \lambda
_n^2}{2a_n} \biggr) = \sum_{n \ge2} \frac{1}{n^{3/2} (\log\log
n)^{1/2}} < \infty.
\]
By the Borel--Cantelli lemma and Proposition~\ref{poscillation}, as
$n \to\infty$,
\[
\I_n \le M_n(A_n) \le M_n(a_n) = \mathrm{O} ( n^{-1/4} (\log n)^{1/2} (\log
\log n)^{1/4} )\qquad \mbox{almost surely}.
\]
\textit{The term $\III_n$.}
Let
\[
\delta_n = n^{-1/2} (\log n) (\log\log n)^{-1/2}.
\]
Fix $j \in\{1, \ldots, d\}$. We split the supremum of $D_{nj}(u)$ over
$u \in[0, 1]^d$ according to the cases $u_j \in[0, \delta_n) \cup
(1-\delta_n, 1]$ and $u_j \in[\delta_n, 1-\delta_n]$.
Since $0 \le\dot{C}_j \le1$, the supremum over $u \in[0, 1]^d$ such
that $u_j \in[0, \delta_n) \cup(1-\delta_n, 1]$ is bounded by
\[
\sup_{u \in[0, 1]^d: u_j \in[0, \delta_n) \cup(1-\delta_n, 1]}
D_{nj}(u) \le\sup_{u_j \in[0, \delta_n) \cup(1-\delta_n, 1]} |\alpha
_{nj}(u_j)|.
\]
By Theorem~2.(iii) in Einmahl and Mason \cite{einmahlmason1988} applied to $(d, \nu,
k_n) = (1, 1/2, n \delta_n)$, the previous supremum is of the order
\begin{eqnarray}
\label{ebound1}
\sup_{u_j \in[0, \delta_n) \cup(1-\delta_n, 1]} |\alpha_{nj}(u_j)| &= &\mathrm{O}
( \delta_n^{1/2} (\log\log n)^{1/2} )
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=&\mathrm{O} ( n^{-1/4} (\log n)^{1/2} (\log\log n)^{1/4} ) \qquad
(n \to\infty) \mbox{ almost surely.}\qquad
\end{eqnarray}
Next let $u \in[0, 1]^d$ be such that $\delta_n \le u_j \le1-\delta
_n$. By Lemma~\ref{lincr} below and by convexity of the function $(0,
1) \ni s \mapsto1/\{s(1-s)\}$,
\begin{eqnarray*}
D_{nj}(u)
&= & \bigl| \dot{C}_j \bigl( u + t^* \{v_n(u) - u\} \bigr) - \dot
{C}_j(u) \bigr| | \alpha_{nj}(u_j) | \\
&\le& K \max\biggl( \frac{1}{u_j(1-u_j)} , \frac
{1}{G_{nj}^{-1}(u_j) (1 - G_{nj}^{-1}(u_j))} \biggr) \norm{ v_n(u) -
u }_1 | \alpha_{nj}(u_j) |,
\end{eqnarray*}
with $\norm{x}_1 = \sum_{j=1}^d |x_j|$. Let $b_n = (\log n)^{1/2}
\log\log n$; clearly $\sum_{n=2}^\infty n^{-1} b_n^{-2} < \infty$. By
Cs{\'a}ki~\cite{csaki1974} or Mason \cite{mason1981},
\[
\Pr\biggl( \sup_{0 < s < 1} \frac{|\alpha_{nj}(s)|}{(s(1-s))^{1/2}} >
b_n \mbox{ infinitely often} \biggr) = 0.
\]
It follows that, with probability one, for all sufficiently large $n$,
\[
|\alpha_{nj}(u_j)| \le \bigl( u_j(1-u_j) \bigr)^{1/2} b_n,\qquad
u_j \in[0, 1].
\]
Let $I$ denote the identity function on $[0, 1]$, and let $\norm{ \cdot
}_\infty$ denote the supremum norm. For $u_j \in[\delta_n, 1-\delta_n]$,
\begin{eqnarray*}
G_{nj}^{-1}(u_j) &=& u_j \biggl ( 1 + \frac{G_{nj}^{-1}(u_j) -
u_j}{u_j} \biggr)\ge u_j \biggl ( 1 - \frac{\norm{G_{nj}^{-1} - I}_\infty}{\delta_n}
\biggr), \\
1 - G_{nj}^{-1}(u_j) &\ge&(1-u_j) \biggl( 1 - \frac{\norm{G_{nj}^{-1}
- I}_\infty}{\delta_n} \biggr).
\end{eqnarray*}
By the law of the iterated logarithm (see \eqref{eLIL})
\[
\norm{G_{nj}^{-1} - I}_\infty = \mathrm{o}(\delta_n)\qquad (n \to\infty) \mbox{ almost surely.}
\]
We find that with probability one, for all sufficiently large $n$ and
for all $u \in[0, 1]^d$ such that $u_j \in[\delta_n, 1 - \delta_n]$,
\[
D_{nj}(u) \le2K \bigl(u_j(1-u_j) \bigr)^{-1/2} \norm{v_n(u) -
u}_1 b_n.
\]
We use again the law of the iterated logarithm in \eqref{eLIL} to
bound $\norm{ v_n(u) - u }_1$. As a~consequence, with probability one,
\begin{eqnarray}
\label{ebound2}
\sup_{u \in[0, 1]^d : u_j \in[\delta_n, 1-\delta_n]} D_{nj}(u)
&=& \mathrm{O} ( \delta_n^{-1/2} (\log\log n)^{1/2} n^{-1/2} b_n
)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \mathrm{O} ( n^{-1/4} (\log\log n)^{7/4} )\qquad
(n \to\infty) \mbox{ almost surely}.
\end{eqnarray}
The bound in \eqref{ebound2} is dominated by the one in \eqref
{ebound1}. The latter therefore yields the total rate.
\end{pf}
\begin{lemma}
\label{lincr}
If Conditions~\ref{cdiffC} and~\ref{cdiffCK} hold, then
\begin{equation}
\label{eincr}
|\dot{C}_j(v) - \dot{C}_j(u)| \le K \max\biggl( \frac
{1}{u_j(1-u_j)} , \frac{1}{v_j(1-v_j)} \biggr) \norm{v - u}_1,
\end{equation}
for every $j \in\{1, \ldots, d\}$ and for every $u, v \in[0, 1]^d$
such that $0 < u_j < 1$ and $0 < v_j < 1$; here $\norm{x}_1 = \sum
_{i=1}^d |x_i|$ denotes the $L_1$-norm.
\end{lemma}
\begin{pf}
Fix $j \in\{1, \ldots, d\}$ and $u, v \in[0, 1]^d$ such that $u_j,
v_j \in(0, 1)$. Consider the line segment $w(t) = u + t(v-u)$ for $t
\in[0, 1]$, connecting $w(0) = u$ with $w(1) = v$; put $w_i(t) = u_i +
t(v_i - u_i)$ for $i \in\{1, \ldots, d\}$. Clearly $0 < w_j(t) < 1$
for all $t \in[0, 1]$. Next, consider the function $f(t) = \dot
{C}_j(w(t))$ for $t \in[0, 1]$. The function $f$ is continuous on $[0,
1]$ and continuously differentiable on $(0, 1)$. Indeed, if $u_i \ne
v_i$ for some $i \in\{1, \ldots, d\}$, then $0 < w_i(t) < 1$ for all
$t \in(0, 1)$; if $u_i = v_i$, then $w_i(t) = u_i = v_i$ does not
depend on $t$ at all. Hence, the derivative of $f$ in $t \in(0, 1)$ is
given by
\[
f'(t) = \sum_{i \in\mathcal{I}} (v_i - u_i) \ddot{C}_{ij}(w(t)),
\]
where $\mathcal{I} = \{ i \in\{1, \ldots, d\} \dvt u_i \ne v_i \}$. By
the mean-value theorem, we obtain that for some $t^* \in(0, 1)$,
\[
\dot{C}_j(v) - \dot{C}_j(u) = f(1) - f(0) = f'(t^*) = \sum_{i \in
\mathcal{I}} (v_i - u_i) \ddot{C}_{ij}(w(t^*)).
\]
As a consequence,
\[
|\dot{C}_j(u) - \dot{C}_j(v)| \le\norm{v-u}_1 \max_{i \in\mathcal
{I}} \sup_{0 < t < 1} |\ddot{C}_{ij}(w(t))|.
\]
By Condition~\ref{cdiffCK},
\[
|\dot{C}_j(u) - \dot{C}_j(v)| \le\norm{v-u}_1 K \sup_{0 < t < 1}
\frac{1}{w_j(t) \{ 1 - w_j(t) \}}.
\]
Finally, since the function $s \mapsto1 / \{ s (1-s) \}$ is convex on
$(0, 1)$ and since $w_j(t)$ is a convex combination of $u_j$ and $v_j$,
the supremum of $1 / [ w_j(t) \{ 1 - w_j(t) \} ]$ over $t \in[0, 1]$
must be attained at one of the endpoints $u_j$ or $v_j$. Equation~\eqref
{eincr} follows.
\end{pf}
\section{Examples}
\label{secexamples}
\begin{example}[(Gaussian copula)]
\label{exgaussian}
Let $C$ be the $d$-variate Gaussian copula with correlation matrix $R
\in\reals^{d \times d}$, that is,
\[
C(u) = \Pr\Biggl( \bigcap_{j=1}^d \{ \Phi(X_j) \le u_j \} \Biggr),
\qquad u \in[0, 1]^d,
\]
where $X = (X_1, \ldots, X_d)$ follows a $d$-variate Gaussian
distribution with zero means, unit variances, and correlation matrix
$R$; here $\Phi$ is the standard normal c.d.f. It can be checked readily
that if the correlation matrix $R$ is of full rank, then Condition~\ref
{cdiffC} is verified, and Propositions~\ref{pempproc} and~\ref
{pmclt} apply.
Still, if $0 < \rho_{1j} = \corr(X_1, X_j) < 1$ for all $j \in\{2,
\ldots, d\}$, then on the one hand we have $\lim_{u_1 \downarrow0} \dot
{C}_1(u_1, u_{-1}) = 1$ for all $u_{-1} \in(0, 1]^{d-1}$, whereas on
the other hand we have $\dot{C}_1(u) = 0$ as soon as $u_j = 0$ for some
$j \in\{2, \ldots, d\}$. As a consequence, $\dot{C}_1$ cannot be
extended continuously to the set $\{0\} \times([0, 1]^{d-1} \setminus
(0, 1]^{d-1})$.
In the bivariate case, Condition~\ref{cdiffCK} can be verified by
direct calculation, provided the correlation parameter $\rho$ satisfies
$|\rho| < 1$.
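In the bivariate case, the boundary behaviour described above can be illustrated numerically via the classical closed form $\dot{C}_1(u, v) = \Phi\bigl( \{\Phi^{-1}(v) - \rho \Phi^{-1}(u)\} / \sqrt{1 - \rho^2} \bigr)$, a standard fact stated here for illustration only:

```python
from scipy.stats import norm

def dC1_gaussian(u, v, rho):
    """First-order partial derivative in u of the bivariate Gaussian copula."""
    a = (norm.ppf(v) - rho * norm.ppf(u)) / (1.0 - rho ** 2) ** 0.5
    return norm.cdf(a)

# At the centre of the unit square the derivative equals 1/2 by symmetry ...
assert abs(dC1_gaussian(0.5, 0.5, 0.5) - 0.5) < 1e-12
# ... while for 0 < rho < 1 it tends to 1 as u tends to 0, illustrating the
# boundary discontinuity discussed above.
assert dC1_gaussian(1e-12, 0.5, 0.5) > 0.99
```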
\end{example}
\begin{example}[(Archimedean copulas)]
\label{exarchimedean}
Let $C$ be a $d$-variate Archimedean copula; that is,
\[
C(u) = \phi^{-1} \bigl( \phi(u_1) + \cdots+ \phi(u_d) \bigr), \qquad u
\in[0, 1]^d,
\]
where the generator $\phi\dvtx [0, 1] \to[0, \infty]$ is convex,
decreasing, finite on $(0, 1]$, and vanishes at~$1$, whereas $\phi^{-1}
\dvtx [0, \infty) \to[0, 1]$ is its generalized inverse, $\phi^{-1}(x) =
\inf\{ u \in[0, 1] \dvt \phi(u) \le x \}$; in fact, if $d \ge3$, more
conditions on $\phi$ are required for $C$ to be a copula; see McNeil and Ne{\v{s}}lehov{\'a}~\cite
{mcneilneslehova2009}.
Suppose $\phi$ is continuously differentiable on $(0, 1]$ and $\phi
'(0+) = -\infty$. Then the first-order partial derivatives of $C$ are
given by
\[
\dot{C}_j(u) = \frac{\phi'(u_j)}{\phi'(C(u))},\qquad u \in[0, 1]^d,
0 < u_j < 1.
\]
If $u_i = 0$ for some $i \ne j$, then $C(u) = 0$ and $\phi'(C(u)) = -
\infty$, so indeed $\dot{C}_j(u) = 0$. We find that Condition~\ref
{cdiffC} is verified, so Propositions~\ref{pempproc} and~\ref{pmclt} apply.
In contrast, $\dot{C}_j$ may easily fail to be continuous at some
boundary points. For instance, if $\phi'(1) = 0$, then $\dot{C}_j$
cannot be extended continuously at $(1, \ldots, 1)$. Or if $\phi^{-1}$
is long-tailed, that is, if $\lim_{x \to\infty} \phi^{-1}(x+y) / \phi
^{-1}(x) = 1$ for all $y \in\reals$, then $\lim_{u_1 \downarrow0}
C(u_1, u_{-1})/u_1 = 1$ for all $u_{-1} \in(0, 1]^{d-1}$, whereas $\dot
{C}_1(u) = 0$ as soon as $u_j = 0$ for some $j \in\{2, \ldots, d\}$;
it follows that $\dot{C}_1$ cannot be extended continuously to the set
$\{0\} \times([0, 1]^{d-1} \setminus(0, 1]^{d-1})$.
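As a concrete instance (an illustration only), the bivariate Clayton generator $\phi(u) = (u^{-\theta} - 1)/\theta$ with $\theta > 0$ satisfies the assumptions above: $\phi'(u) = -u^{-\theta-1}$, so $\phi'(0+) = -\infty$. The sketch below cross-checks the displayed formula for $\dot{C}_j$ against a central finite difference.

```python
theta = 2.0

def C(u, v):
    # bivariate Clayton copula with generator phi(u) = (u**-theta - 1)/theta
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def phi_prime(u):
    return -u ** (-theta - 1.0)

def dC1(u, v):
    # dot C_1(u, v) = phi'(u) / phi'(C(u, v)), as displayed above
    return phi_prime(u) / phi_prime(C(u, v))

u, v, h = 0.3, 0.6, 1e-6
fd = (C(u + h, v) - C(u - h, v)) / (2.0 * h)   # central finite difference
assert abs(dC1(u, v) - fd) < 1e-6
```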
\end{example}
\begin{example}[(Extreme-value copulas)]
\label{exevc}
Let $C$ be a $d$-variate extreme-value copula; that is,
\[
C(u) = \exp ( - \ell( - \log u_1, \ldots, - \log u_d ) ),
\qquad u \in(0, 1]^d,
\]
where the tail dependence function $\ell\dvtx [0, \infty)^d \to[0, \infty
)$ verifies
\[
\ell(x) = \int_{\Delta_{d-1}} \max_{j \in\{1, \ldots, d\}} (w_j x_j)
H(\mathrm{d}w), \qquad x \in[0, \infty)^d,
\]
with $H$ a non-negative Borel measure (called spectral measure) on the
unit simplex $\Delta_{d-1} = \{ w \in[0, 1]^d \dvt w_1 + \cdots+ w_d = 1
\}$ satisfying the $d$ constraints $\int w_j H(\mathrm{d}w) = 1$ for all $j
\in\{1, \ldots, d\}$; see, for instance, Leadbetter and Rootz{\'e}n \cite
{leadbetterrootzen1988} or Pickands \cite{pickands1989}. It can be verified
that $\ell$ is convex, is homogeneous of order $1$, and that $\max(x_1,
\ldots, x_d) \le\ell(x) \le x_1 + \cdots+ x_d$ for all $x \in[0,
\infty)^d$.
Suppose that the following holds:
\begin{equation}
\begin{tabular}{@{}p{330pt}}
\label{cdiffell}
For every $j \in\{1, \ldots, d\}$, the first-order partial derivative
$\dot{\ell}_j$ of $\ell$ with respect to $x_j$ exists and is continuous
on the set $\{ x \in[0, \infty)^d \dvt x_j > 0 \}$.
\end{tabular}
\end{equation}
Then the first-order partial derivative of $C$ in $u$ with respect to
$u_j$ exists and is continuous on the set $\{ u \in[0, 1]^d \dvt 0 < u_j
< 1 \}$. Indeed, for $u \in[0, 1]^d$ such that $0 < u_j < 1$, we have
\[
\dot{C}_j(u) =
\cases{
\displaystyle\frac{C(u)}{u_j} \dot{\ell}_j(-\log u_1, \ldots,
-\log u_d), & \quad$\mbox{if $u_i > 0$ for all $i$,} $\vspace*{2pt}\cr
0, & \quad$\mbox{if $u_i = 0$ for some $i \ne j$.}$}
\]
The properties of $\ell$ imply that $0 \le\dot{\ell}_j \le1$ for all
$j \in\{1, \ldots, d\}$. Therefore, if $u_i \downarrow0$ for some $i
\ne j$, then $\dot{C}_j(u) \to0$, as required. Hence if \eqref
{cdiffell} is verified, Condition~\ref{cdiffC} is verified as well
and Propositions~\ref{pempproc} and~\ref{pmclt} apply.
Let us consider the bivariate case in somewhat more detail. The
function $A \dvtx [0, 1] \to[1/2, 1] \dvtx t \mapsto A(t) = \ell(1-t, t)$ is
called the Pickands dependence function of $C$. It is convex and
satisfies $\max(t, 1-t) \le A(t) \le1$ for all $t \in[0, 1]$. By
homogeneity of the function~$\ell$, we have $\ell(x, y) = (x+y)
A(\frac{y}{x+y})$ for $(x, y) \in[0, \infty)^2 \setminus\{(0, 0)\}$.
If $A$ is continuously differentiable on $(0, 1)$, then \eqref
{cdiffell} holds, and Condition~\ref{cdiffC} is verified.
Nevertheless, if $A(1/2) < 1$, which is always true except in case of
independence ($A \equiv1$), the upper tail dependence coefficient $2\{
1 - A(1/2)\}$ is positive so that the first-order partial derivatives
fail to be continuous at the point $(1, 1)$; see Example~\ref
{extaildep}. One can also see that $\dot{C}_1$ will not admit a
continuous extension in the neighborhood of the point $(0, 0)$ in case
$A'(0) = -1$.
We will now verify Condition~\ref{cdiffCK} under the following
additional assumption:
\begin{equation}
\begin{tabular}{@{}p{244pt}}
\label{cdiffA}
The function $A$ is twice continuously differentiable on $(0, 1)$ and
$M = \sup_{0 < t < 1} \{ t (1-t) A''(t) \} < \infty$.
\end{tabular}
\end{equation}
In combination with Proposition~\ref{pempprocrate}, this will justify
the use of the Stute--Tsukahara almost sure rate~\eqref{estute} in the
proof of Theorem~3.2 in Genest and Segers \cite{genestsegers2009}; in particular, see
their equation~(B.3). Note that the weight function $t (1-t)$ in the
supremum in \eqref{cdiffA} is not unimportant: for the Gumbel
extreme-value copula having dependence function $A(t) = \{ t^{1/\theta}
+ (1-t)^{1/\theta} \}^\theta$ with parameter $\theta\in(0, 1]$, it
holds that $A''(t) \to\infty$ as $t \to0$ or $t \to1$ provided $1/2
< \theta< 1$, whereas condition~\eqref{cdiffA} is verified for all
$\theta\in(0, 1]$.
The copula density at the point $(u, v) \in(0, 1)^2$ is given by
\[
\ddot{C}_{12}(u, v) = \frac{C(u, v)}{uv} \biggl( \mu(t) \nu(t) -
\frac{t (1-t) A''(t)}{\log(uv)} \biggr),
\]
where
\[
t = \frac{\log(v)}{\log(uv)} \in(0, 1),\qquad \mu(t) = A(t) - t
A'(t), \nu(t) = A(t) + (1-t) A'(t).
\]
Note that if $A''(1/2) > 0$, then $\ddot{C}_{12}(w, w) \to\infty$ as
$w \uparrow1$. The properties of $A$ imply $0 \le\mu(t) \le1$ and $0
\le\nu(t) \le1$. From $-\log(x) \ge1-x$, it follows that $-1/\log
(uv) \le\min\{ 1/(1-u), 1/(1-v) \}$ for $(u, v) \in(0, 1)^2$. Since
$C(u, v) \le\min(u, v)$ and since $\min(a, b) \min(c, d) \le\min
\{ (ac), (bd) \}$ for positive numbers $a, b, c, d$, we find
\begin{eqnarray*}
0 &\le&\ddot{C}_{12}(u, v)
\le\frac{\min(u, v)}{uv} \biggl\{ 1 + M \min\biggl( \frac
{1}{1-u}, \frac{1}{1-v} \biggr) \biggr\} \\
&\le&(1 + M) \min\biggl( \frac{1}{u(1-u)} , \frac{1}{v(1-v)} \biggr).
\end{eqnarray*}
Similarly, for $(u, v) \in(0, 1) \times[0, 1]$,
\[
\ddot{C}_{11}(u, v) =
\cases{
\displaystyle\frac{C(u,v)}{u^2} \biggl( - \mu(t) \bigl( 1 - \mu
(t) \bigr) + \displaystyle\frac{t^2 (1-t) A''(t)}{\log(u)} \biggr), &\quad$\mbox{if
$0 < v < 1$},$\vspace*{2pt}\cr
0, & \quad$\mbox{if $v \in\{0, 1\}$}.$}
\]
Continuity at the boundary $v = 0$ follows from the fact that $C(u, v)
\to0$ as $v \to0$; continuity at the boundary $v = 1$ follows from
the fact that $t \to0$ and $\mu(t) \to0$ as $v \to1$. Since $- \log
(u) \le(1-u)/u$, we find, as required,
\[
0 \le- \ddot{C}_{11}(u, v)
\le\frac{(1 + M)}{u (1-u)},\qquad
(u, v) \in(0, 1) \times[0, 1].
\]
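For the Gumbel family mentioned above, with $A(t) = \{ t^{1/\theta} + (1-t)^{1/\theta} \}^\theta$ and hence $\ell(x, y) = (x^{1/\theta} + y^{1/\theta})^\theta$, the displayed formula for $\dot{C}_j$ can again be cross-checked numerically (illustration only):

```python
import math

theta = 0.7   # parameter of the Gumbel extreme-value copula, theta in (0, 1]

def ell(x, y):
    return (x ** (1.0 / theta) + y ** (1.0 / theta)) ** theta

def C(u, v):
    return math.exp(-ell(-math.log(u), -math.log(v)))

def ell_dot1(x, y):
    # partial derivative of ell in x (exists for x > 0)
    return (x ** (1.0 / theta) + y ** (1.0 / theta)) ** (theta - 1.0) \
        * x ** (1.0 / theta - 1.0)

def dC1(u, v):
    # dot C_1(u, v) = C(u, v)/u * ell_dot1(-log u, -log v)
    return C(u, v) / u * ell_dot1(-math.log(u), -math.log(v))

u, v, h = 0.4, 0.7, 1e-6
fd = (C(u + h, v) - C(u - h, v)) / (2.0 * h)   # central finite difference
assert abs(dC1(u, v) - fd) < 1e-6
```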
\end{example}
\begin{example}[(If everything fails\ldots)]
Sometimes, even Condition~\ref{cdiffC} does not hold: think, for
instance, of the Fr\'echet lower and upper bounds, $C(u, v) = \max(u +
v - 1, 0)$ and $C(u, v) = \min(u, v)$,\vadjust{\goodbreak} and of the checkerboard copula
with Lebesgue density $c = 2 \ind_{[0, 1/2]^2 \cup[1/2, 1]^2}$. In
these cases, the candidate limiting process $\CC$ has discontinuous
trajectories, and the empirical copula process does not converge weakly
in the topology of uniform convergence.
One may then wonder if weak convergence of the empirical copula process
still holds in, for instance, a Skorohod-type topology on the space of
c\`{a}dl\`{a}g functions on $[0, 1]^2$. Such a result would be useful to
derive, for instance, the asymptotic distribution of certain
functionals of the empirical copula process, for example, suprema or
integrals such as appearing in certain test statistics.
\end{example}
\begin{appendix}\label{app}
\section*{Appendix: Multivariate oscillation modulus}
Let $C$ be any $d$-variate copula and let $U_1, U_2, \ldots$ be an
i.i.d. sequence of random vectors with common cumulative distribution
function $C$. Let $\alpha_n$ be the multivariate empirical process in
equation~\eqref{ealphan}. Consider the oscillation modulus defined by
\setcounter{equation}{0}
\begin{equation}
\label{eoscillation}
M_n(a) = \sup\{ | \alpha_n(u) - \alpha_n(v) | \dvt \mbox{$u, v \in[0,
1]^d$, $|u_j-v_j| \le a$ for all $j$} \}
\end{equation}
for $a \in[0, \infty)$. Define the function $\psi\dvtx [-1,\infty) \to
(0, \infty)$ by
\begin{equation}
\label{epsi}
\psi(x) = 2x^{-2} \{ (1+x) \log(1+x) - x \}, \qquad x \in(-1, 0) \cup
(0, \infty),
\end{equation}
together with $\psi(-1) = 2$ and $\psi(0) = 1$. Note that $\psi$ is
decreasing and continuous.
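The function $\psi$ and its continuity at the removable points $x = -1$ and $x = 0$ can be evaluated directly (a numerical illustration only; \texttt{log1p} is used to avoid cancellation near $0$):

```python
import math

def psi(x):
    # psi(x) = 2 x^{-2} { (1+x) log(1+x) - x } on (-1, 0) u (0, inf),
    # extended by psi(-1) = 2 and psi(0) = 1
    if x == -1.0:
        return 2.0
    if x == 0.0:
        return 1.0
    return 2.0 * ((1.0 + x) * math.log1p(x) - x) / (x * x)

# the two special values make psi continuous ...
assert abs(psi(-1.0 + 1e-9) - 2.0) < 1e-5
assert abs(psi(1e-9) - 1.0) < 1e-5
# ... and psi is decreasing
xs = [-1.0, -0.5, 0.0, 0.5, 2.0, 10.0]
assert all(psi(a) > psi(b) for a, b in zip(xs, xs[1:]))
```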
\setcounter{proposition}{0}
\begin{proposition}[(John H. J. Einmahl, Hideatsu Tsukahara)]
\label{poscillation}
Let $C$ be any $d$-variate copula. There exist constants $K_1$ and
$K_2$, depending only on $d$, such that
\[
\Pr\{ M_n(a) \ge\lambda\} \le\frac{K_1}{a} \exp\biggl\{ - \frac
{K_2 \lambda^2}{a} \psi\biggl( \frac{\lambda}{\sqrt{n} a} \biggr)
\biggr\}
\]
for all $a \in(0, 1/2]$ and all $\lambda\in[0, \infty)$.
\end{proposition}
\begin{pf}
In Einmahl \cite{einmahl1987}, Inequality~5.3, page 73, the same bound is
proved in case $C$ is the independence copula and for $a > 0$ such that
$1/a$ is integer. As noted by Tsukahara, in a private communication,
the only property of the joint distribution that is used in the proof
is that the margins be uniform on the interval $(0, 1)$: Inequality~2.5
in Einmahl \cite{einmahl1987}, page~12, holds for any distribution on the unit
hypercube and equation~(5.19) on page~72 only involves the margins. As
a consequence, Inequality~5.3 in Einmahl \cite{einmahl1987} continues to hold
for any copula $C$. Moreover, the assumption that $1/a$ be integer is
easy to get rid of.
\end{pf}
\end{appendix}
\section*{Acknowledgments}
Input from an exceptionally large number of sources has shaped the
paper in its present form. The author is indebted to the following persons:
\textit{Axel B\"ucher}, for careful reading and for mentioning the
possibility of an alternative proof of Proposition~\ref{pempproc} via
the functional delta method as in B\"{u}cher \cite{buecher2011};
\textit{John H.J. Einmahl}, for pointing out the almost sure bound in
equation~\eqref{ebound1} on the tail empirical process and the
resulting reinforcement of the conclusion of Proposition~\ref
{pempprocrate} with respect to an earlier version of the paper;
\textit{Gordon Gudendorf}, for fruitful discussions on Condition~\ref
{cdiffCK} in the context of extreme-value copulas;
\textit{Ivan Kojadinovic}, for meticulous proofreading resulting in a~substantially lower error rate and for numerous suggestions leading to
improvements in the exposition;
\textit{Hideatsu Tsukahara}, for sharing the correction of the final part
of the proof of Theorem~4.1 in Tsukahara \cite{tsukahara2000}, summarized here
in the proof of Proposition~\ref{pempprocrate} and in Proposition~\ref
{poscillation};
\textit{the referees, the Associate Editor, and the Editor}, for timely
and constructive comments.
Funding was provided by IAP research network Grant P6/03 of the Belgian
government (Belgian Science Policy)
and by ``Projet d'actions de recherche concert\'ees'' number 07/12/002
of the Communaut\'e fran\c{c}aise de
Belgique, granted by the Acad\'emie universitaire de Louvain.
Obstacle avoidance is a significant step in autonomous driving and intelligent vehicle design.
In complex traffic environments,
moving objects such as cars and pedestrians are harder to detect and localize,
and are yet more threatening obstacles than stationary objects.
In such environments, perception modules with orientation estimation
are better at satisfying the needs for safety than those with pure object detection.
Orientation estimation modules play the important role of
predicting the heading directions and short-term tracks \cite{4621257} of the surrounding objects,
which serve as references to a set of safety measures determined and carried out by the ego vehicle,
such as path planning, speed control and risk assessment \cite{survey,benchmark,enzweiler2010integrated}.
Therefore, as one fundamental step of track prediction, orientation estimation is a vital ability
of modern perception modules in autonomous vehicles \cite{gd}.
Given the characteristics and applications of orientation estimation,
it is more suitable and more important in monocular settings,
since the information monocular cameras can obtain is much scarcer than that from other types of sensors.
With the depth information unknown, orientation becomes the only information for tracking.
From another perspective, monocular solutions are more resource-saving with less complexity,
and are vital in situations where LiDAR is not available or suitable \cite{ffnet}.
Therefore, monocular orientation estimation attracts much research interest in autonomous driving
\cite{ffnet,deep3dbox,4621257,enzweiler2010integrated}.
Since orientation estimation is based on object localization,
solutions to the problem are mostly attached to classic object detection methods.
However, apart from difficulties in object localization, orientation estimation faces several more problems,
with the main ones listed as follows:
\begin{itemize}
\item More and deeper features required:
object localization models have the ability of extracting object-level features.
On the contrary, orientation estimation requires more component-level features,
since orientation is estimated by exploiting the relative positions of
specific components with respect to the whole object.
\item Lack of prior information:
there is much less prior knowledge available for orientation estimation than for object localization
(e.g. exclusion of obviously irrational locations of cars and pedestrians).
All directions are plausible for any object.
\item Confusion between the anterior and posterior parts of objects:
the anterior and posterior parts have similar component-level features
(e.g. similar light, license plate and window positions),
which may lead orientation estimation methods to output results exactly opposite to the truth.
The problem is more severe in pedestrian orientation estimation,
since the most notable facial characteristics cannot be used for orientation estimation
(face orientation is not necessarily equal to body orientation).
\end{itemize}
Such problems are especially tough for monocular solutions:
First, depth information of objects is not available for them;
second, the anterior and posterior parts of objects cannot both be visible without occlusion
at the same time in monocular images, making the third problem even harder to solve.
In most existing orientation estimation solutions,
object localization and orientation estimation share one feature extractor,
indicating that the features of objects are still scarce in the orientation estimation process.
In the meantime, little existing research focuses on the second and the third problems.
Therefore, the above three problems still exist in orientation estimation.
We focus on the third problem stated above in monocular orientation estimation,
and propose a monocular pretraining method to enhance performance of the orientation estimation model,
while in the meantime, partially solving the first and the second problem.
The pretraining method predicts the left/right semicircle in which the orientation of the object is located.
With the pretraining method, the original orientation estimation problem is converted to
estimation of an absolute orientation value in range $[0, \pi]$.
We implement the method in both supervised and semisupervised manners,
and test their performance in typical traffic scenarios.
Experiment results with the KITTI \cite{kitti} dataset show that the proposed pretraining method
effectively improves the accuracy of orientation estimation.
A backbone model achieves state-of-the-art orientation estimation performance with the proposed method,
similar to existing approaches with well-designed network structures.
Our contributions are summarized as follows:
first, we propose a novel pretraining method in orientation estimation
and achieve state-of-the-art performance;
second, we offer a new perspective of features available for orientation estimation,
with the proposed method being a way of feature mining and augmentation.
The paper is organized as follows:
we introduce some related work in orientation estimation in Section \ref{relatedwork};
introduction and details of the semicircle pretraining method are stated in Section \ref{method};
experiments are conducted in Section \ref{experiment}
to validate the effectiveness and performance of the proposed method;
we conclude the paper in Section \ref{conclusion}.
\addtolength{\topmargin}{-0.25in}
\addtolength{\textheight}{0.25in}
\section{Related Work}\label{relatedwork}
Although of equal importance,
orientation estimation attracts much less attention than object detection,
according to recent studies and research in the related field.
The orientation estimation module in most existing monocular methods
\cite{mono3d,subcnn,monopsr,shiftrcnn,ipm,mvra}
is basically a prediction branch connected to the feature extractor of a well-designed object detection model.
Some work pays special attention to orientation estimation and makes related improvements:
In \cite{deep3dbox}, the authors proposed the Multibin mechanism,
to divide the orientation domain into several sectors, converting the problem into
a high-level classification and low-level residual regression;
another discretized method similar to Multibin is proposed in \cite{dcnn};
3D modelling is adopted in \cite{symgan}, and the problem is solved in an unsupervised manner
using GAN and the 3D models;
the authors in \cite{ffnet} introduce the relationship between
2D/3D dimensions and orientation angles of objects to orientation estimation as supplementary prior knowledge.
While having accuracy improvements on object orientation estimation to some extent,
the methods above may not be qualified for complex orientation estimation problems.
As stated above, orientation estimation and object detection require different levels of features,
indicating that one feature extractor may not be qualified for solving both problems.
The same problem happens on the Multibin classifier.
Besides, since object features with different orientations may appear to be rather similar
(e.g. features of exactly opposite orientations),
the sector-classification solution may be confused of feature-similar orientations.
The anterior-posterior-similarity problem still exists in \cite{ffnet}
in which 2D and 3D dimensions are independent from anterior and posterior features.
In the meantime, the method proposed in \cite{symgan} relies heavily on the predetermined 3D model,
thus restricted to simple scenarios with available 3D models of all objects.
In this paper, we propose a method specially intended for orientation estimation.
We focus on the neglected anterior-posterior-similarity problem
and propose a novel pretraining method to solve it.
\section{Prediction of Orientation Semicircle as Pretraining}\label{method}
\subsection{Definition and Choice of Orientation Semicircles}\label{choice}
To strengthen the ability of anterior-posterior classification of the orientation estimation model,
we connect a classification branch to the feature extractor of the original model,
and train the feature extractor and the branch first.
The anterior-posterior classification is a coarse pretraining step
yet helpful to better orientation estimation.
Inspired by the sector classification spirit of Multibin,
the proposed method first divides the orientation domain into several sectors.
With the aim of amplifying the differences between the anterior and posterior parts,
the number of sectors is set to 2 to make the target of the pretraining process
a pure anterior-posterior classification.
In consideration of dataset enhancement,
the locations of the two sectors are set to left and right instead of front and back.
With the left/right location settings, the dataset is enhanced by a simple horizontal flip of the images,
and each object forms a pair of data with different semicircle classification labels.
Choice of the semicircles and related dataset enhancement are shown in Fig. \ref{semicircle}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{semicircle}
\caption{Choice of the semicircles and related dataset enhancement.}
\label{semicircle}
\end{figure}
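The flip-based enhancement can be sketched as follows. This is a minimal illustration; the HxWxC array layout and the 0/1 convention for the left/right semicircle label are assumptions, not taken from the paper.

```python
import numpy as np

def flip_pair(image, semicircle_label):
    """Horizontal flip of an HxWxC image crop; the left/right semicircle
    label (0 = left, 1 = right, assumed convention) is toggled."""
    return image[:, ::-1, :].copy(), 1 - semicircle_label

img = np.arange(2 * 4 * 3).reshape(2, 4, 3)
flipped, lbl = flip_pair(img, 0)
assert lbl == 1
# flipping twice recovers the original pair
back, lbl2 = flip_pair(flipped, lbl)
assert lbl2 == 0 and np.array_equal(back, img)
```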
\subsection{Supervised Orientation Estimation with Semicircle Prediction as Pretraining}
The semicircle prediction serves as a pretraining step,
and is combined with the main orientation estimation process.
The network structure of the whole orientation estimation model is shown in Fig. \ref{structure}.
The network adopts the ResNet18 \cite{resnet} backbone, and has one classification and one regression branch.
The classification branch is stated in Section \ref{choice},
while the regression branch outputs the absolute orientation value in range $[0, \pi]$.
The two results jointly determine the estimated orientation, which is shown in \eqref{joint}.
\begin{equation}
\begin{aligned}
\alpha = (i_{C(F(x))} - 1)\pi + R(F(x))
\end{aligned}
\label{joint}
\end{equation}
in which $F$, $C$ and $R$ are respectively the feature extractor, the classifier and the regressor;
$x$ is the image input corresponding to only one object;
$i_{C(F(x))}$ is the index of the classification result; $R(F(x))$ is the regression result.
Note that the regressor has no additional designs for orientation estimation.
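The decoding step in \eqref{joint} can be sketched as follows, with the class index $i$ taken in $\{1, 2\}$ as in the equation (a minimal illustration):

```python
import math

def decode_orientation(class_index, regression_value):
    """Combine the semicircle index i in {1, 2} and the regression output
    R(F(x)) in [0, pi] into the final orientation, per Eq. (1)."""
    assert class_index in (1, 2) and 0.0 <= regression_value <= math.pi
    return (class_index - 1) * math.pi + regression_value

assert decode_orientation(1, 0.3) == 0.3            # first semicircle
assert decode_orientation(2, 0.3) == math.pi + 0.3  # second semicircle
```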
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{structure}
\caption{Network structure of the proposed orientation estimation model.}
\label{structure}
\end{figure}
We decompose the whole process into three steps for better and more robust training.
In the first step, the classifier and the feature extractor are jointly trained
while parameters of the regressor are frozen.
In this step, loss is defined in \eqref{loss1}:
\begin{equation}
\begin{aligned}
L_1 = L_{CE}(C(F(x)), \epsilon(y))
\end{aligned}
\label{loss1}
\end{equation}
in which $y$ is the orientation label of $x$;
$L_{CE}$ is the classic cross entropy criterion in classification; $\epsilon$ is the step function.
In this step, a qualified semicircle classifier is established, while in the meantime,
a feature extractor capable of extracting basic orientation features is also trained.
The feature extractor is trained to focus more on component-level features.
In the second step, the regressor and the feature extractor are jointly trained
while parameters of the classifier are frozen.
In this step, loss is defined in \eqref{loss2}:
\begin{equation}
\begin{aligned}
L_2 = L_1 + L_{MSE}(R(F(x)), \cos{(y + (\epsilon(y) - 1)\pi)})
\end{aligned}
\label{loss2}
\end{equation}
in which $L_{MSE}$ is the mean squared error criterion in regression.
In this step, a qualified orientation estimator is established, while in the meantime,
the feature extractor is enhanced, and can extract more precise orientation features.
In the final step, all modules of the model are tuned simultaneously. Loss $L_2$ is adopted.
Details of the three steps are shown in Table \ref{detail}.
\begin{table}[tb]
\centering
\caption{Details of the three steps in the proposed method.}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
step & F & C & R & $loss_{CE}$ & $loss_{MSE}$ \\
\hline
1 & \checkmark & \checkmark & & \checkmark & \\
\hline
2 & \checkmark & & \checkmark & \checkmark & \checkmark \\
\hline
3 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\end{tabular}
\label{detail}
\end{table}
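The two losses in \eqref{loss1} and \eqref{loss2} can be written out as follows. This is a numpy sketch, assuming a two-way softmax classifier and the convention $\epsilon(y) = 1$ for $y \ge 0$ and $0$ otherwise:

```python
import numpy as np

def step(y):
    # epsilon(y): 1 if y >= 0 else 0 (assumed convention for the step function)
    return 1 if y >= 0 else 0

def cross_entropy(logits, label):
    # two-way softmax cross entropy L_CE
    z = logits - np.max(logits)
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.log(p[label])

def loss_step1(cls_logits, y):                      # Eq. (2)
    return cross_entropy(cls_logits, step(y))

def loss_step2(cls_logits, reg_out, y):             # Eq. (3)
    target = np.cos(y + (step(y) - 1) * np.pi)
    return loss_step1(cls_logits, y) + (reg_out - target) ** 2

logits = np.array([0.2, 2.0])   # classifier strongly prefers class index 1
# a correct semicircle prediction yields a smaller loss than a wrong one
assert loss_step1(logits, y=0.4) < loss_step1(logits, y=-0.4)
```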
The proposed method solves the third problem and partially solves the first and second problems
stated in Section \ref{intro} with the following reasons:
\begin{itemize}
\item (to problem 1)
Features of orientation semicircles are closely related to the final orientation results,
thus serving as supplementary information in orientation estimation.
\item (to problem 2)
The following prior information is introduced to orientation estimation:
the semicircle label of an image is opposite to that of the corresponding horizontal-flipped image.
\item (to problem 3)
The semicircle classification has no essential difference from the anterior-posterior classification.
An effective solution to the former problem will be equally effective to the latter one.
\end{itemize}
\subsection{Semisupervised Orientation Estimation with Unsupervised Semicircle Prediction}
The method can also be implemented in a semisupervised way with the first step unsupervised.
As stated in Section \ref{choice},
each object forms a pair of data with different semicircle classification labels
with a simple horizontal flip of the image.
Choices of locations of the semicircles as left and right enable not only dataset enhancement,
but also unsupervised training,
since the unsupervised criterion can be set
the difference between prediction results of the elements in each pair.
The unsupervised criterion is shown in \eqref{unsupervisedloss},
in which $x_1$ and $x_2$ are respectively the original and flipped image:
\begin{equation}
\begin{aligned}
L_u = L_{CE}(C(F(x_1)), \overline{C(F(x_2))})
\end{aligned}
\label{unsupervisedloss}
\end{equation}
$C(F(x_1))$ (result of the original image) and $C(F(x_2))$ (result of the flipped image)
are `considered' the output and the label of the semicircle classification
(although their correctness remains undetermined).
With the randomly decided semicircle label in training,
the semicircle classifier may not be absolutely accurate.
As a result, parameters of the classifier will not be frozen,
and the second step is deleted from semisupervised training.
Training details of the third step are similar to the supervised training frame.
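The unsupervised criterion \eqref{unsupervisedloss} can be sketched as follows (an illustration only): the pseudo-label for the original image is the complement of the hard prediction on its horizontally flipped copy.

```python
import numpy as np

def softmax_ce(logits, label):
    z = logits - np.max(logits)
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.log(p[label])

def unsupervised_loss(logits_original, logits_flipped):
    # Eq. (4): the 'label' for the original image is the complement of the
    # hard prediction on the horizontally flipped image
    pseudo_label = 1 - int(np.argmax(logits_flipped))
    return softmax_ce(logits_original, pseudo_label)

# a consistent pair (opposite predictions) yields a small loss ...
consistent = unsupervised_loss(np.array([0.1, 3.0]), np.array([3.0, 0.1]))
# ... while identical predictions on both copies are penalized
inconsistent = unsupervised_loss(np.array([3.0, 0.1]), np.array([3.0, 0.1]))
assert consistent < inconsistent
```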
\section{Experiments}\label{experiment}
Both supervised and semisupervised training are experimented with results and analyses.
Experimental comparisons with the plain model and with state-of-the-art baselines are conducted.
\subsection{Experiment Setup}
The dataset of KITTI Object Detection and Orientation Estimation Evaluation benchmark \cite{kitti} is used.
We conduct orientation estimation experiments on both cars and pedestrians.
Hyperparameters in training are shown in Table \ref{params}.
\begin{table}[]
\centering
\caption{Hyperparameters in supervised and semisupervised training in the proposed method.}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}*{Hyperparameter} & \multicolumn{3}{c|}{Value}\\
\cline{2-4}
& step1 & step2 & step3 \\
\hline
image size & \multicolumn{3}{c|}{$224\times 224$} \\
\hline
learning rate & $10^{-5}$ & $10^{-5}$ & $5\times 10^{-6}$ \\
\hline
number of iterations & 250K & 150K & 100K \\
\hline
batch size & \multicolumn{3}{c|}{16} \\
\hline
\end{tabular}
\label{params}
\end{table}
\subsection{Results of Vanilla Training on Backbone}
We first conduct vanilla training on the backbone to prove that the anterior-posterior problem does exist.
After the identical training process without the semicircle classifier,
the backbone has the mean squared orientation error shown in Fig. \ref{vanilla}.
From the experiment results, it is obvious that
the backbone is weak at classifying the anterior and posterior parts of objects,
resulting in the two error peaks (0 and $\pi$) shown in Fig. \ref{vanilla}.
Therefore, the anterior-posterior problem is an existing urgent problem.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{plain_err}
\caption{The mean squared orientation error of the plain model.}
\label{vanilla}
\end{figure}
\subsection{Results of Supervised Training on Backbone with Semicircle Classifier}
In training of the first step, the loss convergence and accuracy increase of the semicircle classifier
with both car and pedestrian objects are shown in Fig. \ref{sup_step1}.
From the results, it is obvious that the semicircle classification method is effective and accurate.
The classifier reaches about 95\% and 90\% accuracy on cars and pedestrians.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{sup_step1}
\caption{The loss convergence and accuracy increase of the semicircle classifier in the first step.}
\label{sup_step1}
\end{figure}
In training of the second step, the classification loss of the semicircle classifier
is shown in Fig. \ref{sup_step2_cls}.
Although facing slight oscillations,
the loss of the classifier converges to a satisfactory level at a fast speed.
At the beginning of the second step, the loss value is much smaller than
that at the beginning of the first step, indicating the effectiveness of the pretraining method.
The loss convergence and error decrease of the orientation regressor
are shown in Fig. \ref{sup_step2_reg_err}.
The average orientation error converges to 0.4 (0.12$\pi$) for cars and 0.7 (0.22$\pi$) for pedestrians.
\begin{figure}[tb]
\centering
\subfloat[The loss convergence of the semicircle classifier.]{
\includegraphics[width=0.45\linewidth]{sup_step2_cls}
\label{sup_step2_cls}
}
\subfloat[The loss convergence of the orientation regressor and error decrease of the joint model.]{
\includegraphics[width=0.45\linewidth]{sup_step2_reg_err}
\label{sup_step2_reg_err}
}
\caption{The loss convergence and error decrease in the second step.}
\end{figure}
In training of the final step, neither of the losses and average errors have large-amplitude changes,
as shown in Fig. \ref{sup_step3}.
\begin{figure}[tb]
\centering
\subfloat[The loss convergence.]{
\includegraphics[width=0.45\linewidth]{sup_step3_loss}
}
\subfloat[The error decrease.]{
\includegraphics[width=0.45\linewidth]{sup_step3_err}
}
\caption{The loss convergence and error decrease of the joint model in the final step.}
\label{sup_step3}
\end{figure}
Comparison on the mean squared orientation error
of the proposed model and the plain model after identical training is shown in Fig. \ref{err}.
From the comparison, it is obvious that the pretraining method
effectively mitigates the anterior-posterior-similarity problem raised in Section \ref{intro}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{err}
\caption{MSE comparison of the proposed model and the plain model.}
\label{err}
\end{figure}
Examples of orientation estimation results with full-view monocular images
are shown in Fig. \ref{fullview}.
Note that in Fig. \ref{fullview}, the 2D object detection results
are the truth labels provided in the dataset.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{fullview}
\caption{Examples of orientation estimation results with full-view monocular images.}
\label{fullview}
\end{figure}
\subsection{Results of Semisupervised Training on Backbone with Semicircle Classifier}
Fig. \ref{semi} shows the comparison of the supervised and semisupervised methods at the final step.
From the comparison, the semisupervised method fails to train a qualified backbone for orientation estimation.
Possible reasons are listed as follows:
\begin{itemize}
\item Uncertainty of semicircle labels.
In semisupervised training, the labels are set the prediction results of the original images.
Such labels may be wrong, misleading the semicircle classifier into incorrect semicircle classifications.
\item Oscillations of semisupervised training.
The prediction result of the same data still changes because of update of the network parameters.
Therefore, the labels set in semisupervised training are changed during the process,
which may cause oscillations and even divergence of training.
\end{itemize}
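The moving-target effect described in the second item can be illustrated with a toy example. The sketch below uses a hypothetical one-parameter ``backbone'' whose score sign decides the semicircle pseudo-label; it is only illustrative and not part of the actual training code.

```python
import numpy as np

# Toy one-parameter "backbone": score(x) = w * x.
# The semicircle pseudo-label is simply the sign of the score.
def pseudo_label(w, x):
    return (w * x > 0).astype(int)

x = np.array([-2.0, -0.5, 0.5, 2.0])    # toy inputs
labels_before = pseudo_label(0.3, x)    # targets used for one epoch
labels_after = pseudo_label(-0.3, x)    # after a parameter update, the targets move

# Every pseudo-label flipped: the training criterion chased a moving target.
assert (labels_before != labels_after).all()
```

Because the targets themselves are regenerated from the evolving network, the optimization objective shifts between epochs, which is exactly the source of the oscillations described above.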
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{semi_err}
\caption{The comparison of the supervised and semisupervised methods at the final step.}
\label{semi}
\end{figure}
\subsection{Results of the Proposed Method on KITTI Benchmark}
As stated in Section \ref{method}, the proposed method does not perform 2D object detection by itself.
We therefore use the method proposed in \cite{led} as the 2D object detector for the proposed method.
The proposed supervised method ranks \nth{3} in cars and \nth{2} in pedestrians
among state-of-the-art monocular orientation estimation models
on the KITTI Object Detection and Orientation Estimation Evaluation scoreboard,
with the detailed results shown in Table \ref{kitti}.
AOS on moderate-difficulty-level images is used as the metric.
The baseline comparisons show that
the proposed method enables a simple backbone to reach state-of-the-art performance in orientation estimation.
\begin{table}[tb]
\centering
\caption{Comparisons between the proposed method and state-of-the-art baselines.}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}*{Name} & AOS & Rank & AOS & Rank & \multirow{2}*{Time} & 3D \\
& (car) & (car) & (ped) & (ped) & & info \\
\hline
Mono3D \cite{mono3d} & 87.28\% & \nth{8} & 58.66\% & \nth{3} & 4.2s & \checkmark \\
\hline
Deep3DBox \cite{deep3dbox} & 89.88\% & \nth{4} & \diagbox{}{} & \diagbox{}{} & 1.5s & \checkmark \\
\hline
SubCNN \cite{subcnn} & 89.53\% & \nth{5} & 66.70\% & \nth{1} & 2s & \checkmark \\
\hline
MVRA \cite{mvra} & 94.46\% & \nth{1} & \diagbox{}{} & \diagbox{}{} & 0.18s & \checkmark \\
\hline
Deep MANTA \cite{deepmanta} & 93.31\% & \nth{2} & \diagbox{}{} & \diagbox{}{} & 0.7s & \checkmark \\
\hline
MonoPSR \cite{monopsr} & 87.45\% & \nth{7} & 54.65\% & \nth{4} & 0.2s & \checkmark \\
\hline
Shift R-CNN \cite{shiftrcnn} & 87.47\% & \nth{6} & 46.56\% & \nth{5} & 0.25s & \checkmark \\
\hline
\textbf{Ours(Supervised)} & 90.63\% & \nth{3} & 61.70\% & \nth{2} & 0.23s & \\
\hline
\end{tabular}
\end{scriptsize}
\label{kitti}
\end{table}
\subsection{Discussions}
In this section, two main points are discussed:
first, the core advantages of the proposed method over the baselines;
second, the effectiveness of the proposed method from a hypothetical perspective.
From Table \ref{kitti}, one major difference between the proposed method and baselines is that
the proposed method does not rely on 3D information of the environment in training.
In terms of applications, monocular cameras are more suitable in occasions without 3D sensors,
as stated in Section \ref{intro}.
Moreover, when 3D information is available, better devices and methods exist for intention prediction,
such as LiDAR-based object tracking.
Therefore, our monocular method is better suited to settings where only monocular cameras are available.
The proposed method is also less complex than most baselines.
The semicircle classifier can be integrated into other orientation estimation models,
indicating that it is a universal method that does not conflict with the baselines.
In contrast, the baselines for comparison mostly require special designs
(such as well-designed 3D object detection heads) or additional materials
(such as 3D labels and premade 3D models).
This is another advantage of the proposed method.
The state-of-the-art performance with a simple network structure
is achieved through the more precise network training enabled by semicircle prediction.
A manifold view of the optimization of orientation estimation is shown in Fig. \ref{manifold}.
An ideal model (i.e., an oracle) can reach the optimum along the ideal iteration direction.
A well-trained model may fail to reach the optimum because of the anterior-posterior-similarity problem:
as shown in Fig. \ref{manifold}, its iteration direction differs from that of the oracle.
To be more accurate, the model should continue the iterations until it covers this difference.
The remaining iterations should be guided by a criterion closely related to the similarity problem,
which, in this paper, is the semicircle classification loss.
With the semicircle classifier, such iterations are completed at the beginning of the training process
(i.e. as pretraining), with the whole iteration process shown in Fig. \ref{manifold-semicircle}.
With the semicircle classifier, the anterior-posterior-similarity problem is mitigated,
and the model reaches better performance.
Due to the differences between the models, the iteration direction of the proposed model in joint training
is slightly different from that of the original plain model.
However, with the semicircle label unknown and `randomly' decided,
the iteration direction in semisupervised training differs from that in supervised training.
The supervised criterion is a stronger restriction to the network than the semisupervised criterion,
which is presented in \eqref{restriction}.
\begin{equation}
\begin{aligned}
&(C(F(x_1))=y) \land (C(F(x_2))=\overline{y}) \\
&\to C(F(x_1))=\overline{C(F(x_2))}
\end{aligned}
\label{restriction}
\end{equation}
Therefore, any direction can be the iteration direction of semisupervised training
if it has a component with the same direction as the supervised pretraining direction
(the red arrow in Fig. \ref{manifold-semicircle}):
\begin{equation}
\begin{aligned}
\bm{d}_{supervised} \cdot \bm{d}_{semisupervised} > 0
\end{aligned}
\end{equation}
An instance is shown in Fig. \ref{manifold-semi},
where the semisupervised pretraining direction
satisfies all conditions but differs greatly from the supervised direction.
This may severely reduce the performance of the joint model,
since the flawed classifier misleads the feature extractor in joint training.
The semisupervised model in this paper is an example of, but not limited to, this instance.
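The admissibility condition above can be made concrete with hypothetical direction vectors (the numbers below are illustrative only): both candidates have a positive component along the supervised direction, yet one of them is nearly orthogonal to it.

```python
import numpy as np

d_supervised = np.array([1.0, 0.0])   # hypothetical supervised pretraining direction
d_close = np.array([0.8, 0.1])        # admissible and close to d_supervised
d_flawed = np.array([0.1, -2.0])      # admissible, yet nearly orthogonal

# Both satisfy the dot-product condition, so both are possible
# semisupervised iteration directions:
assert d_supervised @ d_close > 0 and d_supervised @ d_flawed > 0

# The cosine similarity exposes how different an admissible direction can be:
cos_close = d_supervised @ d_close / np.linalg.norm(d_close)
cos_flawed = d_supervised @ d_flawed / np.linalg.norm(d_flawed)
```

Here `cos_close` is close to 1 while `cos_flawed` is close to 0, so the dot-product condition alone does not prevent a semisupervised direction from being badly misaligned.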
\begin{figure}
\centering
\subfloat[The plain well-trained model.]{
\includegraphics[width=0.3\linewidth]{hyp}
\label{manifold}
}
\subfloat[The proposed supervised model.]{
\includegraphics[width=0.3\linewidth]{hypsupervised}
\label{manifold-semicircle}
}
\subfloat[The proposed semisupervised model.]{
\includegraphics[width=0.3\linewidth]{hypsemi}
\label{manifold-semi}
}
\caption{Manifold-based iteration hypothesis of the proposed and the plain model.}
\end{figure}
\section{Conclusion}\label{conclusion}
We propose a pretraining method to enhance the robustness of orientation estimation models.
We pretrain the model to classify the semicircle in which the orientation angle is located,
to develop its ability of classifying the similar anterior and posterior parts of objects
and extracting basic orientation features in the image.
Both the supervised and semisupervised versions of the method are proposed, analyzed and evaluated experimentally.
Experiments show that the pretraining step
contributes to the accuracy and robustness of orientation estimation.
The proposed method improves the accuracy of orientation estimation models,
and thereby mitigates, to a certain extent, the serious safety threats in autonomous driving.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
The Standard Model (SM) of particle physics contains three fermion families which acquire mass by means of the interaction with the Higgs boson. The two Higgs doublet model (2HDM) is one of the simplest ways to extend the Higgs sector, which is the least constrained sector of the Standard Model. Two Higgs doublets also appear in many more elaborate extensions of the SM that are based on fundamental principles, such as supersymmetry (see e.g. \cite{Martin:1997ns}), the Peccei-Quinn symmetry~\cite{Peccei:1977hh,Peccei:1977ur} or grand unified theories (see \cite{Croon:2019kpe} for a recent review). Two Higgs doublet models are also motivated from electroweak baryogenesis studies, where it has been shown that contributions coming from the new physical Higgs bosons to the effective Higgs potential can strengthen the phase transition and in addition introduce new sources of charge-parity (CP) violation, from both fermion and scalar sectors \cite{Carena:1997gx, Cline:1997vk, Konstandin:2005cd, Cirigliano:2006wh, Buchmuller:2012tv, Morrissey:2012db, Konstandin:2013caa, Basler:2016obg, Fuyuto:2017ewj}. As a result, the 2HDM is one of the most popular SM extensions and has been frequently used as a benchmark for phenomenological studies (see e.g. \cite{Branco:2011iw} for a review of 2HDM studies). 
Furthermore, the presence of another doublet can contribute to resolving anomalies in lepton flavour universality observables \cite{Iguro:2018qzf, Martinez:2018ynq} and muon g-2 \cite{Broggio:2014mna,Wang:2014sda,Abe:2015oca,Chun:2015hsa,Chun:2015xfx,Chun:2016hzs,Wang:2018hnw,Chun:2019oix,Chun:2019sjo,Keung:2021rps,Ferreira:2021gke,Han:2021gfu,Eung:2021bef,Jueid:2021avn,Dey:2021pyn,Ilisie:2015tra,Han:2015yys,Cherchiglia:2016eui,Cherchiglia:2017uwv,Li:2020dbg,Athron:2021iuf,Omura:2015nja,Crivellin:2015hha,Iguro:2019sly,Jana:2020pxx,Hou:2021sfl,Hou:2021qmf,Atkinson:2021eox,Hou:2021wjj}, while scenarios where the extra doublet is ``inert'' can also explain dark matter \cite{LopezHonorez:2006gr, Gustafsson:2007pc, Dolle:2009fn, Honorez:2010re, LopezHonorez:2010tb, Chao:2012pt, Goudelis:2013uca, Arhrib:2013ela, Bonilla:2014xba, Queiroz:2015utg, Arcadi:2018pfo, Camargo:2019ukv}.
The new interactions between the SM fermions and the physical states arising from the introduction of a second Higgs doublet imply a richer phenomenology than the SM. This is further enhanced by the new free parameters and couplings in the general two Higgs doublet model (GTHDM), also known as type-III 2HDM \cite{Hou:1991un}. Physical effects such as CP violation, scalar mixing and flavour changing transitions are expected \cite{Mahmoudi:2009zx},
allowing for signatures to be observed in particle colliders. One of the most interesting experimental consequences of the flavour changing currents present in the GTHDM is lepton flavour universality (LFU) violation. Experimental measurements of LFU violation come from flavour changing charged currents (FCCCs), such as those in $B$ meson decays, and flavour changing neutral currents (FCNCs), for instance in kaon decays. The observed deviations from the SM in the measurements of FCCCs (around $3.1\sigma$ from the SM~\cite{Amhis:2019ckw}) and FCNCs (close to a combined $6\sigma$ deviation, see for example~\cite{Alguero:2021anc,Hurth:2021nsi,Bhom:2020lmk}), hint at the existence of new physics (NP) contributions and thus serve as a clear motivation for the study of NP models capable of explaining the anomalies.
It has indeed been shown that the GTHDM is able to explain the charged anomalies at $2\sigma$ \cite{Iguro:2018qzf, Martinez:2018ynq}. Similar analyses for the neutral anomalies have also been presented previously \cite{Arhrib2017, Iguro:2018qzf, Crivellin:2019dun}, finding solutions at the $2\sigma$ level and up to the $1\sigma$ level including right-handed neutrinos \cite{Crivellin:2019dun}. Nevertheless, the majority of these studies have only explored solutions in restricted regions of the parameter space, with a lack of discussion of the role of (marginally) statistically preferred regions, and often considering the $b\to sll$ observables from model independent global fits \cite{Iguro:2018qzf,Crivellin:2019dun}. Statistically rigorous explorations of the parameter space of the model contrasted directly to experimental constraints have rarely been performed, and even those were focused exclusively on interactions in the quark sector \cite{Herrero-Garcia:2019mcy}.
Furthermore, the longstanding discrepancy between the experimentally measured and SM predicted values of the anomalous magnetic moment of the muon $a_\mu$ has recently been brought back to the spotlight with the new measurement by the Muon g-2 experiment at Fermilab~\cite{PhysRevLett.126.141801}. The latest experimental value, taking into account the measurements at both Brookhaven National Laboratory and Fermilab, is $a^{\textrm{Exp}}_\mu = (116592061\pm41)\times10^{-11}$. Compared to the theoretical prediction in the SM from the recent Muon $g-2$ Theory Initiative White Paper, $a^{\textrm{SM}}_\mu = (116591810\pm43)\times10^{-11}$ \cite{Aoyama:2020ynm}, building on the extensive work examining the various SM contributions in \cite{davier:2017zfy,keshavarzi:2018mgv,colangelo:2018mtw,hoferichter:2019gzf,davier:2019can,keshavarzi:2019abf,kurz:2014wya,melnikov:2003xd,masjuan:2017tvw,Colangelo:2017fiz,hoferichter:2018kwz,gerardin:2019vio,bijnens:2019ghy,colangelo:2019uex,colangelo:2014qya,Blum:2019ugy,aoyama:2012wk,Aoyama:2019ryr,czarnecki:2002nt,gnendiger:2013pva}, the measured value differs from the SM prediction by $\Delta a_\mu = (2.51\pm0.59)\times10^{-9}$, corresponding to a discrepancy of $4.2\sigma$. Models with a second Higgs doublet have been studied extensively in the literature as sources to explain this deviation~\cite{Broggio:2014mna,Wang:2014sda,Abe:2015oca,Chun:2015hsa,Chun:2015xfx,Chun:2016hzs,Wang:2018hnw,Chun:2019oix,Chun:2019sjo,Keung:2021rps,Ferreira:2021gke,Han:2021gfu,Eung:2021bef,Jueid:2021avn,Dey:2021pyn,Ilisie:2015tra,Han:2015yys,Cherchiglia:2016eui,Cherchiglia:2017uwv,Li:2020dbg,Athron:2021iuf,Omura:2015nja,Crivellin:2015hha,Iguro:2019sly,Jana:2020pxx,Hou:2021sfl,Hou:2021qmf,Atkinson:2021eox,Hou:2021wjj}. However, no simultaneous global fit of the flavour anomalies and $a_\mu$ in the GTHDM has been attempted giving a proper statistical insight into the whole parameter space.
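The quoted discrepancy follows directly from the experimental and SM numbers above, with the two uncertainties combined in quadrature; a quick numerical cross-check (an illustrative snippet, not part of the analysis code):

```python
import math

# Values quoted above, in units of 1e-11
a_exp, err_exp = 116592061.0, 41.0
a_sm, err_sm = 116591810.0, 43.0

delta = a_exp - a_sm               # 251e-11 = 2.51e-9
err = math.hypot(err_exp, err_sm)  # ~59e-11, quadrature sum of uncertainties
significance = delta / err         # ~4.2 sigma
```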
Therefore, in this paper we present a frequentist inspired likelihood analysis for the GTHDM, simultaneously including the FCCC observables, both $b\to s\mu^{+}\mu^{-}$ transitions and the muon anomalous magnetic moment, along with other flavour observables. We perform a global fit of all constraints using the inference package \textsf{GAMBIT}\xspace, the Global And Modular Beyond-the-Standard-Model
Inference Tool \cite{Athron:2017ard, grev}. \textsf{GAMBIT}\xspace is a powerful software framework capable of performing statistical inference studies using constraints from collider~\cite{ColliderBit}, dark matter~\cite{DarkBit}, flavour~\cite{Workgroup:2017myk} and neutrino~\cite{RHN} physics, as well as cosmology~\cite{CosmoBit}. It has already been used for detailed statistical analyses of a variety of beyond the Standard Model (BSM) models, including supersymmetry~\cite{CMSSM, MSSM, EWMSSM}, scalar singlet DM~\cite{SSDM,SSDM2,HP,GUM,DMEFT}, axion and axion-like particles~\cite{Axions,XENON1T}, and neutrinos~\cite{RHN,CosmoBit_numass}, as well as an initial analysis of the 2HDM \cite{Rajec:2020orn}. Our work enhances the \textsf{FlavBit} \cite{Workgroup:2017myk} and \textsf{PrecisionBit} \cite{GAMBITModelsWorkgroup:2017ilg} modules of \textsf{GAMBIT} to support the GTHDM. We also make use of various external codes: \textsf{SuperIso 4.1} \cite{Mahmoudi:2007vz,Mahmoudi:2008tp,Mahmoudi:2009zz,Neshatpour:2021nbn} for computing flavour observables, the \textsf{2HDMC 1.8} package \cite{Eriksson:2009ws} for precision electroweak constraints, the \textsf{HEPLike} package \cite{Bhom:2020bfe} which provides likelihoods for the neutral anomaly related observables, and the differential evolution sampler \textsf{Diver 1.0.4} \cite{Workgroup:2017htr}.
The paper is organised as follows. In section \ref{sec:GTHDM} we present the Higgs and Yukawa sectors along the theoretical bounds for their parameters. In section \ref{sec:Hamiltonian} we define the effective Hamiltonian and the Wilson coefficients (WCs) for $b\to s\mu^{+}\mu^{-}$ transitions. Then, in section \ref{sec:Observables} we list the observables to be used in our scans. Following this, our results from the global fit and predictions for future experiments in colliders are discussed in section \ref{sec:Results}. Finally, we summarise our conclusions in section \ref{sec:Conclusions}.
\section{GTHDM}
\label{sec:GTHDM}
The GTHDM has been actively investigated in both its scalar and Yukawa sectors. These can be written in three different ways, namely in the generic, Higgs and physical bases, all of them related via basis transformations \cite{Davidson:2005cw}. Particularly, with respect to the Yukawa sector, in the past theorists imposed discrete symmetries to avoid flavour changing transitions,
the most popular being the $\mathbb{Z}_{2}$ symmetry in the type-II 2HDM \cite{Glashow:1976nt,Gunion:1989we}. However, it has been shown that there is no fundamental reason for forbidding flavour changing couplings \cite{Hou2019}: if the mixing angle is small, the non-observation of several tree level flavour changing transitions can be explained by the alignment phenomenon. This, and a suppression inversely proportional to the mass of the heavy Higgses in the tree level amplitudes, could suppress the effects coming from the off-diagonal Yukawa couplings, without invoking the so called natural flavour conservation (NFC) condition \cite{Glashow:1976nt}.
Here we review the Higgs potential and the Yukawa Lagrangian of the model as well as the relevant theoretical constraints coming from stability, unitarity and perturbativity at leading order (LO). We also make use of the precision electroweak constraints from the oblique parameters. For a more comprehensive review of the model the reader is referred to \cite{Branco:2011iw,Haber:2010bw,HernandezSanchez:2012eg,Crivellin2013}.
\subsection{Higgs potential}
The most general renormalizable scalar potential in the GTHDM is commonly written as \cite{Branco:2011iw,Gunion:2002zf}
\begin{alignat}{1}
V(\Phi_{1},\Phi_{2})=\: & m_{11}^{2}(\Phi_{1}^{\dag}\Phi_{1})+m_{22}^{2}(\Phi_{2}^{\dag}\Phi_{2})-m_{12}^{2}(\Phi_{1}^{\dag}\Phi_{2}+\Phi_{2}^{\dag}\Phi_{1})\nonumber \\
& +\frac{1}{2}\lambda_{1}(\Phi_{1}^{\dag}\Phi_{1})^{2}+\frac{1}{2}\lambda_{2}(\Phi_{2}^{\dag}\Phi_{2})^{2}+\lambda_{3}(\Phi_{1}^{\dag}\Phi_{1})(\Phi_{2}^{\dag}\,\Phi_{2})+\lambda_{4}(\Phi_{1}^{\dag}\Phi_{2})(\Phi_{2}^{\dag}\Phi_{1})\nonumber \\
& +\left(\frac{1}{2}\lambda_{5}(\Phi_{1}^{\dag}\Phi_{2})^{2}+\left(\lambda_{6}(\Phi_{1}^{\dag}\Phi_{1})+\lambda_{7}(\Phi_{2}^{\dag}\Phi_{2})\right)(\Phi_{1}^{\dag}\Phi_{2})+{\rm ~h.c.}\right) ,
\label{HiggsPotential}
\end{alignat}
where the two scalar doublets are given by
\begin{equation}
\ensuremath{\Phi_{i}=\left(\begin{array}{c}
\phi_{i}^{+}\\
\frac{1}{\sqrt{2}}(\upsilon_{i}+\rho_{i}+i\eta_{i})
\end{array}\right)},\quad i=1,2,
\end{equation}
\noindent with $\upsilon_{i}$ the vacuum expectation values (VEV) of the fields, and $\rho_{i}$ and $\eta_{i}$ the neutral CP-even and CP-odd field components, respectively. After diagonalization, we obtain the following mass eigenvectors:
\begin{equation}\label{eqn:HiggsMixingMatrix}
\begin{pmatrix}G_{Z}\\
A
\end{pmatrix}=R_{\beta}\begin{pmatrix}\eta_{1}\\
\eta_{2}
\end{pmatrix},\quad\begin{pmatrix}G_{W^{\pm}}\\
H^{\pm}
\end{pmatrix}=R_{\beta}\begin{pmatrix}\phi_{1}^{\pm}\\
\phi_{2}^{\pm}
\end{pmatrix},\quad\begin{pmatrix}H\\
h
\end{pmatrix}=R_{\alpha}\begin{pmatrix}\rho_{1}\\
\rho_{2}
\end{pmatrix} ,
\end{equation}
where the fields $\phi_{i}^{+}$ are charged complex scalars. From the eight degrees of
freedom, three of them ($G_{W^{\pm}}$ and $G_{Z}$) get absorbed
by the longitudinal components of the vector bosons. The remaining
five make up the new particle spectrum of the model, namely, $h$
and $H$ are physical CP-even states, $A$ is a CP-odd state and $H^{\pm}$
are two charged Higgs bosons. The rotation matrices are defined
according to
\begin{equation} \label{eqn:GenericMixingMatrix}
R_{\theta}=\left(\begin{array}{cc}
\cos\theta & \sin\theta\\
-\sin\theta & \cos\theta
\end{array}\right).
\end{equation}
In this work, we assume a CP conserving
scalar sector, which implies all the parameters in Eq.~\eqref{HiggsPotential} to be real \cite{Gunion:2002zf}. Additionally, for simplicity, we set $\lambda_{6}=\lambda_{7}=0$. In particular, for this choice of the quartic couplings, the necessary and sufficient conditions to ensure positivity of the potential along all directions are given by \cite{Gunion:2002zf,Branco:2011iw}
\begin{align}
\lambda_1 \geq 0 & , & \lambda_2 \geq 0 \; ,\label{eq:l1_l2} \\
\lambda_3 \geq -\sqrt{\lambda_1 \lambda_2} & , &
\lambda_3 + \lambda_4 - |\lambda_5| \geq -\sqrt{\lambda_1 \lambda_2} \;,
\label{eq:stability_cond}
\end{align}
\noindent whereas the tree level unitarity of the couplings imposes \cite{Branco:2011iw}
\begin{equation}
\left| a_\pm \right|,\
\left| b_\pm \right|,\
\left| c_\pm \right|,\
\left| d_\pm \right|,\
\left| e_\pm \right|,\
\left| f_\pm \right|
< 8 \pi,
\end{equation}
where
\begin{align}a_{\pm} & =\frac{3}{2}\left(\lambda_{1}+\lambda_{2}\right)\pm\sqrt{\frac{9}{4}\left(\lambda_{1}-\lambda_{2}\right)^{2}+\left(2\lambda_{3}+\lambda_{4}\right)^{2}},\\
b_{\pm} & =\frac{1}{2}\left(\lambda_{1}+\lambda_{2}\right)\pm\frac{1}{2}\,\sqrt{\left(\lambda_{1}-\lambda_{2}\right)^{2}+4\lambda_{4}^{2}},\\
c_{\pm} & =\frac{1}{2}\left(\lambda_{1}+\lambda_{2}\right)\pm\frac{1}{2}\,\sqrt{\left(\lambda_{1}-\lambda_{2}\right)^{2}+4\lambda_{5}^{2}},\\
d_{\pm} & =\lambda_{3}+2\lambda_{4}\pm3\lambda_{5},\\
e_{\pm} & =\lambda_{3}\pm\lambda_{5},\\
f_{\pm} & =\lambda_{3}\pm\lambda_{4}.
\end{align}
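Taken together, the stability and unitarity conditions above give a fast acceptance test for a point in the quartic-coupling space (with $\lambda_{6}=\lambda_{7}=0$, as assumed here). The following sketch is an illustrative implementation of the bounds exactly as written above, not the implementation used in the scan:

```python
import math

EIGHT_PI = 8 * math.pi

def is_stable(l1, l2, l3, l4, l5):
    """LO positivity (boundedness-from-below) conditions."""
    if l1 < 0 or l2 < 0:
        return False
    root = math.sqrt(l1 * l2)
    return l3 >= -root and l3 + l4 - abs(l5) >= -root

def is_unitary(l1, l2, l3, l4, l5):
    """Tree-level unitarity: all eigenvalues a..f (+/-) below 8*pi in modulus."""
    s, d = l1 + l2, l1 - l2
    eigs = []
    for sgn in (+1, -1):
        eigs += [1.5 * s + sgn * math.sqrt(2.25 * d ** 2 + (2 * l3 + l4) ** 2),
                 0.5 * s + sgn * 0.5 * math.sqrt(d ** 2 + 4 * l4 ** 2),
                 0.5 * s + sgn * 0.5 * math.sqrt(d ** 2 + 4 * l5 ** 2),
                 l3 + 2 * l4 + sgn * 3 * l5,
                 l3 + sgn * l5,
                 l3 + sgn * l4]
    return all(abs(e) < EIGHT_PI for e in eigs)
```

A parameter point is theoretically viable at LO only if both predicates return `True`.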
Following \cite{Haber:2010bw,Herrero-Garcia:2019mcy} we also include the oblique parameters $S$, $T$ and $U$, which parametrise radiative corrections to electroweak gauge boson propagators. In this study we compute these oblique parameters with the \textsf{2HDMC} package and contrast them with the most probable values inferred from experimental data by the \textsf{Gfitter} group~\cite{Baak:2014ora}
\begin{equation}
\begin{aligned}S=0.05\pm0.11,\,\,\,T=0.09\pm0.13,\,\,\,U=0.01\pm0.11,\end{aligned}
\end{equation}
with correlations given by
\begin{equation}
\Sigma=\left(\begin{array}{ccc}
1.0 & 0.9 & -0.59\\
0.9 & 1.0 & -0.83\\
-0.59 & -0.83 & 1.0
\end{array}\right)\,.
\end{equation}
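A correlated Gaussian likelihood for $(S,T,U)$ then follows from the covariance matrix built out of the uncertainties and the correlation matrix $\Sigma$; a minimal numerical sketch (illustrative only, not the implementation used in the fit):

```python
import numpy as np

mu = np.array([0.05, 0.09, 0.01])      # Gfitter central values (S, T, U)
sig = np.array([0.11, 0.13, 0.11])     # 1-sigma uncertainties
corr = np.array([[ 1.00,  0.90, -0.59],
                 [ 0.90,  1.00, -0.83],
                 [-0.59, -0.83,  1.00]])
cov = np.outer(sig, sig) * corr        # covariance matrix from correlations

def chi2_stu(s, t, u):
    """Correlated chi-square for a model prediction (s, t, u)."""
    d = np.array([s, t, u]) - mu
    return float(d @ np.linalg.solve(cov, d))
```

The $\chi^2$ vanishes at the central values and grows for model points that deviate from them, with the correlations properly taken into account.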
\subsection{Yukawa Lagrangian} \label{sec:Yukawas}
The most general Yukawa Lagrangian in the generic scalar basis $\{\Phi_{1},\Phi_{2}\}$
reads \cite{Herrero-Garcia:2019mcy}:
\begin{equation}
-\mathcal{L}_{Yukawa}=\bar{Q}^{0}\,(Y_{1}^{u}\tilde{\Phi}_{1}+Y_{2}^{u}\tilde{\Phi}_{2})u_{{\rm R}}^{0}+\bar{Q}^{0}\,(Y_{1}^{d}\Phi_{1}+Y_{2}^{d}\Phi_{2})d_{{\rm R}}^{0}+\bar{L}^{0}\,(Y_{1}^{l}\Phi_{1}+Y_{2}^{l}\Phi_{2})l_{{\rm R}}^{0}+{\rm ~h.c.}\,\label{eq:yuk2d},
\end{equation}
where the superscript ``0'' refers to the flavour eigenstates, and $\tilde{\Phi}_j = i \sigma_2 \Phi_j^{*}$. The fermion mass matrices are given by
\begin{equation}
M_{f}=\frac{1}{\sqrt{2}}(v_{1}Y_{1}^{f}+v_{2}Y_{2}^{f}),\qquad f=u,d,l.\label{masa-fermiones}
\end{equation}
Notice that these matrices need to be diagonalized. This can be done through a bi-unitary transformation
\begin{equation}
\bar{M}_{f}=V_{fL}^{\dagger}M_{f}V_{fR},\label{masa-diagonal}
\end{equation}
where the fact that $M_{f}$ is Hermitian implies that
$V_{fL}=V_{fR}$, and the mass eigenstates for the fermions are given by
\begin{equation}
u=V_{u}^{\dagger}u^{0},\qquad d=V_{d}^{\dagger}d^{0},\qquad l=V_{l}^{\dagger}l^{0}.\label{redfields}
\end{equation}
Then, Eq.~(\ref{masa-fermiones}) takes the form
\begin{equation}
\bar{M}_{f}=\frac{1}{\sqrt{2}}(v_{1}\tilde{Y}_{1}^{f}+v_{2}\tilde{Y}_{2}^{f}),\label{diag-Mf}
\end{equation}
where $\tilde{Y}_{i}^{f}=V_{fL}^{\dagger}Y_{i}^{f}V_{fR}$; note that
the individual Yukawa matrices are not diagonalized by this transformation. For simplicity of notation we shall drop the tilde from now on. Solving for $Y_{1}^{f}$ we have
\begin{equation}
Y_{1,ba}^{f}=\frac{\sqrt{2}}{v\cos\beta}\bar{M}_{f,ba}-\tan\beta Y_{2,ba}^{f}.
\end{equation}
Using the expressions above we can write the Yukawa Lagrangian in the mass basis as\footnote{This Yukawa Lagrangian differs from the one defined in Eq.(2.3) in \cite{Crivellin:2019dun} by an overall factor of $\sqrt{2}$.}
\begin{equation}
\begin{aligned}-\mathcal{L}_{Yukawa}\, =& \bar{u}_{b} \left(V_{bc}\xi_{ca}^{d}P_{R} - V_{ca}\xi_{cb}^{u*}P_{L}\right) d_{a}\,H^{+} + \bar{\nu}_{b}\xi_{ba}^{l}P_{R}l_{a}\,H^{+} + \mathrm{h.c.}\\
& +\sum_{f=u,d,e}\sum_{\phi=h,H,A}\bar{f}_{b} \Gamma_{\phi ba}^{f}P_{R}f_{a}\phi+\mathrm{h.c.},
\end{aligned}
\label{eq:YukawaLagran}
\end{equation}
where $a,b=1,2,3$ and
\begin{equation}
\xi_{ba}^{f}\equiv\dfrac{Y_{2,ba}^{f}}{\cos\beta}-\dfrac{\sqrt{2}\tan\beta\bar{M}_{f,ba}}{v},\label{eq:Xis}
\end{equation}
\begin{align}
\Gamma_{hba}^{f} & \equiv\dfrac{\bar{M}_{f,ba}}{v}s_{\beta-\alpha}+\dfrac{1}{\sqrt{2}}\xi_{ba}^{f}c_{\beta-\alpha},\label{eq:Gammafhba}\\
\Gamma_{Hba}^{f} & \equiv\dfrac{\bar{M}_{f,ba}}{v}c_{\beta-\alpha}-\dfrac{1}{\sqrt{2}}\xi_{ba}^{f}s_{\beta-\alpha},\label{eq:GammafHba}\\
\Gamma_{Aba}^{f} & \equiv\begin{cases}
-\dfrac{i}{\sqrt{2}}\xi_{ba}^{f} & \textrm{if }f=u,\\
\dfrac{i}{\sqrt{2}}\xi_{ba}^{f} & \textrm{if }f=d,l.
\end{cases}\label{eq:GammafAba}
\end{align}
A priori, the three matrices $\xi^{u}$, $\xi^{d}$ and $\xi^{l}$ contain 27 new complex Yukawa
couplings, i.e., 54 real parameters. Considering only their real parts and the ansatz
\begin{equation}
\xi^{u}=\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & \xi_{cc}^{u} & \xi_{ct}^{u}\\
0 & \xi_{tc}^{u} & \xi_{tt}^{u}
\end{array}\right),\qquad\xi^{d}=\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & \xi_{ss}^{d} & \xi_{sb}^{d}\\
0 & \xi_{bs}^{d} & \xi_{bb}^{d}
\end{array}\right),\qquad\xi^{l}=\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & \xi_{\mu\mu}^{l} & \xi_{\mu\tau}^{l}\\
0 & \xi_{\tau\mu}^{l} & \xi_{\tau\tau}^{l}
\end{array}\right),\label{eq:Textures}
\end{equation}
we get only 12 Yukawa parameters (i.e., ignoring $3\to1$ and $2\to1$ generation transitions). An asymmetric $\xi^{u}$ matrix has previously been considered, motivated by $B_{s}-\overline{B}_{s}$ oscillation constraints at one loop level and heavy Higgs masses of order $\lesssim 700\,\mathrm{GeV}$ \cite{Altunkaynak:2015twa,Hou:2020chc}. However, since we compute the dominant contributions at LO and we explore masses in the range $[0.5,\,4.0]\,\mathrm{TeV}$ as in \cite{Herrero-Garcia:2019mcy}, we will consider only the symmetric case, i.e., $\xi_{tc}^{u}=\xi_{ct}^{u}$. Hence, assuming the remaining $\xi^{d}$ and $\xi^{l}$ matrices to be symmetric as well, the total number of parameters to scan over is reduced by 3.
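The parameter counting above can be summarised explicitly (an illustrative tally):

```python
# Three general complex 3x3 Yukawa matrices xi^u, xi^d, xi^l:
n_real_general = 3 * (3 * 3) * 2  # 54 real parameters (27 complex couplings)

# Texture ansatz keeping only real parts: 4 non-zero entries per matrix
n_texture = 3 * 4                 # 12 real parameters

# Symmetric matrices: one off-diagonal entry removed per matrix
n_scanned = n_texture - 3         # 9 Yukawa parameters scanned over
```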
\section{Effective Hamiltonians for flavour changing transitions}
\label{sec:Hamiltonian}
Most of the relevant flavour observables that we consider in this work arise from processes with either suppressed or negligible contributions from SM particles. Hence, these processes are often dominated by BSM contributions, which can be generated by a large variety of UV complete theories. It is often convenient to study these transitions using the model-agnostic effective Hamiltonian approach, where transition operators are decomposed using the Operator Product Expansion (OPE) into a collection of simple, low-energy operators. Associated with each of these operators comes a WC, which encodes the knowledge of the high-energy theory. Even for complete high-energy theories, as is the case here, it is extremely useful to work with the effective Hamiltonian, since one can easily compute most observables of interest in terms of a small set of WCs. In fact, there are only two independent flavour changing transitions that give rise to the majority of the studied observables, and these are the neutral $b\to s\ell^+\ell^-$ transition and the charged $b\to c\ell\bar{\nu}$ transition. In this section we write down the effective Hamiltonian for both of these transitions and provide expressions for the BSM contributions to the WCs that arise in our model.\footnote{These new BSM contributions for $b\to s\ell^+\ell^-$ and $b\to c\ell\bar{\nu}$ transitions were included in our local version of \textsf{FlavBit} and might appear in a future release.}
\subsection{\texorpdfstring{$b\to s\ell^+\ell^-$ transitions}{b to smumu}}
The effective Hamiltonian responsible for $b\to s\ell^+\ell^-$ transitions can be written as
\begin{align}
{\cal H}_{\mathrm{eff}} & =-\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{*}\left[\sum_{i=S,P}C_{i}(\mu)\mathcal{O}_{i}+C_{i}^{\prime}(\mu)\mathcal{O}_{i}^{\prime}+\sum_{i=7}^{10} C_{i}(\mu)\mathcal{O}_{i}+C_{i}^{\prime}(\mu)\mathcal{O}_{i}^{\prime}\right],\label{eq:heff}
\end{align}
where $\mu$ is the energy scale at which the WCs are defined, and
\begin{alignat}{3}
\mathcal{O}_{9} & =\frac{e^{2}}{16\pi^{2}}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\ell),\qquad\qquad & & \mathcal{O}_{10} & & =\frac{e^{2}}{16\pi^{2}}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\gamma_{5}\ell),\label{eq:basisA}\\
\mathcal{O}_{S} & =\frac{e^{2}}{16\pi^{2}}m_{b}(\bar{s}P_{R}b)(\bar{\ell}\ell),\qquad\qquad & & \mathcal{O}_{P} & & =\frac{e^{2}}{16\pi^{2}}m_{b}(\bar{s}P_{R}b)(\bar{\ell}\gamma_{5}\ell),\\
\mathcal{O}_{7} & =\frac{e}{16\pi^{2}}m_{b}(\bar{s}\sigma^{\mu\nu}P_{R}b)F_{\mu\nu},\qquad\qquad & & \mathcal{O}_{8} & & =\frac{g}{16\pi^{2}}m_{b}\bar{s}\sigma^{\mu\nu}T^{a}P_{R}bG_{\mu\nu}^{a},
\end{alignat}
are the FCNC local operators encoding the low-energy description of the high-energy physics that has been integrated out. The primed operators are obtained by the replacement $P_{R(L)}\rightarrow P_{L(R)}$. The WCs can be written as
\begin{align}
C_{i} & =C_{i}^{\mathrm{SM}}+\Delta C_{i},
\end{align}
\noindent where $C_{i}^{\mathrm{SM}}$ is the SM contribution to the $i$th WC and $\Delta C_{i}$ is the NP contribution, a prediction of the GTHDM model. The SM contribution to the scalar WCs, $C_{S,P}^{(\prime)}$, is negligible, whereas for $C_{7-10}$ we have
\begin{align}
\mathrm{Re}(C_{7,8,9,10}^{\mathrm{SM}})=-0.297,\,-0.16,\,4.22,\,-4.06,
\end{align}
as computed with \textsf{SuperIso}. We evaluate the NP scalar and pseudoscalar coefficients $\Delta C_{S,P}^{(\prime)}$ at tree level, which is the LO contribution from the GTHDM \cite{Crivellin:2019dun}. Henceforth we will use the scalar and pseudoscalar coefficients in the basis defined in \textsf{SuperIso}, i.e., $C_{Q_{1},Q_{2}}^{(')}=m_{b(s)}C_{S,P}^{(')}$. The remaining coefficients, $\Delta C_{7,8,9,10}$, first appear at one-loop level and we therefore include the one-loop BSM contributions to these in our analysis. These one-loop corrections can be split by contribution as follows,
\begin{align}
\Delta C_{7,8} & =C_{7,8}^{\gamma,\,g},\label{eq:DeltaC78}\\
\Delta C_{9} & =C_{9}^{\gamma}+C_{9}^{Z}+C_{9}^{\textrm{box}},\label{eq:DeltaC9}\\
\Delta C_{10} & =C_{10}^{Z}+C_{10}^{\textrm{box}},\label{eq:DeltaC10}
\end{align}
where $C_{9,10}^{Z}$ and $C_{7,9}^{\gamma}$ come from the $Z$ and $\gamma$ penguins, respectively (figure\ \ref{fig:Penguin-diagrams.}), and $C_{9,10}^{\textrm{box}}$ are contributions from box diagrams (figure\ \ref{fig:Box-diagrams.}). At this level, the $\Delta C_{9}^{'}$ and $\Delta C_{10}^{'}$ coefficients are suppressed by $m_{b}/m_{t}$ with respect to their non-primed counterparts. However, for studying the effects of flavour-changing Yukawa couplings we include these coefficients for completeness. $C_{8}^{g}$ is the WC of the chromomagnetic operator coming from gluon penguins, and the NP contributions $\Delta C_{7,8}^{'}$ are computed in \cite{Crivellin:2019dun}.
\begin{figure}[h]
{\Large{}
\[
\Diagram{ & & \momentum[top]{fuV}{\ell^{+}\quad}\\
& & \momentum{fA}{}\vertexlabel^{\ell^{-}}\\
& & \momentum[urt]{gv}{\:Z,\gamma}\\
\momentum{fA}{b} & h\vertexlabel_{H^{+}} & \momentum[bot]{fl}{t}h\momentum{fA}{s\;}
}
\quad\quad\Diagram{ & & \momentum[top]{fuV}{\ell^{+}\quad}\\
& & \momentum{fA}{}\vertexlabel^{\ell^{-}}\\
& & \momentum[urt]{gv}{\:Z,\gamma}\\
\momentum{fA}{b} & f\vertexlabel_{t} & \momentum[bot]{hl}{H^{+}}f\momentum{fA}{s\;}
}
\]
}{\Large\par}
{\Large{}
\[
\Diagram{\\
& & & \momentum[top]{fuV}{\ell^{+}\quad}\\
& & & \momentum{fA}{}\vertexlabel^{\ell^{-}}\\
\momentum{fA}{b} & h\vertexlabel_{H^{+}} & \momentum[bot]{fl}{t}h\momentum{fA}{s\;} & \momentum[urt]{gv}{\:Z,\gamma}\\
& & & \momentum{fA}{s\;}
}
\quad\quad\Diagram{ & \momentum[top]{fuV}{\ell^{+}\quad}\\
& \momentum{fA}{}\vertexlabel^{\ell^{-}}\\
& \momentum[urt]{gv}{\:Z,\gamma} & h\vertexlabel_{H^{+}} & \momentum[bot]{fl}{t}h\momentum{fA}{s\;}\\
\momentum{fA}{b\;} & \momentum{fA}{b}
}
\]
}{\Large\par}
{\Large{}\caption{\emph{Penguin diagrams for $b\to s\ell^{+}\ell^{-}$ transitions.
\label{fig:Penguin-diagrams.}}}
}{\Large\par}
\end{figure}
\begin{figure}[h]
\begin{centering}
{\Large{}}\subfloat[\label{fig:a}]{\begin{centering}
{\Large{}$\Diagram{\vertexlabel^{b} & & & & \vertexlabel^{\ell^{-}}\\
\momentum{fdA}{} & & & \momentum{fuA}{}\\
& & \momentum{h}{\quad\overset{H^{-}}{}}\\
& \momentum[ulft]{fvV}{t\,} & \momentum[bot]{h}{\quad\underset{H^{+}}{}} & \momentum{fvA}{\,\nu_{\mu}}\\
\vertexlabel_{s}\momentum{fuV}{} & & & \momentum{fdV}{}\vertexlabel_{\ell^{+}}
}
$}{\Large\par}
\par\end{centering}
{\Large{}}{\Large\par}}{\Large{}$\quad\quad$}\subfloat[\label{fig:b}]{\begin{centering}
{\Large{}$\Diagram{\vertexlabel^{b} & & & & \vertexlabel^{\ell^{-}}\\
\momentum{fdA}{} & & & \momentum{fuA}{}\\
& & \momentum{h}{\quad\overset{H^{-}}{}}\\
& \momentum[ulft]{fvV}{t\,} & \momentum[bot]{h}{\quad\underset{H^{+}}{}} & \momentum{fvA}{\,\nu_{\tau}}\\
\vertexlabel_{s}\momentum{fuV}{} & & & \momentum{fdV}{}\vertexlabel_{\ell^{+}}
}
$}{\Large\par}
\par\end{centering}
{\Large{}}{\Large\par}}{\Large{}$\quad\quad$}\subfloat[\label{fig:c}]{\begin{centering}
{\Large{}$\Diagram{\vertexlabel^{b} & & & & \vertexlabel^{\ell^{-}}\\
\momentum{fdA}{} & & & \momentum{fuA}{}\\
& & \momentum{h}{\quad\overset{H^{-}}{}}\\
& \momentum[ulft]{fvV}{t\,} & \momentum[bot]{g}{\quad\underset{W^{+}}{}} & \momentum{fvA}{\,\nu_{\mu}}\\
\vertexlabel_{s}\momentum{fuV}{} & & & \momentum{fdV}{}\vertexlabel_{\ell^{+}}
}
$}{\Large\par}
\par\end{centering}
{\Large{}}{\Large\par}}{\Large{}$\quad\quad$}\subfloat[\label{fig:d}]{\begin{centering}
{\Large{}$\Diagram{\vertexlabel^{b} & & & & \vertexlabel^{\ell^{-}}\\
\momentum{fdA}{} & & & \momentum{fuA}{}\\
& & \momentum{g}{\quad\overset{W^{-}}{}}\\
& \momentum[ulft]{fvV}{t\,} & \momentum[bot]{h}{\quad\underset{H^{+}}{}} & \momentum{fvA}{\,\nu_{\mu}}\\
\vertexlabel_{s}\momentum{fuV}{} & & & \momentum{fdV}{}\vertexlabel_{\ell^{+}}
}
$}{\Large\par}
\par\end{centering}
{\Large{}}{\Large\par}}{\Large\par}
\par\end{centering}
{\Large{}\caption{\emph{Box diagrams for $b\to s\ell^{+}\ell^{-}$ transitions.\label{fig:Box-diagrams.}}}
}{\Large\par}
\end{figure}
\subsubsection{Penguins and boxes computation}
We review the computation of the WCs in Eqs. (\ref{eq:DeltaC78}-\ref{eq:DeltaC10}), which have already been obtained for the flavour-conserving general THDM in \textsf{SuperIso} and for the GTHDM itself in \cite{Iguro:2017ysu,Iguro:2018qzf,Crivellin:2019dun}. In the latter works, the Yukawa couplings related to $\xi^{d}$ were assumed to be zero or negligibly small from the outset, avoiding the appearance of possible mixed terms between the down and up couplings which, a priori, need not be as suppressed as those involving only down quarks. This computation is performed assuming $\ell=\mu$ in the final state, as inspired by our choice of Yukawa textures in Eq.~\eqref{eq:Textures}, but it can easily be generalised to all flavours when required.
Using the model files provided by \texttt{FeynRules} from \cite{Degrande:2014vpa}, we generate in \texttt{FeynArts} the one-loop Feynman diagrams for $b\to s\mu^{+}\mu^{-}$ transitions.
The amplitudes are then tensor decomposed in \texttt{FeynCalc} \cite{Shtabovenko:2016sxi}, and the resulting Passarino-Veltman functions are Taylor expanded in the external momenta up to second
order. Finally, the functions are integrated with \texttt{Package X} \cite{Patel:2015tea}. In this way, with $x_{tH^{\pm}}=m_{t}^{2}(\mu_{W})/m_{H^{\pm}}^{2}$ for $\mu_W=\mathcal{O}(m_{W})$
we obtain
\begin{flalign}
&\begin{aligned}
C_{9}^{\gamma}=\frac{-\Gamma_{tb}^{L}\Gamma_{ts}^{L}}{\sqrt{2}G_{F}V_{tb}V_{ts}^{*}m_{t}^{2}\lambda_{tt}^{2}}\mathcal{D}^{H(0)}(x_{tH^{\pm}}),
\end{aligned}&&
\end{flalign}
\begin{flalign}
&\begin{aligned}
C_{9}^{Z} & =\frac{\Gamma_{tb}^{L}\Gamma_{ts}^{L}}{\sqrt{2}G_{F}V_{tb}V_{ts}^{*}m_{t}^{2}\lambda_{tt}^{2}}\frac{\left(1-4s_{W}^{2}\right)}{s_{W}^{2}}\mathcal{C}^{H(0)}(x_{tH^{\pm}})+\frac{m_{b}}{m_{t}}\frac{\Gamma_{tb}^{R}\Gamma_{ts}^{L}}{\sqrt{2}G_{F}V_{tb}V_{ts}^{*}}\mathcal{C}_{\textrm{mix}}^{H(0)}(x_{tH^{\pm}}),\label{eq:C9Z}
\end{aligned}&&
\end{flalign}
\begin{flalign}
&\begin{aligned}
C_{9}^{\textrm{box}}=C_{10}^{\textrm{box}} & =\frac{\Gamma_{tb}^{L}\Gamma_{ts}^{L}}{32G_{F}^{2}V_{tb}V_{ts}^{*}m_{t}^{2}}\left|\Gamma_{\nu_{i}\mu}^{R}\right|^{2}\mathcal{B}^{H(0)}(x_{tH^{\pm}})+\frac{m_{\mu}\,\xi_{\mu\mu}^{l}}{8\sqrt{2}G_{F}m_{W}^{3}s_{W}^{2}\text{\ensuremath{V_{tb}}}V_{ts}^{*}}\mathcal{B}^{H(0)}_{\textrm{mix}}(x_{tH^{\pm}},\,H),\label{eq:C9box}
\end{aligned}&&
\end{flalign}
\begin{flalign}
&\begin{aligned}
C_{10}^{Z} & =\frac{1}{\left(4s_{W}^{2}-1\right)}C_{9}^{Z},
\end{aligned}&&
\end{flalign}
\begin{flalign}
&\begin{aligned}
C_{7,8}^{\gamma,g} & =\frac{\Gamma_{tb}^{L}\Gamma_{ts}^{L}}{3\sqrt{2}G_{F}V_{tb}V_{ts}^{*}m_{t}^{2}\lambda_{tt}^{2}}F_{7,8}^{(1)}(x_{tH^{\pm}})-\frac{\Gamma_{tb}^{R}\Gamma_{ts}^{L}}{\sqrt{2}G_{F}V_{tb}V_{ts}^{*}m_{b}m_{t}\lambda_{tt}\lambda_{bb}}F_{7,8}^{(2)}(x_{tH^{\pm}}),
\end{aligned}&&
\end{flalign}
where
\begin{equation}
\Gamma_{ts}^{L}=\sum_{l=1}^{3}\xi_{l3}^{u}V_{l2}^{*},\quad\Gamma_{tb}^{L}=\sum_{k=1}^{3}V_{kt}\xi_{k3}^{u*},\label{eq:GammaLts}
\end{equation}
\begin{equation}
\Gamma_{tb}^{R}=\sum_{k=1}^{3}V_{kt}\xi_{k3}^{d*},\quad\left|\Gamma_{\nu_{i}\mu}^{R}\right|^{2}=\left|\xi_{\mu\mu}^{l}\right|^{2}+\left|\xi_{\tau\mu}^{l}\right|^{2},\label{eq:GammaRtb}
\end{equation}
with the Green functions $\mathcal{D}^{H(0)},\,\mathcal{C}^{H(0)},\,F_{7,8}^{(1)}$ and $F_{7,8}^{(2)}$ defined in appendices C1 and C2 in \cite{SuperIso4.1}. Here, $\lambda_{ii}$ are the diagonal Yukawa couplings defined in \textsf{SuperIso}, $G_{F}$ is the Fermi constant and $s_{W}$ is the sine of the Weinberg angle. The Green function $\mathcal{B}^{H(0)}$ for the box diagram contribution in $C_{9,10}^{\textrm{box}}$ coming from the new lepton flavour violating (LFV) couplings is given by
\begin{equation}
\mathcal{B}^{H(0)}(t)=\frac{t\left(t-t\log t-1\right)}{m_{W}^{2}s_{W}^{2}(t-1)^{2}}.\label{eq:Greenf_Box}
\end{equation}
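As a simple numerical cross-check of this loop function, the expression can be transcribed directly into code. The Python sketch below (with illustrative PDG-like values of $m_W$ and $s_W^2$, which are assumptions rather than fit inputs) also verifies the removable singularity at $t=1$, where expanding the numerator gives $\mathcal{B}^{H(0)}\to-1/(2m_W^2 s_W^2)$:

```python
import math

def B_H0(t, m_W=80.379, s_W2=0.231):
    """Green function B^{H(0)}(t), t = m_t^2(mu_W)/m_{H+-}^2.

    m_W (GeV) and sin^2(theta_W) defaults are illustrative PDG-like values.
    """
    return t * (t - t * math.log(t) - 1.0) / (m_W**2 * s_W2 * (t - 1.0)**2)

def b_dimless(t):
    """Dimensionless part of B^{H(0)}, convenient for checking limits."""
    return t * (t - t * math.log(t) - 1.0) / (t - 1.0)**2
```

Expanding the numerator around $t=1$ gives $-(t-1)^2/2$, so `b_dimless(t)` tends to $-1/2$ there; for a heavy charged Higgs ($t\ll1$) the function is small and negative, reflecting the decoupling of the box contribution.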
Our computation yields two new terms absent from both the \textsf{SuperIso} manual and \cite{Crivellin:2019dun}, namely the mixed term in the $C_{9}^{Z}$ expression, where
\begin{equation}
\mathcal{C}_{\textrm{mix}}^{H(0)}(t)=-\frac{\left(1-4s_{W}^{2}\right)t\left(t^{2}-2\,t\log t-1\right)}{16m_{W}^{2}s_{W}^{2}(t-1)^{3}},
\end{equation}
and a gauge-dependent contribution to $C_{9}^{\textrm{box}}$ coming from the box diagrams in figures \ref{fig:c} and \ref{fig:d}, proportional to $\mathcal{B}^{H(0)}_{\textrm{mix}}(t,\,H)$ with $H=m_{H^{\pm}}^{2}/m_{W}^{2}$ (see appendix \ref{sec:gauge-term}).
For all remaining terms, we obtained full agreement with \cite{Crivellin:2019dun} once the overall
$\sqrt{2}$ factor in their Yukawa Lagrangian relative to our Eq.~(\ref{eq:YukawaLagran}) is taken into account. It is important to note that once the full quantum field theory is matched onto the effective theory at a scale $\mu_W=\mathcal{O}(m_{W})$, the evolution of the WC $C_{7}$ (and $C_{7}^{'}$) from $\mu=\mu_W$ down to $\mu=\mu_b$, where $\mu_b$ is of the order of $m_b$, is given at LO by \cite{Buras:1998raa}
\begin{equation}
C_{7}^{\textrm{eff}}(\mu_b)=\eta^{\frac{16}{23}}C_{7}+\frac{8}{3}\left(\eta^{\frac{14}{23}}-\eta^{\frac{16}{23}}\right)C_{8}+\sum_{i=1}^{8}h_{i}\eta^{a_{i}}\,C_{2}\,,
\label{C7RGE}
\end{equation}
where $\eta=\alpha_{S}(\mu_{W})/\alpha_{S}(\mu_{b})$ and the renormalisation group evolution of the QCD coupling is
\begin{equation}
\alpha_{S}(\mu_{b})=\frac{\alpha_{S}(m_{Z})}{1-\beta_{0}\frac{\alpha_{S}(m_{Z})}{2\pi}\log(m_{Z}/\mu_{b})},
\end{equation}
with $\beta_{0}=23/3$. The $\sum_{i=1}^{8}h_{i}\eta^{a_{i}}$ factor in Eq.~\eqref{C7RGE} is given in Eq.~(12.23) of \cite{Buras:1998raa} and references therein. The $C_{2}$ coefficient
comes from four-quark operators generated by $W$ boson exchange in the SM and contributes significantly to the branching ratio $\mathrm{BR}(\overline{B}\rightarrow X_{s}\gamma)$.
In the GTHDM, as shown in \cite{Crivellin:2019dun}, an analogous contribution arises from charged Higgs exchange at tree level. In this way, following \cite{Buchalla:1995vs} with $\alpha_{S}(m_{Z})=0.117$, we use the following parametric expression at LO:
\begin{equation}
C_{7}^{\textrm{eff}}(\mu_b)=0.698\,C_{7}+0.086\,C_{8}-0.158\,C_{2},
\end{equation}
where $C_{2}=C_{2}^{\mathrm{SM}}+\Delta C_{2}$ for $C_{2}^{\mathrm{SM}}=1$ and
\begin{equation}
\Delta C_{2}= -\dfrac{7}{18}\dfrac{m_{W}^{2}}{m_{H^{\pm}}^{2}}\dfrac{V_{k2}^{*}\xi_{k2}^{u}\xi_{n2}^{u*}V_{n3}}{g_{2}^{2}V_{tb}V_{ts}^{*}}-\dfrac{1}{3}\dfrac{m_{c}}{m_{b}}\dfrac{m_{W}^{2}}{m_{H^{\pm}}^{2}}\dfrac{V_{k2}^{*}\xi_{k2}^{u}V_{2n}\xi_{n3}^{d}}{g_{2}^{2}V_{tb}V_{ts}^{*}}\left(3+4\log\left(\dfrac{\mu_{b}^{2}}{m_{H^{\pm}}^{2}}\right)\right),
\end{equation}
with $g_{2}$ the weak coupling constant. Similarly, there will be a contribution to the $C_{9}$ (and $C_{9}^{'}$) WC coming from those four-quark operators given by \cite{Crivellin:2019dun}
\begin{equation}
C_{9}^{4-\mathrm{quark}}(\mu_b)=\dfrac{2}{{27}}\dfrac{{V_{k2}^{*}\xi_{k2}^{u}\xi_{n2}^{u*}{V_{n3}}}}{{g_{2}^{2}{V_{tb}}V_{ts}^{*}}}\dfrac{m_{W}^{2}}{m_{H^{\pm}}^{2}}\Bigg(19+12\log\!\left(\!\dfrac{\mu_{b}^{2}}{m_{H^{\pm}}^{2}}\!\right)\Bigg),
\end{equation}
which can be added at LO to both the penguins and boxes contributions, obtaining
\begin{equation}
C_{9}^{\mathrm{eff}}(\mu_b)=C_{9}+C_{9}^{4-\mathrm{quark}}(\mu_b).
\end{equation}
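The numerical coefficients $0.698$, $0.086$ and $-0.158$ in the parametric expression for $C_{7}^{\textrm{eff}}(\mu_b)$ above can be cross-checked against Eq.~\eqref{C7RGE}. The Python sketch below approximately reproduces them using the standard LO magic numbers $h_i$, $a_i$ of \cite{Buras:1998raa}; the specific scale choices $\mu_W=m_W$ and $\mu_b=5$~GeV are our assumptions:

```python
import math

MZ, MW, MU_B = 91.1876, 80.379, 5.0     # GeV; scale choices are assumptions
ALPHA_S_MZ, BETA0 = 0.117, 23.0 / 3.0   # 5 active flavours

def alpha_s(mu):
    """LO running of alpha_S from m_Z down to the scale mu."""
    return ALPHA_S_MZ / (1.0 - BETA0 * ALPHA_S_MZ / (2.0 * math.pi)
                         * math.log(MZ / mu))

eta = alpha_s(MW) / alpha_s(MU_B)

# Standard LO magic numbers (Buras et al.)
a = [14/23, 16/23, 6/23, -12/23, 0.4086, -0.4230, -0.8994, 0.1456]
h = [2.2996, -1.0880, -3/7, -1/14, -0.6494, -0.0380, -0.0186, -0.0057]

c7_coef = eta**(16/23)                               # multiplies C_7
c8_coef = (8/3) * (eta**(14/23) - eta**(16/23))      # multiplies C_8
c2_coef = sum(hi * eta**ai for hi, ai in zip(h, a))  # multiplies C_2
```

With these inputs the three coefficients come out close to the quoted values; the residual per-mille differences reflect the exact matching-scale choices.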
\subsubsection{Summary of contributions}
As already mentioned, in view of the flavour-changing couplings in the GTHDM, there are two new contributions compared to the ones present in \textsf{SuperIso}. These contributions come from the box diagrams in figures\ \ref{fig:a}-\ref{fig:b} and from the $Z$ penguin in figure\ \ref{fig:Penguin-diagrams.}. The $\Gamma_{tb}^{L}\Gamma_{ts}^{L}$ contribution is the largest and dominates the amplitude for most of the parameter space, with a strong dependence on $\tan\beta$, $m_{H^{\pm}},$ $Y_{2,ct/tc}^{u},\,Y_{2,tt}^{u},\,Y_{2,\mu\mu}^{l},\,Y_{2,\mu\tau}^{l}$. There are also two subdominant contributions, the first one coming from the part proportional to $\Gamma_{tb}^{R}\Gamma_{ts}^{L}$ in the $Z$ penguin diagram in figure\ \ref{fig:Penguin-diagrams.}. When comparing its size relative to the $\Gamma_{tb}^{L}\Gamma_{ts}^{L}$ term, we find regions of the parameter space in which it can make up to 10$\%$ of the total contribution (see figure\ \ref{fig:comparison} \textit{left}). The second subdominant contribution is the aforementioned gauge-dependent part of the box diagrams (figures\ \ref{fig:c}-\ref{fig:d}), which is suppressed by the muon mass (see figure\ \ref{fig:comparison} \textit{right}). Additionally, we verified that these ratios are essentially unaffected when varying the mass of the charged Higgs from 500 GeV to 4000 GeV. In this way, we keep in our calculations the $\Gamma_{tb}^{R}\Gamma_{ts}^{L}$ term from the $Z$ penguin and neglect the gauge-dependent part of the box diagrams.
\begin{figure}[h]
\begin{centering}
\includegraphics[scale=0.4]{figures/C9Zmix_Suppression_contours.pdf} $\qquad$\includegraphics[scale=0.4]{figures/C9Boxmix_Suppression_contours.pdf}
\par\end{centering}
\caption{\emph{Left: $C_{9}^{Z,mix}/C_{9}^{Z}$ contour levels for $\xi_{sb}^{d}=\xi_{bb}^{d}=\xi^{d}$.
Here $C_{9}^{Z}$ and $C_{9}^{Z,mix}$ refer to the first and second
terms in Eq.~(\ref{eq:C9Z}), respectively. Right: $C_{9}^{box,mix}/C_{9}^{box}$
contour levels for $\xi_{tt}^{u}=\xi_{ct}^{u}=\xi^{u}$, where $C_{9}^{box}$ and $C_{9}^{box,mix}$ refer to the first and second
terms in Eq.~(\ref{eq:C9box}), respectively.\label{fig:comparison}}}
\end{figure}
\subsection{\texorpdfstring{$b\to c \ell \overline{\nu}$}{b to clnu} semileptonic transitions}
As a consequence of the new interactions between the fermions and the charged Higgs, tree-level flavour-changing semileptonic transitions appear in the GTHDM (figure\ \ref{fig:RD-tree-level}), which have been extensively studied in the literature \cite{Celis:2012dk,Crivellin:2012ye,Crivellin2013,Alonso:2016oyd,Iguro:2018qzf,Martinez:2018ynq}. We therefore include tree-level calculations of the related Wilson coefficients in our analysis. The effective Hamiltonian responsible for the $b\to c \ell \overline{\nu}$
transitions for the semileptonic decays of $B$-mesons, including the SM and tree level GTHDM contributions can be
written in terms of scalar operators in the form
\begin{equation}
\begin{array}{l}
{\cal H}_{{\rm eff}}=C_{SM}^{cb}{\cal O}_{SM}^{cb}+C_{R}^{cb}{\cal O}_{R}^{cb}+C_{L}^{cb}{\cal O}_{L}^{cb},
\end{array}
\label{eq:Heffective}
\end{equation}
where $C_{SM}^{cb}=4G_{F}V_{cb}/\sqrt{2}$ and the operators are given by
\begin{equation}
\begin{array}{l}
{\cal O}_{SM}^{cb}=\left(\bar{c}\gamma_{\mu}P_{L}b\right)\left(\bar{\ell}\gamma^{\mu}P_{L}\nu\right),\\
{\cal O}_{R}^{cb}=\left(\bar{c}P_{R}b\right)\left(\bar{\ell}P_{L}\nu\right),\\
{\cal O}_{L}^{cb}=\left(\bar{c}P_{L}b\right)\left(\bar{\ell}P_{L}\nu\right).
\end{array}\label{eq:Oeffective}
\end{equation}
\begin{figure}[h]
\begin{centering}
{\Large{}$\Diagram{\vertexlabel^{c} & & & & \vertexlabel^{\ell}\\
& fdV & & fuA\\
& & \momentum[bot]{h}{\quad\,H^{-}}\\
& fuA & & fdV\vertexlabel_{\overline{\nu}}\\
\vertexlabel_{b}
}
$}{\Large\par}
\par\end{centering}
\caption{\emph{Tree level contribution to $b\to c \ell \overline{\nu}$.\label{fig:RD-tree-level}}}
\end{figure}
Given that the flavour of the neutrino in the final state cannot be discerned by experiments, one has to add (incoherently) to the SM the NP contributions associated with the LFV couplings $\xi_{ij}^{l}$. As the existing constraints apply separately to the scalar and
the pseudoscalar couplings, it is convenient to define
\begin{eqnarray}
g_{S}^{\ell\ell^{\prime}}\equiv\frac{C_{R}^{cb}+C_{L}^{cb}}{C_{SM}^{cb}},\ g_{P}^{\ell\ell^{\prime}}\equiv\frac{C_{R}^{cb}-C_{L}^{cb}}{C_{SM}^{cb}},
\end{eqnarray}
where in our analysis we evaluate the WCs $C_{R}^{cb}$ and $C_{L}^{cb}$ at tree-level, with the expressions,
\begin{equation}
C_{R}^{cb}=-2\frac{(V_{cb}\xi_{bb}^{d}+V_{cs}\xi_{sb}^{d})\xi_{\ell\ell^{\prime}}^{l*}}{m_{H^{\pm}}^{2}},\quad C_{L}^{cb}=2\frac{V_{tb}\xi_{tc}^{u*}\xi_{\ell\ell^{\prime}}^{l*}}{m_{H^{\pm}}^{2}}.
\label{semileptonicWCs}
\end{equation}
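For orientation, the tree-level couplings above can be evaluated numerically. The Python sketch below builds $g_{S}$ and $g_{P}$ from $C_{R}^{cb}$ and $C_{L}^{cb}$ following Eq.~\eqref{semileptonicWCs}; the CKM magnitudes, charged Higgs mass and Yukawa entries passed in are purely illustrative placeholders, not values from the fit:

```python
import math

G_F = 1.1663787e-5                           # GeV^-2
V_cb, V_cs, V_tb = 0.0410, 0.9735, 0.9991    # illustrative CKM magnitudes
m_Hpm = 1000.0                               # GeV, assumed charged Higgs mass

def semileptonic_couplings(xi_bb_d, xi_sb_d, xi_tc_u, xi_ll_l):
    """Tree-level g_S, g_P for b -> c l nu; xi arguments may be complex."""
    C_SM = 4.0 * G_F * V_cb / math.sqrt(2.0)
    C_R = -2.0 * (V_cb * xi_bb_d + V_cs * xi_sb_d) * xi_ll_l.conjugate() / m_Hpm**2
    C_L = 2.0 * V_tb * xi_tc_u.conjugate() * xi_ll_l.conjugate() / m_Hpm**2
    g_S = (C_R + C_L) / C_SM
    g_P = (C_R - C_L) / C_SM
    return g_S, g_P
```

Setting $\xi_{tc}^{u}=0$ switches off $C_{L}^{cb}$, in which case $g_{S}=g_{P}$, as expected from the definitions above.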
\section{Observables}
\label{sec:Observables}
In this section we present the observables to be included in the fit. We divide them into four sets: the first contains FCNC observables from $b\to s$ transitions and rare $B$ meson decays, both affected by the new WC contributions. The second set comprises FCCC observables arising from semileptonic $b\to c \ell \overline{\nu}$ decays, together with the mass difference $\Delta M_{s}$ from $B_{s}-\overline{B}_{s}$ oscillations. Various leptonic decays of mesons form the third set. Finally, the fourth set contains leptonic observables associated with $\tau$ and $\mu$ decays, in particular the anomalous magnetic moment of the muon.
\subsection{FCNCs and \texorpdfstring{$B$}{B} rare decays}
\label{sec:FCNCObservables}
Lepton flavour universality in the SM means that all couplings between leptons and gauge bosons are identical (up to mass differences), so any departure from this identity could be a clear sign of NP. The most interesting tests of LFU violation with FCNCs are given by the ratios of $b\rightarrow s\ell\ell$ transitions
\begin{equation}
R(K^{(*)})=\frac{\Gamma(B\rightarrow K^{(*)}\mu^{+}\mu^{-})}{\Gamma(B\rightarrow K^{(*)}e^{+}e^{-})},
\end{equation}
with $\Gamma$ representing the decay width and $K^{(*)}$ denoting kaons. As per our choice of Yukawa textures in Eq.~\eqref{eq:Textures}, here we only consider NP effects coming from the muon-specific WCs, i.e., the electronic WCs are SM-like. Aside from the $R(K^{(*)})$ ratios, hints of LFU violation are found in many branching fractions and angular observables related
to $B\rightarrow K^{(*)}\mu^{+}\mu^{-}$ decays as a function of the dimuon mass squared $q^2$. In this work we use the same observables as in \cite{Bhom:2020lmk}, with the predicted values obtained with \textsf{SuperIso} and with likelihoods provided via \textsf{HEPLike}. In particular, among the observables included are the optimised angular observables $P_{i}^{(\prime)}$, which have been constructed to minimise the hadronic uncertainties emerging from form factor contributions to the $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ decay at leading order \cite{Descotes-Genon:2013vna}. Experimentally, these observables are obtained by fitting $q^{2}$-binned
angular distributions, while in the theory they are defined as CP-averages integrated over the $q^{2}$ bins:
\begin{align}
\left\langle P_{1}\right\rangle_{{\rm bin}} & =\frac{1}{2}\frac{\int_{{\rm bin}}dq^{2}[J_{3}+\bar{J}_{3}]}{\int_{{\rm bin}}dq^{2}[J_{2s}+\bar{J}_{2s}]}\ , & \left\langle P_{2}\right\rangle_{{\rm bin}} & =\frac{1}{8}\frac{\int_{{\rm bin}}dq^{2}[J_{6s}+\bar{J}_{6s}]}{\int_{{\rm bin}}dq^{2}[J_{2s}+\bar{J}_{2s}]}\ ,
\end{align}
\begin{equation}
\left\langle P_{5}^{\prime}\right\rangle_{{\rm bin}}=\frac{1}{2\,{\cal N}_{{\rm bin}}^{\prime}}\int_{{\rm bin}}dq^{2}[J_{5}+\bar{J}_{5}]\ ,
\end{equation}
where the $J_{i}$ functions and the normalisation
constant $\mathcal{N}_{\mathrm{bin}}^{\prime}$ are given in \cite{Bhom:2020lmk}. Additionally, they can be related to the form factor dependent observables $S_{i}$ \cite{Altmannshofer:2008dz} as
\begin{equation}
\begin{aligned}P_{1} & =\frac{2\,S_{3}}{(1-F_{{\rm L}})},\qquad P_{2}=\frac{2}{3}\frac{A_{{\rm FB}}}{(1-F_{{\rm L}})},\label{eq:P1P2}\end{aligned}
\end{equation}
\begin{equation}
P_{5}^{\prime}=\frac{S_{5}}{\sqrt{F_{{\rm L}}(1-F_{{\rm L}})}},\label{eq:P5p}
\end{equation}
where $A_{\rm FB}$ is the forward-backward asymmetry of the dimuon system and $F_{L}$ is the fraction of longitudinal polarisation of the $K^{*0}$ meson.
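Since Eqs.~(\ref{eq:P1P2}) and (\ref{eq:P5p}) are simple algebraic maps between the $S_{i}$ basis and the optimised basis, they can be transcribed directly; a minimal Python helper (a plain transcription of the relations above):

```python
import math

def p1(s3, f_l):
    """P_1 from S_3 and the longitudinal polarisation fraction F_L."""
    return 2.0 * s3 / (1.0 - f_l)

def p2(a_fb, f_l):
    """P_2 from the forward-backward asymmetry A_FB and F_L."""
    return (2.0 / 3.0) * a_fb / (1.0 - f_l)

def p5_prime(s5, f_l):
    """P_5' from S_5 and F_L."""
    return s5 / math.sqrt(f_l * (1.0 - f_l))
```

At $F_{L}=0.5$ the normalisation $\sqrt{F_{L}(1-F_{L})}=0.5$, so $P_{5}^{\prime}=2S_{5}$, which makes the form-factor-driven rescaling between the two bases explicit.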
The observable most sensitive to scalar operators is the branching ratio $\mathrm{BR}(B_{s}\rightarrow\mu^{+}\mu^{-})$, which also depends on the muon-specific $C_{10}$ and $C_{10}^{'}$ WCs \cite{Bhom:2020lmk}:
\begin{align}
\mathrm{BR}(B_{s} & \rightarrow\mu^{+}\mu^{-})=\dfrac{G_{F}^{2}\alpha^{2}}{64\pi^{3}}f_{B_{s}}^{2}\tau_{B_{s}}m_{B_{s}}^{3}\big|V_{tb}V_{ts}^{*}\big|^{2}\sqrt{1-\frac{4m_{\mu}^{2}}{m_{B_{s}}^{2}}}\nonumber\\
& \times\left[\left(1-\frac{4m_{\mu}^{2}}{m_{B_{s}}^{2}}\right)\left|\dfrac{m_{B_{s}}\left(C_{Q_{1}}-C_{Q_{1}}^{'}\right)}{(m_{b}+m_{s})}\right|^{2}
+\left|\dfrac{m_{B_{s}}\left(C_{Q_{2}}-C_{Q_{2}}^{'}\right)}{\left(m_{b}+m_{s}\right)}-2\left(C_{10}-C_{10}^{\prime}\right)\frac{m_{\mu}}{m_{B_{s}}}\right|^{2}\right],
\label{eq:Bsmumu}
\end{align}
where $f_{B_{s}}$ is the decay constant and $\tau_{B_{s}}$ is the
mean lifetime.
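As an illustration of Eq.~\eqref{eq:Bsmumu}, the sketch below evaluates the branching ratio in the SM-like limit $C_{Q_{1,2}}^{(\prime)}=C_{10}^{\prime}=0$. All numerical inputs (decay constant, lifetime, CKM product, $C_{10}^{\mathrm{SM}}\simeq-4.3$, $\alpha\simeq1/133$) are illustrative values rather than the running inputs used by \textsf{SuperIso}:

```python
import math

# Illustrative inputs in GeV units; lifetime converted from ps via hbar
G_F, alpha = 1.1663787e-5, 1.0 / 133.0
f_Bs, m_Bs, m_mu = 0.2303, 5.3669, 0.10566
tau_Bs = 1.516e-12 / 6.582119e-25        # ps -> GeV^-1
VtbVts = 0.0411
C10_SM = -4.31                           # assumed SM Wilson coefficient

def br_bs_mumu(CQ1=0.0, CQ2=0.0, C10=C10_SM, CQ1p=0.0, CQ2p=0.0, C10p=0.0,
               m_b=4.18, m_s=0.093):
    """BR(Bs -> mu mu) from the scalar, pseudoscalar and axial WCs."""
    beta = math.sqrt(1.0 - 4.0 * m_mu**2 / m_Bs**2)
    pref = (G_F**2 * alpha**2 / (64.0 * math.pi**3) * f_Bs**2 * tau_Bs
            * m_Bs**3 * VtbVts**2 * beta)
    scal = m_Bs * (CQ1 - CQ1p) / (m_b + m_s)
    pseu = m_Bs * (CQ2 - CQ2p) / (m_b + m_s) - 2.0 * (C10 - C10p) * m_mu / m_Bs
    return pref * (beta**2 * abs(scal)**2 + abs(pseu)**2)
```

With these placeholder inputs the SM-like limit lands in the few$\times10^{-9}$ ballpark of the SM prediction; note also the helicity suppression by $m_{\mu}/m_{B_{s}}$ of the $C_{10}$ term, which the scalar WCs $C_{Q_{1,2}}$ do not suffer.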
With respect to the inclusive
$\overline{B}\rightarrow X_{s}\gamma$ decay, we use the full expression given in the works of \cite{Czarnecki:1998tn,Misiak:2006zs,Misiak:2006ab,Czakon:2015exa,Misiak:2017bgg,Misiak:2020vlo} and implemented in \textsf{SuperIso}. The WCs $C_{7}$ and $C_{7}'$ are constrained by this decay, given at the quark level
by $b\rightarrow s\gamma$, which at LO is
\begin{equation}
\Gamma(b\rightarrow s\gamma)=\frac{G_{F}^{2}}{32\pi^{4}}\big|V_{tb}V_{ts}^{*}\big|^{2}\alpha_{{\rm em}}\,m_{b}^{5}\,\left(\vert C_{7{\rm eff}}\,(\mu_{b})\vert^{2}+\vert C_{7{\rm eff}}^{\prime}(\mu_{b})\vert^{2}\right).
\end{equation}
We also take into account the rare decays $B_{s}\rightarrow \tau^{+}\tau^{-}$ and $B^{+}\rightarrow K^{+}\tau^{+}\tau^{-}$ as well as the LFV processes $B_{s}\rightarrow\mu^{\pm}\tau^{\mp}$, $B^{+}\rightarrow K^{+}\mu^{\pm}\tau^{\mp}$ and $b\rightarrow s\nu\overline{\nu}$, with theoretical expressions given in \cite{Crivellin:2019dun}. A list of the included FCNC observables\footnote{New measurements of $\mathrm{BR}(B_s\to\mu^+\mu^-)$ have been performed recently by LHCb~\cite{LHCb:2021trn,LHCb:2021awg}, as well as a combination with previous results~\cite{Hurth:2021nsi}, giving a combined measured value of $(2.85^{+0.34}_{-0.31})\times10^{-9}$. Nevertheless, we do not expect significant deviations from our results with this new measurement.} can be found in Table \ref{tab:neutral-observables}.
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Observable & Experiment \tabularnewline
\hline
\hline
$R(K^{*})[0.045,\,1.1]\,\mathrm{GeV^{2}}$ & $0.66\pm0.09\pm0.03$ \cite{LHCb:2017avl} \tabularnewline
\hline
$R(K^{*})[1.1,\,6.0]\,\mathrm{GeV^{2}}$ & $0.69\pm0.09\pm0.05$ \cite{LHCb:2017avl} \tabularnewline
\hline
$R(K)[1.1,\,6.0]\,\mathrm{GeV^{2}}$ & $0.846\pm0.042\pm0.013$ \cite{LHCb:2021trn} \tabularnewline
\hline
$\mathrm{BR}(B_{s}\rightarrow\mu^{+}\mu^{-})\times10^{9}$ & $2.69^{+0.37}_{-0.35}$ \cite{LHCb-CONF-2020-002} \tabularnewline
\hline
$\mathrm{BR}(B\rightarrow X_{s}\gamma)\times10^{4}$ & $3.32\pm0.15$ \cite{Amhis:2019ckw} \tabularnewline
\hline
$\mathrm{BR}(B_{s}\rightarrow\tau^{+}\tau^{-})\times10^{3}$ & $<6.8$ at 95\% C.L. \cite{Zyla:2020zbs}\tabularnewline
\hline
$\mathrm{BR}(B^{+}\rightarrow K^{+}\tau^{+}\tau^{-})\times10^{3}$ & $<2.25$ at 90\% C.L. \cite{Zyla:2020zbs}\tabularnewline
\hline
$\mathrm{BR}(B_{s}\rightarrow\mu^{\pm}\tau^{\mp})\times10^{5}$ & $<4.2$ at 95\% C.L. \cite{Zyla:2020zbs}\tabularnewline
\hline
$\mathrm{BR}(B^{+}\rightarrow K^{+}\mu^{\pm}\tau^{\mp})\times10^{5}$ & $<4.8$ at 90\% C.L. \cite{Zyla:2020zbs}\tabularnewline
\hline
$\mathcal{R}_{K}^{\nu\overline{\nu}}$ & $<3.9$ at 90\% C.L. \cite{Grygier:2017tzo}\tabularnewline
\hline
$\mathcal{R}_{K^{*}}^{\nu\overline{\nu}}$ & $<2.7$ at 90\% C.L. \cite{Grygier:2017tzo}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\emph{Experimental measurements of FCNC observables and bounds for rare $B$ decays considered in our study. The $\mathcal{R}_{K^{(*)}}^{\nu\overline{\nu}}$ parameters are related to $b\rightarrow s\nu\overline{\nu}$ transitions as introduced in Eq.~(4.6) in \cite{Crivellin:2019dun}. We also include all the angular distributions and branching fractions of $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ decays, as well as the branching fractions of both $B_{s}\rightarrow \phi\mu^{+}\mu^{-}$ and $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$, with measurements provided by the \textsf{HEPLikeData} repository \cite{HEPLikeData}. \label{tab:neutral-observables}}}
\end{table}
\subsection{FCCCs observables}
\label{sec:FCCCObservables}
The most relevant FCCC observables are the ratios of semileptonic $B$ meson decays to $\tau$ and light leptons, that is
\begin{equation}
R(D^{(*)})=\frac{\Gamma(\overline{B}\rightarrow D^{(*)}\tau\overline{\nu})}{\Gamma(\overline{B}\rightarrow D^{(*)}l\overline{\nu})} ,
\end{equation}
where $D^{(*)}$ are charmed mesons and $l$ is either an electron $(e)$ or a muon $(\mu)$. As of the time of writing, the world average for the experimental measurement of the ratios $R(D^{(*)})$ sits at a 3.1$\sigma$ deviation from the SM prediction~\cite{Amhis:2019ckw}.
The GTHDM contributions to $R(D)$ and $R(D^{*})$ from the effective Hamiltonian in Eq.~(\ref{eq:Heffective}) can be written as
\begin{equation}
R(D)=\frac{1+1.5\,\mathrm{Re}(g_{S}^{\tau\tau})+1.0\sum\left|g_{S}^{\tau l}\right|^{2}}{3.34+4.8\sum\left|g_{S}^{\mu l}\right|^{2}},
\end{equation}
\begin{equation}
R(D^{*})=\frac{1+0.12\,\mathrm{Re}(g_{P}^{\tau\tau})+0.05\sum\left|g_{P}^{\tau l}\right|^{2}}{3.89+0.25\sum\left|g_{P}^{\mu l}\right|^{2}}.
\end{equation}
In addition to $R(D)$ and $R(D^{*})$, a third ratio has been measured by the Belle collaboration \cite{Belle:2018ezy}: the ratio $R_{e/\mu}=\mathrm{BR}(\overline{B}\rightarrow D e\overline{\nu})/\mathrm{BR}(\overline{B}\rightarrow D \mu\overline{\nu})$, which is considered a stringent test of LFU in $B$ decays. It can be expressed in the GTHDM as
\begin{equation}
R_{e/\mu}=\frac{1}{0.9964+0.18\,\mathrm{Re}(g_{S}^{\mu\mu})+1.46\sum\left|g_{S}^{\mu l}\right|^{2}},
\end{equation}
where we have obtained the NP leptonic contributions by integrating the heavy quark effective theory (HQET) amplitudes of the scalar type operators from \cite{Murgui:2019czp, Tanaka:2012nw}.
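The parametric formulas above are easy to explore numerically. A Python transcription follows (the coupling values passed in are placeholders; setting all couplings to zero recovers the SM ratios $1/3.34$, $1/3.89$ and $1/0.9964$):

```python
def r_d(gs_tautau, gs_taul_sq=0.0, gs_mul_sq=0.0):
    """R(D); *_sq arguments are the summed |g|^2 combinations."""
    num = 1.0 + 1.5 * gs_tautau.real + 1.0 * gs_taul_sq
    return num / (3.34 + 4.8 * gs_mul_sq)

def r_dstar(gp_tautau, gp_taul_sq=0.0, gp_mul_sq=0.0):
    """R(D*) from the pseudoscalar couplings."""
    num = 1.0 + 0.12 * gp_tautau.real + 0.05 * gp_taul_sq
    return num / (3.89 + 0.25 * gp_mul_sq)

def r_e_mu(gs_mumu, gs_mul_sq=0.0):
    """R_{e/mu}, with electronic couplings assumed SM-like."""
    return 1.0 / (0.9964 + 0.18 * gs_mumu.real + 1.46 * gs_mul_sq)
```

A positive real $g_{S}^{\tau\tau}$ raises $R(D)$ above its SM value, which is the direction preferred by the measured anomaly, while nonzero muonic couplings pull both ratios down through the denominators.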
The $B_{c}$ meson lifetime receives contributions from the SM, given by $\tau_{B_{c}}^{\mathrm{SM}}=0.52_{-0.12}^{+0.18}$ ps~\cite{Beneke:1996xe}, and from the GTHDM, whose contribution can be written as
\begin{align}
1/\tau_{B_c}^\mathrm{GTHDM} = \Gamma_{B_{c}\rightarrow\tau\bar{\nu}}^{\mathrm{GTHDM}}= & \frac{m_{B_{c}}(m_{\tau}f_{B_{c}}G_{F})^{2}\left|V_{cb}\right|^{2}}{8\pi}\left(1-\frac{m_{\tau}^{2}}{m_{B_{c}}^{2}}\right)^{2}\nonumber\\
& \times\left[\left|1+\frac{m_{B_{c}}^{2}}{m_{\tau}(m_{b}+m_{c})}g_{P}^{\tau \tau}\right|^{2}+\left|\frac{m_{B_{c}}^{2}}{m_{\tau}(m_{b}+m_{c})}g_{P}^{\tau l}\right|^{2}-1\right],
\end{align}
where the $-1$ term subtracts the SM contribution. Using the lifetime of the $B_c$ meson as the constraining observable, we can compare directly with the current experimental measurement $\tau_{B_{c}}=0.510\pm0.009$ ps \cite{Zyla:2020zbs}, instead of using the theoretical limits on the branching ratio $\mathrm{BR}(B_{c}\rightarrow\tau\bar{\nu})$, which are reported to be either 10$\%$~\cite{Akeroyd:2017mhr} or 30$\%$~\cite{Alonso:2016oyd}.
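Combining the SM width with the GTHDM width above gives the total lifetime to compare with experiment. A hedged Python sketch (all masses, $f_{B_{c}}$ and $|V_{cb}|$ are illustrative inputs, not fit values):

```python
import math

G_F = 1.1663787e-5                           # GeV^-2
m_Bc, f_Bc, m_tau = 6.2745, 0.434, 1.77686   # GeV, illustrative
m_b, m_c, V_cb = 4.18, 1.27, 0.0410
PS_TO_GEV = 1e-12 / 6.582119e-25             # ps -> GeV^-1

def gamma_np(gp_tautau, gp_taul=0.0):
    """GTHDM contribution to the B_c width (SM part already subtracted)."""
    pref = (m_Bc * (m_tau * f_Bc * G_F)**2 * V_cb**2 / (8.0 * math.pi)
            * (1.0 - m_tau**2 / m_Bc**2)**2)
    r = m_Bc**2 / (m_tau * (m_b + m_c))      # chirality-flip enhancement
    return pref * (abs(1.0 + r * gp_tautau)**2 + abs(r * gp_taul)**2 - 1.0)

def tau_bc_total(gp_tautau, gp_taul=0.0, tau_sm_ps=0.52):
    """Total B_c lifetime in ps after adding the NP partial width."""
    gamma_tot = 1.0 / (tau_sm_ps * PS_TO_GEV) + gamma_np(gp_tautau, gp_taul)
    return 1.0 / gamma_tot / PS_TO_GEV
```

The bracket vanishes identically for $g_{P}^{\tau\tau}=g_{P}^{\tau l}=0$, so the SM lifetime is recovered in that limit, while the enhancement factor $m_{B_{c}}^{2}/(m_{\tau}(m_{b}+m_{c}))\approx4$ shows why the lifetime constrains the pseudoscalar couplings so strongly.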
Another related measurement, of $B_{c}^{+}\to J/\psi\tau^{+}{\nu}_{\tau}$,
has been reported by LHCb \cite{Aaij:2017tyk} and also hints at a
disagreement with the SM. However, with $\mathcal{R}(J/\psi)=0.71\pm0.17\pm0.18$, the errors are at present too large
to reach a definitive conclusion. In addition, it has been claimed that the hadronic uncertainties are not at the same level as for the observables related to $\overline{B}\rightarrow D^{*}$ transitions \cite{Murgui:2019czp}, so we do not include it in our fit.
In contrast,
a measurement of the longitudinal polarisation fraction of the $D^{*}$ meson,
defined as
\begin{equation}
F_{L}(D^{*})=\frac{\Gamma_{\lambda_{D^{*}}=0}\left(\overline{B}\rightarrow D^{*}\tau\overline{\nu}\right)}{\Gamma\left(\overline{B}\rightarrow D^{*}\tau\overline{\nu}\right)},
\end{equation}
has recently been announced by the Belle collaboration~\cite{Abdesselam:2019wbt},
\begin{equation}
F_{L}(D^{*})=0.6\pm0.08\,(\textrm{stat})\pm0.04\,(\textrm{syst)},
\end{equation}
deviating from the SM prediction $F_{L}^{\mathrm{SM}}(D^{*})=0.457\pm0.010$~\cite{Bhattacharya:2018kig} by $1.6\sigma$. The $B\to D^{*}\tau\overline{\nu}$ differential decay width into
longitudinally-polarised ($\lambda_{D^{*}}=0$) $D^{*}$ mesons is
given (keeping only the NP from scalar contributions) by
\begin{eqnarray}
\frac{d\Gamma_{\lambda_{D^{*}}=0}^{D^{*}}}{dq^{2}} & = & \frac{G_{F}^{2}|V_{cb}|^{2}}{192\pi^{3}m_{B}^{3}}\,q^{2}\sqrt{\lambda_{D^{*}}(q^{2})}\left(1-\frac{m_{\tau}^{2}}{q^{2}}\right)^{2}\left\{\left[\left(1+\frac{m_{\tau}^{2}}{2q^{2}}\right)H_{V,0}^{2}+\frac{3}{2}\,\frac{m_{\tau}^{2}}{q^{2}}\,H_{V,t}^{2}\right]\right.\nonumber \\
& & \left.\hskip1.5cm\mbox{}+\frac{3}{2}\,|C_{R}^{cb}-C_{L}^{cb}|^{2}H_{S}^{2}+3\,\mathrm{Re}(C_{R}^{cb*}-C_{L}^{cb*})\,\frac{m_{\tau}}{\sqrt{q^{2}}}\,H_{S}H_{V,t}\right\}\,,
\end{eqnarray}
where the helicity amplitudes are defined in appendix B of \cite{Murgui:2019czp}. In addition, we also include the normalised distributions $d\Gamma(B\to D\tau\overline{\nu})/(\Gamma dq^{2})$
and $d\Gamma(B\to D^{\star}\tau\overline{\nu})/(\Gamma dq^{2})$, as measured by the BaBar collaboration \cite{Lees:2013uzd}.
Lastly, the mass difference $\Delta M_{s}$ of $B_{s}-\overline{B}_{s}$ oscillations is included in our study and (for $m_A=m_H$) is given by \cite{Herrero-Garcia:2019mcy}
\begin{align}
\Delta M_{s}^{\mathrm{GTHDM}}= & -\frac{f_{B_{s}}^{2}M_{B_{s}}^{3}}{4(m_{b}+m_{s})^{2}}\biggl[c_{\beta\alpha}^{2}\,\biggl(\frac{1}{m_{h}^{2}}-\frac{1}{m_{H}^{2}}\biggr)+\frac{2}{m_{H}^{2}}\biggr]\nonumber \\
& \times\biggl\{(U_{22}\tilde{\mathcal{B}}_{B_{s}}^{(2)}\,b_{2}+U_{32}\tilde{\mathcal{B}}_{B_{s}}^{(3)}\,b_{3})\,\biggl[(\xi_{bs}^{d*})^{2}+(\xi_{sb}^{d})^{2}\biggr]+2\,(U_{44}\tilde{\mathcal{B}}_{B_{s}}^{(4)}b_{4})\,\xi_{bs}^{d*}\xi_{sb}^{d}\biggr\}\,,
\end{align}
with $\vec{b}=\{8/3,\;-5/3,\;1/3,\;2,\;2/3\}$, bag factors $\tilde{\mathcal{B}}_{B_{s}}^{(2)}=0.806$, $\tilde{\mathcal{B}}_{B_{s}}^{(3)}=1.1$ and $\tilde{\mathcal{B}}_{B_{s}}^{(4)}=1.022$ \cite{Bazavov:2016nty,Straub:2018kue}, and the $U$ running matrix defined in \cite{Herrero-Garcia:2019mcy}. A summary of all FCCC observables included in this study is provided in Table \ref{tab:charged-observables}.
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Observable & Experiment\tabularnewline
\hline
\hline
$R(D)$ & $0.340\pm0.027\pm0.013$ \cite{Amhis:2019ckw}\tabularnewline
\hline
$R(D^{*})$ & $0.295\pm0.011\pm0.008$ \cite{Amhis:2019ckw}\tabularnewline
\hline
$R_{e/\mu}$ & $1.01\pm0.01\pm0.03$ \cite{Belle:2018ezy}\tabularnewline
\hline
$\tau_{B_{c}}$(ps) & $0.510\pm0.009$ \cite{Zyla:2020zbs}\tabularnewline
\hline
$F_{L}(D^{*})$ & $0.6\pm0.08\pm0.04$ \cite{Abdesselam:2019wbt}\tabularnewline
\hline
$\Delta M_{s}(\mathrm{ps}^{-1})$ & 17.741 $\pm$ 0.020 \cite{Amhis:2019ckw}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\emph{Observables related to the charged anomalies considered in our study. We also include the normalised distributions $d\Gamma(B\to D\tau\overline{\nu})/(\Gamma dq^{2})$
and $d\Gamma(B\to D^{\star}\tau\overline{\nu})/(\Gamma dq^{2})$ as measured by the BaBar collaboration \cite{Lees:2013uzd}. \label{tab:charged-observables}}}
\end{table}
\subsection{Leptonic decays of mesons}
Beyond those described in Sections~\ref{sec:FCNCObservables} and \ref{sec:FCCCObservables}, there are additional leptonic decays included in this study. The LO branching ratio for the process $M\to{l}\nu$ in the GTHDM is computed as \cite{HernandezSanchez:2012eg,Jung:2010ik,Iguro:2017ysu}
\begin{eqnarray}
\mathrm{BR}(M_{ij}\to l\nu)=G_{F}^{2}m_{l}^{2}f_{M}^{2}\tau_{M}|V_{ij}|^{2}\frac{m_{M}}{8\pi}\left(1-\frac{m_{l}^{2}}{m_{M}^{2}}\right)^{2}\left[|1-\Delta_{ij}^{ll}|^{2}+|\Delta_{ij}^{ll^{\prime}}|^{2}\right],
\end{eqnarray}
where $i$, $j$ are the valence quarks of the meson $M$, $f_{M}$ is
its decay constant and $\Delta_{ij}^{ll^{\prime}}$ is the NP
correction given by
\begin{eqnarray}
\Delta_{ij}^{ll^{\prime}}=\bigg(\frac{m_{M}}{m_{H^{\pm}}}\bigg)^{2}Z_{ll^{\prime}}\bigg(\frac{Y_{ij}m_{u_{i}}+X_{ij}m_{d_{j}}}{V_{ij}(m_{u_{i}}+m_{d_{j}})}\bigg),\quad\,\,\,l,l^{\prime}=2,3,
\end{eqnarray}
where the relations
\begin{equation}
X_{ij}=\frac{v}{\sqrt{2}m_{d_{j}}}V_{ik}\,\xi_{kj}^{d},\qquad Y_{ij}=-\frac{v}{\sqrt{2}m_{u_{i}}}\xi_{ki}^{u*}\,V_{kj},\qquad Z_{ij}=\frac{v}{\sqrt{2}m_{j}}\xi_{ij}^{l},
\end{equation}
depend on the Yukawa textures. The list of fully leptonic decays of mesons included in this analysis, for various mesons $M$, can be seen in Table~\ref{tab:leptonic-meson-decays}.
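The NP rescaling of the leptonic widths enters only through the bracket $\left[|1-\Delta_{ij}^{ll}|^{2}+|\Delta_{ij}^{ll^{\prime}}|^{2}\right]$. A minimal Python sketch of this factor (the $X$, $Y$, $Z$ inputs are generic placeholders rather than values derived from a specific texture):

```python
def np_factor(delta_ll, delta_llp=0.0):
    """Multiplicative NP correction to BR(M -> l nu) relative to the SM."""
    return abs(1.0 - delta_ll)**2 + abs(delta_llp)**2

def delta(m_M, m_Hpm, Z_llp, Y_ij, X_ij, V_ij, m_u, m_d):
    """Delta_ij^{ll'} as defined above; all inputs are illustrative."""
    return ((m_M / m_Hpm)**2 * Z_llp
            * (Y_ij * m_u + X_ij * m_d) / (V_ij * (m_u + m_d)))
```

Note the flat direction at $\Delta_{ij}^{ll}=2$ (with vanishing LFV part), where $|1-\Delta|^{2}=1$ reproduces the SM rate; this is the well-known sign-flipped solution that leptonic decay data alone cannot distinguish from the SM.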
\begin{table}[h]
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Observable & Experiment\tabularnewline
\hline
\hline
{\small{}$\mathrm{BR}(B_{u}\rightarrow\tau\nu)\times10^{4}$} & $1.09\pm0.24$ \cite{Barberio:2008fa}\tabularnewline
\hline
{\small{}$\frac{\mathrm{BR}(K\rightarrow\mu\nu)}{\mathrm{BR}(\pi\rightarrow\mu\nu)}$} & $0.6358\pm0.0011$ \cite{Mahmoudi:2008tp}\tabularnewline
\hline
{\small{}$\mathrm{BR}(D_{s}\rightarrow\tau\nu)\times10^{2}$} & $5.48\pm0.23$ \cite{Akeroyd:2009tn}\tabularnewline
\hline
{\small{}$\mathrm{BR}(D_{s}\rightarrow\mu\nu)\times10^{3}$} & $5.49\pm0.16$ \cite{Akeroyd:2009tn}\tabularnewline
\hline
{\small{}$\mathrm{BR}(D\rightarrow\mu\nu)\times10^{4}$} & $3.74\pm0.17$ \cite{Zyla:2020zbs}\tabularnewline
\hline
{\small{}$\mathrm{BR}(D\rightarrow\tau\nu)\times10^{3}$} & $1.20\pm0.27$ \cite{ParticleDataGroup:2020ssz}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\emph{Additional leptonic decays of mesons considered in this work. \label{tab:leptonic-meson-decays}}}
\end{table}
\subsection{Leptonic observables} \label{sec:leptonic_observables}
There are a number of leptonic processes that are forbidden or suppressed in the SM but can occur in the GTHDM. These include modifications to the form factors for $\ell\ell^\prime\gamma$, $\ell\ell^\prime Z$ and other interactions, which lead to contributions to the anomalous magnetic moment of the muon, $(g-2)_{\mu}$, and LFV decays such as $\tau\rightarrow\mu\gamma$, $\tau\to3\mu$ and $h\to\tau\mu$. In the SM, the contributions to these LFV observables are suppressed by the GIM mechanism, giving a very low experimental background, but in the GTHDM LFV is allowed at one- and two-loop level through the couplings $\xi^l_{ij}$ in Eqs.\ (\ref{eq:Gammafhba}-\ref{eq:GammafAba},\ref{eq:GammafCba}).\footnote{Note that in this study we will focus solely on the decays involving $\tau$ and $\mu$ leptons due to our choice of including only second and third generations in the $\xi^l_{ij}$ matrix from Eq.~(\ref{eq:Textures}).}\
A second Higgs doublet has been examined as a way to explain the muon $g-2$ anomaly. In the Type-X \cite{Wang:2014sda,Abe:2015oca,Chun:2015hsa,Chun:2015xfx,Chun:2016hzs,Wang:2018hnw,Chun:2019oix,Chun:2019sjo,Keung:2021rps,Ferreira:2021gke,Han:2021gfu,Eung:2021bef,Jueid:2021avn,Dey:2021pyn} and Flavour-Aligned \cite{Ilisie:2015tra,Han:2015yys,Cherchiglia:2016eui,Cherchiglia:2017uwv,Li:2020dbg,Athron:2021iuf} versions of the THDM, the contributions from two-loop diagrams are dominant in most of the parameter space, thanks to mechanisms also available in the GTHDM. Additionally, with LFV, the one-loop diagrams can receive a chirality-flip enhancement from including the tau lepton in the diagram loop, as was investigated in \cite{Omura:2015nja,Crivellin:2015hha,Iguro:2019sly,Jana:2020pxx,Hou:2021sfl,Hou:2021qmf,Atkinson:2021eox,Hou:2021wjj}; those studies, however, only examined the muon $g-2$ contributions at the one-loop level.
Due to the similarity of the diagrams between $\ell\rightarrow\ell^\prime\gamma$ and muon $g-2$ (which is effectively $\mu\rightarrow\mu\gamma$, see figure\ \ref{fig:ltolpgammaOneLoop}), these two observables share nomenclature and contributions. For both muon $g-2$ and $\tau\rightarrow\mu\gamma$ we can break the contributions into the same three groups: one-loop, $A^{(1)}_{ij L,R}$; two-loop fermionic, $A^{(2,f)}_{ij L,R}$; and two-loop bosonic, $A^{(2,b)}_{ij L,R}$, contributions, so that the observables can be written as
\begin{align}
\label{eqn:GTHDMgm2}
\Delta a^{\mathrm{GTHDM}}_\mu &= m_\mu^2 (A^{(1)}_{\mu\mu L} + A^{(1)}_{\mu\mu R} + A^{(2,f)}_{\mu\mu} + A^{(2,b)}_{\mu\mu}), \\
\label{eqn:GTHDMtaumugamma}
\frac{{\rm BR}(\tau\rightarrow\mu\gamma)}{{\rm BR}(\tau\rightarrow\mu\bar{\nu}_{\mu}\nu_{\tau})} &= \frac{48\pi^{3}\alpha_{\rm{EM}}\left(|A_{\tau\mu L}|^{2}+|A_{\tau\mu R}|^{2}\right)}{G_{F}^{2}},
\end{align}
with $A_{\tau\mu L,R} = A^{(1)}_{\tau\mu L,R} + A^{(2,f)}_{\tau\mu L,R} + A^{(2,b)}_{\tau\mu L,R}$, where $\alpha_{\rm{EM}}$ is the fine-structure constant. All form factors $A^{(l)}_{ij L,R}$ have been appropriately renormalised by combining them with the relevant counterterms, and are all calculated using masses and couplings that have been extracted from data at tree level. Additionally, for the contributions to muon $g-2$ we must subtract off the contribution of the SM Higgs boson to obtain a purely BSM result.
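To get a feel for the scale of form factor these measurements probe, Eq.~(\ref{eqn:GTHDMtaumugamma}) can be inverted against the experimental upper bound on $\mathrm{BR}(\tau\to\mu\gamma)$. A short Python sketch with illustrative, PDG-like inputs for $G_F$, $\alpha_{\rm EM}$ and $\mathrm{BR}(\tau\to\mu\bar\nu\nu)$ (the numbers below are indicative, not taken from this work's fit):

```python
import math

G_F = 1.1664e-5              # Fermi constant [GeV^-2]
alpha_em = 1 / 137.036       # fine-structure constant
br_tau_mu_nunu = 0.1739      # BR(tau -> mu nu nu), PDG-like
br_limit = 4.4e-8            # experimental bound on BR(tau -> mu gamma)

# Invert the expression for BR(tau -> mu gamma) to bound |A_L|^2 + |A_R|^2
A2_max = br_limit * G_F**2 / (br_tau_mu_nunu * 48 * math.pi**3 * alpha_em)
A_max = math.sqrt(A2_max)    # in GeV^-2; comes out at ~2e-9
```

The combined form factors must therefore satisfy $\sqrt{|A_{\tau\mu L}|^{2}+|A_{\tau\mu R}|^{2}}\lesssim 2\times10^{-9}\,\mathrm{GeV}^{-2}$, which is roughly the size the loop functions of appendix \ref{sec:Loop_Functions} must stay below.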
\begin{figure}[tb]
\centering
\begin{fmffile}{diagramFFS}
\begin{fmfgraph*}(130,130)
\fmftop{o2}
\fmfbottom{i1,o1}
\fmf{photon,label=$\gamma$}{o2,v3}
\fmf{fermion,label=$l_a$}{i1,v4}
\fmf{fermion,label=$l_i$,label.side=left}{v4,v3}
\fmf{fermion}{v3,v2}
\fmf{fermion,label=$l_b$}{v2,o1}
\fmf{dashes,label=$\phi$}{v4,v2}
\fmfforce{(0.3w,0.3h)}{v4}
\fmfforce{(0.5w,0.583h)}{v3}
\fmfforce{(0.7w,0.3h)}{v2}
\end{fmfgraph*}
\end{fmffile}
\begin{fmffile}{diagramSSF}
\begin{fmfgraph*}(130,130)
\fmftop{o2}
\fmfbottom{i1,o1}
\fmf{photon,label=$\gamma$}{o2,v3}
\fmf{fermion,label=$l_a$}{i1,v4}
\fmf{fermion,label=$\nu_i$}{v4,v2}
\fmf{fermion,label=$l_b$}{v2,o1}
\fmf{dashes,label=$H^\pm$,label.side=left}{v4,v3}
\fmf{dashes}{v3,v2}
\fmfforce{(0.3w,0.3h)}{v4}
\fmfforce{(0.5w,0.583h)}{v3}
\fmfforce{(0.7w,0.3h)}{v2}
\end{fmfgraph*}
\end{fmffile}
\caption{\emph{One-loop diagrams contributing to $\ell\rightarrow\ell^\prime\gamma$ with a neutral scalar diagram on the left and a charged scalar diagram on the right. The indices $a,b,i$ correspond to any of the lepton flavours $e,\mu,\tau$, and we have $\phi=h,H,A$. \label{fig:ltolpgammaOneLoop}}}
\end{figure}
The full one-loop contribution to muon $g-2$ and $\ell\rightarrow\ell^\prime\gamma$ is found by summing over the neutral scalars $\phi$ and the lepton generations:
\begin{equation} \label{eqn:A1loop}
A^{(1)}_{ab L,R} = \sum_{i=e,\mu,\tau} \bigg(\sum_{\phi=h,H,A} A^{(FFS)}_{ab L,R}(\phi,i) - A^{(SSF)}_{ab L,R}(H^\pm,i)\bigg),
\end{equation}
where the functions $A^{(FFS)}_{ab L,R}(\phi,i)$ and $A^{(SSF)}_{ab L,R}(H^\pm,i)$ involve the neutral scalars ($h$,$H$,$A$) and the charged scalar $H^\pm$ respectively. They are defined in Eqs.\ (\ref{eqn:A1loopFFS}-\ref{eqn:A1loopSSF}) in appendix \ref{sec:Loop_Functions}, and shown in figure\ \ref{fig:ltolpgammaOneLoop}.
At one loop, the contribution from the SM Higgs boson must likewise be subtracted, so that only a truly-BSM contribution to muon $g-2$ remains.
\begin{figure}[tb]
\includegraphics[scale=0.55]{figures/THDMTwoLoopFermion}
\caption{\emph{Two-loop fermionic Barr-Zee diagrams contributing to muon $g-2$ and $l\rightarrow l' \gamma$. The indices $a,b$ correspond to any of the lepton flavours $e,\mu,\tau$, and $\phi=h,H,A$. The internal photon $\gamma$ may be replaced by a $Z$ boson. \label{fig:ltolpgammaBZFermionic}}}
\end{figure}
\begin{figure}[tb]
\includegraphics[scale=0.55]{figures/THDMTwoLoopBoson}
\caption{\emph{Two-loop bosonic Barr-Zee diagrams contributing to muon $g-2$ and $l\rightarrow l' \gamma$. The indices $a,b$ correspond to any of the lepton flavours $e,\mu,\tau$, and we have $\phi=h,H,A$. In the left panel, the internal photon $\gamma$ may be replaced by a $Z$ boson, and the internal $H^\pm$ with a $W^\pm$ boson. \label{fig:ltolpgammaBZBosonic}}}
\end{figure}
At the two-loop level we consider the Barr-Zee diagrams, shown in figures~\ref{fig:ltolpgammaBZFermionic} and \ref{fig:ltolpgammaBZBosonic}.
Just as for the one-loop contributions before, we can subdivide each of these contributions into diagrams involving charged leptons ($l^-_i$) paired with neutral bosons ($h$,$H$,$A$,$Z$,$\gamma$) and neutral leptons ($\nu_i$) paired with charged bosons ($H^\pm$,$W^\pm$).\footnote{We do not consider two-loop bosonic diagrams that are not Barr-Zee diagrams, since their maximum contributions to muon $g-2$ are relatively small \cite{Cherchiglia:2017uwv}, whereas Barr-Zee contributions have been proved to be dominant for some regions of the parameter space \cite{Omura:2015xcg}. Additionally, two-loop diagrams involving neutral bosons where both legs are Higgs bosons are suppressed by a factor $m_\mu^4$, while diagrams with both legs being either $\gamma$ or $Z$ are SM contributions, so we do not consider either, only those with both a $\phi$ and a $\gamma$ or $Z$ boson leg. Similarly for diagrams involving charged legs of $H^\pm$,$W^\pm$, we only consider a $H^\pm$ and $W^\pm$ boson paired together, as a pair of $H^\pm$ legs lead to diagrams with suppressed contributions \cite{Ilisie:2015tra}.}
The two-loop bosonic and fermionic diagrams involve an internal loop made of either bosons or fermions respectively. The total fermionic two-loop contribution to muon $g-2$ is given by \cite{Cherchiglia:2016eui}
\begin{equation} \label{eqn:gm2Contribution2loopfermionic}
A^{(2,f)}_{\mu\mu} = \sum_{f=u,d,l} \bigg(A^{(FC)}_{\mu\mu}(H^\pm,f) + \sum_{\phi=h,H,A} A^{(FN)}_{\mu\mu}(\phi,f) - A^{(FN)}_{\mu\mu}(h_{SM},f)\bigg),
\end{equation}
where the form factors are given in Eqs.~(\ref{eqn:gm2ContributionFN}-\ref{eqn:gm2ContributionFC}) in appendix \ref{sec:Loop_Functions}. Note that only contributions from the heaviest generations of the fermions are considered, via $\Gamma_{\phi33}^{f}~(f=u,~d,~e)$. Similarly, the total bosonic two-loop contributions to muon $g-2$ are
\begin{equation} \label{eqn:gm2Contribution2loopbosonic}
A^{(2,b)}_{\mu\mu} = \sum_{\phi=h,H} \bigg(A^{(BHN)}_{\mu\mu}(\phi) + A^{(BWN)}_{\mu\mu}(\phi) + A^{(BHC)}_{\mu\mu}(\phi) + A^{(BWC)}_{\mu\mu}(\phi) \bigg) - A^{(BWN)}_{\mu\mu}(h_{SM}),
\end{equation}
where again the bosonic two-loop functions are given in Eqs.~(\ref{eqn:gm2ContributionBHN}-\ref{eqn:gm2ContributionBWC}) in the same appendix. Note that these contributions do not include two-loop diagrams with an internal $Z$ boson leg, as in \cite{Ilisie:2015tra}.
In the case of the $\tau\to\mu\gamma$ decay, the contributions from the fermionic and bosonic Barr-Zee two-loop diagrams, $A^{(2,f)}_{ab L,R}$ and $A^{(2,b)}_{ab L,R}$ respectively, have the same form for each Higgs boson and each fermion or boson in the loop, and can be found in Eqs.~(\ref{eqn:ltolpgammafermionic},\ref{eqn:ltolpgammabosonic}) in appendix \ref{sec:Loop_Functions}.
The contributions to the $\tau\rightarrow3\mu$ decay can be divided into three separate groups: the tree-level, dipole and contact contributions. The tree-level contributions are computed in \cite{Crivellin2013}. We have found that the dipole contributions, which involve the photon-penguin diagrams of the form of the $\tau\rightarrow\mu\gamma$ decay, are quite sizable compared to those at tree level and cannot be ignored. Namely, they are given by \cite{Hou:2020itz}:
\begin{align} \label{eqn:Tauto3MuDipole}
\mathrm{BR}(\tau\to3\mu)^{\textrm{(dipole)}} =& \frac{\alpha_{\rm{EM}}}{3\pi} \bigg(\log{\bigg(\frac{m_\tau^2}{m_\mu^2}\bigg)}-\frac{11}{4}\bigg) \frac{\mathrm{BR}(\tau\to\mu\gamma)}{{\rm BR}(\tau\rightarrow\mu\bar{\nu}_{\mu}\nu_{\tau})}.
\end{align}
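The numerical prefactor in the dipole relation above is fixed entirely by $\alpha_{\rm EM}$ and the lepton masses; a quick Python check with PDG-like masses (purely illustrative):

```python
import math

alpha_em = 1 / 137.036            # fine-structure constant
m_tau, m_mu = 1.77686, 0.10566    # lepton masses in GeV

# Prefactor multiplying BR(tau -> mu gamma)/BR(tau -> mu nu nu)
prefactor = alpha_em / (3 * math.pi) * (math.log(m_tau**2 / m_mu**2) - 11 / 4)
# prefactor ~ 2.2e-3: the dipole piece of tau -> 3mu is a few-per-mille
# rescaling of the radiative mode, hence sizable next to the tree-level terms
```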
Similarly, the contact terms involving effective four-fermion interactions \cite{Kuno:1999jp} could in principle be comparable to the dipole contributions. The contact contributions are given by
\begin{align} \label{eqn:Tauto3MuContact}
\mathrm{BR}(\tau\to3\mu)^{\textrm{(contact)}} =&
\frac{|g_2|^2}{8} + 2|g_4|^2 + \frac{16\pi\alpha_{\rm{EM}}}{\sqrt{2}G_F} \textrm{Re}\bigg(g_4^*\bigg (A^{(1)}_{\tau\mu L,R} + A^{(2,f)}_{\tau\mu L,R} + A^{(2,b)}_{\tau\mu L,R}\bigg)\bigg),
\end{align}
where the coefficients $g_2$ and $g_4$ are given in appendix \ref{sec:Loop_Functions}.
Another observable that we include is the lepton-flavour-violating $h\to \tau\mu$ decay. At tree level it is given by\footnote{We computed the contributions coming from one-loop diagrams with two charged Higgses in the loop and found them to be seven orders of magnitude suppressed compared to the tree level. Diagrams involving a pair of heavy neutral Higgses are possible as well but even more suppressed. We therefore take into account only the tree-level contribution, which relies on being close to, but not exactly at, the alignment limit; exactly at the alignment limit this tree-level contribution vanishes.}
\begin{align}
\mathrm{BR}(h\to\tau\mu)=\frac{3c_{\beta\alpha}^{2}m_{h}}{8\pi \Gamma_{h}}\Big(|\xi_{\mu\tau}^{l}|^{2}+|\xi_{\tau\mu}^{l}|^{2}\Big)\left(1-\frac{m_{\tau}^{2}}{m_{h}^{2}}\right)^{2}\,,
\end{align}
with the total decay width of $h$ given by $\Gamma_{h}=3.2\,{\rm MeV}$ \cite{ParticleDataGroup:2020ssz}.
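Because the tree-level rate above scales as $c_{\beta\alpha}^{2}$, the experimental bound on $\mathrm{BR}(h\to\tau\mu)$ translates directly into a limit on the departure from alignment. A Python sketch, with illustrative $\mathcal{O}(1)$ LFV couplings (the value $\xi=0.75$ is a hypothetical choice for illustration, not this work's fit output):

```python
import math

m_h = 125.25        # SM-like Higgs mass [GeV], PDG-like
m_tau = 1.77686     # tau mass [GeV]
Gamma_h = 3.2e-3    # total h width [GeV], as quoted in the text
br_max = 1.5e-3     # experimental upper bound on BR(h -> tau mu)

def br_h_tau_mu(c_ba, xi_mt, xi_tm):
    """Tree-level BR(h -> tau mu) from the expression above."""
    return (3 * c_ba**2 * m_h / (8 * math.pi * Gamma_h)
            * (abs(xi_mt)**2 + abs(xi_tm)**2)
            * (1 - m_tau**2 / m_h**2)**2)

# With O(1) LFV couplings the bound forces |c_{beta alpha}| well below 1,
# i.e. the model must sit close to (but not exactly at) the alignment limit
xi = 0.75
c_max = math.sqrt(br_max / br_h_tau_mu(1.0, xi, xi))   # of order 5e-4
```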
Lastly, besides $g-2$ and LFV observables, experiments have also provided constraints on the LFU ratio in $\tau$ decays. This ratio is commonly known as $(g_\mu/g_e)^2$ and is given by~\cite{HernandezSanchez:2012eg,Jung:2010ik}
\begin{eqnarray}
\left(\frac{g_{\mu}}{g_{e}}\right)^{2}=\frac{\mathrm{BR}(\tau\to\mu\bar{\nu}\nu)}{\mathrm{BR}(\tau\to e\bar{\nu}\nu)}\frac{f(m_{e}^{2}/m_{\tau}^{2})}{f(m_{\mu}^{2}/m_{\tau}^{2})}\simeq1+\sum_{i,j=\mu,\tau}\left(0.25R_{ij}^{2}-0.11R_{ii}\right),
\end{eqnarray}
where $f(x)=1-8x+8x^{3}-x^{4}-12x^{2}\,\log x$
and $R_{ij}$ is the BSM scalar contribution, given in the GTHDM as
\begin{eqnarray}
R_{ij}=\frac{\upsilon^{2}}{2m_{H^{\pm}}^{2}}\,\left(\xi_{\tau i}^{l}\,\xi_{j\mu}^{l}\right).\label{R scalar}
\end{eqnarray}
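The phase-space factor $f(x)$ and the SM limit of the ratio are easy to check numerically; a small Python sketch with PDG-like lepton masses (illustrative only):

```python
import math

m_e, m_mu, m_tau = 0.000511, 0.10566, 1.77686   # lepton masses [GeV]

def f(x):
    """Phase-space factor entering the LFU ratio."""
    return 1 - 8 * x + 8 * x**3 - x**4 - 12 * x**2 * math.log(x)

x_e = (m_e / m_tau)**2
x_mu = (m_mu / m_tau)**2
# f(x_e) ~ 1 while f(x_mu) ~ 0.9726; with all R_ij = 0 (no charged
# Higgs), the approximate expression reduces to (g_mu/g_e)^2 = 1
```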
All of the experimental measurements and upper bounds for leptonic observables are shown in Table~\ref{tab:bound_lepton_flavour_violating}.
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Observable & Experiment \tabularnewline
\hline
\hline
${\Delta a_\mu}$ & $(2.51\pm0.59)\times10^{-9}$ \cite{PhysRevLett.126.141801} \tabularnewline
\hline
{\small{}$\mathrm{BR}(\tau\rightarrow\mu\gamma)$} & $<4.4\times10^{-8}$ at 90\% C.L. \cite{Zyla:2020zbs} \tabularnewline
\hline
{\small{}$\mathrm{BR}(\tau\rightarrow3\mu)$} & $<2.1\times10^{-8}$ at 95\% C.L. \cite{Zyla:2020zbs} \tabularnewline
\hline
{\small{}$\mathrm{BR}(h\rightarrow\tau\mu)$} & $<1.5\times10^{-3}$ at 95\% C.L. \cite{CMS:2021rsq} \tabularnewline
\hline
{\small{}$(g_\mu/g_e)$} & $1.0018\pm0.0014$ \cite{Bifani:2018zmi}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\emph{World average measurement of $\Delta a_\mu$ and experimental bounds for the LFV decay and LFU observables considered in our analysis.\label{tab:bound_lepton_flavour_violating}}}
\end{table}
\section{Results}
\label{sec:Results}
Our main goal is to study the impact of these observables on the GTHDM parameter space and, in particular, infer the goodness-of-fit of the model in light of these anomalies. Given the plethora of observables defined in the previous section and the large multidimensional parameter space, it is very important to combine them in a statistically rigorous manner in a global fit. This avoids serious shortcomings from more naive approaches like simply overlaying constraints from confidence intervals \cite{AbdusSalam:2020rdj}.
To visualize the results we will project the high dimensional parameter space onto two-dimensional planes. To this end, the central quantity of interest is the profile likelihood,
\begin{equation}
\log\mathcal{L}_{prof}\left(\theta_{1},\theta_{2}\right)=\underset{\boldsymbol{\eta}}{\max}\log\mathcal{L}\left(\theta_{1},\theta_{2},\boldsymbol{\eta}\right) ,
\end{equation}
which is, for fixed parameters of interest $\theta_{1}$ and $\theta_{2}$, the maximum value of the log-likelihood function that can be obtained when maximizing over the remaining parameters $\boldsymbol{\eta}$. All profile likelihood figures in this study are created with \textsf{pippi}~\cite{Scott:2012qh}.
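The profiling step itself is conceptually simple; a toy Python sketch with a Gaussian log-likelihood and one nuisance parameter (illustrative only — the actual scans use \textsf{Diver} rather than a grid):

```python
# Toy log-likelihood: parameters of interest (theta1, theta2),
# one nuisance parameter eta correlated with theta1
def loglike(theta1, theta2, eta):
    return -0.5 * ((theta1 - 1.0)**2 + (theta2 + 0.5)**2 + (eta - theta1)**2)

def profile_loglike(theta1, theta2, eta_grid):
    """Profile likelihood: maximise the log-likelihood over eta."""
    return max(loglike(theta1, theta2, eta) for eta in eta_grid)

eta_grid = [i * 0.01 for i in range(-300, 301)]
# At the global best fit (theta1, theta2) = (1, -0.5) the profiled
# value recovers the global maximum, here log L = 0
lp_best = profile_loglike(1.0, -0.5, eta_grid)
lp_away = profile_loglike(0.0, -0.5, eta_grid)   # penalised point
```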
As mentioned earlier, we use the \textsf{GAMBIT}\xspace framework for our study. The theoretical predictions of the model and the experimental likelihoods are either implemented natively in GAMBIT or provided by external tools interfaced with GAMBIT. In particular, the likelihoods related to $b\to s\mu^{+}\mu^{-}$ transitions are obtained from \textsf{HEPLike}, which retrieves experimental results and their correlated uncertainties from the \textsf{HEPLikeData} repository. To efficiently explore the parameter space, we employ the self-adaptive differential evolution sampler \textsf{Diver}, with a population size of \textsf{NP} = 20000 and a convergence threshold of \textsf{convthresh} = $10^{-6}$. The data we present in this work come from scans that each took between 6 and 8 hours of running time on the Australian supercomputer \textsf{GADI}, using between 1400 and 2000 cores.
\subsection{Parameter space}
\begin{figure}[h]
\includegraphics[scale=0.5]{figures/GTHDM_neutral_116_115_like2D_combined.pdf}$\qquad$\includegraphics[scale=0.5]{figures/GTHDM_neutral_122_114_like2D_combined.pdf}
\includegraphics[scale=0.5]{figures/GTHDM_neutral_121_120_like2D_combined.pdf}$\qquad$\includegraphics[scale=0.5]{figures/GTHDM_neutral_113_127_like2D_combined.pdf}
\caption{\emph{Profile likelihood ratios $\mathcal{L}/\mathcal{L}_{max}$ in different
2D planes of the parameter space, for $Y_{2,tc}^{u}\in[-2,0]$.}\label{fig:params-profiles}}
\end{figure}
We perform the parameter scans in the physical basis, i.e., where the tree-level masses of the heavy Higgses, $m_H$, $m_A$ and $m_{H^\pm}$, are taken as input. The remaining model parameters are $\tan\beta$, $m_{12}$ and the Yukawa couplings $Y_{2,ij}^{f}$ as in Eq.~(\ref{eq:Xis}). In order to avoid collider constraints, we work in the alignment limit, choosing $s_{\beta-\alpha}$ close to $1$, and we select a conservative lower limit on the masses of the heavy Higgses, $m_{H,A,H^\pm} \geq 500$ GeV.\footnote{From preliminary results we found that low Higgs masses are disfavoured by the combination of various constraints, and thus we do not attempt to include precise constraints on the masses from BSM Higgs searches (see e.g.\ \cite{Arbey:2017gmh} for a discussion of the limits on the charged Higgs mass). We leave a detailed collider study to future work.} We also fix $m_A = m_H$ in our study, motivated by the oblique parameter constraints, which favour small mass splittings, and in order to simplify the sampling of the parameter space. To choose reasonable priors for the Yukawa couplings, we take into account various constraints on them (or equivalently on $\xi_{ij}^f$) from previous studies. The tightest theoretical constraints come from perturbativity, which requires $\left|\xi_{ij}^{f}\right|\leq\sqrt{4\pi}\sim 3.5$. On the phenomenological side, the studies in \cite{Hou:2020chc,Hou:2021sfl} have found values as large as $\xi_{tt}^{u}\sim 0.1$ and $\xi_{tc}^{u}\sim 0.32$ for masses of the heavy Higgses of order 500 GeV. With respect to the $\xi_{cc}^{u}$ coupling, it has been shown in~\cite{Iguro:2017ysu} that $\mathcal{O}(1)$ values are compatible with the charged anomalies, and similar values were considered in \cite{Crivellin:2019dun} in the context of the neutral anomalies, not only for $\xi_{cc}^{u}$ but for all the Yukawa matrix elements.
As for the new leptonic couplings, the results in \cite{Omura:2015xcg,Iguro:2018qzf,Crivellin:2019dun} indicate they should be $\mathcal{O}(1)$ or less in order to fit the charged anomalies. Lastly, the extra down Yukawa couplings $\xi_{ij}^{d}$ are in general expected to be $\mathcal{O}(0.1)$ \cite{Crivellin:2017upt, Iguro:2019sly} and in particular $\xi_{sb}^{d}$ is expected to be strongly constrained by $B_{s}-\overline{B}_{s}$ mixing. With all these considerations, the chosen priors on our scan parameters are
\begin{align}
\tan\beta\in[0.2,\,50] & ,\qquad m_{12}\in[-1000,\,2700]\mathrm{ GeV},\qquad m_{H^{\pm}},\,m_{A}=m_{H}\in[500,\,4000]\mathrm{ GeV},\nonumber \\
Y_{2,tt}^{u}\in[0.0,\,2.0] & ,\qquad Y_{2,cc}^{u},\,Y_{2,tc}^{u}\in[-2.0,\,2.0],\nonumber \\
Y_{2,bb}^{d}\in[-0.1,\,0.1] & ,\qquad Y_{2,ss}^{d}\in[-0.2,\,0.2],\qquad Y_{2,sb}^{d}=Y_{2,bs}^{d}\in[-0.01,0.01],\nonumber \\
Y_{2,\mu\mu}^{l}\in[-0.5,0.5] & ,\qquad Y_{2,\tau\tau}^{l}\,,Y_{2,\mu\tau}^{l}=Y_{2,\tau\mu}^{l}\in[-1.0,1.0].\label{eq:Ranges}
\end{align}
The results of our scans show two degenerate regions of solutions according to the sign of $Y_{2,tc}^{u}$. We verified that these regions are indeed degenerate and that the final results are unaffected by this choice; hence we select $Y_{2,tc}^{u}\in[-2,0]$ for the phenomenological analysis from now on. This degeneracy results from the dependence of various observables on products like $Y_{2,tc}^{u} Y_{2,ij}^{f}$, where $Y_{2,ij}^{f}$ also flips its sign.\footnote{We first found these two regions of solutions via an auxiliary scanning method based on the quadratic approximation to $\chi^{2}$ as a function of the WCs (see appendix \ref{sec:chi2_method}).}
We show in figure \ref{fig:params-profiles} different 2D planes with the most relevant parameters obtained by the scan. The values for $Y_{2,tt}^{u}$ and $Y_{2,tc}^{u}$ are displayed in the top left panel, where we can observe that for the best fit point $|Y_{2,tt}^{u}|\approx|Y_{2,tc}^{u}|\approx\,0.6$. Then, in the top right panel we see a preferred value of $Y_{2,cc}^{u}\approx1.1$ ($-1.1$ for the positive-sign solution of $Y_{2,tc}^{u}$ from the degeneracy of solutions). This, along with the lepton Yukawa couplings $Y_{2,\mu\mu}^{l}$ and $Y_{2,\tau\mu}^{l}$ (bottom left panel), helps to enhance the contributions from the box diagrams in figures\ \ref{fig:a}-\ref{fig:b}. Additionally, the LFV coupling $Y_{2,\tau\mu}^{l}$ also contributes to the $B^{+}\rightarrow K^{+}\mu^{\pm}\tau^{\mp}$ decay, requiring $|Y_{2,\tau\mu}^{l}|\lesssim 0.4$ in order to get $\mathrm{BR}(B^{+}\rightarrow K^{+}\mu^{\pm}\tau^{\mp})\times10^{5}<4.8$. As for the $Y_{2,ij}^{d}$ couplings, we find $Y_{2,ss}^{d}=0.1\pm0.1$, $Y_{2,sb}^{d}=0.004\pm0.005$ and $Y_{2,bb}^{d}=0.017\pm0.005$, assuming Gaussian distributions. In particular, both $Y_{2,ss}^{d}$ and $Y_{2,sb}^{d}$ flip their signs for the positive solutions of $Y_{2,tc}^{u}$, whereas $Y_{2,bb}^{d}$ remains unaffected.
Finally, in the bottom right panel of figure \ref{fig:params-profiles} we observe that the preferred values for the charged Higgs mass are of order $3\,\mathrm{TeV}$ with $\tan\beta\approx1$. We find that the combined contribution of the FCNC likelihoods fits the data better in this particular mass range. Similarly, although values of $\tan\beta$ up to 50 are possible in the GTHDM when using theoretical constraints alone, we identified a clear preference for low values, close to $\tan\beta\approx 1$, once all flavour constraints are taken into account, in agreement with \cite{WahabElKaffas:2007xd,Arhrib:2009hc,Branco:2011iw}. This preference can be understood as follows. The box contributions in figures \ref{fig:a}-\ref{fig:b} depend on the Green function $\mathcal{B}^{H(0)}$ in Eq.~(\ref{eq:Greenf_Box}), which for values of the charged Higgs mass $m_{H^{\pm}}<2$ TeV or $m_{H^\pm} > 4$ TeV significantly overshoots or undershoots, respectively, the observed value of $\Delta C_9\approx -1$ (see below). Furthermore, the measurement of the $B_{c}$ lifetime and the BaBar collaboration $B\to D^{(\star)}\tau\overline{\nu}$ distributions, both of which depend strongly on $\tan\beta$ and $m_{H^\pm}$ through the $C_{R,L}^{cb}$ in Eq.~(\ref{semileptonicWCs}), push $\tan\beta$ to values lower than 2 and $m_{H^{\pm}}$ to values greater than $2\,\mathrm{TeV}$. In addition, we have noticed a strong penalty for large $\tan\beta$ values coming from the $B_{s}\to \mu^{+}\mu^{-}$ decays, due to the strong $\tan\beta$ dependence of $C_{10}$ and the (pseudo)scalar WCs. Lastly, the preferred masses of the other heavy Higgses, $m_H$ and $m_A$, are of the same order as $m_{H^\pm}$, as expected from the oblique parameter constraints. The best-fit values for some relevant scan parameters can be found in table \ref{tab:WilsonCoeff}.
\subsection{Neutral and charged anomalies}
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Parameter & Best fit \tabularnewline
\hline
\hline
$m_{H,A}$ & $3485$ GeV\tabularnewline
\hline
$m_{H^\pm}$ & $3429$ GeV\tabularnewline
\hline
$m_{12}$ & $2426$ GeV\tabularnewline
\hline
$\tan\beta$ & $0.98$\tabularnewline
\hline
$Y_{2,tt}^{u}$ & $0.60$\tabularnewline
\hline
$Y_{2,cc}^{u}$ & $1.15$\tabularnewline
\hline
$Y_{2,tc}^{u}$ & $-0.64$\tabularnewline
\hline
$Y_{2,bb}^{d}$ & $0.017$\tabularnewline
\hline
$Y_{2,ss}^{d}$ & $0.10$\tabularnewline
\hline
$Y_{2,sb}^{d}$ & $0.004$\tabularnewline
\hline
$Y_{2,\mu\mu}^{l}$ & $-0.04$\tabularnewline
\hline
$Y_{2,\tau\tau}^{l}$ & $-0.36$ \tabularnewline
\hline
$Y_{2,\mu\tau}^{l}$ & $0.75$ \tabularnewline
\hline
\end{tabular}\hspace{20pt}
\begin{tabular}{|c|c|}
\hline
Wilson coefficient & Best fit \tabularnewline
\hline
\hline
$\mathrm{Re}(\Delta C_{Q_{1}})$ & $0.14\pm0.01$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{2})$ & $-0.018\pm0.005$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{7})$ & $0.002\pm0.01$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{7}^{'})$ & $0.01\pm0.01$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{8})$ & $0.002\pm0.015$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{8}^{'})$ & $0.01\pm0.01$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{9})$ & $-0.89\pm0.15$\tabularnewline
\hline
$\mathrm{Re}(\Delta C_{10})$ & $-0.19\pm0.14$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\emph{Best-fit values for the scan parameters (left) and WCs for
$b\rightarrow s\mu^{+}\mu^{-}$ transitions (right). We show only $\mathrm{Re}(\Delta C_{Q_{1}})$,
given that at tree level and in the alignment limit $\mathrm{Re}(\Delta C_{Q_{1}})=\mathrm{Re}(\Delta C_{Q_{2}})$
and $m_{s}/m_{b}\,\mathrm{Re}(\Delta C_{Q_{1}})=\mathrm{Re}(\Delta C_{Q_{1}}^{'})=-\mathrm{Re}(\Delta C_{Q_{2}}^{'})$. The uncertainties on the WCs were computed with \textsf{GAMBIT} assuming a symmetric Gaussian distribution for the resulting one-dimensional profile likelihoods. We also do not display the $\mathrm{Re}(\Delta C_{9,10}^{'})$ WCs, which we find to be suppressed by a factor of $m_{b}/m_{t}$ compared to their unprimed counterparts.} \label{tab:WilsonCoeff}}
\end{table}
In table \ref{tab:WilsonCoeff} we show the best-fit values for the parameters from the scans (\textit{left}) and the muon-specific WCs evaluated at the best-fit point (\textit{right}); in particular, $\Delta C_{9}$ is consistent at the 1$\sigma$ level with the value obtained by model-independent fits. In this sense, the neutral anomalies can indeed be explained in the GTHDM, as shown in figure\ \ref{fig:WCs_1d_2d}. Furthermore, owing to the quadratic dependence of the branching ratio $\mathrm{BR}(B_{s}\rightarrow\mu^{+}\mu^{-})$ on the scalar WC $\Delta C_{Q_{1}}$, we can see two regions of solutions for it, one of them containing the SM prediction within 2$\sigma$. In addition, we ran a complementary scan invalidating points with $|\Delta C_{Q_{1}}|>0.1$ and found that the corresponding region of solutions gives an equally good fit to the data, i.e., the preference for one region of solutions over the other is completely arbitrary.
\begin{figure}
$\quad$\includegraphics[scale=0.41]{figures/GTHDM_neutral_101_like1D.pdf}
\includegraphics[scale=0.43]{figures/GTHDM_neutral_101_109_like2D_combined.pdf}$\quad$\includegraphics[scale=0.41]{figures/GTHDM_neutral_109_like1D.pdf}
\includegraphics[scale=0.43]{figures/GTHDM_neutral_101_110_like2D_combined.pdf}\includegraphics[scale=0.43]{figures/GTHDM_neutral_109_110_like2D_combined.pdf}\includegraphics[scale=0.41]{figures/GTHDM_neutral_110_like1D.pdf}
\caption{\emph{One- and two-dimensional profile likelihoods for three of the Wilson coefficients computed from the fit.}\label{fig:WCs_1d_2d}}
\end{figure}
In order to better understand the contribution of the GTHDM to the various rates and angular observables, we display several plots comparing both the SM and the GTHDM predictions alongside the experimental data. For the angular observables $\left\langle P_{1}\right\rangle$ and $\left\langle P_{5}^{\prime}\right\rangle$ defined in Eqs.~(\ref{eq:P1P2}) and (\ref{eq:P5p}), we show in figure\ \ref{fig:P1-P5p} their predictions compared to the CMS 2017 \cite{CMS:2017ivg}, ATLAS 2018 \cite{ATLAS:2018gqc} and LHCb 2020 \cite{LHCb:2020lmf} data. For $\left\langle P_{1}\right\rangle$ (figure\ \ref{fig:P1-P5p} \textit{left}) the GTHDM distribution is rather indistinguishable from the SM one, except in the $[1,\,2]$ $\mathrm{GeV}^{2}$ bin, close to the photon pole and sensitive to $C_{7}^{(\prime)}$. The situation is different for $\left\langle P_{5}^{\prime}\right\rangle$ (figure\ \ref{fig:P1-P5p} \textit{right}), for which the GTHDM prediction fits the LHCb 2020 data better, particularly in the $C_{7}^{(\prime)}$-$C_{9}^{(\prime)}$ interference region ($1<q^2<6\,\textrm{GeV}^2$). We also provide in figure\ \ref{fig:Si-obs} predictions for the angular observables in the $S_{i}$ basis, using the same LHCb 2020 measurements and also the ATLAS 2018 \cite{ATLAS:2018gqc} data. We can see that in the large recoil region the GTHDM fits the LHCb data \cite{LHCb:2020lmf} better than the SM by 2$\sigma$. We also note that neither the SM nor the GTHDM can explain the central values (with larger uncertainties) of the ATLAS 2018 data.
\begin{figure}[htb]
\includegraphics[scale=0.5]{figures/P1.pdf}$\qquad$\includegraphics[scale=0.5]{figures/P5p.pdf}
\caption{\emph{Predicted distributions for \textit{Left}: $\left\langle P_{1}\right\rangle$ and \textit{Right}: $\left\langle P_{5}^{\prime}\right\rangle$ compared to the CMS 2017 \cite{CMS:2017ivg}, ATLAS 2018 \cite{ATLAS:2018gqc} and LHCb 2020 \cite{LHCb:2020lmf} data. The theoretical uncertainties using \textsf{GAMBIT} have been computed assuming a symmetric Gaussian distribution for the resulting one-dimensional profile likelihoods for each one of the bins. The theory predictions close to the $J/\psi(1S)$ and $\psi(2S)$ narrow charmonium resonances are vetoed from all our plots.} \label{fig:P1-P5p}}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.5]{figures/FL.pdf}$\qquad$\includegraphics[scale=0.5]{figures/S3.pdf}
\includegraphics[scale=0.5]{figures/S5.pdf}$\qquad$\includegraphics[scale=0.5]{figures/AFB.pdf}
\caption{\emph{Predicted distributions for the form factor dependent observables in the $S_{i}$ basis using both the ATLAS 2018 \cite{ATLAS:2018gqc} and the LHCb 2020 \cite{LHCb:2020lmf} data.}\label{fig:Si-obs}}
\end{figure}
As for the measured branching ratios of $B^{0}\rightarrow K^{0*}\mu^{+}\mu^{-}$ and $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$, in figure\ \ref{fig:bsmumu_predictions} we show the SM and GTHDM predictions using the LHCb results \cite{LHCb:2016ykl,LHCb:2012juf,LHCb:2014cxe}, where we can see again how the GTHDM fits the data better than the SM, especially in the region above the open charm threshold, which is sensitive to both $C_{9}^{(\prime)}$ and $ C_{10}^{(\prime)}$. In contrast, the performance of the model is worse than that of the SM (figure\ \ref{fig:Lambdab_Lmumu} \textit{left}) in the low recoil region of the differential branching ratio $\frac{d\mathrm{BR}}{dq^{2}}(\Lambda_{b}\rightarrow\Lambda\mu^{+}\mu^{-})$ when comparing to the LHCb 2015 \cite{LHCb:2015tgy} data. As pointed out in \cite{Bhom:2020lmk}, decays of the $\Lambda_b$ baryon, such as $\Lambda_{b}\rightarrow\Lambda\mu^{+}\mu^{-}$, have much larger uncertainties than the corresponding meson decays. However, once more experimental data are available, recent \cite{Detmold:2016pkz} and future developments in lattice calculations could eventually allow this decay to provide constraints similar to those from other $b\to s\mu^+\mu^-$ transitions. Finally, the results for the $\frac{d\mathrm{BR}}{dq^{2}}(B_{s}\rightarrow\phi\mu^{+}\mu^{-})$ distribution are shown in figure \ref{fig:Lambdab_Lmumu} \textit{right}. The large recoil region of the experimental data deviates from both the SM and GTHDM predictions by approximately 3$\sigma$, while for the low recoil bin the GTHDM performs slightly better than the SM, by approximately 1$\sigma$.
\begin{figure}[h]
\includegraphics[scale=0.5]{figures/BR_B0Kstarmumu.pdf}$\qquad$\includegraphics[scale=0.5]{figures/BR_BKmumu.pdf}
\caption{\emph{\textit{Left}: Differential branching ratio for $\frac{d\mathrm{BR}}{dq^{2}}(B^{0}\rightarrow\,K^{*0}\mu^{+}\mu^{-})$ with the LHCb 2016 data \cite{LHCb:2016ykl}. \textit{Right}: $\frac{d\mathrm{BR}}{dq^{2}}(B^{+}\rightarrow\,K^{+}\mu^{+}\mu^{-})$ compared to the LHCb 2012 and 2014 measurements \cite{LHCb:2012juf,LHCb:2014cxe}.} \label{fig:bsmumu_predictions}}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.5]{figures/Lambda_to_Lmumu.pdf}$\qquad$\includegraphics[scale=0.5]{figures/BRBsphimumu.pdf}
\caption{\emph{\textit{Left}: Differential branching ratio $\frac{d\mathrm{BR}}{dq^{2}}(\Lambda_{b}\rightarrow\Lambda\mu^{+}\mu^{-})$ obtained with} \texttt{flavio} \emph{\cite{Straub:2018kue} compared to the LHCb 2015 \cite{LHCb:2015tgy} data. \textit{Right}: $\frac{d\mathrm{BR}}{dq^{2}}(B_{s}\rightarrow\phi\mu^{+}\mu^{-})$ compared to the LHCb 2015 and 2021 data \cite{LHCb:2015wdu,LHCb:2021zwz}.} \label{fig:Lambdab_Lmumu}}
\end{figure}
Last but not least among the observables related to the $b\to s\mu^{+}\mu^{-}$ transitions are the ratios $R(K^{(*)})$. Despite consisting of only three bins in total \cite{LHCb:2017avl,LHCb:2019hip,LHCb:2021trn}, these measurements have been intensively studied, as they provide evidence for LFU violation. We include in our fit the latest LHCb collaboration data for the $R(K)$ and $R(K^{*})$ ratios, from 2021 \cite{LHCb:2021trn} and 2017 \cite{LHCb:2017avl} respectively, and obtain the plots in figure\ \ref{fig:RK-RKstar}, where we also compare to the Belle 2019 data \cite{Belle:2019oag,BELLE:2019xld}. The effect of the fit on the $R(K^{(*)})$ ratios is significant, explaining the LHCb 2021 measurement of $R(K)$ at the 1$\sigma$ level.
\begin{figure}[htb]
\includegraphics[scale=0.5]{figures/RK.pdf}$\qquad$\includegraphics[scale=0.5]{figures/RKstar.pdf}
\caption{\emph{$R(K^{(*)})$ theoretical ratios compared to both the LHCb \cite{LHCb:2017avl,LHCb:2021trn} and Belle data \cite{Belle:2019oag,BELLE:2019xld}.} \label{fig:RK-RKstar}}
\end{figure}
\begin{figure}[htb]
\begin{centering}
\includegraphics[scale=0.60]{figures/GTHDM_neutral_8_9_like2D.pdf}
\par\end{centering}
\caption{\emph{$R(D^{*})$ versus $R(D)$ correlated ratios. The cyan and orange lines are the 1$\sigma$ and 3$\sigma$ deviations from the HFLAV average respectively.}\label{fig:RD_RDstar}}
\end{figure}
The next interesting results are related to the charged anomalies; in particular, we find that the $R(D)$ ($R(D^{*})$) ratio can (cannot) be explained at the $1\sigma$ level with the GTHDM, a result in agreement with the phenomenological analysis of \cite{Iguro:2017ysu}. We furthermore corroborate that the constraint coming from the $B_{c}$ lifetime makes it very difficult to fit $R(D^{*})$ and $R(D)$ simultaneously. In figure \ref{fig:RD_RDstar} we show the values preferred by the profile likelihood. We see a slightly better performance of the GTHDM compared to the SM with respect to the HFLAV average. Regarding the $d\Gamma(B\to D^{(\star)}\tau\overline{\nu})/(\Gamma dq^{2})$ distributions measured by BaBar \cite{Lees:2013uzd}, we find that the GTHDM prediction is indistinguishable from the SM, in agreement with \cite{Martinez:2018ynq}. We furthermore find that the longitudinal polarisation $F_{L}(D^{*})$ is strongly correlated with $R(D^{*})$ and that the model is not able to explain the Belle measurement, giving a best fit value of $0.458\pm0.006$.
\subsection{Anomalous $(g-2)_{\mu}$}
With regard to the anomalous magnetic moment of the muon, $(g-2)_{\mu}$, we find that a simultaneous explanation using all the likelihoods defined before is not possible (solid red line in figure\ \ref{Delta_a_mu}). However, when fitting all other observables except the neutral anomalies, i.e., without using the \textsf{HEPLike} likelihoods, the model is able to explain the $\Delta a_{\mu}$ measured by Fermilab at the 1$\sigma$ level (dashed gray line in figure\ \ref{Delta_a_mu}). Furthermore, when evaluating the performance of the \textsf{HEPLike} likelihoods at the best fit value, we find a SM-like behavior with all NP WCs close to zero, except for those scalar WCs that enter in $\mathrm{BR}(B_{s}\rightarrow\mu^{+}\mu^{-})$.
\begin{figure}[htb]
\begin{centering}
\includegraphics[scale=0.60]{figures/GTHDM_neutral_20_like1D.pdf}
\par\end{centering}
\caption{\emph{One-dimensional profile likelihood for $\Delta a_{\mu}$. The solid red line shows the result from the fit using all likelihoods and observables defined in this study. The dashed gray line is obtained using all but the \textsf{HEPLike} likelihoods instead.}\label{Delta_a_mu}}
\end{figure}
\subsection{Projections for future and planned experiments}
Although a detailed collider analysis is beyond the scope of the present work, we have included as pure observables the tree-level branching ratios for $t\to c\,h$ and $h\to b\,s$\footnote{We are not aware of current bounds for the $h\to b\,s$ branching ratio so we did not define an associated likelihood function for it.}. These tree level branching ratios in the GTHDM are suppressed as $c_{\beta\alpha}^{2}|\xi_{tc(bs)}^{u(d)}|^{2}$, respectively, so that in the alignment limit they vanish exactly. In order to study the effects of this fine-tuned suppression, we ran a second scan with $s_{\beta\alpha}\in[0.9999,\,1]$ and found that the branching ratio of $t\to c\,h$ is of order $10^{-11}-10^{-7}$, which, although outside the sensitivity of future searches, is larger than the SM loop prediction ($\sim10^{-15}$) and well below the current experimental upper bound obtained by the ATLAS collaboration \cite{ATLAS:2018jqi}
\begin{align}
\mathrm{BR}(t\to c\,h)<1.1\cdot 10^{-3}\,.
\end{align}
Concerning the $\mathrm{BR}(h\to b\,s)$ observable, it was shown in \cite{Herrero-Garcia:2019mcy} that it is related to tree level $B_{s}-\overline{B}_{s}$ oscillations which are not only proportional to $c_{\beta\alpha}^{2}$ but also to pseudoscalar contributions independent of the scalar CP-even mixing. Hence, in figure\ \ref{fig:quark-likelihoods} we see that
$h\to b\,s$ is not as constrained as $t\to c\,h$ with values ranging from $10^{-7}$ up to $10^{-3}$ at the 1$\sigma$ level, which may be within range of the ILC~\cite{Barducci:2017ioq}.
\begin{figure}[htb]
\begin{centering}
\includegraphics[scale=0.60]{figures/GTHDM_neutral_18_3_like2D_combined.pdf}
\par\end{centering}
\caption{\emph{Profile likelihood contours in the $\Delta M_{s}$-$\mathrm{BR}(h\to b\,s)$ plane obtained with a scan using $s_{\beta\alpha}\in[0.9999,\,1]$. The observed correlation is expected from Eq.(4.18) in \cite{Herrero-Garcia:2019mcy}.} \label{fig:quark-likelihoods}}
\end{figure}
Regarding LFV searches, we show in figure\ \ref{fig:leptonic-likelihoods} the profile likelihood for the $\tau\rightarrow3\mu$ and $\tau\to\mu\gamma$ branching ratios. We see that the best fit value for the $\tau\rightarrow3\mu$ decay is well within the projected sensitivity in the Belle II experiment \cite{Belle-II:2018jsg} with a discovery potential for $\mathrm{BR}(\tau\rightarrow3\mu)\sim 10^{-9}$. Regarding the $\tau\to\mu\gamma$ decay, we find that with the projected future sensitivity, the GTHDM prediction could be confirmed with values for the branching ratio varying from $10^{-9}$ up to $10^{-8}$. As mentioned earlier, the $\tau\rightarrow3\mu$ decay receives contributions in the GTHDM from all tree, dipole and contact terms, in such a way that a possible detection in the $\tau\to\mu\gamma$ channel will not necessarily imply a strong constraint for $\mathrm{BR}(\tau\rightarrow3\mu)$.
\begin{figure}[htb]
\begin{centering}
\includegraphics[scale=0.65]{figures/GTHDM_neutral_32_33_like2D.pdf}
\par\end{centering}
\caption{\emph{$\mathrm{BR}(\tau\to 3\mu)$ versus $\mathrm{BR}(\tau\to \mu\gamma)$. The magenta solid line is the combined Belle II experiment future sensitivity obtained for both observables using a one-sided Gaussian upper limit likelihood function at 90$\%$C.L.} \label{fig:leptonic-likelihoods}}
\end{figure}
With respect to $h\to \tau\mu$, using the model best fit point values we computed the branching ratio $\mathrm{BR}(h\to \tau\mu)$, obtaining values from $10^{-2}$ down to $10^{-6}$, which are within the future sensitivity of the HL-LHC, reaching the 0.05$\%$ limit \cite{Hou:2020tgl}.
Finally, as for the $B_{s}\rightarrow\tau^{+}\tau^{-}$ decay, we find values of at most $\mathrm{BR}(B_{s}\rightarrow\tau^{+}\tau^{-})\sim\mathcal{O}(10^{-6})$ with our best fit point, which is one order of magnitude higher than the SM prediction, but out of reach of the future sensitivity in both the HL-LHC and the Belle-II experiments with limits at $\mathcal{O}(10^{-4})$
\cite{LHCb:2018roe,Belle-II:2018jsg}. Regarding $\mathrm{BR}(B^{+}\rightarrow K^{+}\tau^{+}\tau^{-})$, as in the $B_{s}\rightarrow\tau^{+}\tau^{-}$ case, the predicted branching ratio is of order $10^{-7}-10^{-6}$, out of reach for Belle-II projections at $2\times10^{-5}$.
\section{Conclusions and Outlook}
\label{sec:Conclusions}
We presented a frequentist-inspired likelihood analysis for the GTHDM including the charged anomalies, $b\to s\mu^{+}\mu^{-}$ transitions and the anomalous magnetic moment of the muon, along with other flavour observables.
The analysis was carried out using the open source global fitting framework \textsf{GAMBIT}.
We computed the GTHDM WCs and validated them, obtaining full agreement with the one-loop calculations reported in the literature once the different notation factors were taken into account.
As expected, we found that the GTHDM can explain the neutral anomalies at the $1\sigma$ level.
Additionally, we also confirmed that the model is able to fit the current experimental values of the $R(D)$ ratio at the $1\sigma$ level, but it cannot accommodate the $D^{*}$ charmed meson observables $R(D^{*})$ and $F_{L}(D^{*})$.
Furthermore, we inspected the fitted values for the angular observables in $b\to s\mu^{+}\mu^{-}$ transitions, obtaining in general a better performance with the GTHDM in comparison to the SM.
Then, based on the obtained best fit values of the model parameters and their 1$\sigma$ and 2$\sigma$ C.L. regions, we made predictions that directly impact the future collider observables $\mathrm{BR}(t\to ch)$, $\mathrm{BR}(h\to bs)$, $\mathrm{BR}(h\to \tau\mu)$, $\mathrm{BR}(B_{s}\rightarrow\tau^{+}\tau^{-})$, $\mathrm{BR}(B^{+}\rightarrow K^{+}\tau^{+}\tau^{-})$ and the flavour violating decays of the $\tau$ lepton, $\mathrm{BR}(\tau\rightarrow3\mu)$ and $\mathrm{BR}(\tau\to\mu\gamma)$. We find that the model predicts values of $\mathrm{BR}(t\to ch)$, $\mathrm{BR}(B_{s}\rightarrow\tau^{+}\tau^{-})$ and $\mathrm{BR}(B^{+}\rightarrow K^{+}\tau^{+}\tau^{-})$ that are out of reach of future experiments, but its predictions for $\mathrm{BR}(h\to bs)$ and $\mathrm{BR}(h\to \tau\mu)$ are within the future sensitivity of the HL-LHC or the ILC. We also find that the predictions for the $\tau\rightarrow3\mu$ and $\tau\to\mu\gamma$ decays are well within the projected limits of the Belle II experiment. In summary, the next generation of particle colliders will have the sensitivity to probe, discover or exclude large parts of the parameter space of the GTHDM, which serves as further motivation for the development of higher energy and higher intensity particle colliders.
We can envision many avenues of future investigation using the tools and techniques developed for this work. The complete parameter space of the GTHDM is enormous, and thus for this study we have only focused on a subset of CP-conserving interactions between second and third generation fermions. The inclusion of the first generation in the Yukawa textures would introduce additional interactions and decay channels, possibly improving the fit to several of the flavour anomalies, while at the same time introducing new relevant constraints, such as rare kaon decays, e.g. from the NA62 experiment, and LFV muon decays, e.g. from the Mu2e experiment. CP violation in kaon and $B$-meson decays would also become an important constraint in the case of complex Yukawa textures. Modifications of the GTHDM may also lead to improved fits to some flavour observables; for instance, it has been shown that with the addition of right-handed neutrinos the model can better accommodate the neutral anomalies. Lastly, in this study we have not included detailed collider constraints from, e.g., searches for heavy Higgs bosons. Such a detailed study is a clear follow-up to this work and will showcase the complementarity of flavour and collider searches in constraining models of new physics that tools such as \textsf{GAMBIT} can explore.
Finally, in view of the latest experimental measurement made by the Fermilab Muon $g-2$ Collaboration, we performed a simultaneous fit to $\Delta a_{\mu}$ constrained by the charged anomalies finding solutions at the $1\sigma$ level. Once the neutral anomalies are included, however, a simultaneous explanation is unfeasible. A detailed study looking for a simultaneous explanation of both $g-2$ and the neutral anomalies in the GTHDM will be presented in a follow-up work.
\acknowledgments
We thank Martin White, Filip Rajec and the rest of the \textsf{GAMBIT}\xspace community for their suggestions and advice. We would also like to thank Dominik St\"ockinger and Hyejung St\"ockinger-Kim for their help and guidance on the dominant contributions to muon $g-2$. C.S. thanks Ulrik Egede for useful comments on the future sensitivity of the HL-LHC, and Peter Stangl for discussions about correlations in FCNC observables. The work of C.S. was supported by the Monash Graduate Scholarship (MGS) and the Monash International Tuition Scholarship (MITS). T.E.G. is supported by DFG Emmy Noether Grant No. KA 4662/1-1. The research placement of D.J. for this work was supported by the Australian Government Research Training Program (RTP) Scholarship and the Deutscher Akademischer Austauschdienst (DAAD) One-Year Research Grant. The work of P.A.\ was supported by the Australian Research Council Future Fellowship grant FT160100274. The work of P.A., C.B. and T.E.G. was also supported with the Australian Research Council Discovery Project grant DP180102209. The work of C.B. was supported by the Australian Research Council through the ARC Centre of Excellence for Particle Physics at the Tera-scale CE110001104. This project was also undertaken with the assistance of resources and services from the National Computational Infrastructure, which is supported by the Australian Government. We thank Astronomy Australia Limited for financial support of computing resources, and the Astronomy Supercomputer Time Allocation Committee for its generous grant of computing time.
\section{Introduction}
Understanding nanoscale effects is one of the most exciting scientific endeavours. It underpins very diverse research areas such as the mechanisms of life~\cite{Schechter2008,MYS+2011, Thomas2012, Wang2019}, quantum information~\cite{TARASOV2009, Laucht2021, Heinrich2021}, and fundamental phenomena in condensed matter systems~\cite{Cohen2008, Ou2019, Bachtold2022}. Research in these areas requires nanoscale quantum sensors, and one of the most mature room-temperature nanoscale quantum sensors is the nanodiamond containing the negatively-charged nitrogen-vacancy (NV) centre~\cite{Schirhagl2014, Radtke2019}.
Such doped nanodiamonds are a superb system for quantum sensing. They are highly biocompatible~\cite{Aharonovich2011, Zhu2012} and photostable~\cite{Vaijayanthimala2012, Jung2020}, and therefore are ideal for minimally invasive biological experiments.
In the NV centre, readout is typically achieved via optically detected magnetic resonance (ODMR), where resonances in the interaction between the electronic spin of NV centres and a radio frequency (RF) electromagnetic field are detected by measuring the photoluminescence intensity of the centres. In this way, NV centres have been used for nanoscale magnetometry~\cite{Maze2008, Bai2020}, electrometry~\cite{Dolde2011, Tetienne2017}, thermometry~\cite{Kucsko2013, Khalid2020} and pressure measurements~\cite{Doherty2014}. Alternatively, accurate measurement of the photoluminescence spectrum (in particular its zero-phonon line) allows for all-optical measurements~\cite{PDC+2014}.
A drawback of fluorescent nanodiamonds in comparison to quantum dots and organic molecules is their intrinsic heterogeneity. Large variations in fluorescence intensities and lifetimes are observed between NV centres in similar nanodiamonds~\cite{Heffernan2017, Capelli2019, Wilson2019, Capelli2021}. Understanding the origins of such variations and the ways of reducing the heterogeneity is important for developing a reliable technological platform.
Here we show large variations in NV fluorescence by performing theoretical modelling of the fluorescence of a point defect in spherical nanodiamonds as a function of nanodiamond size and the defect position within the crystal. To explore the effect of geometry on emission, we treat the NV phonon structure as a set of electric dipoles with fixed energy spacing and emission probabilities~\cite{Davies1976,Su2008}, and the electromagnetic fields within and outside the diamond are calculated via Mie theory~\cite{Mie1908, vandeHulst1957, Bohren1998, Margetis2002} and validated with a numerical solver~\cite{Sun2022}. In our calculations, the density-of-states modification for centres close to the crystal surface~\cite{Inam2013} is not considered. Our results show that noticeable variations in the spectra and intensities appear when the particle radius, $a$, is greater than around 110~nm, and the shapes of the spectra are significantly modified for larger crystals, as shown for the cases $a=200$~nm and $a=300$~nm. The effect is negligible for sizes below $a=100$~nm. Although our systems are idealised for computational tractability, our results highlight the sensitivity of fluorescence to the precise geometry of the NV-diamond system, and are therefore important for understanding the experimentally observed variations in fluorescence.
\section{Model}
\begin{table}[t]
\centering
\caption{Emission probabilities (relative intensities $R$) of a NV centre in bulk diamond as a function of the number of de-exciting phonons at low temperature, calculated in Ref.~\cite{Davies1976}. The zero phonon line is indicated by ZPL, and the phononic sideband arises from the summation from 1 to 11 phonons.}
\begin{tabular}{| c | c | c | c |}
\hline
No. of phonons & Wavelength $\lambda$ & Emission probabilities & Dipole moment strength (arb. u.) \\
& (nm) & (Relative intensity $R$) & ($|p| = \lambda^2 \sqrt{R}$) \\
\hline
0 (ZPL) & 637 & 0.0270 & 66674.65 \\
1 & 659 & 0.0951 & 133924.83 \\
2 & 683 & 0.173 & 194028.02 \\
3 & 708 & 0.209 & 229160.45 \\
4 & 736 & 0.191 & 236740.36 \\
5 & 765 & 0.140 & 218971.14 \\
6 & 797 & 0.0856 & 185846.13 \\
7 & 832 & 0.0441 & 145367.04 \\
8 & 870 & 0.0211 & 109946.08 \\
9 & 912 & 0.00931 & 80253.60 \\
10 & 957 & 0.00343 & 53637.80 \\
11 & 1008 & 0.000980 & 31807.83 \\
\hline
\end{tabular}
\label{tab:brancingratios}
\end{table}
To investigate how the NV centre location within a nanodiamond particle affects the far-field fluorescence, we consider a single NV in a spherical particle with a refractive index of 2.4. The broad NV emission spectrum is represented by emission from 12 point dipoles $\boldsymbol{p} \equiv \boldsymbol{p}_i \, (i=0,1,2...,11)$ corresponding to the NV de-exciting via a single photon and multiple phonons. This gives rise to a broad emission spectrum with components at different wavelengths, as listed in Table~\ref{tab:brancingratios}. We use the low temperature emission probabilities from Refs.~\cite{Davies1976, Su2008} as the relative intensity, $R$, emitted from the NV centre at different numbers of de-exciting phonons, but we expect similar results for the room temperature case. Since the intensity $R$ is proportional to the field power, it is proportional to the square of the strength of the electric dipole representing the NV centre. To match the dimensions, we have $ (c |\boldsymbol{p}|^2) / (4 \pi \epsilon_0 \epsilon_r \lambda^4) \sim R$ where $c$ is the speed of light, $\epsilon_0$ is the vacuum permittivity, $\epsilon_r$ is the relative permittivity and $\lambda$ is the wavelength of the emitted light. Since $c$, $\epsilon_0$ and $\epsilon_r$ are constant in a homogeneous diamond, for simplicity, we set $|\boldsymbol{p}|^2 = \lambda^4 R$. In Fig.~\ref{Fig:10nm} (c-d), the square symbols display the relative intensity $R$ at the corresponding wavelengths.
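The conversion from the tabulated relative intensities to the equivalent dipole moment strengths, $|\boldsymbol{p}| = \lambda^2\sqrt{R}$, can be reproduced directly; the short Python sketch below (the variable names and printed format are illustrative choices, not part of the model) regenerates the last column of Table~\ref{tab:brancingratios}:

```python
import math

# Relative intensities R (emission probabilities) from Table 1 (Davies 1976),
# indexed by the number of de-exciting phonons, with wavelengths in nm.
spectrum = [
    (637, 0.0270), (659, 0.0951), (683, 0.173), (708, 0.209),
    (736, 0.191), (765, 0.140), (797, 0.0856), (832, 0.0441),
    (870, 0.0211), (912, 0.00931), (957, 0.00343), (1008, 0.000980),
]

def dipole_strength(wavelength_nm, R):
    """|p| = lambda^2 * sqrt(R), i.e. the convention |p|^2 = lambda^4 * R."""
    return wavelength_nm**2 * math.sqrt(R)

for lam, R in spectrum:
    print(f"{lam:5d} nm   R = {R:.4g}   |p| = {dipole_strength(lam, R):.2f}")
```

Note that the tabulated probabilities sum to approximately one, as expected for a complete set of decay channels.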
\begin{figure}[t]
\centering{}
\includegraphics[width=0.65\textwidth]{Fig1_PhysSketch.png}
\caption{Sketch of the physical model for the photon collections by a pin hole with a circular optical objective emitted from a single NV centre implemented in a spherical nanodiamond.} \label{fig:sketch}
\end{figure}
To monitor the emission, we model a detector with a pin hole and a circular entrance aperture. The axis of the point dipole is assumed to be either parallel or perpendicular to the plane of the aperture, as sketched in Fig.~\ref{fig:sketch}.
All electric dipoles are co-located at $\boldsymbol{x}_d$ but each of them oscillates at a specific angular frequency $\omega \equiv \omega_i \, (i=0,1,2...,11)$ as $\exp{(-\mathrm{i} \omega t)}$. The emitted electric and magnetic fields from such a dipole are, respectively,
\begin{subequations}\label{eq:EMfielddipole}
\begin{align}
\boldsymbol{E}^{d} &= \frac{1}{4\pi \epsilon_0 \epsilon_2}\frac{\exp( \mathrm{i} k_2 r_d)}{ r_d^3}\left\{(-k_2^2 r_d^2 - 3 \mathrm{i} k_2 r_d +3) \frac{\boldsymbol r_d \cdot \boldsymbol p}{r_d^2}\boldsymbol r_d + (k_2^2 r_d^2 + \mathrm{i} k_2 r_d -1) \boldsymbol p \right\}, \\
\boldsymbol{H}^{d} &= \frac{\omega k_2}{4\pi}[\boldsymbol{r}_d \times \boldsymbol{p}] \left( \frac{1}{r_d} - \frac{1}{\mathrm{i} k_2 r_d^2} \right) \frac{\exp( \mathrm{i} k_2 r_d)}{r_d}
\end{align}
\end{subequations}
where $\boldsymbol{r}_d = \boldsymbol{x} - \boldsymbol{x}_d$ with $\boldsymbol{x}$ being the field location of interest and $r_d = |\boldsymbol{r}_d|$, $k_2$ is the wavenumber, $\epsilon_0 $ is the permittivity in vacuum, and $\epsilon_2 = n_2^2$ is the relative permittivity of diamond with $n_2 = 2.4$ the refractive index of diamond.
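Eq.~(\ref{eq:EMfielddipole}) can be coded up directly as a cross-check. The sketch below (Python with NumPy; SI units, and the function and variable names are our own illustrative choices) evaluates the dipole fields and can be verified against the expected far-field behaviour, namely the $1/r_d$ fall-off of $|\boldsymbol{E}|$ and the ratio $|\boldsymbol{E}|/|\boldsymbol{H}|$ approaching the impedance of the medium ($\sqrt{\mu_0/\epsilon_0}\approx 376.7\,\Omega$ for $\epsilon_2=1$):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (SI)
C0 = 299792458.0         # speed of light in vacuum (SI)

def dipole_fields(x, x_d, p, k2, omega, eps2):
    """Electric and magnetic fields of an oscillating point dipole, Eq. (1).

    x     : field point (3-vector, m)
    x_d   : dipole location (3-vector, m)
    p     : dipole moment (3-vector, C m)
    k2    : wavenumber in the medium (rad/m)
    omega : angular frequency (rad/s)
    eps2  : relative permittivity of the medium
    """
    r_vec = np.asarray(x, dtype=float) - np.asarray(x_d, dtype=float)
    r = np.linalg.norm(r_vec)
    kr = k2 * r
    phase = np.exp(1j * kr)
    rp = np.dot(r_vec, p)
    E = phase / (4.0 * np.pi * EPS0 * eps2 * r**3) * (
        (-kr**2 - 3j * kr + 3.0) * (rp / r**2) * r_vec
        + (kr**2 + 1j * kr - 1.0) * np.asarray(p, dtype=complex))
    H = (omega * k2 / (4.0 * np.pi)) * np.cross(r_vec, p) \
        * (1.0 / r - 1.0 / (1j * k2 * r**2)) * phase / r
    return E, H
```

In the far field ($k_2 r_d \gg 1$) the terms proportional to $k_2^2 r_d^2$ dominate and the fields reduce to the familiar transverse radiation fields.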
In a homogeneous medium, the intensity at each wavelength would simply be proportional to the corresponding emission probability. However, the electromagnetic fields transmitted to the surrounding medium (air in this work) are modified by the boundary conditions on the surface of the particle and must be obtained by solving Maxwell's equations. In the frequency domain, Maxwell's equations in the internal domain of the nanodiamond and in the external domain are
\begin{subequations}\label{eq:Maxwelleq}
\begin{align}
\nabla \times \boldsymbol E^{j} &= \mathrm{i} \omega \mu_0 \mu_j \boldsymbol{H}^{j}, \\
\nabla \cdot \boldsymbol E^{j} & = 0; \\
\nabla \times \boldsymbol H^{j} &=-\mathrm{i} \omega \epsilon_0 \epsilon_j \boldsymbol{E}^{j}, \\
\nabla \cdot \boldsymbol H^{j} &= 0
\end{align}
\end{subequations}
where $\mu_0$ is the permeability of vacuum, $j$ refers to the external domain and the nanodiamond domain with $j=1$ and $j=2$, respectively, and $\mu_j$ is the relative permeability of each domain, which is set to $\mu_1 = \mu_2 = 1$ in this work.
Together with the boundary conditions,
\begin{subequations}
\begin{align}
\boldsymbol{t}_1 \cdot (\boldsymbol{E}^{2} + \boldsymbol{E}^{d}) &= \boldsymbol{t}_1\cdot \boldsymbol{E}^{1}, \qquad \boldsymbol{t}_2 \cdot (\boldsymbol{E}^{2} + \boldsymbol{E}^{d}) = \boldsymbol{t}_2\cdot \boldsymbol{E}^{1}; \\
\boldsymbol{t}_1 \cdot (\boldsymbol{H}^{2} + \boldsymbol{H}^{d}) &= \boldsymbol{t}_1\cdot \boldsymbol{H}^{1}, \qquad \boldsymbol{t}_2 \cdot (\boldsymbol{H}^{2} + \boldsymbol{H}^{d}) = \boldsymbol{t}_2\cdot \boldsymbol{H}^{1}
\end{align}
\end{subequations}
where $\boldsymbol{t}_1$ and $\boldsymbol{t}_2$ are the two independent unit tangential directions on the diamond surface, Maxwell's equations~(\ref{eq:Maxwelleq}) are solved using Mie theory, as detailed in Appendix~\ref{sec:appsolution}. After obtaining the electromagnetic fields, we calculate the observed far-field intensity for each dipole, as measured through the aperture located either at the top view position ($T$) or the side view position ($S$), by integrating the time-averaged Poynting vector over the aperture area:
\begin{subequations}
\begin{align}
\text{Top view:} \qquad I_{T}(\lambda_{i}) = \int_{S_{\text{obj}}} \frac{1}{2} \left[\boldsymbol{E}^{1} \times (\boldsymbol{H}^{1})^{*}\right] \cdot \boldsymbol{e}_z \, \mathrm{d} S, \qquad i=0,1,2...,11; \\
\text{Side view:} \qquad I_{S}(\lambda_{i}) = \int_{S_{\text{obj}}} \frac{1}{2} \left[\boldsymbol{E}^{1} \times (\boldsymbol{H}^{1})^{*}\right] \cdot \boldsymbol{e}_x \, \mathrm{d} S, \qquad i=0,1,2...,11.
\end{align}
\end{subequations}
Here $\lambda_i$ is the wavelength of the corresponding dipole, $\boldsymbol{e}_x$ and $\boldsymbol{e}_z$ are the unit vectors along the $x$ and $z$ axes, respectively, and the superscript `*' indicates the complex conjugate of the field. These formulations give the overall photon counts with respect to the relative intensities of a NV centre in bulk diamond listed in Table~\ref{tab:brancingratios}.
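To build intuition for the difference between the top and side collection geometries, it is instructive to estimate the collected power fraction for a bare dipole in a homogeneous medium, i.e., ignoring the particle altogether. The sketch below (Python with NumPy; the quadrature grid and function names are illustrative assumptions) integrates the far-field $\sin^2\theta$ radiation pattern of a $z$-oriented dipole over a circular aperture of given half-angle centred on either the $z$-axis (top view) or the $x$-axis (side view):

```python
import numpy as np

def collected_fraction(axis, half_angle, n_theta=2000, n_phi=400):
    """Fraction of the total power of a z-oriented dipole (far-field pattern
    sin^2(theta)) passing through a circular aperture of given half-angle
    whose axis is 'z' (top view) or 'x' (side view); homogeneous medium,
    midpoint quadrature over the full sphere."""
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    ph = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    # direction cosines with respect to the two candidate aperture axes
    along_z = np.cos(TH)
    along_x = np.sin(TH) * np.cos(PH)
    along = along_z if axis == "z" else along_x
    pattern = np.sin(TH) ** 2                      # dipole radiation pattern
    w = np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)  # solid angle
    inside = along > np.cos(half_angle)            # within the aperture cone
    return (pattern * w * inside).sum() / (pattern * w).sum()
```

For a half-angle of $30^{\circ}$ this gives a top-view fraction of about $1.3\%$ versus close to $10\%$ from the side, reflecting the weak on-axis emission of a dipole; the boundary conditions at the particle surface modify these baseline fractions.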
We also calculated the normalised (relative) spectral intensity of the radiation collected through the pin hole:
\begin{subequations}
\begin{align}
\text{Top view:} \qquad I^{n}_{T}(\lambda_{i}) &= \frac{I_{T}(\lambda_{i})}{\sum^{11}_{i=0} I_{T}(\lambda_{i})}, \qquad i=0,1,2...,11;\\
\text{Side view:} \qquad I^{n}_{S}(\lambda_{i}) &= \frac{I_{S}(\lambda_{i})}{\sum^{11}_{i=0} I_{S}(\lambda_{i})}, \qquad i=0,1,2...,11.
\end{align}
\end{subequations}
The normalised spectra are important as there are often large experimental variations in total fluorescence and hence relative changes in the spectrum are often easier to observe.
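The normalisation itself is a one-line operation; the following minimal sketch (Python with NumPy; the example input simply reuses the bulk relative intensities of Table~\ref{tab:brancingratios} as a stand-in for the collected powers) makes the convention explicit:

```python
import numpy as np

def normalised_spectrum(I):
    """Relative spectral intensity I^n(lambda_i) = I(lambda_i) / sum_i I(lambda_i);
    by construction the result sums to one."""
    I = np.asarray(I, dtype=float)
    return I / I.sum()

# Illustrative input: collected powers proportional to the bulk relative
# intensities of Table 1 (any overall factor cancels on normalising).
I_collected = 0.37 * np.array([0.0270, 0.0951, 0.173, 0.209, 0.191, 0.140,
                               0.0856, 0.0441, 0.0211, 0.00931, 0.00343, 0.000980])
I_n = normalised_spectrum(I_collected)
```

Because the overall collection efficiency cancels, the normalised spectrum isolates purely spectral changes from brightness changes.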
\section{Results}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.35\textwidth]{Fig2a_Xx1.png}} \qquad
\subfloat[Case B]{ \includegraphics[width=0.35\textwidth]{Fig2b_Xz1.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.35\textwidth]{Fig2c_Zx1.png}} \qquad
\subfloat[Case D]{ \includegraphics[width=0.35\textwidth]{Fig2d_Zz1.png}}
\caption{Four cases under consideration for the photon collections emitted from a single NV centre, which is represented by an electric dipole with moment $\boldsymbol{p}$, implemented in a spherical nanodiamond when the NV centre is located at different positions along the $x$-axis: (a) Case A, $\boldsymbol{p}=(p,0,0)$ and side view; (b) Case B, $\boldsymbol{p}=(p,0,0)$ and top view; (c) Case C, $\boldsymbol{p}=(0,0,p)$ and side view; and (d) Case D, $\boldsymbol{p}=(0,0,p)$ and top view.} \label{Fig:Cases}
\end{figure}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig3a_XPowerNrmlx_010nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig3b_ZPowerNrmlz_010nm.png}} \\
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig3c_a010_CaseAnrml.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig3d_a010_CaseDnrml.png}}
\caption{The overall photon counts [(a) and (b)] emitted from a NV centre embedded in a nanodiamond with radius of $a=10$~nm when the NV colour centre is placed from the left to the right of the nanodiamond, and the normalised photon counts [(c) and (d)] at $x_d/a = 0$ for Case A and Case D, which almost fully represent the relative intensities of a NV centre in bulk diamond at the low-temperature condition listed in Table~\ref{tab:brancingratios}.} \label{Fig:10nm}
\end{figure}
To demonstrate how the position of the NV centre in a spherical nanodiamond can affect the photon collections in the far field, we locate the NV centre at varying positions along the $x$-axis, $\boldsymbol{x}_d = (x_d,\,0,\,0)$ and $|x_d| = d$. The equivalent electric dipole moment, $\boldsymbol{p}$, can be along either the $x$-axis or the $z$-axis. Together with the two observation positions, the top view and the side view, as shown in Fig.~\ref{Fig:Cases}, we studied four cases: (i) Case A, $\boldsymbol{p}=(p,0,0)$ and side view; (ii) Case B, $\boldsymbol{p}=(p,0,0)$ and top view; (iii) Case C, $\boldsymbol{p}=(0,0,p)$ and side view; and (iv) Case D, $\boldsymbol{p}=(0,0,p)$ and top view. Corresponding to Cases A to D, animations of the overall and normalised photon counts for $a=10$~nm to $a=300$~nm when the NV centre is located from the left to the right of the particle are presented in Supp. Mat. 1 to 4 and Supp. Mat. 5 to 8, respectively. The detailed analysis for different particle sizes is presented below.
We start with the case of a small nanodiamond with a radius of $a=10$~nm. When the particle size is small compared to the wavelength of the light emitted from the NV centre, the position of the NV centre relative to the surface of the diamond particle has an insignificant effect on the photon collection by the optical objective (the pin hole)~\cite{Plakhotnik2018}, as displayed in Fig.~\ref{Fig:10nm}. In this figure, we only show the overall and normalised electromagnetic intensity profiles for Case A and Case D as a function of the number of de-exciting phonons, which almost fully represent the relative intensities of a NV centre in bulk diamond at the low-temperature condition listed in Table~\ref{tab:brancingratios}. For Case B and Case C, the profiles are the same as those presented in Fig.~\ref{Fig:10nm} and hence are not shown in the main text.
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig4a_XPowerNrmlx_100nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig4b_XPowerNrmlz_100nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig4c_ZPowerNrmlx_100nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig4d_ZPowerNrmlz_100nm.png}}
\caption{The overall photon counts (colour axis) emitted from a NV centre embedded in a nanodiamond with radius of $a=100$~nm when the NV colour centre is placed from the left to the right of the nanodiamond. For the $x$-oriented dipole [(A) side view and (B) top view] the emission is brightest when $x_d/a \lessapprox 0$. For the $z$-oriented dipole [(C) and (D)], emission is brightest towards the edges and differs between the side (C) and top (D) directions. Note that the colour axes are different in each image.} \label{Fig:I100nm}
\end{figure}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig5a_a100_CaseA.png}}
\subfloat[Case B]{ \includegraphics[width=0.4\textwidth]{Fig5b_a100_CaseB.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.4\textwidth]{Fig5c_a100_CaseC.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig5d_a100_CaseD.png}}
\caption{The overall photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=100$~nm at selected NV centre locations.} \label{Fig:I100nmSpc}
\end{figure}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig6a_XPowerNrmlxnrml_100nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig6b_XPowerNrmlznrml_100nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig6c_ZPowerNrmlxnrml_100nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig6d_ZPowerNrmlznrml_100nm.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=100$~nm when the NV colour centre is placed from the left to the right of the nanodiamond. Compared to the overall electromagnetic field intensity, the effect of the NV centre location is less significant for the normalised intensity for a nanodiamond particle whose radius is $a=100$~nm.} \label{Fig:In100nm}
\end{figure}
When the radius of the diamond particle is 100~nm, the effects of the NV centre location relative to the nanodiamond surface on the emitted photon counts start to appear. For example, when the equivalent electric dipole moment direction is along the $x$-axis, the overall electromagnetic field intensity collected by the objective from the side (Case A) and top (Case B) views is stronger when the dipole is located in the centre of the diamond particle relative to when it is close to the diamond surface, as shown in Fig.~\ref{Fig:I100nm} (a) and (b) as well as in Fig.~\ref{Fig:I100nmSpc} (a) and (b). However, if the dipole moment direction is along the $z$-axis, the overall electromagnetic field intensity is stronger when the dipole is close to the diamond particle surface on the left for the side view, as shown in Fig.~\ref{Fig:I100nm} (c) and~\ref{Fig:I100nmSpc} (c). With the top view for a $z$-oriented NV centre, the overall electromagnetic field intensity profile is symmetric with respect to the centre of the diamond particle. The emission is weaker when the dipole is near the centre of the particle relative to when it is close to the particle surface, as displayed in Fig.~\ref{Fig:I100nm} (d) and~\ref{Fig:I100nmSpc} (d).
\begin{figure}[t]
\centering{}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig7a_ZPowerNrmlznrml_110nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig7b_a110_CaseDnrml.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=110$~nm for Case D (a) when the NV colour centre is placed from the left to the right of the nanodiamond and (b) at selected NV centre locations.} \label{Fig:In110nm}
\end{figure}
In Fig.~\ref{Fig:In100nm}, the normalised electromagnetic field intensity profiles, $I_{S}^{n}$ and $I_{T}^{n}$, are shown for a diamond particle with radius of 100~nm. Compared to the overall electromagnetic field intensity, one obvious difference is that the effects of the NV centre location are less significant for the normalised intensity, which is almost the same as the relative intensity of a NV centre in bulk diamond.
Nevertheless, when the diamond particle size is $a=110$~nm, the effects of the location of the NV centre on the normalised intensity become noticeable. As shown in Fig.~\ref{Fig:In110nm}, when the equivalent electric dipole moment direction is along the $z$-axis, the dominant signal of the normalised intensity is changed from $\lambda=$~708~nm to $\lambda=$~683~nm when the NV centre is close to the particle surface.
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig8a_XPowerNrmlx_200nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig8b_XPowerNrmlz_200nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig8c_ZPowerNrmlx_200nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig8d_ZPowerNrmlz_200nm.png}}
\caption{The overall photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=200$~nm when the NV colour centre is placed from the left to the right of the nanodiamond.} \label{Fig:I200nm}
\end{figure}
\begin{figure}[!ht]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig9a_a200_CaseA.png}}
\subfloat[Case B]{ \includegraphics[width=0.4\textwidth]{Fig9b_a200_CaseB.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.4\textwidth]{Fig9c_a200_CaseC.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig9d_a200_CaseD.png}}
\caption{The overall photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=200$~nm at selected NV colour centre locations.} \label{Fig:I200nmSpc}
\end{figure}
When the radius of the diamond particle is 200~nm, the subtle effects that were predicted for the 100~nm particles become far more pronounced. Large changes in both the overall and relative (normalised) spectra are observed. The spectra of the overall electromagnetic field intensity for the four cases are shown in Fig.~\ref{Fig:I200nm} and~\ref{Fig:I200nmSpc}. When the equivalent electric dipole moment direction is along the $x$-axis, the overall electromagnetic field intensity collected from both the top and side views indicates that the fluorescence signals are much stronger when the NV centre is deep in the nanodiamond particle than when it is close to the particle surface, as shown in Fig.~\ref{Fig:I200nm} (a-b) and~\ref{Fig:I200nmSpc} (a-b). Unlike the symmetric fluorescence profile from the top view, from the side view the nanodiamond is much brighter when the NV centre is located in the left part of the particle ($x_d/a < 0$), as shown in Fig.~\ref{Fig:I200nm} (a) and the comparison between $x_d/a=-0.3$ and $x_d/a=0.5$ in Fig.~\ref{Fig:I200nmSpc} (a). If instead the dipole moment direction is along the $z$-axis, for example Case C and D in Fig.~\ref{Fig:I200nm} (c-d) and Fig.~\ref{Fig:I200nmSpc} (c-d), the emission signals from the NV centre are significant when it is either close to the particle surface or near the centre of the diamond particle.
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig10a_XPowerNrmlxnrml_200nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig10b_XPowerNrmlznrml_200nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig10c_ZPowerNrmlxnrml_200nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig10d_ZPowerNrmlznrml_200nm.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=200$~nm when the NV colour centre is placed from the left to the right of the nanodiamond.} \label{Fig:In200nm}
\end{figure}
\begin{figure}[!ht]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig11a_a200_CaseAnrml.png}}
\subfloat[Case B]{ \includegraphics[width=0.4\textwidth]{Fig11b_a200_CaseBnrml.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.4\textwidth]{Fig11c_a200_CaseCnrml.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig11d_a200_CaseDnrml.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=200$~nm at selected NV colour centre locations.} \label{Fig:In200nmSpc}
\end{figure}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig12a_XPowerNrmlx_300nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig12b_XPowerNrmlz_300nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig12c_ZPowerNrmlx_300nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig12d_ZPowerNrmlz_300nm.png}}
\caption{The overall photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=300$~nm when the NV colour centre is placed from the left to the right of the nanodiamond.} \label{Fig:I300nm}
\end{figure}
The normalised electromagnetic field intensity profiles of a single NV centre embedded in a nanodiamond with radius of 200~nm are shown in Fig.~\ref{Fig:In200nm} and~\ref{Fig:In200nmSpc}. For Case A, when the dipole moment is along the $x$-axis and the photons are collected from the side view, the normalised electromagnetic field intensity closely reproduces the relative intensities of a NV centre in bulk diamond when the NV centre is located in the left part of the nanodiamond particle ($x_d/a<0$). Nevertheless, if the NV centre is placed in the right part of the nanodiamond ($x_d/a>0$), compared to the relative intensities of a NV centre in bulk diamond, the dominant wavelength of the normalised electromagnetic field intensity collected from the side view first changes from $\lambda=$~708~nm to $\lambda=$~736~nm at around $x_d/a = 0.5$ and then changes again to $\lambda=$~659~nm at around $x_d/a = 0.6$, as shown in Fig.~\ref{Fig:In200nm} (a) and~\ref{Fig:In200nmSpc} (a). Also, at $x_d/a = 0.6$, there is a second peak of the normalised electromagnetic field intensity at $\lambda=$~765~nm, while the signal at 708~nm is significantly reduced. Regarding the top view, as presented in Fig.~\ref{Fig:In200nm} (b) and~\ref{Fig:In200nmSpc} (b) for Case B, the normalised electromagnetic field intensity profile is similar to that of a NV centre in bulk diamond as the NV centre is moved from side to side in the particle. If the dipole moment direction is $z$-oriented, both the side and top views show that the emission signal is enhanced significantly at the wavelength $\lambda=$~708~nm when the NV centre is close to the surface of the particle ($|x_d|/a > 0.5$), as shown in Fig.~\ref{Fig:In200nm} (c-d) and~\ref{Fig:In200nmSpc} (c-d). 
When the $z$-oriented NV centre is deep in the particle, from the side view, the dominant number of de-exciting phonons changes from three ($\lambda=$~708~nm) to five ($\lambda=$~765~nm) around $x_d/a = 0.4$, as shown in Fig.~\ref{Fig:In200nm} (c) and~\ref{Fig:In200nmSpc} (c).
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig13a_a300_CaseA.png}}
\subfloat[Case B]{ \includegraphics[width=0.4\textwidth]{Fig13b_a300_CaseB.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.4\textwidth]{Fig13c_a300_CaseC.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig13d_a300_CaseD.png}}
\caption{The overall photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=300$~nm at selected NV colour centre locations.} \label{Fig:I300nmSpc}
\end{figure}
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.45\textwidth]{Fig14a_XPowerNrmlxnrml_300nm.png}}
\subfloat[Case B]{ \includegraphics[width=0.45\textwidth]{Fig14b_XPowerNrmlznrml_300nm.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.45\textwidth]{Fig14c_ZPowerNrmlxnrml_300nm.png}}
\subfloat[Case D]{ \includegraphics[width=0.45\textwidth]{Fig14d_ZPowerNrmlznrml_300nm.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=300$~nm when the NV colour centre is placed from the left to the right of the nanodiamond.} \label{Fig:In300nm}
\end{figure}
As the diamond radius increases to 300~nm, the spectra become richer, because there are numerous opportunities for resonances over the various wavelengths. For a diamond particle with radius of 300~nm, if the equivalent electric dipole moment direction is in the $x$-direction, the overall electromagnetic field intensity at $\lambda=$~708~nm is much stronger when the NV centre is around $x_d/a = -0.5$ in the particle from the side view, as shown in Fig.~\ref{Fig:I300nm} (a) and~\ref{Fig:I300nmSpc} (a) for Case A. From the top view, the field intensity profile is symmetric with respect to the particle centre as the NV centre is moved from one side of the particle to the other, and the strongest fluorescence signal occurs at $|x_d|/a = 0.55$ for $\lambda = 708$~nm, as shown in Fig.~\ref{Fig:I300nm} (b) and~\ref{Fig:I300nmSpc} (b) for Case B. When the dipole moment direction points along the $z$-axis, the highest fluorescence signal occurs at $x_d/a = -0.4$ for $\lambda=$~708~nm from the side view, as shown in Fig.~\ref{Fig:I300nm} (c) and~\ref{Fig:I300nmSpc} (c), while from the top view, as shown in Fig.~\ref{Fig:I300nm} (d), the electromagnetic field intensity profile is symmetric with respect to the particle centre and the strongest signal appears at around $|x_d|/a = 0.4$ for $\lambda=$~708~nm and $\lambda=$~736~nm. Also, for these two cases, when $x_d/a=-0.75$ from the side view and $|x_d|/a = 0.85$ from the top view, there are two peaks of the fluorescence signals at $\lambda=683$~nm and $\lambda=832$~nm, while the original peak signal at $\lambda=708$~nm for a NV centre in bulk diamond is significantly reduced.
\begin{figure}[t]
\centering{}
\subfloat[Case A]{ \includegraphics[width=0.4\textwidth]{Fig15a_a300_CaseAnrml.png}}
\subfloat[Case B]{ \includegraphics[width=0.4\textwidth]{Fig15b_a300_CaseBnrml.png}} \\
\subfloat[Case C]{ \includegraphics[width=0.4\textwidth]{Fig15c_a300_CaseCnrml.png}}
\subfloat[Case D]{ \includegraphics[width=0.4\textwidth]{Fig15d_a300_CaseDnrml.png}}
\caption{The normalised photon counts emitted from a NV centre embedded in a nanodiamond with radius of $a=300$~nm at selected NV colour centre locations.} \label{Fig:In300nmSpc}
\end{figure}
Fig.~\ref{Fig:In300nm} illustrates the normalised fluorescence signals when a NV centre is located at different positions in a diamond particle with radius of 300~nm. For Case A and B, when the electric dipole moment direction is along the $x$-axis, the dominant emission fluorescence is the same as for a NV centre in bulk diamond at $\lambda=$~708~nm when the NV centre is located close to the surface of the diamond particle, as shown in Fig.~\ref{Fig:In300nm} (a-b) and~\ref{Fig:In300nmSpc} (a-b). When the NV centre is located at $x_d/a=0.3$, the dominant emission wavelength changes to $\lambda=$~736~nm, as shown in Fig.~\ref{Fig:In300nmSpc} (a). For Case C and D, with the dipole moment direction in the $z$-direction, when the NV centre is close to the surface of the diamond particle, the strongest emission occurs at $\lambda=$~683~nm, relative to $\lambda=$~708~nm for a NV centre in bulk diamond, as shown in Fig.~\ref{Fig:In300nm} (c) and (d). If the NV centre is located deeper in the diamond particle, at around $x_d/a=0.45$, the dominant emission changes to $\lambda=$~736~nm, as shown in Fig.~\ref{Fig:In300nm} (c-d) and~\ref{Fig:In300nmSpc} (c-d). Also, as shown in Fig.~\ref{Fig:In300nmSpc} (c-d), when $x_d/a=-0.75$ from the side view and $|x_d|/a = 0.8$ from the top view for a $z$-oriented NV centre, there are two peaks of the normalised fluorescence signals at $\lambda=683$~nm and $\lambda=832$~nm, while the original peak signal at $\lambda=708$~nm for a NV centre in bulk diamond is significantly reduced.
When comparing the fluorescence profiles of a 300~nm diamond to those of the smaller diamonds, the emission at longer wavelengths is enhanced in the 300~nm case. This is because a particle radius of 300~nm is comparable to the longer wavelengths once the high refractive index of diamond is taken into consideration, which leads to enhanced cavity effects of the diamond particle for the emission at the higher order lines~\cite{Almokhtar2014}.
\section{Conclusion}
We performed theoretical modelling of the fluorescence profiles of a NV colour centre in a spherical nanodiamond, exploring the combined effects of the relative location and orientation of the NV centre and the nanodiamond size on its emission probabilities. Changes in the emission probabilities lead to variations in the expected fluorescence profile. Our calculations indicate that the effects of the relative location and orientation of the NV centre on the fluorescence signals become noticeable when the particle radius is greater than around $a=110$~nm and much more pronounced for larger particles with $a=200$~nm and $a=300$~nm, with negligible effects below $a=100$~nm. Our results indicate that information about the exact geometry of the NV-diamond system is critical to understand and control the fluorescence profile, which is important for optimising such systems for quantum bio-sensing applications.
\begin{acknowledgments}
Q.S. acknowledges the support from the Australian Research Council grant DE150100169. A.D.G. acknowledges the support from the Australian Research Council grant FT160100357. Q.S., S.L. and A.D.G. acknowledge the Australian Research Council grant CE140100003. S.L. and A.D.G acknowledge the Air Force Office of Scientific Research (FA9550-20-1-0276). This research was partially undertaken with the assistance of resources from the National Computational Infrastructure (NCI Australia), an NCRIS enabled capability supported by the Australian Government (Grant No. LE160100051).
\end{acknowledgments}
\label{sec:intro}
Merging compact binaries have long been thought to be promising sources of
gravitational waves that might be detectable in ground-based (Advanced LIGO,
Advanced VIRGO, KAGRA, etc) \cite{LIGO,VIRGO,KAGRA} or space-based (eLISA)
\cite{eLISA} experiments. With the first observation of a binary black
hole merger (GW150914) by Advanced LIGO \cite{AbboETC16a}, the era of
gravitational wave astronomy has arrived. This first observation emphasizes
what was long understood--that detection of weak signals and physical parameter
estimation will be aided by accurate theoretical predictions. Both the native
theoretical interest and the need to support detection efforts combine to
motivate research in three complementary approaches \cite{Leti14} for
computing merging binaries: numerical relativity \cite{BaumShap10,LehnPret14},
post-Newtonian (PN) theory \cite{Will11,Blan14}, and gravitational self-force
(GSF)/black hole perturbation (BHP) calculations
\cite{DrasHugh05,Bara09,PoisPounVega11, Thor11,Leti14}. The effective-one-body
(EOB) formalism then provides a synthesis, drawing calibration of its
parameters from all three \cite{BuonDamo99,BuonETC09,Damo10,HindETC13,
Damo13,TaraETC14}.
In the past seven years numerous comparisons \cite{Detw08,SagoBaraDetw08,
BaraSago09,BlanETC09,BlanETC10,Fuji12,ShahFrieWhit14,Shah14,
JohnMcDaShahWhit15,AkcaETC15} have been made in the overlap region
(Fig.~\ref{fig:regimes}) between GSF/BHP theory and PN theory. PN
theory is accurate for orbits with wide separations (or low frequencies) but
arbitrary component masses, $m_1$ and $m_2$. The GSF/BHP approach assumes a
small mass ratio $q = m_1/m_2 \ll 1$ (notation typically being $m_1 = \mu$
with black hole mass $m_2 = M$). While requiring small $q$, GSF/BHP
theory has no restriction on orbital separation or field strength. Early BHP
calculations focused on comparing energy fluxes; see for example
\cite{Pois93,CutlETC93,TagoSasa94,TagoNaka94} for waves radiated to infinity
from circular orbits and \cite{PoisSasa95} for flux absorbed at the black hole
horizon. Early calculations of losses from eccentric orbits were made by
\cite{TanaETC93,AposETC93,CutlKennPois94,Tago95}. More recently, starting
with Detweiler \cite{Detw08}, it became possible with GSF theory to compare
\emph{conservative} gauge-invariant quantities \cite{SagoBaraDetw08,
BaraSago09,BlanETC09,BlanETC10,ShahFrieWhit14,DolaETC14b,JohnMcDaShahWhit15,
BiniDamoGera15,AkcaETC15,HoppKavaOtte15}. With the advent of extreme
high-accuracy GSF calculations \cite{Fuji12,ShahFrieWhit14} focus also
returned to calculating dissipative effects (fluxes), this time to
extraordinarily high PN order \cite{Fuji12,Shah14} for circular orbits. This
paper concerns itself with making similar extraordinarily accurate (200
digits) calculations to probe high PN order energy flux from eccentric orbits.
\begin{figure}
\includegraphics[scale=0.95]{regimes_fig.pdf}
\caption{Regions of binary parameter space in which different formalisms
apply. Post-Newtonian (PN) approximation applies best to binaries with wide
orbital separation (or equivalently low frequency). Black hole perturbation
(BHP) theory is relevant for binaries with small mass ratio $\mu/M$.
Numerical relativity (NR) works best for close binaries with comparable
masses. This paper makes comparisons between PN and BHP results in their
region of mutual overlap.
\label{fig:regimes}}
\end{figure}
The interest in eccentric orbits stems from astrophysical considerations
\cite{AmarETC07,AmarETC14} that indicate extreme-mass-ratio inspirals (EMRIs)
should be born with high eccentricities. Other work \cite{HopmAlex05}
suggests EMRIs will have a distribution peaked about $e = 0.7$ as they enter
the eLISA passband. Less extreme (intermediate) mass ratio inspirals (IMRIs)
may also exist \cite{MillColb04} and might appear as detections in Advanced
LIGO \cite{BrowETC07,AmarETC07}. Whether they exist, and have significant
eccentricities, is an issue for observations to settle. The PN expansion for
eccentric orbits is known through 3PN relative order \cite{ArunETC08a,
ArunETC08b,ArunETC09a,Blan14}. The present paper confirms the accuracy of
that expansion for the energy flux and determines PN eccentricity-dependent
coefficients all the way through 7PN order for multiple orders in an expansion
in $e^2$. The model is improved by developing an understanding of what
eccentricity singular functions to factor out at each PN order. In so doing,
we are able to obtain better convergence and the ability to compute the
flux even as $e \rightarrow 1$. The review by Sasaki and Tagoshi
\cite{SasaTago03} summarized earlier work on fluxes from slightly eccentric
orbits (through $e^2$) and more recently results have been obtained
\cite{SagoFuji15} on fluxes to $e^6$ for 3.5PN and 4PN order.
Our work makes use of the analytic function expansion formalism developed
by Mano, Suzuki, and Takasugi (MST) \cite{ManoSuzuTaka96a,ManoSuzuTaka96b}
with a code written in \emph{Mathematica} (to take advantage of arbitrary
precision functions). The MST formalism expands solutions to the Teukolsky
equation in infinite series of hypergeometric functions. We convert from
solutions to the Teukolsky equation to solutions of the Regge-Wheeler-Zerilli
equations and use techniques found in \cite{HoppEvan10,HoppETC15}. Our use
of MST is similar to that found in Shah, Friedman, and Whiting
\cite{ShahFrieWhit14}, who studied conservative effects, and Shah
\cite{Shah14}, who examined fluxes for circular equatorial orbits on Kerr.
This paper is organized as follows. Those readers interested primarily in new
PN results will find them in Secs.~\ref{sec:preparePN}, \ref{sec:confirmPN},
and \ref{sec:newPN}. Sec.~\ref{sec:preparePN} contains original work in
calculating the 1.5PN, 2.5PN, and 3PN hereditary terms to exceedingly high
order in powers of the eccentricity to facilitate comparisons with
perturbation theory. It includes a subsection, Sec.~\ref{sec:asymptotic},
that uses an asymptotic analysis to guide an understanding of different
eccentricity singular factors that appear in the flux at all PN orders.
In Sec.~\ref{sec:confirmPN} we verify all previously known PN coefficients
(i.e., those through 3PN relative order) in the energy flux from eccentric
binaries at lowest order in the mass ratio. Sec.~\ref{sec:newPN} and
App.~\ref{sec:numericEnh} present our new findings on PN coefficients in the
energy flux from eccentric orbits between 3.5PN and 7PN order. For those
interested in the method, Sec.~\ref{sec:homog} reviews the MST formalism for
analytic function expansions of homogeneous solutions, and describes the
conversion from Teukolsky modes to normalized Regge-Wheeler-Zerilli modes.
Section \ref{sec:InhomogSol} outlines the now-standard procedure of solving
the RWZ source problem with extended homogeneous solutions, though now with
the added technique of spectral source integration \cite{HoppETC15}. Some
details on our numerical procedure, which allows calculations to better than
200 decimal places of accuracy, are given in Sec.~\ref{sec:CodeDetails}. Our
conclusions are drawn in Sec.~\ref{sec:conclusions}.
Throughout this paper we set $c = G = 1$, and use metric signature $(-+++)$
and sign conventions of Misner, Thorne, and Wheeler \cite{MisnThorWhee73}.
Our notation for the RWZ formalism is that of \cite{HoppEvan10}, which derives
from earlier work of Martel and Poisson \cite{MartPois05}.
\section{Analytic expansions for homogeneous solutions}
\label{sec:homog}
This section briefly outlines the MST formalism \cite{ManoSuzuTaka96b} (see
the detailed review by Sasaki and Tagoshi \cite{SasaTago03}) and describes
our conversion to analytic expansions for normalized RWZ modes.
\subsection{The Teukolsky formalism}
The MST approach provides analytic function expansions for general
perturbations of a Kerr black hole. With other future uses in mind, elements
of our code are based on the general MST expansion. However, the present
application is focused solely on eccentric motion in a Schwarzschild
background and thus in our discussion below we simply adopt the $a = 0$ limit
on black hole spin from the outset. The MST method describes gravitational
perturbations in the Teukolsky formalism \cite{Teuk73} using the
Newman-Penrose scalar
$\psi_4 = - C_{\alpha\beta\gamma\delta} n^{\alpha} \bar{m}^{\beta}
n^{\gamma} \bar{m}^{\delta}$ \cite{NewmPenr62,NewmPenr63}. Here
$C_{\alpha\beta\gamma\delta}$ is the Weyl tensor, and its projection is made
on elements of the Kinnersley null tetrad (see \cite{Kinn69,Teuk73} for its
components).
In our application the line element is
\begin{equation}
ds^2 = -f dt^2 + f^{-1} dr^2
+ r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right) ,
\end{equation}
as written in Schwarzschild coordinates, with $f(r) = 1 - 2M/r$. The
Teukolsky equation \cite{Teuk73} with spin-weight $s = -2$ is satisfied
(when $a=0$) by $r^4 \psi_4$, with $\psi_4$ separated into Fourier-harmonic
modes by
\begin{equation}
\psi_4 = r^{-4} \sum_{lm} \, \int \, d\o \, e^{-i\o t}
\, R_{lm\o}(r) \, {}_{-2}Y_{lm}(\th, \varphi) .
\end{equation}
Here ${}_{s}Y_{lm}$ are spin-weighted spherical harmonics. The Teukolsky
equation for $R_{lm\o}$ reduces in our case to the Bardeen-Press equation
\cite{BardPres73,CutlKennPois94}, which away from the source has the
homogeneous form
\begin{equation}
\label{eqn:radial}
\left[ r^2 f \frac{d^2}{dr^2} - 2(r - M) \frac{d}{dr} + U_{l\omega}(r)\right]
\, R_{lm\omega}(r) = 0 ,
\end{equation}
with potential
\begin{equation}
U_{l\omega}(r) = \frac{1}{f}\left[\omega^2 r^2 - 4i\omega (r - 3M)\right]
- (l-1)(l+2) .
\end{equation}
Two independent homogeneous solutions are of interest, which have,
respectively, causal behavior at the horizon, $R^{\rm in}_{lm\omega}$, and at
infinity, $R^{\rm up}_{lm\omega}$,
\begin{align}
\label{eqn:Rin}
\hspace{-1.95ex}
R^{\rm in}_{lm\omega} &=
\begin{cases}
B^{\rm trans}_{lm\omega} r^2 f \, e^{-i \o r_*} \, &
r \rightarrow 2M
\\
B^{\rm ref}_{lm\omega} r^3 \, e^{i\omega r_*} +
\frac{B^{\rm in}_{lm\omega}}{r} \, e^{-i\omega r_*} \, &
r \rightarrow +\infty ,
\end{cases}
\\
\label{eqn:Rup}
\hspace{-1.95ex}
R^{\rm up}_{lm\omega} &=
\begin{cases}
C^{\rm up}_{lm\omega} \, e^{i\o r_*} +
C^{\rm ref}_{lm\omega} r^2 f \, e^{-i\o r_*} \, &
r \rightarrow 2M
\\
C^{\rm trans}_{lm\omega} r^3 \, e^{i\omega r_*} \, &
r \rightarrow +\infty ,
\end{cases}
\end{align}
where $B$ and $C$ are used for incident, reflected, and transmitted
amplitudes. Here $r_*$ is the usual Schwarzschild tortoise coordinate
$r_* = r + 2M \log (r/2M - 1 )$.
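As an illustrative aside (not part of the authors' toolchain), the tortoise coordinate and its numerical inversion, which is needed whenever solutions are tabulated on an $r_*$ grid, can be sketched in a few lines of Python; the Newton step uses $dr_*/dr = 1/f = r/(r-2M)$:

```python
import math

def r_star(r, M=1.0):
    """Schwarzschild tortoise coordinate r_* = r + 2M log(r/2M - 1), for r > 2M."""
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

def r_from_rstar(rs, M=1.0, tol=1e-13):
    """Invert r_*(r) by Newton's method, using dr_*/dr = r/(r - 2M)."""
    r = max(rs, 3.0 * M)  # crude initial guess safely outside the horizon
    for _ in range(100):
        step = (r_star(r, M) - rs) * (r - 2.0 * M) / r  # F(r)/F'(r)
        r -= step
        if abs(step) < tol:
            break
    return r
```

For $M=1$ the point $r=4M$ maps to $r_*=4M$ exactly, since the logarithm vanishes there.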
\subsection{MST analytic function expansions for $R_{lm\o}$}
\label{sec:MST}
The MST formalism makes separate analytic function expansions for the
solutions near the horizon and near infinity. We begin with the near-horizon
solution.
\subsubsection{Near-horizon (inner) expansion}
\label{sec:innerMST}
After factoring out terms that arise from the existence of singular points,
$R^{\rm in}_{lm\omega}$ is represented by an infinite series in hypergeometric
functions
\begin{align}
\label{eqn:Down1}
R_{lm\omega}^{\text{in}} &=
e^{i\epsilon x}(-x)^{2-i\epsilon}
p_\text{in}^{\nu}(x) ,
\\
\label{eqn:DownSeries}
p_\text{in}^\nu(x) &= \sum_{n=-\infty}^{\infty} a_n p_{n+\nu}(x) ,
\end{align}
where $\epsilon = 2M\omega$ and $x = 1 - r/2M$. The functions $p_{n+\nu}(x)$
are an alternate notation for the hypergeometric functions
${}_2F_1(a,b;c;x)$, with the arguments in this case being
\begin{equation}
\label{eqn:DownPDef}
p_{n+\nu}(x) = {}_2F_1(n+\nu+1-i\epsilon,-n-\nu-i\epsilon;3-2i\epsilon;x) .
\end{equation}
The parameter $\nu$ is freely specifiable and referred to as the
\emph{renormalized angular momentum,} a generalization of $l$ to non-integer
(and sometimes complex) values.
The series coefficients $a_n$ satisfy a three-term recurrence relation
\begin{equation}
\label{eqn:recurrence}
\alpha_n^\nu a_{n+1} + \beta_n^\nu a_n + \gamma_n^\nu a_{n-1} = 0 ,
\end{equation}
where $\alpha_n^\nu$, $\beta_n^\nu$, and $\gamma_n^\nu$ depend on $\nu$, $l$,
$m$, and $\epsilon$ (see App.~\ref{sec:solveNu} and
Refs.~\cite{ManoSuzuTaka96b} and \cite{SasaTago03} for details). The
recurrence relation has two linearly-independent solutions, $a_n^{(1)}$ and
$a_n^{(2)}$. Other pairs of solutions, say $a_n^{(1')}$ and $a_n^{(2')}$,
can be obtained by linear transformation. Given the asymptotic form of
$\alpha_n^\nu$, $\beta_n^\nu$, and $\gamma_n^\nu$, it is possible to find
pairs of solutions such that
$\lim_{n\rightarrow +\infty} a_n^{(1)}/a_{n}^{(2)} = 0$ and
$\lim_{n\rightarrow -\infty} a_n^{(1')}/a_{n}^{(2')} = 0$. The two
sequences $a_n^{(1)}$ and $a_n^{(1')}$ are called \emph{minimal} solutions
(while $a_n^{(2)}$ and $a_n^{(2')}$ are \emph{dominant} solutions), but in
general the two sequences will not coincide. This is where the free
parameter $\nu$ comes in. It turns out to be possible to choose $\nu$ such that
a unique minimal solution emerges (up to a multiplicative constant), with
$a_n(\nu)$ uniformly valid for $-\infty < n < \infty$ and with the series
converging. The procedure for finding $\nu$, which depends on frequency,
and then finding $a_n(\nu)$, involves iteratively locating the root of an
equation built from continued fractions and then solving continued-fraction
equations for the coefficients. We give
details in App.~\ref{sec:solveNu}, but refer the reader to \cite{SasaTago03}
for a complete discussion. The expansion for $R_{lm\o}^{\rm in}$ converges
everywhere except $r=\infty$. For the behavior there we need a separate
expansion.
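The distinction between minimal and dominant solutions is the same phenomenon familiar from Bessel's equation, where $J_n$ is the minimal solution and forward recursion is numerically unstable. A toy Python sketch of Miller-type backward recursion illustrates the idea; the Bessel recurrence here is only a stand-in, since the actual $\alpha_n^\nu$, $\beta_n^\nu$, $\gamma_n^\nu$ depend on $\nu$, $l$, $m$, and $\epsilon$:

```python
def bessel_minimal(x, n_top=30):
    """Miller's algorithm: seed the recurrence J_{n-1} = (2n/x) J_n - J_{n+1}
    with an arbitrary tiny value at large n and recurse downward; the dominant
    (Y_n-like) contamination decays, leaving the minimal solution J_n."""
    vals = [0.0] * (n_top + 1)
    j_up, j = 0.0, 1e-30  # arbitrary seed at the top of the range
    vals[n_top] = j
    for n in range(n_top, 0, -1):
        j_up, j = j, (2.0 * n / x) * j - j_up
        vals[n - 1] = j
    # fix the overall scale with the identity J_0 + 2*sum_{k>=1} J_{2k} = 1
    norm = vals[0] + 2.0 * sum(vals[2 * k] for k in range(1, n_top // 2 + 1))
    return [v / norm for v in vals]
```

Running `bessel_minimal(1.0)[0]` reproduces $J_0(1)\approx 0.76519769$ to near machine precision, even though the seed values at large $n$ were arbitrary.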
\subsubsection{Near-infinity (outer) expansion}
\label{sec:outerMST}
After again factoring out terms associated with singular points, an infinite
expansion can be written \cite{ManoSuzuTaka96b,SasaTago03,Leav86} for the
outer solution $R^{\rm up}_{lm\o}$ with outgoing wave dependence,
\begin{align}
\label{eqn:RMinus}
R^{\rm up}_{lm\o} & = 2^\nu e^{-\pi\epsilon}e^{-i\pi(\nu-1)}e^{iz}
z^{\nu+i\epsilon}(z-\epsilon)^{2-i\epsilon} \\
&\times\sum_{n=-\infty}^\infty i^n
\frac{(\nu-1-i\epsilon)_n}{(\nu+3+i\epsilon)_n}
b_n (2z)^n \notag \\
&\qquad\qquad\times
\Psi(n+\nu-1-i\epsilon,2n+2\nu+2;-2iz) . \notag
\end{align}
Here $z = \o r = \epsilon (1 - x)$ is another dimensionless variable,
$(\zeta)_n = \Gamma(\zeta+n)/\Gamma(\zeta)$ is the (rising) Pochhammer
symbol, and $\Psi(a,c;x)$ are irregular confluent hypergeometric functions.
The free parameter $\nu$ has been introduced again as well. The limiting
behavior $ \lim_{|x|\rightarrow\infty}\Psi(a,c;x)\rightarrow x^{-a}$
guarantees the proper asymptotic dependence
$ R^{\rm up}_{lm\o} = C^{\rm trans}_{lm\o} (z/\o)^{3} \,
e^{i(z+\epsilon\log{z})}$.
Substituting the expansion in \eqref{eqn:radial} produces a three-term
recurrence relation for $b_n$. Remarkably, because of the Pochhammer
symbol factors that were introduced in \eqref{eqn:RMinus}, the recurrence
relation for $b_n$ is identical to the previous one \eqref{eqn:recurrence}
for the inner solution. Thus the same value for the renormalized angular
momentum $\nu$ provides a uniform minimal solution for $b_n$, which can
be identified with $a_n$ up to an arbitrary choice of normalization.
\subsubsection{Recurrence relations for homogeneous solutions}
Both the ordinary hypergeometric functions ${}_2F_1(a,b;c;z)$ and the
irregular confluent hypergeometric functions $\Psi(a,b;z)$ admit three term
recurrence relations, which can be used to speed the construction of
solutions \cite{Shah14b}. The hypergeometric functions
$p_{n+\nu}$ in the inner solution \eqref{eqn:DownSeries} satisfy
\begin{align}
\label{eqn:DowngoingRecurrence}
&p_{n+\nu} = -\frac{2n+2\nu-1}{(n+\nu-1)(2+n+\nu-i\epsilon)(n+\nu-i\epsilon)}
\notag\\
&\hspace{3ex}\times\left[(n+\nu)(n+\nu-1)(2x-1)
+(2i+\epsilon)\epsilon\right]p_{n+\nu-1}\notag\\
&\hspace{3ex}-\frac{(n+\nu)(n+\nu+i\epsilon-3)(n+\nu+i\epsilon-1)}{(n+\nu-1)
(2+n+\nu-i\epsilon)(n+\nu-i\epsilon)} p_{n+\nu-2}.\notag \\
\end{align}
Defining by analogy with Eqn.~\eqref{eqn:DownPDef}
\begin{equation}
q_{n+\nu} \equiv \Psi(n+\nu-i\epsilon -1,2n+2\nu+2;-2iz) ,
\end{equation}
the irregular confluent hypergeometric functions satisfy
\begin{align}
\label{eqn:OutgoingRecurrence}
& q_{n+\nu} = \frac{(2n+2\nu-1)}{(n+\nu-1)(n+\nu-i\epsilon - 2)z^2} \notag \\
&\hspace{3ex} \times\left[2n^2+2\nu(\nu-1)+n(4\nu-2)-(2+i\epsilon)z\right]
q_{n+\nu-1} \notag \\
&\hspace{3ex}
+\frac{(n+\nu)(1+n+\nu+i\epsilon)}{(n+\nu-1)(n+\nu-i\epsilon-2)z^2}
q_{n+\nu-2}.
\end{align}
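For illustration, the modes $p_{n+\nu}(x)$ of Eq.~\eqref{eqn:DownPDef} can also be evaluated directly from the defining Taylor series of ${}_2F_1$ for $|x|<1$; a minimal Python sketch (an independent cross-check we suggest here, not a procedure from the paper) is:

```python
def hyp2f1(a, b, c, x, tol=1e-15, max_terms=1000):
    """Gauss hypergeometric 2F1(a,b;c;x) by its defining series (|x| < 1);
    complex parameters are allowed, as needed for the MST modes."""
    term = complex(1.0)
    total = complex(1.0)
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def p_mode(n, nu, eps, x):
    """p_{n+nu}(x) = 2F1(n+nu+1 - i*eps, -n-nu - i*eps; 3 - 2i*eps; x)."""
    return hyp2f1(n + nu + 1 - 1j * eps, -n - nu - 1j * eps, 3 - 2j * eps, x)
```

Direct summation of each mode is slow compared with the three-term recurrences, which is precisely why the recurrence relations above are used to accelerate the construction of the solutions.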
\subsection{Mapping to RWZ master functions}
In this work we map the analytic function expansions of $R_{lm\o}$ to ones
for the RWZ master functions. The reason stems from having pre-existing
coding infrastructure for solving RWZ problems \cite{HoppEvan10} and the
ease in reading off gravitational wave fluxes. The Detweiler-Chandrasekhar
transformation \cite{Chan75,ChanDetw75,Chan83} maps $R_{lm\o}$ to a solution
$X^{\rm RW}_{lm\omega}$ of the Regge-Wheeler equation via
\begin{equation}
\label{eqn:RWtrans1}
X^{\rm RW}_{lm\omega} = r^3
\left(\frac{d}{dr}-\frac{i\omega}{f}\right)
\left(\frac{d}{dr}-\frac{i\omega}{f}\right)
\frac{R_{lm\omega}}{r^2} .
\end{equation}
For odd parity ($l + m =$ odd) this completes the transformation. For even
parity, we make a second transformation \cite{Bern07} to map through to a
solution $X_{lm\omega}^Z$ of the Zerilli equation
\begin{align}
\label{eqn:RWtrans2}
X_{lm\omega}^{Z,\pm} &=
\frac{1}{\lambda (\lambda + 1) \pm 3 i \o M}
\Bigg\{
3 M f\frac{dX_{lm\o}^{\rm{RW},\pm}}{dr} \\
& \hspace{7ex}+
\left[\lambda(\lambda+1)+\frac{9 M^2 f}{r(\lambda r+3M)}\right]
X_{lm\o}^{\rm{RW},\pm}
\Bigg\} . \notag
\end{align}
Here $\lambda=(l-1)(l+2)/2$. We have introduced above the $\pm$ notation to
distinguish outer ($+$) and inner ($-$) solutions--a notation that will be
used further in Sec.~\ref{sec:TDmasterEq}. [When unambiguous we often use
$X_{lm\o}$ to indicate either the RW function (with $l+m=$ odd) or Zerilli
function (with $l+m=$ even).] The RWZ functions satisfy the
homogeneous form of \eqref{eqn:masterInhomogFD} below with their respective
parity-dependent potentials $V_l$.
The normalization of $R_{lm\o}$ in the MST formalism is set by adopting some
starting value, say $a_0 = 1$, in solving the recurrence relation for $a_n$.
This guarantees that the RWZ functions will not be unit-normalized at infinity
or on the horizon, but instead will have some $A^{\pm}_{lm\o}$ such that
$X^{\pm}_{lm\o} \sim A^{\pm}_{lm\o} \, e^{\pm i \o r_*}$. We find
it advantageous though to construct unit-normalized modes
$\hat{X}^{\pm}_{lm\o} \sim \exp(\pm i \o r_*)$ \cite{HoppEvan10}.
To do so we first determine the initial amplitudes $A^{\pm}_{lm\o}$ by
passing the MST expansions in
Eqns.~\eqref{eqn:Down1}, \eqref{eqn:DownSeries}, and \eqref{eqn:RMinus}
through the transformation in Eqn.~\eqref{eqn:RWtrans1} (and additionally
\eqref{eqn:RWtrans2} as required) to find
\begin{widetext}
\begin{align}
\begin{split}
\label{eqn:RWasymp1}
A_{lm\omega}^{\text{RW},+}
&=
-2^{-1+4 i M \omega} i\omega (M \omega)^{2 i M \omega}
e^{-\pi M \omega-\frac{1}{2} i \pi \nu} \\
&\times\sum_{n=-\infty}^\infty (-1)^n \bigg\{(\nu-1) \nu (\nu+1) (\nu+2)
+ 4 i M \left[2 \nu (\nu+1)-7\right] \omega +32 i M^3 \omega^3
+ 400 M^4 \omega^4\\
& \hspace{10ex} + 20 M^2 \left[2 \nu (\nu+1)-1\right] \omega^2
+ 2 (2 \nu+1) \Big[4 M \omega (5 M \omega+i) + \nu(\nu+1)-1\Big]n \\
&\hspace{10ex} +\Big[8 M \omega (5 M \omega+i)
+ 6 \nu (\nu+1)-1\Big]n^2
+ (4 \nu+2)n^3 + n^4\bigg\}
\frac{(\nu-2 i M \omega-1)_n}{(\nu+2 i M \omega+3)_n}a_n,
\end{split}
\end{align}
\begin{align}
\begin{split}
\label{eqn:RWasymp2}
A_{lm\omega}^{Z,+}
&=
-\frac{ 2^{-1+4 i M \omega}i\omega (M\omega)^{2 i M \omega}
\left[(l-1) l (l+1) (l+2)+12 i M \omega\right]}{(l-1) l (l+1) (l+2)-12 i M
\omega} \\
&\times\sum_{n=-\infty}^\infty
e^{\frac{1}{2} i \pi (2 i M \omega+2 n-\nu)} \Big[2 M \omega (7 i-6 M \omega)
+n(n+2\nu+1)+\nu(\nu+1)\Big] \\
&
\hspace{10ex}
\times \Big\{-2 \left[1+3 M \omega (2 M \omega+i)\right]
+n(n+2\nu+1)+\nu(\nu+1)\Big\}
\frac{(\nu-2 i M \omega-1)_n}{(\nu+2 i M \omega+3)_n}a_n.
\end{split}
\end{align}
\begin{align}
\begin{split}
\label{eqn:RWasymp3}
& A_{lm\omega}^{\text{RW},-}
= A_{lm\omega}^{Z,-}
= -\frac{1}{M}e^{2 i M \omega} (2 M \omega+i) (4 M \omega+i)
\sum_{n=-\infty}^{\infty} a_n .
\end{split}
\end{align}
\end{widetext}
These amplitudes are then used to renormalize the initial $a_n$.
\section{Solution to the perturbation equations using MST and SSI}
\label{sec:InhomogSol}
We briefly review here the procedure for solving the perturbation equations
for eccentric orbits on a Schwarzschild background using MST and a recently
developed spectral source integration (SSI) \cite{HoppETC15} scheme, both of
which are needed for high accuracy calculations.
\subsection{Bound orbits on a Schwarzschild background}
\label{sec:orbits}
We consider generic bound motion between a small mass $\mu$, treated as a
point particle, and a Schwarzschild black hole of mass $M$, with
$\mu/M \ll 1$. Schwarzschild coordinates $x^{\mu} = (t,r,\theta, \varphi )$
are used. The trajectory of the particle is given by
$x_p^{\a}(\tau) =\left[t_p(\tau),r_p(\tau), \pi/2, \varphi_p(\tau)\right]$ in
terms of proper time $\tau$ (or some other suitable curve parameter) and
the motion is assumed, without loss of generality, to be confined to the
equatorial plane. Throughout this paper, a subscript $p$ denotes evaluation
at the particle location. The four-velocity is
$u^{\alpha} = dx_p^{\alpha}/d\tau$.
At zeroth order the motion is geodesic in the static background and the
equations of motion have as constants the specific energy
$\mathcal{E} = -u_t$ and specific angular momentum $\mathcal{L} = u_\varphi$.
The four-velocity becomes
\begin{equation}
\label{eqn:four_velocity}
u^\a = \l \frac{{\mathcal{E}}}{f_{p}}, u^r, 0, \frac{{\mathcal{L}}}{r_p^2} \r .
\end{equation}
The constraint on the four-velocity leads to
\begin{equation}
\label{eqn:rpDots}
\dot r_p^2(t) = f_{p}^2 \left[ 1 - \frac{f_p}{{\mathcal{E}}^2}
\l 1 + \frac{{\mathcal{L}}^2}{r_p^2} \r \right] ,
\end{equation}
where dot is the derivative with respect to $t$. Bound orbits
have ${\mathcal{E}} < 1$ and, to have two turning points, must at least have
${\mathcal{L}} > 2 \sqrt{3} M$. In this case, the pericentric radius,
$r_{\rm min}$, and apocentric radius, $r_{\rm max}$, serve as alternative
parameters to ${\mathcal E}$ and ${\mathcal L}$, and also give rise to
definitions of the (dimensionless) semi-latus rectum $p$ and the
eccentricity $e$ (see \cite{CutlKennPois94,BaraSago10}). These various
parameters are related by
\begin{equation}
\label{eqn:defeandp}
{\mathcal{E}}^2 = \frac{(p-2)^2-4e^2}{p(p-3-e^2)},
\quad
{\mathcal{L}}^2 = \frac{p^2 M^2}{p-3-e^2} ,
\end{equation}
and $r_{\rm max} = pM/(1-e)$ and $r_{\rm min} = pM/(1+e)$. The requirement
of two turning points also sets another inequality, $p > 6 + 2 e$, with the
boundary $p = 6 + 2 e$ of these innermost stable orbits being the
separatrix \cite{CutlKennPois94}.
Integration of the orbit is described in terms of an alternate curve
parameter, the relativistic anomaly $\chi$, that gives the radial position
a Keplerian-appearing form \cite{Darw59}
\begin{equation}
r_p \l \chi \r = \frac{pM}{1+ e \cos \chi} .
\end{equation}
One radial libration makes a change $\Delta\chi = 2\pi$. The orbital
equations then have the form
\begin{align}
\label{eqn:darwinEqns}
\frac{dt_p}{d \chi} &= \frac{r_p \l \chi \r^2}{M (p - 2 - 2 e \cos \chi)}
\left[\frac{(p-2)^2 -4 e^2}{p -6 -2 e \cos \chi} \right]^{1/2} ,
\nonumber
\\
\frac{d \varphi_p}{d\chi}
&= \left[\frac{p}{p - 6 - 2 e \cos \chi}\right]^{1/2} ,
\\
\frac{d\tau_p}{d \chi} &= \frac{M p^{3/2}}{(1 + e \cos \chi)^2}
\left[ \frac{p - 3 - e^2}{p - 6 - 2 e \cos \chi} \right]^{1/2} ,
\nonumber
\end{align}
and $\chi$ serves to remove singularities in the differential equations
at the radial turning points \cite{CutlKennPois94}. Integrating the first of
these equations provides the fundamental frequency and period of radial motion
\begin{equation}
\label{eqn:O_r}
\O_r \equiv \frac{2 \pi}{T_r},
\quad \quad
T_r \equiv \int_{0}^{2 \pi} \l \frac{dt_p}{d\chi} \r d \chi.
\end{equation}
There is an analytic solution to the second equation for the azimuthal
advance, which is especially useful in our present application,
\begin{equation}
\varphi_p(\chi) = \left(\frac{4 p}{p - 6 - 2 e}\right)^{1/2} \,
F\left(\frac{\chi}{2} \, \middle| \, -\frac{4 e}{p - 6 - 2 e} \right) .
\end{equation}
Here $F(x|m)$ is the incomplete elliptic integral of the first kind
\cite{GradETC07}. The average of the angular frequency $d \varphi_p / dt$
is found by integrating over a complete radial oscillation
\begin{equation}
\label{eqn:O_phi}
\O_\varphi = \frac{4}{T_r} \left(\frac{p}{p - 6 - 2 e}\right)^{1/2} \,
K\left(-\frac{4 e}{p - 6 - 2 e} \right) ,
\end{equation}
where $K(m)$ is the complete elliptic integral of the first kind
\cite{GradETC07}. Relativistic orbits will have $\Omega_r \ne \Omega_{\varphi}$,
but with the two approaching each other in the Newtonian limit.
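The orbit integration above is simple enough to verify numerically. The
following sketch (our own illustration, in units with $M = 1$, and not part
of the arbitrary-precision code described later) evaluates $T_r$, $\O_r$, and
$\O_\varphi$ by direct quadrature of Eqns.~\eqref{eqn:darwinEqns} and
\eqref{eqn:O_r}; for $e = 0$ it reproduces the circular-orbit values
$\O_\varphi = p^{-3/2}$ and $\O_r = \O_\varphi \sqrt{1 - 6/p}$.

```python
# Illustrative sketch (not the paper's code): fundamental frequencies of a
# bound Schwarzschild geodesic from the Darwin parametrization, with M = 1.
import math
from scipy.integrate import quad

def dt_dchi(chi, p, e):
    """dt_p/dchi from the Darwin-parametrized orbit equations."""
    r = p / (1.0 + e * math.cos(chi))
    return (r**2 / (p - 2.0 - 2.0 * e * math.cos(chi))
            * math.sqrt(((p - 2.0)**2 - 4.0 * e**2)
                        / (p - 6.0 - 2.0 * e * math.cos(chi))))

def dphi_dchi(chi, p, e):
    """dphi_p/dchi from the Darwin-parametrized orbit equations."""
    return math.sqrt(p / (p - 6.0 - 2.0 * e * math.cos(chi)))

def frequencies(p, e):
    """Return (Omega_r, Omega_phi) for a bound orbit with p > 6 + 2e."""
    T_r, _ = quad(dt_dchi, 0.0, 2.0 * math.pi, args=(p, e))
    dphi, _ = quad(dphi_dchi, 0.0, 2.0 * math.pi, args=(p, e))
    return 2.0 * math.pi / T_r, dphi / T_r
```

For $(p, e) = (10, 0)$ this returns $\O_r = 0.02$ and
$\O_\varphi = 10^{-3/2}$, and generically $\O_r < \O_\varphi$, reflecting
relativistic periastron advance.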
\subsection{Solutions to the TD master equation}
\label{sec:TDmasterEq}
This paper draws upon previous work \cite{HoppEvan10} in solving the RWZ
equations, though here we solve the homogeneous equations using the MST
analytic function expansions discussed in Sec.~\ref{sec:homog}. A goal is to
find solutions to the inhomogeneous time domain (TD) master equations
\begin{equation}
\label{eqn:masterEqTD}
\left[-\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial r_*^2} - V_l (r) \right]
\Psi_{lm}(t,r) = S_{lm}(t,r) .
\end{equation}
The parity-dependent source terms $S_{lm}$ arise from decomposing the
stress-energy tensor of a point particle in spherical harmonics. They are
found to take the form
\begin{align}
\label{eqn:sourceTD}
S_{lm} = G_{lm}(t) \, \delta[r - r_p(t)] + F_{lm}(t) \,
\delta'[r - r_p(t)],
\end{align}
where $G_{lm}(t)$ and $F_{lm}(t)$ are smooth functions. Because of the
periodic radial motion, both $\Psi_{lm}$ and $S_{lm}$ can be written as
Fourier series
\begin{align}
\label{eqn:psiSeries}
\Psi_{lm}(t,r) &= \sum_{n=-\infty}^\infty X_{lmn}(r) \, e^{-i \o t} , \\
S_{lm}(t,r) &= \sum_{n=-\infty}^\infty Z_{lmn}(r) \, e^{-i \o t},
\label{eqn:Slm}
\end{align}
where $\o \equiv \omega_{mn} = m\Omega_\varphi + n\Omega_r$ reflects the
bi-periodicity of the source motion. The inverses are
\begin{align}
X_{l mn}(r) &= \frac{1}{T_r} \int_0^{T_r} dt \ \Psi_{l m}(t,r)
\, e^{i \o t},
\\
Z_{l mn}(r) &= \frac{1}{T_r} \int_0^{T_r} dt \ S_{l m}(t,r)
\, e^{i \o t} .
\label{eqn:Zlmn}
\end{align}
Inserting these series in Eqn.~\eqref{eqn:masterEqTD} reduces the TD master
equation to a set of inhomogeneous ordinary differential equations (ODEs)
tagged additionally by harmonic $n$,
\begin{equation}
\label{eqn:masterInhomogFD}
\left[\frac{d^2}{dr_*^2} +\omega^2 -V_l (r) \right]
X_{lmn}(r) = Z_{lmn} (r) .
\end{equation}
The homogeneous version of this equation is solved by MST expansions. The
unit normalized solutions at infinity (up) are $\hat{X}^+_{lmn}$ while the
horizon-side (in) solutions are $\hat{X}^-_{lmn}$. These independent
solutions provide a Green function, from which the particular solution to
Eqn.~\eqref{eqn:masterInhomogFD} is derived
\begin{equation}
\label{eqn:FDInhomog}
X_{lmn} (r) = c^+_{lmn}(r) \, \hat{X}^+_{lmn}(r)
+ c^-_{lmn}(r) \, \hat{X}^-_{lmn}(r) .
\end{equation}
See Ref.~\cite{HoppEvan10} for further details. However, Gibbs behavior in
the Fourier series makes reconstruction of $\Psi_{lm}$ in this fashion
problematic. Instead, the now standard approach is to derive the TD solution
using the method of extended homogeneous solutions (EHS) \cite{BaraOriSago08}.
We form first the frequency domain (FD) EHS
\begin{equation}
\label{eqn:FD_EHS}
X^\pm_{lmn} (r) \equiv C^{\pm}_{lmn} \hat X_{lmn}^\pm (r), \quad \quad r > 2M ,
\end{equation}
where the normalization coefficients, $C^+_{lmn} = c^+_{lmn}(r_{\rm max})$ and
$C^-_{lmn} = c^-_{lmn}(r_{\rm min})$, are discussed in the next subsection.
From these solutions we define the TD EHS,
\begin{equation}
\label{eqn:TD_EHS}
\Psi^\pm_{lm} (t,r)
\equiv \sum_n X^\pm_{lmn} (r) \, e^{-i \o t}, \quad \quad r > 2M .
\end{equation}
Then the particular solution to Eqn.~\eqref{eqn:masterEqTD} is formed by
abutting the two TD EHS at the particle's location,
\begin{align}
\begin{split}
\Psi_{lm} (t,r) &= \Psi^{+}_{lm}(t,r) \theta \left[ r - r_p(t) \right] \\
& \hspace{10ex}
+
\Psi^{-}_{lm}(t,r) \theta \left[ r_p(t) - r \right] .
\end{split}
\end{align}
\subsection{Normalization coefficients}
\label{sec:NormCoeff}
The following integral must be evaluated to obtain the normalization
coefficients $C^\pm_{lmn}$ \cite{HoppEvan10}
\begin{align}
\label{eqn:EHSC}
C_{lmn}^\pm
&= \frac{1}{W_{lmn} T_r} \int_0^{T_r}
\Bigg[
\frac{1}{f_{p}} \hat X^\mp_{lmn}
G_{lm} \hspace{5ex} \\
&\hspace{5ex}
+ \l \frac{2M}{r_{p}^2 f_{p}^{2}} \hat X^\mp_{lmn}
- \frac{1}{f_{p}}
\frac{d \hat X^\mp_{lmn}}{dr} \r F_{lm}
\Bigg] e^{i \o t} \, dt, \notag
\end{align}
where $W_{lmn}$ is the Wronskian
\begin{equation}
W_{lmn} = f \hat{X}^-_{lmn} \frac{d \hat{X}^+_{lmn}}{dr}
- f \hat{X}^+_{lmn} \frac{d \hat{X}^-_{lmn}}{dr} .
\end{equation}
The integral in \eqref{eqn:EHSC} is often computed using Runge-Kutta (or
similar) numerical integration, which is algebraically convergent. As shown
in \cite{HoppETC15} when MST expansions are used with arbitrary-precision
algorithms to obtain high numerical accuracy (i.e., much higher than double
precision), algebraically-convergent integration becomes prohibitively
expensive. We recently developed the SSI scheme, which provides exponentially
convergent source integrations, in order to make possible MST calculations of
eccentric-orbit EMRIs with arbitrary precision. In the present paper our
calculations of energy fluxes have up to 200 decimal places of accuracy.
The central idea is that, since the source terms $G_{lm}(t)$ and $F_{lm}(t)$
and the modes $X^{\pm}_{lmn}(r)$ are smooth functions, the integrand in
\eqref{eqn:EHSC} can be replaced by a sum over equally-spaced samples
\begin{equation}
\label{eqn:CSum}
C^{\pm}_{lmn} = \frac{1}{N W_{lmn}} \sum_{k=0}^{N-1} \bar{E}^{\pm}_{lmn}(t_k)
\, e^{i n\Omega_r t_k} .
\end{equation}
In this expression $\bar{E}^{\pm}_{lmn}$ is the following $T_r$-periodic
smooth function of time
\begin{align}
&\bar{E}^{\pm}_{lmn}(t) =
\frac{\bar{G}_{lm}(t)}{f_p} \, \hat{X}^{\mp}_{lmn}(r_p(t)) \\
&
\hspace{5ex}+
\frac{2 M}{r_p^2} \frac{\bar{F}_{lm}(t)}{f_p^2} \,\hat{X}^{\mp}_{lmn}(r_p(t))
-
\frac{\bar{F}_{lm}(t)}{f_p} \, \partial_r\hat{X}^{\mp}_{lmn}(r_p(t)) . \notag
\end{align}
It is evaluated at $N$ times that are evenly spaced between $0$ and $T_r$, i.e.,
$t_k \equiv k T_r / N$. In this expression $\bar{G}_{lm}$ is related to the
term in Eqn.~\eqref{eqn:sourceTD} by
$\bar{G}_{lm}=G_{lm} e^{im\Omega_{\varphi} t}$ (likewise for $\bar{F}_{lm}$).
It is then found that the sum in \eqref{eqn:CSum} exponentially converges to
the integral in \eqref{eqn:EHSC} as the sample size $N$ increases.
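This exponential convergence is just the spectral accuracy of equally spaced
(trapezoid-rule) sums for smooth periodic integrands. A minimal toy example
(ours, unrelated to the actual source terms) shows the effect, using
$\frac{1}{2\pi}\int_0^{2\pi} e^{\cos t}\, dt = I_0(1)$.

```python
# Toy model (ours) of the SSI convergence claim: an N-point equally spaced
# sum of a smooth periodic function converges exponentially to its average.
import math
from scipy.special import i0

def periodic_average(f, period, N):
    """N-point equally spaced (trapezoid-rule) estimate of the mean of f."""
    return sum(f(period * k / N) for k in range(N)) / N

f = lambda t: math.exp(math.cos(t))
exact = i0(1.0)  # (1/2pi) * integral_0^{2pi} exp(cos t) dt = I_0(1)
errors = [abs(periodic_average(f, 2.0 * math.pi, N) - exact)
          for N in (4, 8, 16)]
```

Each doubling of $N$ roughly squares the accuracy; here $N = 16$ already
saturates double precision, in contrast to the algebraic convergence of
Runge-Kutta-style quadrature.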
One further improvement was found. The curve parameter in \eqref{eqn:EHSC}
can be arbitrarily changed and the sum \eqref{eqn:CSum} is thus replaced by
one with even sampling in the new parameter. Switching from $t$ to $\chi$
has the effect of smoothing out the source motion, and as a result the sum
\begin{align}
\label{eqn:CfromEbar}
C^{\pm}_{lmn} &= \frac{\O_r}{N W_{lmn}} \sum_k
\frac{dt_p}{d\chi}
\bar{E}^{\pm}_{lmn}(t_k)
\, e^{i n\Omega_r t_k} ,
\end{align}
evenly sampled in $\chi$ ($\chi_k = 2 \pi k /N$ with $t_k = t_p(\chi_k)$)
converges at a substantially faster rate. This is particularly advantageous
for computing normalizations for high eccentricity orbits.
Once the $C^{\pm}_{lmn}$ are determined, the energy fluxes at infinity can
be calculated using
\begin{align}
\label{eqn:fluxNumeric}
\left\langle \frac{dE}{dt} \right\rangle =
\sum_{lmn}\frac{\o^2}{64\pi}\frac{(l+2)!}{(l-2)!}
|C^{+}_{lmn}|^2 ,
\end{align}
given our initial unit normalization of the modes $\hat{X}_{lmn}^{\pm}$.
We return to this subject and specific algorithmic details in
Sec.~\ref{sec:CodeDetails}.
\medskip
\begin{widetext}
\section{Preparing the PN expansion for comparison with perturbation
theory}
\label{sec:preparePN}
The formalism we briefly discussed in the preceding sections, along with
the technique in \cite{HoppETC15}, was used to build a code for computing
energy fluxes at infinity from eccentric orbits to accuracies as high as 200
decimal places, and to then confirm previous work in PN theory and to discover
new high PN order terms. In this section we make further preparation for that
comparison with PN theory. The average energy and angular momentum fluxes from
an eccentric binary are known to 3PN relative order
\cite{ArunETC08a,ArunETC08b,ArunETC09a} (see also the review by Blanchet
\cite{Blan14}). The expressions are given in terms of three parameters;
e.g., the gauge-invariant post-Newtonian compactness parameter
$x\equiv\left[(m_1 + m_2) \Omega_\varphi\right]^{2/3}$, the eccentricity, and the
symmetric mass ratio $\nu=m_1 m_2/(m_1+m_2)^2 \simeq\mu/M$ (not to be confused
with our earlier use of $\nu$ for the renormalized angular momentum parameter).
In this paper we ignore contributions to the flux that are higher order in the
mass ratio than $\mathcal{O}(\nu^2)$, as these would require a second-order
GSF calculation to reach. The more appropriate compactness parameter in the
extreme mass ratio limit is $y\equiv\left(M \Omega_\varphi\right)^{2/3}$, with
$y = x (1+ m_1/m_2)^{-2/3}$ \cite{Blan14}. Composed of a set of
eccentricity-dependent coefficients, the energy flux through 3PN order has
the form
\begin{align}
\label{eqn:energyflux}
\mathcal{F}_{\rm 3PN} =
\left\langle \frac{dE}{dt} \right\rangle_{\rm 3PN} =
\frac{32}{5} \left(\frac{\mu}{M}\right)^2 y^5 \,
\Bigl(\mathcal{I}_0 + y\,\mathcal{I}_1 + y^{3/2}\,\mathcal{K}_{3/2}
+ y^2\,\mathcal{I}_2
+ y^{5/2}\,\mathcal{K}_{5/2}
+ y^3\,\mathcal{I}_3 + y^3\,\mathcal{K}_{3} \Bigr) .
\end{align}
The $\mathcal{I}_n$ are instantaneous flux functions [of eccentricity and
(potentially) $\log(y)$] that have known closed-form expressions (summarized
below). The $\mathcal{K}_n$ coefficients are hereditary, or tail,
contributions (without apparently closed forms). The purpose of this
section is to derive new expansions for these hereditary terms and to
understand more generally the structure of all of the eccentricity dependent
coefficients, up to 3PN order and beyond.
\subsection{Known instantaneous energy flux terms}
For later reference and use, we list here the instantaneous energy flux
functions, expressed in modified harmonic (MH) gauge
\cite{ArunETC08a,ArunETC09a,Blan14} and in terms of $e_t$, a particular
definition of eccentricity (\emph{time eccentricity}) used in the
quasi-Keplerian (QK) representation \cite{DamoDeru85} of the orbit (see also
\cite{KoniGopa05,KoniGopa06,ArunETC08a,ArunETC08b,ArunETC09a,GopaScha11,Blan14})
\begin{align}
\label{eqn:edot0}
\mathcal{I}_0 &= \frac{1}{(1-e_t^2)^{7/2}}
{\left(1+\frac{73}{24}~e_t^2 + \frac{37}{96}~e_t^4\right)} ,
\\
\label{eqn:edot1}
\mathcal{I}_1 &=
\frac{1}{(1-e_t^2)^{9/2}}
{\left( -\frac{1247}{336} + \frac{10475}{672} e_t^2 + \frac{10043}{384} e_t^4
+ \frac{2179}{1792} e_t^6 \right)} ,
\\
\begin{split}
\label{eqn:edot2}
\mathcal{I}_2 &=
\frac{1}{(1-e_t^2)^{11/2}}
{\left(-\frac{203471}{9072} - \frac{3807197}{18144} e_t^2
- \frac{268447}{24192} e_t^4 + \frac{1307105}{16128} e_t^6
+ \frac{86567}{64512} e_t^8 \right)}
\\
&\hspace{50ex} + \frac{1}{(1-e_t^2)^{5}} \left(\frac{35}{2}
+ \frac{6425}{48} e_t^2
+ \frac{5065}{64} e_t^4 + \frac{185}{96} e_t^6 \right) ,
\end{split}
\end{align}
\begin{align}
\begin{split}
\label{eqn:edot3}
\mathcal{I}_3 &= \frac{1}{(1-e_t^2)^{13/2}}
\left(\frac{2193295679}{9979200} + \frac{20506331429}{19958400} e_t^2
-\frac{3611354071}{13305600} e_t^4
\right.
\\
\biggl.
&\hspace{45ex}+ \frac{4786812253}{26611200} e_t^6
+ \frac{21505140101}{141926400} e_t^8 - \frac{8977637}{11354112} e_t^{10}
\biggr)
\\&
+ \frac{1}{(1-e_t^2)^{6}} \left(-\frac{14047483}{151200}
+ \frac{36863231}{100800} e_t^2 + \frac{759524951}{403200} e_t^4
+ \frac{1399661203}{2419200} e_t^6 + \frac{185}{48} e_t^8 \right)
\\&
+
\frac{1712}{105}
\log\left[\frac{y}{y_0}
\frac{1+\sqrt{1-e_t^2}}{2(1-e_t^2)}\right] F(e_t),
\end{split}
\end{align}
where the function $F(e_t)$ in Eqn.~\eqref{eqn:edot3}
has the following closed-form \cite{ArunETC08a}
\begin{equation}
F(e_t) = \frac{1}{(1-e_t^2)^{13/2}}
\bigg( 1+ \frac{85}{6} e_t^2 + \frac{5171}{192} e_t^4 +
\frac{1751}{192} e_t^6 + \frac{297}{1024} e_t^8\bigg) .
\label{eqn:capFe}
\end{equation}
The first flux function, $\mathcal{I}_0(e_t)$, is the \emph{enhancement
function} of Peters and Mathews \cite{PeteMath63} that arises from quadrupole
radiation and is computed using only the Keplerian approximation of the
orbital motion. The term ``enhancement function'' is used for functions like
$\mathcal{I}_0(e_t)$ that are defined to limit on unity as the orbit becomes
circular (with one exception discussed below). Except for $\mathcal{I}_0$, the
flux coefficients generally depend upon choice of gauge, compactness
parameter, and PN definition of eccentricity. [Note that the extra parameter
$y_0$ in the $\mathcal{I}_3$ log term cancels a corresponding log term in the
3PN hereditary flux. See Eqn.~\eqref{eqn:hered3PN} below.] We also point out
here the appearance of factors of $1 - e_t^2$ with negative, odd-half-integer
powers, which make the PN fluxes diverge as $e_t \rightarrow 1$. We will have
more to say in what follows about these \emph{eccentricity singular factors.}
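For reference, the instantaneous coefficients transcribe directly into code.
The minimal sketch below (ours; through 2PN only, omitting the log-bearing
$\mathcal{I}_3$) evaluates Eqns.~\eqref{eqn:edot0}--\eqref{eqn:edot2} and
recovers the familiar circular-orbit limits $\mathcal{I}_1(0) = -1247/336$
and $\mathcal{I}_2(0) = -44711/9072$.

```python
# Transcription (ours, for checking) of the instantaneous flux coefficients
# through 2PN; the log-bearing I_3 term is omitted for brevity.
def I0(e):
    return (1 + 73.0/24*e**2 + 37.0/96*e**4) / (1 - e**2)**3.5

def I1(e):
    return (-1247.0/336 + 10475.0/672*e**2 + 10043.0/384*e**4
            + 2179.0/1792*e**6) / (1 - e**2)**4.5

def I2(e):
    return ((-203471.0/9072 - 3807197.0/18144*e**2 - 268447.0/24192*e**4
             + 1307105.0/16128*e**6 + 86567.0/64512*e**8) / (1 - e**2)**5.5
            + (35.0/2 + 6425.0/48*e**2 + 5065.0/64*e**4
               + 185.0/96*e**6) / (1 - e**2)**5)
```

At $e_t = 0$ these reduce to the standard quasi-circular coefficients, while
the eccentricity singular factors make each diverge as $e_t \rightarrow 1$.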
\subsection{Making heads or tails of the hereditary terms}
The hereditary contributions to the energy flux can be defined
\cite{ArunETC08a} in terms of an alternative set of functions
\begin{align}
\label{eqn:edot32}
&\mathcal{K}_{3/2} = 4\pi\,\varphi(e_t) , \\
&\mathcal{K}_{5/2} = -\frac{8191}{672}\,\pi\,\psi(e_t)
\label{eqn:hered52} , \\
&\mathcal{K}_{3} = -\frac{1712}{105}\,\chi(e_t) +
\left[
-\frac{116761}{3675} + \frac{16}{3} \,\pi^2 -\frac{1712}{105}\,\gamma_\text{E} -
\frac{1712}{105}\log\left(\frac{4y^{3/2}}{y_0}\right)\right]\, F(e_t) ,
\label{eqn:hered3PN}
\end{align}
where $\gamma_\text{E}$ is the Euler constant and $F$, $\varphi$, $\psi$, and
$\chi$ are enhancement functions (though $\chi$ is the aforementioned special
case, which instead of limiting on unity vanishes as $e_t \rightarrow 0$).
(Note also that the enhancement function $\chi(e_t)$ should not be
confused with the orbital motion parameter $\chi$.) Given the limiting
behavior of these new functions, the circular orbit limit
becomes obvious. The 1.5PN enhancement function $\varphi$ was first calculated
by Blanchet and Sch\"{a}fer \cite{BlanScha93} following discovery of the
circular orbit limit ($4 \pi$) of the tail by Wiseman \cite{Wise93}
(analytically) and Poisson \cite{Pois93} (numerically, in an early BHP
calculation). The function $F(e_t)$, given above in Eqn.~\eqref{eqn:capFe},
is closed form,
while $\varphi$, $\psi$, and $\chi$ (apparently) are not. Indeed, the lack of
closed-form expressions for $\varphi$, $\psi$, and $\chi$ presented a problem
for us. Arun et al.~\cite{ArunETC08a,ArunETC08b,ArunETC09a} computed these
functions numerically and plotted them, but gave only low-order expansions in
eccentricity. For example Ref.~\cite{ArunETC09a} gives for the 1.5PN tail
function
\begin{equation}
\label{eqn:arunphi}
\varphi(e_t)=1+\frac{2335}{192} e_t^2 + \frac{42955}{768} e_t^4 + \cdots .
\end{equation}
One of the goals of this paper became finding means of calculating these
functions with (near) arbitrary accuracy.
The expressions above are written as functions of the eccentricity $e_t$.
However, the 1.5PN tail $\varphi$ and the functions $F$ and $\chi$ only depend
upon the binary motion, and moments, computed to Newtonian order. Hence, for
these functions (as well as $\mathcal{I}_0$) there is no distinction between
$e_t$ and the usual Keplerian eccentricity. Nevertheless, since we will
reserve $e$ to denote the relativistic (Darwin) eccentricity, we express
everything here in terms of $e_t$.
Blanchet and Sch\"{a}fer \cite{BlanScha93} showed that $\varphi(e_t)$, like the
Peters-Mathews enhancement function $\mathcal{I}_0$, is determined by the
quadrupole moment as computed at Newtonian order from the Keplerian elliptical
motion. Using the Fourier series expansion of the time dependence of a Kepler
ellipse \cite{PeteMath63,Magg07}, $\mathcal{I}_0$ can be written in
terms of Fourier amplitudes of the quadrupole moment by
\begin{equation}
\label{eqn:pmsum}
\mathcal{I}_0(e_t) = \frac{1}{16} \sum_{n=1}^\infty n^6
\vert \! \mathop{\hat{I}}_{(n)}{}_{\!\!ij}^{\!\!(\mathrm{N})} \vert^2 =
\sum_{n=1}^\infty g(n,e_t) = f(e_t) =
\frac{1}{(1-e_t^2)^{7/2}}
{\left(1+\frac{73}{24}~e_t^2 + \frac{37}{96}~e_t^4\right)} ,
\end{equation}
which is the previously mentioned closed-form expression. Here, $f(e_t)$ is
the traditional Peters-Mathews function name, which is not to be confused
with the metric function $f(r)$. In the expression,
${}_{(n)} \hat{I}_{ij}^{(\mathrm{N})}$ is the $n$th Fourier harmonic of the
dimensionless quadrupole moment (see sections III through V of
\cite{ArunETC08a}). The function $g(n,e_t)$ that represents the square of
the quadrupole moment amplitudes is given by
\begin{align}
\label{eqn:gfunc}
g(n,e_t) &\equiv \frac{1}{2} n^2 \bigg\{ \left[-\frac{4}{e_t^3}-3 e_t+
\frac{7}{e_t}\right] n J_n(n e_t) J_n'(n e_t) +
\left[\left(e_t^2+\frac{1}{e_t^2}-2\right) n^2+\frac{1}{e_t^2}-1\right]
J_n'(n e_t)^2 \notag\\
& \hspace{40ex} +\left[\frac{1}{e_t^4}-\frac{1}{e_t^2}+
\left(\frac{1}{e_t^4}-e_t^2-\frac{3}{e_t^2}+3\right) n^2+\frac{1}{3}\right]
J_n(ne_t)^2\bigg\} ,
\end{align}
and was derived by Peters and Mathews \cite{PeteMath63} (though the corrected
expression can be found in \cite{BlanScha93} or \cite{Magg07}).
These quadrupole moment amplitudes also determine $F(e_t)$,
\begin{equation}
\label{eqn:capFeSum}
F(e_t) = \frac{1}{4} \sum_{n=1}^\infty n^2 \, g(n,e_t) ,
\end{equation}
whose closed form expression is found in \eqref{eqn:capFe}, and the 1.5PN
tail function \cite{BlanScha93}, which emerges from a very similar sum
\begin{equation}
\label{eqn:phi2}
\varphi(e_t) = \sum_{n=1}^\infty \frac{n}{2} \, g(n,e_t) .
\end{equation}
Unfortunately, the odd factor of $n$ in this latter sum (and more generally
any other odd power of $n$) makes it impossible to translate the sum into an
integral in the time domain and blocks the usual route to finding a closed-form
expression like $f(e_t)$ and $F(e_t)$.
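The sums above are easy to verify in double precision. The sketch below (our
own check, using SciPy's Bessel routines for $J_n$ and $J_n'$) transcribes
$g(n,e_t)$ from Eqn.~\eqref{eqn:gfunc} and confirms that its harmonic sums
reproduce the closed forms in Eqns.~\eqref{eqn:pmsum} and \eqref{eqn:capFe}.

```python
# Check (ours) that the harmonic sums over g(n, e_t) recover the closed
# forms f(e_t) and F(e_t); jv and jvp are J_n(x) and dJ_n/dx from SciPy.
from scipy.special import jv, jvp

def g(n, e):
    """Squared quadrupole amplitude g(n, e_t), transcribed from the text."""
    Jn = jv(n, n * e)
    Jnp = jvp(n, n * e)
    return 0.5 * n**2 * (
        (-4.0 / e**3 - 3.0 * e + 7.0 / e) * n * Jn * Jnp
        + ((e**2 + 1.0 / e**2 - 2.0) * n**2 + 1.0 / e**2 - 1.0) * Jnp**2
        + (1.0 / e**4 - 1.0 / e**2
           + (1.0 / e**4 - e**2 - 3.0 / e**2 + 3.0) * n**2
           + 1.0 / 3.0) * Jn**2)

def f_closed(e):   # Peters-Mathews enhancement function
    return (1 + 73.0 / 24 * e**2 + 37.0 / 96 * e**4) / (1 - e**2)**3.5

def F_closed(e):   # closed form of F(e_t)
    return (1 + 85.0 / 6 * e**2 + 5171.0 / 192 * e**4
            + 1751.0 / 192 * e**6 + 297.0 / 1024 * e**8) / (1 - e**2)**6.5

e = 0.3
pm_sum = sum(g(n, e) for n in range(1, 80))               # -> f_closed(e)
F_sum = 0.25 * sum(n**2 * g(n, e) for n in range(1, 80))  # -> F_closed(e)
```

The truncation at $n = 80$ is ample here, since $g(n,e)$ falls off
geometrically with $n$ at fixed $e < 1$.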
The sum \eqref{eqn:phi2} might be computed numerically but it is more
convenient to have an expression that can be understood at a glance and be
rapidly evaluated. The route we found to such an expression leads to
several others. We begin with \eqref{eqn:gfunc} and expand $g(n,e_t)$,
pulling forward the leading factor and writing the remainder as
a Maclaurin series in $e_t$
\begin{equation}
\label{eqn:gexp}
g(n,e_t) = \left(\frac{n}{2}\right)^{2n} e_t^{2n - 4} \left(
\frac{1}{\Gamma(n-1)^2} -
\frac{(n-1)(n^2 + 4n -2)}{2 \, \Gamma(n)^2} e_t^2 +
\frac{6 n^4 + 45 n^3 + 18 n^2 - 48 n + 8}{48 \, \Gamma(n)^2} e_t^4 +
\cdots \right) .
\end{equation}
In a sum over $n$, successive harmonics each contribute a series that starts
at a progressively higher power of $e_t^2$. Inspection further shows that for
$n = 1$ the $e_t^{-2}$ and $e_t^0$ terms vanish, the former because
$\Gamma(0)^{-1} \rightarrow 0$ and the latter because of the overall factor
$(n-1)$. The $n = 2$ harmonic is the only one that
contributes at $e_t^0$ [in fact giving $g(2,e_t) = 1$, the circular orbit
limit]. The successively higher-order power series in $e_t^2$ imply that the
individual sums that result from expanding \eqref{eqn:pmsum},
\eqref{eqn:capFeSum}, and \eqref{eqn:phi2} each truncate, with only a finite
number of harmonics contributing to the coefficient of any given power of
$e_t^2$.
If we use \eqref{eqn:gexp} in \eqref{eqn:pmsum} and sum, we find
$\mathcal{I}_0 = 1 + (157/24) e_t^2 + (605/32) e_t^4 + (3815/96) e_t^6 +
\cdots$, an infinite series. If on the other hand we introduce the known
eccentricity singular factor, take $(1 - e_t^2)^{7/2} \, g(n,e_t)$, re-expand
and sum, we then find $1 + (73/24) e_t^2 + (37/96) e_t^4$, the well known
Peters-Mathews polynomial term. All the sums for higher-order terms vanish
identically. The same occurs if we take a different eccentricity singular
factor, expand $(1/4) (1 - e_t^2)^{13/2} \, n^2 \, g(n,e_t)$ and sum; we
obtain the polynomial in the expression for $F(e_t)$ found in
\eqref{eqn:capFe}. The power series expansion of $g(n,e_t)$ thus provides
an alternative means of deriving these enhancement functions without
transforming to the time domain.
\subsubsection{Form of the 1.5PN Tail}
\label{sec:tailvarphi}
Armed with this result, we then use \eqref{eqn:gexp} in \eqref{eqn:phi2} and
calculate the sums in the expansion, finding
\begin{equation}
\varphi(e_t)=1+ \frac{2335}{192} e_t^2 + \frac{42955}{768} e_t^4 +
\frac{6204647}{36864} e_t^6 +
\frac{352891481}{884736} e_t^8 + \cdots ,
\end{equation}
agreeing with and extending the expansion \eqref{eqn:arunphi} derived by
Arun et al.~\cite{ArunETC09a}. We forgo giving a lengthier expression because
a better form exists. Rather, we introduce an assumed singular factor and
expand $(1-e_t^2)^5 \, g(n,e_t)$. Upon summing we find
\begin{align}
\label{eqn:vphiExpand}
\varphi(e_t) = &\frac{1}{(1-e_t^2)^5} \,
\bigg(1+\frac{1375}{192} e_t^2+\frac{3935}{768} e_t^4
+\frac{10007}{36864} e_t^6
+\frac{2321}{884736} e_t^8
+\frac{237857}{353894400} e_t^{10}
+\frac{182863}{4246732800} e_t^{12} \\ \notag
&+\frac{4987211}{6658877030400} e_t^{14}
-\frac{47839147}{35514010828800} e_t^{16}
-\frac{78500751181}{276156948204748800} e_t^{18}
-\frac{3031329241219}{82847084461424640000} e_t^{20}
+\cdots \bigg) .
\end{align}
Only the leading terms are shown here; we have calculated over 100 terms with
\emph{Mathematica} and presented part of this expansion previously (available
online \cite{Fors14,Fors15,Evan15a}). The first four terms are also published
in \cite{SagoFuji15}. The assumed singular factor turns out to be the correct
one, allowing the remaining power series to converge to a finite value at
$e_t = 1$. As can be seen from the rapidly diminishing size of higher-order
terms, the series is convergent. The choice for singular factor is supported
by asymptotic analysis found in Sec.~\ref{sec:asymptotic}. The 1.5PN
singular factor and the high-order expansion of $\varphi(e_t)$ are two key
results of this paper.
The singular behavior of $\varphi(e_t)$ as $e_t\rightarrow 1$ is evident on the
left in Fig.~\ref{fig:curlyPhiPlot}. The left side of this figure reproduces
Figure 1 of Blanchet and Sch\"{a}fer \cite{BlanScha93}, though note their
older definition of $\varphi(e)$ (Figure 1 of Ref.~\cite{ArunETC08a} compares
directly to our plot). The right side of Fig.~\ref{fig:curlyPhiPlot} however
shows the effect of removing the singular dependence and plotting only the
convergent power series. We find that the resulting series limits on
$\simeq 13.5586$ at $e_t = 1$.
\begin{figure}
\includegraphics[scale=.95]{curlyPhiPlot.pdf}
\caption{
Enhancement function $\varphi(e_t)$ associated with the 1.5PN tail. On the
left the enhancement function is directly plotted, demonstrating the singular
behavior as $e_t \rightarrow 1$. On the right, the eccentricity singular
factor $(1-e_t^2)^{-5}$ is removed to reveal convergence in the remaining
expansion to a finite value of approximately $13.5586$ at $e_t = 1$.
\label{fig:curlyPhiPlot}}
\end{figure}
\subsubsection{Form of the 3PN Hereditary Terms}
\label{sec:chiform}
With a useful expansion of $\varphi(e_t)$ in hand, we employ the same approach to
the other hereditary terms. As a careful reading of Ref.~\cite{ArunETC08a}
makes clear, the most difficult contribution to calculate is
\eqref{eqn:hered52}, the correction of the 1.5PN tail showing up at 2.5PN
order. Accordingly, we first consider the simpler 3PN case
\eqref{eqn:hered3PN}, which is the sum of the tail-of-the-tail and
tail-squared terms \cite{ArunETC08a}. The part in \eqref{eqn:hered3PN} that
requires further investigation is $\chi(e_t)$. The infinite series for
$\chi(e_t)$ is shown in \cite{ArunETC08a} to be
\begin{equation}
\label{eqn:chiSum}
\chi(e_t) = \frac{1}{4} \, \sum_{n=1}^\infty n^2 \,
\log\left(\frac{n}{2}\right) \, g(n,e_t) .
\end{equation}
The same technique as before is now applied to $\chi(e_t)$ using the expansion
\eqref{eqn:gexp} of $g(n,e_t)$. The series will be singular at $e_t = 1$, so
factoring out the singular behavior is important. However, for reasons to
be explained in Sec.~\ref{sec:asymptotic}, it proves essential in this case
to remove the two strongest divergences. We find
\begin{align}
\label{eqn:chiExp}
\chi (e_t) = - \frac{3}{2} F(e_t) \log(1 - e_t^2) &+
\frac{1}{(1-e_t^2)^{13/2}} \,
\Bigg\{
\left[-\frac{3}{2} - \frac{77}{3} \log(2) + \frac{6561}{256} \log(3) \right]
e_t^2 \\ \notag
&+
\left[-22 + \frac{34855}{64} \log(2) - \frac{295245}{1024} \log(3) \right]
e_t^4
\\ \notag
&+
\left[-\frac{6595}{128} - \frac{1167467}{192} \log(2) +
\frac{24247269}{16384} \log(3) + \frac{244140625}{147456} \log(5) \right]
e_t^6
\\ \notag
&+
\left[-\frac{31747}{768} + \frac{122348557}{3072} \log(2) +
\frac{486841509}{131072} \log(3) - \frac{23193359375}{1179648} \log(5) \right]
e_t^8
+ \cdots
\Bigg\} .
\end{align}
Empirically, we found the series for $\chi(e_t)$ diverging like
$\chi(e_t) \sim - C_{\chi} (1 - e_t^2)^{-13/2} \log(1 - e_t^2)$ as
$e_t \rightarrow 1$, where $C_{\chi}$ is a constant. The first term in
\eqref{eqn:chiExp} apparently encapsulates all of the logarithmic divergence
and implies that $C_{\chi} = (3/2)(52745/1024) \simeq 77.2632$. The reason
for pulling out this particular function is based on a guess suggested by the
asymptotic analysis in Sec.~\ref{sec:asymptotic} and considerations on how
logarithmically divergent terms in the combined instantaneous-plus-hereditary
3PN flux should cancel when a switch is made from orbital parameters
$e_t$ and $y$ to parameters $e_t$ and $1/p$ (to be further discussed in a
forthcoming paper). Having isolated the two divergent terms, the remaining
series converges rapidly with $n$. The divergent behavior of the second term
as $e_t \rightarrow 1$ is computed to be approximately
$\simeq +73.6036 (1-e_t^2)^{-13/2}$. The appearance of $\chi(e_t)$ is shown
in Fig.~\ref{fig:chiPlot}, with and without its most singular factor removed.
\subsubsection{Form of the 2.5PN Hereditary Term}
Armed with this success we went hunting for a comparable result for the
2.5PN enhancement factor $\psi$. Calculating $\psi$ is a much more involved
process, as part of the tail at this order is a 1PN correction to the mass
quadrupole. At 1PN order the orbital motion no longer closes and the
corrections in the mass quadrupole moments require a biperiodic Fourier
expansion. Arun et al.~\cite{ArunETC08b} describe a procedure for computing
$\psi$, which they evaluated numerically. One of the successes we are able
to report in this paper is having obtained a high-order power series
expansion for $\psi$ in $e_t$. Even with \emph{Mathematica}'s help, it is
a time-consuming calculation, and we have reached only the 35th order
($e_t^{70}$). This suffices for several of our purposes in seeking the
expansion.
We were also able to predict the comparable singular factor present as
$e_t \rightarrow 1$ and demonstrate apparent convergence in the remaining
series to a finite value at $e_t = 1$. The route we followed in making the
calculation of the 2.5PN tail is described in App.~\ref{sec:massQuad}.
Here, we give the first few terms in the $\psi$ expansion
\begin{align}
\label{eqn:psiExpand}
\psi(e_t) = \frac{1}{\left(1-e_t^2\right)^6}
\biggl[
1 &-\frac{72134}{8191}e_t^2-\frac{19817891}{524224}e_t^4
-\frac{62900483}{4718016}e_t^6-\frac{184577393}{603906048}e_t^8
+\frac{1052581}{419379200}e_t^{10} \notag \\
&-\frac{686351417}{1159499612160}e_t^{12}
+\frac{106760742311}{852232214937600}e_t^{14}
+\frac{7574993235161}{436342894048051200}e_t^{16} \notag \\
&-\frac{4345876114169}{2524555315563724800}e_t^{18}
-\frac{61259745206138959}{56550039068627435520000}e_t^{20}
+\cdots
\biggr] .
\end{align}
As in the preceding plots, we show $\log_{10}|\psi|$ graphed on the left in
Fig.~\ref{fig:psiPlot}. The singular behavior is evident. On the right
side, the 2.5PN singular factor has been removed and the finite limit at
$e_t = 1$ is clear.
\begin{figure*}
\includegraphics[scale=.95]{chiPlot.pdf}
\caption{
The 3PN enhancement function $\chi(e_t)$. Its log is plotted on the left.
On the right we remove the dominant singular factor
$-(1-e_t^2)^{-13/2} \log(1-e_t^2)$. The turnover near $e_t = 1$ reflects
competition with the next-most-singular factor, $(1-e_t^2)^{-13/2}$.
\label{fig:chiPlot}}
\end{figure*}
\begin{figure*}
\includegraphics[scale=.95]{psiPlot.pdf}
\caption{
The enhancement factor $\psi(e_t)$. On the right we remove the singular
factor $(1-e_t^2)^{-6}$ and see the remaining contribution smoothly approach
a finite value at $e_t=1$.
\label{fig:psiPlot}}
\end{figure*}
\subsection{Applying asymptotic analysis to determine eccentricity singular
factors}
\label{sec:asymptotic}
In the preceding section we assumed the existence of certain ``correct''
eccentricity singular factors in the behavior of the known hereditary
terms, which once factored out allow the remaining power series to converge
to constants at $e_t = 1$. We show now that at least some of these
singular factors, specifically the ones associated with $\varphi(e_t)$ and
$\chi(e_t)$, can be derived via asymptotic analysis. In the process the
same analysis confirms the singular factors in $f(e_t)$ and $F(e_t)$ already
known from post-Newtonian work. \emph{As a bonus our asymptotic analysis can
even be used to make remarkably sharp estimates of the limiting constant
coefficients that multiply these singular factors.}
What all four of these enhancement functions share is dependence on the
square of the harmonics of the quadrupole moment given by the function
$g(n,e_t)$ found in \eqref{eqn:gfunc}. To aid our analysis near $e_t = 1$,
we define $x \equiv 1 - e_t^2$ and use $x$ to rewrite \eqref{eqn:gfunc} as
\begin{equation}
\label{eqn:gxfunc}
g(n,e_t) =
\frac{1}{6} n^2
\frac{1+x+x^2 + 3 n^2 x^3}{(1-x)^2} J_n(ne_t)^2
+\frac{1}{2} n^2 \frac{x (1 + n^2 x)}{1-x} J_n'(n e_t)^2
-\frac{1}{2} n^3 \frac{x (1 + 3x)}{(1-x)^{3/2}} J_n(n e_t) J_n'(n e_t) .
\end{equation}
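As a cross-check on the transcription of \eqref{eqn:gxfunc} (a pure-Python sketch of ours, outside the paper's \emph{Mathematica} pipeline), the harmonic sum $\sum_n g(n,e_t)$ must reproduce the closed-form Peters-Mathews enhancement function $f(e_t) = \left(1 + \tfrac{73}{24} e_t^2 + \tfrac{37}{96} e_t^4\right)(1-e_t^2)^{-7/2}$ of \eqref{eqn:pmsum}. The Bessel functions are evaluated from Bessel's integral so the snippet is dependency-free:

```python
import math

def bessel_j(n, x, panels):
    """J_n(x) from Bessel's integral, composite Simpson rule on [0, pi]."""
    h = math.pi / panels
    total = 0.0
    for k in range(panels):
        a = k * h
        fa = math.cos(n * a - x * math.sin(a))
        fm = math.cos(n * (a + 0.5 * h) - x * math.sin(a + 0.5 * h))
        fb = math.cos(n * (a + h) - x * math.sin(a + h))
        total += (h / 6.0) * (fa + 4.0 * fm + fb)
    return total / math.pi

def g(n, et):
    """Transcription of the x-form of g(n, e_t), with x = 1 - e_t^2."""
    x = 1.0 - et * et
    panels = 150 * n + 300   # resolve the ~n oscillations of the integrand
    J = bessel_j(n, n * et, panels)
    # J_n'(z) = (J_{n-1}(z) - J_{n+1}(z)) / 2
    Jp = 0.5 * (bessel_j(n - 1, n * et, panels) - bessel_j(n + 1, n * et, panels))
    return ((n**2 / 6.0) * (1 + x + x**2 + 3 * n**2 * x**3) / (1 - x)**2 * J * J
            + (n**2 / 2.0) * x * (1 + n**2 * x) / (1 - x) * Jp * Jp
            - (n**3 / 2.0) * x * (1 + 3 * x) / (1 - x)**1.5 * J * Jp)

et = 0.5
f_sum = sum(g(n, et) for n in range(1, 33))
f_closed = (1 + (73 / 24) * et**2 + (37 / 96) * et**4) / (1 - et**2)**3.5
```

At $e_t = 0.5$ the truncated sum agrees with the closed form to better than one part in $10^5$, consistent with the exponential $e^{-2n\rho}$ decay of the harmonics.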
An inspection of how \eqref{eqn:gxfunc} folds into \eqref{eqn:pmsum},
\eqref{eqn:capFeSum}, \eqref{eqn:phi2}, and \eqref{eqn:chiSum} shows that
infinite sums of the following forms are required to compute $\varphi(e_t)$,
$\chi(e_t)$, $f(e_t)$, and $F(e_t)$
\begin{equation}
\label{eqn:Hfunctions}
H_0^{\alpha,\beta} = \sum_{n=1}^{\infty} n^\alpha \log^\beta\left(\frac{n}{2}
\right)
J_n(n e_t)^2 , \quad
H_1^{\alpha,\beta} = \sum_{n=1}^{\infty} n^\alpha \log^\beta\left(\frac{n}{2}
\right)
J_n^\prime (n e_t)^2 , \quad
H_2^{\alpha,\beta} = \sum_{n=1}^{\infty} n^\alpha \log^\beta\left(\frac{n}{2}
\right)
J_n(n e_t) J_n^\prime(n e_t) .
\end{equation}
In this compact shorthand, $\beta=1$ merely indicates sums that contain logs
needed to calculate $\chi(e_t)$ while $\beta=0$ (absence of a log) covers
the other cases. Careful inspection of \eqref{eqn:gxfunc} reveals there are
18 different sums needed to calculate the four enhancement functions in
question, and $\alpha$ ranges over (some) values between $2$ and $6$.
As $x \rightarrow 0$ ($e_t \rightarrow 1$), large-$n$ terms have growing
importance in the sums. In this limit the Bessel functions have uniform
asymptotic expansions for large order $n$ of the form
\cite{AbraSteg72,DLMF,NHMF}
\begin{align}
J_n(n e_t) \sim & \left(\frac{4 \zeta}{x} \right)^{\frac{1}{4}}
\left[n^{-1/3} {\rm Ai}(n^{2/3} \zeta)
\sum_{k=0}^{\infty}
\frac{A_k}{n^{2 k}} + n^{-5/3} {\rm Ai^\prime} (n^{2/3} \zeta)
\sum_{k=0}^{\infty}
\frac{B_k}{n^{2 k}}
\right] , \\
J_n^\prime (n e_t) \sim & -\frac{2}{\sqrt{1-x}}
\left(\frac{x}{4 \zeta} \right)^{\frac{1}{4}}
\left[n^{-4/3} {\rm Ai}(n^{2/3} \zeta)
\sum_{k=0}^{\infty}
\frac{C_k}{n^{2 k}} + n^{-2/3} {\rm Ai^\prime} (n^{2/3} \zeta)
\sum_{k=0}^{\infty}
\frac{D_k}{n^{2 k}}
\right] ,
\end{align}
where $\zeta$ depends on eccentricity and is found from
\begin{equation}
\frac{2}{3} \zeta^{3/2} = \log\left(\frac{1 + \sqrt{x}}{\sqrt{1-x}} \right)
- \sqrt{x} \equiv \rho(x) \simeq
\frac{1}{3} x^{3/2} + \frac{1}{5} x^{5/2} + \frac{1}{7} x^{7/2} + \cdots ,
\end{equation}
and where the expansion of $\rho(x)$ is its Puiseux series. Defining
$\xi \equiv n \rho(x)$, we need in turn the asymptotic expansions of the
Airy functions \cite{AbraSteg72,DLMF,NHMF}
\begin{align}
{\rm Ai}(n^{2/3} \zeta) \sim & \,
\frac{e^{-\xi}}{2^{5/6} 3^{1/6} \sqrt{\pi} \xi^{1/6}} \left(
1 - \frac{5}{72 \xi} + \frac{385}{10368 \xi^2}
-\frac{85085}{2239488 \xi^3}
+\frac{37182145}{644972544 \xi^4}
-\frac{5391411025}{46438023168 \xi^5}
+\cdots \right) ,
\\
{\rm Ai^\prime} (n^{2/3} \zeta) \sim & \,
-\frac{3^{1/6} \xi^{1/6} e^{-\xi}}{2^{7/6} \sqrt{\pi}} \left(
1 + \frac{7}{72 \xi} - \frac{455}{10368 \xi^2}
+\frac{95095}{2239488 \xi^3}
-\frac{40415375}{644972544 \xi^4}
+\frac{5763232475}{46438023168 \xi^5}
+\cdots \right) .
\end{align}
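The Puiseux expansion of $\rho(x)$ quoted above contains only odd half-integer powers, $x^{(2m+1)/2}/(2m+1)$, a fact easily verified numerically (our illustration, not part of the original analysis):

```python
import math

def rho(x):
    """rho(x) = log((1 + sqrt(x))/sqrt(1 - x)) - sqrt(x)."""
    u = math.sqrt(x)
    return math.log((1.0 + u) / math.sqrt(1.0 - x)) - u

def rho_series(x, terms=3):
    """Leading Puiseux series: x^{3/2}/3 + x^{5/2}/5 + x^{7/2}/7 + ..."""
    u = math.sqrt(x)
    return sum(u**(2 * m + 1) / (2 * m + 1) for m in range(1, terms + 1))

x = 0.01
err = abs(rho(x) - rho_series(x))   # first omitted term is x^{9/2}/9 ~ 1e-10
```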
In some of the following estimates all six leading terms in the Airy
function expansions are important, while a careful analysis reveals that we
never need to retain any terms in the Bessel function expansions beyond
$A_0 = 1$ and $D_0 = 1$.
These asymptotic expansions can now be used to analyze the behavior of the
sums in \eqref{eqn:Hfunctions} (whence follow the enhancement functions)
in the limit as $e_t \rightarrow 1$. Take as an example $H_2^{3,0}$. We
replace the Bessel functions with their asymptotic expansions and thus
obtain an approximation for the sum
\begin{equation}
\label{eqn:aeH230}
H_2^{3,0} = \sum_{n=1}^{\infty} n^3 J_n(n e_t) J_n^\prime(n e_t) \simeq
\frac{1}{2 \pi \sqrt{1-x}} \sum_{n=1}^{\infty} n^2 e^{-2 \xi}
\left(1 + \frac{1}{36 \xi} - \frac{35}{2592 \xi^2} + \cdots \right) ,
\end{equation}
where we recall that $\xi = n \rho(x)$. The original
sum has in fact a closed form that can be found in the appendix of
\cite{PeteMath63}
\begin{equation}
\sum_{n=1}^{\infty} n^3 J_n(n e_t) J_n^\prime(n e_t) =
\frac{e_t}{4 (1-e_t^2)^{9/2}}
\left(1+3~e_t^2 + \frac{3}{8}~e_t^4\right) \sim
\frac{35}{32} \, \frac{1}{(1-e_t^2)^{9/2}} \simeq
\frac{1.094}{(1-e_t^2)^{9/2}} ,
\end{equation}
where in the latter part of this line we give the behavior near $e_t =1$.
With this as a target, we take the approximate sum in \eqref{eqn:aeH230} and
make a further approximation by replacing the sum over $n$ with an integral
over $\xi$ from $0$ to $\infty$ while letting
$\Delta n = 1 \rightarrow d\xi / \rho(x)$ and retaining only terms in the
expansion that yield non-divergent integrals. We find
\begin{equation}
\frac{1}{2 \pi \sqrt{1-x}} \frac{1}{\rho(x)^3}
\int_{0}^{\infty} d\xi \, e^{-2 \xi} \,
\left(\xi^2 + \frac{1}{36} \xi - \frac{35}{2592} \right) =
\frac{1297}{10368 \, \pi} \frac{1}{\rho(x)^3 \sqrt{1-x}} \sim
\frac{1297}{384 \, \pi} \frac{1}{(1-e_t^2)^{9/2}} \simeq
\frac{1.0751}{(1-e_t^2)^{9/2}} ,
\end{equation}
with the final result coming from further expanding in powers of $x$. Our
asymptotic calculation, and approximate replacement of sum with integral,
not only provides the known singular dependence but also an estimate of the
coefficient on the singular term that is better than we perhaps had any
reason to expect.
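The elementary integrals behind this estimate, $\int_0^\infty \xi^k e^{-2\xi}\, d\xi = k!/2^{k+1}$, make the coefficient checkable with exact rational arithmetic (a sketch of ours):

```python
from fractions import Fraction
from math import factorial, pi

def moment(k):
    """Exact value of the integral of xi^k * exp(-2 xi) over [0, inf)."""
    return Fraction(factorial(k), 2**(k + 1))

# weights from the expansion: xi^2 + xi/36 - 35/2592
I = moment(2) + Fraction(1, 36) * moment(1) - Fraction(35, 2592) * moment(0)

# prefactor 1/(2 pi); with rho(x) ~ x^{3/2}/3, 1/rho^3 ~ 27 x^{-9/2},
# so the coefficient on (1 - e_t^2)^{-9/2} is 27 * I / 2 = 1297/384
coeff = 27 * I / 2
estimate = float(coeff) / pi   # ~ 1.0751, vs the exact limit 35/32 = 1.09375
```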
All of the remaining 17 sums in \eqref{eqn:Hfunctions} can be approximated
in the same way. As an aside it is worth noting that for those sums in
\eqref{eqn:Hfunctions} without log terms (i.e., $\beta = 0$) the replacement of
the Bessel functions with their asymptotic expansions leads to infinite sums
that can be identified as the known polylogarithm functions \cite{DLMF,NHMF}
\begin{equation}
{\rm Li}_{-k}\left( e^{-2\rho(x)}\right)
= \sum_{n=1}^{\infty} n^k e^{-2 n \rho(x)} .
\end{equation}
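For these $\beta = 0$ cases the identity is easy to verify directly, since negative-order polylogarithms have elementary closed forms, e.g.\ ${\rm Li}_{-1}(q) = q/(1-q)^2$ and ${\rm Li}_{-2}(q) = q(1+q)/(1-q)^3$ (a sketch of ours):

```python
import math

def li_neg(k, q, nmax=2000):
    """Li_{-k}(q) = sum_{n>=1} n^k q^n by direct summation (|q| < 1)."""
    return sum(n**k * q**n for n in range(1, nmax + 1))

q = math.exp(-2 * 0.3)                 # e^{-2 rho} for a representative rho = 0.3
li1_closed = q / (1 - q)**2            # Li_{-1}(q)
li2_closed = q * (1 + q) / (1 - q)**3  # Li_{-2}(q)
```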
However, expanding the polylogarithms as $x \rightarrow 0$ provides results
for the leading singular dependence that are no different from those of the
integral approximation. Since the $\beta = 1$ cases are not represented by
polylogarithms, we simply use the integral approximation uniformly.
We can apply these estimates to the four enhancement functions. First, the
Peters-Mathews function $f(e_t)$ in \eqref{eqn:pmsum} has known leading
singular dependence of
\begin{equation}
f(e_t) \simeq \left(1+\frac{73}{24} + \frac{37}{96}\right)
\frac{1}{(1-e_t^2)^{7/2}} = \frac{425}{96} \frac{1}{(1-e_t^2)^{7/2}} \simeq
\frac{4.4271}{(1-e_t^2)^{7/2}} , \qquad {\rm as} \qquad e_t \rightarrow 1 .
\end{equation}
If we instead make an asymptotic analysis of the sum in \eqref{eqn:pmsum} we
find
\begin{equation}
f(e_t) \sim \frac{191755}{13824 \, \pi} \frac{1}{(1-e_t^2)^{7/2}} \simeq
\frac{4.4153}{(1-e_t^2)^{7/2}} ,
\end{equation}
which extracts the correct eccentricity singular function and yields a
surprisingly sharp estimate of the coefficient. We next turn to the function
$F(e_t)$ in \eqref{eqn:capFeSum}. In this case the function tends to
$F(e_t) \simeq (52745/1024) (1-e_t^2)^{-13/2} \simeq 51.509 (1-e_t^2)^{-13/2}$
as $e_t \rightarrow 1$. Using instead the asymptotic technique we get an
estimate
\begin{equation}
F(e_t) \sim \frac{5148642773}{31850496 \,\pi} \frac{1}{(1-e_t^2)^{13/2}} \simeq
\frac{51.455}{(1-e_t^2)^{13/2}} .
\end{equation}
Once again the correct singular function emerges and a surprisingly accurate
estimate of the coefficient is obtained.
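The sharpness of these two estimates is simple to quantify (our arithmetic check): both integral-approximation coefficients land within a few tenths of a percent of the exact limits.

```python
import math

f_exact = 425 / 96                           # exact e_t -> 1 limit for f
f_est = 191755 / (13824 * math.pi)           # asymptotic estimate

F_exact = 52745 / 1024                       # exact e_t -> 1 limit for F
F_est = 5148642773 / (31850496 * math.pi)    # asymptotic estimate

f_rel_err = abs(f_est / f_exact - 1)         # ~ 0.27%
F_rel_err = abs(F_est / F_exact - 1)         # ~ 0.11%
```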
These two cases are heartening checks on the asymptotic analysis but of
course both functions already have known closed forms. What is more
interesting is to apply the approach to $\varphi(e_t)$ and $\chi(e_t)$, which
are not known analytically. For the sum in \eqref{eqn:phi2} for
$\varphi(e_t)$ we obtain the following asymptotic estimate
\begin{equation}
\varphi(e_t) \sim
\frac{56622073}{1327104 \, \pi} \frac{1}{(1-e_t^2)^{5}} -
\frac{371833517}{6635520 \, \pi} \frac{1}{(1-e_t^2)^{4}} + \cdots
\simeq
\frac{13.581}{(1-e_t^2)^{5}} -
\frac{17.837}{(1-e_t^2)^{4}} + \cdots ,
\end{equation}
where in this case we retained the first two terms in the expansion about
$e_t = 1$. The leading singular factor is exactly the one we identified
in Sec.~\ref{sec:tailvarphi} and its coefficient is remarkably close to
the 13.5586 value found by numerically evaluating the high-order expansion in
\eqref{eqn:vphiExpand}. The second term was retained merely to illustrate
that the expansion is a regular power series in $x$ starting with $x^{-5}$
(in contrast to the next case).
We come finally to the enhancement function, $\chi(e_t)$, whose definition
\eqref{eqn:chiSum} involves logarithms. Using the same asymptotic
expansions and integral approximation for the sum, and retaining the
first two divergent terms, we find
\begin{equation}
\label{eqn:chiasymp}
\chi(e_t) \sim
- \frac{5148642773}{21233664 \, \pi} \frac{\log(1-e_t^2)}{(1-e_t^2)^{13/2}}
- \frac{5148642773}{21233664 \, \pi} \left[-\frac{7882453164}{5148642773}
+ \frac{2}{3} \gamma_E + \frac{4}{3} \log (2)
- \frac{2}{3} \log(3) \right] \frac{1}{(1-e_t^2)^{13/2}} .
\end{equation}
The form of \eqref{eqn:chiExp} assumed in Sec.~\ref{sec:chiform}, whose
usefulness was verified through direct high-order expansion, was suggested
by the leading singular behavior emerging from this asymptotic analysis.
We guessed that there would be two terms, one with eccentricity singular
factor $\log(1-e_t^2) (1-e_t^2)^{-13/2}$ and one with $(1-e_t^2)^{-13/2}$.
In any calculation made close to $e_t = 1$ these two leading terms compete
with each other, with the logarithmic term only winning out slowly as
$e_t \rightarrow 1$. Prior to identifying the two divergent series we
initially had difficulty with slow convergence of an expansion for
$\chi(e_t)$ in which only the divergent term with the logarithm was factored
out. To see the issue, it is useful to numerically evaluate our approximation
\eqref{eqn:chiasymp}
\begin{equation}
\label{eqn:chiasympnum}
\chi(e_t) \sim -77.1823 \frac{\log(1-e_t^2)}{(1-e_t^2)^{13/2}}
\left[1 - \frac{0.954378}{\log(1-e_t^2)} \right] =
-77.1823 \frac{\log(1-e_t^2)}{(1-e_t^2)^{13/2}}
+ \frac{73.6612}{(1-e_t^2)^{13/2}} .
\end{equation}
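The numerical constants in \eqref{eqn:chiasympnum} follow from \eqref{eqn:chiasymp} by straightforward evaluation; a sketch of ours (with $\gamma_E$ hard-coded, since it is not in Python's \texttt{math} module):

```python
import math

gamma_E = 0.5772156649015329   # Euler-Mascheroni constant

C = 5148642773 / (21233664 * math.pi)       # ~ 77.1823
bracket = (-7882453164 / 5148642773
           + (2 / 3) * gamma_E
           + (4 / 3) * math.log(2)
           - (2 / 3) * math.log(3))         # ~ -0.954378
second = -C * bracket                       # ~ +73.6612
exact_log_coeff = (3 / 2) * (52745 / 1024)  # ~ 77.2632, the exact coefficient
```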
From this it is clear that even at $e_t = 0.99$ the second term makes a
$+24.3$\% correction to the first term, giving the misleading impression
that the leading coefficient is near $-96$ not $-77$. The key additional
insight was to guess the closed form for the leading singular term in
\eqref{eqn:chiExp}. As mentioned, the reason for expecting this exact
relationship comes from balancing and cancelling logarithmic terms in both
instantaneous and hereditary 3PN terms when the expansion is converted
from one in $e_t$ and $y$ to one in $e_t$ and $1/p$. The coefficient on
the leading (logarithmic) divergent term in $\chi(e_t)$ is exactly
$-(3/2)(52745/1024) \simeq -77.2632$. [This number is $-3/2$ times the limit
of the polynomial in $F(e_t)$.] It compares well with the first number in
\eqref{eqn:chiasympnum}. Additionally, recalling the discussion made
following \eqref{eqn:chiExp}, the actual coefficient found on the
$(1-e_t^2)^{-13/2}$ term is $+73.6036$, which compares well with the second
number in \eqref{eqn:chiasympnum}. The asymptotic analysis has thus again
provided remarkably sharp estimates for an eccentricity singular factor.
\footnote{Note added in proof: while this paper was in press the authors
became aware that similar asymptotic analysis of hereditary terms was being
pursued by N. Loutrel and N. Yunes \cite{LoutYune16}.}
\subsection{Using Darwin eccentricity $e$ to map
$\mathcal{I}(e_t)$ and $\mathcal{K}(e_t)$
to $\tilde{\mathcal{I}}(e)$ and $\tilde{\mathcal{K}}(e)$
}
\label{sec:EnhanceDarwin}
Our discussion thus far has given the PN energy flux in terms of the standard
QK time eccentricity $e_t$ in modified harmonic gauge \cite{ArunETC09a}.
The motion is only known presently to 3PN relative order, which means that
the QK representation can only be transformed between gauges up to and
including $y^3$ corrections. At the same time, our BHP calculations
accurately include all relativistic effects that are first order in the mass
ratio. It is possible to relate the relativistic (Darwin) eccentricity $e$ to
the QK $e_t$ (in, say, modified harmonic gauge) up through correction terms
of order $y^3$,
\begin{align}
\label{eqn:etToe}
\frac{e_t^2}{e^2} = &1 - 6 y
-\frac{\left(15-19\sqrt{1-e^2}\right)
+ \left(15\sqrt{1-e^2}-15\right)e^2}{(1-e^2)^{3/2}}y^2 \notag \\
&\quad +\frac{1}{(1-e^2)^{5/2}}
\biggl[
\left(30-38\sqrt{1-e^2}\right)
+\left(59\sqrt{1-e^2}-75\right)e^2
+\left(45-18\sqrt{1-e^2}\right)e^4
\biggr]y^3 .
\end{align}
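A direct transcription of \eqref{eqn:etToe} (our sketch) makes the hierarchy of corrections explicit; at $y = 10^{-3}$ the $y^2$ term contributes only at the $10^{-6}$ level:

```python
import math

def et2_over_e2(e, y):
    """e_t^2/e^2 through O(y^3), transcribed from the expansion in the text."""
    s = math.sqrt(1 - e * e)
    c2 = -((15 - 19 * s) + (15 * s - 15) * e**2) / (1 - e**2)**1.5
    c3 = ((30 - 38 * s) + (59 * s - 75) * e**2
          + (45 - 18 * s) * e**4) / (1 - e**2)**2.5
    return 1 - 6 * y + c2 * y**2 + c3 * y**3

e, y = 0.1, 1.0e-3
ratio = et2_over_e2(e, y)   # close to 1 - 6*y; corrections enter at O(y^2)
```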
See \cite{ArunETC09a} for the low-eccentricity limit of this more general
expression. We do not presently know how to calculate $e_t$ beyond this
order. Using this expression we can at least transform expected fluxes to
their form in terms of $e$ and check current PN results through 3PN order.
However, to go from 3PN to 7PN, as we do in this paper, our results must be
given in terms of $e$.
The instantaneous ($\mathcal{I}$) and hereditary ($\mathcal{K}$) flux terms
may be rewritten in terms of the relativistic eccentricity $e$
straightforwardly by substituting $e$ for $e_t$ using \eqref{eqn:etToe} in
the full 3PN flux \eqref{eqn:energyflux} and re-expanding the result in powers
of $y$. All flux coefficients that are lowest order in $y$ are unaffected
by this transformation. Instead, only higher order corrections are
modified. We find
\begin{align}
\label{eqn:tiIK0}
\tilde{\mathcal{I}}_0(e) &= \mathcal{I}_0(e_t) , \qquad
\tilde{\mathcal{K}}_{3/2}(e) = \mathcal{K}_{3/2}(e_t) , \qquad
\tilde{\mathcal{K}}_3(e) = \mathcal{K}_3(e_t) ,
\\
\label{eqn:tiI1}
\tilde{\mathcal{I}}_1(e) &= \frac{1}{(1-e^2)^{9/2}}
\left(-\frac{1247}{336}-\frac{15901}{672} e^2-\frac{9253}{384} e^4
-\frac{4037}{1792} e^6
\right),\\
\label{eqn:tiI2}
\tilde{\mathcal{I}}_2(e) &=
\frac{1}{(1-e^2)^{11/2}}
{\left(-\frac{203471}{9072}-\frac{1430873 }{18144}e^2+\frac{2161337}{24192} e^4
+\frac{231899}{2304} e^6+\frac{499451}{64512} e^8\right)}
\\
&\hspace{40ex} + \frac{1}{(1-e^2)^{5}} \left(\frac{35}{2}
+ \frac{1715}{48} e^2
- \frac{2975}{64} e^4 - \frac{1295}{192} e^6 \right) , \notag \\
\label{eqn:tiI3}
\tilde{\mathcal{I}}_3(e) &= \frac{1}{(1-e^2)^{13/2}}
\left(
\frac{2193295679}{9979200}+\frac{55022404229 }{19958400}e^2
+\frac{68454474929 }{13305600}e^4
\right.
\\
\biggl.
&\hspace{30ex}+
\frac{40029894853}{26611200} e^6
-\frac{32487334699 }{141926400}e^8
-\frac{233745653 }{11354112}e^{10}
\biggr) \notag
\\&
\qquad + \frac{1}{(1-e^2)^{6}} \left(
-\frac{14047483}{151200}-\frac{75546769}{100800}e^2-\frac{210234049}{403200}e^4
+\frac{1128608203}{2419200} e^6
+\frac{617515}{10752} e^8
\right) \notag
\\&
\qquad +\frac{1712}{105} \log\left[\frac{y}{y_0}
\frac{1+\sqrt{1-e^2}}{2(1-e^2)}\right] F(e) , \notag
\\
\label{eqn:tiK52}
\tilde{\mathcal{K}}_{5/2}(e) &=\frac{-\pi}{(1-e^2)^6}
\biggl(
\frac{8191}{672}+\frac{62003}{336} e^2
+\frac{20327389}{43008} e^4
+\frac{87458089}{387072} e^6
+\frac{67638841}{7077888} e^8
+\frac{332887 }{25804800}e^{10}
\notag\\&\hspace{20ex}
-\frac{482542621}{475634073600} e^{12}
+\frac{43302428147}{69918208819200} e^{14}
-\frac{2970543742759 }{35798122915430400}e^{16}
\notag\\&\hspace{20ex}+\frac{3024851376397}{207117711153561600} e^{18}
+ \frac{24605201296594481}{4639436729839779840000} e^{20}+\cdots
\biggr) ,
\end{align}
where $F$ is given by \eqref{eqn:capFe} with $e_t \rightarrow e$. The full
3PN flux is written exactly as Eqn.~\eqref{eqn:energyflux} with
$\mathcal{I}\to\tilde{\mathcal{I}}$ and $\mathcal{K}\to\tilde{\mathcal{K}}$.
\section{Confirming eccentric-orbit fluxes through 3PN relative order}
\label{sec:confirmPN}
Sections \ref{sec:homog} and \ref{sec:InhomogSol} briefly described a
formalism for an efficient, arbitrary-precision MST code for use with
eccentric orbits. Section \ref{sec:preparePN} detailed new high-order
expansions in $e^2$ that we have developed for the hereditary PN terms. The
next goal of this paper is to check all known PN coefficients for the energy
flux (at lowest order in the mass ratio) for eccentric orbits. The MST code
is written in \emph{Mathematica} to allow use of its arbitrary precision
functions. Like previous circular orbit calculations
\cite{ShahFrieWhit14,Shah14}, we employ very high accuracy calculations (here
up to 200 decimal places of accuracy) on orbits with very wide separations
($p \simeq 10^{15} - 10^{35}$). Why such wide separations? At $p = 10^{20}$,
successive terms in a PN expansion separate by 20 decimal places from each
other (10 decimal places for half-PN order jumps). It is like doing QED
calculations and being able to dial down the fine structure constant from
$\alpha \simeq 1/137$ to $10^{-20}$. This in turn mandates the use of
exceedingly high-accuracy calculations; it is only by calculating with 200
decimal places that we can observe $\sim 10$ PN orders in our numerical
results with some accuracy.
\subsection{Generating numerical results with the MST code}
\label{sec:CodeDetails}
In Secs.~\ref{sec:homog} and \ref{sec:InhomogSol} we covered the
theoretical framework our code uses. We now provide an algorithmic
roadmap for the code. (While the primary focus of this paper is in
computing fluxes, the code is also capable of calculating local quantities
to the same high accuracy.)
\begin{itemize}
\item \emph{Solve orbit equations for given $p$ and $e$.} Given a set of
orbital parameters, we find $t_p (\chi)$, $\varphi_p (\chi)$, and $r_p(\chi)$ to
high accuracy at locations equally spaced in $\chi$. We do so by employing
the SSI method outlined in Sec.~II B of Ref.~\cite{HoppETC15}. From these
functions we also obtain the orbital frequencies $\O_r$ and $\O_\varphi$.
All quantities are computed with some pre-determined overall accuracy goal;
in this paper it was a goal of 200 decimal places of accuracy in the energy
flux.
\item \emph{Obtain homogeneous solutions to the FD RWZ master equation
for given $lmn$ mode.} We find the homogeneous solutions using the MST
formalism outlined in Sec.~\ref{sec:MST}. The details of the calculation
are given here.
\begin{enumerate}
\item \emph{Solve for $\nu$.} For each $lmn$, the $\o$-dependent
renormalized angular momentum $\nu$ is determined (App.~\ref{sec:solveNu}).
\item \emph{Determine at what $n$ to truncate infinite MST sums involving
$a_n$.} The solutions $R_{lm\omega}^\text{up/in}$ are infinite sums
\eqref{eqn:Down1} and \eqref{eqn:RMinus}. Starting with $a_0=1$, we determine
$a_n$ for $n<0$ and $n>0$ using Eqn.~\eqref{eqn:RL}. Terms are added on
either side of $n=0$ until the homogeneous Teukolsky equation is satisfied
to within some error criterion at a single point on the orbit. In the
post-Newtonian regime the behavior of the size of these terms is well
understood \cite{ManoSuzuTaka96a,ManoSuzuTaka96b,KavaOtteWard15}. Our
algorithm tests empirically for stopping. Note that
in addition to forming
$R_{lm\omega}^\text{up/in}$, residual testing requires computing its first
and second derivatives. Having tested for validity of the stopping criterion
at one point, we spot check at other locations around the orbit. Once the
number of terms is established we are able to compute the Teukolsky function
and its first derivative at any point along the orbit. (The index $n$ here
is not to be confused with the harmonic index on such functions as
$\hat{X}_{lmn}$.)
\item \emph{Evaluate Teukolsky function between $r_{\rm min}$ and
$r_{\rm max}$.} Using the truncation of the infinite MST series, we evaluate
$R_{lm\omega}^\text{up/in}$ and their first derivative [higher derivatives
are found using the homogeneous differential equation \eqref{eqn:radial}]
at the $r$ locations corresponding to the even-$\chi$ spacing found
in Step 1. The high precision evaluation of hypergeometric functions
in this step represents the computational bottleneck in the code.
\item \emph{Transform Teukolsky function to RWZ master functions.} For
$l+m$ odd we use Eqn.~\eqref{eqn:RWtrans1} to obtain $\hat{X}_{lmn}^\pm$.
When $l+m$ is even we continue and use Eqn.~\eqref{eqn:RWtrans2}.
\item \emph{Scale master functions.} In solving for the fluxes, it is
convenient to work with homogeneous solutions that are unit-normalized at
the horizon and at infinity. We divide the RWZ solutions by the asymptotic
amplitudes that arise from choosing $a_0 = 1$ when forming the MST solutions
to the Teukolsky equation. These asymptotic forms are given in
Eqns.~\eqref{eqn:RWasymp1}-\eqref{eqn:RWasymp3}.
\end{enumerate}
\item \emph{Form $lmn$ flux contribution.} Form $C_{lmn}^+$ using the
exponentially-convergent SSI sum \eqref{eqn:CfromEbar}. Note that this
exponential convergence relies on the fact that we evaluated the homogeneous
solutions at evenly-spaced locations in $\chi$. The coefficient
$C_{lmn}^+$ feeds into a single positive-definite term in the sum
\eqref{eqn:fluxNumeric}.
\item \emph{Sum over $lmn$ modes.} In reconstructing the total flux there
are three sums:
\begin{enumerate}
\item \emph{Sum over $n$.} For each spherical harmonic $lm$, there is
formally an infinite Fourier series sum from $n=-\infty$ to $\infty$.
In practice the SSI method shows that $n$ is effectively bounded in some range
$-N_1\leq n \leq N_2$. This range is determined by the fineness of the
evenly-spaced sampling of the orbit in $\chi$. For a given orbital sampling,
we sum modes between $-N_1 \leq n \leq N_2$, where $N_1$ and $N_2$ are the
first Nyquist-like notches in frequency, beyond which aliasing effects set in
\cite{HoppETC15}.
\item \emph{Sum over $m$.} For each $l$ mode, we sum over $m$ from
$-l\leq m \leq l$. In practice, symmetry allows us to sum from $0\leq m
\leq l$, multiplying positive $m$ contributions by 2.
\item \emph{Sum over $l$.} The sum over $l$ is, again, formally infinite.
However, each multipole order appears at a higher PN order, the spacing of
which depends on $1/p$. The leading $l=2$ quadrupole flux appears at
$\mathcal{O}(p^{-5})$. For an orbit with $p=10^{20}$, the $l=3$ flux appears
at a level 20 orders of magnitude smaller. Only contributions through
$l\leq 12$ are necessary with this orbit and an overall precision goal of
200 digits. This cutoff in $l$ varies with different $p$.
\end{enumerate}
\end{itemize}
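The $l$-truncation argument in the last item reduces to a two-line rule of thumb (our illustration; the production code instead monitors the computed mode strengths): each additional multipole costs roughly $\log_{10} p$ digits.

```python
import math

def l_max(p, digits_goal):
    """Rule-of-thumb multipole cutoff: each step up in l suppresses the
    flux by roughly another factor of 1/p, i.e. log10(p) decimal digits."""
    digits_per_l = math.log10(p)
    return 2 + math.ceil(digits_goal / digits_per_l - 1e-12)
```

For $p = 10^{20}$ and a 200-digit goal this reproduces the quoted cutoff $l = 12$.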
\subsection{Numerically confirming eccentric-orbit PN results through
3PN order}
\label{sec:Result1}
We now turn to confirming past eccentric-orbit PN calculations. The MST
code takes as input the orbital parameters $p$ and $e$. Then $1/p$
is a small parameter. Expanding $dt_p/d\chi$ in \eqref{eqn:darwinEqns} we
find from \eqref{eqn:O_r}
\begin{align}
\label{eqn:O_rPN}
\O_r = \frac{1}{M} \l \frac{1-e^2}{p} \r^{3/2}
\left\{ 1
- 3 \frac{1-e^2}{p}
- \frac{3}{2} \frac{\sqrt{1 - e^2}
[5 - 2 \sqrt{1 - e^2} + e^2 (-5 + 6 \sqrt{1 - e^2} )]}{p^2}
+ \cdots
\right\} .
\end{align}
Expanding \eqref{eqn:O_phi} in similar fashion gives
\begin{align}
\label{eqn:O_phiPN}
\O_\varphi = \frac{1}{M} \l \frac{1-e^2}{p} \r^{3/2}
\left\{ 1
+ 3 \frac{e^2}{p}
- \frac{3}{4} \frac{\left[10 (-1 + \sqrt{1 - e^2}) +
e^2 (20 - 3 \sqrt{1 - e^2}) + 2 e^4 (-5 + 6 \sqrt{1 - e^2}) \right]}
{\sqrt{1 - e^2} p^2}
+ \cdots
\right\}.
\end{align}
Then given the definition of $y$ we obtain an expansion of $y$ in terms
of $p$
\begin{align}
y = \frac{1 - e^2}{p}
+ \frac{2 e^2 (1 - e^2)}{p^2}
+ \frac{1}{2} \sqrt{1 - e^2} \frac{\left[10 (1 - \sqrt{1 - e^2})
+ e^2 (-3 + 10 \sqrt{1 - e^2}) + 10 e^4 \right]}{p^3}
+ \cdots .
\end{align}
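A direct transcription of this expansion (our sketch) shows, for example, that the circular-orbit limit collapses to $y = 1/p$ through this order:

```python
import math

def y_of_p(p, e):
    """y(p, e) through O(1/p^3), transcribed from the expansion above."""
    s = math.sqrt(1 - e * e)
    return ((1 - e * e) / p
            + 2 * e * e * (1 - e * e) / p**2
            + 0.5 * s * (10 * (1 - s) + e * e * (-3 + 10 * s) + 10 * e**4) / p**3)
```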
So from our chosen parameters $e$ and $p$ we can obtain $y$ to arbitrary
accuracy, and then other orbital parameters, such as $\Omega_r$ and
$\Omega_\varphi$, can be computed as well to any desired accuracy.
\begin{figure}
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[scale=0.98]{mountainRange_fig.pdf}
\caption{Fourier-harmonic energy-flux spectra from an orbit with semi-latus
rectum $p=10^{20}$ and eccentricity $e=0.1$. Each inverted-$V$ spectrum
represents flux contributions of modes with various harmonic number $n$ but
fixed $l$ and $m$. The tallest spectrum traces the harmonics of the $l=2$,
$m=2$ quadrupole mode, the dominant contributor to the flux. Spectra of
successively higher multipoles (octupole, hexadecapole, etc) each drop 20
orders of magnitude in strength as $l$ increases by one ($l \leq 12$ are
shown). Every flux contribution is computed that is within 200 decimal
places of the peak of the quadrupole spectrum. With $e = 0.1$, there were
7,418 significant modes that had to be computed (and are shown above).
\label{fig:fluxSpectra}}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[scale=0.98]{I3compare_fig.pdf}
\caption{Residuals after subtracting from the numerical data successive PN
contributions. Residuals are shown for a set of orbits with $p = 10^{20}$
and a range of eccentricities from $e = 0.005$ through $e = 0.1$ in steps
of $0.005$. Residuals are scaled relative to the Peters-Mathews
flux (uppermost points at unit level). The next set of points (blue) shows
residuals after subtracting the Peters-Mathews enhancement from BHP data.
Residuals drop uniformly by 20 orders of magnitude, consistent with 1PN
corrections in the data. The next (red) points result from subtracting the
1PN term, giving residuals at the 1.5PN level. Successive subtraction of
known PN terms is made, reaching final residuals at 70 orders of magnitude
below the total flux and indicating the presence of 3.5PN contributions in
the numerical fluxes.
\label{fig:residuals1}}
\end{minipage}
\end{figure}
To check past work \cite{ArunETC08a,ArunETC08b,ArunETC09a,Blan14} on the PN
expansion of the energy flux, we used a single orbital separation
($p = 10^{20}$), with a set of eccentricities ($e = 0.005$ through $e=0.1$).
For each $e$, we compute the flux for each $lmn$-mode up
to $l = 12$ to as much as 200 decimal places of accuracy (the accuracy can
be relaxed for higher $l$ as these modes contribute only weakly to the total
energy flux). Fig.~\ref{fig:fluxSpectra} depicts all
7,418 $lmn$-modes that contributed to the energy flux for just the $p = 10^{20}$,
$e = 0.1$ orbit. Making this calculation and sum for all $e$, we then have
an eccentricity dependent flux. Next, we compute the PN parts of the
expected flux using Eqns.~\eqref{eqn:tiIK0} through \eqref{eqn:tiK52}. The
predicted flux $\mathcal{F}_{\rm 3PN}$ is very close to the computed flux
$\mathcal{F}_{\rm MST}$. We then subtract the quadrupole theoretical flux
term
\begin{equation}
\mathcal{F}_{\rm N} = \frac{32}{5} \left(\frac{\mu}{M}\right)^2 y^5 \,
\tilde{\mathcal{I}}_0(e)
\equiv
\mathcal{F}^{\rm circ}_{\rm N} \ \tilde{\mathcal{I}}_0(e) ,
\end{equation}
from the flux computed with the MST code (and normalize with respect to the
Newtonian term)
\begin{equation}
\frac{\mathcal{F}_{\rm MST} - \mathcal{F}_{\rm N}}{\mathcal{F}_{\rm N}}
= \mathcal{O}(y)
\simeq \frac{1}{\tilde{\mathcal{I}}_0 (e)} \left[y\,\tilde{\mathcal{I}}_1(e)
+ y^{3/2}\,\tilde{\mathcal{K}}_{3/2}(e)
+ y^2\,\tilde{\mathcal{I}}_2(e)
+ y^{5/2}\,\tilde{\mathcal{K}}_{5/2}(e)
+ y^3\,\tilde{\mathcal{I}}_3(e)
+ y^3\,\tilde{\mathcal{K}}_{3}(e) \right] ,
\end{equation}
and find a residual that is 20 orders of magnitude smaller than the
quadrupole flux. The residual reflects the fact that our numerical (MST
code) data contains the 1PN and higher corrections. We next subtract the
1PN term
\begin{equation}
\frac{\mathcal{F}_{\rm MST} - \mathcal{F}_{\rm N}}{\mathcal{F}_{\rm N}}
- y\, \frac{\tilde{\mathcal{I}}_1(e) }{\tilde{\mathcal{I}}_0(e)}
= \mathcal{O}(y^{3/2}) ,
\end{equation}
and find residuals that are another 10 orders of magnitude lower. This
reflects the expected 1.5PN tail correction. Using our high-order expansion
for $\varphi(e)$, we subtract and reach 2PN residuals. Continuing in this way,
once the 3PN term is subtracted, the residuals lie at a level 70 orders of
magnitude below the quadrupole flux. We have reached the 3.5PN
contributions, which are encoded in the MST result but whose form is
(heretofore) unknown. Fig.~\ref{fig:residuals1} shows this process
of successive subtraction. \emph{We conclude that the published PN
coefficients} \cite{ArunETC08a,Blan14} \emph{for eccentric orbits in the lowest
order in $\nu$ limit are all correct.} Any error would have to be at the
level of one part in $10^{10}$ (and only then in the 3PN terms) or it would
show up in the residuals.
As a check we made this comparison also for other orbital radii and using the
original expressions in terms of $e_t$ (which we computed from $e$ and $y$
to high precision). The 2008 results \cite{ArunETC08a} continued to stand.
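The successive-subtraction loop can be sketched as follows (a schematic in Python using the standard-library \texttt{decimal} module; the enhancement-function values below are placeholders, not the true $\tilde{\mathcal{I}}_i(e)$, and the actual analysis was carried out in \emph{Mathematica} at up to 200 digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # high working precision, in the spirit of the MST data

# Placeholder values standing in for tilde-I_0, tilde-I_1, tilde-K_{3/2}
# at some fixed eccentricity (NOT the true enhancement-function values).
I0, I1, K32 = Decimal("1.001"), Decimal("-3.71"), Decimal("12.56")

def normalized_flux(y):
    # (F_MST - F_N)/F_N through 1.5PN, mimicking the expansion in the text
    return (y * I1 + y ** Decimal("1.5") * K32) / I0

y = Decimal("1e-20")            # a p ~ 10^20 orbit has y ~ 1e-20
data = normalized_flux(y)       # pretend this came from the MST code

# Subtract PN terms one at a time; each subtraction drops the residual
# by many orders of magnitude, as in the successive-subtraction figure.
res0 = data
res1 = res0 - y * I1 / I0                      # remove 1PN term
res2 = res1 - y ** Decimal("1.5") * K32 / I0   # remove 1.5PN tail
assert abs(res1) < abs(res0) * Decimal("1e-9")
assert abs(res2) < Decimal("1e-60")
```

The assertions mirror the behavior described above: each subtracted PN order reveals a residual suppressed by further powers of $y$, until only numerical rounding remains.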
\section{Determining new PN terms between orders 3.5PN and 7PN}
\label{sec:newPN}
Having confirmed theoretical results through 3PN, we next sought to determine
analytic or numerical coefficients for as-yet unknown PN coefficients at
3.5PN and higher orders. We find new results to 7PN order.
\subsection{A model for the higher-order energy flux}
The process begins with writing an expected form for the expansion. As
discussed previously, beyond 3PN we presently do not know $e_t$, so all new
results are parameterized in terms of the relativistic $e$ (and $y$). Based
on experience with the expansion up to 3PN (and our expansions of the
hereditary terms), we build in the expected eccentricity singular factors
from the outset. In addition, with no guidance from analytic PN theory, we
have no way of separating instantaneous from hereditary terms beyond 3PN
order, and thus denote all higher-order PN enhancement factors with the
notation $\mathcal{L}_i(e)$. Finally, known higher-order work \cite{Fuji12}
in the circular-orbit limit allows us to anticipate the presence of various
logarithmic terms and powers of logs. Accordingly, we take the flux to
have the form
\begin{align}
\label{eqn:energyfluxNew}
\left\langle \frac{dE}{dt} \right\rangle =
&\mathcal{F}_{\rm 3PN}
+\mathcal{F}^{\rm circ}_{\rm N}
\biggl[\mathcal{L}_{7/2}y^{7/2}
+y^4\Bigl(\mathcal{L}_4+\log(y)\mathcal{L}_{4L}\Bigr)
+y^{9/2}\Bigl(\mathcal{L}_{9/2}+\log(y)\mathcal{L}_{9/2L}\Bigr)
+y^5\Bigl(\mathcal{L}_5
+\log(y)\mathcal{L}_{5L}\Bigr)
\notag\\&\quad
+y^{11/2}\Bigl(\mathcal{L}_{11/2}+\log(y)\mathcal{L}_{11/2L}\Bigr)
+ y^6\Bigl(\mathcal{L}_6 + \log(y)\mathcal{L}_{6L}
+ \log^2(y)\mathcal{L}_{6L^2} \Bigr)
+y^{13/2}\Bigl(\mathcal{L}_{13/2}+\log(y)\mathcal{L}_{13/2L}\Bigr)
\notag\\&\quad
+ y^7\Bigl(\mathcal{L}_7 + \log(y)\mathcal{L}_{7L}
+ \log^2(y)\mathcal{L}_{7L^2} \Bigr)
+ y^{15/2}\Bigl(\mathcal{L}_{15/2} + \log(y)\mathcal{L}_{15/2L}
+ \log^2(y)\mathcal{L}_{15/2L^2}\Bigr)
\notag\\&\quad
+ y^{8}\Bigl(\mathcal{L}_{8} + \log(y)\mathcal{L}_{8L}
+ \log^2(y)\mathcal{L}_{8L^2}\Bigr)
+ y^{17/2}\Bigl(\mathcal{L}_{17/2} + \log(y)\mathcal{L}_{17/2L}
+ \log^2(y)\mathcal{L}_{17/2L^2}\Bigr)
\notag\\&\quad
+ y^{9}\Bigl(\mathcal{L}_{9} + \log(y)\mathcal{L}_{9L}
+ \log^2(y)\mathcal{L}_{9L^2}
+ \log^3(y)\mathcal{L}_{9L^3}\Bigr)
\notag\\&\quad
+ y^{19/2}\Bigl(\mathcal{L}_{19/2} + \log(y)\mathcal{L}_{19/2L}
+ \log^2(y)\mathcal{L}_{19/2L^2}
+ \log^3(y)\mathcal{L}_{19/2L^3}\Bigr)
\notag\\&\quad +
y^{10}\Bigl(\mathcal{L}_{10} + \log(y)\mathcal{L}_{10L}
+ \log^2(y)\mathcal{L}_{10L^2}
+ \log^3(y)\mathcal{L}_{10L^3}\Bigr)
\biggr].
\end{align}
It proves useful to fit MST code data all the way through 10PN order even
though we quote new results only up to 7PN.
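A toy version of the fit implied by this model can be sketched in Python (this is not the actual pipeline, which used \emph{Mathematica}'s \texttt{NonlinearModelFit} on high-precision data; the powers and coefficients below are invented for illustration):

```python
import numpy as np

# At one eccentricity the model is a linear combination of basis functions
# y^p * log(y)^q, so the coefficients follow from (column-normalized)
# linear least squares over the sampled y values.
basis = [(3.5, 0), (4, 0), (4, 1), (4.5, 0), (4.5, 1)]   # (p, q) pairs
true_c = np.array([1.3, -2.1, 0.7, 5.0, -0.4])

y = np.linspace(0.05, 0.8, 51)                 # 51 "orbits", as in the text
A = np.column_stack([y**p * np.log(y)**q for p, q in basis])
data = A @ true_c                              # synthetic "flux" data

norms = np.linalg.norm(A, axis=0)              # normalize for conditioning
fit_c = np.linalg.lstsq(A / norms, data, rcond=None)[0] / norms
assert np.allclose(fit_c, true_c)
```

In the real problem the basis runs through 10PN (44 parameters) and the severe ill-conditioning at $y \sim 10^{-20}$ is what forces the use of very high precision arithmetic.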
\subsection{Roadmap for fitting the higher-order PN expansion}
The steps in making a sequence of fits to determine the higher-order PN
expansion are as follows:
\begin{itemize}
\item \emph{Compute data for orbits with various $e$ and $y$}. We compute
fluxes for 1,683 unique orbits, with 33 eccentricities for each of 51
different orbital separations ($p$ or $y$ values). The models include
circular orbits and eccentricities
ranging from $e = 10^{-5}$ to $e = 0.1$. The $p$ range is from $10^{10}$
through $10^{35}$ in half-logarithmic steps, i.e., $10^{10},10^{10.5},\dots$.
The values of $y$ are derived from $p$ and $e$.
\item \emph{Use the expected form of the expansion in $y$}. As mentioned
earlier, known results for circular fluxes on Schwarzschild backgrounds
allow us to surmise the expected terms in the $y$-expansion, shown in
Eqn.~\eqref{eqn:energyfluxNew}. The expansion through 10PN order contains
44 parameters in its $y$ dependence, which can be determined from our dataset
of 51 $y$ values (at each eccentricity).
\item \emph{Eliminate known fit parameters}. The coefficients at 0PN,
1PN, 1.5PN, 2PN, and 3PN relative orders involve known enhancement functions
of the eccentricity $e$ (given in the previous section) and these terms may
be subtracted from the data and excluded from the fit model. It is important
to note that we do not include the 2.5PN term in this subtraction. Though
we have a procedure for expanding the $\mathcal{K}_{5/2}$ term to high order
in $e^2$, it has proven computationally difficult so far to expand beyond
$e^{70}$. This order was sufficient for use in Sec.~\ref{sec:confirmPN} in
confirming prior results to 3PN but is not accurate enough to reach 10PN
(at the large radii we use). We instead include a parameterization of
$\mathcal{K}_{5/2}$ in the fitting model.
\item \emph{Fit for the coefficients on powers of $y$ and $\log(y)$}.
We use \emph{Mathematica's} \texttt{NonlinearModelFit} function to obtain
numerical values for the coefficients $\mathcal{L}_{7/2}$, $\mathcal{L}_4$,
$\dots$ shown in Eqn.~\eqref{eqn:energyfluxNew}. We perform this fit
separately for each of the 33 values of $e$ in the dataset.
\item \emph{Organize the numerically determined functions of $e$ for each
separate coefficient $\mathcal{L}_i(e)$ in the expansion over $y$ and
$\log(y)$}. Having fit to an expansion of the form \eqref{eqn:energyfluxNew}
and eliminated known terms, there remain 38 functions of $e$, each of which
is known at the 33 discrete eccentricities.
\item \emph{Assemble an expected form for the expansion in $e$ of
each $\mathcal{L}_i(e)$}. Based on the pattern in Sec.~\ref{sec:preparePN},
each full (or half) PN order $= N$ will have a leading eccentricity
singular factor of the form $(1-e^2)^{-7/2 - N}$. The remaining power
series will be an expansion in powers of $e^2$.
\item \emph{Fit each model for $\mathcal{L}_i(e)$ using data ranging over
eccentricity}. The function \texttt{NonlinearModelFit} is again used to
find the unknown coefficients in the eccentricity function expansions.
With data on 33 eccentricities, the coefficient models are limited to
at most 33 terms. However, it is possible to do hierarchical fitting. As
lower order coefficients are firmly determined in analytic form (see next
step), they can be eliminated in the fitting model to allow new, higher-order
ones to be included.
\item \emph{Attempt to determine analytic form of $e^2$ coefficients}.
It is possible in some cases to determine the exact analytic form (rational
or irrational) of coefficients of $e^2$ determined only in decimal value
form in the previous step. We use \emph{Mathematica}'s function
\texttt{FindIntegerNullVector} (hereafter FINV), which is an implementation
of the PSLQ integer-relation algorithm.
\item \emph{Assess the validity of the analytic coefficients}.
A rational or irrational number, or combination thereof, predicted by FINV
to represent a given decimal number has a certain probability of being a
coincidence (note: the output of FINV will still be a very accurate
\emph{representation} of the input decimal number). If FINV outputs, say,
a single rational number with $N_N$ digits in its numerator and $N_D$ digits
in its denominator, and this rational agrees with the input decimal number
it purports to represent to $N$ digits, then the likelihood that this is a
coincidence is of order $\mathcal{P} \simeq 10^{N_N+N_D-N}$
\cite{ShahFrieWhit14}. With the analytic coefficients we give in what
follows, in no case is the probability of coincidence larger than $10^{-6}$,
and in many cases the probability is as low as $10^{-90}$ to $10^{-50}$. Other
consistency checks are made as well. It is important that the analytic
output of PSLQ not change when the number of significant digits in the input
is varied (within some range). Furthermore, as we expect rational numbers
in a perturbation expansion to be sums of simpler rationals, a useful
criterion for validity of an experimentally determined rational is that it
have no large prime factors in its denominator \cite{JohnMcDaShahWhit15}.
\end{itemize}
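The last two steps can be mimicked with standard-library tools (a sketch only; the paper uses \emph{Mathematica}'s \texttt{FindIntegerNullVector}, and the decimal string below is constructed for illustration):

```python
from fractions import Fraction

# Toy rational recovery: given a coefficient known only as a decimal
# expansion, search for a small-denominator rational representing it.
decimal_value = "32.311507936507936507936507936507936507936"  # ~40 digits
x = Fraction(decimal_value)
guess = x.limit_denominator(10**6)
assert guess == Fraction(16285, 504)   # the 3.5PN circular-limit coefficient

# Heuristic coincidence probability ~ 10^(N_N + N_D - N) from the text:
N_N = len(str(guess.numerator))        # digits in the numerator
N_D = len(str(guess.denominator))      # digits in the denominator
N = 40                                 # digits of agreement with the input
log10_prob = N_N + N_D - N
assert log10_prob <= -30               # overwhelmingly unlikely to be chance
```

Here $N_N + N_D - N = 5 + 3 - 40 = -32$, so a coincidental match is essentially excluded, in line with the validity criteria above.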
\subsection{The energy flux through 7PN order}
We now give, in mixed analytic and numeric form, the PN expansion (at lowest
order in $\nu$) for the energy flux through 7PN order. Analytic coefficients
are given directly, while well-determined coefficients that are known only
in numerical form are listed in the formulae as numbered parameters like
$b_{26}$. The numerical values of these coefficients are tabulated in
App.~\ref{sec:numericEnh}. We find for the 3.5PN and 4PN (non-log)
terms
\begin{align}
\mathcal{L}_{7/2} &= -\frac{\pi}{(1-e^2)^7}
\biggl(\frac{16285}{504}+ \frac{21500207}{48384}e^2 +
\frac{3345329}{48384}e^4 - \frac{111594754909}{41803776} e^6-
\frac{82936785623}{55738368} e^8 - \frac{11764982139179}{107017666560} e^{10}
\notag\\&\ -
\frac{216868426237103}{9631589990400} e^{12}
- \frac{30182578123501193}{2517055517491200}e^{14}
- \frac{351410391437739607}{48327465935831040} e^{16}
- \frac{1006563319333377521717}{208774652842790092800}e^{18}
\notag\\&\ -
\frac{138433556497603036591}{40776299383357440000}e^{20}
- \frac{16836217054749609972406421}{6736462131727360327680000}e^{22}
- \frac{2077866815397007172515220959}{1091306865339832373084160000}e^{24}
\notag\\&\ +
b_{26}e^{26}+ b_{28}e^{28}+ b_{30}e^{30}+ b_{32}e^{32}
+ b_{34}e^{34}+ b_{36}e^{36}+ b_{38}e^{38}+ b_{40}e^{40}
+ b_{42}e^{42}+ b_{44}e^{44}+ b_{46}e^{46}+ b_{48}e^{48}
\notag\\&\ +
b_{50}e^{50}+ b_{52}e^{52}+ b_{54}e^{54}
+\cdots
\biggr),\label{eqn:Ienh72}\\
\notag \\
\mathcal{L}_4 &=\frac{1}{(1-e^2)^{15/2}}
\biggl[
-\frac{323105549467}{3178375200}+\frac{232597}{4410}\gamma_\text{E}
-\frac{1369}{126}\pi^2+\frac{39931}{294}\log(2)
-\frac{47385}{1568}\log(3)
\notag\\&\ +
\biggl(-\frac{128412398137}{23543520}
+\frac{4923511}{2940}\gamma_\text{E}
-\frac{104549}{252}\pi^2-\frac{343177}{252} \log(2) +\frac{55105839 }{15680}\log(3)
\biggr)e^2
\notag\\&\
+\biggl(-\frac{981480754818517}{25427001600}
+\frac{142278179}{17640}\gamma_\text{E}-\frac{1113487}{504}\pi^2
+\frac{762077713}{5880}\log(2)
-\frac{2595297591}{71680}\log(3)
\notag\\&\hspace{25ex}-\frac{15869140625}{903168}\log(5)\biggr)e^4
\notag\\&\ +\biggl(-\frac{874590390287699}{12713500800}
+\frac{318425291}{35280}\gamma_\text{E}
-\frac{881501}{336}\pi^2-\frac{90762985321}{63504}\log(2)
+\frac{31649037093}{1003520}\log(3)
\notag\\&\hspace{25ex}+\frac{10089048828125}{16257024}\log(5)\biggr)e^6
\notag\\&\ + d_{8}e^{8} + d_{10}e^{10}
+ d_{12}e^{12} + d_{14}e^{14} + d_{16}e^{16} + d_{18}e^{18}
+ d_{20}e^{20} + d_{22}e^{22} + d_{24}e^{24} + d_{26}e^{26}
\notag\\&\ + d_{28}e^{28}
+ d_{30}e^{30} + d_{32}e^{32} + d_{34}e^{34} + d_{36}e^{36} + d_{38}e^{38}
+ d_{40}e^{40}+\cdots
\biggr].\label{eqn:Ienh4}
\end{align}
In both of these expressions the circular orbit limits ($e^0$) were known
\cite{Fuji12}. These results have been presented earlier \cite{Fors14,Fors15,
Evan15a} and are available online. The coefficients through $e^6$ for 3.5PN
and 4PN are also discussed in \cite{SagoFuji15}, which we found to be in
agreement with our results. We next consider the 4PN log contribution, which
we find to have an \emph{exact, closed-form expression}
\begin{align}
\mathcal{L}_{4L} &= \frac{1}{(1-e^2)^{15/2}}
\biggl(
\frac{232597}{8820} + \frac{4923511}{5880}e^2
+ \frac{142278179}{35280}e^4+
\frac{318425291}{70560}e^6
+ \frac{1256401651}{1128960}e^8
+\frac{7220691}{250880}e^{10}
\biggr) . \label{eqn:Ienh4L}
\end{align}
In the 4.5PN non-log term we were only able to find analytic coefficients for
the circular limit (known previously) and the $e^2$ term. We find many
higher-order terms numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{9/2} &= \frac{\pi}{(1-e^2)^8}
\biggl[\frac{265978667519}{745113600}-\frac{6848}{105}\gamma_\text{E}
-\frac{13696}{105}\log(2)
+ \biggl(\frac{5031659060513}{447068160}-\frac{418477 }{252} \gamma_\text{E}
-\frac{1024097 }{1260}\log(2)
\notag\\&\ -\frac{702027}{280}\log(3)\biggr)e^2
+ h_{4}e^{4}+ h_{6}e^{6}+ h_{8}e^{8}+ h_{10}e^{10}
+ h_{12}e^{12}+ h_{14}e^{14}+ h_{16}e^{16}+ h_{18}e^{18}+ h_{20}e^{20}
+ h_{22}e^{22}
\notag\\&\ + h_{24}e^{24}+ h_{26}e^{26}+ h_{28}e^{28}+ h_{30}e^{30}
+ h_{32}e^{32}+ h_{34}e^{34}+ h_{36}e^{36}
\biggr] . \label{eqn:Ienh92}
\end{align}
In the 4.5PN log term we are able to find the first 10 coefficients in
analytic form and 6 more in accurate numerical form
(App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{9/2L} &= -\frac{\pi}{(1-e^2)^8}
\biggl(
\frac{3424}{105} + \frac{418477}{504}e^2+
\frac{32490229}{10080}e^4 +
\frac{283848209}{96768}e^6+
\frac{1378010735}{2322432}e^8
+\frac{59600244089}{4644864000}e^{10}
\notag\\&\ -
\frac{482765917}{7962624000}e^{12}
+\frac{532101153539}{29132587008000}e^{14}
-\frac{576726373021}{199766310912000}e^{16}
+\frac{98932878601597}{3624559945187328000}e^{18}
+g_{20}e^{20}
\notag\\&\ +g_{22}e^{22}+g_{24}e^{24}+g_{26}e^{26}+g_{28}e^{28}+g_{30}e^{30}+
\cdots
\biggr) . \label{eqn:Ienh92L}
\end{align}
For the 5PN non-log term, we are only able to confirm the circular-orbit
limit analytically. Many other terms were found with accurate numerical
values (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_5 &= \frac{1}{(1-e^2)^{17/2}}
\biggl[
-\frac{2500861660823683}{2831932303200}
-\frac{424223}{6804} \pi ^2
+\frac{916628467}{7858620} \gamma_\text{E}
-\frac{83217611 }{1122660}\log(2)+\frac{47385}{196}\log(3)
\notag\\&\ + k_{2}e^{2} + k_{4}e^{4} + k_{6}e^{6}
+ k_{8}e^{8} + k_{10}e^{10} + k_{12}e^{12} + k_{14}e^{14}
+ k_{16}e^{16} + k_{18}e^{18} + k_{20}e^{20} + k_{22}e^{22}
+ k_{24}e^{24}
\notag\\&\ + k_{26}e^{26}+\cdots
\biggr] . \label{eqn:Ienh5}
\end{align}
In the 5PN log term we found the first 13 terms in analytic form, and
several more numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{5L} &= \frac{1}{(1-e^2)^{17/2}}\biggl(
\frac{916628467}{15717240} + \frac{11627266729}{31434480}e^2-
\frac{84010607399}{10478160}e^4-
\frac{67781855563}{1632960}e^6-
\frac{87324451928671}{2011806720}e^8
\notag\\&\ -\frac{301503186907}{29804544}e^{10}-
\frac{752883727}{1290240}e^{12}-
\frac{22176713}{129024}e^{14}-
\frac{198577769}{2064384}e^{16}-
\frac{250595605}{4128768}e^{18}
\notag\\&\ -
\frac{195002899}{4718592}e^{20}-
\frac{280151573}{9437184}e^{22}-
\frac{1675599991}{75497472}e^{24}
+j_{26}e^{26}+j_{28}e^{28}+\cdots
\biggr) . \label{eqn:Ienh5L}
\end{align}
In the 5.5PN non-log term we found analytic forms for the first two terms
with 8 more in numerical form (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{11/2} = \frac{\pi}{(1-e^2)^9}
& \biggl[
\frac{8399309750401}{101708006400}+
\frac{177293}{1176}\gamma_\text{E}
+\frac{8521283}{17640}\log(2)
-\frac{142155}{784}\log(3)
\notag\\&\ +
\biggl(-\frac{6454125584294467}{203416012800}
+\frac{197515529}{17640} \gamma_\text{E}
-\frac{195924727}{17640} \log (2)
+\frac{1909251}{80} \log (3)\biggr)e^2
\notag\\&\
+ m_{4}e^{4} + m_{6}e^{6} + m_{8}e^{8} + m_{10}e^{10} + m_{12}e^{12}
+ m_{14}e^{14} + m_{16}e^{16} + m_{18}e^{18}
+\cdots
\biggr] . \label{eqn:Ienh112}
\end{align}
The 5.5PN log term yielded analytic forms for the first six terms with
several more known only numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{11/2L} &= \frac{\pi}{(1-e^2)^9}
\biggl(
\frac{177293}{2352} + \frac{197515529}{35280}e^2
+\frac{22177125281}{451584}e^4+
\frac{362637121649}{3386880}e^6+
\frac{175129893794507}{2601123840}e^8
\notag\\&\ +
\frac{137611940506079}{13005619200}e^{10}
+l_{12}e^{12}+l_{14}e^{14}+l_{16}e^{16}+\cdots
\biggr) . \label{eqn:Ienh112L}
\end{align}
We only extracted the circular-orbit limit analytically for the 6PN non-log
term. Six more coefficients are known numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_6 &= \frac{1}{(1-e^2)^{19/2}}
\biggl(\frac{3803225263}{10478160}\pi^2
-\frac{27392}{105}\zeta (3)+\frac{1465472}{11025} \gamma_{\text{E}}^2
-\frac{256}{45}\pi^4-\frac{27392}{315} \gamma_\text{E} \pi ^2
-\frac{246137536815857}{157329572400} \gamma_\text{E}
\notag\\&\ +\frac{2067586193789233570693}{602387400044430000}
+\frac{5861888}{11025} \log ^2(2)
-\frac{54784}{315} \pi ^2 \log (2)
-\frac{271272899815409}{157329572400} \log (2)
\notag\\&\ +\frac{5861888}{11025} \gamma_\text{E} \log (2)
-\frac{37744140625}{260941824} \log (5)
-\frac{437114506833}{789268480} \log (3)
+n_2e^2 + n_4e^4+n_6e^6+n_8e^8+n_{10}e^{10}
\notag\\&\ +n_{12}e^{12}+\cdots
\biggr) . \label{eqn:Ienh6}
\end{align}
The 6PN log term yielded analytic forms for the first two terms, with 5
more in numerical form (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{6L} &= \frac{1}{(1-e^2)^{19/2}}
\biggl[
-\frac{246137536815857}{314659144800}-\frac{13696}{315}\pi^2
+\frac{1465472}{11025} \gamma_\text{E}
+\frac{2930944}{11025} \log (2)
\notag\\&\
+\biggl(-\frac{25915820507512391}{629318289600}
- \frac{1773953}{945}\pi^2
+ \frac{189812971}{33075}\gamma_\text{E}
+ \frac{18009277}{4725}\log(2)
+ \frac{75116889}{9800}\log(3)\biggr)e^2
\notag\\&\
+p_4e^4+p_6e^6+p_8e^8+p_{10}e^{10}+p_{12}e^{12}+\cdots
\biggr] . \label{eqn:Ienh6L}
\end{align}
The 6PN squared-log term (first instance of such a term) yielded the first
seven coefficients in analytic form
\begin{align}
\mathcal{L}_{6L^2} &= \frac{1}{(1-e^2)^{19/2}}\biggl(
\frac{366368}{11025} + \frac{189812971}{132300}e^2
+\frac{1052380631}{105840}e^4
+\frac{9707068997}{529200}e^6
+\frac{8409851501}{846720}e^8
+\frac{4574665481}{3386880}e^{10}
\notag\\&\ +\frac{6308399}{301056}e^{12}+\cdots
\biggr) . \label{eqn:Ienh6L2}
\end{align}
At 6.5PN order, we were only able to confirm the circular-orbit limit in
the non-log term. Additional terms are known accurately numerically
(App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{13/2}&=\frac{\pi}{(1-e^2)^{10}}
\biggl(-\frac{81605095538444363}{20138185267200}
+\frac{300277177}{436590} \gamma_\text{E}
-\frac{42817273}{71442} \log (2)
+\frac{142155}{98} \log (3)
+r_2e^2+r_4e^4
\notag\\&\ +r_6e^6+r_8e^8+r_{10}e^{10}
+r_{12}e^{12}+\cdots\biggr) . \label{eqn:Ienh132}
\end{align}
In the 6.5PN log term we found the first two coefficients analytically.
Others are known numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{13/2L} &= \frac{\pi}{(1-e^2)^{10}}\biggl(
\frac{300277177}{873180}+\frac{99375022631}{27941760}e^2+s_4e^4
+s_6e^6+s_8e^8+s_{10}e^{10}
+s_{12}e^{12}+\cdots
\biggr) . \label{eqn:Ienh132L}
\end{align}
At 7PN order in the non-log term, we only confirm the leading term.
Three more terms are known numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_7 &= \frac{1}{(1-e^2)^{21/2}}
\biggl(\frac{58327313257446476199371189}{8332222517414555760000}
+\frac{531077}{2205} \zeta (3)
+\frac{2621359845833}{2383781400} \pi^2
+\frac{531077}{6615}\gamma_\text{E} \pi^2
-\frac{9523 }{945}\pi^4
\notag\\
&\
-\frac{52525903}{154350} \gamma_\text{E}^2
+\frac{9640384387033067}{17896238860500} \gamma_\text{E}
+\frac{1848015}{5488}\log^2(3)
-\frac{5811697}{2450} \log^2(2)
+\frac{128223}{245} \pi^2 \log(2)
\notag\\&\
+\frac{19402232550751339 }{17896238860500}\log(2)
-\frac{142155}{392} \pi^2\log(3)
+\frac{1848015}{2744} \log(2)\log(3)
+\frac{1848015}{2744} \gamma_\text{E}\log (3)
\label{eqn:Ienh7}\\
&\
+\frac{9926708984375}{5088365568} \log (5)
-\frac{471188717 }{231525}\gamma_\text{E} \log (2)
-\frac{6136997968378863 }{1256910054400}\log (3)
+ t_2e^2+t_4e^4+t_6e^6+\cdots
\biggr) . \notag
\end{align}
At 7PN order in the log term we found the first two coefficients analytically.
Three more orders in $e^2$ are known numerically (App.~\ref{sec:numericEnh})
\begin{align}
\mathcal{L}_{7L} &=
\frac{1}{(1-e^2)^{21/2}}
\biggl[
\frac{9640384387033067}{35792477721000}
+\frac{531077}{13230} \pi ^2
-\frac{52525903 }{154350}\gamma_\text{E}
-\frac{471188717}{463050} \log (2)
+\frac{1848015}{5488} \log (3)
\notag\\&\
+\biggl(\frac{5361621824744487121}{28633982176800}
+ \frac{20170061}{1764}\pi^2
- \frac{8436767071}{185220}\gamma_\text{E}
+ \frac{8661528101}{926100}\log(2)
- \frac{21008472903}{274400}\log(3)\biggr)e^2
\notag\\&\ +u_4e^4+u_6e^6+u_8e^8+\cdots
\biggr] . \label{eqn:Ienh7L}
\end{align}
\begin{center}
\begin{figure*}
\includegraphics[scale=1.35]{twoPanelPlot.pdf}
\caption{Agreement between numerical flux data and the 7PN expansion at
smaller radius and larger eccentricities. An orbit with separation of
$p=10^3$ was used. The left panel shows the energy flux as a function of
eccentricity normalized to the circular-orbit limit (i.e., the curve closely
resembles the Peters-Mathews enhancement function). The red curve shows the
7PN fit to this data. On the right, we subtract the fit (through 6PN order)
from the energy flux data points. The residuals have dropped by 14 orders
of magnitude. The residuals are then shown to still be well fit by the
remaining 6.5PN and 7PN parts of the model even for $e=0.6$.
\label{fig:twoPanelPlot}}
\end{figure*}
\end{center}
Finally, at 7PN order there is a squared-log term and we again found the
first two coefficients analytically
\begin{align}
\mathcal{L}_{7L^2} &= -\frac{1}{(1-e^2)^{21/2}}
\biggl(
\frac{52525903}{617400}+\frac{8436767071}{740880}e^2+v_4e^4+v_6e^6
+v_8e^8+\cdots
\biggr) . \label{eqn:Ienh7L2}
\end{align}
\end{widetext}
\subsection{Discussion}
The analytic forms for the $e^2$ coefficients at the 5.5PN non-log, 6PN log,
6.5PN log, 7PN log, and 7PN log-squared orders were previously obtained by
Johnson-McDaniel \cite{JohnMcDa15}. They are obtained by using the eccentric
analogue of the simplification described in \cite{JohnMcDa14} to predict
leading logarithmic-type terms to all orders, starting from the expressions
for the modes given in Appendix G of \cite{MinoETC97}.
\begin{figure}
\includegraphics[scale=0.67]{strongFieldFluxes_blue.jpg}
\caption{Strong-field comparison between the 7PN expansion and energy fluxes
computed with a Lorenz gauge/RWZ gauge hybrid self-force code \cite{OsbuETC14}
(courtesy T.~Osburn).
\label{fig:strongField}}
\end{figure}
The 7PN fit was obtained using orbits with eccentricities between $0.0$
and $0.1$, and using orbital separations of $p = 10^{10}$ through
$p = 10^{35}$. A natural question is how well the PN expansion works if we
compute fluxes for orbits with higher eccentricities and much smaller
separations. The answer is: quite well.
Fig.~\ref{fig:twoPanelPlot} shows (on the left) the energy flux normalized to
its circular-orbit limit (dominated by the Peters-Mathews term) as black points, and
the red curve is the fit from our 7PN model. Here we have reduced the
orbital separation to $p = 10^3$ and we compare the data and model all the
way up to $e = 0.6$. On the right side we show the effect of subtracting
the model containing all terms up to and including the 6PN contributions.
With an orbit with a radius of $p = 10^3$, the residuals have dropped by
14 orders of magnitude. The remaining part of the model (6.5PN and 7PN) is
then shown to still fit these residuals.
We then examined the fit at orbits with still smaller radii. Figure
\ref{fig:strongField} compares our 7PN determination to energy fluxes
obtained by T.~Osburn using a Lorenz gauge/RWZ gauge hybrid code
\cite{OsbuETC14}. Energy fluxes have accuracies of $10^{-3}$ all the way in
as close as $p = 30$.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented a first set of results from a new
eccentric-orbit MST code. The code, written in \emph{Mathematica}, combines
the MST formalism and arbitrary-precision functions to solve the perturbation
equations to an accuracy of 200 digits. We computed the energy flux at
infinity, at lowest order in the mass ratio (i.e., in the region of parameter
space overlapped by BHP and PN theories). In this effort, we computed
results for approximately 1,700 distinct orbits, with up to as many as 7,400
Fourier-harmonic modes per orbit.
The project yielded several principal
new results. First, we confirmed previously computed PN flux expressions
through 3PN order. Second, in the process of this analysis, we developed
a procedure and new high-order series expansions for the non-closed form
hereditary terms at 1.5PN, 2.5PN, and 3PN order. Significantly, at 2.5PN
order we overcame some of the previous roadblocks to writing down accurate
high-order expansions for this flux contribution (App.~\ref{sec:massQuad}).
The 3PN hereditary term was shown to have a subtle singular behavior as
$e_t \to 1$. All of this clarification of the behavior of the hereditary
terms was aided by an asymptotic analysis of a set of enhancement functions.
In the process we were able to predict the form of eccentricity singular
factors that appear at each PN order. Third, based on that understanding,
we then used the high accuracy of the code to find a mixture of new analytic
and numeric flux terms between 3.5PN and 7PN. We built in expected forms
for the eccentricity singular factors, allowing the determined power series
in $e^2$ to better determine the flux at high values of $e$.
The code we have developed for this project can be used not only to compute
fluxes but also local GSF quantities. Recently Akcay et al.~\cite{AkcaETC15}
made comparisons between GSF and PN values of the eccentric orbit
generalization of Detweiler's redshift invariant
($\Delta U$) \cite{Detw08,BaraSago11}. We may be able to extend these
comparisons beyond the current 4PN level and compute currently unknown
coefficients (in the linear in $\mu/M$ limit). We can also modify the code
to make calculations on a Kerr background.
\acknowledgments
The authors thank T.~Osburn, A.~Shah, B.~Whiting, S.A.~Hughes,
N.K.~Johnson-McDaniel, and L.~Barack for helpful discussions. We also
thank L.~Blanchet and an anonymous referee for separately asking several
questions that led us to include a section on asymptotic analysis of
enhancement functions and eccentricity singular factors. This work was
supported in part by NSF grant PHY-1506182. EF acknowledges support from
the Royster Society of Fellows at the University of North Carolina-Chapel
Hill. CRE is grateful for the hospitality of the Kavli Institute for
Theoretical Physics at UCSB (which is supported in part by the National
Science Foundation under Grant No. NSF PHY11-25915) and the Albert Einstein
Institute in Golm, Germany, where part of this work was initiated. CRE also
acknowledges support from the Bahnson Fund at the University of North
Carolina-Chapel Hill. SH acknowledges support from the Albert Einstein
Institute and also from Science Foundation Ireland under Grant
No.~10/RFP/PHY2847. SH also acknowledges financial support provided under
the European Union's H2020 ERC Consolidator Grant ``Matter and strong-field
gravity: New frontiers in Einstein’s theory'' grant agreement
no.~MaGRaTh--646597.
\label{sec:intro}
Many interesting \(\Cst\)\nobreakdash-algebras may be realised as
\(\Cst\)\nobreakdash-algebras of étale, locally compact groupoids. Examples
are the \(\Cst\)\nobreakdash-algebras associated to group actions on spaces,
(higher-rank) graphs, and self-similar groups. These examples of
\(\Cst\)\nobreakdash-algebras are defined by some combinatorial or dynamical
data. This data is interpreted in
\cites{Antunes-Ko-Meyer:Groupoid_correspondences,
Meyer:Diagrams_models} as a diagram in a certain bicategory, whose
objects are étale groupoids and whose arrows are called groupoid
correspondences. A groupoid correspondence is a space with
commuting actions of the two groupoids, subject to some conditions.
In favourable cases, the \(\Cst\)\nobreakdash-algebra associated to such a
diagram is a groupoid \(\Cst\)\nobreakdash-algebra of a certain étale
groupoid built from the diagram. A candidate for this groupoid is
proposed in~\cite{Meyer:Diagrams_models}, where it is called the
groupoid model of the diagram.
Here we prove two important results about groupoid models. First,
any diagram of groupoid correspondences has a groupoid model.
Secondly, the groupoid model is a locally compact groupoid provided
the diagram is proper and consists of locally compact groupoid
correspondences. The latter is crucial because the groupoid
\(\Cst\)\nobreakdash-algebra of an étale groupoid is only defined if it is
locally compact.
By the results in~\cite{Meyer:Diagrams_models}, the groupoid model
exists if and only if the category of actions of the diagram on
spaces defined in~\cite{Meyer:Diagrams_models} has a terminal
object, and then it is unique up to isomorphism. To show that such
a terminal diagram action exists, we prove that the category of
actions is cocomplete and has a coseparating set of objects; this
criterion is also used to prove the Special Adjoint Functor Theorem.
Proving that the groupoid model is locally compact is more
challenging. The key ingredient here is the relative
Stone--\v{C}ech compactification. This is defined for a space~\(Y\)
with a continuous map to a locally compact Hausdorff ``base
space''~\(B\), and produces another space over~\(B\) that is proper
in the sense that the map to~\(B\) is proper and its underlying
space is Hausdorff. If~\(B\) is a point, then the relative
Stone--\v{C}ech compactification becomes the usual Stone--\v{C}ech
compactification. An action of a diagram on a space~\(Y\) contains
a map \(Y\to \Gr^0\) for a certain space~\(\Gr^0\), which is locally
compact and Hausdorff if and only if the diagram is locally compact.
If the diagram is proper, then the action on~\(Y\) extends uniquely
to an action on the relative Stone--\v{C}ech compactification.
Since the relative Stone--\v{C}ech compactification is a Hausdorff
space with a proper map to the locally compact Hausdorff
space~\(\Gr^0\), it is itself a locally compact Hausdorff space.
Then an abstract nonsense argument shows that the relative
Stone--\v{C}ech compactification of a universal action must be
homeomorphic to the universal action. This shows that the universal
action lives on a locally compact Hausdorff space that is proper
over~\(\Gr^0\). As a consequence, this space is compact
if~\(\Gr^0\) is compact.
The main result in this article answers an important, but technical
question in the previous article~\cite{Meyer:Diagrams_models}.
Therefore, we assume that the reader has already
seen~\cite{Meyer:Diagrams_models} and we do not attempt to make this
article self-contained. In \longref{Section}{sec:preparation}, we
only recall the most crucial results from
\cites{Antunes-Ko-Meyer:Groupoid_correspondences,
Meyer:Diagrams_models}. In
\longref{Section}{sec:general_existence}, we prove that any diagram
has a groupoid model --~not necessarily Hausdorff or locally
compact. In \longref{Section}{sec:relative_SC}, we introduce the
relative Stone--\v{C}ech compactification and prove some properties
that we are going to need. In \longref{Section}{sec:extend_action},
we prove that an action of an étale groupoid or of a diagram of
proper, locally compact étale groupoid correspondences extends
canonically to the relative Stone--\v{C}ech compactification. In
\longref{Section}{sec:proper_model}, we use this to prove that the
universal action of such a diagram lives on a space that is
Hausdorff, locally compact, and proper over~\(\Gr^0\). To conclude,
we discuss two examples. One of them shows that the groupoid model
may fail to be locally compact if the groupoid correspondences in
the underlying diagram are not proper.
\section{Preparations}
\label{sec:preparation}
In this section, we briefly recall the definition of the bicategory
of groupoid correspondences, diagrams of groupoid correspondences,
their actions on spaces, and the universal action of a diagram. We
describe actions of diagrams through slices. More details may be
found in \cites{Antunes-Ko-Meyer:Groupoid_correspondences,
Meyer:Diagrams_models}.
We describe a topological groupoid~\(\Gr\) by topological spaces
\(\Gr\) and \(\Gr^0\subseteq\Gr\) of arrows and objects with continuous range
and source maps \(\rg,\s\colon \Gr \rightrightarrows \Gr^0\), a
continuous multiplication map
\(\Gr\times_{\s,\Gr^0,\rg} \Gr \to \Gr\), \((g,h)\mapsto g\cdot h\),
such that each object has a unit arrow and each arrow has an inverse
with the usual algebraic properties, and such that the unit map and
the inversion map are continuous as well. We tacitly assume all groupoids
to be \emph{étale}, that is, \(\s\) and~\(\rg\) are local
homeomorphisms. This implies that each arrow \(g\in\Gr\) has an
open neighbourhood \(\U\subseteq \Gr\) such that \(\s|_\U\)
and~\(\rg|_\U\) are homeomorphisms onto open subsets of~\(\Gr^0\).
Such an open subset is called a \emph{slice}.
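The following example is standard and only meant to illustrate slices;
it is not needed later.
\begin{example}
  Let a discrete group~\(G\) act on a topological space~\(X\) by
  homeomorphisms and form the transformation groupoid \(G\ltimes X\)
  with arrow space \(G\times X\), object space~\(X\), and
  \(\s(g,x) = x\), \(\rg(g,x) = g\cdot x\). This groupoid is étale,
  and the subsets \(\{g\}\times U\) for \(g\in G\) and open
  \(U\subseteq X\) are slices; they form a basis for the topology on
  the arrow space.
\end{example}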
\begin{definition}
An (étale) groupoid~\(\Gr\) is called \emph{locally compact} if
its object space~\(\Gr^0\) is Hausdorff and locally compact.
\end{definition}
If~\(\Gr\) is a locally compact groupoid, then its arrow
space~\(\Gr\) is locally compact and locally Hausdorff, but it need
not be Hausdorff. We only know that each slice \(\U\subseteq \Gr\)
is Hausdorff and locally compact because it is homeomorphic to an open
subset in~\(\Gr^0\). As in~\cite{Meyer:Diagrams_models}, we allow
groupoids that are not locally compact. We need this for the
general existence result for groupoid models.
\begin{definition}[\cite{Meyer:Diagrams_models}*{Definitions 2.7--9}]
\label{def:Bibundles}
Let \(\Gr[H]\) and~\(\Gr\) be (étale) groupoids. An
\textup{(}étale\textup{)} \emph{groupoid correspondence}
from~\(\Gr\) to~\(\Gr[H]\), denoted
\(\Bisp\colon \Gr[H]\leftarrow \Gr\), is a space~\(\Bisp\) with
commuting actions of \(\Gr[H]\) on the left and~\(\Gr\) on the
right, such that the right anchor map \(\s\colon \Bisp\to \Gr^0\)
is a local homeomorphism and the right \(\Gr\)\nobreakdash-action is basic.
A correspondence \(\Bisp\colon \Gr[H]\leftarrow \Gr\) is
\emph{proper} if the map \(\rg_*\colon \Bisp/\Gr\to \Gr[H]^0\)
induced by~\(\rg\) is proper.
Let \(\Gr[H]\) and~\(\Gr\) be locally compact groupoids. A
\emph{locally compact groupoid correspondence}
\(\Bisp\colon \Gr[H]\leftarrow \Gr\) is a groupoid
correspondence~\(\Bisp\) such that~\(\Bisp/\Gr\) is Hausdorff.
\end{definition}
The ``groupoids'' and ``groupoid correspondences'' as defined
in~\cite{Antunes-Ko-Meyer:Groupoid_correspondences} are the
``locally compact groupoids'' and the ``locally compact groupoid
correspondences'' in the notation in this article.
\begin{definition}[\cite{Antunes-Ko-Meyer:Groupoid_correspondences}*{Definition~7.2}]
\label{def:correspondence_slices}
Let \(\Bisp\colon \Gr[H]\leftarrow \Gr\) be a groupoid
correspondence. A \emph{slice} of~\(\Bisp\) is an open subset
\(\U\subseteq \Bisp\) such that both \(\s\colon \Bisp \to \Gr^0\)
and the orbit space projection \(\Qu\colon \Bisp\prto \Bisp/\Gr\)
are injective on~\(\U\). Let \(\Bis(\Bisp)\) be the set of all
slices of~\(\Bisp\).
\end{definition}
Let \(\Bisp\colon \Gr[H]\leftarrow \Gr\) be a groupoid
correspondence. Then the slices of~\(\Bisp\) form a basis for the
topology of~\(\Bisp\).
Groupoid correspondences may be composed, and this gives rise to a
bicategory~\(\Grcat\)
(see~\cite{Antunes-Ko-Meyer:Groupoid_correspondences}). We only
need this structure to talk about bicategory homomorphisms
into~\(\Grcat\). Such a homomorphism is described more concretely
in~\cite{Meyer:Diagrams_models}:
\begin{proposition}[\cite{Meyer:Diagrams_models}*{Proposition 3.1}]
\label{pro:diagrams_in_Grcat}
Let~\(\Cat\) be a category. A \emph{\(\Cat\)\nobreakdash-shaped diagram of
groupoid correspondences} \(F\colon \Cat\to\Grcat\) is given by
\begin{enumerate}
\item groupoids~\(\Gr_x\) for all objects~\(x\) of~\(\Cat\);
\item correspondences \(\Bisp_g\colon \Gr_x\leftarrow \Gr_y\) for all
arrows \(g\colon x\leftarrow y\) in~\(\Cat\);
\item isomorphisms of correspondences \(\mu_{g,h}\colon
\Bisp_g\Grcomp_{\Gr_y} \Bisp_h\xrightarrow\sim \Bisp_{g h}\) for all pairs of
composable arrows \(g\colon z\leftarrow y\), \(h\colon y\leftarrow
x\) in~\(\Cat\);
\end{enumerate}
such that
\begin{enumerate}[label=\textup{(\ref*{pro:diagrams_in_Grcat}.\arabic*)},
leftmargin=*,labelindent=0em]
\item \label{en:diagrams_in_Grcat_1} \(\Bisp_x\) for an object~\(x\)
of~\(\Cat\) is the identity correspondence~\(\Gr_x\) on~\(\Gr_x\);
\item \label{en:diagrams_in_Grcat_2}
\(\mu_{g,y}\colon \Bisp_g \Grcomp_{\Gr_y} \Gr_y \xrightarrow\sim \Bisp_g\)
and
\(\mu_{x,g}\colon \Gr_x \Grcomp_{\Gr_x} \Bisp_g \xrightarrow\sim \Bisp_g\)
for an arrow \(g\colon x\leftarrow y\)
in~\(\Cat\)
are the canonical isomorphisms;
\item \label{en:diagrams_in_Grcat_3} for all composable arrows
\(g_{01}\colon x_0\leftarrow x_1\), \(g_{12}\colon x_1\leftarrow
x_2\), \(g_{23}\colon x_2\leftarrow x_3\) in~\(\Cat\), the
following diagram commutes:
\begin{equation}
\label{eq:coherence_category-diagram}
\begin{tikzpicture}[yscale=1.5,xscale=3,baseline=(current bounding
box.west)]
\node (m-1-1) at (144:1)
{\((\Bisp_{g_{01}}\Grcomp_{\Gr_{x_1}} \Bisp_{g_{12}})
\Grcomp_{\Gr_{x_2}} \Bisp_{g_{23}}\)};
\node (m-1-1b) at (216:1) {\(\Bisp_{g_{01}}\Grcomp_{\Gr_{x_1}}
(\Bisp_{g_{12}}\Grcomp_{\Gr_{x_2}} \Bisp_{g_{23}})\)};
\node (m-1-2) at (72:1)
{\(\Bisp_{g_{02}}\Grcomp_{\Gr_{x_2}}\Bisp_{g_{23}}\)};
\node (m-2-1) at (288:1)
{\(\Bisp_{g_{01}}\Grcomp_{\Gr_{x_1}}\Bisp_{g_{13}}\)};
\node (m-2-2) at (0:.8) {\(\Bisp_{g_{03}}\)};
\draw[dar] (m-1-1) -- node[swap] {\(\scriptstyle\cong\)} node
{\scriptsize\textup{associator}} (m-1-1b);
\draw[dar] (m-1-1.north) -- node[very near end]
{\(\scriptstyle\mu_{g_{01},g_{12}}\Grcomp_{\Gr_{x_2}}\id_{\Bisp_{g_{23}}}\)}
(m-1-2.west);
\draw[dar] (m-1-1b.south) -- node[swap,very near end]
{\(\scriptstyle\id_{\Bisp_{g_{01}}}\Grcomp_{\Gr_{x_1}}\mu_{g_{12},g_{23}}\)}
(m-2-1.west);
\draw[dar] (m-1-2.south) -- node[inner sep=0pt]
{\(\scriptstyle\mu_{g_{02},g_{23}}\)} (m-2-2);
\draw[dar] (m-2-1.north) -- node[swap,inner sep=1pt]
{\(\scriptstyle\mu_{g_{01},g_{13}}\)} (m-2-2);
\end{tikzpicture}
\end{equation}
here \(g_{02}\mathrel{\vcentcolon=} g_{01}\circ g_{12}\), \(g_{13}\mathrel{\vcentcolon=}
g_{12}\circ g_{23}\), and \(g_{03}\mathrel{\vcentcolon=} g_{01}\circ g_{12}\circ
g_{23}\).
\end{enumerate}
\end{proposition}
\begin{definition}[\cite{Meyer:Diagrams_models}*{Definition 3.8}]
Let~\(\Cat\) be a category. A diagram of groupoid correspondences
\(F\colon \Cat\to\Grcat\) described by the data
\((\Gr_x,\Bisp_g,\mu_{g,h})\) is \emph{proper} if all the groupoid
correspondences~\(\Bisp_g\) are proper. It is \emph{locally
compact} if all the groupoids~\(\Gr_x\) and the
correspondences~\(\Bisp_g\) are locally compact.
\end{definition}
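The following example of a diagram shape is only meant for
orientation and is described informally.
\begin{example}
  Let~\(\Cat\) be the free monoid on one generator, viewed as a
  category with one object whose arrows are the natural numbers with
  addition. A diagram \(F\colon \Cat\to\Grcat\) may then be
  described, up to isomorphism, by a single groupoid~\(\Gr\) together
  with a single correspondence \(\Bisp\colon \Gr\leftarrow\Gr\),
  namely, \(\Bisp_1\); the higher correspondences are
  \(\Bisp_n \cong \Bisp \Grcomp_{\Gr} \dotsb \Grcomp_{\Gr} \Bisp\)
  with \(n\)~factors, with the canonical isomorphisms~\(\mu_{m,n}\).
\end{example}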
\begin{definition}[\cite{Meyer:Diagrams_models}*{Definition 4.5}]
\label{def:diagram_dynamical_system}
An \emph{\(F\)\nobreakdash-action} on a space~\(Y\) consists of
\begin{itemize}
\item a partition \(Y = \bigsqcup_{x\in\Cat^0} Y_x\) into clopen
subsets;
\item continuous maps \(\rg\colon Y_x \to \Gr_x^0\);
\item open, continuous, surjective maps \(\alpha_g\colon \Bisp_g
\times_{\s,\Gr_x^0,\rg} Y_x \to Y_{x'}\) for arrows \(g\colon
x'\leftarrow x\) in~\(\Cat\), denoted multiplicatively as
\(\alpha_g(\gamma,y) = \gamma\cdot y\);
\end{itemize}
such that
\begin{enumerate}[label=\textup{(\ref*{def:diagram_dynamical_system}.\arabic*)},
leftmargin=*,labelindent=0em]
\item \label{en:diagram_dynamical_system1}
\(\rg(\gamma_2\cdot y) = \rg(\gamma_2)\) and \(\gamma_1\cdot
(\gamma_2\cdot y) = (\gamma_1 \cdot \gamma_2)\cdot y\) for
composable arrows \(g_1,g_2\) in~\(\Cat\), \(\gamma_1\in
\Bisp_{g_1}\), \(\gamma_2\in \Bisp_{g_2}\), and \(y\in
Y_{\s(g_2)}\) with \(\s(\gamma_1) = \rg(\gamma_2)\),
\(\s(\gamma_2) = \rg(y)\);
\item \label{en:diagram_dynamical_system2}
if \(\gamma\cdot y = \gamma'\cdot y'\) for \(\gamma,\gamma'\in
\Bisp_g\), \(y,y'\in Y_{\s(g)}\), there is \(\eta\in \Gr_{\s(g)}\)
with \(\gamma' = \gamma\cdot \eta\) and \(y = \eta\cdot y'\);
equivalently, \(\Qu(\gamma)=\Qu(\gamma')\) for the orbit space
projection \(\Qu\colon \Bisp_g \to \Bisp_g/\Gr_{\s(g)}\)
and \(y = \braket{\gamma}{\gamma'} y'\).
\end{enumerate}
\end{definition}
\begin{definition}[\cite{Meyer:Diagrams_models}*{Definition 4.13}]
\label{def:universal_F-action}
An \(F\)\nobreakdash-action~\(\Omega\) is \emph{universal} if for any
\(F\)\nobreakdash-action~\(Y\), there is a unique \(F\)\nobreakdash-equivariant map
\(Y\to \Omega\).
\end{definition}
\begin{definition}[\cite{Meyer:Diagrams_models}*{Definition 4.13}]
\label{def:universal_action}
A \emph{groupoid model for \(F\)\nobreakdash-actions} is an étale
groupoid~\(\Gr[U]\) with natural bijections between the sets of
\(\Gr[U]\)\nobreakdash-actions and \(F\)\nobreakdash-actions on~\(Y\) for all
spaces~\(Y\).
\end{definition}
It follows from~\cite{Meyer:Diagrams_models}*{Proposition~5.12} that
a diagram has a groupoid model if and only if it has a universal
\(F\)\nobreakdash-action. By definition, an \(F\)\nobreakdash-action is universal if
and only if it is terminal in the category of \(F\)\nobreakdash-actions. Our
first goal below will be to prove that any diagram of groupoid
correspondences has a universal \(F\)\nobreakdash-action and hence also a
groupoid model. The universal action and the groupoid model of a
diagram are unique up to canonical isomorphism if they exist (see
\cite{Meyer:Diagrams_models}*{Proposition~4.16}).
A key point in our construction of the universal \(F\)\nobreakdash-action is
an alternative description of an \(F\)\nobreakdash-action, which uses partial
homeomorphisms associated to slices of the groupoid correspondences
in the diagram.
Let \(\Bisp\colon \Gr[H]\leftarrow \Gr\) be a groupoid
correspondence and let \(\U,\V\subseteq \Bisp\) be slices. Recall
that \(\braket{x}{y}\) for \(x,y\in\Bisp\) with \(\Qu(x) = \Qu(y)\)
is the unique arrow in~\(\Gr\) with \(x\cdot \braket{x}{y} = y\).
The subset
\[
\braket{\U}{\V} \mathrel{\vcentcolon=}
\setgiven{\braket{x}{y}\in\Gr}{x\in\U,\ y\in\V,\ \Qu(x)=\Qu(y)}
\]
is a slice in the groupoid~\(\Gr\) by
\cite{Antunes-Ko-Meyer:Groupoid_correspondences}*{Lemma~7.7}. Next,
let \(\Bisp\colon \Gr[H]\leftarrow \Gr\) and
\(\Bisp[Y]\colon \Gr\leftarrow \Gr[K]\) be groupoid correspondences
and let \(\U\subseteq \Bisp\) and \(\V\subseteq \Bisp[Y]\) be
slices. Then
\[
\U \cdot \V\mathrel{\vcentcolon=}
\setgiven{[x,y] \in \Bisp\Grcomp_{\Gr} \Bisp[Y]}{x\in\U,\
y\in\V,\ \s(x) = \rg(y)}
\]
is a slice in the composite groupoid correspondence
\(\Bisp\Grcomp_{\Gr} \Bisp[Y]\) by
\cite{Antunes-Ko-Meyer:Groupoid_correspondences}*{Lemma~7.14}.
Let~\(F\) be a diagram of groupoid correspondences. Let \(\Bis(F)\)
be the set of all slices of the correspondences~\(\Bisp_g\) for all
arrows \(g\in\Cat\), modulo the relation that we identify the empty
slices in \(\Bis(\Bisp_g)\) for all \(g\in \Cat\). If \(g,h\in\Cat\)
are composable arrows and \(\U\subseteq \Bisp_g\),
\(\V\subseteq \Bisp_h\) are slices, then \(\U\V \mathrel{\vcentcolon=} \mu_{g,h}(\U\cdot \V)\)
is a slice in~\(\Bisp_{g h}\). If \(g,h\) are not composable, then
we let \(\U\V\) be the empty slice~\(\emptyset\). This
turns~\(\Bis(F)\) into a semigroup with zero element~\(\emptyset\).
\begin{definition}
Let~\(Y\) be a topological space. A \emph{partial homeomorphism}
of~\(Y\) is a homeomorphism between two open subsets of~\(Y\).
These are composed by the obvious formula: if \(f,g\) are partial
homeomorphisms of~\(Y\), then \(f g\) is the partial homeomorphism
of~\(Y\) that is defined on \(y\in Y\) if and only if \(g(y)\) and
\(f(g(y))\) are defined, and then \((f g)(y) \mathrel{\vcentcolon=} f(g(y))\).
If~\(f\) is a partial homeomorphism of~\(Y\), we let~\(f^*\) be
its ``partial inverse'', defined on the image of~\(f\) by
\(f^*(f(y)) = y\) for all \(y\) in the domain of~\(f\).
\end{definition}
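The following remark, included for the reader's convenience, explains
the notation \(I(Y)\) used below.
\begin{remark}
  If~\(f\) is a partial homeomorphism of~\(Y\) with domain~\(U\),
  then \(f^* f = \id_U\), \(f f^* = \id_{f(U)}\), and
  \(f f^* f = f\). Hence the partial homeomorphisms of~\(Y\) form an
  inverse monoid under the composition above, which we denote
  by~\(I(Y)\), following \cite{Meyer:Diagrams_models}.
\end{remark}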
Let~\(Y\) with the partition \(Y = \bigsqcup_{x\in\Cat^0} Y_x\) be
an \(F\)\nobreakdash-action. Then slices in \(\Bis(F)\) act on~\(Y\) by
partial homeomorphisms.
For an arrow \(g\colon x\leftarrow x'\) in~\(\Cat\), a slice
\(\U\subseteq \Bisp_g\) acts on~\(Y\) by a partial homeomorphism
\[
\vartheta(\U) \colon Y_{x'} \supseteq \rg^{-1}(\s(\U)) \to Y_x,
\]
which maps \(y \in Y_{x'}\) with \(\rg(y) \in \s(\U)\) to
\(\gamma\cdot y\) for the unique \(\gamma\in \U\) with
\(\s(\gamma)=\rg(y)\). The following lemmas describe
\(F\)\nobreakdash-actions and \(F\)\nobreakdash-equivariant maps through these partial
homeomorphisms.
\begin{lemma}[\cite{Meyer:Diagrams_models}*{Lemma~5.3}]
\label{lem:F-action_from_theta}
Let~\(Y\) be a space and let
\(\rg\colon Y\to \bigsqcup_{x\in\Cat^0} \Gr_x^0\) and
\(\vartheta\colon \Bis(F)\to I(Y)\) be maps. These come from an
\(F\)\nobreakdash-action on~\(Y\) if and only if
\begin{enumerate}[label=\textup{(\ref*{lem:F-action_from_theta}.\arabic*)},
leftmargin=*,labelindent=0em]
\item \label{en:F-action_from_theta1}%
\(\vartheta(\U \V) = \vartheta(\U)\vartheta(\V)\)
for all \(\U,\V\in \Bis(F)\);
\item \label{en:F-action_from_theta2}%
\(\vartheta(\U_1)^*\vartheta(\U_2) =
\vartheta(\braket{\U_1}{\U_2})\)
for all \(g\in\Cat\), \(\U_1,\U_2\in\Bis(\Bisp_g)\);
\item \label{en:F-action_from_theta3}%
the images of~\(\vartheta(\U)\) for \(\U\in\Bis(\Bisp_g)\) cover
\(Y_{\rg(g)} \mathrel{\vcentcolon=} \rg^{-1}(\Gr_{\rg(g)}^0)\) for each
\(g\in\Cat\);
\item \label{en:F-action_from_theta5}%
\(\rg\circ \vartheta(\U) = \U_\dagger\circ\rg\) as partial maps
\(Y \to \Gr^0\) for any \(\U\in\Bis(F)\).
\end{enumerate}
The corresponding \(F\)\nobreakdash-action on~\(Y\) is unique if it exists,
and it satisfies
\begin{enumerate}[label=\textup{(\ref*{lem:F-action_from_theta}.\arabic*)},
leftmargin=*,labelindent=0em,resume]
\item \label{en:F-action_from_theta4}%
for \(U\subseteq \Gr_x^0\) open, \(\vartheta(U)\) is the
identity map on~\(\rg^{-1}(U)\);
\item \label{en:F-action_from_theta6}%
for any \(\U\in\Bis(F)\), the domain of \(\vartheta(\U)\) is
\(\rg^{-1}(\s(\U))\).
\end{enumerate}
\end{lemma}
\begin{lemma}[\cite{Meyer:Diagrams_models}*{Lemma~5.4}]
\label{lem:theta_gives_equivariant}
Let \(Y\) and~\(Y'\) be \(F\)\nobreakdash-actions. A continuous map
\(\varphi\colon Y\to Y'\) is \(F\)\nobreakdash-equivariant if and only if
\(\rg'\circ \varphi = \rg\) and
\(\vartheta'(\U)\circ\varphi = \varphi\circ\vartheta(\U)\) for all
\(\U\in \Bis(F)\).
\end{lemma}
\section{General existence of a groupoid model}
\label{sec:general_existence}
Our next goal is to prove that any diagram of groupoid
correspondences has a groupoid model. By the results
of~\cite{Meyer:Diagrams_models} mentioned above, it suffices to show
that its category of actions has a terminal object. Our proof will
use the following criterion for this:
\begin{lemma}
\label{lem:final_object_exists}
Let~\(\Cat[D]\) be a cocomplete, locally small category. Assume
that there is a set of objects \(\Phi\subseteq \Cat[D]\) such that
for any object \(x\in\Cat[D]^0\) there is a \(y\in\Phi\) and an
arrow \(x \to y\). Then~\(\Cat[D]\) has a terminal object.
\end{lemma}
\begin{proof}
This is dual to \cite{Riehl:Categories_context}*{Lemma 4.6.5},
which characterises the existence of an initial object in a
complete, locally small category.
\end{proof}
\begin{theorem}
\label{the:groupoid_model_universal_action_exists}
Any diagram of groupoid correspondences
\(F\colon \Cat \to \Grcat\) has a universal \(F\)\nobreakdash-action and a
groupoid model.
\end{theorem}
\begin{proof}
By the discussion above, it suffices to prove that the category of
\(F\)\nobreakdash-actions satisfies the assumptions in
\longref{Lemma}{lem:final_object_exists}. We first exhibit the
set of objects~\(\Phi\).
Let~\(Y\) be any space with an \(F\)\nobreakdash-action. Equip~\(Y\) with
the canonical action of the inverse semigroup~\(\IS(F)\). Call an
open subset of~\(Y\) \emph{necessary} if it is the domain of the
partial homeomorphism of~\(Y\) given by some
element of~\(\IS(F)\). Let~\(\tau'\) be the topology on~\(Y\)
that is generated by the necessary open subsets, and let~\(Y'\)
be~\(Y\) with the topology~\(\tau'\). Let~\(Y''\) be the quotient
of~\(Y'\) by the equivalence relation where two points \(y_1,y_2\)
are identified if
\[
\setgiven{U\in\tau'}{y_1 \in U}
= \setgiven{U\in\tau'}{y_2 \in U}
\]
and \(\rg(y_1) = \rg(y_2)\) for the canonical continuous map
\(\rg\colon Y\to \bigsqcup_{x\in\Cat^0} \Gr_x^0\). The continuous
map \(\rg\colon Y\to \Gr^0 \mathrel{\vcentcolon=} \bigsqcup_{x\in\Cat^0} \Gr_x^0\)
descends to a map on~\(Y''\), which is continuous because the
subsets \(\rg^{-1}(U)\) for open subsets \(U\subseteq \Gr_x^0\),
\(x\in\Cat^0\) are ``necessary'' by \ref{en:F-action_from_theta4}.
The \(\IS(F)\)\nobreakdash-action on~\(Y\) descends to an
\(\IS(F)\)\nobreakdash-action on~\(Y''\) because all the domains of
elements of \(\IS(F)\) are in~\(\tau'\). Then
\longref{Lemma}{lem:F-action_from_theta} implies that the
\(F\)\nobreakdash-action on~\(Y\) descends to an \(F\)\nobreakdash-action
on~\(Y''\). The quotient map \(Y\prto Y''\) is a continuous
\(F\)\nobreakdash-equivariant map.
Next, we control the cardinality of the set~\(Y''\). By
construction, finite intersections of necessary open subsets form
a basis of the topology~\(\tau'\). A point in~\(Y''\) is
determined by its image in~\(\Gr^0\) and the set of basic open
subsets that contain it. This defines an injective map
from~\(Y''\) to the product of~\(\Gr^0\) and the power
set~\(\mathcal{P}(\Upsilon)\) for the set~\(\Upsilon\) of finite
subsets of \(\IS(F)\). We may use this injective map to transfer
the \(F\)\nobreakdash-action on~\(Y''\) to an isomorphic \(F\)\nobreakdash-action on
a subset of \(\Gr^0\times \mathcal{P}(\Upsilon)\), equipped with
some topology. Let~\(\Phi\) be the set of all \(F\)\nobreakdash-actions on
subsets of \(\Gr^0\times \mathcal{P}(\Upsilon)\), equipped with
some topology. This is indeed a set, not a class. The argument
above shows that any \(F\)\nobreakdash-action admits a continuous
\(F\)\nobreakdash-equivariant map to an \(F\)\nobreakdash-action in~\(\Phi\), as
required.
The category of \(F\)\nobreakdash-actions is clearly locally small. It
remains to prove that it is cocomplete. It suffices to prove that
it has all small coproducts and coequalisers (see
\cite{Riehl:Categories_context}*{Theorem~3.4.12}). Coproducts are
easy: if \((Y_i)_{i\in I}\) is a set of \(F\)\nobreakdash-actions, then the
disjoint union \(\bigsqcup_{i\in I} Y_i\) with the canonical
topology carries a unique \(F\)\nobreakdash-action for which the inclusions
\(Y_i \to \bigsqcup_{i\in I} Y_i\) are all \(F\)\nobreakdash-equivariant,
and this is a coproduct in the category of \(F\)\nobreakdash-actions. Now
let \(Y_1\) and~\(Y_2\) be two spaces with \(F\)\nobreakdash-actions and
let \(f,g\colon Y_1 \rightrightarrows Y_2\) be two
\(F\)\nobreakdash-equivariant continuous maps. Equip~\(Y_2\) with the
equivalence relation~\(\sim\) that is generated by
\(f(y)\sim g(y)\) for all \(y\in Y_1\) and let~\(Y\)
be~\(Y_2/{\sim}\) with the quotient topology. This is the
coequaliser of \(f,g\) in the category of topological spaces. We
claim that there is a unique \(F\)\nobreakdash-action on~\(Y\) so that the
quotient map is \(F\)\nobreakdash-equivariant. And this \(F\)\nobreakdash-action
turns~\(Y\) into a coequaliser of \(f,g\) in the category of
\(F\)\nobreakdash-actions. We use \longref{Lemma}{lem:F-action_from_theta}
to build the \(F\)\nobreakdash-action on~\(Y\). Since \(f,g\) are
\(F\)\nobreakdash-equivariant, the continuous map
\(\rg\colon Y_2 \to\Gr^0\) equalises \(f,g\). Then~\(\rg\)
descends to a continuous map \(\rg\colon Y\to \Gr^0\). Let
\(t\in \IS(F)\). The domain of~\(\vartheta(t)\) is saturated
for~\(\sim\) because \(f,g\) are \(\IS(F)\)\nobreakdash-equivariant, and
\(y_1\sim y_2\) for points in this domain implies
\(\vartheta(t)(y_1) \sim \vartheta(t)(y_2)\). Therefore,
the image of the domain of~\(\vartheta(t)\) in~\(Y\) is open in
the quotient topology and \(\vartheta(t)\) descends to a partial
homeomorphism of~\(Y\). This defines an action of \(\IS(F)\)
on~\(Y\). All conditions in
\longref{Lemma}{lem:F-action_from_theta} pass from~\(Y_2\)
to~\(Y\). We have found an \(F\)\nobreakdash-action on~\(Y\). Any
continuous map \(h\colon Y_2 \to Z\) with \(h\circ f = h\circ g\)
descends uniquely to a continuous map \(h^\flat\colon Y\to Z\).
If~\(h\) is \(F\)\nobreakdash-equivariant, then so is~\(h^\flat\) by
\longref{Lemma}{lem:theta_gives_equivariant}.
Thus~\(Y\) is a coequaliser of \(f,g\). This finishes the proof
that the category of \(F\)\nobreakdash-actions is cocomplete. And then the
existence of a final object follows.
\end{proof}
\longref{Theorem}{the:groupoid_model_universal_action_exists} has
the merit that it works for any diagram of groupoid correspondences.
For applications to \(\Cst\)\nobreakdash-algebras, however, the groupoid
model should be a locally compact groupoid. Equivalently, the
underlying space~\(\Omega\) of the universal action should be
locally compact and Hausdorff. \longref{Example}{exa:words} shows
that~\(\Omega\) may fail to be locally compact in rather simple
examples. In the following sections, we are going to prove
that~\(\Omega\) is locally compact and Hausdorff whenever~\(F\) is a
diagram of proper, locally compact groupoid correspondences. Like
the proof of
\longref{Theorem}{the:groupoid_model_universal_action_exists}, our
proof of this statement will not be constructive. The key tool is a
relative form of the Stone--\v{C}ech compactification, which we will use
to show that any \(F\)\nobreakdash-action maps to an \(F\)\nobreakdash-action on a
locally compact Hausdorff space.
\section{The relative Stone--\v{C}ech compactification}
\label{sec:relative_SC}
We begin by recalling some well known definitions.
\begin{proposition}[\cite{Bourbaki:Topologie_generale}*{I.10.1,
I.10.3 Proposition~7}]
\label{pro:proper_map}
Let \(X\) and~\(Y\) be topological spaces. A map
\(f\colon X\to Y\) is \emph{proper} if and only if
\(f \times \id_Z\colon X\times Z\to Y\times Z\) is closed for
every topological space~\(Z\).
If~\(X\) is Hausdorff and~\(Y\) is Hausdorff, locally compact,
then \(f\colon X\to Y\) is proper if and only if preimages of
compact subsets are compact.
\end{proposition}
\begin{definition}[\cite{May-Sigurdsson:Parametrized_homotopy_theory}]
Let~\(B\) be a topological space. A \emph{\(B\)\nobreakdash-space} is a
topological space~\(Z\) with a continuous map \(r \colon Z \to B\),
called the \emph{anchor map}. It is called \emph{proper} if~\(r\) is a proper
map.
Let \((Z_1,r_1)\) and \((Z_2,r_2)\) be two \(B\)\nobreakdash-spaces. A
\emph{\(B\)\nobreakdash-map} is a continuous map \(f \colon Z_1 \to Z_2\) such
that the following diagram commutes:
\[
\begin{tikzcd}[row sep=small]
Z_1 \ar[rr, "f"] \ar[dr, "r_1"']&&
Z_2 \ar[dl, "r_2"] \\
& B
\end{tikzcd}
\]
Let~\(\gps{B}\) be the category of \(B\)\nobreakdash-spaces, which has
\(B\)\nobreakdash-spaces as its objects and \(B\)\nobreakdash-maps as its morphisms,
with the usual composition of maps. Let
\(\pgps{B} \subseteq \gps{B}\) be the full subcategory of those
\(B\)\nobreakdash-spaces \((Z,r)\) where the space~\(Z\) is Hausdorff and
the map~\(r\) is proper.
\end{definition}
\begin{remark}
\label{rem:proper_is_locally_compact}
If~\(B\) is Hausdorff, locally compact and \((Z,r)\) is a proper
\(B\)\nobreakdash-space with~\(Z\) Hausdorff, then~\(Z\) is locally compact by
\longref{Proposition}{pro:proper_map}. This is how we are going
to prove that the underlying space of a universal action is
locally compact.
\end{remark}
For a topological space~\(X\), its Stone--\v{C}ech compactification
is a compact Hausdorff space~\(\beta X\) with a continuous map
\(\iota_X\colon X\to \beta X\), such that any continuous map
from~\(X\) to a compact Hausdorff space factors uniquely
through~\(\iota_X\). In other words, the Stone--\v{C}ech
compactification~\(\beta\) is left adjoint to the inclusion of the
full subcategory of compact Hausdorff spaces into the category of
all topological spaces. If~\(B\) is the one-point space, then a
\(B\)\nobreakdash-space is just a space, and \(B\)\nobreakdash-maps are just
continuous maps. A proper, Hausdorff \(B\)\nobreakdash-space is just a
compact Hausdorff space. Thus the Stone--\v{C}ech compactification
is a left adjoint for the inclusion \(\pgps{B} \subseteq \gps{B}\)
in the case where~\(B\) is a point. The \emph{relative}
Stone--\v{C}ech compactification generalises this to all Hausdorff,
locally compact spaces~\(B\).
For a topological space~\(X\), let \(\Contb(X)\) be the
\(\Cst\)\nobreakdash-algebra of all bounded, continuous functions \(X\to\C\).
A continuous map \(f\colon X\to Y\) induces a $^*$\nobreakdash-{}homomorphism
\(f^*\colon \Contb(Y) \to \Contb(X)\), \(h\mapsto h\circ f\).
If~\(X\) is Hausdorff, locally compact, then we let
\(\Cont_0(X) \subseteq \Contb(X)\) be the ideal of all continuous
functions \(X\to\C\) that vanish at~\(\infty\). If \(X\) and~\(Y\)
are Hausdorff, locally compact spaces and \(f\colon X\to Y\) is a continuous
map, then the restriction of \(f^*\colon \Contb(Y)\to \Contb(X)\) to
\(\Cont_0(Y)\) is \emph{nondegenerate}, that is,
\[
  f^*(\Cont_0(Y)) \cdot \Cont_0(X) = \Cont_0(X).
\]
Conversely, any nondegenerate $^*$\nobreakdash-{}homomorphism
\(\Cont_0(Y)\to \Contb(X)\) is of this form
for a unique continuous map~\(f\). The range of this restriction
of~\(f^*\) is contained in \(\Cont_0(X)\) if and only if~\(f\) is proper.
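The following standard example illustrates the role of properness; it
is not needed later.
\begin{example}
  Let \(f\colon (0,1)\hookrightarrow \mathbb{R}\) be the inclusion
  map. Then \(f^*(h) = h|_{(0,1)}\) for \(h\in\Cont_0(\mathbb{R})\),
  and
  \(f^*(\Cont_0(\mathbb{R}))\cdot \Cont_0((0,1)) = \Cont_0((0,1))\).
  The map~\(f\) is not proper because \(f^{-1}([0,1]) = (0,1)\) is
  not compact. Accordingly, \(f^*(h)\) does not vanish at infinity
  in \((0,1)\) if \(h(0)\neq0\).
\end{example}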
\begin{definition}
Let~\(B\) be a locally compact Hausdorff space and let \((X,r)\) be a
\(B\)\nobreakdash-space. The \emph{relative Stone--\v{C}ech
compactification} \(\rscc{B} X\) of~\(X\) over~\(B\) is defined as
the spectrum of the \(C^*\)\nobreakdash-subalgebra
\[
H_X \mathrel{\vcentcolon=} \Contb(X) \cdot r^*(\Cont_0(B)) \subseteq \Contb(X).
\]
\end{definition}
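The following remark records two consistency checks; they are not
needed later.
\begin{remark}
  If~\(B\) is the one-point space, then
  \(r^*(\Cont_0(B)) = \C\cdot 1\) and \(H_X = \Contb(X)\), so that
  \(\rscc{B} X\) is the usual Stone--\v{C}ech
  compactification~\(\beta X\). And if~\(X\) is Hausdorff and~\(r\)
  is proper, then \(r^*(\Cont_0(B)) \subseteq \Cont_0(X)\) and
  \(r^*\colon \Cont_0(B)\to \Cont_0(X)\) is nondegenerate, so that
  \[
    H_X = \Contb(X)\cdot r^*(\Cont_0(B)) = \Cont_0(X)
  \]
  and \(\rscc{B} X \cong X\), as expected for a reflector.
\end{remark}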
We show that the relative Stone--\v{C}ech compactification is indeed
the reflector (left adjoint) of the inclusion \(\pgps{B} \hookrightarrow \gps{B}\).
In the following, we let~\(B\) be a locally compact Hausdorff space,
\((X,r)\) an object in \(\gps{B}\) and \((X',r')\) an object in
\(\pgps{B}\). Then~\(X'\) is Hausdorff by the definition of
\(\pgps{B}\) and locally compact by
\longref{Remark}{rem:proper_is_locally_compact}.
The inclusion \(i^* \colon H_X \hookrightarrow \Contb(X)\) is a
$^*$\nobreakdash-{}homomorphism. For each \(x \in X\), denote by
\(\mathrm{ev}_x\) the evaluation map at~\(x\). Then
\(\mathrm{ev}_x\circ i^* \colon H_X \to \C\) is a character
on~\(H_X\). It is nonzero on~\(H_X\) because
\((\mathrm{ev}_x\circ i^*)(1\cdot r^*(h)) = h(r(x)) \neq0\) if \(h\in\Cont_0(B)\)
satisfies \(h(r(x))\neq 0\). Thus \(\mathrm{ev}_x\circ i^*\) is a
point in the spectrum~\(\rscc{B} X\) of~\(H_X\). This defines a map
\(i \colon X \to \rscc{B} X\). The map~\(i\) is continuous because
\(h\circ i\) is continuous for all
\(h\in H_X = \Cont_0(\rscc{B} X)\).
\begin{lemma}
\label{uniquedual}
Let \(f, g \colon X \rightrightarrows X'\) be continuous maps. If \(f \neq g\), then
\(f^* \neq g^*\colon \Cont_0(X') \to \Contb(X)\).
\end{lemma}
\begin{proof}
By assumption, there is \(x \in X\) with \(f(x) \neq g(x)\) in~\(X'\).
Since~\(X'\) is Hausdorff and locally compact, we may separate
\(f(x)\) and~\(g(x)\) by relatively compact, open neighbourhoods
\(U_f\) and~\(U_g\). Urysohn's Lemma gives a continuous function
\(h\colon \overline{U_f} \to [0,1]\) with \(h(f(x))=1\) and
\(h|_{\partial U_f} = 0\). Extend~\(h\) by~\(0\) to a
function~\(\tilde{h}\) on~\(X'\). This belongs to \(\Cont_0(X')\)
because \(h|_{\partial U_f} = 0\) and~\(\overline{U_f}\) is
compact, and \(\tilde{h}(g(x))=0\). Thus
\(f^*(\tilde{h}) \neq g^*(\tilde{h})\).
\end{proof}
\begin{lemma}
\label{dense}
Let~\(S\) be a subset of a locally compact Hausdorff space~\(X'\). If
the restriction map from \(\Cont_0(X')\) to \(\Contb(S)\) is
injective, then~\(S\) is dense in~\(X'\).
\end{lemma}
\begin{proof}
We prove the contrapositive statement. Suppose that~\(S\) is not
dense in~\(X'\). Then \(\overline{S} \neq X'\). As in the proof of
\longref{Lemma}{uniquedual}, there is a nonzero continuous
function \(h\in \Cont_0(X' \setminus \overline{S})\).
Extending~\(h\) by zero gives a nonzero function in
\(\Cont_0(X')\) that vanishes on~\(S\).
\end{proof}
\begin{lemma}
\label{lem:X_dense_in_beta}
The image of~\(X\) in \(\rscc{B} X\) is dense.
\end{lemma}
\begin{proof}
\longref{Lemma}{dense} shows this because
\(i^*\colon \Cont_0(\rscc{B} X) \cong H_X \to \Contb(X)\) is
injective.
\end{proof}
\begin{proposition}
\label{betaXtoY}
Let \(f \colon X \to X'\) be a morphism in \(\gps{B}\). Assume~\(X'\)
to be a Hausdorff proper \(B\)\nobreakdash-space. Then there is a
unique continuous map \(f' \colon \rscc{B} X \to X'\) such
that the following diagram commutes:
\[
\begin{tikzcd}[sep=small]
X \ar[rr, "f"] \ar[dr, "i"']&& X'
\\
& \rscc{B} X \ar[ur,dashed, "\exists ! f'"']
\end{tikzcd}
\]
The map~\(f'\) is automatically proper.
\end{proposition}
\begin{proof}
Let \(f^* \colon \Cont_0(X') \to \Contb(X)\) be the dual map
of~\(f\) and let \(i^* \colon H_X \hookrightarrow \Contb(X)\) be
the inclusion map. Since~\(r'\) is proper, it induces a
nondegenerate $^*$\nobreakdash-{}homomorphism
\((r')^* \colon \Cont_0(B) \to \Cont_0(X')\). We use this to show
that \(f^*(\Cont_0(X')) \subseteq H_X\):
\begin{multline*}
f^*(\Cont_0(X'))
= f^*({r'}^*(\Cont_0(B)) \cdot \Cont_0(X'))
\\= f^*({r'}^*(\Cont_0(B))) \cdot f^*(\Cont_0(X'))
= r^*(\Cont_0(B)) \cdot f^*(\Cont_0(X'))
\subseteq H_X.
\end{multline*}
Let~\((f^*)'\) be~\(f^*\) viewed as a $^*$\nobreakdash-{}homomorphism
\(\Cont_0(X') \to H_X\). We claim that~\((f^*)'\) is
nondegenerate. The proof uses that a $^*$\nobreakdash-{}homomorphism is
nondegenerate if and only if it maps an approximate unit again to
an approximate unit; this well known result goes back at least to
\cite{Rieffel:Induced_Banach}*{Proposition~3.4}. Let
\((e_i)_{i \in I}\) be an approximate unit in \(\Cont_0(B)\).
Then \({r'}^*(e_i)\) is an approximate unit in \(\Cont_0(X')\).
Now
\((f^*)'({r'}^*(e_i)) = r^*(e_i)\).
For any \(\varphi_1 \in \Contb(X)\) and
\(\varphi_2 \in \Cont_0(B)\),
\(\norm{\varphi_1 \cdot r^*(\varphi_2)r^*(e_i) - \varphi_1 \cdot
r^*(\varphi_2)} \le \norm{\varphi_1} \norm{r^*(\varphi_2 e_i) -
r^*(\varphi_2)} \to 0\), as \(r^*\) is continuous. Hence
\(r^*(e_i)\) is an approximate unit in~\(H_X\). We let
\(f' \colon \rscc{B} X \to X'\) be the dual of~\((f^*)'\). This
is a proper continuous map. Since~\(X'\) is Hausdorff, two
continuous maps to~\(X'\) that are equal on a dense subset are
equal everywhere. Therefore, \(f'\) is unique by
\longref{Lemma}{lem:X_dense_in_beta}.
\end{proof}
\begin{corollary}
\label{2inj}
The anchor map \(r \colon X \to B\) extends uniquely to a proper
continuous map \({\rscc{B}} r \colon {\rscc{B}} X \to B\), such that
the following diagram commutes:
\[
\begin{tikzcd}[row sep=small]
X \ar[rr, "i"] \ar[dr, "r"']&&
\rscc{B} X
\ar[dl, dashed,
"\exists! \rscc{B} r"] \\
& B
\end{tikzcd}
\]
\end{corollary}
\begin{proof}
Since the identity map \(B \to B\) is proper, \(B\) is an object in
\(\pgps{B}\). Now apply \longref{Proposition}{betaXtoY} in the case
where \(X' = B\) and \(f = r \colon X \to B\).
\end{proof}
\begin{proposition}
In the above setting, the following diagram commutes:
\[
\begin{tikzcd}[row sep=small]
{\rscc{B}} X \ar[rr, "f'"] \ar[dr, "{\rscc{B}} r"']&&
X'\ar[dl, "r'"]
\\
& B
\end{tikzcd}
\]
\end{proposition}
\begin{proof}
We get
\(i^* \circ (f^*)' \circ (r')^* = i^* \circ ({\rscc{B}} r)^*\) by
construction. Since~\(i^*\) is a monomorphism, this implies
\((f^*)' \circ (r')^* = (\rscc{B} r)^*\). Dualising this identity of
$^*$\nobreakdash-{}homomorphisms gives \(r'\circ f' = \rscc{B} r\).
\end{proof}
\begin{theorem}
\label{the:rscc_reflector}
\(\rscc{B}\) is a reflector or, equivalently, it is left adjoint to
the inclusion functor
\(I \colon \pgps{B} \hookrightarrow \gps{B}\).
\end{theorem}
\begin{proof}
The space \(\rscc{B} X\) is Hausdorff, being the spectrum of a
commutative \(\Cst\)\nobreakdash-algebra, and the anchor map
\(\rscc{B} r\) is proper by \longref{Corollary}{2inj}; so
\((\rscc{B} X, \rscc{B} r)\) belongs to \(\pgps{B}\).
\longref{Proposition}{betaXtoY} and the propositions above then show
that \(i\colon X\to \rscc{B} X\) is a universal arrow from~\(X\)
to~\(I\). Hence~\(\rscc{B}\) is left adjoint to~\(I\).
\end{proof}
\begin{lemma}
\label{propercorr}
Let~\(X\) be a topological space, and let \(Y\) and~\(Z\) be
locally compact Hausdorff spaces. Let \(f_1 \colon X \to Y\) be
continuous and let \(f_2 \colon Y \to Z\) be proper and
continuous. Then
\(\rscc{Y} (X,f_1) \cong \rscc{Z} (X,f_2\circ f_1)\).
\end{lemma}
\begin{proof}
It suffices to show that
\(\Contb(X) \cdot {f_1}^*(\Cont_0(Y)) = \Contb(X) \cdot
(f_2f_1)^*(\Cont_0(Z))\). Since~\(f_2\) is proper,
\(f_2^* \colon \Cont_0(Z) \to \Cont_0(Y)\) is nondegenerate. In
particular, \(f_1^*(f_2^*(\Cont_0(Z))) \subseteq f_1^*(\Cont_0(Y))\),
giving the inclusion~``\(\supseteq\)''. Since
\(f_2^*(\Cont_0(Z)) \cdot \Cont_0(Y) = \Cont_0(Y)\), we compute
\[
f_1^*(\Cont_0(Y)) = f_1^*(f_2^*(\Cont_0(Z)) \cdot \Cont_0(Y))
= (f_2f_1)^*(\Cont_0(Z)) \cdot f_1^*(\Cont_0(Y))
\]
and then
\[
\Contb(X) \cdot f_1^*(\Cont_0(Y)) = \Contb(X) \cdot
(f_2f_1)^*(\Cont_0(Z)) \cdot f_1^*(\Cont_0(Y)) \subseteq \Contb(X)
\cdot (f_2f_1)^*(\Cont_0(Z)).\qedhere
\]
\end{proof}
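For example, take~\(Z\) to be a one-point space. Then
\(f_2\colon Y \to Z\) is proper if and only if~\(Y\) is compact,
and \(\rscc{Z} (X, f_2 \circ f_1)\) is the full Stone--\v{C}ech
compactification~\(\beta X\). So \longref{Lemma}{propercorr} says
that
\[
  \rscc{Y} (X,f_1) \cong \beta X
\]
for any continuous map \(f_1\colon X\to Y\) into a compact
Hausdorff space~\(Y\): over a compact base, the relative
Stone--\v{C}ech compactification does not depend on the map~\(f_1\)
and agrees with the absolute one.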
\begin{lemma}
\label{sq}
In a commuting diagram of topological spaces and continuous maps
\[
\begin{tikzcd}
X_1 \ar[rr, "f" ] \ar[d, "r_1"']&& X_2
\ar[d, "r_2"] \\
B_1 \ar[rr, "f_0"] && B_2,
\end{tikzcd}
\]
assume \(B_1\) and~\(B_2\) to be locally compact Hausdorff
and~\(f_0\) to be proper. Then there is a unique continuous map
\(\tilde{f} \colon \rscc{B_1} X_1 \to \rscc{B_2} X_2\) that makes
the following diagram commute:
\[
\begin{tikzcd}
X_1 \ar[rr, "f" ] \ar[d, "i_1"']&& X_2
\ar[d, "i_2"] \\
\rscc{B_1} X_1 \ar[rr, dotted, "\tilde{f}"] \ar[d, "\rscc{B_1}
r_1"']&& \rscc{B_2} X_2
\ar[d, "\rscc{B_2} r_2"] \\
B_1 \ar[rr, "f_0"] && B_2
\end{tikzcd}
\]
\end{lemma}
\begin{proof}
Since~\(f_0\) is proper, \(B_1\) is an object in the category of
Hausdorff proper \(B_2\)\nobreakdash-spaces. Then
\longref{Theorem}{the:rscc_reflector} implies
\(\rscc{B_2} B_1 \cong B_1\) and gives a commuting diagram
\[
\begin{tikzcd}
X_1 \ar[rr, "f" ] \ar[d, "i_1^2"']&& X_2
\ar[d, "i_2"] \\
\rscc{B_2} X_1 \ar[rr, "\rscc{B_2} f" ] \ar[d, "\rscc{B_2} r_1"']&&
\rscc{B_2} X_2
\ar[d, "\rscc{B_2} r_2"] \\
B_1 \ar[rr, "f_0"] && B_2.
\end{tikzcd}
\]
\longref{Lemma}{propercorr} gives
\(\rscc{B_1} X_1 \cong \rscc{B_2} X_1\), and this turns the diagram
above into what we need. The map \(\tilde{f} =\rscc{B_2} f\) is
unique because~\(\rscc{B_2} X_2\) is Hausdorff and the image
of~\(X_1\) in~\(\rscc{B_2} X_1\) is dense by
\longref{Lemma}{lem:X_dense_in_beta}.
\end{proof}
\begin{lemma}
\label{equalityextension}
Given a commuting diagram of continuous maps
\[
\begin{tikzcd}
X_1 \ar[rr, bend left, "f"] \ar[r, "h"'] \ar[d, "\rg_1"']&
X_2 \ar[r, "g"' ] \ar[d, "\rg_2"] &
X_3 \ar[d, "\rg_3"] \\
B_1 \ar[r, "h_0"] \ar[rr, bend right, "f_0"']
& B_2 \ar[r, "g_0"] & B_3
\end{tikzcd}
\]
with locally compact Hausdorff spaces~\(B_j\) and proper \(h_0\)
and~\(g_0\), the maps constructed in \longref{Lemma}{sq} satisfy
\(\tilde{f} = \tilde{g} \circ \tilde{h}\).
\end{lemma}
\begin{proof}
The map~\(\tilde{g} \circ \tilde{h}\) also has the properties that
uniquely characterise~\(\tilde{f}\).
\end{proof}
\begin{lemma}
\label{extiso}
If the map~\(f\) in \longref{Lemma}{sq} is a homeomorphism, then so
is~\(\tilde{f}\).
\end{lemma}
\begin{proof}
Apply \longref{Lemma}{equalityextension} to the compositions
\(f\circ f^{-1}\) and \(f^{-1}\circ f\).
\end{proof}
The following results will be used in the next section to extend an
action of a diagram to the relative Stone--\v{C}ech
compactification.
\begin{lemma}
\label{lem:rsc_disjoint_union}
Let~\(I\) be a set, let~\(B_i\) for \(i\in I\) be locally compact
Hausdorff spaces, and let \((Y_i, r_i\colon Y_i\to B_i)\) be topological spaces
over~\(B_i\). Let \(B = \bigsqcup_{i\in I} B_i\) and
\(Y = \bigsqcup_{i\in I} Y_i\) with the induced map
\(r\colon Y\to B\). Then
\(\rscc{B} Y \cong \bigsqcup_{i\in I} \rscc{B_i} Y_i\).
\end{lemma}
\begin{proof}
The map that takes the family \((Y_i,r_i)\) of spaces over~\(B_i\)
to \((Y,r)\) as a space over~\(B\) is an equivalence of categories
from the product of categories \(\prod_{i\in I} \gps{B_i}\) to the
category \(\gps{B}\). A space over~\(B\) is Hausdorff and proper
if and only if its pieces over~\(B_i\) are Hausdorff and proper
for all \(i\in I\). That is, the equivalence of categories above
identifies the subcategory \(\pgps{B}\) of Hausdorff and proper
\(B\)\nobreakdash-spaces with the product of the subcategories
\(\pgps{B_i}\). The product of the reflectors
\(\rscc{B_i}\colon \gps{B_i} \to \pgps{B_i}\) is a reflector
\(\prod_{i\in I} \gps{B_i} \to \prod_{i\in I} \pgps{B_i}\). Under
the equivalence above, this becomes the reflector~\(\rscc{B}\).
Both reflectors must be naturally isomorphic.
\end{proof}
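For instance, if each~\(B_i\) is a one-point space, then~\(B\) is
the discrete space~\(I\), and the lemma says that
\[
  \rscc{I} \Bigl(\bigsqcup_{i\in I} Y_i\Bigr)
  \cong \bigsqcup_{i\in I} \beta Y_i.
\]
This differs from the absolute Stone--\v{C}ech compactification of
\(\bigsqcup_{i\in I} Y_i\), which for infinite~\(I\) is much larger
than \(\bigsqcup_{i\in I} \beta Y_i\); the base~\(B\) keeps the
pieces separated.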
\begin{lemma}
\label{fonpreimg}
Let~$\Gr$ be a locally compact groupoid, let~$V$ be an open subset
of~$\Gr^0$, and let $(Z,k\colon Z\to\Gr^0)$ be a locally compact
Hausdorff space over~$\Gr^0$. Then
$\Cont_0(k^{-1}(V)) \cong k^*(\Cont_0(V)) \cdot \Cont_0(Z)$.
\end{lemma}
\begin{proof}
Let $J \mathrel{\vcentcolon=} k^*(\Cont_0(V)) \cdot \Cont_0(Z)$. This is
an ideal in $\Cont_0(Z)$. So its spectrum~\(\hat{J}\) is an open
subset of~$Z$. Namely, it consists of those \(z\in Z\) for which
there is \(f\in J\) with \(f(z)\neq0\). There is always
\(h\in \Cont_0(Z)\) with \(h(z) \neq0\). Therefore,
\(z\in\hat{J}\) if and only if there is \(g\in \Cont_0(V)\) with
\(k^*(g)(z)\neq0\). Since \(k^*(g)(z) = g(k(z))\), such a~\(g\)
exists if and only if \(k(z)\in V\). Thus
\(\hat{J} = k^{-1}(V)\).
\end{proof}
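The proof above is an instance of the well known correspondence
between open subsets of a locally compact Hausdorff space~\(Z\) and
closed ideals in~\(\Cont_0(Z)\): an open subset \(U\subseteq Z\)
corresponds to the ideal \(\Cont_0(U) \subseteq \Cont_0(Z)\) of
functions vanishing outside~\(U\), and a closed ideal
\(J \subseteq \Cont_0(Z)\) corresponds to the open subset
\[
  \hat{J} = \{z\in Z : f(z)\neq 0 \text{ for some } f\in J\}.
\]
These two constructions are inverse to each other.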
\begin{lemma}
\label{lem:ident}
Let~\(B\) be a locally compact Hausdorff space and let
\(V\subseteq B\) be an open subset. Let \((Y,\rg_Y)\) be
a space over~\(B\) and let
\((\rscc{B} Y,\rg_{\rscc{B} Y}) \in \gps{B}\) be its relative
Stone--\v{C}ech compactification. Then
\(\rg_Y^{-1}(V) \subseteq Y\) is a space over~\(V\), so that
\(\rscc{V} (\rg_Y^{-1} (V))\) is defined, and
\(\rscc{V} (\rg_Y^{-1} (V)) \cong (\rg_{\rscc{B} Y})^{-1} (V)
\subseteq\rscc{B} Y\).
\end{lemma}
\begin{proof}
We proceed in terms of their \(\Cst\)\nobreakdash-algebras. By definition
of the relative Stone--\v{C}ech compactification,
$\rscc{V} (r_Y^{-1} (V))$ is the spectrum of the commutative
\(\Cst\)\nobreakdash-algebra
$\Contb(r_Y ^{-1}(V)) \cdot r_Y ^* (\Cont_0(V))$. By
\longref{Lemma}{fonpreimg}, $(\rscc{B} r_Y)^{-1} (V)$ corresponds to
$\Cont_0(\rscc{B} Y) \cdot r_Y ^* (\Cont_0(V))$. Since
$\Cont_0(B) \cdot \Cont_0(V) = \Cont_0(V)$, we compute
\[
\Cont_0(\rscc{B} Y) \cdot r_Y^*(\Cont_0(V))
= \Contb(Y) \cdot r_Y^*(\Cont_0(B)) \cdot r_Y^*(\Cont_0(V))
= \Contb(Y) \cdot r_Y^*(\Cont_0(V)).
\]
Therefore, it suffices to show that
$\Contb(r_Y^{-1}(V)) \cdot r_Y^*(\Cont_0(V)) \cong \Contb(Y)
\cdot r_Y^*(\Cont_0(V))$.
Bounded functions on~$Y$ restrict to bounded functions
on~$r_Y^{-1} (V)$, and this restriction map is injective on the
subalgebra $\Contb(Y) \cdot r_Y ^*(\Cont_0(V))$ because functions
in this subalgebra vanish outside $r_Y^{-1}(V)$. Therefore, there
is an inclusion
\[
\varrho \colon
\Contb(Y) \cdot r_Y ^* (\Cont_0(V)) \hookrightarrow
\Contb(r_Y^{-1}(V)) \cdot r_Y ^* (\Cont_0(V)).
\]
We must prove that it is surjective. Any element of
\(\Contb(r_Y^{-1}(V)) \cdot r_Y ^* (\Cont_0(V))\) is of the form
\(f\cdot h\) with $f \in \Contb(r_Y ^{-1}(V))$ and
$h \in r_Y ^* (\Cont_0(V))$. The Cohen--Hewitt Factorisation
Theorem gives $h_1, h_2 \in r_Y ^* (\Cont_0(V))$ with
$h = h_1 \cdot h_2$. Let $\varphi$ be the
extension of $f \cdot h_1 \colon r_Y ^{-1}(V) \to \C$ by zero. We
are going to show that $\varphi$ is continuous
on~$Y$. Since
\(\varrho(\varphi\cdot h_2) = f\cdot h\), it
follows that~\(\varrho\) is surjective.
It remains to prove that \(\varphi\) is
continuous. The only points where this is unclear are the
boundary points of \(r_Y^{-1}(V)\). Let \((y_n)_{n\in N}\) be a
net that converges towards such a boundary point. We claim
that \(\varphi(y_n)\) converges to~\(0\); since \(\varphi\)
vanishes at the boundary point itself, this claim implies
continuity of~\(\varphi\) there. If \(y_n \notin r_Y^{-1}(V)\), then
\(\varphi(y_n)=0\) by construction. So it is no
loss of generality to assume \(y_n \in r_Y^{-1}(V)\) for all
\(n\in N\). Then \(r_Y(y_n)\) is a net in~\(V\) that
converges towards~\(\infty\). Therefore,
\(\lim h_1'(r_Y(y_n)) = 0\) for all \(h_1' \in \Cont_0(V)\). This
implies \(\lim h_1(y_n) = 0\). Since~\(f\) is bounded, this
implies \(\lim \varphi(y_n)=0\).
\end{proof}
\section{Extending actions to the relative Stone--\v{C}ech compactification}
\label{sec:extend_action}
The aim of this section is to extend an action of a diagram on a
topological space~\(Y\) with the anchor map
\(\rg_Y \colon Y \to \Gr^0\) to~\(\beta_{\Gr^0} Y\). Actions of étale
groupoids are a special case of such diagram actions, and this
special case is a bit easier. Therefore, we first treat only
actions of groupoids. However, since our aim is to generalise to
diagram actions, we do not complete the proof in this case. We
only prove a more technical result about the action of slices of the
groupoid.
Let~\(\Gr\) be a locally compact étale groupoid acting on a
topological space~\(Y\) with the anchor map
\(\rg_Y \colon Y \to \Gr^0\). The action of~\(\Gr\) on~\(Y\) may be
encoded as in \longref{Lemma}{lem:F-action_from_theta} by the anchor
map \(\rg_Y\colon Y\to \Gr^0\) and partial homeomorphisms
\(\vartheta_Y(\U)\) of~\(Y\) for all slices \(\U\in \Bis(\Gr)\),
subject to some conditions. In fact, in this case the
conditions simplify quite a bit, but we do not go into this here.
The anchor map~\(\rg_Y\) extends to a continuous map
\(\beta_{\Gr^0} \rg_Y\colon \beta_{\Gr^0} Y \to \Gr^0\) by construction. The
following lemma describes the canonical extension of the partial
homeomorphisms~\(\vartheta_Y(\U)\):
\begin{proposition}
\label{extendgpaction}
Let \(\U \in \Bis(\Gr)\). The partial
homeomorphism~\(\vartheta_Y (\U)\) of~\(Y\) extends uniquely to a
partial homeomorphism
\[
\vartheta_{\beta_{\Gr^0} Y}(\U)\colon
(\beta_{\Gr^0} \rg_Y)^{-1} (\s(\U)) \xrightarrow\sim (\beta_{\Gr^0} \rg_Y)^{-1} (\rg(\U)).
\]
Here ``extends'' means that
\(\vartheta_{\beta_{\Gr^0} Y}(\U) \circ i_Y = i_Y \circ \vartheta_Y(\U)\)
for the canonical map \(i_Y\colon Y \to \beta_{\Gr^0} Y\).
\end{proposition}
\begin{proof}
The anchor map \(\rg_Y\colon Y\to\Gr^0\) is \(\Gr\)\nobreakdash-equivariant
when we let~\(\Gr\) act on~\(\Gr^0\) in the usual way. The
slice~\(\U\) acts both on~\(Y\) and on~\(\Gr^0\), and the latter
action is the composite homeomorphism
\(\rg|_\U \circ (\s|_\U)^{-1}\colon \s(\U) \xrightarrow\sim \U \xrightarrow\sim
\rg(\U)\). The naturality of the construction of~\(\vartheta\)
shows that the following diagram commutes:
\[
\begin{tikzcd}
\rg_Y^{-1} (\s(\U)) \ar[r, " \vartheta_Y (\U)", " \cong"']
\ar[d, "\rg_Y"']&
\rg_Y^{-1} (\rg(\U))
\ar[d, "\rg_Y"] \\
\s(\U) \ar[r, "\cong"', "\vartheta_{\Gr^0}(\U)"] & \rg(\U)
\end{tikzcd}
\]
Now \longref{Lemma}{sq} with \(B_1 = \s(\U)\) and
\(B_2 = \rg(\U)\) gives a map
\[
\widetilde{\vartheta_Y (\U)} \colon \rscc{\s(\U)} (\rg_Y^{-1}
(\s(\U))) \to \rscc{\rg(\U)} (\rg_Y^{-1} (\rg(\U))).
\]
It is a homeomorphism by \longref{Lemma}{extiso}.
\longref{Lemma}{lem:ident} identifies the domain and codomain of
\(\widetilde{\vartheta_Y (\U)}\) with
\((\beta_{\Gr^0} \rg_Y)^{-1} (\s(\U))\) and \((\beta_{\Gr^0} \rg_Y)^{-1} (\rg(\U))\)
as spaces over \(\s(\U)\) and \(\rg(\U)\), respectively. So we
get a partial homeomorphism \(\vartheta_{\rscc{\Gr^0} Y} (\U)\) of
\(\rscc{\Gr^0} Y\) that makes the following diagram commute:
\begin{equation}
\label{eq:vartheta_beta_U_diagram}
\begin{tikzcd}[column sep=large]
\rg_Y^{-1} (\s(\U)) \ar[r, " \vartheta_Y (\U)", " \cong"']
\ar[d, "i_Y"']&
\rg_Y^{-1} (\rg(\U))
\ar[d, "i_Y"] & Y\\
\rg_{\rscc{\Gr^0} Y}^{-1} (\s(\U))
\ar[r, "\vartheta_{\rscc{\Gr^0} Y} (\U)", "\cong"']
\ar[d, "\rg_{\rscc{\Gr^0}Y}"']&
\rg_{\rscc{\Gr^0} Y}^{-1} (\rg(\U))
\ar[d, "\rg_{\rscc{\Gr^0}Y}"] & \beta_{\Gr^0} Y\\
\s(\U) \ar[r, "\cong"', "\vartheta_{\Gr^0}(\U)"] & \rg(\U) & \Gr^0
\end{tikzcd}
\end{equation}
The argument also shows
\(\vartheta_{\beta_{\Gr^0} Y}(\U) \circ i_Y = i_Y \circ \vartheta_Y(\U)\)
and that the image of \(\rg_Y^{-1}(\s(\U))\) in
\((\beta_{\Gr^0} \rg_Y)^{-1} (\rg(\U))\) is dense. Since the space
\(\beta_{\Gr^0} Y\) is Hausdorff, this implies that the top
square in~\eqref{eq:vartheta_beta_U_diagram} determines the extension
\(\vartheta_{\rscc{\Gr^0} Y} (\U)\) uniquely.
\end{proof}
To show that the \(\Gr\)\nobreakdash-action on~\(Y\) extends uniquely to a
\(\Gr\)\nobreakdash-action on~\(\rscc{\Gr^0} Y\), it would remain to prove
that the partial homeomorphisms \(\vartheta_{\rscc{\Gr^0} Y} (\U)\)
for slices \(\U\in\Bis(\Gr)\) satisfy the conditions in
\longref{Lemma}{lem:F-action_from_theta}. We will prove this in the
more general case of diagram actions.
Before we continue to this more general case, we rewrite the
diagram~\eqref{eq:vartheta_beta_U_diagram} in a way useful for the
generalisation to \(F\)\nobreakdash-actions below. We claim
that~\eqref{eq:vartheta_beta_U_diagram} commutes if and only if the
following diagram commutes, where the dashed arrows are partial
homeomorphisms and the usual arrows are globally defined continuous
maps:
\begin{equation}
\label{eq:vartheta_beta_U_diagram_partial}
\begin{tikzcd}[column sep=huge]
Y \ar[r, dashed, "\vartheta_Y(\U)"] \ar[d, "i_Y"']&
Y \ar[r, dashed, "\vartheta_Y(\U)^*"] \ar[d, "i_Y"']&
Y \ar[d, "i_Y"]\\
\rscc{\Gr^0} Y
\ar[r, dashed, "\vartheta_{\rscc{\Gr^0} Y} (\U)"]
\ar[d, "\rg_{\rscc{\Gr^0}Y}"']&
\rscc{\Gr^0} Y
\ar[r, dashed, "\vartheta_{\rscc{\Gr^0} Y} (\U)^*"]
\ar[d, "\rg_{\rscc{\Gr^0}Y}"] &
\rscc{\Gr^0} Y
\ar[d, "\rg_{\rscc{\Gr^0}Y}"'] \\
\Gr^0 \ar[r, dashed, "\vartheta_{\Gr^0}(\U)"] &
\Gr^0 \ar[r, dashed, "\vartheta_{\Gr^0}(\U)^*"] &
\Gr^0
\end{tikzcd}
\end{equation}
A diagram of partial maps commutes if and only if any two parallel
partial maps in the diagram are equal, and this includes an equality
of their domains. The domain of
\(\rg_{\beta_{\Gr^0} Y}\circ \vartheta_{\beta_{\Gr^0} Y}(\U)\) is equal to the domain
of \(\vartheta_{\beta_{\Gr^0} Y}(\U)\), whereas the domain of
\(\vartheta_{\Gr^0}(\U) \circ \rg_{\beta_{\Gr^0} Y}\) is
\(\rg_{\beta_{\Gr^0} Y}^{-1}(\s(\U))\) because \(\vartheta_{\Gr^0}(\U)\) has
domain~\(\s(\U)\). Thus the bottom left square implies that
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) has the domain
\(\rg_{\beta_{\Gr^0} Y}^{-1}(\s(\U))\). Similarly, the bottom right square
implies that \(\vartheta_{\beta_{\Gr^0} Y}(\U)^*\) has the domain
\(\rg_{\beta_{\Gr^0} Y}^{-1}(\rg(\U))\). Equivalently,
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) has the image
\(\rg_{\beta_{\Gr^0} Y}^{-1}(\rg(\U))\). In the top row, the domain and
image of \(\vartheta_Y(\U)\) must be \(\rg_Y^{-1}(\s(\U))\) and
\(\rg_Y^{-1}(\rg(\U))\) for the diagram to commute. In addition,
the diagram commutes as a diagram of ordinary maps when we replace
each entry by the domain of the partial maps that start there. This
gives exactly~\eqref{eq:vartheta_beta_U_diagram}. So the
diagram~\eqref{eq:vartheta_beta_U_diagram_partial} encodes both the
commutativity of~\eqref{eq:vartheta_beta_U_diagram} and the domains
and images of the partial maps in that diagram.
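The domain computations above and below repeatedly use the
following elementary rule for composing partial maps \(f\) from
\(X\) to~\(Y\) and \(g\) from \(Y\) to~\(Z\):
\[
  \operatorname{dom}(g\circ f)
  = f^{-1}\bigl(\operatorname{dom}(g)\bigr)
  \subseteq \operatorname{dom}(f).
\]
In particular, if~\(g\) is globally defined, then
\(\operatorname{dom}(g\circ f) = \operatorname{dom}(f)\), and
if~\(f\) is globally defined, then
\(\operatorname{dom}(g\circ f) = f^{-1}(\operatorname{dom}(g))\).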
Now let~\(\Cat\) be a category and let \((\Gr_x, \Bisp_g, \mu_{g,h})\)
describe a \(\Cat\)\nobreakdash-shaped diagram
\(F \colon \Cat \to \Grcat_{\lc,\prop}\). That is, each~\(\Gr_x\)
for \(x\in\Cat^0\) is a locally compact, étale groupoid,
each~\(\Bisp_g\) for \(g\in \Cat(x,x')\) is a proper, locally
compact, étale groupoid correspondence
\(\Bisp_g \colon \Gr_{x'} \leftarrow \Gr_x\), and each~\(\mu_{g,h}\)
for \(g,h\in \Cat\) with \(\s(g) = \rg(h)\) is a homeomorphism
\(\mu_{g,h}\colon \Bisp_g \Grcomp_{\Gr_{\s(g)}} \Bisp_h \xrightarrow\sim
\Bisp_{g h}\), subject to the conditions in
\longref{Proposition}{pro:diagrams_in_Grcat}. Let~\(Y\) be a
topological space with an action of~\(F\). The action contains a
disjoint union decomposition \(Y = \bigsqcup_{x\in\Cat^0} Y_x\) and
continuous maps \(\rg_x\colon Y_x \to \Gr_x^0\), which we assemble
into a single continuous map \(\rg\colon Y\to \Gr^0\) with
\(\Gr^0 \mathrel{\vcentcolon=} \bigsqcup_{x\in\Cat^0} \Gr_x^0\). This makes~\(Y\) a
space over~\(\Gr^0\) and allows us to define the Stone--\v{C}ech
compactification \(\rscc{\Gr^0}{Y}\) of~\(Y\) relative to~\(\Gr^0\).
We are going to extend the action of~\(F\) on~\(Y\) to an action
on~\(\beta_{\Gr^0} Y\).
The key is the description of \(F\)\nobreakdash-actions in
\longref{Lemma}{lem:F-action_from_theta}. The space~\(\beta_{\Gr^0} Y\) comes
with a canonical map \(\rg_{\beta_{\Gr^0} Y}\colon \beta_{\Gr^0} Y \to \Gr^0\), which is
one piece of data assumed in
\longref{Lemma}{lem:F-action_from_theta}. We are going to construct
partial homeomorphisms~\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) for all
\(\U\in\Bis(F)\) and then check the conditions in
\longref{Lemma}{lem:F-action_from_theta}. Before we start, we
notice that, by \longref{Lemma}{lem:rsc_disjoint_union},
\[
\beta_{\Gr^0} Y = \bigsqcup_{x\in\Cat^0} \rscc{\Gr_x^0} Y_x.
\]
\begin{lemma}
\label{lem:bisections_act_on_rsc}
Let \(\U\in\Bis(\Bisp_g)\) for some \(x,x'\in \Cat^0\) and
\(g\in\Cat(x,x')\). There is a unique partial homeomorphism
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) from \(\rscc{\Gr_x^0} Y_x\) to
\(\rscc{\Gr_{x'}^0} Y_{x'}\) that makes the following diagram
commute:
\begin{equation}
\label{eq:vartheta_beta_U_diagram_partial_corr}
\begin{tikzcd}[column sep=huge]
Y_x \ar[r, dashed, "\vartheta_Y(\U)"] \ar[d, "i_{Y_x}"']&
Y_{x'} \ar[r, dashed, "\vartheta_Y(\U)^*"] \ar[d, "i_{Y_{x'}}"']&
Y_x \ar[d, "i_{Y_x}"]\\
\rscc{\Gr^0_x} Y_x
\ar[r, dashed, "\vartheta_{\rscc{\Gr^0} Y} (\U)"]
\ar[d, "\rg_{\rscc{\Gr^0_x}Y_x}"']&
\rscc{\Gr^0_{x'}} Y_{x'}
\ar[r, dashed, "\vartheta_{\rscc{\Gr^0} Y} (\U)^*"]
\ar[d, "\pi"']
\ar[dd, bend left, near end, "\rg_{\rscc{\Gr^0_{x'}}Y_{x'}}"] &
\rscc{\Gr^0_x} Y_x
\ar[d, "\rg_{\rscc{\Gr^0_x}Y_x}"] \\
\Gr_x^0 \ar[r, dashed, "\U_*"] \ar[rd, dotted, "\U_\dagger"'] &
\Bisp_g/\Gr_x \ar[r, dashed, "(\U_*)^*"] \ar[d, "\rg_*"'] &
\Gr^0_x \\
& \Gr_{x'}^0
\end{tikzcd}
\end{equation}
Here continuous maps are drawn as usual arrows, partial
homeomorphisms as dashed arrows, and one partial map is drawn as a
dotted arrow.
\end{lemma}
\begin{proof}
We first recall how the arrows \(\U_*\), \(\rg_*\)
and~\(\U_\dagger\)
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr} are defined and
check that the triangle they form commutes. Since~\(\U\) is a
slice, \(\s|_{\U}\colon \U\xrightarrow\sim \s(\U)\subseteq \Gr_x^0\) and
\(\Qu|_{\U}\colon \U \xrightarrow\sim \Qu(\U) \subseteq \Bisp_g/\Gr_x\) are
homeomorphisms onto open subsets. This yields the partial
homeomorphism
\(\U_*\mathrel{\vcentcolon=}\Qu|_{\U} \circ (\s|_{\U})^{-1}\colon \s(\U) \xrightarrow\sim
\Qu(\U)\). The map \(\rg_*\colon \Bisp_g/\Gr_x \to \Gr_{x'}^0\)
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr} is induced by
the anchor map \(\rg\colon \Bisp_g \to \Gr_{x'}^0\). By
definition,
\(\U_\dagger \mathrel{\vcentcolon=} \rg_*\circ \U_*\colon \s(\U) \to \Qu(\U) \to
\rg(\U) \subseteq \Gr_{x'}^0\).
The vertical maps in the first and third column of
diagram~\eqref{eq:vartheta_beta_U_diagram_partial_corr} and the
maps \(i_{Y_{x'}}\) and \(\rg_{\rscc{\Gr^0_{x'}}Y_{x'}}\) in the
second column are part of the construction of the relative
Stone--\v{C}ech compactification. Next we construct a map
\(\pi\colon \rscc{\Gr_{x'}^0} Y_{x'} \to \Bisp_g/\Gr_x\) with
\(\rg_*\circ \pi = \rg_{\rscc{\Gr^0_{x'}}Y_{x'}}\). There is a
canonical map
\(\pi_Y\colon Y_{x'}\xrightarrow\sim \Bisp_g \Grcomp Y_x \to
\Bisp_g/\Gr_x\) that maps \(\gamma\cdot y\) for
\(\gamma\in\Bisp_g\), \(y\in Y_x\) with \(\s(\gamma) = \rg_Y(y)\)
to the right \(\Gr_x\)\nobreakdash-orbit of~\(\gamma\); this is well
defined because \(\gamma\cdot y = \gamma_2\cdot y_2\) implies
\(\gamma_2 = \gamma\cdot \eta\) and \(y_2= \eta^{-1}\cdot y\) for some
\(\eta\in \Gr_x\) with \(\rg(\eta) = \s(\gamma)\). We compute
\(\rg_* \circ \pi_Y= \rg_Y\colon Y_{x'} \to \Gr_{x'}^0\) because
\(\rg_*\circ \pi_Y(\gamma\cdot y) = \rg(\gamma) =
\rg_Y(\gamma\cdot y)\) for all \(\gamma\in\Bisp_g\), \(y\in Y_x\)
with \(\s(\gamma) = \rg_Y(y)\). So~\(\pi_Y\) is a map
over~\(\Gr_{x'}^0\). By assumption, the space \(\Bisp_g/\Gr_x\)
is Hausdorff and the map
\(\rg_*\colon \Bisp_g/\Gr_x \to \Gr_{x'}^0\) is proper. By
\longref{Theorem}{the:rscc_reflector}, \(\pi_Y\) factors uniquely
through a proper, continuous map
\(\pi\colon \rscc{\Gr_{x'}^0} Y_{x'} \to \Bisp_g/\Gr_x\)
over~\(\Gr_{x'}^0\). That this is a map over~\(\Gr_{x'}^0\) means
that
\(\rg_*\circ \pi = \rg_{\rscc{\Gr_{x'}^0} Y_{x'}}\colon \rscc{\Gr_{x'}^0}
Y_{x'} \to \Gr_{x'}^0\).
Next we recall the construction of the partial
homeomorphism~\(\vartheta_Y(\U)\) from~\(Y_x\) to~\(Y_{x'}\) and
prove that
\begin{equation}
\label{eq:slice_pi_equation}
(\U_*)^* \circ \pi_Y=\rg_Y \circ \vartheta_Y(\U)^*.
\end{equation}
By construction,
\(\vartheta_Y(\U)\) has the domain \(\rg_Y^{-1}(\s(\U))\) and is
defined by \(\vartheta_Y(\U)(y) \mathrel{\vcentcolon=} \gamma\cdot y\) if
\(y\in\rg_Y^{-1}(\s(\U))\) and \(\gamma\in\U\) is the unique element
with \(\s(\gamma) = \rg_Y(y)\). As a consequence,
\(\pi_Y (\vartheta_Y(\U)(y)) = \Qu(\gamma) = \U_*(\s(\gamma)) =
\U_*(\rg_Y(y))\). Since the partial maps
\(\pi_Y \circ \vartheta_Y(\U)\) and \(\U_* \circ \rg_Y\) both have
the domain \(\rg_Y^{-1}(\s(\U))\), we conclude that
\(\pi_Y \circ \vartheta_Y(\U) = \U_* \circ \rg_Y\) as partial maps
from~\(Y_x\) to \(\Bisp_g/\Gr_x\). We claim that the partial maps
\((\U_*)^* \circ \pi_Y\) and \(\rg_Y \circ \vartheta_Y(\U)^*\)
from~\(Y_x\) to \(\Gr_x^0\) are equal as well. The first of them
has domain \(\pi_Y^{-1}(\Qu(\U))\) because the image of~\(\U_*\)
is \(\Qu(\U)\), and the domain of the second one is the image of
\(\vartheta_Y(\U)\). Therefore, we must show that the image of
\(\vartheta_Y(\U)\) is equal to
\(\pi_Y^{-1}(\Qu(\U)) \subseteq Y_{x'}\).
It is clear that~\(\pi_Y\) maps the image of~\(\vartheta_Y(\U)\)
into~\(\Qu(\U)\). Conversely, let
\(z \in \pi_Y^{-1}(\Qu(\U)) \subseteq Y_{x'}\). There are
\(\gamma\in\Bisp_g\), \(y\in Y_x\) with \(\s(\gamma) = \rg_Y(y)\) and
\(z = \gamma\cdot y\). Then \(\pi_Y(z) \mathrel{\vcentcolon=} \Qu(\gamma)\), and this
belongs to~\(\Qu(\U)\) by assumption. Therefore, there is a
unique \(\eta\in \Gr_x\) with \(\s(\gamma) = \rg(\eta)\) and
\(\gamma\cdot \eta \in \U\). Then
\(z=(\gamma \eta)\cdot (\eta^{-1} y)=\vartheta_Y(\U)(\eta^{-1}y)\). So~\(z\)
belongs to the image of~\(\vartheta_Y(\U)\). In addition, we get
\[
\rg_Y(\vartheta_Y(\U)^*(z))
= \rg_Y(\eta^{-1} y)
= \rg(\eta^{-1})
= \s(\eta)
= \s(\gamma\cdot \eta)
= (\U_*)^*\Qu(\gamma)
= (\U_*)^*\pi_Y(z).
\]
This finishes the proof of~\eqref{eq:slice_pi_equation}.
As in the proof of \longref{Proposition}{extendgpaction}, we now
apply \longref{Lemma}{sq} and \longref{Lemma}{extiso} with
\(B_1 = \s(\U)\) and \(B_2 = \Qu(\U)\) to get a unique
homeomorphism
\[
\widetilde{\vartheta_Y (\U)} \colon
\rscc{\s(\U)} (\rg_Y^{-1}(\s(\U)))
\xrightarrow\sim \rscc{\Qu(\U)} (\pi_Y^{-1} (\Qu(\U)))
\]
with \(i_Y \vartheta_Y (\U) = \widetilde{\vartheta_Y (\U)} i_Y\)
on \(\rg_Y^{-1}(\s(\U)) \subseteq Y_x\). Then
\longref{Lemma}{lem:ident} identifies the domain and codomain of
\(\widetilde{\vartheta_Y (\U)}\):
\begin{align*}
\rscc{\s(\U)} (\rg_Y^{-1}(\s(\U)))
&\cong (\beta_{\Gr^0} \rg_Y)^{-1} (\s(\U)) \subseteq \beta_{\Gr^0} Y_x,\\
\rscc{\Qu(\U)} (\pi_Y^{-1} (\Qu(\U)))
&\cong \pi^{-1}(\Qu(\U)) \subseteq \rscc{\Bisp_g/\Gr_x} Y_{x'}.
\end{align*}
\longref{Lemma}{lem:ident} identifies
\(\rscc{\Bisp_g/\Gr_x} Y_{x'}\) with the Stone--\v{C}ech
compactification of~\(Y_{x'}\) relative to~\(\Gr_{x'}^0\) because
\(\rg_*\colon \Bisp_g/\Gr_x \to \Gr_{x'}^0\) is proper.
Composing~\(\widetilde{\vartheta_Y (\U)}\) with these
homeomorphisms gives a partial homeomorphism
\(\vartheta_{\rscc{\Gr^0} Y} (\U)\) of \(\rscc{\Gr^0} Y\) that
makes the diagram~\eqref{eq:vartheta_beta_U_diagram_partial_corr}
commute. It is unique because the target space is Hausdorff
and~\(i_Y\) maps \(\rg_Y^{-1}(\s(\U))\) to a dense subset of its
domain, where the top left square
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr} determines
\(\vartheta_{\rscc{\Gr^0} Y} (\U)\).
\end{proof}
\begin{theorem}
\label{the:unique_extension_F-action}
Let \(F\colon \Cat\to\Grcat_{\lc,\prop}\) be a diagram of proper,
locally compact groupoid correspondences. Let~\(Y\) be a
topological space with an \(F\)\nobreakdash-action. There is a unique
\(F\)\nobreakdash-action on \(\beta_{\Gr^0} Y\) such that the canonical map
\(i_Y\colon Y\to \beta_{\Gr^0} Y\) is \(F\)\nobreakdash-equivariant.
\end{theorem}
\begin{proof}
The Stone--\v{C}ech compactification relative to
\(\Gr^0 \mathrel{\vcentcolon=} \bigsqcup_{x\in\Cat^0} \Gr_x^0\) is well defined
because \(\bigsqcup_{x \in \Cat^0} \Gr_x^0\) is locally compact
and Hausdorff. There is a canonical map
\(\beta_{\Gr^0} \rg\colon \beta_{\Gr^0} Y \to \Gr^0\). It is the unique map with
\(\beta_{\Gr^0} \rg\circ i_Y = \rg\colon Y\to\Gr^0\). Hence this is the only
choice for an anchor map if we want~\(i_Y\) to be
\(F\)\nobreakdash-equivariant. \longref{Lemma}{lem:bisections_act_on_rsc}
provides partial homeomorphisms \(\vartheta_{\beta_{\Gr^0} Y}(\U)\) of
\(\beta_{\Gr^0} Y\) for all slices \(\U\in\Bis(F)\). We claim that these
satisfy the conditions in
\longref{Lemma}{lem:F-action_from_theta}.
We first check~\ref{en:F-action_from_theta1}. Let
\(\U\in\Bis(\Bisp_g)\), \(\V\in\Bis(\Bisp_h)\) for
\(g\in\Cat(x,x')\), \(h\in \Cat(x'',x)\) for
\(x,x',x''\in\Cat^0\). The diagram
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr} describes the
domain and the codomain of the maps \(\vartheta_{\beta_{\Gr^0} Y}(\U)\) as
the preimages of \(\s(\U)\) and~\(\Qu(\U)\), respectively. The
domain of \(\vartheta_{\beta_{\Gr^0} Y}(\U)\vartheta_{\beta_{\Gr^0} Y}(\V)\) is the
set of \(y\in \rscc{\Gr_{x''}^0} Y_{x''}\) with \(\rg_{\beta_{\Gr^0} Y}(y) \in \s(\V)\) and
\(\vartheta_{\beta_{\Gr^0} Y}(\V)(y) \in \rg_{\beta_{\Gr^0} Y}^{-1}(\s(\U))\).
Since
\(\rg_{\beta_{\Gr^0} Y}\circ \vartheta_{\beta_{\Gr^0} Y}(\V) = \V_*\circ \rg_{\beta_{\Gr^0}
Y}\), the second condition on~\(y\) is equivalent to
\(\V_*(\rg_{\beta_{\Gr^0} Y}(y)) \in \s(\U)\). As a consequence,
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\vartheta_{\beta_{\Gr^0} Y}(\V)\) and
\(\vartheta_{\beta_{\Gr^0} Y}(\U \V)\) have the same domain. The diagram
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr} also implies
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\vartheta_{\beta_{\Gr^0} Y}(\V) i_Y =
\vartheta_{\beta_{\Gr^0} Y}(\U \V) i_Y\). Since the target space
\(\beta_{\Gr^0} Y\) of \(\vartheta_{\beta_{\Gr^0} Y}(\U)\vartheta_{\beta_{\Gr^0} Y}(\V)\) and
\(\vartheta_{\beta_{\Gr^0} Y}(\U \V)\) is Hausdorff and
\(i_Y(\rg_Y^{-1}(\s(\U\V)))\) is dense in the domain
\(\rg_{\beta_{\Gr^0} Y}^{-1}(\s(\U\V))\) of our two partial maps, we get
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\vartheta_{\beta_{\Gr^0} Y}(\V) = \vartheta_{\beta_{\Gr^0}
Y}(\U \V)\).
The proof of~\ref{en:F-action_from_theta2} is similar, using also
the right half of~\eqref{eq:vartheta_beta_U_diagram_partial_corr}.
To prove \longref{condition}{en:F-action_from_theta3}, we use that
the range of~\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) is \(\pi^{-1}(\Qu(\U))\).
These open subsets for slices~\(\U\) of~\(\Bisp_g\)
cover~\(\beta_{\Gr^0} Y_{x'}\) because the open subsets
\(\Qu(\U) \subseteq \Bisp_g/\Gr_x\) for slices~\(\U\)
cover~\(\Bisp_g/\Gr_x\). Finally,
\longref{condition}{en:F-action_from_theta5} is already contained
in~\eqref{eq:vartheta_beta_U_diagram_partial_corr}.
Now \longref{Lemma}{lem:F-action_from_theta} shows that the
map~\(\rg_{\beta_{\Gr^0} Y}\) and the partial homeomorphisms
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) give a unique \(F\)\nobreakdash-action
on~\(\beta_{\Gr^0} Y\). By
\longref{Lemma}{lem:theta_gives_equivariant}, the top
part of~\eqref{eq:vartheta_beta_U_diagram_partial_corr} says that
the map~\(i_Y\) is \(F\)\nobreakdash-equivariant. In addition, since this
determines the partial homeomorphisms \(\vartheta_{\beta_{\Gr^0} Y}(\U)\)
uniquely, the \(F\)\nobreakdash-action on~\(\beta_{\Gr^0} Y\) is unique as asserted.
\end{proof}
\section{Locally compact groupoid models for proper diagrams}
\label{sec:proper_model}
In this section, we prove the main result of this article,
namely, that the universal action of a diagram of proper, locally
compact groupoid correspondences takes place on a Hausdorff proper
\(\Gr^0\)\nobreakdash-space~\(\Omega\). Since~\(\Gr^0\) is locally
compact and Hausdorff and~\(\Omega\) is Hausdorff and proper over
it, \(\Omega\) is locally compact as well. The key point is the
following proposition:
\begin{proposition}
\label{pro:F-action_reflector}
Let \(F\colon \Cat\to\Grcat_{\lc,\prop}\) be a diagram of proper,
locally compact groupoid correspondences. The full subcategory of
\(F\)\nobreakdash-actions on Hausdorff proper \(\Gr^0\)\nobreakdash-spaces is a
reflective subcategory of the category of all \(F\)\nobreakdash-actions. The
left adjoint to the inclusion maps an \(F\)\nobreakdash-action on a
space~\(Y\) to the induced \(F\)\nobreakdash-action on the Stone--\v{C}ech
compactification of~\(Y\) relative to
\(\Gr^0 \mathrel{\vcentcolon=} \bigsqcup_{x \in \Cat^0} \Gr_x^0\).
\end{proposition}
\begin{proof}
\longref{Theorem}{the:rscc_reflector} says that the full
subcategory of Hausdorff proper \(\Gr^0\)\nobreakdash-spaces is a reflective
subcategory of the category of all \(\Gr^0\)\nobreakdash-spaces, with the
relative Stone--\v{C}ech compactification~\(\beta_{\Gr^0}\) as the left
adjoint functor of the inclusion.
Let \(Y\) and~\(Y'\) be topological spaces with an action of~\(F\)
and let \(\varphi\colon Y\to Y'\) be an \(F\)\nobreakdash-equivariant map.
Assume that~\(Y'\) is Hausdorff and that its anchor map
\(\rg'\colon Y' \to \Gr^0\) is proper. By
\longref{Theorem}{the:unique_extension_F-action}, there is a
unique \(F\)\nobreakdash-action on the relative Stone--\v{C}ech
compactification \(\beta_{\Gr^0} Y\) that makes the inclusion map
\(i_Y\colon Y\to \beta_{\Gr^0} Y\) \(F\)\nobreakdash-equivariant. By
\longref{Proposition}{betaXtoY}, there is a unique
\(\Gr^0\)\nobreakdash-map \(\tilde{\varphi}\colon \beta_{\Gr^0} Y \to Y'\) with
\(\tilde{\varphi} i_Y = \varphi\). Any \(F\)\nobreakdash-equivariant map
is also a \(\Gr^0\)\nobreakdash-map by
\longref{Lemma}{lem:theta_gives_equivariant}. Therefore,
\(\tilde{\varphi}\) is the only map \(\beta_{\Gr^0} Y\to Y'\) with
\(\tilde{\varphi} i_Y = \varphi\) that has a chance to be
\(F\)\nobreakdash-equivariant. To complete the proof, we must show that
\(\tilde{\varphi}\) is \(F\)\nobreakdash-equivariant. We describe an
\(F\)\nobreakdash-action on a space~\(Y\) as in
\longref{Lemma}{lem:F-action_from_theta} through a continuous map
\(\rg_Y\colon Y\to \Gr^0\) and partial homeomorphisms
\(\vartheta_Y(\U)\) for all slices \(\U\in\Bis(F)\), subject to
the conditions
\ref{en:F-action_from_theta1}--\ref{en:F-action_from_theta5}. By
\longref{Lemma}{lem:theta_gives_equivariant}, it remains to prove
that the partial maps \(\vartheta_{Y'}(\U)\circ \tilde{\varphi}\)
and \(\tilde{\varphi} \circ \vartheta_{\beta_{\Gr^0} Y}(\U)\) agree for any
slice \(\U\in\Bis(F)\). We pick~\(\U\). Then~\(\U\) is a slice
in \(\Bisp_g\) for some \(x,x'\in\Cat^0\) and \(g \in\Cat(x,x')\).
First, we check that our two partial maps have the same domain.
Since~\(\tilde{\varphi}\) is a globally defined map, the domain of
\(\tilde{\varphi} \circ \vartheta_{\beta_{\Gr^0} Y}(\U)\) is the domain of
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) and the domain
of~\(\vartheta_{Y'}(\U)\circ \tilde{\varphi}\) is the
\(\tilde{\varphi}\)\nobreakdash-preimage of the domain
of~\(\vartheta_{Y'}(\U)\). The domains of
\(\vartheta_{\beta_{\Gr^0} Y}(\U)\) and~\(\vartheta_{Y'}(\U)\) are
\(\rg_{\beta_{\Gr^0} Y_x}^{-1}(\s(\U))\) and \(\rg_{Y'_x}^{-1}(\s(\U))\),
respectively. Since~\(\tilde{\varphi}\) is a \(\Gr^0\)\nobreakdash-map,
the domain of~\(\vartheta_{Y'}(\U)\circ \tilde{\varphi}\) is also
equal to \(\rg_{\beta_{\Gr^0} Y_x}^{-1}(\s(\U))\). This proves the
claim that both partial maps have the same domain.
Since \(i_Y\) and~\(\varphi\) are \(F\)\nobreakdash-equivariant, we know
that
\(\vartheta_{\beta_{\Gr^0} Y}(\U) \circ i_Y = i_Y \circ \vartheta_Y(\U)\)
and
\(\vartheta_{Y'}(\U) \circ \varphi = \varphi \circ
\vartheta_Y(\U)\). Together with \(\tilde{\varphi} \circ i_Y =
\varphi\), this implies
\[
(\vartheta_{Y'}(\U) \circ \tilde{\varphi}) \circ i_Y
= \vartheta_{Y'}(\U) \circ \varphi
= \varphi \circ \vartheta_Y(\U)
= \tilde{\varphi} \circ i_Y \circ \vartheta_Y(\U)
= (\tilde{\varphi} \circ \vartheta_{\beta_{\Gr^0} Y}(\U)) \circ i_Y.
\]
These partial maps have domain \(\rg_{Y_x}^{-1}(\s(\U))\). The
\(i_Y\)\nobreakdash-image of this is dense in
\(\rg_{\beta_{\Gr^0} Y_x}^{-1}(\s(\U))\) because of
\longref{Lemma}{lem:ident} and
\longref{Lemma}{lem:X_dense_in_beta}. Since the target~\(Y'\) of
\(\vartheta_{Y'}(\U)\circ \tilde{\varphi}\) and
\(\tilde{\varphi} \circ \vartheta_{\beta_{\Gr^0} Y}(\U)\) is Hausdorff and
these maps agree on a dense subset, we get
\(\vartheta_{Y'}(\U)\circ \tilde{\varphi}= \tilde{\varphi} \circ
\vartheta_{\beta_{\Gr^0} Y}(\U)\) as needed.
\end{proof}
\begin{proposition}[\cite{Riehl:Categories_context}*{Corollary~5.6.6}]
\label{riehl}
The inclusion of a reflective full subcategory
\(\Cat[D] \hookrightarrow \Cat\) creates all limits that~\(\Cat\)
admits. As a consequence, if a diagram in~\(\Cat[D]\) has a limit
in~\(\Cat\), then it also has a limit in~\(\Cat[D]\), which is
isomorphic to the limit in~\(\Cat\).
\end{proposition}
\begin{theorem}
\label{the:proper_Hausdorff_Omega}
Let \(F\colon \Cat\to\Grcat_{\lc,\prop}\) be a diagram of proper,
locally compact groupoid correspondences. Then the universal
\(F\)\nobreakdash-action takes place on a space~\(\Omega\) that is
Hausdorff, locally compact and proper over
\(\Gr^0 \mathrel{\vcentcolon=} \bigsqcup_{x\in\Cat^0} \Gr_x^0\). The groupoid
model of~\(F\) is a locally compact groupoid.
\end{theorem}
\begin{proof}
We give two proofs. First, a universal \(F\)\nobreakdash-action is the
same as a terminal object in the category of \(F\)\nobreakdash-actions, and
this is an example of a limit, namely, of the empty diagram.
\longref{Theorem}{the:groupoid_model_universal_action_exists} says
that a terminal object exists in the category of all
\(F\)\nobreakdash-actions. \longref{Proposition}{pro:F-action_reflector}
and \longref{Proposition}{riehl} imply that this terminal object
is isomorphic to an object in the subcategory of Hausdorff proper
\(\Gr\)\nobreakdash-spaces. Actually, our subcategory is closed under
isomorphism, and so the terminal object belongs to it. By
\longref{Remark}{rem:proper_is_locally_compact}, this implies that
its underlying space is locally compact.
The second proof is more explicit. Let~\(\Omega\) be the
universal \(F\)\nobreakdash-action. The relative Stone--\v{C}ech
compactification comes with a canonical \(F\)\nobreakdash-equivariant map
\(\iota \colon \Omega \hookrightarrow \beta_{\Gr^0} \Omega\); here we use
the canonical \(F\)\nobreakdash-action on~\(\beta_{\Gr^0} \Omega\).
Since~\(\Omega\) is universal, there is a canonical map
\(\beta_{\Gr^0} \Omega \to \Omega\) as well. The composite map
\(\Omega \to \beta_{\Gr^0} \Omega \to \Omega\) is the identity map
because~\(\Omega\) is terminal. The composite map
\(\beta_{\Gr^0} \Omega \to \Omega \to \beta_{\Gr^0} \Omega\) and the identity map
have the same composite with~\(\iota\). Since the range
of~\(\iota\) is dense by \longref{Lemma}{lem:X_dense_in_beta}
and~\(\beta_{\Gr^0} \Omega\) is Hausdorff, it follows that the composite
map \(\beta_{\Gr^0} \Omega \to \Omega \to \beta_{\Gr^0} \Omega\) is equal to the
identity map as well. So \(\Omega \cong \beta_{\Gr^0} \Omega\), and this
means that~\(\Omega\) is Hausdorff and proper over~\(\Gr^0\).
\end{proof}
\begin{corollary}
\label{cor:Omega_compact}
Let \(F\colon \Cat\to\Grcat_{\lc,\prop}\) be a diagram of proper,
locally compact groupoid correspondences. Assume that~\(\Cat^0\)
is finite and that each object space~\(\Gr_x^0\) in the diagram is
compact. Then the universal \(F\)\nobreakdash-action takes place on a
compact Hausdorff space. The groupoid model of~\(F\) is a locally
compact groupoid with compact object space.
\end{corollary}
\begin{proof}
Our extra assumptions compared to
\longref{Theorem}{the:proper_Hausdorff_Omega} say that~\(\Gr^0\)
is compact. Then Hausdorff spaces that are proper over~\(\Gr^0\)
are compact.
\end{proof}
\begin{example}
\label{exa:nm-dynamical_system}
The \((m,n)\)-dynamical systems of Ara, Exel and
Katsura~\cite{Ara-Exel-Katsura:Dynamical_systems} are described in
\cite{Meyer:Diagrams_models}*{Section~4.4} as actions of a certain
diagram of proper groupoid correspondences. The diagram is an
equaliser diagram of the form \(\Gr \rightrightarrows \Gr\),
where~\(\Gr\) is the one-arrow one-object groupoid. A proper
groupoid correspondence \(\Gr \to \Gr\) is just a finite set, and
it is determined up to isomorphism by its cardinality. We get
\((m,n)\)-dynamical systems when we pick the two sets to have
cardinality \(m\) and~\(n\), respectively.
\longref{Corollary}{cor:Omega_compact} applies to this diagram and
shows that its universal action takes place on a compact Hausdorff
space. Ara, Exel and Katsura describe in
\cite{Ara-Exel-Katsura:Dynamical_systems}*{Theorem~3.8} an
\((m,n)\)-dynamical system that is universal among
\((m,n)\)-dynamical systems on compact Hausdorff spaces.
\longref{Corollary}{cor:Omega_compact} shows that it remains
universal if we allow \((m,n)\)-dynamical systems on arbitrary
topological spaces.
\end{example}
\begin{example}
\label{exa:words}
Let \(\Cat = (\mathbb{N}, +)\) be the category with a single object and
morphisms the nonnegative integers. A diagram
\(F\colon \Cat\to\Grcat\) is determined by a single groupoid
correspondence \(\Bisp\colon \Gr\leftarrow\Gr\) for an étale
groupoid~\(\Gr\) (see \cite{Meyer:Diagrams_models}*{Section~3.4}).
Let~\(\Gr\) be the trivial groupoid with one arrow and one object.
Then~\(\Bisp\) is just a discrete set because the source map
\(\Bisp \to \Gr^0\) is a local homeomorphism. The groupoid model
of the resulting diagram is a special case of the self-similar
groups treated in~\cite{Meyer:Diagrams_models}*{Section~9.2}, in
the case when the group is trivial. It is shown there that the
universal action takes place on the space
\(\Omega \mathrel{\vcentcolon=} \prod_{n\in\mathbb{N}} \Bisp\). If~\(\Bisp\) is finite,
then~\(\Omega\) is compact by Tychonoff’s Theorem. In contrast,
if~\(\Bisp\) is infinite, then~\(\Omega\) is not even locally
compact. This example shows that we need a diagram of proper
correspondences for the groupoid model to be locally compact.
\end{example}
\begin{bibdiv}
\begin{biblist}
\bibselect{references}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Conclusion}
\vspace{-0.1cm}
In this paper, we have shown that many of the supervised robust methods do not learn anything useful under high label noise rates.
However, they perform significantly better with the SimCLR initializer on image datasets and can even outperform previous state-of-the-art methods for learning under label noise.
Even the typical method, i.e., training a deep neural network-based classifier under the categorical cross-entropy loss, can outperform previous state-of-the-art methods under some noise conditions.
These observations suggest that lack of good visual representations is a possible reason that many supervised robust methods perform poorly on image classification tasks.
We believe that our findings can serve as a new baseline for learning under label noise on image datasets.
Moreover, we believe that decoupling the representation learning problem from learning under label noise would lead to new methods that perform well on each task separately and combine their complementary strengths, without the need for methods that target both tasks together.
{\small
\bibliographystyle{ieee_fullname}
\section{Experimental Results}
\vspace{-0.1cm}
{\bf Datasets and Experimental Setup:} We demonstrate the efficacy of our proposed approach on CIFAR-10, CIFAR-100, and Clothing1M datasets. Unless otherwise specified, we use ResNet-50 (RN-50) as the classifier; for CIFAR datasets, we adopt the common practice of replacing the first convolutional layer of kernel size 7, stride 2 with a convolutional layer of kernel size 3 and stride 1 and removing the first max-pool operation in RN-50 \cite{simclr}.
CIFAR-10 and CIFAR-100 datasets contain 50k training samples and 10k test samples; label noise is introduced synthetically on the training samples. We keep 1000 clean training samples for validation purposes. We experiment with symmetric noise and asymmetric noise. Under symmetric noise, the true class label is changed to any of the class labels (including the true label) whereas, under asymmetric noise, the true class label is changed to a similar class label. We use the exact same setup of \cite{generalized-ce,forward} for introducing asymmetric noise. For CIFAR-10, the class mappings are TRUCK $\rightarrow$ AUTOMOBILE, BIRD $\rightarrow$ AIRPLANE, DEER $\rightarrow$ HORSE, CAT $\leftrightarrow$ DOG. For CIFAR-100, the class mappings are generated from the next class in that group (where 100 classes are categorized into 20 groups of 5 classes).
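As a concrete illustration, the two corruption schemes can be sketched in a few lines of NumPy. The noise rate, number of samples, and the CIFAR-10-style class mapping below are illustrative stand-ins, not the exact generation code used in our experiments.

```python
import numpy as np

def corrupt_symmetric(labels, noise_rate, num_classes, rng):
    """Replace each label, with probability noise_rate, by a label drawn
    uniformly from all classes (the true label included)."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    labels[flip] = rng.integers(0, num_classes, size=flip.sum())
    return labels

def corrupt_asymmetric(labels, noise_rate, mapping, rng):
    """Replace each label, with probability noise_rate, by the label of a
    similar class given by `mapping`; unmapped classes are left unchanged."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    labels[flip] = np.array([mapping.get(int(y), int(y)) for y in labels[flip]])
    return labels

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=10000)  # toy clean labels, 10 classes

y_sym = corrupt_symmetric(y, noise_rate=0.4, num_classes=10, rng=rng)
# CIFAR-10-style mapping: 9 (truck) -> 1 (automobile), 2 (bird) -> 0 (airplane),
# 4 (deer) -> 7 (horse), 3 (cat) <-> 5 (dog).
mapping = {9: 1, 2: 0, 4: 7, 3: 5, 5: 3}
y_asym = corrupt_asymmetric(y, noise_rate=0.4, mapping=mapping, rng=rng)
```

Note that under symmetric noise with rate 0.4 and 10 classes, only about 36\% of labels actually change, since a flip may re-draw the true label.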
The Clothing1M dataset is a real-world dataset consisting of 1M training samples; labels are generated from the surrounding text on an online shopping website \cite{xiao2015learning}. The Clothing1M dataset contains around 38\% noisy samples \cite{c2d}, and we do not introduce any additional noise on this dataset.
{\bf Pre-Training:} We compare the supervised robust methods using two initialization, namely the SimCLR initializer and the ImageNet pre-trained initializer \cite{pretraining}.
To train the SimCLR encoder $\hat{f}(\cdot)$ and the projection head $g(\cdot)$, we use a batch size of 1024 (1200) and run for 1000 (300) epochs with the LARS optimizer \cite{lars} on a single NVIDIA RTX8000 (12 NVIDIA M40) GPU(s) on the CIFAR-10/100 (Clothing1M) datasets.
For the Clothing1M dataset, we use the standard pre-trained ImageNet RN-50 initialization. For the CIFAR datasets, we train a RN-50 classifier from scratch (with CIFAR changes in the first convolutional layer) on the ImageNet-$32\times32$ (INet32) dataset \cite{imagenet32} that achieves 43.67\% Top-1 validation accuracy. %
{\bf Methods:}
We use the SimCLR RN-50 initializer for three methods: standard ERM training with the CCE loss, ERM training with the generalized cross-entropy loss $L_q$ (q=0.5 or 0.66), and MWNet. The value of q in the $L_q$ loss matters only at high noise rates ($\geq 0.8$), where the $L_q$ loss with a large q (0.66) is difficult to optimize; for all other noise rates, a higher value of q leads to better performance. We fine-tune for 120 epochs on the CIFAR datasets with the SGD optimizer, a learning rate of 0.01, momentum of 0.9, weight decay of 0.0001, and a batch size of 100.
For other baseline methods, we use the results listed in their respective paper (or their public implementation).
Note that
different prior works use different architectures; thus, we also list the test accuracy from training on clean samples with that architecture (and initialization).
For CIFAR datasets, we list the average accuracy from five runs of noisy label generation.
We use the CCE loss, the $L_q$ loss (q=0.66), and the MAE loss with the SimCLR initializer on the Clothing1M dataset. We use the SGD optimizer, a batch size of 32, momentum of 0.9, and an initial learning rate of 0.001. Following \cite{elr,dividemix}, we randomly sample $4000\times32$ training samples in each epoch such that the total number of samples from each class is equal. We fine-tune for 60 epochs and reduce the learning rate by a factor of 2 after every 10 epochs.
\vspace{-0.1cm}
\subsection{Results and Discussion}
\vspace{-0.1cm}
Table~\ref{tab:sym} lists classification performance on the test set under symmetric noise on the CIFAR datasets. The SimCLR initializer significantly improves performance for the CCE loss, the $L_q$ loss, and the MWNet method.
Under 90\% label noise, the CCE loss has an accuracy of 42.7\% (10.1\%) with a random initializer and DivideMix has an accuracy of 93.2\% (31.5\%) on the CIFAR-10 (CIFAR-100) dataset. Under the same noise rate, the CCE loss with the SimCLR initializer has an accuracy of 82.9\% (52.11\%) on the CIFAR-10 (100) dataset which translates to a 9\% (65\%) gain compared to the state-of-the-art method DivideMix.
Moreover, the SimCLR initializer beats these performances even further with the MWNet method and the $L_q$ loss. Under very high levels of label noise, MWNet and the $L_q$ loss are not able to learn anything useful with the standard random initializer. However, with the SimCLR initializer, these methods perform significantly better than the state-of-the-art method.
Table~\ref{tab:asym} lists the classification performance on the test set under asymmetric label noise on the CIFAR datasets. Similarly, we observe that the SimCLR initializer improves model robustness under asymmetric label noise. However, supervised robust methods do not beat the prior state-of-the-art method in the asymmetric noise case.
Table~\ref{tab:clothing} lists the test performance on the Clothing1M dataset. Although the CCE loss, the $L_q$ loss, and the MAE loss do not outperform state-of-the-art methods with the SimCLR initializer, they perform remarkably well.
We observe that RN-50 pre-trained on the INet32 dataset also improves model robustness under label noise on the CIFAR datasets. Note that there is significant overlap between the classes of the INet32 dataset and the classes of the CIFAR datasets. On the CIFAR-100 dataset, fine-tuning the RN-50 model pre-trained on INet32 significantly improves test accuracy to 81.51\%, compared to $\sim75\%$ with a random or the SimCLR initializer.
However, the SimCLR initializer does not require such considerable knowledge transfer from another larger dataset. Moreover, we observe that the drop in performance (w.r.t.\ the accuracy from the clean training samples) is significantly lower for the SimCLR initializer compared to the pre-trained ImageNet initializer. The pre-trained INet32 initializer seems to help more in the asymmetric noise case; class overlap and label corruptions to a similar class might be a reason behind the improvement. In contrast, in the Clothing1M dataset, only two of the classes (out of 14) are present in the ImageNet dataset.
Consequently, the CCE loss with the SimCLR initializer improves the test performance by 6\%
compared to the ImageNet pre-trained RN-50 initializer.
\section{Learning under Label Noise}
In standard classification tasks, we are given a dataset $\mathcal{D}=\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^N$ where $\mathbf{x}_i$ is the feature vector of the $i^{\text{th}}$ sample (image) and $\mathbf{y}_i\in \{0,1\}^K$ is the class label vector with $K$ total classes. We minimize the following empirical risk minimization (ERM) objective,
\vspace{-0.15in}
\begin{equation}
\min_{\mathbf{w}}\frac{1}{N} \sum_{i=1}^N \ell_{\text{CCE}}(\mathbf{y}_i, f(\mathbf{x}_i;\mathbf{w})),
\label{eq:erm}
\vspace{-0.15in}
\end{equation}
where $f(\cdot; \mathbf{w})$ is a deep neural network (DNN)-based classifier
with parameters $\mathbf{w}$ and $\ell_{\text{CCE}}$ is the categorical cross-entropy (CCE) loss. However, real-world datasets often contain noisy labels; i.e., $\mathbf{y}_i$ can be corrupted. DNNs are sensitive to label noise when they are trained with the CCE loss, which reduces their ability to generalize to clean data.
Most of the early works on learning under label noise can be called \emph{supervised robust methods}, and they are equally applicable to image, text, or other data types. A general trick
to mitigate the impact of
label noise is to replace the CCE loss function $\ell_{\text{CCE}}$ with a loss that is more robust to label noise
in Eq.~\ref{eq:erm} \cite{generalized-ce,peer,normalized-loss,ghosh2015,ghosh2017-dt,ghosh2017,unhinged,peer,ldmi}. In \cite{ghosh2017}, the authors show that a loss function $\ell$ is robust to uniform label noise if it
satisfies the condition $\sum_{k=1}^K\ell(k,f(\mathbf{x}_i;\mathbf{w}))=C$ for some constant $C$. The mean absolute error (MAE) loss satisfies this symmetric condition; however, the MAE loss is difficult to optimize under the ERM objective with DNNs. Several loss functions have been proposed that offer model robustness under label noise and they are easier to optimize compared to the MAE loss \cite{peer,ldmi,generalized-ce,normalized-loss}. For example, the generalized cross-entropy loss $L_q$ ($q\in (0,1]$ is a hyper-parameter) is defined as \cite{generalized-ce}
$L_q(\mathbf{y},f(\mathbf{x};\mathbf{w})) = \frac{1-(\mathbf{y}^{\intercal}f(\mathbf{x};\mathbf{w}))^q }{q}$.
The $L_q$ loss is equivalent to the CCE loss when $q\!\rightarrow\! 0$ and is equivalent to the MAE loss when $q\!=\!1$. However, these robust loss functions do not perform well on large image datasets.
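The limiting behavior of the $L_q$ loss is easy to check numerically. The sketch below uses toy predicted probabilities (not our training code) to verify that $L_q$ approaches the CCE loss $-\log f_y$ as $q\!\rightarrow\! 0$ and equals $1-f_y$ (the MAE loss up to a factor of 2 for one-hot labels) at $q=1$.

```python
import numpy as np

def lq_loss(probs, labels, q):
    """Generalized cross-entropy L_q = (1 - f_y^q) / q, where f_y is the
    predicted probability of the labelled class."""
    f_y = probs[np.arange(len(labels)), labels]
    return (1.0 - f_y ** q) / q

# Toy softmax outputs for 3 samples over 4 classes.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.05, 0.05, 0.05, 0.85]])
labels = np.array([0, 1, 3])

cce = -np.log(probs[np.arange(3), labels])
print(np.allclose(lq_loss(probs, labels, 1e-6), cce, atol=1e-4))            # True
print(np.allclose(lq_loss(probs, labels, 1.0),
                  1 - probs[np.arange(3), labels]))                         # True
```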
Another common strategy for learning under label noise is to separate out the noisy samples from the clean samples or re-weight the training samples and stick with the CCE loss; we can simply change the objective as%
\vspace{-0.2cm}
\begin{align}
\min_{\mathbf{w}}\frac{1}{N} \sum_{i=1}^N \mathcal{W}(\mathbf{x}_i,\mathbf{y}_i) \ell_{\text{CCE}}(\mathbf{y}_i, f(\mathbf{x}_i;\mathbf{w})),\label{eq:weight-erm}
\vspace{-0.2cm}
\end{align}
where $\mathcal{W}(\mathbf{x}_i,\mathbf{y}_i)\in [0,1]$ is the assigned weight for the training sample $(\mathbf{x}_i,\mathbf{y}_i)$.
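To make the weighted objective concrete, the sketch below evaluates Eq.~\ref{eq:weight-erm} with a hand-crafted thresholding weight function; the per-sample loss values and the threshold are illustrative only.

```python
import numpy as np

def weighted_erm_loss(losses, weights):
    """Weighted empirical risk: (1/N) * sum_i W(x_i, y_i) * loss_i."""
    return np.mean(weights * losses)

# Per-sample CCE losses; the two large values mimic likely-noisy samples.
losses = np.array([0.2, 0.3, 4.5, 0.25, 5.1])
# A crude hand-crafted weighting: zero out samples whose loss exceeds a
# threshold (a stand-in for the learned weighting functions discussed next).
weights = np.where(losses > 2.0, 0.0, 1.0)
print(weighted_erm_loss(losses, weights))  # (0.2 + 0.3 + 0.25) / 5 = 0.15
```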
A common heuristic, studied in earlier research, is that %
noisy samples have higher loss values compared to the clean samples \cite{identifying-mislabeled,identifying}.
Many recent methods apply this idea to filter out possibly noisy samples or assign them lower weights \cite{s-model,co-teaching,joint,webly,mentornet,veit2017learning,yuan2018iterative,focal,reed,masking}. Instead of filtering samples based on the loss value, a more principled way is to \emph{learn} a weighting \emph{function} $\mathcal{W}(\ell(\mathbf{y}_i, f(\mathbf{x}_i;\mathbf{w}));\theta)$ in a data-driven manner where the function takes the loss value as the input.
Meta-learning-based methods have been particularly useful to learn a weighting function \cite{mwnet,robust-mwnet,l2rw,l2l-noise}. As an example,
Meta-Weight Network (MWNet) \cite{mwnet} learns a weighting function $\mathcal{W}$ with parameter $\theta$ using a small number of clean validation samples in a bilevel setup \cite{bilevel}. The objective is to learn the optimal weighting function $\mathcal{W}(\ell(\cdot,\cdot); \theta^{\ast})$ such that the optimal classifier parameters $\mathbf{w}^{\ast}(\theta^{\ast})$ on the training samples (train), obtained from Eq.~\ref{eq:weight-erm}, optimizes the ERM objective on the clean validation samples (val). The bilevel optimization problem can be written as %
\vspace{-0.2cm}
\begin{align*}
&\quad\quad\quad\quad\quad\quad\min_{\theta} \sum_{j\in \text{val}} \ell_{}\Big(\mathbf{y}_j,f(\mathbf{x}_j;\mathbf{w}^{\ast}(\theta))\Big)\\
&\mbox{ s.t. } \mathbf{w}^{\ast}\!(\theta)\!=\!\!\argmin_{\mathbf{w}}\! \!\!\sum_{i\in \text{train}}\!\!\! \mathcal{W}\!\Big(\!\ell_{}\!(\mathbf{y}_i,\!f(\mathbf{x}_i;\mathbf{w}));\theta\!\Big)
\!\ell_{}\!(\mathbf{y}_i,\!f(\mathbf{x}_i;\mathbf{w})\!).
\vspace{-0.2cm}
\end{align*}
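A toy sketch may clarify the bilevel structure. Every component below is an illustrative stand-in: a one-parameter logistic model as the classifier, $\mathcal{W}(\ell;\theta)=\sigma(\theta_0\ell+\theta_1)$ as the weighting network, a single inner gradient step, and a finite-difference meta-gradient in place of the automatic differentiation used by the actual method.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def bce(w, x, y):
    """Per-sample binary cross-entropy of a one-parameter logistic model."""
    p = sigmoid(w * x)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def inner_step(w, theta, x, y, lr=0.5):
    """One weighted gradient step on the training data (the inner problem).
    For simplicity the weights are treated as constants in this gradient;
    the actual method differentiates through them."""
    weights = sigmoid(theta[0] * bce(w, x, y) + theta[1])
    grad = np.mean(weights * (sigmoid(w * x) - y) * x)  # d bce / dw = (p - y) x
    return w - lr * grad

# Noisy training set (the last label is flipped) and a small clean validation set.
x_tr = np.array([1.0, 2.0, -1.0, 1.5]); y_tr = np.array([1.0, 1.0, 0.0, 0.0])
x_va = np.array([1.2, -1.3]);           y_va = np.array([1.0, 0.0])

def val_loss(theta, w0=0.0):
    """Outer objective: validation loss of the classifier after the inner step."""
    return np.mean(bce(inner_step(w0, theta, x_tr, y_tr), x_va, y_va))

# Outer update on theta via central finite differences (a stand-in for autodiff).
theta0 = np.zeros(2); eps = 1e-4
meta_grad = np.array([(val_loss(theta0 + eps * e) - val_loss(theta0 - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
theta = theta0 - 0.1 * meta_grad  # one meta-step on the weighting parameters
```

The meta-step moves the weighting parameters $\theta$ in the direction that lowers the validation loss obtained after the inner (weighted) training update, which is exactly the structure of the bilevel problem above.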
These \emph{supervised robust methods} perform well across many data types. %
Recently, many semi-supervised learning (SSL) methods have been proposed for image datasets to mitigate the impact of label noise. %
SSL methods aim to improve the performance of a DNN classifier by exploiting unlabeled data \cite{mixmatch}. Common tricks in SSL methods include %
using a consistency regularization loss to encourage %
the classifier to have similar predictions for %
an image $\mathbf{x}_i$ and an augmented view $\text{Aug}(\mathbf{x}_i)$ of it, an entropy minimization objective to promote high-confidence predictions, and a label-guessing method that produces a good label guess from many augmentations of the same image \cite{mixmatch,mixup}.
DivideMix, an SSL method, divides the training dataset into the clean (labeled) and noisy (unlabeled) parts using the observation that noisy samples tend to have a higher loss value \cite{dividemix}. These SSL methods have been shown to be superior to the supervised robust methods on image datasets \cite{elr,dividemix}.
\vspace{-0.1cm}
\subsection{Contributions}
\vspace{-0.1cm}
We observe that SSL methods for label noise can use unlabeled (noisy) samples effectively to improve
their representation learning capability.
Consequently, prior supervised robust learning methods suffer a significant drop in performance compared to the SSL methods on image datasets.
Hence, we ask the following question:
\vspace{-0.2cm}
\begin{itemize}
[leftmargin=*]
\item Is the performance drop of supervised robust methods caused by label noise or the impaired representation %
learned using fewer clean samples?
\vspace{-0.2cm}
\end{itemize}
Thus, we study the effect of fine-tuning these supervised robust methods after initializing them with \emph{good} representations learned by a self-supervised method. Contrastive learning has emerged as a key method for self-supervised learning
from visual data; the general idea is to learn good visual representations of images through comparing and contrasting different views of an original image under various data augmentation operations \cite{simclr,simclrv2}. We find that the supervised robust methods work remarkably well when they are initialized with the contrastive representation learning model. Surprisingly, we notice that even using the (most sensitive) CCE loss can outperform state-of-the-art SSL methods under high %
label noise. Moreover, we observe that the generalized cross-entropy loss \cite{generalized-ce} can retain good performance even under 95\% uniform label noise on the CIFAR-100 dataset whereas training with a random initializer does not outperform a random model. These observations suggest that the drop in performance for the supervised robust methods is due to the lack of good visual representations. We use one representative method from each of the two major paradigms we described for the supervised robust methods (the $L_q$ loss for the loss correction approach and the MWNet method for the sample re-weighting strategy)
to illustrate the benefits of fine-tuning representations learned through contrastive learning with a classification task under label noise.
\vspace{-0.1cm}
\subsection{Related Works}
\vspace{-0.1cm}
The idea of using a pre-trained model initializer or self-supervised learning is not new in label noise research.
In \cite{self-supervision}, the authors use auxiliary tasks, such as rotation prediction, to improve model robustness under label noise.
In \cite{pretraining}, the authors propose to use a pre-trained ImageNet classifier to improve model robustness. These methods lead to improved performance under high label noise, adversarial perturbations, class imbalance conditions, and on out-of-distribution detection tasks. However, they require a larger similar dataset where label noise is not present
or need to use an auxiliary loss from the self-supervised tasks in addition to the classification task. In contrast, our work does not propose any additional auxiliary tasks or require any larger datasets. We learn the contrastive model for visual representations from the same dataset as the classification task. This is helpful when the classification task uses datasets (e.g., in medical imaging datasets) that are very different than the commonly used large-scale image datasets (e.g., the ImageNet dataset).
The most related work is \cite{c2d}, which uses a contrastive model to improve the DivideMix algorithm. However, we show that a self-supervised contrastive learning model
initializer can improve model robustness under label noise for many supervised robust methods.
\vspace{-0.1cm}
\subsection{Methodology}
\vspace{-0.1cm}
We will use the SimCLR framework for contrastive learning \cite{simclr,simclrv2}; however, other visual representation learning methods (including other contrastive learning methods) can also potentially improve model robustness under label noise.
We use a base encoder $\hat{f}(\cdot)$ (ResNet-50 in this paper) to encode each image ${\mathbf{x}}_i$ to $\mathbf{h}_i=\hat{f}({\mathbf{x}}_i)$, and a two-layer multi-layer perceptron $g(\cdot)$ as the projection head to project into a fixed dimension embedding $\mathbf{z}_i=g(\mathbf{h}_i)$. %
Using $M$ images and two augmentations for each image, we construct a dataset of $2M$ images $\{\mathbf{x}_{i,0}, \mathbf{x}_{i,1}\}_{i=1}^M$ and project them into $\{\mathbf{z}_{i,0}, \mathbf{z}_{i,1}\}_{i=1}^M$ using the base encoder and the projection head. The final objective in the SimCLR framework is defined as%
\vspace{-0.1cm}
\begin{equation*}
\sum_{i=1}^M \sum_{j=0}^1 -\log\frac{\exp{(\text{sim}(\mathbf{z}_{i,j},\mathbf{z}_{i,1-j})/\tau)}}{-\exp{(1/\tau)}+\sum_{k=1}^{M}\sum_{l=0}^{1}\exp{(\text{sim}(\mathbf{z}_{i,j},\mathbf{z}_{k,l})/\tau)}},
\vspace{-0.1cm}
\end{equation*}
where $\tau$ is the temperature parameter, and $\text{sim}(\mathbf{z}_i,\mathbf{z}_j)$ is the normalized cosine similarity $\frac{\mathbf{z}_i^{\intercal}\mathbf{z}_j}{||\mathbf{z}_i||||\mathbf{z}_j||}$.
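The objective above can be evaluated directly once the projections are computed. In the NumPy sketch below, random vectors stand in for the projections $\mathbf{z}_{i,0},\mathbf{z}_{i,1}$; the batch size, dimension, and temperature are illustrative.

```python
import numpy as np

def nt_xent(z0, z1, tau=0.5):
    """SimCLR objective for M images with two augmented views each;
    z0[i] and z1[i] are the projections of the two views of image i."""
    z = np.concatenate([z0, z1])                      # (2M, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # normalise: cosine similarity
    exp_sim = np.exp((z @ z.T) / tau)                 # pairwise exp(sim / tau)
    m = len(z0)
    # Denominator: sum over all 2M views minus the self term exp(1 / tau).
    denom = exp_sim.sum(axis=1) - np.exp(1.0 / tau)
    # Numerator: the positive pair is the other view of the same image.
    pos = np.exp(np.sum(z[:m] * z[m:], axis=1) / tau)
    pos = np.concatenate([pos, pos])
    return -np.log(pos / denom).sum()

rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
loss_random = nt_xent(z0, z1)                # unrelated "views"
loss_aligned = nt_xent(z0, z0 + 1e-3 * z1)   # nearly identical views
print(loss_aligned < loss_random)  # True: aligned views are easier to match
```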
We use the same dataset $\mathcal{D}=\{\mathbf{x}_i,\mathbf{y}_i\}_{i=1}^N$ to learn the SimCLR encoder $\hat{f}(\cdot)$.
The base encoder $\hat{f}$ does not contain the classification head (last output layer). For supervised robust methods, we use this encoder $\hat{f}$ to initialize the DNN classifier $f(\cdot;\mathbf{w})$ and we set the weights and biases of the classification head of $f(\cdot;\mathbf{w})$ to zero at initialization.
Note that %
we fine-tune the final classifier $f(\cdot)$ for each method and do not keep the base encoder $\hat{f}(\cdot)$ fixed.
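The initialization just described amounts to the following schematic sketch, in which the pretrained encoder is abstracted as a feature map and all names and dimensions are illustrative.

```python
import numpy as np

class Classifier:
    """DNN classifier f(.; w) assembled from a pretrained encoder and a
    zero-initialised classification head (names are illustrative)."""
    def __init__(self, encoder, feat_dim, num_classes):
        self.encoder = encoder                      # pretrained trunk f_hat
        self.W = np.zeros((num_classes, feat_dim))  # head weights start at zero
        self.b = np.zeros(num_classes)              # head biases start at zero
    def logits(self, x):
        return self.encoder(x) @ self.W.T + self.b

encoder = lambda x: np.tanh(x)  # stand-in for the pretrained ResNet-50 trunk
clf = Classifier(encoder, feat_dim=4, num_classes=3)
x = np.ones((2, 4))
# With a zero-initialised head, every class receives the same (zero) logit,
# so the initial predictive distribution is uniform whatever the encoder is.
print(clf.logits(x))
```

During fine-tuning, both the head and the encoder parameters are updated, matching the remark above that the base encoder is not kept fixed.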
\section{Contrastive Learning}
Contrastive learning has emerged as \ml{maybe say a key? avoid pissing people off} the key method for self-supervised visual representation learning. We will use the SimCLR contrastive model \ml{inconsistent terminology - make sure you clearly state what simclr is and what contrastive learning is, and stick with it throughout; a model, or a method to learn a model? you say contrastive learning is a method in the previous sentence. now it's a contrastive model. that does not look good.} \cite{simclr,simclrv2} throughout the paper; however, other visual representation learning models (including other contrastive models) can also potentially improve robustness under label noise. The main idea in learning a contrastive model \ml{learning a contrastive model? or learn representations using a contrastive loss? you have to be more precise here - what is a contrastive model? make it clear!} is to maximize the similarity of two augmented views of the same data point using a contrastive loss.
In particular, we use a base encoder $f(\cdot)$ (ResNet-50 in this paper) to encode samples \ml{what is a sample? you call it ``data point'' in the previous sentence...} ${\mathbf{x}}_i$ to $\mathbf{h}_i=f({\mathbf{x}}_i)$, and two layer MLP $g(\cdot)$ as the projection head to project into $\mathbf{z}_i=g(\mathbf{h}_i)$. In a set of $2N$ examples \ml{now you call it ``example''. great :) super sloppy!} $\{{\mathbf{x}}_i\}$, for each example ${\mathbf{x}}_i$, there is one positive example ${\mathbf{x}}_j$ \ml{this part reads: for each example, there is one positive example and one negative example. not good. this terminology is not consistent and a mess. you cannot use terms interchangeably - pick one and stick with it!} (another view of the same sample), and $2N-2$ negative examples $\{{\mathbf{x}}_k\}_{k\neq i, j}$. The contrastive loss for a pair of positive example $i$, $j$, is defined as,
\begin{equation*}
l_{i,j}=-\log\frac{\exp{(\text{sim}(\mathbf{z}_i,\mathbf{z}_j)/\tau)}}{\sum_{k,k\neq i}\exp{(\text{sim}(\mathbf{z}_i,\mathbf{z}_k)/\tau)}},
\end{equation*}
where $\tau$ is the temperature parameter, and $\text{sim}(\mathbf{z}_i,\mathbf{z}_j)$ is the normalized cosine similarity $\frac{\mathbf{z}_i^{\intercal}\mathbf{z}_j}{||\mathbf{z}_i||||\mathbf{z}_j||}$. We use the same noisy dataset \ml{what is the dataset? you should clearly define what is an image, what is its label, and what is the noise? it's not label noise since you're talking about contrastive learning, so i think it's the noisy views that you create, right? people may get confused because this paper is about label noise. then you say throw away the labels, so do you also throw away the noise? super confusing.} (throwing away the labels) to learn the contrastive model for visual representations. The base encoder $f(\cdot)$ (the ResNet-50 \ml{the resnet-50 WHAT? again, you say resnet-50 above and the resnet-50 model here. inconsistent.}) is then used as the initializer when learning with label noise. \ml{clearly state what is the consequence of all this - learned visual representations, h! what you wrote here is technically correct but creates significant barriers for a reader to follow through. try to guide them and make their life easier. you may not they will appreciate your effort.}
\section{Robust Learning under Label Noise}
\ml{this could've helped a ton in the previous section!!!}
We represent a sample image using $(\mathbf{x}_i,\mathbf{y}_i)$ where $\mathbf{x}_i$ is the feature vector of the $i^{\text{th}}$ image and $\mathbf{y}_i\in \{0,1\}^K$ is the class label vector with $K$ total classes \ml{wrong - feature vector is h, I think?}. We denote the classifier as $f(\mathbf{x}; \mathbf{w})$ \ml{bad notation: you used f as your encoder before!} where $\mathbf{w}$ is the classifier weights \ml{parameters? that would encapsulate weights and biases}. Generally, we minimize the risk on the training samples using CCE loss $\ell_{\text{CCE}}$ as,
\begin{align*}
\mathbf{w}^{\ast} =\argmin_{\mathbf{w}} \sum_{i\in \text{train}} \ell_{\text{CCE}}(\mathbf{y}_i, f(\mathbf{x}_i;\mathbf{w}))
\vspace{-0.2in}
\end{align*}
We do not propose any new methods for handling label noise. We take two representative methods, one from inherently robust methods, and one from noise cleansing methods to show that a pre-trained contrastive model can improve noise robustness \ml{noise robustness? is that a term? robustness under label noise sounds more accurate? noise robustness sounds like a property of noise, not a method.} for any method.
\paragraph{Generalized Cross-Entropy Loss}
Training with CCE loss is sensitive to label noise; in \cite{ghosh2017}, authors show symmetric losses such as Mean Absolute Error (MAE) is robust to uniform label noise. However, MAE is difficult to optimize in modern DNN architecture (even with clean samples). There have been many attempts to alleviate this problem with loss functions that is easy to optimize with provable noise robustness guarantees. \ml{so here this motivation seems repetitive - you've discussed this in the related work section. here i would just say something like: we take a representative loss that is shown to be robust to label noise...}
We take a representative method, generalized cross-entropy loss, that allows to trade-off noise robustness \ml{same problem as before} with optimization easiness \ml{this sounds awkward... easiness lol} \cite{generalized-ce}. In particular \ml{in particular? you're not listing a bunch of things. this should be specifically}, the generalized cross-entropy loss $L_q$ is defined as, \ml{no need to add , before you break for an equation}
\begin{equation*}
L_q(f(\mathbf{x};\mathbf{w}),\mathbf{y}) = \frac{1-f_y(\mathbf{x};\mathbf{w})^q }{q}
\vspace{-0.1in}
\end{equation*}
where $q\in (0,1]$ \ml{q is what? a variable? an algorithm parameter?} and $f_y(\cdot)$ is the probability for class $\mathbf{y}$. Using \ml{the! aaaaaaah} limit theorem \ml{i googled what this is. all i could find is this thing called the CENTRAL limit theorem.}, it can be shown that $L_q$ loss is equivalent to CCE loss when $q\rightarrow 0$ and is equivalent to \ml{aaaaah} MAE loss when $q=1$.
\paragraph{Meta-Weight Network}
Another common paradigm is to detect and reweigh possibly mislabeled samples during training. Meta-learning-based methods have been particularly useful for learning a reweighting strategy that performs well on a small clean validation dataset \cite{mwnet,robust-mwnet,l2rw,l2l-noise}. We use the Meta-Weight Network (MWNet) as the representative method from this family.
MWNet learns a weighting function $\mathcal{W}$, parameterized by $\theta$, that maps a sample's training loss to a sample weight \cite{mwnet}. Using a small set of clean validation samples, MWNet finds the optimal weighting parameters $\theta^{\ast}$ by solving the bilevel optimization problem \cite{bilevel}
\begin{align*}
&\min_{\theta} \mathcal{L}^{\text{val}}(\mathbf{w}^{\ast}(\theta))\triangleq\sum_{j\in \text{val}} \ell_{\text{CCE}}\Big(f(\mathbf{x}_j;\mathbf{w}^{\ast}(\theta)),\mathbf{y}_j\Big)\\
\mbox{s.t. }&\mathbf{w}^{\ast}(\theta)\!=\!\argmin_{\mathbf{w}}\! \sum_{i\in \text{train}}\! \mathcal{W}\Big(\ell_{\text{CCE}}(f(\mathbf{x}_i;\mathbf{w}),\mathbf{y}_i);\theta\Big)
\!\ell_{\text{CCE}}(f(\mathbf{x}_i;\mathbf{w}),\mathbf{y}_i) \nonumber
\end{align*}
where $\text{train}$ denotes the (noisy) training set and $\text{val}$ denotes the clean validation set.
Our goal is to optimize the weighting-network parameters $\theta$ such that the optimal classifier $\mathbf{w}^{\ast}(\theta)$ from the inner problem (learned with the sample weights produced by $\mathcal{W}(\cdot;\theta)$) maximizes performance on the outer problem, i.e., classification performance on the clean validation set.
Similar to many optimization-based meta-learning approaches, the inner problem is approximated by a single SGD step on the classifier parameters \cite{maml,bilevel,mwnet}.
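To make the alternating one-step scheme concrete, the toy sketch below is our own simplified version: a binary logistic-regression classifier, a two-parameter sigmoid weighting function, and a finite-difference outer gradient. The actual MWNet implementation uses an MLP weighting network and differentiates through the inner SGD step with automatic differentiation \cite{mwnet}.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_losses(w, X, y):
    # Per-sample binary cross-entropy losses of a logistic model.
    p = sigmoid(X @ w)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def inner_step(w, theta, X, y, lr=0.1):
    # One weighted SGD step on the classifier parameters w; the weight of
    # each sample is W(loss_i; theta) = sigmoid(theta_0 + theta_1 * loss_i).
    losses = sample_losses(w, X, y)
    weights = sigmoid(theta[0] + theta[1] * losses)
    p = sigmoid(X @ w)
    grad = X.T @ (weights * (p - y)) / len(y)
    return w - lr * grad

def meta_step(w, theta, Xtr, ytr, Xval, yval, meta_lr=0.5, eps=1e-4):
    # Outer update: choose theta so that the one-step-updated classifier
    # does well on the clean validation set (finite-difference gradient).
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        lp = sample_losses(inner_step(w, tp, Xtr, ytr), Xval, yval).mean()
        lm = sample_losses(inner_step(w, tm, Xtr, ytr), Xval, yval).mean()
        grad[k] = (lp - lm) / (2 * eps)
    return theta - meta_lr * grad
```

Alternating a meta step on $\theta$ with an inner step on $\mathbf{w}$ mimics the online approximation used in practice.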
\section{Experimental Results}
We follow the standard experimental setup used in prior work \cite{mwnet,generalized-ce}. We use ResNet-50 (RN-50) \cite{resnet} as the base architecture; for the CIFAR datasets, we adopt the common practice of replacing the first convolutional layer of RN-50 (kernel size 7, stride 2) with a convolutional layer of kernel size 3 and stride 1, and removing the first max-pool operation \cite{simclr}.
{\bf Datasets:} We hold out 1000 random training samples as the validation set for the CIFAR-10/CIFAR-100 datasets. For the remaining 49000 training samples, we synthetically generate label noise; the test set is kept clean, and we report performance on it. We use two types of label noise: uniform (symmetric) and asymmetric. We apply uniform noise with probability $\{0,0.2,0.5, 0.8,0.9, 0.95\}$, where the true class is corrupted uniformly at random to any of the classes (including the true class).
We also experiment with the typical asymmetric noise scenario. We apply asymmetric noise with probability $\{0.2, 0.3, 0.4\}$, where the true class label is changed to a similar class, following the exact setup used in prior research \cite{generalized-ce,forward}. For CIFAR-10, the class mappings are TRUCK $\rightarrow$ AUTOMOBILE, BIRD $\rightarrow$ AIRPLANE, DEER $\rightarrow$ HORSE, CAT $\leftrightarrow$ DOG. For CIFAR-100, where the 100 classes are categorized into 20 groups of 5 classes each, each class is mapped to the next class within its group (wrapping around circularly).
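The two corruption procedures above can be sketched in a few lines. The implementation below is our own illustration of the setup described; the mapping dictionary in the usage example assumes the standard CIFAR-10 class indexing (e.g., truck = 9, automobile = 1).

```python
import numpy as np

def uniform_noise(labels, p, num_classes, rng):
    """With probability p, replace each label by a class drawn uniformly
    at random from all classes (including the true one)."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    labels[flip] = rng.integers(0, num_classes, size=flip.sum())
    return labels

def asymmetric_noise(labels, p, mapping, rng):
    """With probability p, replace each label by a fixed 'similar' class
    given by `mapping` (labels not in `mapping` stay unchanged)."""
    labels = labels.copy()
    lut = np.arange(labels.max() + 1)   # identity lookup table
    for src, dst in mapping.items():
        lut[src] = dst
    flip = rng.random(len(labels)) < p
    labels[flip] = lut[labels[flip]]
    return labels
```

Note that under uniform noise with probability $p$, the effective fraction of changed labels is $p\,(C-1)/C$, since a corrupted label may land on the true class.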
We also compare our methods on the large-scale Clothing1M dataset with 1 million training samples, where label noise arises naturally from the real-world annotation process \cite{xiao2015learning}.
{\bf SimCLR Training:} To train the base encoder $\hat{f}(\cdot)$ and the projection head $g(\cdot)$, we use a batch size of 1024 (1200) and train for 1000 (300) epochs with the LARS optimizer \cite{lars} on a single NVIDIA RTX8000 (12 NVIDIA M40) GPU(s) for the CIFAR-10/100 (Clothing1M) datasets. We use a learning rate of 4 (1.4), linearly ramp the learning rate up from zero (warmup) during the first 0.5\% (1\%) of the total iterations, and then decay it to zero following a cosine curve (cosine annealing) for the CIFAR-10/100 (Clothing1M) datasets.
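The warmup-plus-cosine learning-rate schedule can be expressed as a simple function of the iteration index; the helper below is a sketch of this schedule, not the exact training code we used:

```python
import math

def warmup_cosine_lr(step, total_steps, base_lr, warmup_frac=0.005):
    """Linearly ramp the learning rate from 0 to base_lr over the first
    warmup_frac of training, then decay it to 0 with a cosine curve."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

The same helper, with a warmup fraction of 1\%, also describes the fine-tuning schedule used later.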
{\bf ImageNet Pre-training:} To separate the benefit of self-supervised pre-training on the noisy dataset itself from the benefit of transferring representations learned on a large external dataset, we also compare the SimCLR initialization against the RN-50 model pre-trained on the ImageNet dataset. For the Clothing1M dataset, we use a standard pre-trained RN-50 classifier. For the CIFAR datasets, following the setup of \cite{pretraining}, we train an RN-50 classifier from scratch (with the CIFAR modifications to the first convolutional layer) on the ImageNet-$32\times32$ (INet32) dataset \cite{imagenet32}; this model achieves 43.67\% Top-1 validation accuracy.
{\bf Methods:} We use SimCLR RN-50 initialization for three methods: standard ERM training with the CCE loss, ERM training with the generalized cross-entropy loss $L_q$ ($q=0.5$ or $0.66$), and MWNet. The value of $q$ is important only for high noise rates ($\geq 0.8$), where choosing a large $q$ (0.66) yields an optimization problem that is difficult to solve; for all other noise rates, a higher value of $q$ leads to better performance. We fine-tune for 120 epochs on the CIFAR datasets using the SGD optimizer with a momentum of 0.9, a learning rate of 0.01, a weight decay of 0.0001, a batch size of 100, and a warmup cosine annealing schedule (warmup for 1\% of the total iterations).
For other baseline methods, we use the results reported in their respective papers. Note that different prior works use different architectures; for a fair comparison, we therefore also report the test accuracy obtained by training on clean samples with each architecture (and initialization).
We report the average accuracy over five runs, each with independently generated noisy labels (for the CIFAR datasets; the Clothing1M labels are already noisy).
For the Clothing1M dataset, we use the CCE loss, the $L_q$ loss ($q=0.66$), and the MAE loss. We use a batch size of 32 and the SGD optimizer with a momentum of 0.9 and an initial learning rate of 0.001. In each epoch, we randomly sample 4000 batches of training samples such that the class distribution within the epoch is balanced, similar to \cite{dividemix,elr}. We fine-tune for 60 epochs and reduce the learning rate by a factor of 2 every 10 epochs.
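The class-balanced epoch sampling can be sketched as follows. This is a hypothetical helper we wrote for illustration; it balances classes within each batch, which in turn balances the epoch (the batch size should be a multiple of the number of classes for exact balance).

```python
import numpy as np

def balanced_batches(labels, num_batches, batch_size, rng):
    """Yield index batches whose classes are balanced: each batch draws
    batch_size // num_classes samples per class (with replacement)."""
    classes = np.unique(labels)
    per_class = batch_size // len(classes)
    idx_by_class = {c: np.where(labels == c)[0] for c in classes}
    for _ in range(num_batches):
        batch = np.concatenate([
            rng.choice(idx_by_class[c], size=per_class, replace=True)
            for c in classes
        ])
        rng.shuffle(batch)
        yield batch
```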
\subsection{Results and Discussion}
Table~\ref{tab:sym} lists classification performance on the test set under uniform label noise on the CIFAR datasets. The SimCLR initialization significantly improves the performance of the CCE loss, the $L_q$ loss, and the MWNet method. Under 90\% label noise, the CCE loss achieves an accuracy of 42.7\% (10.1\%) with a random initialization, and DivideMix achieves 93.2\% (31.5\%) on the CIFAR-10 (CIFAR-100) dataset. Under the same noise rate, the CCE loss with the SimCLR initialization achieves 82.9\% (52.11\%) on the CIFAR-10 (CIFAR-100) dataset, which translates to a 9\% (65\%) gain compared to the state-of-the-art method DivideMix. Furthermore, with the SimCLR initialization, the MWNet method and the $L_q$ loss improve performance significantly beyond the CCE loss. Under the very high noise rate of 95\% uniform noise, we were unable to optimize the MWNet model and the $L_q$-loss-based method with a random initialization; with the SimCLR initialization, however, these methods perform significantly better than the state-of-the-art method.
Table~\ref{tab:asym} lists classification performance on the test set under asymmetric label noise on the CIFAR datasets. As with uniform noise, we observe that the SimCLR initialization improves model robustness under asymmetric label noise. However, the supervised robust methods do not beat the prior state-of-the-art method ELR (Early-Learning Regularization) \cite{elr} in the asymmetric noise case.
Table~\ref{tab:clothing} lists classification performance on the test set of the Clothing1M dataset. Although the CCE loss, the $L_q$ loss, and the MAE loss with the SimCLR initialization do not outperform state-of-the-art methods, they perform remarkably well without any dedicated noise-handling machinery such as noise cleansing or semi-supervised training.
{\bf ImageNet pre-training vs.\ SimCLR training: } We observe that the RN-50 model pre-trained on the ImageNet dataset also improves model robustness under label noise. Note that there is significant overlap between the classes of the ImageNet dataset and those of the CIFAR datasets. On the CIFAR-100 dataset, fine-tuning the RN-50 model pre-trained on INet32 significantly improves the test accuracy to 81.51\%, compared to $\sim75\%$ with a random or the SimCLR initialization.
However, the SimCLR initialization does not require such a large-scale transfer of knowledge from another, larger dataset. Furthermore, we observe that the drop in performance (relative to the accuracy achieved when training on clean samples) is significantly smaller for the SimCLR initialization than for the pre-trained initialization. The pre-trained INet32 initialization appears to help more in the asymmetric noise case; the overlap with ImageNet classes, combined with labels being corrupted to similar classes, might explain this behavior. In contrast, for the Clothing1M dataset, only two of the 14 classes are present in the ImageNet dataset.
Consequently, on Clothing1M, the CCE loss with the SimCLR initialization improves the test accuracy by 3.3 percentage points compared to the ImageNet pre-trained RN-50 initialization.
\section{Conclusion}
Supervised methods designed to be robust to label noise nevertheless degrade substantially on image datasets, where learning a good visual representation is at least as important as handling the noise itself.
We have shown that many supervised robust methods perform significantly better with the SimCLR initialization, which learns visual representations without relying on the (noisy) labels.
Equipped with the SimCLR initialization, even the CCE loss can beat the state-of-the-art SSL method in some cases. Thus, we advocate using the CCE loss with the SimCLR initialization as the new baseline method for learning under label noise on image datasets.
Rather than endorsing any specific earlier work, our results indicate that supervised robust methods underperformed the SSL methods largely because they fail to learn good visual representations under label noise.
We believe that extending our work to other supervised robust methods will make many of them strong contenders to the SSL methods on image datasets. Furthermore, we believe that decoupling the representation-learning problem from the task of learning under label noise will enable new methods that excel at one of these tasks even if they fail at the other.
\section{Introduction}
Deep neural networks (DNNs) for classification, trained by minimizing the categorical cross-entropy (CCE) loss on labeled samples, are sensitive to label noise.
Several methods tackle label noise inherently by replacing the CCE loss with a loss function that is provably robust to corrupted labels \cite{generalized-ce,peer,normalized-loss,ghosh2015,ghosh2017-dt,ghosh2017,unhinged,peer,ldmi}.
However, such robust losses are sometimes difficult to optimize in conjunction with DNNs \cite{generalized-ce}.
Thus, another group of methods proposes noise-cleansing modules that identify possibly mislabeled samples (e.g., samples with high loss) and separate them out during standard training with the CCE loss \cite{mwnet,l2rw,robust-mwnet,cleannet,co-teaching,glc,l2l-noise,m-correction}.
Recently, several methods propose a semi-supervised framework that, in addition to the noise-cleansing modules, treats suspected noisy samples as unlabeled data \cite{dividemix,elr}.
These models leverage semi-supervised learning techniques to improve visual representation learning from the unlabeled samples; e.g., DivideMix uses MixUp (training on convex combinations of pairs of samples and their labels), consistency regularization, and the entropy minimization strategy \cite{mixup,dividemix}.
These semi-supervised strategies have substantially outperformed prior supervised robust learning methods, i.e., methods that handle label noise using only the labeled (noisy) samples \cite{dividemix}.
Thus, it is natural to ask whether the improvement of the semi-supervised methods over the supervised robust methods is due to the latter's \emph{sensitivity} to label noise or due to their inability to learn a \emph{`good' visual representation} under label noise.
We find that although the CCE loss is sensitive to label noise, as proved in prior research \cite{ghosh2017}, a significant portion of the performance drop when training with the CCE loss is due to a failure to learn good visual representations.
In contrast, semi-supervised methods can effectively learn visual representations even under label noise, which improves their performance.
Surprisingly, we find that when initialized with a model providing a proper visual representation, the CCE loss can beat other noise-robust models, including state-of-the-art methods (trained from a random initialization), under severe label noise.
However, our point is not to advocate the use of the CCE loss under label noise; instead, we show that many prior supervised noise-robust methods improve significantly when properly initialized.
For example, with a proper initialization, the generalized cross-entropy loss \cite{generalized-ce}, a robust loss, retains good performance even under 95\% uniform label noise on the CIFAR-100 dataset, whereas training the same loss from a random initialization performs no better than a random model.
This observation raises another question: should current baseline methods learn the visual representation and handle the label noise problem simultaneously?
We believe that decoupling these two separate problems will enable new methods that excel at one of these tasks even if they fail at the other.
The idea of using a pre-trained model initialization or self-supervised learning is not new, even in label noise research.
In \cite{self-supervision}, the authors use auxiliary tasks, such as rotation prediction, to improve robustness under label noise.
In \cite{pretraining}, the authors use a pre-trained ImageNet model to improve robustness.
In contrast, our work uses neither an auxiliary task nor a pre-trained model from another dataset.
We use the same noisy dataset to train a contrastive model, which then initializes the network when learning with label noise.
Contrastive learning is a self-supervised pre-training approach that learns visual representations by pulling together the representations of differently augmented views of the same image and pushing apart those of different images; since it does not use labels, it is unaffected by label noise.
Thus, datasets that exhibit a significant covariate shift from common pre-training datasets (most often ImageNet) can still be handled using the contrastive model initialization.
The work closest to ours is \cite{c2d}, which uses contrastive pre-trained models to improve the DivideMix algorithm.
In contrast, we show that a self-supervised contrastive model initialization can improve robustness under label noise for many supervised base models.
In particular, training with the noise-sensitive cross-entropy loss (with a contrastive initialization) can also beat state-of-the-art label noise methods.
Moreover, for two specific methods from the literature (one noise-tolerant method and one noise-cleansing method), we show that, with the right initialization, they offer significant improvements under label noise over the cross-entropy loss.
With the right initialization, robust supervised methods can concentrate on their main task: learning under label noise.
Before delving into the specific methods used in this paper, we provide background on contrastive representation learning.
\section{Introduction}
Resource allocation has become an indispensable part of the design of any engineering system that consumes resources, such as electric power in home energy management \cite{DeAngelis13}, access bandwidth and battery life in wireless communications \cite{Inaltekin05}, computing bandwidth under certain QoS requirements \cite{Bini11}, computing bandwidth for time-sensitive applications \cite{ChasparisMaggio16}, and computing bandwidth and memory in parallelized applications \cite{Brecht93}.
When resource allocation is performed online and the number, arrival, and departure times of the tasks are not known a priori (as in the case of CPU bandwidth allocation), the role of a resource manager ($\mathsf{RM}$) is to guarantee an \emph{efficient} operation of all tasks by appropriately distributing resources. However, guaranteeing efficiency through the adjustment of resources requires the formulation of a centralized optimization problem (e.g., mixed-integer linear programming formulations \cite{Bini11}), which further requires information about the specifics of each task (i.e., application details). Such information may be available to neither the $\mathsf{RM}$\ nor the task itself.
Given the difficulties involved in the formulation of centralized optimization problems, not to mention their computational complexity, feedback from the running tasks in the form of performance measurements may provide valuable information for the establishment of efficient allocations. Such (feedback-based) techniques have recently been considered in several scientific domains, such as in the case of application parallelization (where information about the memory access patterns or affinity between threads and data is used in the form of scheduling hints) \cite{Broquedis10}, or in the case of allocating virtual processors to time-sensitive applications \cite{ChasparisMaggio16}.
To tackle the issues of centralized optimization techniques, resource allocation problems have also been addressed through distributed or game-theoretic optimization schemes. The main goal of such approaches is to address a centralized (\emph{global}) objective for resource allocation through agent-based (\emph{local}) objectives, where, for instance, agents may represent the tasks to be allocated. Examples include the cooperative game formulation for allocating bandwidth in grid computing \cite{Sub08}, and the non-cooperative game formulations for medium access protocols in communications \cite{Tembine09} and for allocating resources in cloud computing \cite{Wei10}. The main advantage of distributing the decision-making process is the considerable reduction in computational complexity (a group of $N$ tasks can be allocated to $m$ resources in $m^N$ possible ways, while a single task may be allocated in only $m$ possible ways). This further allows for the development of online selection rules where tasks/agents make decisions, often using current observations of their \emph{own} performance.
This paper proposes a distributed learning scheme specifically tailored to the problem of dynamically assigning/pinning the threads of a parallelized application to the available processing units. Prior work has demonstrated the importance of thread-to-core bindings for the overall performance of a parallelized application. For example, \cite{Klug11} describes a tool that checks the performance of each of the available thread-to-core bindings and searches for an optimal placement. Unfortunately, the \emph{exhaustive-search} type of optimization that is implemented may prohibit runtime implementation. Reference \cite{Broquedis10} combines the problem of thread scheduling with scheduling hints related to thread-memory affinity issues. These hints are able to accommodate load distribution given information about the application structure and the hardware topology. The HWLOC library is used to perform the topology discovery, which builds a hierarchical architecture composed of hardware objects (NUMA nodes, sockets, caches, cores, etc.), and the BubbleSched library \cite{Thibault07} is used to implement scheduling policies.
A similar scheduling policy is also implemented by \cite{Olivier11}.
This form of scheduling strategy exhibits several disadvantages when dealing with dynamic environments (e.g., a varying amount of available resources). In particular, retrieving the exact affinity relations during runtime may be an issue due to the involved information complexity. Furthermore, in the presence of other applications running on the same platform, the above methodologies will fail to identify irregular application behavior and react promptly to such irregularities. Instead, in such dynamic environments, it is more appropriate to consider learning-based optimization techniques where the scheduling policy is updated based on performance measurements from the running threads. Through such a measurement- or learning-based scheme, we can a) \emph{reduce information complexity} (i.e., when dealing with a large number of possible thread/memory bindings), since only performance measurements need to be collected during runtime, and b) \emph{adapt to uncertain/irregular application behavior}.
To this end, this paper proposes a dynamic (algorithmic-based) scheme for optimally allocating the threads of a parallelized application to a set of available CPU cores. The proposed methodology implements a distributed reinforcement learning algorithm (executed in parallel by a resource manager/scheduler), according to which each thread is considered an independent agent making decisions over its own CPU affinities. The proposed algorithm requires minimum information exchange, namely only the performance measurements collected from each running thread. Furthermore, it exhibits adaptivity and robustness to possible irregularities in the behavior of a thread or to possible changes in the availability of resources. We analytically demonstrate that the reinforcement-learning scheme asymptotically learns a locally optimal allocation, while it is flexible enough to accommodate several optimization criteria. We also demonstrate through experiments on a Linux platform that the proposed algorithm outperforms the scheduling strategies of the operating system with respect to completion time.
The paper is organized as follows. Section~\ref{sec:framework} describes the overall framework and objective. Section~\ref{sec:MultiagentFormulation} introduces the concept of multi-agent formulations and discusses their advantages. Section~\ref{sec:ReinforcementLearning} presents the proposed reinforcement-learning algorithm for dynamic placement of threads, and Section~\ref{sec:ConvergenceAnalysis} presents its convergence analysis. Section~\ref{sec:Experiments} presents experiments with the proposed algorithm on a Linux platform and comparison tests with the operating system's response. Finally, Section~\ref{sec:Conclusions} presents concluding remarks.
\textit{Notation:}
\begin{itemize}
\item[$\bullet$] $|x|$ denotes the Euclidean norm of a vector $x\in\mathbb{R}^{n}$.
\item[$\bullet$] $\mathsf{dist}(x,A)$ denotes the minimum distance from a vector $x\in\mathbb{R}^{n}$ to a set $A\subset\mathbb{R}^{n}$, i.e., $\mathsf{dist}(x,A)\doteq\inf_{y\in{A}}|x-y|$.
\item[$\bullet$] $\mathcal{B}_{\delta}(A)$ denotes the $\delta$-neighborhood of a set $A\subset\mathbb{R}^{n}$, i.e., $\mathcal{B}_{\delta}(A)\doteq\{x\in\mathbb{R}^{n}:\mathsf{dist}(x,A)<\delta\}$.
\item[$\bullet$] For some finite set $A$, $\magn{A}$ denotes the cardinality of $A$.
\item[$\bullet$] The probability simplex of dimension $n$ is denoted $\SIMPLEX{n}$ and defined as
$$\SIMPLEX{n}\doteq\Big\{x=(x_1,...,x_n)\in[0,1]^n:\sum_{i=1}^nx_i = 1\Big\}.$$
\item[$\bullet$] $\Pi_{\SIMPLEX{n}}[x]$ is the projection of a vector $x\in\mathbb{R}^{n}$ onto $\SIMPLEX{n}$.
\item[$\bullet$] $e_j\in\mathbb{R}^{n}$ denotes the unit vector whose $j$th entry is equal to 1 while all other entries are zero.
\item[$\bullet$] For a vector $\sigma\in\SIMPLEX{n}$, let $\RAND{\sigma}{a_1,...,a_n}$ denote the random selection of an element of the set $\{a_1,...,a_n\}$ according to the distribution $\sigma$.
\end{itemize}
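The projection onto the simplex admits a standard $O(n\log n)$ implementation based on sorting the entries. A minimal Python sketch (not part of the notation; included only for illustration):

```python
def project_simplex(x):
    """Euclidean projection of a vector x onto the probability simplex.

    Standard sort-based algorithm: find the threshold theta such that
    sum_i max(x_i + theta, 0) = 1, then clip each shifted entry at zero.
    """
    u = sorted(x, reverse=True)
    css, rho, rho_css = 0.0, 0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        if ui + (1.0 - css) / i > 0:   # index i is still in the support
            rho, rho_css = i, css
    theta = (1.0 - rho_css) / rho
    return [max(xi + theta, 0.0) for xi in x]
```

A point already on the simplex is left unchanged, e.g. `project_simplex([0.2, 0.5, 0.3])` returns the same vector.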
\section{Problem Formulation \& Objective}
\label{sec:framework}
\subsection{Framework}
\label{sec:RMapp}
We consider a resource allocation framework for addressing the problem of dynamic pinning of parallelized applications. In particular, we consider a number of threads $\mathcal{I}=\{1,2,...,n\}$ resulting from a parallelized application. These threads need to be pinned/scheduled for processing into a set of available CPU's $\mathcal{J}=\{1,2,...,m\}$ (not necessarily homogeneous).
We denote the \emph{assignment} of a thread $i$ to the set of available CPU's by $\alpha_i \in \mathcal{A}_i \equiv \mathcal{J}$, i.e., $\alpha_i$ designates the number of the CPU where this thread is being assigned to. Let also $\alpha=\{\alpha_i\}_i$ denote the \emph{assignment profile}.
Responsible for the assignment of CPU's to the threads is the Resource Manager ($\mathsf{RM}$), which periodically checks the prior performance of each thread and makes a decision over their next CPU placements so that a (user-specified) objective is maximized. Throughout the paper, we will assume that:
\begin{itemize}
\item[(a)] The internal properties and details of the threads are not known to the $\mathsf{RM}${}. Instead, the $\mathsf{RM}$\ may only have access to measurements related to their performance (e.g., their processing speed).
\item[(b)] Threads may not be idled or postponed. Instead, the goal of the $\mathsf{RM}${} is to assign the \emph{currently} available resources to the \emph{currently} running threads.
\item[(c)] Each thread may only be assigned to a single CPU core.
\end{itemize}
\begin{figure}[t!]
\centering
\input{communication.tex}
\caption{Schematic of \emph{static} resource allocation framework.}
\label{fig:StaticApproach}
\end{figure}
\subsection{Static optimization \& issues} \label{sec:StaticOptimization}
The selection of a centralized objective is open-ended. In the remainder of the paper, we will consider two candidate centralized objectives, in order to emphasize the flexibility of the introduced methodology in addressing alternative criteria. In the first case, the centralized objective corresponds to maximizing the average processing speed; in the second, to maximizing the average processing speed while maintaining a balance between the processing speeds of the running threads.
Let $v_i=v_i(\alpha,w)$ denote the processing speed of thread $i$ which depends on both the overall assignment $\alpha$, as well as exogenous parameters aggregated within $w$. The exogenous parameters $w$ summarize, for example, the impact of other applications running on the same platform or other irregularities of the applications. Then, the previously mentioned centralized objectives may take on the following form:
\begin{eqnarray} \label{eq:CentralizedObjective}
\max_{\alpha\in\mathcal{A}} & f(\alpha,w),
\end{eqnarray}
where
\begin{enumerate}
\item[(O1)] $f(\alpha,w) \doteq \sum_{i=1}^{n} v_i/n$, corresponds to the average processing speed of all threads;
\item[(O2)] $f(\alpha,w) \doteq \sum_{i=1}^{n} [v_i - \gamma(v_i-\sum_{j\in\mathcal{I}}v_j/n)^2]/n$, for some $\gamma>0$, corresponds to the average processing speed minus a penalty that is proportional to the speed variance among threads.
\end{enumerate}
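Both objectives are straightforward to evaluate from measured speeds. The following Python sketch (function names are ours, for illustration only) computes (O1) and (O2) from a list of per-thread speed measurements:

```python
def objective_O1(v):
    """(O1): average processing speed over all threads."""
    return sum(v) / len(v)

def objective_O2(v, gamma):
    """(O2): average speed minus gamma times the squared deviation of each
    thread's speed from the mean (a variance-type balance penalty)."""
    n = len(v)
    mean = sum(v) / n
    return sum(vi - gamma * (vi - mean) ** 2 for vi in v) / n
```

For $\gamma=0$ the two objectives coincide; increasing $\gamma$ trades average speed for balance among the threads.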
Any solution to the optimization problem (\ref{eq:CentralizedObjective}) would correspond to an \emph{efficient assignment}. Figure~\ref{fig:StaticApproach} presents the sequence of actions in a \emph{static} resource allocation framework, where the centralized objective (\ref{eq:CentralizedObjective}) is solved by the $\mathsf{RM}$\ once and the optimal assignment is then communicated to the threads.
However, there are two significant issues when posing an optimization problem in the form of (\ref{eq:CentralizedObjective}). In particular,
\begin{enumerate}
\item the function $v_i(\alpha,w)$ is unknown and it may only be evaluated through measurements of the processing speed, denoted $\tilde{v}_i$;
\item the exogenous influence $w$ is unknown and may vary with time, thus the optimal assignment may not be fixed with time.
\end{enumerate}
In conclusion, the static optimization (\ref{eq:CentralizedObjective}) underlying the resource allocation framework of Figure~\ref{fig:StaticApproach} is not easily implementable.
\subsection{Measurement- or learning-based optimization} \label{sec:MeasurementBasedOptimization}
We wish to target a \emph{static} objective of the form (\ref{eq:CentralizedObjective}) through a \emph{measurement-based} (or \emph{learning-based}) optimization approach. According to such an approach, the $\mathsf{RM}$\ reacts to measurements of the objective function $f(\alpha,w)$, periodically collected at time instances $k=1,2,...$ and denoted $\tilde{f}(k)$. In the case of objective (O1), $\tilde{f}(k)\doteq\sum_{i=1}^{n}\tilde{v}_i(k)/n$. Given these measurements and the current assignment $\alpha(k)$ of resources, the $\mathsf{RM}$\ selects the next assignment of resources $\alpha(k+1)$ so that the measured objective approaches the true optimum of the unknown function $f(\alpha,w)$. In other words, the $\mathsf{RM}$\ employs an update rule of the form:
\begin{equation} \label{eq:GenericUpdateRule}
\{(\tilde{v}_i(1),\alpha_i(1)),...,(\tilde{v}_i(k),\alpha_i(k))\}_i\mapsto\{\alpha_i(k+1)\}_i
\end{equation}
according to which prior pairs of measurements and assignments for each thread $i$ are mapped into a new assignment $\alpha_i(k+1)$ that will be employed during the next evaluation interval.
\begin{figure}[th!]
\centering
\input{communication_adaptive.tex}
\caption{Schematic of \emph{dynamic} resource allocation framework.}
\label{fig:DynamicApproach}
\end{figure}
The overall framework is illustrated in Figure~\ref{fig:DynamicApproach} describing the flow of information and steps executed. In particular, at any given time instance $k=1,2,...$, each thread $i$ communicates to the $\mathsf{RM}$\ its current processing speed $\tilde{v}_i(k)$. Then the $\mathsf{RM}$\ updates the assignments for each thread $i$, $\alpha_i(k+1)$, and communicates this assignment to them.
\subsection{Distributed learning} \label{sec:DistributedOptimization}
Parallelized applications consist of multiple threads that can be controlled independently with respect to their CPU affinity (at least on Linux machines). Recently developed performance-recording tools (e.g., PAPI \cite{Mucci99}) also allow for a real-time collection of performance counters during the execution time of a thread. Thus, decisions over the assignment of CPU affinities can be performed independently for each thread, allowing for the introduction of a \emph{distributed learning} framework. Under such a framework, the $\mathsf{RM}$\ treats each thread as an independent decision maker and provides each thread with an independent decision rule.
A distributed learning approach (i) \emph{reduces computation complexity}, since each thread has only $m$ available choices (instead of $m^{n}$ available group choices), and (ii) \emph{allows for an immediate response to changes observed in the environment} (e.g., available computing bandwidth), thus increasing the adaptivity and robustness of the resource allocation mechanism. For example, if the load of the $j$th CPU core increases (captured through the exogenous parameters $w_j$), then the thread(s) currently running on CPU $j$ may immediately react to this change without necessarily altering the assignment of the remaining threads.
Such an immediate reaction to changes in the environment constitutes a great advantage over centralized schemes. In the absence of an explicit form of the centralized objective (\ref{eq:CentralizedObjective}), a centralized framework would require a testing phase in which every possible assignment is evaluated over some interval and compared with respect to its performance; only once all assignments have been tested and evaluated can the best one be selected. Even if such an optimization is repeated often, it is obvious that this \emph{exhaustive-search} approach will suffer from slow responses to possible changes in the environment.
It is also evident that the large evaluation period required by an exhaustive-search framework cannot provide any performance guarantees during the evaluation phase. In particular, since all alternative assignments need to be tested over some evaluation interval, bad assignments also have to be tried and evaluated. This may have an unpredictable impact on the overall performance of the application, thus reducing the impact of the optimization itself.
On the other hand, distributed learning schemes can be designed to allow only for small variations in the current assignment. For example, threads may experiment independently with alternative CPU assignments; however, the frequency of such experimentations can be tuned independently for each thread. At the same time, an experimentation that leads to a worse assignment may always be reversed by the thread performing it, thus maintaining good performance throughout the execution time. Hence, distributed learning may allow for (iii) \emph{a more direct and careful experimentation with the alternative options}.
At the same time, distributed learning schemes can be designed to (iv) \emph{gradually approach at least locally optimum assignments}, which include all solutions to the static centralized optimization (\ref{eq:CentralizedObjective}). Thus, such schemes may provide guarantees over the performance of the overall parallelized application.
Points (i)--(iv) discussed above constitute the main advantages of a distributed learning scheme compared to a centralized approach.
\subsection{Objective} \label{sec:Objective}
The objective in this paper is to address the problem of adaptive or dynamic pinning through a distributed learning framework. Each thread will constitute an independent decision maker or agent, thus naturally introducing a multi-agent formulation. Each thread selects its own CPU assignments independently using its own preference criterion (although the necessary computations for such selection are executed by the $\mathsf{RM}$).
The goal is to design a preference criterion and a selection rule for each thread, so that when each thread tries to maximize its own (\emph{local}) criterion then certain guarantees can be achieved regarding the overall (\emph{global}) performance of the parallelized application. Furthermore, the selection criterion for each thread should be adaptive and robust to possible changes observed in the environment (e.g., the resource availability).
In the following sections, we will go through the design for such a distributed scheme, and we will provide guarantees with respect to its asymptotic behavior.
\section{Multi-Agent Formulation} \label{sec:MultiagentFormulation}
The first step towards a distributed learning scheme is the decomposition of the decision-making process into multiple decision makers (or agents). In the problem of placing threads of a parallelized application into a set of available processing units, each thread naturally constitutes an independent decision maker.
\subsection{Strategy} \label{sec:Strategy}
Since each agent (or thread) selects actions independently, we generally assume that each agent's action is a realization of an independent discrete random variable. Let $\sigma_{ij}\in[0,1]$, $j\in\mathcal{A}_i$, denote the probability that agent $i$ selects its $j$th action in $\mathcal{A}_i$. If $\sum_{j=1}^{\magn{\mathcal{A}_i}}\sigma_{ij}=1$, then $\sigma_i\doteq(\sigma_{i1},...,\sigma_{i\magn{\mathcal{A}_i}})$ is a probability distribution over the set of actions $\mathcal{A}_i$ (or \emph{strategy} of agent $i$). Then $\sigma_i\in\SIMPLEX{\magn{\mathcal{A}_i}}$. To provide an example, consider the case of 3 available CPU cores, i.e., $\mathcal{A}_i=\{1,2,3\}$. In this case, the strategy $\sigma_i\in\SIMPLEX{3}$ of thread $i$ may take the following form:
\begin{equation*}
\sigma_{i} = \left(\begin{array}{c}
0.2 \\
0.5 \\
0.3
\end{array}\right),
\end{equation*}
such that thread $i$ assigns itself to CPU core $1$ with probability $20\%$, to CPU core $2$ with probability $50\%$, and to CPU core $3$ with probability $30\%$. Briefly, the assignment selection will be denoted by $$\alpha_i = \RAND{\sigma_i}{\mathcal{A}_i}.$$
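The draw $\alpha_i = \RAND{\sigma_i}{\mathcal{A}_i}$ is an ordinary sample from a discrete distribution. A minimal sketch via inverse-CDF sampling (the function name is ours, for illustration):

```python
import random

def rand_select(sigma, actions, rng=random):
    """Draw one element of `actions` according to the distribution `sigma`
    (inverse-CDF sampling: compare a uniform draw against the running sum)."""
    r = rng.random()
    cum = 0.0
    for a, p in zip(actions, sigma):
        cum += p
        if r < cum:
            return a
    return actions[-1]  # guard against floating-point rounding of the sum
```

With $\sigma_i=(0.2,0.5,0.3)$, CPU core 2 is returned roughly half of the time.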
We will also use the term \emph{strategy profile} to denote the combination of strategies of all agents $\sigma=(\sigma_1,...,\sigma_n)\in\mathbf{\Delta}$ where $\mathbf{\Delta}\doteq\SIMPLEX{\magn{\mathcal{A}_1}}\times ... \times \SIMPLEX{\magn{\mathcal{A}_n}}$ is the set of strategy profiles.
Note that if $\sigma_i$ is a unit vector (or a vertex of $\SIMPLEX{\magn{\mathcal{A}_i}}$), say $e_j$, then agent $i$ selects its $j$th action with probability one. Such a strategy will be called \emph{pure strategy}. Likewise, a \emph{pure strategy profile} is a profile of pure strategies. We will also use the term \emph{mixed strategy} to denote a strategy that is not pure.
\subsection{Utility function \& expected payoff} \label{sec:UtilityFunction}
A cornerstone in the design of any measurement-based algorithm is the \emph{preference criterion} or \emph{utility function} $u_i$ for each thread $i\in\mathcal{I}$. The utility function captures the benefit of a decision maker (thread) as resulting from the assignment profile $\alpha$ selected by all threads, i.e., it represents a function of the form $u_i:\mathcal{A}\to\mathbb{R}$. Often, we may decompose the argument of the utility function as follows $u_i(\alpha) = u_i(\alpha_i,\alpha_{-i})$, where $-i\doteq\mathcal{I}\backslash{i}$. The utility function introduces a preference relation for each decision maker, where $u_i(\alpha_i,\alpha_{-i}) \geq u_{i}(\alpha_i',\alpha_{-i})$ translates to $\alpha_i$ being at least as desirable/preferable as $\alpha_i'$.
It is important to note that the utility function $u_i$ of each agent/thread $i$ is subject to \emph{design} and it is introduced in order to guide the preferences of each agent. Thus, $u_i$ may not necessarily correspond to a measured quantity, but it could be a function of available performance counters.
For example, a natural choice for the utility of each thread is its own execution speed $v_i$. Other options may include more egalitarian criteria, where the utility function of each thread corresponds to the overall global objective $f(\alpha,w)$. The definition of a utility function is open-ended.
\subsection{Assignment Game} \label{sec:AssignmentGame}
Assuming that each thread (or agent) may decide independently on its own CPU placement, so that its preference criterion is maximized, a strategic-form game between the running threads can naturally be introduced. We define it as a strategic interaction or game because the strategy of each thread indirectly influences the performance of the other threads, thus introducing an interdependence between their utility functions. We define the triple $\{\mathcal{I},\mathcal{A},\{u_i\}_i\}$ as an \emph{assignment game}.
\subsection{Nash Equilibria} \label{sec:NashEquilibria}
Given a strategy profile $\sigma\in\mathbf{\Delta}$, the \emph{expected payoff vector} of each agent $i$, $U_i:\mathbf{\Delta}\to\mathbb{R}^{\magn{\mathcal{A}_i}}$, can be computed by
\begin{equation} \label{eq:ExpectedUtility}
U_i(\sigma) \doteq \sum_{\alpha_i\in\mathcal{A}_i}e_{\alpha_i}\sum_{\alpha_{-i}\in\mathcal{A}_{-i}}\left(\prod_{s\in{-i}}\sigma_{s\alpha_{s}}\right)u_i(\alpha_i,\alpha_{-i}).
\end{equation}
We may think of the $j$th entry of the expected payoff vector $U_i$, denoted $U_{ij}(\sigma)$, as the expected payoff of agent $i$ playing action $j$ at strategy profile $\sigma$.
Finally, let $u_i(\sigma)$ be the \emph{expected payoff} of agent $i$ at strategy profile $\sigma\in\mathbf{\Delta}$, which satisfies:
\begin{equation}
u_i(\sigma) = \sigma_i^{\mathrm T} U_i(\sigma).
\end{equation}
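For small games, the expected payoff vector (\ref{eq:ExpectedUtility}) can be evaluated by direct enumeration over the opponents' joint actions. The following sketch illustrates this (the function name and the two-agent game used in the test are made up for illustration):

```python
from itertools import product

def expected_payoff_vector(i, strategies, u_i):
    """Entry j is the expected payoff of agent i playing action j, while every
    other agent s draws its action independently from strategies[s]."""
    others = [s for s in range(len(strategies)) if s != i]
    U = []
    for j in range(len(strategies[i])):
        total = 0.0
        for joint in product(*(range(len(strategies[s])) for s in others)):
            # probability of this joint opponent action profile
            prob = 1.0
            for s, a in zip(others, joint):
                prob *= strategies[s][a]
            alpha = dict(zip(others, joint))
            alpha[i] = j
            total += prob * u_i([alpha[s] for s in range(len(strategies))])
        U.append(total)
    return U
```

The expected payoff $u_i(\sigma)$ is then the inner product of $\sigma_i$ with the returned vector.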
\begin{definition}[Nash Equilibrium] \label{eq:NashEquilibrium}
A strategy profile $\sigma^*=(\sigma_1^*,...,\sigma_n^*)\in\mathbf{\Delta}$ is a Nash equilibrium if, for each agent $i\in\mathcal{I}$,
\begin{equation} \label{eq:NashCondition}
u_i(\sigma_i^*,\sigma_{-i}^*) \geq u_i(\sigma_i,\sigma_{-i}^*),
\end{equation}
for all $\sigma_i\in\SIMPLEX{\magn{\mathcal{A}_i}}$ with $\sigma_i\neq\sigma_i^*$.
\end{definition}
In other words, a strategy profile is a Nash equilibrium when no agent has the incentive to unilaterally change its strategy (given that every other agent keeps its own strategy fixed). In the special case where, for all $i\in\mathcal{I}$, $\sigma_i^*$ is a pure strategy, the Nash equilibrium is called a \emph{pure Nash equilibrium}.
\subsection{Efficient assignments vs Nash equilibria} \label{sec:EfficientAllocationsVSNashEquilibria}
As we shall see in a forthcoming section, Nash equilibria can be potential attractors of several classes of distributed learning schemes, therefore their relation to the efficient assignments becomes important.
Nash equilibria correspond to \emph{locally} stable equilibria (with respect to the agents' preferences), i.e., no agent has the incentive to alter its strategy. On the other hand, \emph{efficient assignments} correspond to strategy profiles that maximize the global objective (\ref{eq:CentralizedObjective}). As probably expected, a Nash equilibrium does not necessarily coincide with an efficient assignment and vice versa. Both the utility function of each agent $i$, $u_i$, as well as the global objective $f(\alpha,w)$ are \emph{subject to design}, and their selection determines the relation between Nash equilibria and efficient assignments.
The $\mathsf{RM}$\ can be designed to have access to the performances of all threads. Thus, a natural choice for the utility of each thread can be the overall objective function, i.e.,
\begin{equation} \label{eq:AssignmentGameUtility}
u_i(\alpha) \doteq f(\alpha,w),
\end{equation}
for some given exogenous factor $w$. Note that this definition is independent of whether objective (O1) or (O2) is selected. Such classes of strategic interactions where the utilities of all independent agents are identical, are referred to as \emph{identical interest games} and they are part of a larger family of games, namely \emph{potential games}. It is straightforward to check that in this case, \emph{the set of efficient assignments belongs to the set of Nash equilibria (locally optimal allocations)}. In this case, it is desirable that agents learn to select placements that correspond to Nash equilibria, since a) it provides a minimum performance guarantee (since \emph{all} non-locally optimal placements are excluded), and b) it increases the probability for converging to the solution(s) of the global objective (\ref{eq:CentralizedObjective}).
\section{Reinforcement Learning (RL)} \label{sec:ReinforcementLearning}
In the previous section, we introduced utility functions for each thread (or agent), so that the set of efficient assignments (\ref{eq:CentralizedObjective}) is restricted within the set of Nash equilibria. However, as we have already discussed in Section~\ref{sec:MeasurementBasedOptimization}, the utility function of each thread is not known a priori; rather, it may only be measured after the selection of a particular assignment is in place. Thus, the question that naturally arises is \emph{how agents may choose assignments based only on their available measurements so that eventually an efficient assignment is established for all threads}.
To this end, we employ a distributed learning framework (namely, \emph{perturbed learning automata}) that is based on the reinforcement learning algorithm introduced in \cite{ChasparisShamma11_DGA,ChasparisShammaRantzer14}. It belongs to the general class of \emph{learning automata} \cite{Narendra89}.
The basic idea behind reinforcement learning is rather simple. If agent $i$ selects action $j$ at instance $k$ and a favorable payoff $u_i(\alpha)$ results, the action probability $\sigma_{ij}(k)$ is increased and all other entries of $\sigma_i(k)$ are decreased.
The precise manner in which $\sigma_i(k)$ changes, depending on the assignment $\alpha_i(k)$ performed at stage $k$ and the response $u_i(\alpha(k))$ of the environment, completely defines the reinforcement learning model.
\subsection{Strategy update} \label{sec:StrategyUpdate}
According to the \emph{perturbed reinforcement learning} \cite{ChasparisShamma11_DGA,ChasparisShammaRantzer14}, the strategy of each thread at any time instance $k=1,2,...$ is as follows:
\begin{equation} \label{eq:SelectionRule}
\sigma_i(k) = (1-\lambda)x_i(k) + \frac{\lambda}{\magn{\mathcal{A}_i}}
\end{equation}
where $\lambda>0$ corresponds to a perturbation term (or \emph{mutation}) and $x_i(k)$ corresponds to the \emph{nominal strategy} of agent $i$. The nominal strategy is updated according to the following update recursion:
\begin{equation} \label{eq:StrategyUpdate}
x_i(k+1) = \Pi_{\SIMPLEX{\magn{\mathcal{A}_i}}}\left[x_i(k) + \epsilon u_i(\alpha(k))[e_{\alpha_i(k)}-x_i(k)]\right],
\end{equation}
for some constant step-size $\epsilon>0$. Note that according to this recursion, the new nominal strategy will increase in the direction of the action $\alpha_i(k)$ which is currently selected, and it will increase proportionally to the utility received from this selection. For sufficiently small step size $\epsilon>0$ and given that the utility function $u_i(\cdot)$ is uniformly bounded for all action profiles $\alpha\in\mathcal{A}$, the projection operator $\Pi_{\SIMPLEX{\magn{\mathcal{A}_i}}}[\cdot]$ can be skipped, since the right-hand side is then a convex combination of $x_i(k)$ and $e_{\alpha_i(k)}$ and thus remains in the simplex.
In comparison to \cite{ChasparisShamma11_DGA,ChasparisShammaRantzer14}, the difference here lies in the use of the constant step size $\epsilon>0$ (instead of a decreasing step-size sequence). This selection increases the adaptivity and robustness of the algorithm to possible changes in the environment. This is because a constant step size provides a fast transition of the nominal strategy from one pure strategy to another.
Furthermore, the reason for introducing the perturbation term $\lambda$ is to provide the possibility for the nominal strategy to escape from pure strategy profiles, that is profiles at which all agents assign probability one in one of the actions. Setting $\lambda>0$ is essential for providing an adaptive response of the algorithm to changes in the environment.
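One iteration of the $\mathsf{RM}$, combining the selection rule (\ref{eq:SelectionRule}) with the nominal-strategy update (\ref{eq:StrategyUpdate}), can be sketched as follows. This is a minimal illustration, not our actual C++ implementation; the \texttt{utility} callback stands in for the measured payoffs, assumed scaled to $(0,1]$ so that the projection can be skipped:

```python
import random

def rm_step(x, utility, lam=0.005, eps=0.005, rng=random):
    """One update of the perturbed reinforcement-learning scheme.

    x       : list of nominal strategies, one probability vector per thread
    utility : callback mapping an assignment profile alpha to a list of
              payoffs in (0, 1], one per thread (measured in practice)
    Returns the sampled assignment profile and the updated strategies.
    """
    # selection rule: sigma_i = (1 - lam) * x_i + lam / |A_i|  (mutation
    # toward the uniform distribution)
    sigma = [[(1.0 - lam) * p + lam / len(xi) for p in xi] for xi in x]
    # each thread draws its CPU placement from its perturbed strategy
    alpha = [rng.choices(range(len(s)), weights=s)[0] for s in sigma]
    u = utility(alpha)
    # nominal-strategy update: x_i <- x_i + eps * u_i * (e_{alpha_i} - x_i)
    for i, xi in enumerate(x):
        for j in range(len(xi)):
            e = 1.0 if j == alpha[i] else 0.0
            xi[j] += eps * u[i] * (e - xi[j])
    return alpha, x
```

Since $\epsilon u_i(\alpha)\in(0,1]$, each update is a convex combination of $x_i$ and the unit vector $e_{\alpha_i}$, so the nominal strategies remain probability vectors without explicit projection.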
\section{Convergence Analysis} \label{sec:ConvergenceAnalysis}
In this section, we establish a connection between the asymptotic behavior of the nominal strategy profile $x(k)$ and the Nash equilibria of the assignment game, when the utility function $u_i$ for each thread $i$ is defined by (\ref{eq:AssignmentGameUtility}) and the objective is given by either (O1) or (O2). Let $\mathcal{S}^{\lambda}$ denote the set of \emph{stationary points} of the mean-field dynamics (cf.,~\cite{KushnerYin03}) of the recursion (\ref{eq:StrategyUpdate}) (when the projection operator has been skipped), defined as follows
\begin{equation*} \label{eq:StationaryPoints}
\mathcal{S}^{\lambda}\doteq \left\{ x\in\mathbf{\Delta} : g_i^{\lambda}(x)\doteq\mathbb{E}\left[u_i(\alpha(k))[e_{\alpha_i(k)} - x_i(k)]|x(k)=x\right] = 0, \forall i\in\mathcal{I} \right\}.
\end{equation*}
The expectation operator $\mathbb{E}[\cdot]$ is defined appropriately over the canonical path space $\Omega=\mathbf{\Delta}^{\infty}$ with an element $\omega$ being a sequence $\{x(0),x(1),...\}$ with $x(k)=(x_1(k),...,x_{n}(k))\in\mathbf{\Delta}$ generated by the reinforcement learning process. Similarly we define the probability operator $\mathbb{P}[\cdot]$. In other words, the set of stationary points corresponds to the strategy profiles at which the expected change in the strategy profile is zero.
According to \cite{ChasparisShamma11_DGA,ChasparisShammaRantzer14}, a connection can be established between the set of stationary points $\mathcal{S}^{\lambda}$ and the set of Nash equilibria of the assignment game. In particular, for sufficiently small $\lambda>0$, \emph{the set of $\mathcal{S}^{\lambda}$ includes only $\lambda$-perturbations of Nash-equilibrium strategies}. This is due to the fact that the mean-field dynamics $\{g_i^{\lambda}(\cdot)\}_i$ are continuously differentiable functions with respect to $\lambda$.\footnote{For more information regarding the sensitivity of stationary points to $\lambda>0$, see \cite[Lemma~6.2]{ChasparisShammaRantzer14}.}
The following proposition is a straightforward extension of \cite[Theorem~1]{ChasparisShammaRantzer14} to the case of constant step-size.
\begin{proposition} \label{Pr:InfinitelyOftenVisits}
Let the $\mathsf{RM}$\ employ the strategy update rule (\ref{eq:StrategyUpdate}) and placement selection (\ref{eq:SelectionRule}) for each thread $i$. Updates are performed periodically with a fixed period such that $\tilde{v}_i(k)>0$ for all $i$ and $k$. Let the utility function for each thread $i$ satisfy (\ref{eq:AssignmentGameUtility}) under either objective (O1) or (O2), where $\gamma\geq{0}$ is small enough such that $u_i(\alpha(k))>{0}$ for all $k$.
Then, for some $\lambda>0$ sufficiently small, there exists $\delta=\delta(\lambda)$, with $\delta(\lambda)\downarrow{0}$ as $\lambda\downarrow{0}$, such that
\begin{equation} \label{eq:Convergence}
\mathbb{P}\left[\liminf_{k\to\infty} \mathsf{dist}(x(k),\mathcal{B}_{\delta}(\mathcal{S}^{\lambda}))=0\right]=1.
\end{equation}
\end{proposition}
\begin{proof}
The proof follows the exact same steps of the first part of \cite[Theorem~1]{ChasparisShammaRantzer14}, where the decreasing step-size sequence is being replaced by a constant $\epsilon>0$.
\end{proof}
Proposition~\ref{Pr:InfinitelyOftenVisits} states that when we select $\lambda$ sufficiently small, the nominal strategy trajectory will be approaching the set $\mathcal{B}_{\delta}(\mathcal{S}^{\lambda})$, that is, a small neighborhood of the Nash equilibria, infinitely often with probability one. We require that the update period is large enough so that each thread is using resources within each evaluation period. Of course, if a thread stops executing, then the same result holds but for the updated set of threads.
However, the above proposition does not provide any guarantees regarding the time fraction that the process spends in any Nash equilibrium. The following proposition establishes this connection.
\begin{proposition}[Weak convergence to Nash equilibria] \label{Pr:WeakConvergence}
\textit{
Under the hypotheses of Proposition~\ref{Pr:InfinitelyOftenVisits}, the fraction of time that the nominal strategy profile $x(k)$ spends in $\mathcal{B}_{\delta}(\mathcal{S}^{\lambda})$ goes to one (in probability) as $\epsilon\to{0}$ and $k\to\infty$.
}
\end{proposition}
\begin{proof}
The proof follows directly from \cite[Theorem~8.4.1]{KushnerYin03} and Proposition~\ref{Pr:InfinitelyOftenVisits}.
\end{proof}
Proposition~\ref{Pr:WeakConvergence} states that if we take a small step size $\epsilon>0$, then as the time index $k$ increases, we should expect that the nominal strategy spends the majority of the time within a small neighborhood of the Nash equilibrium strategies. According to Section~\ref{sec:EfficientAllocationsVSNashEquilibria}, we know that when the utility function for each thread is defined according to (\ref{eq:AssignmentGameUtility}), then \emph{the set of Nash equilibria includes the set of efficient assignments}, i.e., the solutions of (\ref{eq:CentralizedObjective}). Thus, due to Proposition~\ref{Pr:WeakConvergence}, it is guaranteed that the nominal strategies $x_i(k)$, $i\in\mathcal{I}$, will spend the majority of the time in a small neighborhood of locally-optimal assignments, which provides a minimum performance guarantee throughout the running time of the parallelized application.
Note that due to varying exogenous factors, the Nash-equilibrium assignments may not stay fixed for all future times. The above proposition states that the process will spend the majority of the time within the set of the Nash-equilibrium assignments for as long as this set is fixed. If, at some point in time, this set changes (due to, e.g., other applications start running on the same platform), then the above result continues to hold but for the new set of Nash equilibria. Hence, the process is adaptive to possible performance variations.
\section{Experiments} \label{sec:Experiments}
In this section, we present an experimental study of the proposed reinforcement learning scheme for dynamic pinning of parallelized applications. Experiments were conducted on \texttt{20$\times$Intel\textregistered{} Xeon\textregistered{} CPU E5-2650 v3 @ 2.30 GHz} running a 64-bit Linux kernel (3.13.0-43-generic). The machine divides the physical cores into two NUMA nodes (Node 1: CPUs 0--9, Node 2: CPUs 10--19).
\subsection{Experimental Setup}
We consider a computationally intensive routine that executes a fixed number of computations (corresponding to the combinations of $M$ out of a set of $N>M$ numbers). The routine is parallelized using \texttt{pthread.h} (the C++ POSIX thread library), where each thread executes a replica of the above set of computations. The nature of these computations does not play any role and, in fact, it may vary between threads (as we shall see in both of the forthcoming experiments).
Throughout the execution, and with a fixed period of $0.3$ sec, the $\mathsf{RM}$\ collects measurements of the total instructions per sec (using the PAPI library \cite{Mucci99}) for each one of the threads separately. Given the provided measurements, the update rule of Equation~(\ref{eq:StrategyUpdate}) with the utility function (\ref{eq:AssignmentGameUtility}) under (O2) is executed by the $\mathsf{RM}$. Placement of the threads to the available CPU's is achieved through the \texttt{sched.h} library (in particular, the \texttt{pthread\_setaffinity\_np} function). In the following, we demonstrate the response of the RL scheme in comparison to the Operating System (OS) response (i.e., when placement of the threads is not controlled by the $\mathsf{RM}$). We compare them for different values of $\gamma\geq{0}$ in order to investigate the influence of more balanced speeds to the overall running time.
In all the forthcoming experiments, the $\mathsf{RM}$\ is executed within the master thread, which is always running on the first available CPU (CPU~1). Furthermore, in all experiments, only the first of the two NUMA nodes is utilized, since our intention is to demonstrate the potential benefit of an efficient thread placement when the effect of memory placement is rather small.
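Placement itself is done from C++ via \texttt{pthread\_setaffinity\_np}; for readers prototyping in Python, the analogous Linux facility is exposed as \texttt{os.sched\_setaffinity}. A hypothetical helper (not part of our implementation), guarded so that it is a no-op on platforms without the call:

```python
import os

def pin_to_cpu(native_tid, cpu):
    """Restrict the affinity mask of a task to a single CPU core.

    `native_tid` is a kernel thread id (0 means the calling thread); from
    Python 3.8, a thread can obtain its own id via threading.get_native_id().
    No-op on platforms without sched_setaffinity (i.e., non-Linux).
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(native_tid, {cpu})
```

A thread can thus pin itself with `pin_to_cpu(0, cpu)`, mirroring the per-thread affinity control that the $\mathsf{RM}$\ performs in our C++ setup.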
\begin{figure}[h!]
\centering
\begin{minipage}{0.99\textwidth}
\includegraphics{Experiment3_SpeedComparison.pdf} \\[-20pt]
\includegraphics{Experiment3_strategies_thread1.pdf}
\includegraphics{Experiment3_strategies_thread2.pdf} \\[-20pt]
\includegraphics{Experiment3_strategies_thread3.pdf}
\includegraphics{Experiment3_strategies_thread4.pdf}
\end{minipage}
\caption{Running average execution speed when 4 threads run on 3 identical CPU's. Thread 3 requires about half the computing bandwidth compared to the rest of the threads which are identical. The strategies correspond to the RL scheme with $\gamma=0.04$. The RL schemes run with $\epsilon=0.005$ and $\lambda=0.005$.}
\label{fig:Experiment3}
\end{figure}
\subsection{Experiment 1: Non-Identical Threads under Limited Resources} \label{sec:Experiment3}
In this experiment, we consider the case of limited resources (i.e., when the number of threads is larger than the number of available CPU's). However, one of the threads requires CPU time with smaller frequency than the rest of the threads (i.e., executes its computations with smaller frequency). We should expect that, in an optimal setup, the threads that require CPU time less often are the ones sharing a CPU, while threads that require larger bandwidth are placed alone.
In particular, in this experiment, Thread 3 requires about half the computing bandwidth compared to the rest of the threads (Threads 1, 2 and 4). The resulting performance is depicted in Figure~\ref{fig:Experiment3}.
We observe indeed that Threads 1, 2 \& 4 (which require larger computing bandwidths) are allocated to different CPU's (CPU 1, 3 and 2, respectively). On the other hand, Thread 3 is switching between CPU 1 and CPU 3, since both provide almost equal processing bandwidth to Thread 3. In other words, the less demanding application is sharing the CPU with one of the more demanding threads. Note that this assignment corresponds to a Nash equilibrium (as Proposition~\ref{Pr:WeakConvergence} states), since there is no thread that can benefit by changing its strategy. It is also straightforward to check that this assignment is also efficient.
Note, finally, that the difference from the processing speed of the OS scheme is small, although a more balanced processing speed ($\gamma=0.04$) slightly improved the overall completion time.
\begin{figure}[t!]
\centering
\begin{minipage}{0.99\textwidth}
\includegraphics{Experiment5_SpeedComparison.pdf}
\end{minipage}
\caption{Running average execution speed when 7 non-identical threads run on 3 CPU cores. Threads~1 \& 2 require about half the computing bandwidth compared to the rest of the threads (which are identical). Thread 3 is joining after 120 sec. The RL schemes run with $\epsilon=0.003$ and $\lambda=0.005$.}
\label{fig:Experiment5}
\end{figure}
\subsection{Experiment 2: Non-Identical Threads in a Dynamic Environment} \label{sec:Experiment4}
In this experiment, we demonstrate the robustness of the algorithm in a dynamic environment. We consider 7 threads. The first two (Threads 1 \& 2) require about half the computing bandwidth compared to the rest. The rest of the threads (Threads 3, 4, 5, 6 and 7) are identical. However, Thread 3 starts running later in time (in particular, after 120 sec).
Figure~\ref{fig:Experiment5} illustrates the evolution of the RL scheduling scheme under different values of $\gamma$. Again in this case, a faster response of the overall application can be achieved when higher values of $\gamma$ are selected. The difference should be attributed to the fact that the OS fails to distinguish between threads with different bandwidth requirements. Table~\ref{Tb:ComparisonOSwithRL} presents a statistical analysis of these schemes where the speed difference between the RL ($\gamma=0.04$) and the OS reaches approximately 5\% on average.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c}
Run \# & OS & RL ($\gamma=0$) & RL ($\gamma=0.02$) & RL ($\gamma=0.04$) \\\hline\hline
1 & 513 sec & 505 sec & 492 sec & 489 sec \\\hline
2 & 530 sec & 506 sec & 489 sec & 494 sec \\\hline
3 & 536 sec & 517 sec & 518 sec & 515 sec \\\hline
4 & 533 sec & 507 sec & 515 sec & 509 sec \\\hline
5 & 523 sec & 502 sec & 491 sec & 496 sec \\\hline
6 & 513 sec & 523 sec & 501 sec & 492 sec \\\hline
7 & 520 sec & 514 sec & 497 sec & 492 sec \\\hline
8 & 530 sec & 518 sec & 499 sec & 497 sec \\\hline
9 & 520 sec & 532 sec & 500 sec & 497 sec \\\hline
10 & 528 sec & 517 sec & 493 sec & 492 sec \\\hline\hline
\textbf{aver.} & \textbf{524.6 sec} & \textbf{514.1 sec} & \textbf{499.5 sec} & \textbf{497.3 sec} \\\hline\hline
\textbf{s.d.} & \textbf{8.06 sec} & \textbf{9.29 sec} & \textbf{9.85 sec} & \textbf{8.27 sec} \\\hline
\end{tabular}
\caption{Comparison between the OS performance and RL schemes when $\epsilon=0.003$ and $\lambda=0.005$ for different values of $\gamma$ under Experiment 2.}
\label{Tb:ComparisonOSwithRL}
\end{table}
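The averages, sample standard deviations and the quoted $\approx 5\%$ speed difference can be reproduced directly from the entries of Table~\ref{Tb:ComparisonOSwithRL}; the following plain-Python snippet performs this check:

```python
# Completion times in seconds, copied from Table "ComparisonOSwithRL".
os_t  = [513, 530, 536, 533, 523, 513, 520, 530, 520, 528]
rl_0  = [505, 506, 517, 507, 502, 523, 514, 518, 532, 517]
rl_02 = [492, 489, 518, 515, 491, 501, 497, 499, 500, 493]
rl_04 = [489, 494, 515, 509, 496, 492, 492, 497, 497, 492]

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # Sample (n-1) standard deviation, matching the table's "s.d." row.
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Relative speed difference between the OS and RL (gamma = 0.04).
speed_gain = (mean(os_t) - mean(rl_04)) / mean(os_t)  # about 5%
```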
\section{Conclusions} \label{sec:Conclusions}
We proposed a measurement-based learning scheme for addressing the problem of efficient dynamic pinning of parallelized applications into processing units. According to this scheme, a centralized objective is decomposed into thread-based objectives, where each thread is assigned its own utility function. A $\mathsf{RM}$\ updates a strategy for each one of the threads corresponding to its beliefs over the most beneficial CPU placement for this thread. Updates are based on a reinforcement learning rule, where prior actions are reinforced proportionally to the resulting utility. It was shown that, when we appropriately design the threads' utilities, then convergence to the set of locally optimal assignments is achieved. Besides its reduced computational complexity, the proposed scheme is adaptive and robust to possible changes in the environment.
\bibliographystyle{splncs03}
\label{sec:intro}
Technologies that enable detection of quanta of electromagnetic radiation, i.e. single photons, are widely used in particle physics, and find application in the fields of medical imaging, LIDAR, quantum science, biology, and others \cite{Review1, Review2}.
Despite their bulkiness, high operating voltage and other drawbacks, vacuum-based detectors still outperform silicon-based detectors when it comes to single photon sensitivity, as the dark count rate at room temperature is several orders of magnitude lower.
The difference is further enhanced if the devices need to operate in presence of radiation. The high sensitivity to displacement damage of silicon photomultipliers (SiPM) makes their use in accelerator environment challenging, and even more so if sensitivity to single photons is required \cite{SiPMreview,SiPMnostro}.
This is the case of next generation ring imaging Cherenkov (RICH) detectors, used for particle identification in high energy physics experiments, and in particular in the LHCb experiment at the large hadron collider (LHC) operating at CERN \cite{LHCbRICH}.
The increasing luminosity of the accelerator and experiments calls for detectors able to operate at high photon counting rate.
The use of precise timing information has been proposed to improve particle identification performance \cite{LHCbUpgradeII, LHCbUpgradeIIp}.
In vacuum-based photodetectors, a photon entering the device causes the emission of a photoelectron from the photocathode, with typical quantum efficiency up to $\sim$30\%.
The photoelectron enters an electron multiplier stage with gain of $\sim$10$^6$, and a charge signal of $\sim$100~fC is collected at the anode.
Electron multiplication in a ``standard'' PMT is provided by a series of discrete electrodes, or dynodes.
Each stage provides a gain of 3-5 depending on the accelerating voltage and dynode material.
In this work we used the multi-anode PMT (MaPMT) Hamamatsu R13742-103-M64, with 12 metal dynode stages\footnote{Serial number: FA0047. The R13742 is a device equivalent to the R11265, produced by Hamamatsu for the LHCb RICH detectors.}.
The charge is collected on $2.88 \times 2.88$~mm$^2$ square anodes. The device has $8\times 8$ pixels in a total area of $26.2 \times 26.2$~mm$^2$, and a total active area ratio of 78\%.
The maximum bias voltage is 1100~V.
We used the recommended voltage divider ratio of 2.3:1.2:1:...:1:0.5.
The average gain of the tested sample is $1.5 \times 10^6$ at 900~V, $4.5 \times 10^6$ at 1000~V.
The single photon counting performance of this PMT model at low photon rate was characterized in a previous work, time resolution excluded \cite{R11265}.
An alternative structure for electron multiplication in a vacuum tube is the microchannel plate (MCP).
In this case, multiplication occurs in $\sim$10~$\mu$m diameter channels etched in a mm-thick slab of a high resistivity material (typically lead glass).
Two MCP slabs are usually stacked in a Chevron (v-shape) configuration to reach a gain of $\sim$10$^6$.
The shorter path of the electron cloud in the microchannels compared to the case of discrete dynodes translates to a smaller transit time spread, hence a better time resolution compared to conventional PMTs \cite{ReviewThierry}.
However, due to the high resistivity of the base material, each microchannel has a $\sim$ms recharge time after each signal, which leads to saturation at high rate \cite{MCPsat1,MCPsat2}.
In this work we measured the Hamamatsu R10754-07-M16 MCP-PMT, a 2-stage MCP with 10~$\mu$m diameter channels\footnote{Serial number: KT0862. The R10754 MCP-PMT was developed for the TOP detector of the Belle experiment \cite{BelleTDR}.}.
The anodes are arranged in $4 \times 4$ square pixels of $5.28 \times 5.28$~mm$^2$, a total area of $27.6 \times 27.6$~mm$^2$ and an active area ratio of 58\%.
The maximum bias voltage of the device is 2300~V. We used the recommended voltage divider ratio (0.5:2.5:2.5:2.5:1). The gain is $1 \times 10^6$ at 2150~V and $3.5 \times 10^6$ at 2250~V.
We compared its performance with that of the R13742, focusing on timing resolution (transit time spread) and its dependence on photon rate.
\section{Test setup and analysis method}
\begin{figure}[t]
\centering
\includegraphics[width=.6\textwidth]{block_scheme.pdf}
\hspace{1cm}
\includegraphics[width=.3\textwidth]{PMT_MCP_photo.jpg}
\caption{\label{fig:testsetup} Left: block schematic of the test setup. The black arrows represent electrical signals, the blue arrows represent light. Right: photograph of the Hamamatsu R10754 MCP-PMT (left) and R13742 MaPMT (right) resting on the masks used to illuminate only the pixel centers.}
\end{figure}
Figure \ref{fig:testsetup}, left side, shows a block schematic of the setup.
The photodetector was illuminated with laser pulses of 70~ps FWHM duration and 405~nm wavelength from a Hamamatsu PLP-10 (C10196 controller and M10306-29 head).
The amplitude of the laser pulses was kept at a medium setting on the PLP-10 controller (knob set between 8 and 10). This seemed optimal, since we observed a larger pulse duration at lower settings, and a smaller secondary pulse delayed by about 200~ps at higher settings.
We placed a filter in front of the laser head to reach the single photon regime, with attenuation ranging from 10$^3$ to 10$^6$ (Thorlabs AR-coated absorptive neutral density filters NE30A-A to NE60A-A).
The filters will be denoted by F3 to F6 in the following, where the number denotes the optical density, or absorbance, of the filter.
3D-printed masks with 1~mm diameter holes were used to illuminate only the pixel centers, when needed.
Figure \ref{fig:testsetup}, right side, shows a photograph of both photodetectors.
The anode signals were read out with a LMH6702 current feedback operational amplifier, operating as a fast integrator with $C_F\simeq3$~pF (internal) and $R_F=1$~k$\Omega$ \cite{CFopamp}.
The signals at the output of the amplifier had a rise time $t_r= $~1.5~ns and a fall time $t_f\simeq$~10~ns.
They were fed to the oscilloscope Rohde~\&~Schwarz RTO1044 (4~GHz, 20~GS/s), digitally low-pass filtered at 300~MHz and acquired.
A signal from the MaPMT and one from the MCP-PMT, selected to have the same amplitude, are shown in figure \ref{fig:onesignal}.
The measured gain, halved by the 50~$\Omega$ termination, was 27.5~mV/Me$^-$, calibrated by injecting a known charge through a test capacitor.
The baseline noise as seen at the oscilloscope was 0.25~mV RMS, or $\sigma_Q =9$~ke$^-$ RMS, the main contributor being the current noise at the inverting input of the operational amplifier (18.5~pA/$\sqrt{Hz}$).
The effect of electronic noise on the measurement of the threshold crossing time can be estimated as $\sigma_t = \sigma_V / f'(t)$, where $\sigma_V$ is the voltage noise and $f'(t)$ is the time derivative of the signal. It can equivalently be expressed as $\sigma_t = t_r \sigma_Q / Q$, where $t_r$ is the rise time, $\sigma_Q$ the equivalent noise charge and $Q$ is the charge carried by the signal.
For signals above 10$^6$~e$^-$ (single photons at nominal PMT gain), this gives less than 15~ps RMS, negligible compared to the timing resolution of the PMT and laser.
Pixels that were not read out were connected to ground.
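The sub-15~ps electronic contribution follows directly from the noise-slope formula above; as a quick numerical check, using only the figures quoted in the text:

```python
def time_jitter_rms(rise_time, noise_charge, signal_charge):
    """Noise-slope approximation: sigma_t = t_r * sigma_Q / Q."""
    return rise_time * noise_charge / signal_charge

# t_r = 1.5 ns, sigma_Q = 9 ke- RMS, Q = 1e6 e- (single photon at
# nominal gain), as stated in the text.
sigma_t = time_jitter_rms(1.5e-9, 9e3, 1e6)  # 13.5 ps RMS
```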
\begin{figure}[t]
\centering
\includegraphics[trim=100px 260px 100px 250px,clip,width=.5\textwidth]{MaPMT_MCP_waveforms.pdf}
\caption{\label{fig:onesignal} Typical signals from MaPMT and MCP-PMT, selected to have the same amplitude. They are almost identical, since they are shaped by the same amplifier. Time is detected at 15\% of the pulse amplitude.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[trim=100px 240px 100px 250px,clip,width=.55\textwidth]{fitted-histo.pdf}
\caption{\label{fig:histogram} Typical distribution of threshold crossing times, fitted with a Gaussian curve in two steps. The blue markers indicate data excluded from the second fit. The difference between the two fitting curves is small, indicating that the right tail gives in any case a negligible contribution.}
\end{figure}
The photon rate hitting the detector was varied by changing the laser pulse repetition rate $\nu_L$ from a few Hz up to 100~MHz.
The filters in front of the photodetector were chosen so that the actual rate of non-empty events $\nu_P$ was always below $\nu_L/10$.
Assuming the number of photons in each event to be Poisson-distributed with average $\mu$, the probability of observing $n$ photons is $P(n) = {\mu^n e^{-\mu}}/{n!}$.
The probability of observing a non-empty event is $ 1-P(0) = 1- e^{- \mu} \simeq \mu$, where the approximation is valid if $\mu$ is small. This is also equal to $\nu_P / \nu_L$.
The fraction of non-empty events that contain two or more photons is given by
\begin{equation}
\frac{1-P(0)-P(1)}{1-P(0)} = \frac{1- e^{- \mu}-\mu e^{- \mu}}{1- e^{- \mu}}
\simeq \frac{\mu^2/2}{\mu} = \frac{\mu}{2}.
\label{eq:Poisson}
\end{equation}
By choosing the filter such that $\nu_P < \nu_L/10$, i.e. $\mu < 0.1$, according to equation~(\ref{eq:Poisson}) the non-empty signals were at $>95\%$ single photons, which ensures a sufficient purity of single photon events.
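The purity bound amounts to a one-line evaluation of equation~(\ref{eq:Poisson}); for instance, in Python:

```python
import math

def multi_photon_fraction(mu):
    """Fraction of non-empty pulses carrying two or more photons:
    (1 - P(0) - P(1)) / (1 - P(0)) for Poisson-distributed counts."""
    p0 = math.exp(-mu)
    return (1.0 - p0 - mu * p0) / (1.0 - p0)
```

For $\mu = 0.1$ the exact fraction is just below the small-$\mu$ approximation $\mu/2 = 5\%$.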
To avoid acquiring lots of empty signals, the oscilloscope was set to trigger on the sequence (within a 100~ns window) of a signal from the photodetector, detected with a threshold just above noise, and the delayed by 20~ns ``sync~out'' signal from the laser controller. The latter provided the time reference for each signal.
For low and medium rate measurements, all signals were acquired by the oscilloscope.
For the highest rate measurements, where $\nu_P$ exceeded the maximum trigger rate capability of the oscilloscope (a few MHz), the oscilloscope could only acquire a fraction of the total.
The actual photon rate in these measurements was estimated by replacing the filter with a $10 \times$ or $100 \times$ higher attenuation, measuring the rate in the same conditions except for the filter, and then scaling it back by multiplying it by 10 or 100.
The validity of this method was verified by comparing the measured photon rates with F3 and $\nu_L=1$~kHz, F4 and $\nu_L=10$~kHz, F5 and $\nu_L=100$~kHz, observing similar values of $\nu_P$.
An offline algorithm was used to analyze the acquired waveforms.
A threshold at 15\% of its amplitude was applied to each signal, as shown in figure \ref{fig:onesignal}.
This is equivalent to constant fraction discrimination, and avoids spurious contributions to time resolution due to amplitude walk.
The threshold crossing times of a given set of waveforms were collected in a histogram binned at 20~ps.
An example is shown in figure \ref{fig:histogram}.
There is an asymmetric tail at the right side of the histogram (delayed signals).
This is likely due to recoil electrons inside the MCP, although some contribution from the laser cannot be excluded.
The Gaussian fit was performed in two steps: first the entire distribution was fitted, to determine mean and standard deviation $\sigma$ (1st pass). The points that lie more than $\pm 2 \sigma$ away from the mean of the distribution were then excluded, and the remaining data was fitted again (2nd pass).
The difference between the two fitting Gaussian curves is anyway often negligible, as can be seen in figure \ref{fig:histogram}, where the two curves are nearly identical.
We checked the stability of the algorithm by changing the threshold and the bin width around these values, and did not observe significant variations in the results.
The error bars associated with each measurement represent the statistical uncertainty on the fit parameters ($1 \sigma$ confidence level).
\section{Results}
\begin{figure}[t]
\centering
\includegraphics[trim=100px 240px 100px 250px,clip,width=.7\textwidth]{fwhm_mampt_mcp_rate.pdf}
\caption{\label{fig:timingVSrate} Measured timing resolution for MaPMT and MCP-PMT as a function of rate.}
\end{figure}
Figure \ref{fig:timingVSrate} shows the measured timing resolution of the MaPMT and the MCP-PMT as a function of rate.
A mask was placed in front of the devices, illuminating just the pixel centers with holes of 1~mm diameter.
The measured photon rate $\nu_P$ was scaled per mm$^2$ using the hole area (0.785~mm$^2$).
At the operating voltages noted in the plot legend, gain was above 10$^6$ for both devices. In this gain range the noise of the readout chain was negligible, and the only contributions to time resolution are from the photodetector and the laser.
The time resolution of the MaPMT in these conditions is 250~ps~FWHM, independent of rate up to 10~MHz/mm$^2$, the highest tested.
Measurements of the MCP-PMT at low rate give a time resolution of 90~ps~FWHM, about equally contributed by the laser pulse duration ($\leq 70$~ps~FWHM) and the MCP-PMT ($\sim$70~ps~FWHM) summed in quadrature.
At rates above 100~kHz/mm$^2$ the resolution of the MCP-PMT begins to degrade. The curve rapidly points upward, overtaking that of the MaPMT at about 1~MHz/mm$^2$.
This is due to saturation of the MCP. It happens when the average interval between photon signals hitting the same microchannel becomes comparable to or smaller than the time it takes to recharge the channel walls after a signal.
\begin{figure}[t]
\centering
\includegraphics[trim=100px 270px 100px 250px,clip,width=.6\textwidth]{MCP-mask-rate-amplitude_A.pdf}
\caption{\label{fig:MCPspectra} Single photon spectra of a pixel of the MCP-PMT at different photon rates.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[trim=100px 240px 100px 250px,clip,width=.7\textwidth]{mcp_average_amplitude_ratio.pdf}
\caption{\label{fig:MCPsaturationmask} Average amplitude of MCP signals as a function of photon rate, when different pixel areas are illuminated. The vertical scale is relative to the point at the lowest rate for each curve.}
\end{figure}
Figure \ref{fig:MCPspectra} shows the single photon spectra of the MCP at different rates.
Up to 30~kHz/mm$^2$ the single photon peak is still separated from the noise pedestal, and the spectrum is indistinguishable from those taken at a much lower rate. At higher rate saturation occurs. The spectra at 220~kHz/mm$^2$ and above show that the average gain is strongly reduced. Since this is due to the increasing number of microchannels that are not fully recharged when they are hit by the next signals, the reduction of average gain goes together with loss of efficiency.
Figure \ref{fig:MCPsaturationmask} shows the average amplitude of the anode signals as a function of absolute signal rate, with increasing illuminated area on the pixel.
The amplitude is normalised to the value at the lowest rate.
The first effects of saturation are visible above $\sim$10~kHz with the 1~mm diameter hole (0.785~mm$^2$), $\sim$50~kHz with the 2.4~mm diameter hole (4.52~mm$^2$), and $\sim$400~kHz when the entire pixel area ($5.28 \times 5.28$~mm$^2 = 27.9$~mm$^2$) is uncovered.
The ratio of saturation rate and illuminated area (proportional to the number of microchannels) is approximately constant, confirming that it is a local phenomenon.
Going back to figure \ref{fig:timingVSrate}, it is anyway interesting to note that the time resolution is stable below 100~ps~FWHM up to the onset of saturation, and even slightly beyond.
The results on time resolution at high signal rate are compatible with those in \cite{MCPBelle} for devices of the same family, independently of MCP resistance. Concerning gain reduction, our results are compatible with those in \cite{MCPBelle} for high resistance MCPs. In that work, MCPs of smaller resistance were shown to withstand illumination rates beyond $\sim$2~MHz/anode, or $\sim$100~kHz/mm$^2$, without gain loss.
Our results on rate stability are also compatible with those presented in \cite{MCPPanda} for a 2-inch device based on the same technology (Hamamatsu R13266).
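The approximate constancy of the saturation onset per unit area can be verified from the onset rates and hole areas quoted above (a plain numerical check; the onset values are approximate readings of Figure~\ref{fig:MCPsaturationmask}):

```python
# Approximate saturation onset rates (Hz) and illuminated areas (mm^2)
# for the three masks quoted in the text.
onsets = [(10e3, 0.785), (50e3, 4.52), (400e3, 27.9)]

per_area = [rate / area for rate, area in onsets]  # Hz / mm^2
spread = max(per_area) / min(per_area)             # close to 1
```

The three per-area onsets agree within $\sim$30\%, consistent with saturation being a local, per-microchannel phenomenon.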
\begin{figure}[t]
\centering
\includegraphics[trim=100px 240px 100px 250px,clip,width=.7\textwidth]{Rate.pdf}
\caption{\label{fig:withwithoutmask} Time resolution of the MaPMT and MCP-PMT with and without the mask used to illuminate only the pixel center. Removing the mask has no effects on the MCP-PMT, while the resolution of the MaPMT degrades to 400~ps FWHM.}
\end{figure}
Besides scaling the absolute pixel rate, removing the mask has no effect on the time resolution of the MCP-PMT. This is not the case for the MaPMT.
Figure \ref{fig:withwithoutmask} compares measurements taken with and without the mask in front of the photodetectors.
When the mask is removed, the resolution of the MaPMT degrades from 250~ps~FWHM to 400~ps~FWHM, independently of rate.
This was investigated by offsetting the mask hole to illuminate only the pixel boundary instead of the center. Photons hitting the pixel boundary region were observed to have both a longer transit time (arriving late by 300~ps on average) and a larger transit time spread ($\sim$440~ps FWHM) compared with photons hitting the pixel center.
When the entire pixel area is uncovered, this leads to an overall resolution of 400~ps FWHM as measured and shown in figure \ref{fig:withwithoutmask}.
\section{Conclusions and outlook}
We presented a comparison of the timing performance of a multi-anode PMT (Hamamatsu R13742) and a MCP-PMT (Hamamatsu R10754).
The MaPMT offers stable operation up to 10~MHz/mm$^2$. Although illuminating only the pixel center gives a timing resolution of 250~ps FWHM, the resolution when the entire active surface is illuminated is 400~ps FWHM, far from the timing performance of the MCP-PMT.
The MCP-PMT offers superior timing resolution compared to the MaPMT, but the maximum sustainable rate before saturation at nominal gain is just below 100~kHz/mm$^2$.
Operation of MCP-based photodetectors as single photon counters at MHz/mm$^2$ rates and above will likely require using microchannels of smaller diameter and lower resistivity, to reduce the hit probability and recharge time of each microchannel. These come with stimulating, but currently unsurmounted, technological challenges for MCP manufacturing, cooling and operation.
In the literature several examples of multiplication of
distributions exist, more or less interesting and more or less
useful for concrete applications. In \cite{bag1} and \cite{bag2} we
have proposed our own definition of multiplication in one spatial
dimension, $d=1$, and we have proved that two or more delta
functions can be multiplied and produce, as a result, a linear
functional over ${\cal D}(\rm\bf{R})$. However, our definition does not
admit a {\em natural} extension to $d>1$. This is a strong
limitation, both mathematically and for concrete applications: for
instance, it is known to the civil engineers community that the
problem of fracture mechanics may be analyzed considering beams
showing discontinuities along the beam span, \cite{cadde,cadde2}.
The {\em classical} approach for finding solutions of beams with
discontinuities consists in looking for continuous solutions between
discontinuities and imposing continuity conditions at the fractures.
In \cite{cadde,cadde2} a different strategy has been successfully
proposed, modeling the flexural stiffness and the slope
discontinuities as delta functions. However, in this approach, the
problem of multiplying two delta functions naturally arises, and
this was solved using the definition given in \cite{bag1}. This
application proved to be useful not only to get a solution in a
closed form but also to get numerical results in a reasonably simple
way, when compared with the older existing approaches. These very
promising results, however, have been discussed only in $d=1$ since
the mathematical framework, which is behind the concrete
application, only existed in one dimension, \cite{bag1,bag2}. It is
not surprising, therefore, that extensions of our procedure to
higher spatial dimensions has been strongly urged in order to
consider the same problem for more general physical systems, i.e.
for physical systems in higher dimensions like two or
three-dimensional beams, in which the fractures can be schematized
as two or three-dimensional delta functions.
In a completely different field of research the same necessity
appeared: for instance, in the analysis of stochastic processes the
need for what is called a {\em renormalization procedure for the
powers of white noise}, where the white noise is essentially a delta
function, has produced many results, see \cite{acc}. Also,
applications to electric circuits exist which are surely less
mathematically oriented, \cite{bor}, and again are based on the
possibility of giving a meaning to $\delta^2(x)$. With these
motivations we have considered the problem of defining the
multiplication of two distributions in more than one spatial
dimension. However, our original definition cannot be easily
extended to $d>1$, because the regularization proposed in
\cite{breme} and used in \cite{bag1,bag2} does not work without
major changes in this case. For this reason we propose here a
different definition of multiplication, which works perfectly in any
spatial dimension. Moreover, this new approach returns results which
are very close to those in \cite{bag1,bag2}, for $d=1$.
The paper is organized as follows:
in the next Section we briefly recall the main definitions and
results of \cite{bag1} and \cite{bag2};
in Section 3 we propose a different definition of multiplication
in one dimension and we show that results which are not
significantly different from those of Section 2 are recovered;
in Section 4 we extend the definition to an arbitrary spatial
dimension and prove that with this new definition two delta
functions can be multiplied.
\section{A brief resume of our previous results}
In \cite{bag1,bag2} we have introduced a (family of) multiplications
of distributions making use of two different regularization
discussed in the literature and adapted to our purposes. Here, for
completeness' sake, we briefly recall our strategy without going
into too many details.
The first ingredient was first introduced in \cite{breme}, where it
was proven that, given a distribution $T$ with compact support, the
function \begin{equation} \label{analitic}
{\bf T}^0(z) \equiv \frac{1}{2\pi i}\, T\cdot(x-z)^{-1}
\end{equation} exists and is holomorphic in $z$ in the whole $z$-plane minus
the support of $T$. Moreover, if $T(x)$ is a continuous function
with compact support, then $T_{red}(x,\epsilon) \equiv {\bf
T}^0(x+i\epsilon)-{\bf T}^0(x-i\epsilon) $ converges uniformly to
$T(x)$ on the whole real axis for $\epsilon \rightarrow 0^+$.
Finally, if $T$ is a distribution in ${\cal D'}(\rm\bf{R})$ with
compact support then $T_{red}(x,\epsilon)$ converges to $T$ in the
following sense
$$
T (\Psi) = \lim_{\epsilon \rightarrow 0} \int_{-\infty}^{\infty}
T_{red}(x,\epsilon) \, \Psi (x) \, dx
$$
for every test function $\Psi \in {\cal D}(\rm\bf{R})$.
As discussed in \cite{breme}, this definition can be extended to a
larger class of one-dimensional distributions with support not
necessarily compact, ${\cal V}'(\rm\bf{R})$, while it is much harder
to extend the same definition to more than one spatial dimension.
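As a concrete illustration (a numerical sketch, easily checked by hand): for $T=\delta$ one has $T\cdot(x-z)^{-1}=-1/z$, hence ${\bf T}^0(z)=-\frac{1}{2\pi i z}$ and $T_{red}(x,\epsilon)=\frac{\epsilon}{\pi(x^2+\epsilon^2)}$, the Poisson kernel, which indeed acts as a nascent delta. The Python snippet below checks this convergence numerically:

```python
import math

def delta_red(x, eps):
    """T_red(x, eps) for T = delta: the Poisson kernel
    eps / (pi * (x^2 + eps^2)), obtained from T^0(z) = -1/(2 pi i z)."""
    return eps / (math.pi * (x * x + eps * eps))

def pair_with(psi, eps, a=-10.0, b=10.0, n=40001):
    """Midpoint-rule approximation of the pairing of T_red(., eps)
    against a test function psi."""
    h = (b - a) / n
    return h * sum(delta_red(a + (k + 0.5) * h, eps)
                   * psi(a + (k + 0.5) * h) for k in range(n))
```

For a smooth, rapidly decaying $\psi$, the pairing tends to $\psi(0)$ as $\epsilon \to 0^+$.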
The second ingredient is the so-called method of sequential
completion, which is discussed for instance in \cite{col}, and it
follows essentially from very well known results on the regularity
of the convolution of distributions and test functions. Let $\Phi
\in {\cal D}(\rm\bf{R})$ be a given function with supp $\Phi \subseteq
[-1,1]$ and $\int_{\rm\bf{R}} \Phi (x) \, dx =1$. We call
$\delta-$sequence the sequence $\delta_n,\, n\in {\rm\bf{N}},$ defined
by $\delta_n(x) \equiv n\, \Phi(nx)$. Then, $\forall \, T \in {\cal
D'}(\rm\bf{R})$, the convolution $T_n \equiv T*\delta_n$ is a
$C^{\infty}-$function for any fixed $n\in \rm\bf{N}$. The sequence
$\{T_n\}$ converges to $T$ in the topology of ${\cal D'}$, when $n
\rightarrow \infty$. Moreover, if $T(x)$ is a continuous function
with compact support, then $T_n(x)$ converges uniformly to $T(x)$.
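The uniform convergence of $T_n = T * \delta_n$ for continuous $T$ can also be observed numerically. The sketch below (our own illustration, taking for $\Phi$ the standard bump function, numerically normalized) convolves the hat function $T(x)=\max(0,1-|x|)$ with $\delta_n$ and checks that the sup-norm error decreases with $n$:

```python
import math

def bump(x):
    """Unnormalized bump: exp(1/(x^2 - 1)) on (-1, 1), zero outside."""
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# Normalize so that the integral of Phi over [-1, 1] equals 1.
_H = 2.0 / 20000
_NORM = _H * sum(bump(-1.0 + (k + 0.5) * _H) for k in range(20000))

def delta_n(x, n):
    """delta_n(x) = n * Phi(n x), supported on [-1/n, 1/n]."""
    return n * bump(n * x) / _NORM

def convolve(f, n, x, m=2001):
    """(f * delta_n)(x), midpoint rule over supp(delta_n)."""
    h = 2.0 / (n * m)
    return h * sum(f(x - (-1.0 / n + (k + 0.5) * h))
                   * delta_n(-1.0 / n + (k + 0.5) * h, n) for k in range(m))

def sup_error(f, n, grid):
    return max(abs(convolve(f, n, x) - f(x)) for x in grid)
```

The largest error occurs at the kink of the hat function and shrinks like $1/n$, as expected.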
We are now ready to recall our original definition: for any couple
of distributions $T,S \, \in {\cal D'}(\rm\bf{R}), \, \forall \,
\alpha, \beta
>0$ and $\forall \, \Psi \, \in {\cal D}(\rm\bf{R})$ we start defining the following
quantity: \begin{equation} (S\otimes T)_n^{(\alpha,\beta)}(\Psi ) \equiv
\frac{1}{2} \int_{-\infty}^{\infty} [S_n^{(\beta)}(x)\,
T_{red}(x,\frac{1}{n^\alpha}) + T_n^{(\beta)}(x)\,
S_{red}(x,\frac{1}{n^\alpha})]\, \Psi (x) \, dx \end{equation} where \begin{equation}
S_n^{(\beta)}(x) \equiv (S*\delta_n^{(\beta)})(x), \label{conv}
\end{equation} with $\delta_n^{(\beta)}(x) \equiv n^{\beta} \Phi
(n^{\beta}x)$, which is surely well defined for any choice of
$\alpha, \beta,\, T,\, S$ and $\Psi$. Moreover, if the limit of
the sequence $\left\{(S\otimes T)_n^{(\alpha,\beta)}(\Psi
)\right\}$ exists for all $\Psi(x)\in{\cal D}(\rm\bf{R})$, we define
$(S\otimes T)_{(\alpha,\beta)}(\Psi )$ as: \begin{equation} \label{def}
(S\otimes T)_{(\alpha,\beta)} (\Psi )\equiv \lim_{n \rightarrow
\infty} (S\otimes T)_n^{(\alpha,\beta)}(\Psi ) \end{equation}
Of course, as already remarked in \cite{bag1}, the definition
(\ref{def}) really defines many multiplications of distributions.
In order to obtain {\underline {one definite}} product we have to
fix the positive values of $\alpha$ and $\beta$ and the particular
function $\Phi$ which is used to construct the $\delta$-sequence.
Moreover, if $T(x)$ and $S(x)$ are two continuous functions with
compact supports, and if $\alpha$ and $\beta$ are any pair of
positive real numbers, then: (i) $T_n^{(\beta)}(x)\,
S_{red}(x,\frac{1}{n^\alpha}) \mbox{ converges uniformly to }
S(x)\, T(x)$; (ii) $\forall \, \Psi(x) \in {\cal D}(\rm\bf{R}) \Rightarrow
(T\otimes S)_{(\alpha,\beta)}(\Psi) = \int_{-\infty}^{\infty}
T(x)\, S(x) \, \Psi (x)\, dx$.
It is furthermore very easy to see that the product $(S\otimes T)_
{(\alpha,\beta)}$ is a linear functional on ${\cal D}(\rm\bf{R})$
due to the linearity of the integral and to the properties of the
limit. The continuity of such a functional is, on the contrary,
not obvious at all, but, as formulas (\ref{resf1})-(\ref{resf6})
show, is a free benefit of our procedure.
\vspace{2mm} We now recall few results obtained in \cite{bag1,bag2}.
If we assume $\Phi$ to be of the form \begin{equation}
\Phi(x) = \left\{
\begin{array}{ll}
\frac{x^m}{N_m} \cdot \exp\{\frac{1}{x^2-1}\}, & |x| <1 \\
0, & |x| \geq 1.
\end{array}
\right.
\label{fi} \end{equation} where $m$ is an even natural number and $N_m$ is a
normalization constant which gives $\int_{-1}^{1} \Phi(x)\, dx=1$,
and we put $ A_j \equiv \int_{-\infty}^{\infty}
\frac{\Phi(x)}{x^j}\, dx, $ whenever it exists, we have proved that:
\noindent if $m>1$ then \begin{equation}
(\delta \otimes \delta)_{(\alpha,\beta)} = \left\{
\begin{array}{ll}
\frac{1}{\pi}A_2 \delta, & \alpha=2\beta \\
0, & \alpha>2\beta;
\end{array}
\right.
\label{resf1} \end{equation} if $m>2$ then \begin{equation} (\delta \otimes
\delta')_{(\alpha,\beta)} = 0 \hspace{2cm} \forall \alpha\geq
3\beta;
\label{resf2}
\end{equation} if $m>3$ then \begin{equation}
(\delta' \otimes \delta')_{(\alpha,\beta)} = \left\{
\begin{array}{ll}
\frac{-6}{\pi}A_4 \delta, & \alpha=4\beta \\
0, & \alpha>4\beta.
\end{array}
\right.
\label{resf3} \end{equation} Also, if $m>3$ then \begin{equation}
(\delta \otimes \delta'')_{(\alpha,\beta)} = \left\{
\begin{array}{ll}
\frac{6}{\pi}A_4 \delta, & \alpha=4\beta \\
0, & \alpha>4\beta.
\end{array}
\right.
\label{resf4} \end{equation} If $m>4$ then \begin{equation} (\delta' \otimes
\delta'')_{(\alpha,\beta)} = 0 \hspace{2cm} \forall \alpha\geq
5\beta.
\label{resf5}
\end{equation} Finally, if $m>5$ then \begin{equation}
(\delta'' \otimes \delta'')_{(\alpha,\beta)} = \left\{
\begin{array}{ll}
\frac{120}{\pi}A_6 \delta, & \alpha=6\beta \\
0, & \alpha>6\beta.
\end{array}
\right.
\label{resf6} \end{equation}
It is worth stressing that formula (\ref{resf1}), for
$\alpha>2\beta$, coincides with the result given by the neutrix
product discussed by Zhi and Fisher, see \cite{zhi}. Also, because
of our technique which strongly relies on the Lebesgue dominated
convergence theorem, LDCT, the above formulas only give sufficient
conditions for the product between different distributions to
exist. In other words: formula (\ref{resf1}) does not necessarily
imply that $(\delta \otimes \delta)_{(\alpha,\beta)}$ does not
exist for $\alpha<2\beta$. In this case, however, different
techniques should be used to check the existence or the
non-existence of $(\delta \otimes \delta)_{(\alpha,\beta)}$.
More remarks on this approach can be
found in \cite{bag1} where, in particular the above results are
extended to the product between two distributions like
$\delta^{(l)}$ and $\delta^{(k)}$ for generic $l,k=0,1,2,\ldots$. In
\cite{bag2} we have further discussed the extension of the
definition of our multiplication to more distributions and to the
case of non commuting distributions, which is relevant in quantum
field theory.
\section{A different definition in $d=1$}
The idea behind the definition we introduce in this section is very
simple: since the regularization $T\rightarrow T_{red}$ cannot be
easily generalized to higher spatial dimensions, $d>1$, we use twice
the convolution procedure in (\ref{conv}), $T\rightarrow
T_n^{(\alpha)}=T\ast\delta_n^{(\alpha)}$, with different values of
$\alpha$ as we will see.
Let $\Phi(x)\in{\cal D}(\rm\bf{R})$ be a given non negative function,
with support in $[-1,1]$ and such that
$\int_{\rm\bf{R}}\Phi(x)\,dx=1$. In the rest of this section, as in
\cite{bag1,bag2}, we will essentially consider the following
particular choice of $\Phi(x)$, \begin{equation}
\Phi(x) = \left\{
\begin{array}{ll}
\frac{x^m}{N_m} \cdot \exp\{\frac{1}{x^2-1}\}, & |x| <1 \\
0, & |x| \geq 1.
\end{array}
\right.
\label{31} \end{equation} where $m$ is some fixed even natural number and
$N_m$ is a normalization constant fixed by the condition
$\int_{-1}^{1} \Phi(x)\, dx=1$. As we have discussed in
\cite{bag1} the sequence
$\delta_n^{(\alpha)}(x)=n^\alpha\Phi(n^\alpha x)$ is a {\em
delta-sequence} for any choice of $\alpha>0$. This means that: (1)
$\delta_n^{(\alpha)}(x)\rightarrow \delta(x)$ in ${\cal D}'(\rm\bf{R})$
for any $\alpha>0$; (2) if $T(x)$ is a continuous function with
compact support then, for all $\alpha>0$, the convolution
$T_n^{(\alpha)}(x)=(T\ast\delta_n^{(\alpha)})(x)$ converges to
$T(x)$ uniformly as $n\rightarrow\infty$; (3) if $T(x)\in{\cal D}(\rm\bf{R})$
then $T_n^{(\alpha)}(x)$ converges to $T(x)$ in the topology
$\tau_{\cal D}$ of ${\cal D}(\rm\bf{R})$; (4) if $T\in{\cal D}'(\rm\bf{R})$ then
$T_n^{(\alpha)}(x)$ is a $C^\infty$ function and it converges to
$T$ in ${\cal D}'(\rm\bf{R})$ as $n$ diverges independently of $\alpha>0$.
We remark here that all these results can be naturally extended to
larger spatial dimensions, and this will be useful in the next
section.
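As a quick check of property (1), the action of $\delta_n^{(\alpha)}$ on a test function can be computed explicitly; the only ingredients are the normalization and the compact support of $\Phi$, together with the change of variable $y=n^\alpha x$:

```latex
$$
\int_{\rm\bf{R}}\delta_n^{(\alpha)}(x)\,\Psi(x)\,dx=
\int_{\rm\bf{R}}n^\alpha\,\Phi(n^\alpha x)\,\Psi(x)\,dx=
\int_{-1}^{1}\Phi(y)\,\Psi\left(\frac{y}{n^\alpha}\right)\,dy
\,\longrightarrow\,\Psi(0)=\delta(\Psi),
$$
```

as $n\rightarrow\infty$, for every $\alpha>0$, because of the continuity of $\Psi$ and of the dominated convergence theorem.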
Our next step is to replace definition (\ref{def}) with our
alternative multiplication. To begin with, let us consider two
distributions $T, S\in{\cal D}'(\rm\bf{R})$, and let us compute their
convolutions $T_n^{(\alpha)}(x)=(T\ast\delta_n^{(\alpha)})(x)$ and
$S_n^{(\beta)}(x)=(S\ast\delta_n^{(\beta)})(x)$ with
$\delta_n^{(\alpha)}(x)=n^\alpha\Phi(n^\alpha x)$, for
$\alpha,\beta>0$ to be fixed in the following. $T_n^{(\alpha)}(x)$
and $S_n^{(\beta)}(x)$ are both $C^\infty$ functions, so that for
any $\Psi(x)\in{\cal D}(\rm\bf{R})$ and for each fixed $n\in\rm\bf{N}$, the
following integrals surely exist: \begin{equation} (S\odot
T)_n^{(\alpha,\beta)}(\Psi ) \equiv \frac{1}{2} \int_{\rm\bf{R}}
\left[S_n^{(\alpha)}(x)\, T_n^{(\beta)}(x) +
S_n^{(\beta)}(x)\,T_n^{(\alpha)}(x) \right]\, \Psi (x) \, dx,
\label{32}\end{equation} \begin{equation} (S\odot_d T)_n^{(\alpha,\beta)}(\Psi ) \equiv
\int_{\rm\bf{R}} S_n^{(\alpha)}(x)\, T_n^{(\beta)}(x) \, \Psi (x) \,
dx, \label{33}\end{equation} \begin{equation} (S\odot_{ex} T)_n^{(\alpha,\beta)}(\Psi )
\equiv \int_{\rm\bf{R}} S_n^{(\beta)}(x)\, T_n^{(\alpha)}(x) \, \Psi
(x) \, dx. \label{34}\end{equation} The suffix $d$ stands for {\em direct},
while $ex$ stands for {\em exchange}. This is because the two
related integrals remind us of the direct and exchange
contributions in a Hartree-Fock energy computation, typical of
quantum many-body systems. It is clear that if $S\equiv T$ then the
three integrals above coincide: $(S\odot S)_n^{(\alpha,\beta)}(\Psi
)=(S\odot_d S)_n^{(\alpha,\beta)}(\Psi )=(S\odot_{ex}
S)_n^{(\alpha,\beta)}(\Psi )$, for all $\alpha,\beta,\Psi,n$. In
general, however, they are different and we will discuss an example
in which they really produce different results when
$n\rightarrow\infty$. We say that the distributions $S$ and $T$ are
$\odot$-multiplicable, and we indicate with $(S\odot
T)_{(\alpha,\beta)}$ their product, if the following limit exists
for all $\Psi(x)\in{\cal D}(\rm\bf{R})$: \begin{equation} (S\odot
T)_{(\alpha,\beta)}(\Psi )=\lim_{n\,\rightarrow\,\infty}(S\odot
T)_n^{(\alpha,\beta)}(\Psi ).\label{35}\end{equation} Analogously we put, when
the following limits exist, \begin{equation} (S\odot_d T)_{(\alpha,\beta)}(\Psi
)=\lim_{n\,\rightarrow\,\infty}(S\odot_d T)_n^{(\alpha,\beta)}(\Psi
).\label{36}\end{equation} and \begin{equation} (S\odot_{ex} T)_{(\alpha,\beta)}(\Psi
)=\lim_{n\,\rightarrow\,\infty}(S\odot_{ex}
T)_n^{(\alpha,\beta)}(\Psi ).\label{37}\end{equation}
Again, it is clear that, whenever they exist, $(S\odot
S)_{(\alpha,\beta)}(\Psi )=(S\odot S)_{(\beta,\alpha)}(\Psi
)=(S\odot_d S)_{(\alpha,\beta)}(\Psi )=(S\odot_{ex}
S)_{(\alpha,\beta)}(\Psi )$, for all $\Psi(x)\in{\cal D}(\rm\bf{R})$ while
they are different, in general, if $S\neq T$. Of course, the
existence of these limits in general will depend on the values of
$\alpha$ and $\beta$ and on the particular choice of $\Phi(x)$. For
this reason, as in \cite{bag1,bag2}, we are really defining a {\em
class of multiplications} of distributions and not just a single
one. We have already discussed in \cite{bag1} a simple example which
shows how these different multiplications may have a physical
interpretation in a simple quantum mechanical system. We will
discuss further physical applications of our procedure in a
forthcoming paper.
We now list a set of properties which follow directly from the
definitions:
\begin{enumerate}
\item if $S(x)$ and $T(x)$ are continuous functions with compact
support then \begin{equation} (S\odot T)_{(\alpha,\beta)}(\Psi )=(S\odot_d
T)_{(\alpha,\beta)}(\Psi )=(S\odot_{ex} T)_{(\alpha,\beta)}(\Psi
)=\int_{\rm\bf{R}}\,S(x)\,T(x)\,\Psi(x)\,dx, \label{38}\end{equation} for all
$\Psi(x)\in{\cal D}(\rm\bf{R})$ and for all $\alpha,\beta>0$.
\item for all fixed $n$, for all $\alpha,\beta>0$ and for all
$\Psi(x)\in{\cal D}(\rm\bf{R})$ we have \begin{equation} (S\odot_d
T)_n^{(\alpha,\beta)}(\Psi )=(S\odot_{ex} T)_n^{(\beta,\alpha)}(\Psi
)\label{39}\end{equation}
\item for all $n$, $\alpha, \beta>0$ and for all $\Psi(x)\in{\cal D}(\rm\bf{R})$
we have \begin{equation} \left(S\odot_d
T\right)_n^{(\alpha,\beta)}(\Psi)=\left(T\odot_d
S\right)_n^{(\beta,\alpha)}(\Psi), \mbox{ and } \left(S\odot_{ex}
T\right)_n^{(\alpha,\beta)}(\Psi)=\left(T\odot_{ex}
S\right)_n^{(\beta,\alpha)}(\Psi) \label{310}\end{equation}
\item given $S,T\in{\cal D}'(\rm\bf{R})$ such that the following quantities exist we
have \begin{equation} (S\odot T)_{(\alpha,\beta)}(\Psi
)=\frac{1}{2}\left\{(S\odot_d T)_{(\alpha,\beta)}(\Psi
)+(S\odot_{ex} T)_{(\alpha,\beta)}(\Psi )\right\}\label{311}\end{equation}
\end{enumerate}
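Property 4, in particular, follows at once from the definitions: for every fixed $n$ the symmetric integrand in (\ref{32}) is the arithmetic mean of the direct and exchange integrands in (\ref{33}) and (\ref{34}), so that

```latex
$$
(S\odot T)_n^{(\alpha,\beta)}(\Psi)=\frac{1}{2}\left[
(S\odot_d T)_n^{(\alpha,\beta)}(\Psi)+
(S\odot_{ex} T)_n^{(\alpha,\beta)}(\Psi)\right],
$$
```

and, whenever the two limits in (\ref{36}) and (\ref{37}) exist, taking $n\rightarrow\infty$ on both sides produces (\ref{311}).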
We will now discuss a few examples of these definitions, beginning
with maybe the most relevant for concrete applications, i.e. the
multiplication of two delta functions.
\vspace{2mm}
{\bf Example nr.1: $\left(\delta\odot
\delta\right)_{(\alpha,\beta)}$ }
\vspace{2mm}
First of all we remind that, in this case, the $\odot$, $\odot_d$
and $\odot_{ex}$ multiplications all coincide. If the following
limit exists for some $\alpha, \beta>0$, we have
$$
\left(\delta\odot
\delta\right)_{(\alpha,\beta)}(\Psi)=\left(\delta\odot
\delta\right)_{(\beta,\alpha)}(\Psi)=\lim_{n\,\rightarrow\,\infty}\int_{\rm\bf{R}}
\delta_n^{(\alpha)}(x)\, \delta_n^{(\beta)}(x) \, \Psi (x) \, dx=$$
$$=\lim_{n\,\rightarrow\,\infty}n^{\alpha+\beta}\int_{\rm\bf{R}} \Phi(n^\alpha\,
x)\, \Phi(n^\beta\, x)\, \, \Psi (x) \, dx,
$$
for all $\Psi(x)\in{\cal D}(\rm\bf{R})$. It is an easy exercise to check
that this limit does not exist, if $\Psi(0)\neq0$ and
$\alpha=\beta$. Therefore we consider, first of all, the case
$\alpha>\beta$. In this case we can write
$$
\left(\delta\odot
\delta\right)_n^{(\alpha,\beta)}(\Psi)=n^{\beta}\,\int_{-1}^1\,\Phi(x)\,\Phi(xn^{\beta-\alpha})\,
\Psi(xn^{-\alpha})\,dx,
$$
and the existence of its limit can be proved, as in
\cite{bag1,bag2}, using the LDCT. Choosing $\Phi(x)$ as in
(\ref{31}), and defining
$B_m=\frac{1}{eN_m}\,\int_{-1}^1\,x^m\,\Phi(x)\,dx$, which is
surely well defined and strictly positive for all fixed even $m$,
it is quite easy to deduce that \begin{equation}
(\delta \odot \delta)_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
B_m \Psi(0)= B_m\,\delta(\Psi), & \alpha=\beta\left(1+\frac{1}{m}\right) \\
0, & \alpha>\beta\left(1+\frac{1}{m}\right).
\end{array}
\right.
\label{312} \end{equation} As we can see, this result is quite close to the
one in (\ref{resf1}). Analogously to what was already stressed in
Section 2, formula (\ref{312}) does not imply that $(\delta \odot
\delta)_{(\alpha,\beta)}(\Psi)$ does not exist if
$\alpha<\beta\left(1+\frac{1}{m}\right)$ because using the LDCT we
only find sufficient conditions for $(\delta \odot
\delta)_{(\alpha,\beta)}(\Psi)$ to exist. However, with respect to
our results in \cite{bag1}, here we can say a bit more, because we
have $(\delta \odot \delta)_{(\alpha,\beta)}(\Psi)=(\delta \odot
\delta)_{(\beta,\alpha)}(\Psi)$, which was not true in general for
the multiplication $\otimes_{(\alpha,\beta)}$ introduced in
\cite{bag1}. Therefore we find that \begin{equation}
(\delta \odot \delta)_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
B_m\,\delta(\Psi), & \alpha=\beta\left(1+\frac{1}{m}\right)^{-1},\,\mbox{ or }
\alpha=\beta\left(1+\frac{1}{m}\right) \\
0, & \alpha<\beta\left(1+\frac{1}{m}\right)^{-1} \,
\hspace{2mm} \mbox{ or } \alpha>\beta\left(1+\frac{1}{m}\right),
\end{array}
\right.
\label{313} \end{equation} while nothing can be said in general if
$\alpha\in\left]\beta\left(1+\frac{1}{m}\right)^{-1},\beta\left(1+\frac{1}{m}\right)\right[$.
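The origin of the critical value $\alpha=\beta\left(1+\frac{1}{m}\right)$ in (\ref{312}) can be understood via the following heuristic version of the LDCT estimate: since $\Phi(z)\simeq\frac{z^m}{N_m\,e}$ as $z\rightarrow0$, for $\alpha>\beta$ the rescaled integrand behaves, for large $n$, as

```latex
$$
n^{\beta}\,\Phi(x)\,\Phi(xn^{\beta-\alpha})\,\Psi(xn^{-\alpha})\simeq
n^{\beta+m(\beta-\alpha)}\,\frac{x^m\,\Phi(x)}{N_m\,e}\,\Psi(0),
$$
```

so that the limit is finite and non trivial exactly when the exponent $\beta+m(\beta-\alpha)$ vanishes, i.e. for $\alpha=\beta\left(1+\frac{1}{m}\right)$, where the integral reproduces $B_m\,\Psi(0)$, while it goes to zero for $\alpha>\beta\left(1+\frac{1}{m}\right)$.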
\vspace{2mm}
{\bf Example nr.2: $\left(\delta\odot
\delta'\right)_{(\alpha,\beta)}$ }
\vspace{2mm}
In this case the three multiplications $\odot$, $\odot_d$ and
$\odot_{ex}$ do not need to coincide. Indeed we will find serious
differences between the three, as expected.
First of all we concentrate on the computation of
$\left(\delta\odot_d \delta'\right)_{(\alpha,\beta)}$. Because of
(\ref{39}) this will also produce $\left(\delta\odot_{ex}
\delta'\right)_{(\beta,\alpha)}$. We have
$$
\left(\delta\odot_d
\delta'\right)_n^{(\alpha,\beta)}(\Psi)=n^{\alpha+2\beta}\,\int_{\rm\bf{R}}\,\Phi(n^\alpha\,x)\,\Phi'(n^\beta\,x)\,
\Psi(x)\,dx,
$$
where $\Phi'$ is the derivative of $\Phi$. Again, it is easy to
check that, if $\alpha=\beta$, the limit of this sequence does not
exist for all $\Psi(x)\in{\cal D}(\rm\bf{R})$, but only for those
$\Psi(x)$ which go to zero fast enough when $x\rightarrow 0$. Let us
then consider $\left(\delta\odot_d
\delta'\right)_n^{(\alpha,\beta)}(\Psi)$ for $\alpha>\beta$. In this
case we can write
$$
\left(\delta\odot_d
\delta'\right)_n^{(\alpha,\beta)}(\Psi)=n^{2\beta}\,\int_{-1}^1\,\Phi(x)\,\Phi'(xn^{\beta-\alpha})\,
\Psi(xn^{-\alpha})\,dx,
$$
and by the LDCT we deduce that \begin{equation}
(\delta \odot_d \delta')_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
K_m \,\delta(\Psi), & \alpha=\beta\,\frac{m+1}{m-1} \\
0, & \alpha>\beta\,\frac{m+1}{m-1},
\end{array}
\right.
\label{315} \end{equation} where
$K_m=\frac{m}{N_me}\,\int_{-1}^1\,x^{m-1}\,\Phi(x)\,dx$. We see
that, contrary to (\ref{resf2}), we can obtain a non trivial
result with the $\odot_d$ multiplication. It is therefore clear
that also $(\delta \odot_{ex}\delta')$ can be non trivial, as
remarked above.
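The threshold in (\ref{315}) can be traced back, heuristically, to the small-argument behaviour of the derivative, $\Phi'(z)\simeq\frac{m\,z^{m-1}}{N_m\,e}$ as $z\rightarrow0$: for $\alpha>\beta$ and large $n$ the rescaled integrand behaves as

```latex
$$
n^{2\beta}\,\Phi(x)\,\Phi'(xn^{\beta-\alpha})\,\Psi(xn^{-\alpha})\simeq
n^{\beta(m+1)-\alpha(m-1)}\,\frac{m\,x^{m-1}\,\Phi(x)}{N_m\,e}\,\Psi(0),
$$
```

and the exponent $\beta(m+1)-\alpha(m-1)$ vanishes exactly for $\alpha=\beta\,\frac{m+1}{m-1}$, where the limit equals $K_m\,\Psi(0)$.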
The situation is completely different for the $(\delta \odot
\delta')_{(\beta,\alpha)}(\Psi)$. Indeed, it is not difficult to
understand that the LDCT cannot be used to prove its existence. The
reason is quite general and is the following:
suppose that for two distributions $T$ and $S$ in ${\cal D}'(\rm\bf{R})$ $(T
\odot_d S)_{(\alpha,\beta)}$ exists for $\alpha$ and $\beta$ such
that $\alpha>\gamma\beta$, where $\gamma$ is some constant larger
than 1 appearing because of the LDCT. For instance here
$\gamma=\frac{m+1}{m-1}$, while in Example nr.1 we had
$\gamma=1+\frac{1}{m}$. Now, because of (\ref{39}), $(S\odot_{ex}
T)_{(\alpha,\beta)}$ and $(S\odot_d T)_{(\beta,\alpha)}$ coincide
whenever either exists, so that, using (\ref{311}), we have \begin{equation} (S\odot
T)_{(\alpha,\beta)}(\Psi )=\frac{1}{2}\left\{(S\odot_d
T)_{(\alpha,\beta)}(\Psi )+(S\odot_{d} T)_{(\beta,\alpha)}(\Psi
)\right\}\label{316}\end{equation} Of course, $(S\odot T)_{(\alpha,\beta)}(\Psi
)$ exists if $(S\odot_{d} T)_{(\alpha,\beta)}(\Psi )$ and
$(S\odot_{d} T)_{(\beta,\alpha)}(\Psi )$ both exist, which in turn
implies that $\alpha>\gamma\beta$ and, at the same time, that
$\beta>\gamma\alpha$. These are clearly incompatible. Therefore in
order to check whether $(S\odot T)_{(\alpha,\beta)}(\Psi )$ exists
or not it is impossible to use the LDCT which only gives sufficient
conditions for the multiplication to be defined: some different
strategy should be considered.
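The incompatibility can be made completely explicit: if the two sufficient conditions held simultaneously we would get

```latex
$$
\alpha>\gamma\beta \quad\mbox{and}\quad \beta>\gamma\alpha
\;\Longrightarrow\; \alpha>\gamma^2\alpha
\;\Longrightarrow\; \gamma^2<1,
$$
```

which contradicts $\gamma>1$. Hence the two LDCT conditions can never be satisfied at the same time, and no information on $(S\odot T)_{(\alpha,\beta)}$ can be extracted in this way.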
\vspace{2mm}
{\bf Example nr.3: $\left(\delta'\odot
\delta'\right)_{(\alpha,\beta)}$ }
\vspace{2mm}
As for Example nr.1 we remark that here the $\odot$, $\odot_d$ and
$\odot_{ex}$ multiplications all coincide. If the following limit
exists for some $\alpha, \beta>0$, we have
$$
\left(\delta'\odot
\delta'\right)_{(\alpha,\beta)}(\Psi)=\left(\delta'\odot
\delta'\right)_{(\beta,\alpha)}(\Psi)=\lim_{n\,\rightarrow\,\infty}\int_{\rm\bf{R}}
\delta_n^{\,'(\alpha)}(x)\, \delta_n^{\,'(\beta)}(x) \, \Psi (x) \,
dx=$$
$$=\lim_{n\,\rightarrow\,\infty}n^{2\alpha+2\beta}\int_{\rm\bf{R}} \Phi'(n^\alpha\,
x)\, \Phi'(n^\beta\, x)\, \, \Psi (x) \, dx,
$$
for all $\Psi(x)\in{\cal D}(\rm\bf{R})$. As before, it is quite easy to
check that this limit does not exist, if $\Psi(0)\neq0$, when
$\alpha=\beta$. Therefore we start considering the case
$\alpha>\beta$. In this case we can write
$$
\left(\delta'\odot
\delta'\right)_n^{(\alpha,\beta)}(\Psi)=n^{\alpha+2\beta}\,\int_{-1}^1\,\Phi'(x)\,\Phi'(xn^{\beta-\alpha})\,
\Psi(xn^{-\alpha})\,dx,
$$
and again the existence of its limit can be proved using the
LDCT. Choosing $\Phi(x)$ as
in (\ref{31}) and defining $\tilde
B_m=\frac{m}{eN_m}\,\int_{-1}^1\,x^{m-1}\,\Phi'(x)\,dx$, which
surely exists for all fixed even $m$, we deduce that \begin{equation}
(\delta' \odot \delta')_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
\tilde B_m \,\delta(\Psi), & \alpha=\beta\,\frac{m+1}{m-2} \\
0, & \alpha>\beta\,\frac{m+1}{m-2}.
\end{array}
\right.
\label{316b} \end{equation} However this is not the end of the story,
because we still can use the symmetry $(\delta' \odot
\delta')_{(\alpha,\beta)}(\Psi)=(\delta' \odot
\delta')_{(\beta,\alpha)}(\Psi)$ discussed before. We find \begin{equation}
(\delta' \odot \delta')_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
\tilde B_m\,\delta(\Psi), & \alpha=\beta\,\frac{m-2}{m+1},\,\mbox{ or }
\alpha=\beta\,\frac{m+1}{m-2} \\
0, & \alpha<\beta\,\frac{m-2}{m+1},\,\mbox{ or }
\alpha>\beta\,\frac{m+1}{m-2},
\end{array}
\right.
\label{317} \end{equation} while nothing can be said in general if
$\alpha\in\left]\beta\,\frac{m-2}{m+1},\beta\,\frac{m+1}{m-2}\right[$.
Needless to say, we need here to restrict to the following values of
$m$: $m=4,6,8,\ldots$.
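As in the previous examples, the threshold in (\ref{316b}) can be traced back to the behaviour $\Phi'(z)\simeq\frac{m\,z^{m-1}}{N_m\,e}$ as $z\rightarrow0$: for $\alpha>\beta$ the rescaled integrand behaves, for large $n$, as

```latex
$$
n^{\alpha+2\beta}\,\Phi'(x)\,\Phi'(xn^{\beta-\alpha})\,\Psi(xn^{-\alpha})\simeq
n^{\beta(m+1)-\alpha(m-2)}\,\frac{m\,x^{m-1}\,\Phi'(x)}{N_m\,e}\,\Psi(0),
$$
```

so that the exponent $\beta(m+1)-\alpha(m-2)$ vanishes exactly for $\alpha=\beta\,\frac{m+1}{m-2}$, where the limit reproduces $\tilde B_m\,\Psi(0)$; this also makes the restriction to $m=4,6,8,\ldots$ transparent.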
\vspace{2mm}
Summarizing we find that results which are very close to those in
\cite{bag1} can be recovered with each one of the definitions in
(\ref{35}), (\ref{36}) or (\ref{37}). The main differences
essentially arise from the lack of symmetries of these two last
definitions compared to definition (\ref{35}) and the one in
\cite{bag1}.
\section{More spatial dimensions and conclusions}
While the definition of the multiplication given in \cite{bag1}, as
we have stressed before, cannot be extended easily to ${\rm\bf{R}}^d$,
$d>1$, definitions (\ref{35}), (\ref{36}) or (\ref{37}) admit a
natural extension to any spatial dimensions. We concentrate here
only on the symmetric definition, (\ref{35}), since it is the most
relevant one for the application we are interested in here. Of
course no particular differences appear in the attempt of extending
$\odot_d$ and $\odot_{ex}$ to $d>1$.
The starting point is again a given non negative function
$\Phi(\underline{x})\in{\cal D}(\rm\bf{R}^d)$ with support in
$I_1:=\underbrace{[-1,1]\times\cdots\times[-1,1]}_{d \mbox{
times}}$, and such that
$\int_{I_1}\Phi(\underline{x})\,d\underline{x}=1$. In this case the
delta-sequence is
$\delta_n^{(\alpha)}(\underline{x})=n^{d\alpha}\Phi(n^\alpha
\underline{x})$, for any choice of $\alpha>0$. The same results
listed in the previous section again hold in this more general
situation. For instance, if $T\in{\cal D}'({\rm\bf{R}}^d)$ then
$\{T_n^{(\alpha)}(\underline{x})=(T\ast\delta_n^{(\alpha)})(\underline{x})\}$
is a sequence of $C^\infty$ functions and it converges to $T$ in
${\cal D}'({\rm\bf{R}}^d)$ as $n$ diverges, independently of $\alpha>0$.
Therefore, let us consider two distributions $T,
S\in{\cal D}'(\rm\bf{R}^d)$, and let us consider their convolutions
$T_n^{(\alpha)}(\underline{x})=(T\ast\delta_n^{(\alpha)})(\underline{x})$
and
$S_n^{(\beta)}(\underline{x})=(S\ast\delta_n^{(\beta)})(\underline{x})$
with $\delta_n^{(\alpha)}(\underline{x})=n^{d\alpha}\Phi(n^\alpha
\underline{x})$, for $\alpha,\beta>0$. As usual,
$T_n^{(\alpha)}(\underline{x})$ and $S_n^{(\beta)}(\underline{x})$
are both $C^\infty$ functions, so that the following integral
surely exists: \begin{equation} (S\odot T)_n^{(\alpha,\beta)}(\Psi ) \equiv
\frac{1}{2} \int_{\rm\bf{R}^d} \left[S_n^{(\alpha)}(\underline{x})\,
T_n^{(\beta)}(\underline{x}) +
S_n^{(\beta)}(\underline{x})\,T_n^{(\alpha)}(\underline{x})
\right]\, \Psi (\underline{x}) \, d\underline{x}, \label{41}\end{equation}
$\forall\,\Psi(\underline{x})\in{\cal D}(\rm\bf{R}^d)$. As before the two
distributions $S$ and $T$ are $\odot$-multiplicable if the following
limit exists for all $\Psi(\underline{x})\in{\cal D}(\rm\bf{R}^d)$:
\begin{equation} (S\odot T)_{(\alpha,\beta)}(\Psi
)=\lim_{n\,\rightarrow\,\infty}(S\odot T)_n^{(\alpha,\beta)}(\Psi
).\label{42}\end{equation}
In the following we consider the $\odot$-multiplication of two
delta functions, for two different choices of the
function $\Phi(\underline{x})$, both extending the one-dimensional
case.
The starting point of our computation is the usual one: if it
exists, $(\delta\odot \delta)_{(\alpha,\beta)}(\Psi )$ must be such
that
$$
(\delta\odot \delta)_{(\alpha,\beta)}(\Psi
)=\lim_{n\,\rightarrow\,\infty}\,n^{d\alpha+d\beta}\int_{\rm\bf{R}^d}
\Phi(n^\alpha\, \underline{x})\, \Phi(n^\beta\, \underline{x})\, \,
\Psi (\underline{x}) \, d\underline{x}.
$$
Again, if $\alpha=\beta$ this limit does not exist, except for very
peculiar functions $\Psi(\underline{x})$. If we consider what
happens for $\alpha>\beta$ then the limit exists under certain extra
conditions.
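Indeed, the computation parallels the one-dimensional one: for $\alpha>\beta$, the change of variable $\underline{y}=n^\alpha\,\underline{x}$ gives

```latex
$$
(\delta\odot \delta)_n^{(\alpha,\beta)}(\Psi)=
n^{d\beta}\int_{I_1}\Phi(\underline{y})\,
\Phi(n^{\beta-\alpha}\,\underline{y})\,
\Psi(n^{-\alpha}\,\underline{y})\,d\underline{y},
$$
```

so that the only change with respect to the case $d=1$ is the prefactor $n^{d\beta}$ replacing $n^{\beta}$, and the LDCT can be applied exactly as before.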
For instance, if we take
$\Phi(\underline{x})=\prod_{j=1}^d\,\Phi(x_j)$, where $\Phi(x_j)$ is
the one in (\ref{31}), the computation factorizes and the final
result, considering also the symmetry of the multiplication, is a
simple extension of the one in (\ref{313}): \begin{equation}
(\delta \odot \delta)_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
B_m^d\,\delta(\Psi), & \alpha=\beta\left(1+\frac{1}{m}\right)^{-1},\,\mbox{ or }
\alpha=\beta\left(1+\frac{1}{m}\right) \\
0, & \alpha<\beta\left(1+\frac{1}{m}\right)^{-1}
\hspace{2mm}
\mbox{ or } \alpha>\beta\left(1+\frac{1}{m}\right),
\end{array}
\right.
\label{43} \end{equation} \vspace{2mm} A different choice of
$\Phi(\underline{x})$, again related to the one in (\ref{31}), is
the following: \begin{equation}
\Phi(\underline{x}) = \left\{
\begin{array}{ll}
\frac{\|\underline{x}\|^m}{N'_m} \, \exp\{\frac{1}{\|\underline{x}\|^2-1}\}, & \|\underline{x}\| <1 \\
0, & \|\underline{x}\| \geq 1,
\end{array}
\right.
\label{44} \end{equation} where $N'_m$ is a normalization constant and
$\|\underline{x}\|=\sqrt{x_1^2+\cdots +x_d^2}$. With this choice,
defining
$C_m=\frac{1}{N'_me}\int_{\rm\bf{R}^d}\|\underline{x}\|^m\,\Phi(\underline{x})\,d\underline{x}$
and using the symmetry property of $\odot_{(\alpha,\beta)}$, we find
\begin{equation}
(\delta \odot \delta)_{(\alpha,\beta)}(\Psi) = \left\{
\begin{array}{ll}
C_m\,\delta(\Psi), & \alpha=\beta\left(1+\frac{d}{m}\right)^{-1},\,\mbox{ or }
\alpha=\beta\left(1+\frac{d}{m}\right) \\
0, & \alpha<\beta\left(1+\frac{d}{m}\right)^{-1}
\hspace{2.5mm}
\mbox{ or } \alpha>\beta\left(1+\frac{d}{m}\right).
\end{array}
\right.
\label{45} \end{equation} Therefore the limit defining the product of two delta
functions exists, at least under certain conditions, also with this
choice of $\Phi(\underline{x})$. The main differences between the
above results are the values of the constants and the fact that $d$
explicitly appears in the result in (\ref{43}), while it only
appears in the condition relating $\alpha$ and $\beta$ in
(\ref{45}).
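The different role played by $d$ in the two results can also be understood heuristically. For the radial choice (\ref{44}) one has $\Phi(\underline{z})\simeq\frac{\|\underline{z}\|^m}{N'_m\,e}$ as $\underline{z}\rightarrow\underline{0}$, so that, after the change of variable $\underline{y}=n^\alpha\,\underline{x}$, the integrand behaves for large $n$ as

```latex
$$
n^{d\beta}\,\Phi(\underline{y})\,\Phi(n^{\beta-\alpha}\,\underline{y})\,
\Psi(n^{-\alpha}\,\underline{y})\simeq
n^{d\beta+m(\beta-\alpha)}\,
\frac{\|\underline{y}\|^m\,\Phi(\underline{y})}{N'_m\,e}\,\Psi(\underline{0}),
$$
```

and the balance $d\beta+m(\beta-\alpha)=0$ gives precisely $\alpha=\beta\left(1+\frac{d}{m}\right)$. For the factorized choice, instead, each of the $d$ coordinates produces the same one-dimensional balance $\beta+m(\beta-\alpha)=0$, which is why $d$ drops out of the condition in (\ref{43}).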
\vspace{3mm}
The conclusion of this short paper is that the use of sequential
completion, properly adapted for our interests, is much simpler
\underline{and} more convenient. The next step of our analysis will be
to use our results in applications to three-dimensional
engineering structures, trying to extend the results in
\cite{cadde,cadde2}.
\vspace{50pt}
\noindent{\large \bf Acknowledgments} \vspace{5mm}
This work has been partially supported by M.U.R.S.T.
\section{Introduction}
Differential Privacy (DP) and its applications to machine learning (ML) have established themselves as the tool of choice for statistical analyses on sensitive data. They allow analysts working with such data to obtain useful insights while offering objective guarantees of privacy to the individuals whose data is contained within the dataset. DP guarantees are typically realised through the addition of calibrated noise to statistical queries. The randomisation of queries however introduces an unavoidable \say{tug-of-war} between privacy and accuracy, the so-called \textit{privacy-utility trade-off}. This trade-off is undesirable and may be among the principal deterrents from the widespread willingness to commit to the usage of DP in statistical analyses.
The main reason why DP is considered harmful for utility is perhaps an incomplete understanding of its very formulation: in its canonical definition, DP is a worst-case guarantee against a very powerful (i.e. optimal) adversary with access to unbounded computational power and auxiliary information \cite{tschantz2020sok}. Erring on the side of security in this way is prudent, as it means that DP bounds always hold for weaker adversaries. However, the privacy guarantee of an algorithm under realistic conditions, where such adversaries may not exist, could be more optimistic than indicated. This naturally leads to the question of what the \say{actual} privacy guarantees of algorithms are under relaxed adversarial assumptions.
Works on empirical verification of DP guarantees \cite{carlini2022membership, jagielski2020auditing, nasr2021adversary} have recently led to two general findings:
\begin{enumerate}
\item The DP guarantee in the worst case is (almost) tight, meaning that an improved analysis is not able to offer stronger bounds on existing algorithms under the same assumptions;
\item A relaxation of the threat model on the other hand leads to dramatic improvements in the empirical DP guarantees of the algorithm.
\end{enumerate}
Motivated by these findings, we initiate an investigation into a minimal threat model relaxation which results in an \say{almost optimal} adversary. Complementing the aforementioned empirical works, which instantiate adversaries who conduct membership inference tests, we assume a formal viewpoint but retain the hypothesis testing framework. Our contributions can be summarised as follows:
\begin{itemize}
\item We begin by introducing a mild formal relaxation of the usual DP assumption of a Neyman-Pearson-Optimal (NPO) adversary to a Generalised Likelihood Ratio Testing (GLRT) adversary. We discuss the operational significance of this formal relaxation in Section \ref{sec:background};
\item In this setting, we provide tight privacy guarantees for the Gaussian mechanism in the spirit of Gaussian DP (GDP) and $(\varepsilon, \delta)$-DP, which we show to be considerably stronger than under the worst-case assumptions, especially in the high privacy regime.
\item We provide composition results and subsampling guarantees for our bounds for use e.g. in deep learning applications.
\item We find that --contrary to the worst-case setting-- the performance of the adversary in the GLRT relaxation is dependent on the dimensionality of the query, with high-dimensional queries having stronger privacy guarantees. We link this phenomenon to the asymptotic convergence of our bounds to an amplified GDP guarantee.
\item Finally, we experimentally evaluate our bounds, showing them to be tight against empirical adversaries.
\end{itemize}
\section{Prior Work}
\textbf{Empirical verification of DP}: Several prior works have investigated DP guarantees from an empirical point-of-view. For instance, \cite{jagielski2020auditing} utilised data poisoning attacks to verify the privacy guarantees of DP-SGD, while \cite{nasr2021adversary} \textit{instantiate} adversaries in a variety of settings and test their membership inference capabilities. A similar work in this spirit is \cite{humphries2020differentially}.
\textbf{Formalisation of membership inference attacks}: \cite{shokri2017membership} is among the earliest works to formalise the notion of a membership inference attack against a machine learning model albeit in a \textit{black-box} setting, where the adversary only has access to predictions from a targeted machine learning model. Follow-up works like \cite{ye2021enhanced, carlini2022membership} have extended the attack framework to a variety of settings. Recent works by \cite{sablayrolles2019white} or by \cite{mahloujifar2022optimal} have also provided formal bounds on membership inference success in a DP setting.
\textbf{Software tools and empirical mitigation strategies}: Alongside the aforementioned works, a variety of software tools has been proposed to \textit{audit} the privacy guarantees of ML systems, such as \textit{ML-Doctor} \cite{liu2022ml} or \textit{ML Privacy Meter} \cite{ye2021enhanced}. Such tools operate on the premises related to the aforementioned \textit{adversary instantiation}.
Of note, DP is not the only technique to defend against membership inference attacks (although it is among the few formal ones). Works like \cite{liu2021generalization, usynin2022zen} have proposed so-called \textit{model adaptation} strategies, that is, methods which empirically harden the model against attacks without necessarily offering formal guarantees.
\textbf{Gaussian DP, numerical composition and subsampling amplification}: Our analysis relies heavily on the hypothesis testing interpretation of DP and specifically Gaussian DP (GDP) \cite{dong2021gaussian}; however, we present our privacy bounds in terms of the more familiar Receiver Operating Characteristic (ROC) curve, similarly to \cite{Kaissis_Knolle_Jungmann_Ziller_Usynin_Rueckert_2022}. We note that for the purposes of the current work, the guarantees are identical. Some of our guarantees have no tractable analytic form, instead requiring numerical computations, similar to \cite{gopi2021numerical, zhu2022optimal}. We make strong use of the duality between GDP and \textit{privacy profiles} for privacy amplification by subsampling, a technique described in \cite{balle2020privacy}.
\section{Background}\label{sec:background}
\subsection{The DP threat model}
We begin by briefly formulating the DP threat model in terms of an abstract, non-cooperative \textit{membership inference game}. This will then allow us to relax this threat model and thus present our main results in a more comprehensible way. Throughout, we assume two parties, a \textit{curator} $\mathcal{C}$ and an \textit{adversary} $\mathcal{A}$ and will limit our purview to the Gaussian mechanism of DP.
\begin{definition}[DP membership inference game]
Under the DP threat model, the game can be reduced to the following specifications. We note that any added complexity beyond the level described below can only serve to make the game harder for $\mathcal{A}$ and thus improve privacy.
\begin{enumerate}
\item The adversary $\mathcal{A}$ selects a function $f: \mathcal{X} \rightarrow \mathbb{R}^n$ where $\mathcal{X}$ is the space of datasets with (known) global $\ell_2$-sensitivity $\Delta$ and crafts two adjacent datasets $D$ and $D'$ such that $D \coloneqq \lbrace A \rbrace$ and $D' \coloneqq \lbrace A, B \rbrace$. Here, $A,B$ are the data of two individuals and \textbf{fully known} to $\mathcal{A}$. We denote the adjacency relationship by $\simeq$.
\item The curator $\mathcal{C}$ secretly evaluates either $f(D)$ or $f(D')$ and publishes the result $y$ with Gaussian noise of variance $\sigma^2\mathbf{I}^n$ calibrated to $\Delta$.
\item The adversary $\mathcal{A}$, using all available information, determines whether $D$ or $D'$ was used for computing $y$.
\end{enumerate}
The game is considered won by the adversary if they make a correct determination.
\end{definition}
Under this threat model, the process of computing the result and releasing it with Gaussian noise is the DP mechanism. Note that the aforementioned problem can be reformulated as the problem of detecting the presence of a single individual given the output. This gives rise to the description typically associated with DP guarantees: \say{DP guarantees hold even if the adversary has access to the data of all individuals except the one being tested}. The reason for this is that, due to their knowledge of the data and the function $f$, $\mathcal{A}$ can always \say{shift} the problem so that (WLOG) $f(A) = 0$, from which it follows that $f(B) = \Delta$ (where the strict equality is due to the presence of only two points in the dataset and consistent with the DP guarantee).
More formally, the problem can thus be expressed as the following one-sided hypothesis test:
\begin{equation}
\mathcal{H}_0: y = Z \;\; \text{vs.} \;\; \mathcal{H}_1: y = \Delta + Z, Z \sim \mathcal{N}(0, \sigma^2)
\end{equation}
and is equivalent to asking $\mathcal{A}$ to distinguish the distributions $\mathcal{N}(0, \sigma^2)$ and $\mathcal{N}(\Delta, \sigma^2)$ based on a single draw. The full knowledge of the two distributions' parameters renders both hypotheses \textit{simple}. In other words, $\mathcal{A}$ is able to compute the following log-likelihood ratio test statistic:
\begin{equation}\label{NPO_LR}
\log \left(\frac{\text{Pr}(y \mid \mathcal{N}(\Delta, \sigma^2))}{\text{Pr}(y \mid \mathcal{N}(0, \sigma^2))} \right) = \frac{1}{2\sigma^2}\left(\vert y \vert^2 - \vert y- \Delta \vert^2\right),
\end{equation}
which depends only on known quantities. We call this type of adversary \textit{Neyman-Pearson-Optimal} (NPO) as they are able to detect the presence of the individual in question with the best possible trade-off between Type I and Type II errors, consistent with the guarantee of the Neyman-Pearson lemma \cite{neyman1933ix}. As is evident from Equation \eqref{NPO_LR}, the capabilities of an NPO adversary are independent of query dimensionality. Due to the isotropic properties of the Gaussian mechanism, the ability to form the full likelihood ratio allows $\mathcal{A}$ to \say{rotate the problem} so that the output can be classified linearly, which amounts to computing the multivariate version of the $z$-test. We remark in passing that this property forms the basis of \textit{linear discriminant analysis}, a classification technique reliant upon it. GDP utilises the worst-case capabilities of an NPO adversary as the basis for formulating a DP guarantee:
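To make the NPO adversary concrete, the following Python sketch simulates the test statistic of Equation \eqref{NPO_LR} for a scalar Gaussian mechanism. It is illustrative only: the values $\Delta = \sigma = 1$ and the $5\%$ false-positive level are arbitrary choices, not taken from the text.

```python
import numpy as np
from scipy.stats import norm

Delta, sigma = 1.0, 1.0  # sensitivity and noise scale (example values)

def npo_llr(y):
    # log-likelihood ratio of Equation (NPO_LR); linear in y, so thresholding
    # it is equivalent to thresholding y itself (the z-test)
    return (np.abs(y) ** 2 - np.abs(y - Delta) ** 2) / (2 * sigma**2)

rng = np.random.default_rng(0)
y0 = rng.normal(0.0, sigma, 200_000)    # mechanism outputs under H0
y1 = rng.normal(Delta, sigma, 200_000)  # mechanism outputs under H1

t = npo_llr(sigma * norm.isf(0.05))     # cut-off for a 5% false-positive rate
p_f = np.mean(npo_llr(y0) > t)          # empirically close to 0.05
p_d = np.mean(npo_llr(y1) > t)          # close to Q(Q^{-1}(0.05) - Delta/sigma)
```

The empirical pair $(P_f, P_d)$ lies on the NPO adversary's Gaussian ROC curve, irrespective of which linear representation of the test statistic is thresholded.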
\begin{definition}[$\mu$-GDP, \cite{dong2021gaussian}]
A mechanism $\mathcal{M}$ preserves $\mu$-GDP if, $\forall D, D': D\simeq D'$, distinguishing between $f(D)$ and $f(D')$ based on an output of $\mathcal{M}$ is at least as hard (in terms of Type I and Type II error) as distinguishing between $\mathcal{N}(0, 1)$ and $\mathcal{N}(\mu, 1)$ with $\mu=\frac{\Delta}{\sigma}$.
\end{definition}
We stress that this guarantee is symmetric and given (1) over the randomness of $\mathcal{M}$, i.e. considering the datasets and $f$ as deterministic and (2) without consideration to (i.e. over) the adversary's prior. GDP utilises a \textit{trade-off} function/curve $T(x)$ to summarise the set of all achievable Type I and Type II errors by the adversary. An equivalent formulation is the \textit{testing region} in \cite{kairouz2015composition}. The \textit{trade-off} function is identical to the complement of the ROC curve (i.e. $T(x)=1-R(x)$), that is, the curve which plots the probability of correctly selecting $\mathcal{H}_1$ when it is true (equivalently, the probability of true positives, probability of detection ($P_d$) or sensitivity) against the probability of falsely selecting $\mathcal{H}_1$ when $\mathcal{H}_0$ is true (equivalently, the probability of false positives ($P_f$), probability of false alarm or $1-$specificity). Due to its greater familiarity, we will utilise $R(x)$ throughout as the guarantee is identical. For more details, we refer to \cite{dong2021gaussian, Kaissis_Knolle_Jungmann_Ziller_Usynin_Rueckert_2022}.
\subsection{Our threat model relaxation}
As the introductory section above outlines, the NPO adversary is the most powerful adversary imaginable (i.e. \textit{omnipotent}). Mathematically, the \say{key to omnipotence} is the aforementioned ability to form the full likelihood ratio. This crucially relies on either knowledge of or control over the dataset and/or function. In many realistic settings however, this assumption may be too pessimistic. A few examples:
\begin{itemize}
\item In federated learning, adversarial actors do not have access to other participants' datasets, as they only witness the outputs of models trained on these datasets.
\item In (e.g. medical) settings where data is generated and securely stored in a single institution, access to the full dataset is highly improbable.
\item When neural networks are trained on private data with DP-SGD, the adversary has no control over the function and only receives the output of the computation.
\end{itemize}
The goal of our work is to provide a formal and tight privacy guarantee for such settings. Such a guarantee can then e.g. be parameterised by the probability that the NPO setting does not apply. For example, it can be stated that with probability $p$, the adversary has no access to the dataset while with probability $1-p$ they do, where $p$ is optimally very close to $1$. Such a \textit{flexibilisation} of the threat model allows stakeholders to make holistic and sound decisions on the choice of privacy parameters while maintaining the worst-case outlook offered by DP. Assuming the formal point-of-view once more, the key difference between the NPO setting and the examples above is that, in the latter setting, $\mathcal{A}$ is \textbf{unable to form the full likelihood ratio} as at least one parameter of the distribution under $\mathcal{H}_1$ is unknown. This renders $\mathcal{H}_1$ a \textit{composite} hypothesis. We define the threat model relaxation as follows:
\begin{definition}[Relaxed threat model]
We say that an adversary operates under a relaxed threat model if they are unable to fully specify the distributions of \emph{at least one of} $\mathcal{H}_0$ or $\mathcal{H}_1$ because some parameters are unknown to them and must thus be estimated.
\end{definition}
To formally analyse this threat model in the rest of the paper, we will make the \textit{smallest possible} relaxation to the adversary and assume they are \textit{only lacking full knowledge of a single parameter} of the distribution of $\mathcal{H}_1$. This results in a \textit{nearly omnipotent} adversary. As is shown, even this minimal relaxation leads to a substantial amplification in privacy in certain regimes which are very relevant to everyday practice. For example, consider an adversary $\mathcal{A}_R$ who has full knowledge of $f$ and $\mathcal{M}$, but no access to $D$ or $D'$. Using this information, the adversary can infer all required information \textit{except the sign of $\Delta$}, as they now only know that $\Vert f(D) - f(D') \Vert_2 = \Delta$. Of note, we use the term \textit{sign} here to denote the multivariate \textit{signum} function $\operatorname{sgn}(x) = \frac{x}{\Vert x \Vert_2}$, i.e. the principal direction of the vector in $d$-dimensional space. More intuitively, $\mathcal{A}_R$ knows that $f(D)$ and $f(D')$ have \textit{distance} $\Delta$ from each other, but not \textit{where} they are located with respect to one another spatially. This situation results in the following hypothesis test (in the case of scalar $y$):
\begin{equation}
\mathcal{H}_0: y = Z \;\; \text{vs.} \;\; \mathcal{H}_1: y = \pm \Delta + Z, Z \sim \mathcal{N}(0, \sigma^2)
\end{equation}
Evidently $\mathcal{H}_1$ is not unique and thus unrealisable. Formally, a \textit{uniformly most powerful} test does not exist to distinguish between the hypotheses. The adversary thus has to resort to a class of tests summarised under the term \textit{generalised likelihood ratio tests}, and we term such an adversary a \textit{Generalised Likelihood Ratio Test} (GLRT) adversary. For a general $\boldsymbol{y}$, the hypothesis test takes the following form:
\begin{equation}
\mathcal{H}_0: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2\mathbf{I}) \;\; \text{vs.} \;\; \mathcal{H}_1: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{\nu}, \sigma^2\mathbf{I}),
\end{equation}
where $\boldsymbol{\nu} \in \mathbb{R}^d$ is unknown. Alternatively, we can formulate the equivalent \textit{two-sided} test:
\begin{equation}
\mathcal{H}_0: \boldsymbol{\nu} = \mathbf{0} \;\; \text{vs.} \;\; \mathcal{H}_1: \boldsymbol{\nu} \neq \mathbf{0}.
\end{equation}
Then, the adversary will reject $\mathcal{H}_0$ when the following ratio takes a large value:
\begin{equation}
\frac{\text{Pr}(\boldsymbol{y} \mid \boldsymbol{\nu}^{\ast}, \mathcal{H}_1)}{\text{Pr}(\boldsymbol{y} \mid \mathcal{H}_0)},
\end{equation}
where $\boldsymbol{\nu}^{\ast}$ is the maximum likelihood estimate of $\boldsymbol{\nu}$. As seen in the proof of Theorem 1 below, in the relaxed threat model membership inference game, this leads to classifying $\boldsymbol{y}$ by magnitude alone. We will show that this has two effects:
\begin{enumerate}
\item It substantially amplifies privacy for the individuals in $D$/$D'$;
\item This amplification is moreover dependent on the dimensionality of the query with stronger privacy for higher-dimensional queries.
\end{enumerate}
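The cost of the unknown sign can be previewed by simulation. The following sketch (again with illustrative values $\Delta = \sigma = 1$ and $d=1$, not taken from the text) contrasts the GLRT adversary's magnitude test with the one-sided NPO $z$-test at the same false-positive rate:

```python
import numpy as np
from scipy.stats import norm

# Illustrative simulation: a GLRT adversary who only knows |nu| = Delta
# must classify by magnitude, |y| > c, i.e. use a two-sided threshold.
Delta, sigma = 1.0, 1.0
rng = np.random.default_rng(1)

c = sigma * norm.isf(0.05 / 2)            # |y| > c has 5% mass under H0
y0 = rng.normal(0.0, sigma, 200_000)
signs = rng.choice([-1.0, 1.0], 200_000)  # the sign of the shift is unknown
y1 = rng.normal(signs * Delta, sigma)

p_f = np.mean(np.abs(y0) > c)  # ~0.05 by construction
p_d = np.mean(np.abs(y1) > c)  # noticeably below the one-sided NPO power
```

Splitting the false-positive budget over both tails is exactly what depresses the detection probability relative to the NPO adversary.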
\section{Results}
\subsection{ROC curve in the GLRT setting}
We are now ready to precisely describe the membership inference capabilities of the GLRT adversary in terms of their hypothesis testing prowess.
\begin{theorem}
Let $R(x)$ be the ROC curve of the following test:
\begin{equation}\label{test_1}
\mathcal{H}_0: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2\mathbf{I}) \;\; \text{vs.} \;\; \mathcal{H}_1: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{\nu}, \sigma^2\mathbf{I})
\end{equation}
and $R'(x)$ be the ROC curve of the following test:
\begin{equation}
\mathcal{H}'_0: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{\nu}, \sigma^2\mathbf{I}) \;\; \text{vs.} \;\; \mathcal{H}'_1: \boldsymbol{y} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2\mathbf{I}),
\end{equation}
where $\boldsymbol{y}$ is the output of a Gaussian mechanism with noise variance $\sigma^2$ known to the adversary and $\boldsymbol{\nu}$ is an unknown vector of dimensionality $d$ with $\Vert \boldsymbol{\nu} \Vert_2 \leq \Delta$ for some constant $\Delta$ which is known to the adversary. Then, the following hold:
\begin{equation}\label{R}
R(x) = \Psi_{\chi^2_d\left(\frac{\Delta^2}{\sigma^2}, \sigma^2 \right)} \left( \Psi^{-1}_{\chi^2_d\left(0, \sigma^2 \right)}(x)\right)
\end{equation}
and
\begin{equation}
R'(x) = \Phi_{\chi^2_d\left(0, \sigma^2 \right)} \left(\Phi^{-1}_{\chi^2_d\left(\frac{\Delta^2}{\sigma^2}, \sigma^2 \right)}(x) \right).
\end{equation}
Above, $\Phi$ is the cumulative distribution function, $\Psi$ the survival function, $\Phi^{-1}$ and $\Psi^{-1}$ their respective inverses, $\chi^2_d(\lambda, \sigma^2)$ denotes the noncentral chi-squared distribution with $d$ degrees of freedom, noncentrality parameter $\lambda$ and scaling factor $\sigma^2$. Setting $\lambda=0$ recovers the central chi-squared distribution.
\end{theorem}
\begin{proof}
We give the proof for $R$ (Equations \eqref{test_1} and \eqref{R}), the proof for $R'$ follows from it. Since the adversary does not know the true value of $\boldsymbol{\nu}$, the full likelihood ratio cannot be formed and a Neyman-Pearson approach (i.e. $z$-test) is out of the question. Instead, classification is done by magnitude. As described above, the null hypothesis is rejected if the following ratio takes a large value:
\begin{equation}
\frac{\text{Pr}(\boldsymbol{y} \mid \boldsymbol{\nu}^{\ast}, \mathcal{H}_1)}{\text{Pr}(\boldsymbol{y} \mid \mathcal{H}_0)}.
\end{equation}
However, the maximum likelihood estimate $\boldsymbol{\nu}^{\ast}$ is just $\boldsymbol{y}$, thus the likelihood ratio is simplified to (cancelling the normalisation factors):
\begin{equation}
\frac{\exp\left\{-\frac{\lVert \boldsymbol{y} - \boldsymbol{\nu}^{\ast}\rVert_2^2}{2\sigma^2}\right\}}{\exp\left\{-\frac{\lVert \boldsymbol{y}\rVert_2^2}{2\sigma^2}\right\}} = \frac{1}{\exp\left\{-\frac{\lVert \boldsymbol{y}\rVert_2^2}{2\sigma^2}\right\}}.
\end{equation}
Taking logarithms and collecting known terms, we obtain that the test statistic is:
\begin{equation} \label{test_statistic}
T(\boldsymbol{y}) = \lVert \boldsymbol{y} \rVert^2_2 \lessgtr c^2,
\end{equation}
where $c^2$ is a (non-negative) cut-off value chosen to satisfy a desired level of Type I/Type II errors (or significance level and power level or $P_d$ and $P_f$).
Then, by the definition of the null and alternative hypotheses,
\begin{align}
& P_f = \text{Pr}(\lVert \boldsymbol{y} \rVert^2 > c^2 \mid \mathcal{H}_0) \; \text{and} \\
& P_d = \text{Pr}(\lVert \boldsymbol{y} \rVert^2 > c^2 \mid \mathcal{H}_1)
\end{align}
hold.
Under $\mathcal{H}_0$, $T(\boldsymbol{y})$ follows a central chi-squared distribution with $d$ degrees of freedom and scale $\sigma^2$, as it is the distribution of the squared magnitude of a draw from a $d$-dimensional multivariate Gaussian random variable with mean $\boldsymbol{0}$ and spherical covariance $\sigma^2\mathbf{I}$. Thus:
\begin{equation}
P_f = \Psi_{\chi^2_d \left(0, \sigma^2\right)}(c^2) \Leftrightarrow c^2 = \Psi^{-1}_{\chi^2_d\left(0, \sigma^2 \right)}(P_f).
\end{equation}
Under the alternative hypothesis, the distribution is noncentral chi-squared with $d$ degrees of freedom, scale $\sigma^2$ and noncentrality parameter $\frac{\Delta^2}{\sigma^2}$, as it is the distribution of the squared magnitude of a draw from a $d$-dimensional multivariate Gaussian random variable with mean $\boldsymbol{\nu}$ (where $\Vert \boldsymbol{\nu} \Vert_2 = \Delta$) and spherical covariance $\sigma^2\mathbf{I}$. The ROC curve plots $P_f(c^2), P_d(c^2)$ parametrically for all values of $c^2$. Observing that in the plot, $P_f$ is the $x$-coordinate, we substitute the expression for $c^2$ from above and obtain:
\begin{equation}
\Psi_{\chi^2_d\left(\frac{\Delta^2}{\sigma^2}, \sigma^2 \right)} \left( \Psi^{-1}_{\chi^2_d\left(0, \sigma^2 \right)}(x) \right).
\end{equation}
A concrete example for $d=1$ can be found in the proof of Corollary 1.
\end{proof}
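Theorem 1 can be transcribed numerically in a few lines. The sketch below is ours, not the authors' implementation; in particular, the mapping of $\Delta$ and $\sigma^2$ onto SciPy's \texttt{nc} and \texttt{scale} parameters of the (non)central chi-squared distributions is our own choice.

```python
from scipy.stats import chi2, ncx2

def glrt_roc(x, delta, sigma2, d):
    # threshold c^2 with false-positive rate x under H0 (central chi-squared,
    # scale sigma^2), then evaluate the detection probability under H1
    # (noncentral chi-squared with noncentrality delta^2 / sigma^2)
    c2 = chi2.isf(x, df=d, scale=sigma2)
    return ncx2.sf(c2, df=d, nc=delta**2 / sigma2, scale=sigma2)
```

For $d=1$ this reproduces the closed form of Corollary 1 below; increasing $d$ at fixed $\Delta$ and $\sigma^2$ pushes the curve towards the diagonal.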
Computing both $R(x)$ and $R'(x)$ is imposed by the requirement of DP to hold when $D$ and $D'$ are swapped. $\mathcal{H}_0$/$\mathcal{H}_1$ and $\mathcal{H}'_0$/$\mathcal{H}'_1$ are easily identified as the tests to distinguish $D \coloneqq \lbrace A \rbrace$ from $D' \coloneqq \lbrace A, B \rbrace$ and $D'$ from $D$, respectively.
In practice, we will always use the symmetrified and concavified version of the ROC curve $R_s(x)$, as prescribed in \cite{dong2021gaussian}. We refer to the aforementioned manuscript for details of the construction of $R_s(x)$. In brief, the curve is constructed by taking the concave upper envelope of $R$ and $R'$ by linearly interpolating between the points where the slope of the curves is parallel to the diagonal of the unit square. By symmetrifying the curves, it is guaranteed that the correct (worst-case) bound is used to convert to $(\varepsilon, \delta)$-DP by Legendre-Fenchel conjugation, a procedure we outline below. Examples of the curves in question are shown in Figure \ref{fig:roc_curves}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{symm.pdf}
\caption{Exemplary visualisations of $R(x)$ (blue) and $R'(x)$ (green) as well as the concavified and symmetrified ROC curve $R_s(x)$ (dashed black) for a Gaussian mechanism with $\Delta=1, \sigma^2=1$ under the GLRT assumption. $R$ and $R'$ are symmetric with respect to reflection over the off-diagonal but not symmetric ROC curves, whereas $R_s(x)$ is a symmetric ROC curve. For reference, the ROC curve of an NPO adversary $R_{\mathcal{N}}$ at the same $\Delta$ and $\sigma^2$-values is shown in red. Observe that $R_{\mathcal{N}}$ is also a symmetric ROC curve.}
\label{fig:roc_curves}
\end{figure}
We are particularly interested in the case where $d=1$, as it represents the worst-case scenario in terms of privacy (respectively, the easiest problem for the adversary; see Section \ref{sec:dimensionality}). We emphasise that in this case, the only information the adversary lacks is the \textit{sign of the mean under the alternative hypothesis}.
\begin{corollary}
When $d=1$, $R(x)$ admits the following closed-form expression:
\begin{equation}
Q\left(Q^{-1}\left(\frac{x}{2}\right)-\frac{\Delta}{\sigma}\right) + Q\left(Q^{-1}\left(\frac{x}{2}\right)+\frac{\Delta}{\sigma}\right),
\end{equation}
where $Q$ is the survival function of the standard normal distribution and $Q^{-1}$ its inverse. Compare the ROC curve of the NPO adversary shown in red in Figure \ref{fig:roc_curves}:
\begin{equation}\label{eq:gauss_roc}
Q\left(Q^{-1}(x) - \frac{\Delta}{\sigma}\right).
\end{equation}
\end{corollary}
\begin{proof}
This is a special case of the proof to Theorem 1 above, but now for scalar $y$ since $d=1$. Under the null hypothesis, the survival function of the central chi-squared distribution with one degree of freedom admits an analytical form:
\begin{align} \label{erfc}
1-\frac{\gamma\left( \frac{1}{2}, \frac{c^2}{2{\sigma^2}} \right)}{\Gamma\left( \frac{1}{2}\right)} = &1-\frac{\sqrt{\pi}\operatorname{erf}\left(\sqrt{\frac{c^2}{2{\sigma^2}}}\right)}{\sqrt{\pi}} = \\ = \,
&\operatorname{erfc}\left(\frac{c}{\sqrt{2}\sigma}\right),
\end{align}
where $\gamma$ is the lower incomplete gamma function, $\Gamma$ the gamma function, we inserted $\frac{c^2}{\sigma^2}$ to account for the scale and $\operatorname{erf}, \operatorname{erfc}$ are the error function and complementary error function of the Gaussian distribution, respectively. We can now exploit the following pattern:
\begin{equation}
Q(k) = \frac{1}{2}\operatorname{erfc}\left(\frac{k}{\sqrt{2}}\right),
\end{equation}
so the term in Equation \eqref{erfc} can be written as $2Q\left(\frac{c}{\sigma}\right)$. Since the $Q$ function is invertible, we have that $c=\sigma Q^{-1}\left(\frac{x}{2}\right)$.
Similarly, for the alternative hypothesis, we have:
\begin{align}
&\mathsf{Q}_{M\frac{1}{2}}\left (\sqrt{\frac{\Delta^2}{\sigma^2}}, \sqrt{\frac{c^2}{\sigma^2}} \right) = \\ = \,
& \mathsf{Q}_{M\frac{1}{2}}\left (\frac{\Delta}{\sigma}, \frac{c}{\sigma} \right),
\end{align}
where $\mathsf{Q}_{M\frac{1}{2}}$ is the Marcum Q-function of order $\frac{1}{2}$, which is the survival function of the noncentral chi-squared distribution with one degree of freedom, and we have substituted $\Delta^2$ as the noncentrality to account for the mean under the alternative hypothesis and divided by $\sigma^2$ to account for the scaling. Substituting the expression for $c$ from above, we obtain:
\begin{align}
&\mathsf{Q}_{M\frac{1}{2}}\left (\frac{\Delta}{\sigma}, \frac{\sigma Q^{-1}\left(\frac{x}{2}\right)}{\sigma} \right) = \\ = \,
&\mathsf{Q}_{M\frac{1}{2}}\left (\frac{\Delta}{\sigma}, Q^{-1}\left(\frac{x}{2}\right) \right). \label{marcum}
\end{align}
The Marcum Q-function of order $\frac{1}{2}$ also admits a closed form:
\begin{equation}\label{marcum_2}
\mathsf{Q}_{M\frac{1}{2}}(a,b) = \frac{1}{2}\left(\operatorname{erfc}\left(\frac{b-a}{\sqrt{2}} \right) + \operatorname{erfc}\left(\frac{b+a}{\sqrt{2}} \right) \right).
\end{equation}
Using the identity $Q(k) = \frac{1}{2}\operatorname{erfc}\left(\frac{k}{\sqrt{2}}\right)$ once more, we rewrite Equation \eqref{marcum_2} as $\mathsf{Q}_{M\frac{1}{2}}(a,b) = Q(b-a) + Q(a+b)$. Finally, we substitute the arguments from Equation \eqref{marcum} and obtain:
\begin{equation}
Q\left(Q^{-1}\left(\frac{x}{2}\right) - \frac{\Delta}{\sigma}\right) + Q\left(Q^{-1}\left(\frac{x}{2}\right) + \frac{\Delta}{\sigma}\right),
\end{equation}
which completes the proof. The proof of Equation \eqref{eq:gauss_roc} can be found in \cite{dong2021gaussian} or \cite{Kaissis_Knolle_Jungmann_Ziller_Usynin_Rueckert_2022} and follows by standard properties of the Gaussian survival and cumulative distribution functions.
\end{proof}
Unfortunately, $R'(x)$ has no closed-form expression for any $d$, as the noncentral chi-squared distribution cannot --in general-- be inverted analytically. However, it is easy to invert numerically: the function is monotonic, routines to evaluate the inverse to high precision are available in all standard numerical software libraries, and we describe a differentiable implementation below. Observe also that the ROC curve (for a given $d$) depends only on the ratio of $\Delta$ to $\sigma$. This \say{signal-to-noise} (SNR)-type argument is also made by \cite{Kaissis_Knolle_Jungmann_Ziller_Usynin_Rueckert_2022} for the NPO adversary.
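As an illustration of such a numerical inversion, the following sketch evaluates $R'(x)$ via SciPy's quantile function for the noncentral chi-squared distribution (the parameterisation via \texttt{nc} and \texttt{scale} is our own mapping, and this is a plain numerical routine, not the differentiable implementation referred to above):

```python
from scipy.stats import chi2, ncx2

def glrt_roc_swapped(x, delta, sigma2, d):
    # R'(x): Phi^{-1} of the noncentral chi-squared has no closed form,
    # but scipy inverts the monotonic CDF numerically via ppf
    c2 = ncx2.ppf(x, df=d, nc=delta**2 / sigma2, scale=sigma2)
    return chi2.cdf(c2, df=d, scale=sigma2)
```

A useful sanity check is the reflection symmetry over the off-diagonal: if $(x, R(x))$ lies on $R$, then $R'(1 - R(x)) = 1 - x$.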
\subsection{Composition}
The results presented above bound the capability of the GLRT adversary in the setting of a single query release. We are now interested in extending the guarantee to the composition setting, where the adversary receives the results of $N$ queries. We stress that we adopt a worst-case outlook on composition, i.e. we assume that in the setting of $N$-fold composition, the adversary incorporates all knowledge gained from queries $1, \dots, N-1$ to improve their probability of success at the membership inference game. This interpretation is consistent with the DP threat model.
\begin{theorem}
Let $\mathcal{M}$ be a Gaussian mechanism on a function with sensitivity $\Delta$, noise variance $\sigma^2$ and output dimensionality $d$. Then, under $N$-fold homogeneous composition, the ROC curves are given by:
\begin{equation}
R(x)^{\otimes N} = \Psi_{\chi^2_d(\lambda_{\text{comp}}, \sigma^2_{\text{comp}})} \left( \Psi^{-1}_{\chi^2_d\left(0, \sigma^2_{\text{comp}} \right)}(x)\right)
\end{equation}
and
\begin{equation}
R'(x)^{\otimes N} = \Phi_{\chi^2_d\left(0, \sigma^2_{\text{comp}} \right)} \left(\Phi^{-1}_{\chi^2_d\left(\lambda_{\text{comp}}, \sigma^2_{\text{comp}} \right)}(x) \right).
\end{equation}
with
\begin{equation}
\lambda_{\text{comp}} = \frac{N\Delta^2}{\sigma^2} \;\; \text{and} \;\; \sigma^2_{\text{comp}} = \frac{\sigma^2}{N}.
\end{equation}
Here, $\otimes N$ denotes $N$-fold composition.
\end{theorem}
\begin{proof}
Since we allow the adversary to collect all $N$ samples before having to commit to one of the two hypotheses, they can exploit the isotropic property of Gaussian noise to \say{average out} the noise, so that the test statistic in Equation \eqref{test_statistic} becomes:
\begin{equation}
\left \Vert \frac{1}{N} \sum_{i=1}^N \boldsymbol{y}_i \right \Vert_2^2 \lessgtr c^2.
\end{equation}
This transforms the distributions of the hypotheses to:
\begin{align}
&\mathcal{H}_0: \boldsymbol{y} \sim \mathcal{N}\left(\boldsymbol{0}, \frac{\sigma^2}{N}\mathbf{I}\right) \;\; \text{vs.} \\ &\mathcal{H}_1: \boldsymbol{y} \sim \mathcal{N}\left(\boldsymbol{\nu}, \frac{\sigma^2}{N}\mathbf{I}\right).
\end{align}
The claim then follows by substituting $\sigma^2 \leftarrow \frac{\sigma^2}{N}$ in Equation \eqref{R}.
\end{proof}
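Theorem 2 amounts to a single substitution and can be sketched numerically as follows (the parameter values in the usage note are illustrative, and the SciPy parameterisation is our own mapping):

```python
from scipy.stats import chi2, ncx2

def glrt_roc_composed(x, delta, sigma2, d, N):
    # N-fold homogeneous composition collapses to a single GLRT problem with
    # averaged noise variance sigma^2 / N, hence noncentrality N*delta^2/sigma^2
    s2 = sigma2 / N
    c2 = chi2.isf(x, df=d, scale=s2)
    return ncx2.sf(c2, df=d, nc=delta**2 / s2, scale=s2)
```

For example, at $\Delta=1, \sigma^2=4, d=2$, each additional round raises the curve above the diagonal, i.e. strictly helps the adversary.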
This result is an effect of the fact that Gaussian mechanisms remain Gaussian under composition and that, because the draws are independent and identically distributed, their squared magnitudes are exactly chi-squared distributed. In the setting of heterogeneous query dimensionality, sensitivity or noise, either the ROC curve or the privacy profile (see below) can be composed numerically using the characteristic function representation of \cite{zhu2022optimal} (i.e. either the Fourier integral or the Fourier transform, respectively), including subsampling amplification. In the setting of high-dimensional queries composed homogeneously over many rounds, such as in deep learning applications, one can leverage the asymptotic results shown below.
\subsection{Query dimensionality} \label{sec:dimensionality}
Interestingly, in the GLRT setting, query dimensionality has an adverse effect on classification capability (contrary to the NPO setting): it is substantially harder to detect the presence of an individual in a high-dimensional query output than in a low-dimensional one (assuming the sensitivity and noise are identical). Formally, this can easily be verified by observing the monotonicity of the noncentral chi-squared cumulative distribution function/quantile function under an increase in degrees of freedom (at a fixed noncentrality).
Intuitively, the phenomenon can be understood by the effect of dimensionality on vector magnitude. In the setting of low query sensitivity, high noise and few observations (that is, the regime of interest in privacy), the noise in high-dimensional space dominates the signal originating from the spatial separation of the vectors \cite{Urkowitz}. This phenomenon is equivalent to the \say{curse of dimensionality} observed in other fields of statistics and machine learning. Thus, an adversary who is in control of the function or can influence the training process will want to minimise the number of free parameters to detect the change in the output induced by the presence of a single individual with the highest possible sensitivity. This is e.g. the strategy employed by \cite{nasr2021adversary} and is the reason we focus on $d=1$ as the worst-case ROC curve above. Examples for the effect of dimensionality can be found in Figures \ref{fig:dof} and \ref{fig:dimension_epsilon} and in the experimental section below.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{DOF.pdf}
\caption{Effect of query dimensionality on the ROC curve. At a fixed $\Delta = \sigma^2 = 1$, the adversary classifying a query of dimensionality $d=1$ (red curve, $(\varepsilon=3.11, \delta=10^{-4})$-DP) will have substantially greater success than when the query has dimensionality $d=30$ (blue curve, $(0.46, 10^{-4})$-DP).}
\label{fig:dof}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{epsilon_dimensions.pdf}
\caption{Query dimensionality plotted against the effective $\varepsilon$ value at $\delta=10^{-1}$ for a GLRT adversary. $\Delta=\sigma^2=1$, $d \in [1, 2, 5, 10, 20, 50, 100, 200, 300, 500]$. A marked decrease in effective $\varepsilon$ (conversion discussed in Section \ref{sec:eps_delta_conv}) is observed with increasing dimensionality.}
\label{fig:dimension_epsilon}
\end{figure}
\subsection{Asymptotics}
To formalise the aforementioned result and foreshadow the upcoming findings that the GLRT relaxation is especially beneficial when the query dimensionality is large, the query sensitivity is low and noise magnitudes are high, we now consider the asymptotic behaviour of the ROC curves in this setting. Our results harken back to the observations of \cite{dong2021gaussian}, who --stated informally-- find that \say{very private mechanisms composed over many rounds asymptotically converge to GDP}. In practice, this means that, when the ROC curve is \say{very close} to the diagonal, i.e. for mechanisms with strong privacy guarantees, the resulting ROC curve under many rounds of composition should approach some version of the NPO Gaussian mechanism ROC curve shown in Equation \eqref{eq:gauss_roc}. Continuing the \say{signal-to-noise} argument from above: when many samples of a low SNR process are observed, the central limit theorem (CLT) applies and the system can be described well in terms of Gaussian distributions. We have the following formal result:
\begin{theorem}
Let $\mathcal{M}$ be a Gaussian mechanism on a function with sensitivity $\Delta$, noise variance $\sigma^2$ and output dimensionality $d \gg 1$ such that $\frac{\Delta}{\sigma} \ll 1$. As $N \rightarrow \infty$, the ROC curves of $\mathcal{M}$ under the GLRT assumption converge to:
\begin{align}
&Q\left(\frac{1}{\sqrt{\frac{2 \Delta^{2} N}{d \sigma^{2}} + 1}}Q^{-1}(x) - \frac{\sqrt{2} \Delta^{2} N}{2 \sigma^{2} \sqrt{d} \sqrt{\frac{2 \Delta^{2} N}{d \sigma^{2}} + 1}} \right) \label{eq:clt_1}
\\
&\approx \; Q\left(Q^{-1}(x) - \frac{N\Delta^2}{\sigma^2}\sqrt{\frac{1}{2d}} \right). \label{eq:clt_2}
\end{align}
The similarity of Equation \eqref{eq:clt_2} to Equation \eqref{eq:gauss_roc} thus leads us to conclude that $\mathcal{M}$ converges to a $\mu$-GDP mechanism with:
\begin{equation}
\mu = \frac{N\Delta^2}{\sigma^2}\sqrt{\frac{1}{2d}} \label{eq:mu}
\end{equation}
\end{theorem}
\begin{proof}
We will use the facts \cite{johnson1995continuous} that the mean of the central chi-squared distribution with $d$ degrees of freedom is $d$ and its variance is $2d$. The noncentral chi-squared distribution with $d$ degrees of freedom and noncentrality $\lambda$ has mean $d+\lambda$ and variance $2d+4\lambda$. Under the CLT, we thus have convergence in distribution as follows. Letting $\frac{\sigma^2}{N} \coloneqq \beta$:
\begin{align}
&\chi^2_d(0, \beta) \rightarrow \mathcal{N}(\beta d, 2\beta^2d) \; \text{and} \\
&\chi^2_d(\lambda, \beta) \rightarrow \mathcal{N}(\beta(d+\lambda), \beta^2(2d+4\lambda)).
\end{align}
We can thus use Equation \eqref{eq:gauss_roc} to derive $P_f$ and $P_d$ similarly to Theorem 1. We have:
\begin{equation}
P_f = Q\left(\frac{c-\beta d}{\beta \sqrt{2d}} \right) \Rightarrow c = Q^{-1}(x) \beta \sqrt{2d} + \beta d
\end{equation}
and
\begin{equation}
P_d = Q\left(\frac{c - \beta(d+\lambda)}{\beta \sqrt{2d+4\lambda}} \right).
\end{equation}
For the ROC curve, we thus obtain:
\begin{align}
R(x) = &Q\left(\frac{Q^{-1}(x) \beta \sqrt{2d} + \beta d - \beta d - \beta \lambda}{\beta \sqrt{2d+4\lambda}} \right) = \\ = \,
&Q\left( \frac{Q^{-1}(x)\sqrt{2d}-\lambda}{\sqrt{2d+4\lambda}} \right) = \\ = \,
&Q\left( \frac{Q^{-1}(x)-\sqrt{\frac{d}{2}}\frac{\lambda}{d}}{\sqrt{1+\frac{2\lambda}{d}}} \right).\label{fourty_two}
\end{align}
Substituting $\lambda \leftarrow \frac{N\Delta^2}{\sigma^2}$ and separating the terms, we obtain:
\begin{equation}
Q\left(\frac{1}{\sqrt{\frac{2 \Delta^{2} N}{d \sigma^{2}} + 1}}Q^{-1}(x) - \frac{\sqrt{2} \Delta^{2} N}{2 \sigma^{2} \sqrt{d} \sqrt{\frac{2 \Delta^{2} N}{d \sigma^{2}} + 1}} \right),
\end{equation}
which is the desired result in Equation \eqref{eq:clt_1}. To obtain Equation \eqref{eq:clt_2}, we further massage Equation \eqref{fourty_two}. Concretely, we let $\frac{\lambda}{d} \coloneqq \psi$ and Taylor expand the equation around $\psi=0$ to obtain:
\begin{equation}
Q\left( Q^{-1}(x) - \psi \left( \sqrt{\frac{d}{2}} + Q^{-1}(x) \right) + \mathcal{O}(\psi^2) \right).
\end{equation}
When $d$ is large (and thus $\psi = \frac{\lambda}{d}$ is small), the $\mathcal{O}(\psi^2)$ term vanishes and $\sqrt{\frac{d}{2}}$ dominates the term in the parentheses, yielding:
\begin{equation}
Q\left( Q^{-1}(x) - \psi \sqrt{\frac{d}{2}} \right).
\end{equation}
Finally, substituting $\psi \leftarrow \frac{\lambda}{d}$ and $\lambda \leftarrow \frac{N\Delta^2}{\sigma^2}$, we get:
\begin{equation}
Q\left(Q^{-1}(x) - \frac{N\Delta^2}{\sigma^2}\sqrt{\frac{1}{2d}} \right),
\end{equation}
which is Equation \eqref{eq:clt_2}.
\end{proof}
Figure \ref{fig:clt} shows the accuracy of this result, which is in no small part due to the fact that both the central and the noncentral chi-squared distributions converge \textit{exactly} to Gaussian distributions when their degrees of freedom are high. Compared to Figure \ref{fig:roc_curves}, we also observe that the symmetrisation and concavification of $R$ and $R'$ are superfluous in this setting, as the curves become symmetric.
We remark that --although the CLT is a legitimate method to analyse composition-- the results are only valid asymptotically (i.e. in the infinite sample regime). A significantly faster (finite sample) convergence can be shown using the Berry-Esseen theorem and the Edgeworth approximation implemented in \cite{wang2022analytical}, which can be used for composition. Moreover, we see that $\mu$ in Equation \eqref{eq:mu} is quadratic in the SNR (i.e. in $\frac{\Delta}{\sigma}$) and inversely proportional to query dimensionality, which explains why the GLRT relaxation is especially powerful in the high-privacy and high-dimensional regime (such as deep learning).
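The convergence claimed in Theorem 3 can also be checked numerically. The sketch below uses the parameters of Figure \ref{fig:clt} and compares the exact composed curve against the $\mu$-GDP approximation of Equation \eqref{eq:clt_2}; the tolerance in the comment is an illustrative choice on our part, not a proven bound.

```python
import numpy as np
from scipy.stats import chi2, ncx2, norm

Delta, sigma2, d, N = 1.0, 100.0, 300, 1000  # parameters of Figure (fig:clt)
s2 = sigma2 / N  # averaged noise variance under N-fold composition

def roc_exact(x):
    # exact composed GLRT ROC (Theorem 2)
    c2 = chi2.isf(x, df=d, scale=s2)
    return ncx2.sf(c2, df=d, nc=Delta**2 / s2, scale=s2)

mu = (N * Delta**2 / sigma2) * np.sqrt(1.0 / (2 * d))  # Equation (eq:mu)

def roc_gdp(x):
    # ROC of a mu-GDP mechanism, Equation (eq:clt_2)
    return norm.sf(norm.isf(x) - mu)

xs = np.linspace(0.01, 0.99, 99)
max_gap = np.max(np.abs(roc_exact(xs) - roc_gdp(xs)))  # small in this regime
```

In this low-SNR, high-dimensional regime the two curves agree to within a few hundredths across the unit interval.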
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{CLT.pdf}
\caption{Convergence of $R$ (blue curve) and $R'$ (green curve) to the ROC curve of a $\mu$-GDP mechanism as shown in Equation \eqref{eq:clt_2}. In this case, $\Delta=1, \sigma^2=100, d=300$ and $N=1000$.}
\label{fig:clt}
\end{figure}
\subsection{Conversion to $(\varepsilon, \delta)$-DP}\label{sec:eps_delta_conv}
As seen in Figure \ref{fig:roc_curves}, the GLRT adversary has diminished membership inference capabilities compared to the NPO adversary. The GDP framework has the benefit of encapsulating \textit{all} $P_d$/$P_f$ in one curve, but practitioners are often interested in the $(\varepsilon, \delta)$-guarantee, which --for many-- is considered \say{canonical}. Deriving this guarantee directly in the GLRT setting is difficult, as the specification of the privacy loss random variable involves terms which are not analytically tractable. Instead, we aim to exploit the lossless conversion between the ROC curve and the privacy profile, that is, an infinite collection of $(\varepsilon, \delta(\varepsilon))$-guarantees \cite{balle2020privacy}, which can be realised through Legendre-Fenchel duality (i.e. the concave conjugate (dual) of the ROC curve (or convex conjugate of the trade-off curve)). Our strategy is as follows:
\begin{enumerate}
\item For a given $\Delta$ and $\sigma$, instantiate $R(x)$ and $R'(x)$;
\item Compute the concavification and symmetrisation of the curves $R_s(x)$ as in \cite{dong2021gaussian};
\item Compute the Legendre-Fenchel conjugate $R^{\ast}(x)$;
\item Perform a change of variables from $(x, y(x))$ to $(\varepsilon, \delta(\varepsilon))$.
\end{enumerate}
The construction of $R^{\ast}(x)$ has to be performed numerically and has a success guarantee as the curve is monotonic. However, the \say{interesting} parts of the curve are the extreme locations where $P_d$ or $P_f$ are very small/large and thus the slope is very steep or near zero. These correspond to the low values of $\delta$ required for real-life applications. Since we found the technique to obtain the conjugate used in the \textrm{autodp} software package by \cite{zhu2022optimal} to be slow and sometimes numerically unstable, we implement the conjugate using central-difference numerical differentiation at double floating-point precision. We programmatically transform a subroutine $p$, which computes the value of a probability function of interest (in our case, $\Psi_{\chi^2}$ and $\Phi_{\chi^2}$), into a new subroutine $p'$ which computes the value of the derivative at the same point. Similarly, we derive $\Psi^{-1}_{\chi^2}$ and $\Phi^{-1}_{\chi^2}$. Using the aforementioned technique, we propose a refined procedure to compute $R^{\ast}(x)$. Our method is shown in Algorithm \ref{alg:our_inversion}.
\begin{algorithm}[h]
\begin{algorithmic}
\Require Numerical ROC curve subroutines $R, R'$, symmetrisation and concavification subroutine \textbf{symm} described in \cite{dong2021gaussian}, numerical differentiation subroutine $\nabla$, root finding subroutine \textbf{root}, desired value of privacy parameter $\varepsilon$.
\Procedure{symm\_r}{$R, R'$}\Comment{Compute $R_s$ from $R, R'$}
\State $R_s \gets \textbf{symm}(R, R')$
\State \textbf{return} subroutine $R_s$
\EndProcedure
\Procedure{diff\_r}{$R_s$}\Comment{Compute $\mathcal{D}R_s$ from $R_s$}
\State $\mathcal{D}R_s \gets \nabla R_s$
\State \textbf{return} subroutine $\mathcal{D}R_s$
\EndProcedure
\Procedure{delta\_epsilon\_r}{$\varepsilon$}
\State $m \gets e^{\varepsilon}$
\State $x \gets$ \textbf{root} $\mathcal{D}R_s(x)=m$\Comment{Conjugate point: slope of $R_s$ equals $e^{\varepsilon}$}
\State $y \gets \textbf{call} \; R_s(x)$
\State $b \gets \textbf{call} \; \mathcal{D}R_s(x)$
\State $\delta \gets -x*b+y$
\State \textbf{return} $\delta$
\EndProcedure
\end{algorithmic}
\caption{Legendre-Fenchel conjugate computation using derivatives.}
\label{alg:our_inversion}
\end{algorithm}
The resulting privacy profile for the GLRT and NPO adversaries is exemplified in Figure \ref{fig:priv_profile}. Expectedly, the GLRT relaxation leads to a considerably lower value of $\delta$ for a given $\varepsilon$, especially in the high privacy regime $\varepsilon < 2$.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{priv_profile.pdf}
\caption{Privacy profile curves for the NPO (red curve) and the GLRT (blue curve) adversaries. $\Delta = \sigma^2 = 1$. A substantial reduction is observed in $\delta$, especially in the high privacy regime. The NPO privacy profile is computed as described in \cite{balle2020privacy}.}
\label{fig:priv_profile}
\end{figure}
\subsection{Subsampling amplification}
DP applications can profit from the \textit{privacy amplification by sub-sampling} property, which states that when secret subsamples are drawn from the database, the non-inclusion of a proportion of the database in the query results proportionally amplifies their privacy guarantees. This property also holds for the GLRT relaxation. Specifically, we analysed the privacy amplification of add/remove Poisson sampling as discussed in \cite{balle2020privacy}, which states that, if a mechanism is $(\varepsilon, \delta)$-DP, the subsampled version of the mechanism with sampling probability $\gamma$ satisfies $(\log(1+\gamma(e^{\varepsilon}-1)), \gamma \delta)$-DP. To compute the guarantee in practice, we thus instantiate $R(x)$ and $R'(x)$, compute $R_s(x)$, then use duality to convert to $R^{\ast}(x)$ and finally amplify the resulting privacy profile. Since the privacy profile encapsulates \textit{all} $(\varepsilon, \delta(\varepsilon))$-pairs, it can be composed losslessly (including the subsampling analysis) by taking the Fourier transform, composing the characteristic functions and then taking the inverse transform of the result as described in \cite{zhu2022optimal}. Figure \ref{fig:subsampling} exemplifies the amplification of the GLRT relaxation's privacy profile in comparison to the sub-sampled version of the NPO privacy profile.
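The amplification rule itself is a one-line transformation of an $(\varepsilon, \delta)$ pair; a minimal sketch (our naming) is:

```python
import math

def amplify_by_subsampling(eps: float, delta: float, gamma: float):
    """Poisson-subsampling amplification (add/remove adjacency): an
    (eps, delta)-DP mechanism evaluated on a gamma-fraction subsample
    satisfies (log(1 + gamma*(e^eps - 1)), gamma*delta)-DP."""
    eps_amp = math.log1p(gamma * math.expm1(eps))  # log(1 + gamma*(e^eps - 1))
    return eps_amp, gamma * delta
```

Applying the rule pointwise to each $(\varepsilon, \delta(\varepsilon))$ pair amplifies the entire privacy profile.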
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{subsampling.pdf}
\caption{Subsampling amplification of the GLRT privacy profile at $\Delta=1, \sigma^2=36, \gamma=0.2$ computed over $N=100$ compositions, showing an improvement in $\delta$-to-$\varepsilon$ compared to the NPO privacy profile with the same parameters.}
\label{fig:subsampling}
\end{figure}
\section{Numerical Experiments}
We conclude our investigation with a set of numerical experiments to demonstrate the tightness of our results in practice.
\subsection{Worst-case adversarial performance}
We begin by instantiating a worst-case (in terms of privacy) GLRT membership inference game. The adversary is faced with a pair of adjacent datasets containing at most two individuals and can control the query function so that its output on one of the individuals is $0$. Moreover, the query function is scalar-valued and has known global $L_2$-sensitivity $\Delta$. In the experiment below, $\Delta=1$. The noise added is of known magnitude $\sigma^2=36$. The adversary interacts with the system for $N=70$ composition rounds. We only require that the adversary classifies the output of the final composition round correctly, and they may use all information from the previous rounds. The adversarial task is thus:
\noindent
\textit{Classify an output $y=f(X)$ as coming from $\mathcal{N}(0, \sigma^2)$ or from $\mathcal{N}(\nu, \sigma^2)$ where $X$ is unknown and one of $D \coloneqq \lbrace A \rbrace$ or $D' \coloneqq \lbrace A,B \rbrace$ given that $\vert \nu \vert = \Delta$.} \newline
\noindent
In this case, the adversary proceeds as follows:
\begin{enumerate}
\item Assume the distribution centred on $0$ is the \say{negative} class ($\mathcal{H}_0$) and the one at distance $\Delta$ is the positive class ($\mathcal{H}_1$).
\item Collect $y_1, \cdots, y_N$ observations under composition.
\item Compute $T(y) = \left \vert \frac{1}{N}\sum_{i=1}^Ny_i \right \vert^2$.
\item Set a threshold $c$ to match a desired power and significance level (or $P_d$/$P_f$). If $T(y) < c^2$, assign to the negative class (fail to reject the null), else to the positive class (reject the null).
\item Repeat the process, but consider $\Delta$ the negative class and $0$ the positive one.
\end{enumerate}
For our experiment, we flip a fair coin and sample either $y \sim \mathcal{N}(0, \sigma^2)$ or $y \sim \mathcal{N}(\Delta, \sigma^2)$ to reflect an indifferent prior. The process is repeated $1000$ times and at $100$ values of $c^2$. At each grid point, we measure the true state of the system and the adversary's decision and use them to compute the empirical ROC curves, which we compare to the expected curves from Theorem 1. Figure \ref{fig:predicted_actual} shows the empirical ROC curves in comparison with their theoretical counterparts whereas Figure \ref{fig:actual_symmetric_gaussian} shows them in comparison to the symmetrified ROC and the NPO ROC curve. The empirical ROC curves match the expected ones very well, indicating that our bounds are tight and that the GLRT relaxation indeed affords higher privacy than the NPO assumption.
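The decision procedure above can be sketched as a self-contained Monte Carlo simulation (our code; the trial count, seed, and single threshold are illustrative rather than the paper's full grid sweep):

```python
import random

DELTA, SIGMA, N_ROUNDS, TRIALS = 1.0, 6.0, 70, 2000  # sigma^2 = 36

def glrt_statistic(mean: float, rng: random.Random) -> float:
    """T(y) = |average of N_ROUNDS noisy outputs|^2 (step 3 above)."""
    ys = [rng.gauss(mean, SIGMA) for _ in range(N_ROUNDS)]
    return (sum(ys) / N_ROUNDS) ** 2

def empirical_rates(c2: float, seed: int = 0):
    """Empirical (P_f, P_d) at squared threshold c2: reject the null
    whenever T(y) exceeds c2."""
    rng = random.Random(seed)
    p_f = sum(glrt_statistic(0.0, rng) > c2 for _ in range(TRIALS)) / TRIALS
    p_d = sum(glrt_statistic(DELTA, rng) > c2 for _ in range(TRIALS)) / TRIALS
    return p_f, p_d
```

Sweeping the threshold over a grid of values traces out the empirical ROC curve.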
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{predicted_actual.pdf}
\caption{Empirical (purple and orange) and theoretical (blue and green) ROC curves for the worst-case adversary. $\Delta=1, \sigma^2=36, N=70$ composition rounds. The empirical curves match the theoretical ones. Average of $1000$ experiments and $100$ cutoff values. $(\varepsilon, \delta) = (2.94, 10^{-2})$.}
\label{fig:predicted_actual}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{actual_symmetrified_gaussian.pdf}
\caption{The same empirical (purple and orange) ROC curves for the experiment in Figure \ref{fig:predicted_actual}, but with the symmetrified theoretical curve (dashed black) and the NPO Gaussian mechanism curves (red) shown for comparison. The GLRT setting affords more empirical and theoretical privacy than the NPO regime. $(\varepsilon, \delta) = (2.94, 10^{-2})$ for the GLRT setting and $(3.63, 10^{-2})$ for the NPO setting. }
\label{fig:actual_symmetric_gaussian}
\end{figure}
\subsection{Empirical performance in the high-dimensional query case}
To evaluate the theoretical claim that high-dimensional queries afford stronger privacy, we repeated the aforementioned experimental setting with the following modifications. Instead of a scalar query, the adversary now receives a $50$-dimensional vector with \textit{no capability to reduce the effective dimensionality}, e.g. through auxiliary information. We assume $N=50$ composition rounds and $\Delta=1, \sigma^2=12.25$. As shown in Figure \ref{fig:high_dimensional}, the classification problem is indeed considerably harder for the adversary in this setting compared to the scalar problem above.
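The dimensionality effect can be reproduced with a small self-contained simulation (our code and naming; the statistic is the squared norm of the averaged query vector, and the sample sizes are kept modest for speed):

```python
import random

SIGMA2, N_ROUNDS, TRIALS = 12.25, 50, 1000
SIGMA = SIGMA2 ** 0.5

def glrt_statistic(d: int, signal: float, rng: random.Random) -> float:
    """Squared norm of the coordinate-wise average of N_ROUNDS noisy
    d-dimensional outputs; the signal (norm Delta = 1) is placed in the
    first coordinate without loss of generality."""
    total = 0.0
    for coord in range(d):
        mean = signal if coord == 0 else 0.0
        avg = sum(rng.gauss(mean, SIGMA) for _ in range(N_ROUNDS)) / N_ROUNDS
        total += avg * avg
    return total

def power_at_fixed_pf(d: int, p_f: float = 0.05, seed: int = 1) -> float:
    """Empirical P_d at the threshold matching a target P_f, with the
    threshold taken as an empirical quantile of the null statistic."""
    rng = random.Random(seed)
    null = sorted(glrt_statistic(d, 0.0, rng) for _ in range(TRIALS))
    c = null[int((1.0 - p_f) * TRIALS)]  # empirical (1 - p_f) quantile
    hits = sum(glrt_statistic(d, 1.0, rng) > c for _ in range(TRIALS))
    return hits / TRIALS
```

At a fixed false-positive rate, the detection power for $d=50$ comes out far below the scalar case, in line with the figure.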
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{high_dimensional.pdf}
\caption{Empirical ROC curves (purple and orange) for the high-dimensional query experiment. Compared to the expected ROC curve for the same dimensionality (dashed black), the empirical result matches exactly. However, compared to the scalar case in the same setting (continuous black), the hypothesis test is considerably harder. The NPO adversary (red curve) has the easiest task. $(\varepsilon, \delta) = (0.76, 10^{-2})$ for $d=50$, $(5.39, 10^{-2})$ for $d=1$ and $(6.01, 10^{-2})$ for the NPO setting.}
\label{fig:high_dimensional}
\end{figure}
\section{Conclusion}
The broad application of differential privacy will require our community to address legitimate stakeholder concerns about the privacy-utility trade-offs of privacy-preserving systems. Several prior works have empirically noted that, under realistic conditions, the guarantees of such systems are stronger than the worst case assumes. However, so far, there is a lack of investigations into \textit{formal} (rather than empirical) relaxations of the threat model and into the provision of tight guarantees in the spirit of DP. We find that the mild, and in many cases realistic, relaxation from an NPO to a GLRT adversary yields substantially amplified privacy bounds for the Gaussian mechanism. This is especially true for strongly private mechanisms with low sensitivity and high noise magnitudes.
Our work should not be misconstrued as an attempt to undermine the DP guarantee. We explicitly request that our guarantees be given \textit{alongside} the worst-case guarantees and not in isolation, so as not to mislead stakeholders but instead to inform them more comprehensively about the privacy properties of sensitive data-processing systems.
\printbibliography
\end{document}
\section{Introduction}
The goal of the Daya Bay reactor neutrino experiment is to make a measurement of the neutrino mixing angle $\sin^2 2\theta_{13}$ to a precision of 0.01 at 90\% confidence within three years of running \cite{Guo:2007ug}. The experiment has already released its first measurements of $\sin^2 2\theta_{13}$ \cite{DYBprl,DYBcpc}, with improved results to follow. Daya Bay is one of a new generation of reactor antineutrino disappearance experiments with near and far detector pairs located at kilometer-scale baselines from large nuclear power complexes. The experiment consists of eight antineutrino detectors (ADs) installed or under construction at three locations near the six-reactor Guangdong Nuclear Power Plant complex, located near Hong Kong and Shenzhen, China. The site layout is shown in Figure~\ref{fig:DYBsite}. An accurate measurement of $\theta_{13}$ is of great importance to the particle physics community. Its value constrains many models of electroweak-sector physics and, prior to the first results from Daya Bay, had not been measured with high precision. Daya Bay's first experimental results \cite{DYBprl, DYBcpc, DYBad12} represent major progress toward the experiment's long-term goals and are a significant milestone in neutrino physics.
\begin{figure}[b]\hfil
\includegraphics[height=57mm]{figures/DYBSiteLayout.pdf}\hfil%
\caption{Site layout of the Daya Bay Reactor Neutrino Experiment. Six reactor cores are shown, along with eight locations for antineutrino detectors.}\label{fig:DYBsite}
\end{figure}
Like many other neutrino experiments, the Daya Bay antineutrino detectors (ADs) rely on the inverse beta-decay (IBD) reaction to detect antineutrinos:
\[\bar{\nu}_e + p \rightarrow e^+ + n.\]
The basic IBD reaction has an energy threshold of $1.806\unit{MeV}$ \cite{vogelIBD}; detectors using the IBD reaction are not sensitive to antineutrinos with energies below this threshold. The IBD cross-section is about $10^{-42}\unit{cm^2}$ at $E_{\bar\nu}=5\unit{MeV}$ \cite{vogelIBD}, necessitating a very sensitive detector. IBD on free protons is an especially desirable reaction for detector use as its signature is both clean and distinctive. The positron released during the event annihilates promptly with an electron in the detector, producing annihilation radiation; this is the ``prompt signal'' of an IBD event. The released neutron can also be detected if the detector contains additional target nuclei that have a high neutron capture cross-section and emit a clean energy signal on neutron capture. Natural gadolinium, containing \nuclide{Gd}{157} and \nuclide{Gd}{155}, is used for this purpose in the Daya Bay detectors. Both nuclei have very large neutron capture cross-sections and clean neutron capture signatures. The photons released when the neutron captures on Gd in the detector form the ``delayed signal'' of an IBD event. This combination of prompt and delayed signals is very distinctive and easy to distinguish from background, making a practically-sized antineutrino detector possible to build.
This paper discusses the instrumentation located within the Daya Bay antineutrino detectors. The target mass monitoring systems, discussed in section~\ref{sec:levelinstrumentation}, track changes in the target mass of the ADs in situ and in real time during physics data collection. An accurate determination of the target mass is critical to the analysis of data from an antineutrino detector, as the expected IBD event rate is proportional to the number of target protons in the detector. Additional instrumentation, discussed in section~\ref{sec:additionalinstrumentation}, monitors the general health and stability of the detectors.
\section{Antineutrino detector design}
\begin{figure}[t]\hfil\subfloat[A cross-section of a filled detector, taken through the mineral oil overflow tanks.]{
\includegraphics[width=74mm]{figures/FullAdOverviewFigure.pdf}}\hfill
\subfloat[Close-in section of the detector and overflow tanks, taken through the calibration ports.]{\includegraphics[width=74mm]{figures/OverflowOverviewFigure.pdf}}\hfil%
\caption{Overview drawings of a Daya Bay antineutrino detector, showing the nested liquid volumes and locations of instrumentation and key reference points. The stainless steel vessel of each AD is $5\unit{meters}$ in diameter and $5\unit{meters}$ tall, the outer acrylic vessel is $4\unit{m}$ tall and $4\unit{m}$ in diameter, and the inner acrylic vessel is $3\unit{m}$ tall and $3\unit{m}$ in diameter. Colors distinguish the different liquid regions: gadolinium-doped liquid scintillator (GdLS) is shown in green, plain liquid scintillator (LS) in red, and mineral oil (MO) in blue.}\label{fig:ADoverview}
\end{figure}
The Daya Bay antineutrino detector (AD) design shown in Figure~\ref{fig:ADoverview} has been optimized for detecting antineutrinos via inverse beta decay (IBD) events. It consists of three nested volumes, with two acrylic vessels containing the inner liquids. The innermost volume contains approximately 20 tons of gadolinium-doped organic liquid scintillator (``GdLS''), serving as the IBD target for antineutrino capture. The liquid scintillator is primarily linear alkyl benzene (LAB). Hydrogen atoms in the LAB are the primary targets for antineutrino capture. The middle region contains about $20\unit{tons}$ of unloaded scintillator (the ``LS volume'') acting as a gamma catcher, ensuring that high-energy photons produced in the target volume are reliably converted to scintillation light. The outer volume, filled with about $36\unit{tons}$ of inert mineral oil (MO), provides a buffer between the photomultiplier tubes (PMTs) and the scintillator, improving PMT light collection from the target region and shielding the scintillating liquids from the radioactivity of the PMT materials. Above the main detector volumes are overflow tanks, shown in Figure~\ref{fig:OFcutaway}, which provide space for thermal expansion of the detector liquids. More information on the detector design can be found in \cite{DYBad12}, on the acrylic vessels in \cite{2012arXiv1202.2000B}, and on the scintillator properties and production in~\cite{Yeh2007329,Ding2008238}.
The detector design makes predicting the expected event rate from the antineutrino flux straightforward. The three distinct volumes, physically separated with transparent acrylic walls, eliminate the need for fiducial volume cuts, reducing cut uncertainty and maximizing detection efficiency. The target volume, doped with gadolinium, is the only region that contributes meaningfully to the measured IBD event rate \cite{Guo:2007ug}, and so the number of IBD targets in the target volume directly determines the predicted event rate.\footnote{Geometrical spill-in and spill-out effects, from IBD events occurring at the edge of the GdLS region, are accounted for accurately using Monte Carlo simulations~\cite[\textsection5.5]{DYBad12}. They can affect the event rate by up to 5\%~\cite{DYBcpc}.} Only free protons (hydrogen atoms) are useful IBD targets: while antineutrinos do initiate IBD reactions on protons bound in nuclei, the produced neutron generally is still bound and cannot be easily detected, and the positron and photon signals are difficult to distinguish from accidental backgrounds. The free proton count in the target is directly proportional to the mass of liquid. Chemical analysis of the target liquid measures the carbon-to-hydrogen ratio, which is used together with the target mass to determine the total number of target protons in the detector.
\subsection{Overflow tanks}
\begin{figure}[tb]
\centering
\includegraphics[clip=true, trim=3.125in 4.125in 3.125in 1.875in, width=\textwidth]{figures/Liquid_Levels_section.pdf}
\caption{A close-up view of the central and mineral oil overflow tanks, showing the two nested central overflow tanks and the liquid communication between the overflow tanks and the main detector volume. Mineral oil enters its overflow tanks through holes in the vessel lid. GdLS enters the inner central overflow tank through a bellows that passes through the middle of the central calibration port connecting the inner volumes to the central overflow tank. LS occupies the exterior portion of the calibration port tube, connecting to the outer central overflow tank.}\label{fig:OFcutaway}
\end{figure}
The Daya Bay ADs are subjected to variations in temperature and external pressure during their construction and deployment. A completely sealed detector would be unable to tolerate any changes in external conditions; as the detector liquids expanded or contracted due to thermal changes, the resulting pressure differentials and stresses in the acrylic vessels could damage or destroy the detector. To avoid this problem, each detector volume has one or more overflow tanks located on top of the detector, as shown in Figures~\ref{fig:ADoverview},~\ref{fig:OFcutaway}, and~\ref{fig:ADlidlayout}. Each overflow tank is directly connected with the main volume for that liquid type, as shown in Figure~\ref{fig:OFcutaway}; during detector filling the liquids are filled until the overflow tanks are partly full~\cite{fillingpaper}. The excess fluid capacity provided by the overflow tanks ensures that the main detector volumes are always completely full of liquid.
The central overflow tank contains two separate, nested acrylic tanks. The inner region, the GdLS overflow tank, is $1285\unit{mm}$ in diameter and $155\unit{mm}$ deep. It is surrounded by the LS overflow tank, $1790\unit{mm}$ in diameter and $145\unit{mm}$ deep. These dimensions give maximum capacities of approximately $200\unit{liters}$ of GdLS and $165\unit{liters}$ of LS.
The mineral oil overflow tanks are each $1150\unit{mm}$ in diameter and $250\unit{mm}$ tall, giving a capacity of $260\unit{liters}$ of MO per tank or $520\unit{liters}$ per detector.
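As a rough cross-check of the quoted capacities from the stated dimensions (our sketch; wall thicknesses are ignored, which is why the LS annulus comes out somewhat above the quoted $165\unit{liters}$):

```python
import math

def cylinder_litres(diameter_mm: float, depth_mm: float) -> float:
    """Volume of a cylinder in litres (1 L = 1e6 mm^3)."""
    return math.pi * (diameter_mm / 2.0) ** 2 * depth_mm / 1.0e6

gdls_capacity = cylinder_litres(1285.0, 155.0)  # inner central tank, ~200 L
ls_capacity = cylinder_litres(1790.0, 145.0) - cylinder_litres(1285.0, 145.0)
mo_capacity = cylinder_litres(1150.0, 250.0)    # one mineral oil tank, ~260 L
```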
A cover gas system supplies a continuous flow of dry nitrogen gas to the empty volume of each overflow tank, maintaining a stable, oxygen-free environment for the liquid scintillator \cite{covergaspaper}.
The range of temperatures that the overflow tanks can buffer against is limited. During detector filling, the overflow volumes are filled to approximately one third of maximum capacity, giving an operational temperature range of $23^{+4}_{-2}\,\ensuremath{\deg\mathrm{C}}$ for the filled detectors~\cite{fillingpaper}{}\footnote{Based on the thermal expansion coefficient of GdLS, measured as $\Delta V/V = (9.02\pm0.16)\times10^{-4}{\unit{K}}^{-1}$ at $19\ensuremath{\deg\mathrm{C}}$ \cite{minfangemail}.}. The typical operating temperatures of 22.5--23.0\ensuremath{\deg\mathrm{C}}{} have been well within this range\footnote{Temperature variations between the different water pools are larger than the variations of a single pool; the temperature of each pool is typically controlled to $\pm0.2\ensuremath{\deg\mathrm{C}}$ or better.} (see also Figure~\ref{fig:monitoringdata}). The instrumentation described in this paper continuously monitors the levels and temperatures of the fluids in all overflow tanks, ensuring that conditions remain within the operational range of the detectors at all times.
\section{Stability and monitoring requirements}\label{sec:requirements}
The design goals of the Daya Bay experiment \cite[\textsection3.2]{Guo:2007ug} specify a baseline target mass uncertainty of 0.2\% ($2000 \unit{ppm}$) and goal target mass uncertainty of 0.02\% ($200 \unit{ppm}$) on a nominal target mass of $\text{20,000}\unit{kg}$. This requirement corresponds to a baseline uncertainty of $40\unit{kg}$ in the nominal target mass and goal uncertainty of $4\unit{kg}$. The detector filling system~\cite{fillingpaper}{} was designed to surpass the goal uncertainty, and the target mass monitoring system described in this paper must be able to match this accuracy goal in order to track longer-term detector changes. Given the central overflow tank diameter of $1285\unit{mm}$ and GdLS density of approximately $860\unit{g/L}$, a $4\unit{kg}$ uncertainty on overflow volume mass corresponds to about a $3.5\unit{mm}$ uncertainty on the overflow tank liquid level. Sensors were chosen with specified uncertainties significantly smaller than this level, minimizing the contribution to the total target mass uncertainty from the overflow tank monitoring.
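The conversion from mass uncertainty to overflow level uncertainty quoted above can be checked directly (our back-of-envelope sketch, using the nominal values in the text):

```python
import math

TANK_DIAMETER_MM = 1285.0      # central GdLS overflow tank diameter
GDLS_DENSITY_KG_PER_L = 0.860  # approximately 860 g/L
MASS_UNCERTAINTY_KG = 4.0      # goal target-mass uncertainty

area_mm2 = math.pi * (TANK_DIAMETER_MM / 2.0) ** 2
volume_mm3 = MASS_UNCERTAINTY_KG / GDLS_DENSITY_KG_PER_L * 1.0e6  # 1 L = 1e6 mm^3
level_uncertainty_mm = volume_mm3 / area_mm2  # comes out near 3.5 mm
```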
In practice, after sensor calibrations and accounting for overflow tank geometry uncertainties and overall detector tilt uncertainties, we achieved a total overflow tank mass uncertainty of $2.2\unit{kg}$, corresponding to 0.011\% of the nominal target mass ($110\unit{ppm}$).
\section{Liquid level monitoring instrumentation}\label{sec:levelinstrumentation}
Three separate systems monitor the liquid height in the detector overflow tanks, as pictured in Figure~\ref{fig:ADlidlayout}. The two central overflow tanks are each instrumented with an ultrasonic liquid level sensor and a capacitive liquid level sensor. In addition, cameras in the off-center calibration ports monitor the GdLS and LS liquid levels. One of the two mineral oil overflow tanks contains a capacitive liquid level sensor. The GdLS and LS overflow tanks each have two independent sensors measuring the current liquid level to provide redundancy in the event of a sensor failure. The mineral oil level is less critical and is only monitored by a single sensor. The monitoring cameras provide a cross-check against potential long-term drift in the GdLS and LS sensors.
\begin{figure}[!p]%
\def\subfigheight{41mm}%
\hfil%
\subfloat[Top view of the central overflow tanks, showing the mounting locations of the sensors and the support structure of the tanks. The tank's outer diameter is $1.8\unit{m}$.]{\label{fig:ADlidlayoutdiagram}\includegraphics[width=74mm]{figures/OFTankTopView.pdf}}\hfill
\subfloat[A central overflow tank lid shown at the end of assembly, with all instrumentation installed. (Note that the detector $0\ensuremath{^\circ}$ axis faces the camera in this image.)]{\label{fig:centralOFtankpicture}\includegraphics[width=74mm]{figures/AD6_Overflow_Sensors_and_GasIMG_1247.jpg}}\hfil%
\\\hfil%
{\captionsetup{justification=raggedright}%
\subfloat[Ultrasonic liquid level sensor.]{\includegraphics[height=\subfigheight]{figures/ultrasonic_mounted.jpg}}%
\hfill\subfloat[Temperature sensor.]{\includegraphics[height=\subfigheight,clip=true, trim=20in 10in 17in 0in]{figures/IMG_1676.jpg}}%
\hfill\subfloat[PTFE capacitive liquid level sensor.]{\includegraphics[height=\subfigheight,clip=true, trim=16in 4in 15in 3in]{figures/central_cap_mounted.jpg}}}%
\hfill\subfloat[Two-axis inclinometer.]{\includegraphics[height=\subfigheight, clip=true, trim=18in 6in 10in 6in]{figures/inclinometer_mounted.jpg}\hfil%
}\\%
\subfloat[A fully instrumented mineral oil overflow tank. The tank's outer diameter is $1.2\unit{m}$.]{\label{fig:MOoverflowtank}\includegraphics[width=\textwidth]{figures/mo_overflow_tank_instrumented.pdf}}%
\caption{Instrumentation and layout of the overflow tanks. Top row: layout of the central overflow tank, containing GdLS and LS. Center row: central overflow tank sensors. Bottom: mineral oil overflow tank.}\label{fig:ADlidlayout}
\end{figure}
\subsection{Ultrasonic level sensors}
\begin{figure}[tb]\centering
\includegraphics[height=65mm,clip=true,trim=0 6in 0 6in]{figures/ultrasonic_mounted.jpg}
\caption{An ultrasonic distance sensor shown mounted with rear light baffle on an overflow tank during detector assembly. The acrylic reflector is visible at left. (See also Figure~\protect\ref{fig:ultracalibrationstand}.)}\label{fig:ultrasonic}
\end{figure}
The ultrasonic liquid level sensors, which measure the distance from the sensor to the target by timing the echo of an ultrasonic pulse, are commercial Senix Corporation ToughSonic TSPC--30S1 distance sensors \cite{senixcorp}. They are non-contact sensors, which is ideal for minimizing disruptions to the scintillator and alleviating materials compatibility concerns. As the fluid level in the overflow tank rises, the distance measured by the ultrasonic sensor decreases linearly, proportional to the increase in fluid height. Two sensors are located in each central overflow tank, one in the GdLS volume and one in the LS volume. As shown in Figure~\ref{fig:ultrasonic}, they are mounted horizontally on the top of the overflow tank lid, with the ultrasound beam directed downwards by a flat acrylic reflector located in the mount. This arrangement was necessitated by space limitations; we found that the addition of the reflector caused no discernible change in the performance of the sensor. Figure~\ref{fig:ultrasonic} also shows a light baffle on the back of the sensor, needed to block light from the sensor's rear status LEDs from reaching the detector PMTs.
As described in section~\ref{sec:ultrasoniccalibration}, each ultrasonic sensor was individually calibrated. After calibration, the ultrasonic sensors are the most accurate liquid level sensors, and are our primary liquid height reference.
\subsection{Capacitive level sensors}
The three capacitive liquid level sensors, provided by Gill Sensors \cite{gillsensorsco}, are customized versions of automotive fuel tank sensors. They consist of two concentric cylinders mounted vertically in the liquid tanks as shown in Figures~\ref{fig:ADoverview} and~\ref{fig:capsensors}. The volume between the cylinders is open to the liquid, and as the tank level rises, this volume fills with liquid. The dielectric properties of the liquid cause a change in the capacitance between the two cylinders, which is read out by instrumentation in the top of the sensor. The sensor outputs a value in counts, representing the fraction of the sensor that is immersed in liquid, with 0 as the minimum value. Nominally a 10-bit ADC is used, giving a resolution of 1024 counts over the full range of the sensor. In practice, we observed sensor readings of 1050 counts or more, suggesting a more complicated signal processing scheme inside the sensor. Each sensor was individually calibrated as described in section~\ref{sec:capcalibration}.
There are two types of capacitive sensors in each detector. One mineral oil overflow tank per detector contains a single commercially available stainless steel R-Series sensor, shown in Figure~\ref{fig:MOcapsensor}. Its only customization is a factory calibration to the dielectric properties of mineral oil. The R-Series sensors are $16\unit{mm}$ in diameter and $235\unit{mm}$ long; $220\unit{mm}$ of their length is sensitive to liquid level changes. Stainless steel is not suitable for contact with gadolinium-doped scintillator, so the GdLS and LS volumes in the central overflow tank use custom M-Series sensors from Gill, shown in Figure~\ref{fig:tefloncapsensor}. The only wetted material in these sensors is chemically inert polytetrafluoroethylene (PTFE). Ordinary white PTFE makes up the head of the sensor, enclosing the readout electronics, while conductive carbon-impregnated black PTFE makes up the capacitive body of the sensor. Both varieties of PTFE are chemically compatible with the Daya Bay liquid scintillator over long periods of time. The M-series sensors are $16\unit{mm}$ in diameter and $153\unit{mm}$ long (active region $144\unit{mm}$). They were provided with a factory calibration for linear alkyl benzene.
The factory calibrations of all sensors were supplemented with a laboratory calibration, described in section~\ref{sec:capcalibration}. In practice, the capacitive sensors have coarser resolution and are not as reproducible as the ultrasonic sensors, so they are used as secondary sensors in the event that an ultrasonic sensor fails. They also have the advantage of being insensitive to the gas composition in the overflow volumes, since they do not depend on the speed of sound as the ultrasonic sensors do. Along with the off-center cameras, they provide a cross-check on the ultrasonic sensor data, ensuring the ultrasonic sensor response is not changing over time.
\begin{figure}[t]\hfil
\subfloat[A stainless steel capacitive liquid level sensor, located in a mineral oil overflow tank.]{\label{fig:MOcapsensor}%
\includegraphics[clip=true, trim=4.25in 1in 0.8125in 2in, height=70mm]{figures/mo_cap_mounted.jpg}%
}\hfill
\subfloat[A PTFE capacitive liquid level sensor, located in the central overflow tank.]{\label{fig:tefloncapsensor}%
\includegraphics[clip=true, trim=6in 0in 6in 0in, height=70mm]{figures/central_cap_mounted.jpg}}\hfil%
\caption{The two styles of capacitive liquid level sensor, shown mounted during detector assembly.}\label{fig:capsensors}
\end{figure}
\subsection{Off-center cameras}
\begin{figure}[tb]\hfil%
\subfloat[Model view of the off-center camera system.]{\label{fig:DGUTcameraschematic}\includegraphics[width=70mm]{figures/DGUT_camera_system_model.pdf}}\hfil%
\subfloat[An assembled off-center camera system in AD8.]{\label{fig:DGUTcameraphoto}\includegraphics[width=70mm]{figures/DGUT_camera_system_labeled.pdf}}\hfil%
\caption{Images of the off-center camera systems. There are two cameras in each detector to monitor the GdLS and LS liquid levels at the off-center calibration ports. See also Figures~\protect\ref{fig:ADoverview} and~\protect\ref{fig:DGUTcameraimages}.}\label{fig:DGUTcamerasystem}
\end{figure}
\begin{figure}[tb]\hfil%
\subfloat[Off-center camera image of the GdLS level in AD3. Note that the GdLS calibration tube is straight.]{\label{fig:GdLSimage}\includegraphics[width=70mm]{figures/AD3_GDLS_11_11_11_23_19_46.jpg}}\hfil%
\subfloat[Off-center camera image of the LS level in AD3. Note that the LS calibration tube has beveled sides.]{\label{fig:LSimage}\includegraphics[width=70mm]{figures/AD3_LS_11_11_11_23_19_48.jpg}}\hfil%
\caption{Sample images taken by the off-center camera system.}\label{fig:DGUTcameraimages}
\end{figure}
There are two camera systems located on the detector lid to monitor the GdLS and LS liquid levels.
As shown in Figure~\ref{fig:ADoverview}, they are located in the off-axis calibration ports of the detector, and view the liquid level in the calibration tubes.
Figure~\ref{fig:DGUTcamerasystem} shows an overview and picture of the system.
Sample images from the cameras are shown in Figure~\ref{fig:DGUTcameraimages}.
A ruled scale placed between the cameras and the calibration tubes provides a reference scale for tracking fluid level changes.
This enables the cameras to be used as a cross-check against drifts of the other sensors.
The positions of the camera elements and calibration tubes were surveyed during installation as part of the global detector survey, so the relative positions of the cameras and the visible sections of the calibration tubes are known to millimeter accuracy.
Infrared-sensitive cameras were chosen with the goal of being able to run the level monitoring system during detector operation, without interference with the detector PMTs.
The cameras are model Watec WAT-902H2 Supreme, equipped with a 380-kilopixel monochromatic infrared-sensitive CCD sensor.
Illumination for each camera is provided by an array of infrared LEDs behind a diffusing screen, facing the camera from behind the calibration tube.
The array produces $30\unit{mW}$ of infrared illumination at the camera using $300\unit{mW}$ of electrical power.
The LED spectrum is peaked at $886\unit{nm}$ with some emittance over the range of $800$ to $1000\unit{nm}$.
This spectrum overlaps with the camera CCD's near-infrared sensitivity, but is beyond the detector PMT sensitivity (Hamamatsu R5912 PMT response falls off at $700\unit{nm}$) and the absorption spectrum of the scintillator, which does not extend significantly beyond $400\unit{nm}$.
While dark box tests confirmed that the infrared LEDs did not trigger the PMTs, in-situ testing revealed enough of an increase in PMT hit rates (up to $30\unit{Hz}$) that as a precaution the cameras are not operated while the PMTs are in use.
The off-axis camera systems are thus only operated to cross-check the other liquid sensors when physics data is not being collected.
\section{Additional instrumentation}\label{sec:additionalinstrumentation}
\subsection{Inclinometers}
Three inclinometers are mounted on the detector lid, in positions shown in Figure~\ref{fig:ADlidlayoutdiagram}. Each sensor, a Shanghai Zhichuan Electronic Tech Company Ltd.\ model ZCT245CL-485-HKLN, has a nominal accuracy of $\pm0.05\ensuremath{^\circ}$ with a resolution of $0.01\ensuremath{^\circ}$ \cite{inclmanual}. An inclinometer is pictured in Figure~\ref{fig:inclinometerphoto}. The sensors measure the absolute tilt of the detector from the local vertical. Each sensor measures the tilt of two axes: the deviation of both its local $x$-axis and its local $y$-axis from the plane perpendicular to the local vertical. All three sensors are mounted with the same $x$- and $y$-axis directions to facilitate cross-checking between sensors. Three sensors are used so that both global tilts of the entire detector and localized lid deformations, such as flexing or deflection, can be resolved. To first order, the calculation of the liquid level in the overflow volume assumes the liquid is level; the inclinometer data allows us to quantify any deviation from levelness and apply a correction if necessary.
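To illustrate the size of such a levelness correction, a uniform tilt $\theta$ shifts the apparent liquid level at a point a horizontal distance $r$ from the tank center by approximately $r\tan\theta$. The sketch below (the $100\unit{mm}$ offset is a hypothetical value for illustration, not the actual overflow tank geometry) shows that a tilt at the sensor accuracy limit of $0.05\ensuremath{^\circ}$ produces only a sub-$0.1\unit{mm}$ level shift:

```python
import math

def level_shift_mm(tilt_deg: float, offset_mm: float) -> float:
    """Apparent liquid-level shift at a point offset_mm from the tank
    center, for a uniform detector tilt of tilt_deg degrees."""
    return offset_mm * math.tan(math.radians(tilt_deg))

# Tilt at the inclinometer accuracy limit, hypothetical 100 mm offset:
shift = level_shift_mm(0.05, 100.0)  # ~0.087 mm
```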
\subsection{Overflow temperature sensors}\label{sec:OFtempsensors}
\begin{figure}[b]\hfil
\subfloat[A close-up image of two Pt100 temperature sensors. Each sensor is $6.35\unit{mm}$ in diameter.]{\label{fig:Pt100}\includegraphics[clip=true, trim=4in 14.5in 29in 10in, height=54mm]{figures/IMG_4506.jpg}}\hfil
\subfloat[A temperature sensor installed in its acrylic mount, shown during overflow tank assembly. The elbow at upper left is the cover gas intake connection.]{\label{fig:Pt100mount}\includegraphics[clip=true, trim= 8in 10in 8in 0in, height=54mm]{figures/IMG_1676.jpg}}\hfil%
\caption{The Pt100 temperature sensors (left) and mounted in place (right). A stainless steel case encloses each sensor, and a PTFE sleeve protects the junction between the casing and the sensor cable. }\label{fig:tempsensors}
\end{figure}
The central overflow volumes for both GdLS and LS each contain a single type Pt100 100-{\ensuremath{\Omega}} platinum resistance thermometer. The sensors, shown in Figure~\ref{fig:Pt100}, were provided by HW Group and are DIN class A (accuracy specification $\pm 0.2 \ensuremath{\deg\mathrm{C}}$ at $25\ensuremath{\deg\mathrm{C}}$). An acrylic mounting well, shown in Figure~\ref{fig:Pt100mount}, holds each sensor in position and prevents its stainless steel casing from contacting the detector liquids. A drop of mineral oil placed in the well creates a good thermal connection between the sensor and the acrylic.
Temperature sensor readout is performed using a HW Group model Temp-485-2xPt100 readout interface \cite{HWgPt100}. Each detector contains one dual Pt100 readout module mounted on the AD lid that reads out both the GdLS and LS overflow tank temperature sensors. The readout module measures the resistance of the Pt100 sensor and converts it to a digitized temperature reading with a resolution of $0.01 \ensuremath{\deg\mathrm{C}}$, much smaller than the quoted overall accuracy of the sensor, ensuring the absence of quantization effects. Three-wire connections between the sensor and readout module are employed to reduce measurement error from the resistance of the wire leads on the sensor.
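The resistance-to-temperature conversion performed inside the readout module follows the standard Callendar--Van Dusen relation for platinum resistance thermometers. A minimal sketch of the inversion for temperatures above $0\ensuremath{\deg\mathrm{C}}$, using the standard IEC 60751 coefficients rather than anything specific to the Daya Bay readout module, is:

```python
import math

# Standard IEC 60751 coefficients for a Pt100 element (T >= 0 degC):
#   R(T) = R0 * (1 + A*T + B*T^2)
R0 = 100.0    # ohms at 0 degC
A = 3.9083e-3
B = -5.775e-7

def pt100_temperature(resistance_ohm: float) -> float:
    """Invert the Callendar-Van Dusen polynomial for T >= 0 degC,
    taking the physical root of the quadratic in T."""
    c = 1.0 - resistance_ohm / R0
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

# A Pt100 reads 138.506 ohms at 100 degC per the standard tables:
t = pt100_temperature(138.506)  # ~100.0 degC
```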
\subsection{Internal temperature sensors}\label{sec:MOtempsensors}
\begin{figure}[tb]\hfil
\subfloat[A mineral oil temperature sensor, mounted on the PMT support ladder, with the sensor tip exposed at bottom.]{\includegraphics[clip=true, trim=0in 6in 24in 18in, width=70mm,angle=90]{figures/mo_temp_sensor_installed.jpg}}\hfil
\subfloat[Overview image of an assembled PMT support ladder. Three of the four mineral oil temperature sensors are visible; the fourth is located just beyond the bottom edge of the image.]{\includegraphics[height=70mm]{figures/mo_temp_sensor_locations.pdf}}\hfil%
\subfloat[Model view of the detector, showing the back of the PMT ladders enclosed within the stainless steel vessel. The four mineral oil temperature sensors, circled in yellow, are visible at center right.]{\includegraphics[height=70mm,clip=true,trim=0.5in 0.25in 4in 0.25in]{figures/mo_temp_sensor_model.pdf}}\hfil%
\caption{Images and mounting locations of the mineral oil internal temperature sensors.}\label{fig:motempsensors}
\end{figure}
The detector contains four temperature sensors mounted in the mineral oil volume, as shown in Figure~\ref{fig:motempsensors}. They are arranged vertically along one PMT support ladder to measure any temperature gradients in the detector resulting from temperature differentials during detector filling or operation. They are electrically and mechanically isolated from the ladder by acrylic housings to give a direct reading of the mineral oil temperature at that location. Each sensor is a Pt100 platinum resistance thermometer, as employed in the overflow tanks. They are connected using a standard four-wire connection to a custom-designed readout module that can accurately measure the sensor's resistance and resolve small changes in temperature.
\subsection{Internal cameras}
Two internal cameras are located on the outer wall of the detector, as shown in Figure~\ref{fig:ADoverview}. One is mounted looking down at the bottom of the target volume, and one is mounted looking up at the top of the target volume. Each camera has a remotely controlled LED bank containing both white and infrared LEDs to illuminate the detector during camera operation. The internal cameras are used extensively during
detector filling and then infrequently afterwards for detector inspection.
They may also be utilized during a future full-volume manual detector
calibration. They cannot be used continually during operation, due to interference between the LED light source and the detector PMTs. Their construction and operation is described in more detail in \cite{camerapaper}.
\section{Sensor calibration}
Each type of sensor measures a different quantity. Some sensors directly measure the quantity of interest, while others return a value that must be converted to physically meaningful units: millimeters of liquid height for the liquid sensors, degrees Celsius for the temperature sensors, or degrees of inclination for the tilt sensors. For many sensors, the conversion between the output quantity and useful units was provided by the manufacturer, but in order to meet the precision level required in the Daya Bay detectors, we performed laboratory calibrations of all the critical sensors.
\subsection{Ultrasonic liquid level sensors}\label{sec:ultrasoniccalibration}
\begin{figure}[bt]\hfil
\subfloat[Diagram of the ultrasonic sensor calibration stand. The dotted lines show two nominal ultrasound paths. The height $h$ is adjustable from 0 to $\mathbin{\sim}400\unit{mm}$.]{\label{fig:ultracalibrationstand}\includegraphics[width=60mm,height=60mm]{figures/DYBUltrasonicSensorCalibrationDiagram.pdf}}\hfil
\subfloat[Plot of the calibration data for ultrasonic sensor 1. The data was fit only in the operating region, $h$ from 0 to $150\unit{mm}$. Error bars are the standard deviations of multiple readings taken at each height.]{\label{fig:ultracalibrationgraph}\includegraphics[height=60mm]{plots/ultracal.pdf}}\hfil%
\caption{The ultrasonic sensor calibration setup and one calibration result.}\label{fig:ultracalibration}
\end{figure}
The ultrasonic sensors were all calibrated in a laboratory environment using the same apparatus, diagrammed in Figure~\ref{fig:ultracalibrationstand}. Each sensor was set up as shown in Figure~\ref{fig:ultrasonic}.
The ultrasonic pulse was directed out the bottom of the mount onto a target, a flat piece of acrylic.
The target was fixed on one jaw of an outside Vernier caliper, with the other jaw fixed on a laboratory optical bench. The upper jaw could be adjusted over a wide range, from touching the bottom of the sensor mount ($h=0$ in Figure~\ref{fig:ultracalibrationstand}) to about $400\unit{mm}$ ($15\unit{in}$) from the bottom of the mount. The target was positioned at several points within this range, and the caliper height recorded along with the sensor reading. Sample data from one ultrasonic sensor's calibration is shown in Figure~\ref{fig:ultracalibrationgraph}.
Because the distance from the face of the installed ultrasonic sensor to the bottom of the overflow tank is difficult to measure accurately with common measuring tools, the ultrasonic sensor is used to determine its own zero value. This method also avoids the problem of having to determine the offset between the true path length through the calibration apparatus of Figure~\ref{fig:ultracalibrationstand} and the caliper reading. With the sensor installed in its final position in the dry overflow tank, a baseline sensor reading was recorded. This reading defines the zero-fluid point for that tank. The calibration apparatus height range was chosen so that this point would always lie near the middle of the calibrated range. Thus, subsequent changes in the sensor reading caused by increasing fluid level can be accurately converted to an increase in fluid height with the calibration data.
The ultrasonic sensor measurements are especially sensitive to the speed of sound in the medium carrying the sound pulses. The calibration described here was performed in ordinary laboratory air, while the operating environment for the sensors is dry nitrogen gas. The difference between the speed of sound in nitrogen and in air is large enough that a correction was necessary. During sensor calibration, we recorded the temperature, pressure, and relative humidity of the laboratory air, then calculated the speed of sound using the expression provided by Cramer \cite{cramer1992}, with updated absolute humidity formulas from \cite{picard2008}. In operation, the speed of sound is calculated using the calibration equation of Span et al.\ for dry nitrogen \cite[\textsection7]{span2000}, incorporating the frequency correction of \cite[\textsection{4.1.2}]{span2000}. The cover nitrogen gas exchange is slow enough that the temperature measured by the liquid temperature sensors of section~\ref{sec:OFtempsensors} is approximately equal to the cover gas temperature. A nominal supply gas pressure of $21\unit{psia}$ is used for calculations. The variation of the speed of sound in nitrogen with pressure is small\footnote{In dry nitrogen at $22\ensuremath{\deg\mathrm{C}}$, the speed of sound changes by less than $200\unit{ppm}$ over the pressure range from $14$ to $21\unit{psia}$, a much larger variation than we expect in the detector cover gas supply pressure.
} and can be neglected. Overall, the correction due to the varying speed of sound is about a $3\unit{mm}$ change in the measured zero point of each overflow tank, or about a $1\unit{mm}$ change in the measured operating liquid level.
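The form of the correction is a simple rescaling: the sensor converts time of flight to distance using an assumed speed of sound, so the true distance is the reported distance multiplied by the ratio of the actual to the assumed sound speed. The sketch below illustrates the size of the air-to-nitrogen effect using the ideal-gas approximation $c = \sqrt{\gamma R T / M}$ in place of the Cramer and Span et al.\ expressions used in the actual analysis:

```python
import math

R_GAS = 8.314462  # J/(mol K)

def sound_speed(gamma: float, molar_mass_kg: float, temp_k: float) -> float:
    """Ideal-gas speed of sound (an approximation for illustration only)."""
    return math.sqrt(gamma * R_GAS * temp_k / molar_mass_kg)

c_air = sound_speed(1.400, 0.0289647, 295.15)  # calibration: laboratory air
c_n2 = sound_speed(1.400, 0.0280134, 295.15)   # operation: dry nitrogen

def corrected_distance(reported_mm: float) -> float:
    """Rescale a reading taken in nitrogen by a sensor calibrated in air."""
    return reported_mm * c_n2 / c_air

# The ~1.7% speed difference shifts a ~180 mm path by ~3 mm, consistent
# with the zero-point shift quoted in the text.
```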
\begin{figure}[t]\centering
\includegraphics[width=75mm]{plots/overflowtankcalibration.pdf}
\caption{Results of the overflow tank geometry and sensor performance cross-check. The largest observed deviation from the design geometry is $1.5\unit{L}$, which provides the ultimate maximum error bound on the overflow tank mass determination.}\label{fig:OFtankPSLgraph}
\end{figure}
As the ultrasonic sensors serve as the primary fluid level reference, an extra cross-check was performed. One overflow tank was set up in a laboratory environment and filled with high-purity water in $2000\unit{mL}$ increments, taking ultrasonic sensor readings at each step. (The capacitive sensors could not be cross-checked during this test, as they respond differently to water than to their operating liquids.) This test provided a full-system cross-check on both the response of the ultrasonic sensors and the geometry measurements of the overflow tanks. Data from the test is shown in Figure~\ref{fig:OFtankPSLgraph}. The maximum observed deviation from the predicted liquid level at a specified fluid volume was $1.5\unit{mm}$, which we take as the maximum height uncertainty of the liquid level monitoring system.
\subsection{Capacitive liquid level sensors}\label{sec:capcalibration}
\begin{figure}[t]\hfil
\subfloat[A schematic of the PTFE capacitive liquid level sensor calibration stand. The immersion depth $D$ of the sensor is calculated in terms of the spacer height $A$, stand height $B$, and sensor length $C$ as $D=C-(A+B)$.]{\label{fig:capcalibrationstand}\includegraphics[width=60mm,height=60mm]{figures/DYBCapacitanceSensorCalibrationDiagram.pdf}}\hfil
\subfloat[The results of sensor calibration for PTFE capacitive liquid level sensor 15, showing data from, fits to, and residuals for each run separately. For the combined fit, the two wet calibrations were averaged, excluding the dry calibration.]{\label{fig:capcalibrationgraph}\includegraphics[height=60mm]{plots/capcal.pdf}}\hfil%
\caption{The capacitive liquid level sensor calibration stand and one calibration result.}\label{fig:capcalibration}
\end{figure}
The GdLS and LS volume capacitive liquid level sensors were calibrated using the setup shown in Figure~\ref{fig:capcalibrationstand}. Each sensor was immersed in a beaker of linear alkyl benzene (LAB) in a spill tray. The beaker was kept continually full to the brim with LAB; since excess liquid spilled over the rim at a fixed height, topping off the beaker to the spill point maintained a constant liquid level (i.e., dimension $B$ in Figure~\ref{fig:capcalibrationstand} remained fixed). The sensor height was adjusted by changing the height of spacers, shown in red in Figure~\ref{fig:capcalibrationstand}, with thickness given by dimension $A$. Spacer height ranged from a minimum of zero (no spacers, sensor bottom flush with the support beam) to approximately $160\unit{mm}$, slightly greater than the total length of the sensor barrel, which left the tip of the sensor suspended over the beaker. The spacers were thin U-shaped aluminum shims for fine adjustments and aluminum tubes for larger adjustments.
Each sensor was calibrated in three passes. Before the first pass, the sensor was cleaned in pure ethanol and left to dry; ethanol and LAB are miscible, so this rinse left the sensor in a clean state. The dry sensor was then lowered into the beaker of LAB, decreasing dimension $A$ in Figure~\ref{fig:capcalibrationstand} as the sensor descended. After reaching the bottom, the sensor was lifted out during the second pass. The beaker was kept continually topped up during this pass to maintain a constant liquid level. The third pass repeated the first pass, but with the sensor now fully wetted with LAB. All three passes were recorded and analyzed to calibrate the sensor readout. A sample calibration result from one sensor is shown in Figure~\ref{fig:capcalibrationgraph}. The final calibration used the average of the two wetted runs, excluding the dry run. During operation we expect small level changes over long time scales, a situation that corresponds more closely to the behavior of an already-immersed sensor.
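The immersion depth at each calibration point follows from the stand geometry of Figure~\ref{fig:capcalibrationstand}, $D = C - (A + B)$, and the calibration itself reduces to a linear fit of depth against sensor output. A minimal sketch of this bookkeeping, with a hypothetical stand height, spacer schedule, and idealized linear sensor response (the real sensors and their counts-to-level conversion differ), is:

```python
# Hypothetical geometry with an idealized linear sensor response.
SENSOR_LENGTH_C = 153.0  # mm, M-series sensor length
STAND_HEIGHT_B = 50.0    # mm, hypothetical beam-to-liquid distance

def immersion_depth(spacer_a_mm: float) -> float:
    """D = C - (A + B), per the calibration-stand geometry."""
    return SENSOR_LENGTH_C - (spacer_a_mm + STAND_HEIGHT_B)

# Simulated calibration: readings at several spacer heights, fit to a line.
spacers = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
depths = [immersion_depth(a) for a in spacers]
readings = [400.0 + 35.0 * d for d in depths]  # hypothetical counts

# Least-squares slope/intercept mapping reading -> depth:
n = len(readings)
mx = sum(readings) / n
my = sum(depths) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(readings, depths))
         / sum((x - mx) ** 2 for x in readings))
intercept = my - slope * mx

def depth_from_reading(reading: float) -> float:
    """Convert a raw sensor reading to an immersion depth in mm."""
    return slope * reading + intercept
```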
The calibration of the stainless steel mineral oil capacitive level sensors followed a similar procedure. A much larger quantity of mineral oil was available, so all nine mineral oil sensors were calibrated together in one large tank of oil. The oil level in this tank was directly controlled, avoiding the use of spacers or any adjustment of the sensors. Only one run was performed, starting with a dry sensor and finishing with a fully immersed sensor. Multiple calibration runs were not performed as one run was sufficient to achieve our accuracy goals for mineral oil level monitoring.
Due to the tolerances of the sensor mounts, the capacitive level sensor calibration alone was not enough to establish the absolute liquid level in the central overflow volumes. After each overflow volume was filled with liquid to its operating level, the absolute reading from the ultrasonic sensor in that volume was used as a cross-reference to establish the overall offset of the capacitive sensor. While this approach is not as robust as an absolute determination of the liquid level using only the capacitive sensor, it still allows the capacitive sensor to serve as a secondary level monitor, checking any drifts of the ultrasonic sensors and providing redundancy in case of sensor failure.
Cross-referencing is not necessary for the mineral oil sensors, which have an adjustable mount. The installation procedure for the mineral oil sensors uses a model capacitive sensor that is $2.0 \unit{mm}$ longer than the real sensors. The model sensor is installed, the sensor mount is adjusted until the bottom of the model sensor just touches the bottom of the mineral oil overflow tank, then the model is replaced with the real sensor. This procedure establishes the calibrated sensor reading as the tank liquid level minus $2.0\unit{mm}$.
\subsection{Overflow temperature sensors}
\begin{figure}[tb]\centering
\includegraphics[height=50.mm]{plots/tempcal.pdf}
\caption{Calibration data from the overflow temperature sensors installed in ADs 5--8, all of which were calibrated together. To better illustrate the sensor behavior, only the deviations from the reference reading are shown. The spread in uncalibrated readings of up to $0.4\ensuremath{\deg\mathrm{C}}$ is negligible after calibration.}\label{fig:tempcalibration}
\end{figure}
The overflow tank temperature sensors were calibrated in water baths. The Pt100 sensors were placed in a water bath along with a NIST-traceable reference thermometer\footnote{Omega Engineering model HH41 with ON-410-PP temperature probe, serial number 308697, system calibration ID 009968179.}. We performed a full-system calibration: each sensor was paired with its electrical readout module channel before calibration, and this pairing was maintained at sensor installation. Sensors were calibrated in batches. Groups of four or eight Pt100 sensors were placed in the water bath together, comprising complete sets of overflow temperature sensors for two or four detectors, respectively. Data was first taken at room temperature, then ice was added to the water bath, and several data points were taken as the system cooled down to $0\ensuremath{\deg\mathrm{C}}$. The water bath was frequently agitated to reduce thermal gradients in the cooling water. Calibration results from a batch of eight sensors calibrated together are shown in Figure~\ref{fig:tempcalibration}. All sensors exhibited good linearity within and beyond the detector's operational temperature range, meeting the requirements of section~\ref{sec:requirements}.
\begin{figure}[tb]\hfil%
\subfloat[An inclinometer mounted on the central overflow tank.]{\label{fig:inclinometerphoto}\includegraphics[clip=true,trim=9in 6in 0 6in, width=74mm]{figures/inclinometer_mounted.jpg}}\hfill%
\subfloat[The calibration data from inclinometer 16, showing the measurement of the sensor's response and zero offset.]{\label{fig:tiltcalibration}\includegraphics[width=74mm]{plots/tiltcal.pdf}}\hfil%
\caption{Images of the inclinometers and a sample of the inclinometer calibration data.}
\end{figure}
\subsection{Inclinometers}
The inclinometers were also calibrated in a laboratory setting before installation. The inclinometers are digital sensors and directly report the measured deviation of their $x$ and $y$ axes from the plane perpendicular to local gravity. We determined the zero offset of each inclinometer by taking a reading on a known-level surface, as checked to within $0.05\ensuremath{^\circ}$ using a SPI-Tronic Pro 3600 digital level. We then used a custom-fabricated $150\unit{mm}$ sine bar to tilt each axis of each inclinometer by a known amount and tabulated the readings. Sample results from this calibration are shown in Figure~\ref{fig:tiltcalibration}, demonstrating that all sensors tested showed satisfactory zero readings and inclination response.
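A sine bar converts a gauge-block height into a precisely known angle: raising one end of a bar of length $L$ by a height $h$ tilts it by $\theta = \arcsin(h/L)$. A short sketch of the expected reading for a $150\unit{mm}$ bar (the gauge height shown is an illustrative value, not one of the actual calibration points):

```python
import math

SINE_BAR_LENGTH_MM = 150.0

def expected_tilt_deg(gauge_height_mm: float) -> float:
    """Tilt angle produced by raising one end of the sine bar."""
    return math.degrees(math.asin(gauge_height_mm / SINE_BAR_LENGTH_MM))

# A 2.618 mm gauge stack under a 150 mm sine bar sets a 1.000 degree tilt:
tilt = expected_tilt_deg(2.618)
```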
\section{Instrumentation readout}
\subsection{Hardware design}
\begin{figure}[t]\centering
\includegraphics[width=150mm, height=75mm]{figures/DYBLidSensorElectricalBlockDiagram.pdf}
\caption{Electrical block diagram of the lid sensor system. Line color indicates signal type; the boundary between the air side of the system and the detector side is shown by a dashed line, with electrical feedthroughs in grey. Some power lines shown as single pairs are multiple parallel conductors for improved redundancy and increased capacity. All cabling is shielded.}\label{fig:lidsensorblockdiagram}
\end{figure}
The electrical layout of the main lid sensor system is shown in Figure~\ref{fig:lidsensorblockdiagram}. The sensors are all physically located within the detector volume, which is sealed and leak-tight against the water pool surrounding the detector. All sensors are connected through serial interfaces to a readout computer, which has a PCI multi-port serial card installed (a Moxa CP-118U, with eight ports each independently configurable for RS-232, RS-422, or RS-485 communication). Each detector requires four of the eight ports; one readout computer can read out two detectors with one serial card, or four detectors using two cards. The readout software, written in National Instruments LabVIEW 8.6.1f1 for compatibility with other detector readout and control systems, polls each sensor
every two seconds, redundantly recording the data into both a flat log file and the experiment's SQL database. The flat log file is easy to use for manual checks or system debugging, while the database provides the measured data directly to the experiment's analysis pipeline. A single DC power supply with a serial port connection provides a stable, continuously-monitored, remote-controllable power source for all sensor systems connected to a readout computer.
The $50\unit{m}$ distance between the detector-side interface and the readout computer means that conventional serial interfaces, such as USB or RS-232, cannot be used. Instead, differential electrical interfaces are used on the main connection between the detector and the readout computer for all sensors. Most sensors were available in RS-485 variants. The 2-wire, half-duplex operation of RS-485 is well suited to low-speed, low-volume data readouts like our sensors, and is capable of line lengths much longer than $50\unit{m}$ even in a noisy electrical environment. One common RS-485 line services all RS-485 sensors. The line is run directly from the readout computer to the fanout board on the detector lid, where it switches to a star topology to connect to each sensor. The use of a star topology rather than the preferred daisy chain was necessitated by the practical concerns of cable routing. The short lengths of the star branches, one to two meters each, do not cause interference issues, as the signal travel time is much smaller than the typical symbol length at our data rate of 9600 baud.
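The claim that the star stubs are harmless can be checked with a back-of-the-envelope timing comparison: at 9600 baud one bit lasts about $104\unit{\mu s}$, while a signal traverses a $2\unit{m}$ stub in roughly $10\unit{ns}$, so reflections settle several orders of magnitude faster than a bit period. A sketch of the arithmetic, assuming a typical velocity factor of about 0.66$c$ for the cable (an assumed value, not a measured property of our cabling):

```python
BAUD = 9600
BIT_TIME_S = 1.0 / BAUD                  # ~104 us per bit

PROP_VELOCITY_M_S = 0.66 * 3.0e8         # assumed cable velocity factor
STUB_LENGTH_M = 2.0
STUB_DELAY_S = STUB_LENGTH_M / PROP_VELOCITY_M_S  # ~10 ns one way

# Ratio of bit period to stub delay: any reflections on a star branch
# die out long before a receiver samples the middle of the bit.
ratio = BIT_TIME_S / STUB_DELAY_S        # on the order of 10,000
```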
The capacitive liquid level sensors were not available with an RS-485 interface, only the more common RS-232. To carry these signals to the readout computer, they are first converted to RS-422, a four-wire, full-duplex, differentially-signaled interface, logically compatible with RS-232. The communication protocol used by the capacitive sensors requires a full-duplex link, necessitating the use of RS-422 and a dedicated line for each sensor. The signal converters are situated on the electrical distribution board, mounted on the detector lid just outside the sealed central overflow volume, and powered by the common power bus. This arrangement limits the length of the less-reliable RS-232 link to only a few meters, using RS-422 for the majority of the line length.
With the exception of the temperature readout, all sensors on the common RS-485 line use checksummed communications protocols, such as Modbus/RTU, so each sensor ignores commands not directly addressed to it with matching checksums. The HWg-Poseidon protocol used by the temperature sensor does not have a checksum \cite{HWgPt100}. We have seen infrequent readout glitches caused by the temperature sensor misinterpreting commands to, or responses from, other sensors as a request to read out, causing a transient bus conflict that lasts until the temperature sensor stops transmitting. Conflicts are rare in practice, but force our readout system and data analysis chain to be able to handle a failed reading without complications. The use of a checksummed protocol for all sensors, or the use of dedicated per-sensor lines as with the capacitive sensors, would have eliminated this issue.
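For reference, the checksum that protects the Modbus/RTU sensors is the standard CRC-16/MODBUS (reflected polynomial 0xA001, initial value 0xFFFF) appended to every frame; a receiver silently discards any frame whose recomputed CRC does not match, which is exactly the protection the temperature sensor protocol lacks. A minimal sketch of the standard algorithm:

```python
def modbus_crc16(frame: bytes) -> int:
    """Standard CRC-16/MODBUS over a frame (address, function, data)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # reflected polynomial
            else:
                crc >>= 1
    return crc

# Standard catalog check value: CRC-16/MODBUS of b"123456789" is 0x4B37.
assert modbus_crc16(b"123456789") == 0x4B37
```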
\begin{figure}[tb]\centering\includegraphics[width=150mm]{figures/DYBOffCenterCameraElectricalBlockDiagram.pdf}\caption{Electrical block diagram of the off-center camera system. Line color indicates signal type; the boundary between the air side of the system and the detector side is shown by a dashed line, with electrical feedthroughs in grey.}\label{fig:DGUTcamerablockdiagram}\end{figure}
The electrical layout of the off-center camera system is shown in Figure~\ref{fig:DGUTcamerablockdiagram}. As with the lid sensors, each readout system is able to service one pair of detectors, with two cameras per detector. The cameras transmit analog PAL video signals back to the readout computer over RG59 coaxial cables. A National Instruments PCI-1410 PCI card in the readout computer receives and digitizes the video signals. Each PCI-1410 has four analog video inputs. One twisted pair carries $24\unit{V}$ DC power to each camera system and the infrared LED array. The power supply is shared between all cameras connected to the same readout PC.
\subsection{Long-term detector monitoring}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{plots/ad1_of_prl.pdf}\caption{Lid sensor monitoring data from AD1 during the first physics run, using data from 24 December 2011 to 17 February 2012. Overflow tank GdLS, LS, and MO levels are shown on the left scale, with GdLS and LS temperatures on the right scale. Data points are hourly averages.}
\label{fig:monitoringdata}
\end{figure}
\begin{figure}[t]\hfil
\subfloat[The GdLS liquid level and temperature in AD3. Arrows indicate the times the pictures at right were taken.]{\includegraphics[width=75mm]{plots/ad3_gdls.pdf}}\hfill
\subfloat[A side-by-side image showing GdLS level changes in AD3, from 24 Nov 2011 (left) to 30 Mar 2012 (right).]{\includegraphics[width=73mm]{figures/AD3_GDLS_cameracomparison.png}}\hfil%
\caption{GdLS levels in AD3 before, during, and after the first physics run. The camera and the level sensor both show the same change of approximately $4\unit{mm}$.}\label{fig:cameracomparisonshot}
\end{figure}
The overflow monitoring sensors have been operated continuously since the completion of detector installation, recording sensor readings once every two seconds. A sample of the data from AD1 installed at the Daya Bay near site is shown in Figure~\ref{fig:monitoringdata}. A split camera image from AD3 is shown in Figure~\ref{fig:cameracomparisonshot} together with sensor data. In all detectors, the liquid level sensors show the same behavior as the camera images. Camera images are taken at least once per month during scheduled detector calibration periods when physics data is not being collected.
A long-term laboratory stability test is underway with a full set of sensors, production readout system, and environmental monitor installed on a test bench at the University of Wisconsin--Madison. This setup allows us to monitor the sensors for any long-term drift unrelated to the behavior of the fluids. The sensors are readily accessible, so if any future drifts in sensor readings are observed, both the sensors and the test environment can be investigated to determine the cause. We expect to operate this long-term test stand for at least the duration of the first long-term physics run at Daya Bay.
\subsection{Monitoring during detector transport}
\begin{figure}[bt]
\centering
\includegraphics[width=\textwidth]{plots/ad1_transport.pdf}
\caption{Overflow sensor data collected during transportation of AD1 from the detector filling hall to Experimental Hall~1 on 11 May 2011. Times are Daya Bay local time, UTC~+0800. For clarity, the raw level sensor readings are displayed without error bars. Inclinometer values shown are the average of all three inclinometer readings, zero-corrected to show change from initial state.}
\label{fig:AD1transportdata}
\end{figure}
After assembly and filling, the Daya Bay ADs were transported to their final locations in the experimental halls. When full, an AD weighs about $110\unit{tons}$ and can only be transported underground using an automatic guided vehicle (AGV), a low-profile heavy-lift transport capable of bearing and moving the entire load~\cite{agv}. As one check to verify that the ADs were not damaged during transport, we ran the liquid level monitoring instrumentation and monitoring cameras during the first transport of AD1 to its installed location. A readout PC and supporting instrumentation were mounted in a mobile 19-inch rack with an uninterruptible power supply battery system to provide power; this rack trailed behind the AGV as it transported AD1 and allowed us to monitor AD1 liquid levels in real time during transport.
The data collected during the transport of AD1 is shown in Figure~\ref{fig:AD1transportdata}. Several notable features can be seen in this data. The measured liquid levels in each volume decreased, which we attribute to the agitation of the detector dislodging trapped bubbles of gas. The inclinometers show that the detector remained level within the expected range during the transport. Based on the stability of AD1 during transport, we decided not to monitor future ADs continuously during transportation, instead checking their liquid levels before and after transportation. The results have been consistent with AD1: we see no overall change except for a decrease in liquid levels associated with the release of trapped gas bubbles. We also performed a final top-off of all detector liquids after detector transportation~\cite{fillingpaper}, to replace the volume lost when the gas bubbles were released.
\section{Summary}
We present a comprehensive and redundant system of sensors for monitoring the volume, mass, and temperatures of the mineral oil (MO), liquid scintillator (LS), and gadolinium-doped liquid scintillator (GdLS) inside the Daya Bay antineutrino detectors (ADs).
The GdLS and LS fluid levels are monitored continuously by ultrasonic and carbon-impregnated PTFE capacitive sensors in the liquid overflow tanks, and are cross-checked by off-center cameras next to the detector calibration tubes. The MO fluid level is monitored by a stainless steel capacitive sensor. Two temperature probes monitor the GdLS and LS fluids, respectively, and four temperature probes evenly spaced vertically along a photomultiplier tube support ladder read the MO temperature and determine the vertical temperature profile of the detectors. In addition to the fluid monitoring sensors, three inclinometers monitor the $x$- and $y$-axis tilts of the AD, and two internal cameras allow visual inspection of the inside of the AD.
Uncertainty of the target mass inside the detector is potentially one of the largest systematic uncertainties on a measurement of $\sin^2 2\theta_{13}$.
This system of sensors monitors the height of the target liquid in the overflow tanks to a precision of $1\unit{mm}$. When combined with the overflow tank geometry information, GdLS density measurements, and tilt sensor data, the set of sensors described here is able to track detector target mass changes with a maximum uncertainty of $2.2\unit{kg}$ (0.011\%), significantly less than the baseline design goal of $4\unit{kg}$ (0.02\%). Minimizing this uncertainty has played a key role in reducing the systematic uncertainty of Daya Bay's measurement of $\sin^2 2\theta_{13}$~\cite{DYBcpc}.
\section*{Acknowledgements}
This work was performed with support from the DOE Office of Science, High Energy Physics, under contract DE-FG02-95ER40896 and with support from the University of Wisconsin.
The authors would like to thank
B. Broerman and E. Draeger for assistance with the overflow tank assembly process;
A. Green, P. Mende, J. Nims, D. Passmore, and R. Zhao for assistance with the overflow sensor characterization and calibration;
Z.\,M. Wang for assistance with the off-center camera system surveys;
X.\,N. Li for invaluable assistance on-site;
and
C. Zhang for integrating our systems into the online data monitor.
A. Pagac and B. Broerman helped prepare many of the figures in this paper.
Many fruitful discussions with the rest of the Daya Bay Collaboration have helped to refine the systems discussed here and contributed greatly to their excellent performance.
\bibliographystyle{JHEP}
\newcommand{{\sffamily Flip}\xspace}{{\sffamily Flip}\xspace}
\newcommand{{\sffamily ReCom}\xspace}{{\sffamily ReCom}\xspace}
\newcommand{\hbox{\sffamily Node Choice}\xspace}{\hbox{\sffamily Node Choice}\xspace}
\newcommand{\hspace{2in}}{\hspace{2in}}
\newcommand{\todo}[1]{{\bfseries \scriptsize \color{red} TODO: #1}}
\newcommand{\citeme}[0]{{\bfseries \scriptsize \color{purple}\framebox{CITE}}}
\newcommand{\addreference}[0]{{\bfseries \scriptsize \color{blue}\framebox{REF}}}
\newcommand{\justin}[1]{{\bfseries \scriptsize \color{red} JS: #1}}
\newcommand{\daryl}[1]{{\bfseries \scriptsize \color{blue} DD: #1}}
\newcommand{\moon}[1]{{\bfseries \scriptsize \color{purple} MD: #1}}
\usepackage[noend]{algpseudocode}
\usepackage[ruled,vlined]{algorithm2e}
\title{Recombination:\\ A family of Markov chains for
redistricting}
\author{Daryl DeFord, Moon Duchin, and Justin Solomon\thanks{Following the convention in mathematics,
author order is alphabetical.}}
\date{\today}
\begin{document}
\maketitle
\tableofcontents
\newpage
\begin{abstract}
\emph{Redistricting} is the problem of partitioning a set of geographical units
into a fixed number of districts, subject to a list
of often-vague rules and priorities.
In recent years, the use of randomized methods to sample from the vast space of districting plans has
been gaining traction in courts of law for identifying partisan gerrymanders, and it is now emerging as a possible
analytical tool for legislatures and independent commissions. In this paper, we set up redistricting as a graph partition problem and
introduce a new family of Markov chains called Recombination (or {\sffamily ReCom}\xspace)
on the space of graph partitions. The main point of comparison will be
the commonly used {\sffamily Flip}\xspace walk, which randomly changes the assignment label of a single
node at a time. We present evidence that {\sffamily ReCom}\xspace mixes efficiently, especially in contrast
to the slow-mixing {\sffamily Flip}\xspace, and provide experiments that demonstrate its qualitative behavior.
We demonstrate the advantages of {\sffamily ReCom}\xspace on real-world data and explain both the challenges of the Markov chain approach and the analytical tools that it enables. We close with a short case study involving the Virginia House of Delegates.
\end{abstract}
\section{Introduction}
In many countries, geographic regions are divided into districts that elect political representatives,
such as when states are divided into districts that elect individual members to the U.S.\ House of Representatives. The task of drawing district boundaries, or \emph{redistricting}, is fraught with technical, practical, and political challenges, and the ultimate choice of a districting plan has
major consequences in terms of which groups are able to elect candidates of choice.
Even the best-intentioned map-drawers have a formidable task in drawing plans whose structure promotes basic
fairness principles set out in law and in public opinion. Further complicating matters, agenda-driven redistricting makes it common
for line-drawers to \emph{gerrymander}, or to design plans specifically skewing elections toward a preferred outcome,
such as favoring or disfavoring a political party, demographic group, or collection of incumbents.
The fundamental technical challenge in the study of redistricting is to contend with the sheer number of possible ways to construct
districting plans. State geographies admit enormous numbers of divisions into contiguous districts;
even when winnowing down to districting plans that satisfy criteria set forth by legislatures or commissions, the number remains
far too large to enumerate all possible plans in a state. The numbers are
easily in the range of googols rather than billions, as we will explain below.
Recent methods for analyzing and comparing districting plans attempt to do so by placing a plan
in the context of valid alternatives---that is, those that cut up the same jurisdiction by the same rules and with the structural
features of the geography and the pattern of voting held constant.
Modern computational techniques can generate large and diverse \emph{ensembles} of comparison plans, even if building
the full space of alternatives is out of reach.
These ensembles contain \emph{samples} from the full space of plans, aiming to help compare a plan's properties to the range of possible designs.
For them to provide a proper counterfactual, however, we need some assurance of representative sampling, i.e.,
drawing from a probability distribution that successfully reflects the rules and priorities articulated by redistricters.
In one powerful application, ensembles have been used to
conduct
\emph{outlier analysis}, arguing that
a proposed plan has properties that are extreme outliers relative to the comparison statistics of an ensemble of
alternative plans. Such methods have been used in a string of recent legal challenges to partisan gerrymanders
(Pennsylvania, North Carolina,
Michigan, Wisconsin, Ohio), which were all successful at the district court or state supreme court level.
Outliers also received a significant amount of discussion at the U.S.\ Supreme Court, but the 5--4 majority declared that it
was too hard for a federal court to decide ``how much is too much" of an outlier.
The method is very much alive not only in state-level legal challenges but as a screening step for the initial adoption of plans,
and we expect numerous states to employ ensemble analysis in 2021 when new plans
are enacted around the country.
These
methods can help clarify the influence of each individual state's
political geography as well as the tradeoffs in possible rules and criteria.
The inferences that can be drawn from ensembles rely heavily on the distribution from which the ensembles are sampled. To that end, \emph{Markov chain Monte Carlo}, or MCMC, methods offer strong underlying theory
and heuristics, in the form of mixing theorems and convergence diagnostics. Drawing from this literature, the new Markov chains described here
pass several tests of quality, even though (as is nearly always the case in applications) rigorous
mixing time bounds are still out of reach.
Below, we define a new family of random walks called \emph{recombination} (or {\sffamily ReCom}\xspace) Markov chains on the space of partitions of a graph into a fixed number of connected subgraphs.
Recombination chains are designed for applications in redistricting. Compared to past sampling methods applied in this context---most prominently a simpler {\sffamily Flip}\xspace walk that randomly changes the labeling of individual geographic units along district borders---{\sffamily ReCom}\xspace has many favorable properties that make it well suited to the study of
redistricting. Critically for reliability of MCMC-based analysis, we present evidence that {\sffamily ReCom}\xspace mixes efficiently
to a distribution that comports with traditional districting criteria, with little or no parameter-tuning by the user.
\begin{figure}[!h]
\centering
\subfloat[Initial Partition]{\includegraphics[height=1.5in]{Finalish_Figures/1/0B100P5start.png}}\quad\
\subfloat[1,000,000 {\sffamily Flip}\xspace steps]{\includegraphics[height=1.5in]{Finalish_Figures/1/0B100P5end.png}}\quad\
\subfloat[100 {\sffamily ReCom}\xspace steps]{\includegraphics[height=1.5in]{Finalish_Figures/1/2RE0B100P1end.png}}
\caption{Comparison of the basic {\sffamily Flip}\xspace proposal versus the spanning tree {\sffamily ReCom}\xspace proposal to be described below.
Each Markov chain was run from the initial
partition of a $100\times100$ grid into 10 parts shown at left. }
\label{fig:bchoice}
\end{figure}
\subsection{Novel contributions}
We introduce a new proposal distribution called {\sffamily ReCom}\xspace for MCMC on districting plans and argue that it
provides an alternative to the previous {\sffamily Flip}\xspace-based approaches for ensemble-based analysis that makes
striking improvements in efficiency and replicability. In particular, we:
\begin{itemize}
\item describe {\sffamily ReCom}\xspace and {\sffamily Flip}\xspace random walks on the space of graph partitions, discussing
practical setup for implementing Markov chains for redistricting;
\item provide evidence for fast mixing/stable results with {\sffamily ReCom}\xspace and contrast with properties of {\sffamily Flip}\xspace;
\item study qualitative features of sampling distributions through experiments on real data, addressing common
variants like simulated annealing and parallel tempering; \quad and
\item provide a model analysis of racial gerrymandering in the Virginia House of Delegates.
\end{itemize}
To aid reproducibility of our work, an open-source implementation of {\sffamily ReCom}\xspace is available online \cite{gerrychain}.
An earlier report on the Virginia House of Delegates written for reform advocates, legislators, and the general public
is also available \cite{VA-report}.
\subsection{Review of computational approaches to redistricting}\label{sec:computationalreview}
Computational methods for generating districting plans have appeared since at least the work of Weaver, Hess, and Nagel in the 1960s \cite{weaver1963procedure,nagel_simplified_1965}.
Like much modern software for redistricting, early techniques like \cite{nagel_simplified_1965} incrementally improve districting plans in some metric while taking criteria like population balance, compactness, and partisan balance into account.
Many basic
elements still important for modern computational redistricting approaches were already in place in that work.
Quantitative criteria are extracted from redistricting practice (see our \S\ref{sec:operation});
contiguity is captured using a graph structure or ``touchlist'' (see our \S\ref{sec:redistrictingdiscrete}); a greedy hill-climbing strategy improves plans from an initial configuration; and randomization is used to improve the results.
A version of the {\sffamily Flip}\xspace step (``the trading part'') even appears in their optimization procedure.
Their particular stochastic algorithm made use of hardware available at the time: ``[R]un the same set of data cards a few times with the cards arranged in a different random order each time.''
Since this initial exploration, computational redistricting has co-evolved with the development of modern algorithms and computing equipment. Below, we highlight a few incomplete but representative examples; see \cite{cirincione_assessing_2000,tasnadi2011political,altman2010promise,saxon2018spatial,ricca2013political} for broader surveys; only selected recent work is cited below.
\paragraph*{Optimization.}
Perhaps the most common redistricting approach discussed in the technical journal literature is the \emph{optimization} of districting plans. Optimization algorithms are designed to extremize particular objective functions measuring plan properties while satisfying some set of constraints. Most commonly, algorithms proposed for this task maintain contiguity and population balance of the districts and
try to maximize the ``compactness" through some measure of shape \cite{kim2011optimization,jin2017spatial}.
Many authors have used Voronoi or power diagrams with some variant
of $k$-means \cite{fryer2011measuring,klein1,klein2,Levin2019AutomatedCR}, and there has been a lineage of approaches through integer programming \cite{buchanan} and even a partial
differential equations approach with a volume-preserving curvature flow \cite{jacobs2018partial}.
Optimization algorithms have not so far
become a significant element of reform efforts around redistricting practices,
partly because of the difficulty of using them in assessment of proposed plans that take many other criteria into account. Moreover, most formulations of global optimization problems for full-scale districting plans are likely computationally intractable
to solve, as most of the above authors acknowledge.
\paragraph*{Assembly.}
Here, a randomized process is used to create a plan from scratch, and this process is repeated to create a collection of plans
that will be used as a basis for comparison. Note that an optimization algorithm with some stochasticity could be run
repeatedly as an assembly algorithm, but generally the goals of assembly algorithms are to produce diversity where the goals
of optimization algorithms are to find a single best example.
The most basic assembly technique is to use a greedy {\em flood-fill} (agglomerative) strategy, starting from $k$ random choices
among the geographical units as the seeds of districts and growing outwards by adding neighboring units until the
jurisdiction has been filled up and the plan is complete.
Typically, these algorithms abandon a plan and re-start if they reach a configuration that cannot be completed into a valid plan, which can happen often. We are not aware of any theory to characterize the support and qualitative properties of the sampling distributions.
Examples include \cite{chen_unintentional_2013, chen2016loser,magleby_new_2018}.
\paragraph*{Random walks.}
A great deal of mathematical attention has recently focused on random walk approaches to redistricting.
These methods use a step-by-step modification procedure to begin with one districting plan and incrementally
transform it. Examples include \cite{herschlag_quantifying_2018, herschlag_evaluating_2017, Fifield_A_2018,pegden,chikina2019practical}.
An evolutionary-style variant with the same basic step
can be found in \cite{cho_toward_2016, liu_pear:_2016}.
The use of random walks for sampling is well developed across scientific domains in the form
of {\em Markov chain Monte Carlo}, or MCMC, techniques. This is what the bulk of the present paper will consider.
We emphasize that while many of the techniques used in litigation are flip-based, they always involve customizations,
such as carefully tuned constraints and weighting, crossover steps, and so on. The experiments below are not
claimed to reproduce the precise setup of any of these implementations (not least because the detailed specifications and
code are
often not made public). Many of the drawbacks, limitations, and subtleties of working with flip chains are well known to
practitioners but not yet present in the journal literature.
In addition to discussing these aspects of flip chains, we present an alternative chain that gives us an occasion to debate the mathematical
and the math-modeling needs of the application to redistricting.
\section{Markov chains}
\label{sec:Markov}
A Markov chain is simply a process for moving between positions in a {\em state space} according to a transition rule
in which the probability of arriving at a particular position at time $n+1$ depends only on the position at time $n$. That is,
it is a random walk without memory. A basic but powerful example of a Markov chain is the simple random walk
on a graph: from any node, the process chooses a neighboring node uniformly at random for the next step.
More generally, one could take a weighted random walk on a graph, imposing different probabilities on the incident edges.
One of the fundamental facts in Markov chain theory is that any Markov chain can be accurately modeled as
a (not necessarily simple) random walk on a (possibly directed) graph.
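As a concrete illustration (ours, not drawn from the paper), a simple random walk on an undirected graph can be sketched as follows; the adjacency-dict representation, the 4-cycle example, and the fixed seed are all illustrative choices.

```python
# A minimal sketch of the simple random walk on an undirected graph.
# The graph is stored as a dict mapping each node to its set of neighbors.
import random

def simple_random_walk(adj, start, steps, seed=0):
    """Take `steps` moves from `start`, choosing a uniform random neighbor each time."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(steps):
        state = rng.choice(sorted(adj[state]))  # sorted only for reproducibility
        path.append(state)
    return path

# Toy example: the 4-cycle 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path = simple_random_walk(adj, start=0, steps=10)
```

Each successive state is a neighbor of the previous one, which is exactly the memoryless transition rule described above.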
Markov chains are used for a huge variety of applications, from Google's PageRank algorithm to speech recognition
to modeling phase transitions in physical materials.
In particular, MCMC is a class of statistical methods that are used for sampling, with
a huge and fast-growing literature and a long track record of modeling success, including in a range of
social science applications. See the classic survey \cite{Diaconis} for definitions, an introduction to Markov chain theory, and a lively guide to applications.
The theoretical appeal of Markov chains comes from the convergence guarantees that they provide.
The fundamental theorem says that for any ergodic Markov chain there exists a unique stationary distribution, and that
iterating the transition step causes any initial state or probability distribution to converge to that steady state.
The number of steps that it takes to pass a threshold of closeness to the steady state is called the {\em mixing time};
in applications, it is extremely rare to be able to rigorously prove a bound on mixing time;
instead, scientific authors often appeal to a suite of heuristic convergence tests.
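For reference, the standard formalization of mixing time (not spelled out above) measures distance to stationarity in total variation: writing $P^t(x,\cdot)$ for the distribution after $t$ steps started from state $x$,
\[
t_{\mathrm{mix}}(\varepsilon) \;=\; \min\Bigl\{\, t \;:\; \max_{x} \bigl\| P^t(x,\cdot) - \pi \bigr\|_{TV} \le \varepsilon \,\Bigr\},
\]
conventionally evaluated at $\varepsilon = 1/4$.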
This paper is devoted to investigating Markov chains for a {\em global} exploration of the universe of valid redistricting plans.
From a mathematical perspective, the gold standard would be to define Markov chains for which we can ($1$) characterize the stationary distribution $\pi$ and ($2$) compute the mixing time.
(In most scientific applications, the stationary distribution is specified in advance through
the choice of an objective function and a Metropolis--Hastings or Gibbs sampler that weights states according to their scores.)
From a practical perspective in redistricting, confirming
mixing to a distribution with a simple closed-form description is neither necessary nor sufficient.
For the application, a gold standard might be ($1'$) explanation of the distributional
design and the weight that it places on particular kinds of districting plans, matched to the law and practice of redistricting,
and ($2'$) convergence heuristics and sensitivity analysis
that give researchers confidence in the robustness and replicability of their techniques.
Stronger sampling and convergence theorems are available for {\em reversible} Markov
chains: those for which the steady-state probability of being at state $P$ and transitioning to $Q$ equals the probability of
being at $Q$ and transitioning to $P$ for all pairs $P,Q$ from the state space.
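In symbols, writing $T(P \to Q)$ for the transition probability, reversibility (also called detailed balance) requires
\[
\pi(P)\, T(P \to Q) \;=\; \pi(Q)\, T(Q \to P)
\]
for every pair of states $P$ and $Q$ in the state space.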
In particular, a sequence of elegant theorems from the 1980s to now (Besag--Clifford \cite{besag1989generalized}, Chikina et al \cite{pegden,chikina2019practical}) shows that samples from reversible Markov chains admit conclusions about their likelihood of having been drawn from a
stationary distribution $\pi$ long before the sampling distribution approaches $\pi$.
For redistricting, this theory enables what we might call {\em local} search: while only sampling a relatively small
neighborhood, we
can draw conclusions about whether a plan has properties that are typical of random draws from $\pi$.
Importantly, these techniques can circumvent the mixing and convergence issues, but they must still contend
with issues of distributional design and sensitivity to user choice.
All previous MCMC methods we have encountered (for both local and global sampling)
are built on variations of the same proposal distribution that we call a ``flip step," for which each move
reassigns a single geographic unit from one district to a neighboring district.
This kind of proposal, for which we record several versions collectively denoted as {\sffamily Flip}\xspace, is relatively straightforward to implement and in its simplest form satisfies the properties needed for Markov chain theory to apply. We will elaborate serious disadvantages of basic {\sffamily Flip}\xspace chains, however, in an attempt to catch the literature up with practitioner knowledge: demonstrably slow mixing;
stationary distributions with undesirable qualitative properties; and additional complications in
response to standard MCMC variations like constraining, re-weighting, annealing, and tempering.
We will argue that an alternative Markov chain design we call \emph{recombination}, implemented with a spanning tree step, avoids these problems. We denote this alternative chain by {\sffamily ReCom}\xspace.
Both {\sffamily Flip}\xspace and {\sffamily ReCom}\xspace are discussed in detail in \S\ref{sec:proposals} below.
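To make the single-node reassignment concrete, here is a toy sketch (ours, not reproducing any of the cited implementations) of one Flip proposal on a dict-based partition; the BFS contiguity check and the simple reject-on-failure rule are illustrative simplifications.

```python
# A minimal sketch of one Flip proposal step on a graph partition,
# with the plan stored as a dict mapping node -> district label.
import random
from collections import deque

def is_connected(adj, nodes):
    """BFS check that `nodes` induces a connected subgraph of `adj`."""
    nodes = set(nodes)
    if not nodes:
        return False
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def flip_step(adj, assignment, rng):
    """Propose reassigning one boundary node to a neighboring district.
    Returns the new assignment, or the old one if the move breaks contiguity."""
    boundary = [(u, assignment[v])
                for u in adj for v in adj[u]
                if assignment[u] != assignment[v]]
    node, new_label = rng.choice(boundary)
    old_label = assignment[node]
    proposed = dict(assignment)
    proposed[node] = new_label
    old_district = [u for u in proposed if proposed[u] == old_label]
    if old_district and is_connected(adj, old_district):
        return proposed
    return assignment  # reject: move would empty or disconnect a district

# Toy example: a 2x2 grid split into two districts of two nodes each
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
assignment = {0: "A", 2: "A", 1: "B", 3: "B"}
new_assignment = flip_step(adj, assignment, random.Random(1))
```

A real flip chain layers constraints (population balance, compactness bounds) and acceptance probabilities on top of this bare proposal.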
In applications, MCMC runs are often carried out with burn time (i.e., discarding the first $m$ steps) and subsampling
(collecting every $r$ samples after that to create the ensemble). If $r$ is set to match the mixing time, then
the draws will be uncorrelated and the ensemble will be distributed according to the steady-state measure.
Experiments below will explore the choice of a suitable design for a {\sffamily Flip}\xspace chain.
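In code, burn-in and subsampling amount to simple slicing of the recorded chain states; the values of $m$ and $r$ below are arbitrary illustrations, not recommendations.

```python
# A minimal sketch of burn-in and subsampling on a stream of chain states.
def thin(samples, m, r):
    """Discard the first m samples, then keep every r-th sample after that."""
    return samples[m::r]

samples = list(range(100))            # stand-in for 100 recorded chain states
ensemble = thin(samples, m=20, r=10)  # keeps states 20, 30, ..., 90
```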
Some of the performance obstructions described below have led researchers to use extremely fast and/or parallelized
implementations, serious computing (or supercomputing) power, and various highly tuned or hybrid techniques that sometimes
sacrifice the Markov property entirely or make external replicability impossible.
On full-scale problems, a {\sffamily ReCom}\xspace chain with run length in the tens of thousands of steps produces ensembles
that pass many tests of quality, both in terms of convergence and in distributional design. Depending on the details of the data, this can be run in anywhere from hours to a few days on a standard laptop.
\section{Setting up the redistricting problem}
Before providing the technical details of {\sffamily Flip}\xspace and {\sffamily ReCom}\xspace, we set up the analysis of districting plans as a \emph{discrete} problem and explain how Markov chains can be designed to produce plans that comply with the rules of redistricting.
\subsection{Redistricting as a graph partition problem}\label{sec:redistrictingdiscrete}
The earliest understanding of pathologies that arise in redistricting was largely contour-driven. Starting with the original ``gerrymander,'' whose salamander-shaped boundary inspired a famous 1812 political cartoon, irregular district boundaries on a map were understood to be signals that unfair division had taken place.
Several contemporary authors now argue
for replacing the focus on contours with a discrete model \cite{duchin-tenner}, and in practice
the vast majority
of algorithmic approaches discussed above adopt the discrete model.
There are many reasons for this shift in perspective. In practice, a district is an aggregation of a finite number of
census blocks (defined by the Census Bureau every ten years) or precincts (defined by state, county, or local authorities).
District boundaries extremely rarely
cut through census blocks and typically preserve precincts,\footnote{For example,
the current Massachusetts plan splits just 1.5\% of precincts. But measuring the degree of precinct preservation is very
difficult in most states because there is no precinct shapefile publicly available.} making it reasonable to compare a proposed plan to alternatives built from block or precinct units. Furthermore, these discretizations give ample granularity; for instance,
most states have five to ten thousand precincts and several hundred thousand census blocks.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{Finalish_Figures/iowa.png}
\caption{Iowa is currently the only state whose congressional districts are made of whole counties.
The dual graph of Iowa's counties is shown here together with the current Iowa congressional districts. }
\label{fig:dualgraph}
\end{figure}
From the discrete perspective, our basic object is the {\em dual graph} to a geographic partition of the state into units.
We build this graph $G=(V,E)$ by designating
a vertex for each geographic unit (e.g., block or precinct) and placing edges in $E$ between those units that are geographically adjacent; Figure~\ref{fig:dualgraph} shows an example of this construction on the counties of Iowa.
With this formalism, a districting plan is a partition of the nodes of $V$ into subsets that induce connected components
of $G$. This way, redistricting can be understood as an instance of graph partitioning, a well-studied problem in combinatorics, applied math, and network science \cite{NASCIMENTO,Schaeffer}.
Equivalently, a districting plan is an assignment of each node to a district via a labeling map $V\to \{1,\dots,k\}$.
The nodes (and sometimes the edges) of $G$ are decorated with assorted data, especially the population
associated to each vertex, which is crucial for plan validity. Other attributes for vertices may include the assignment of
the unit to a municipality or a vector of its demographic statistics. Relevant attributes attached to edges might
include the length of the boundary shared between the two adjacent units.
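In code, this formalism is lightweight. The sketch below (our illustration; the units, adjacencies, and populations are invented) stores a plan as a node labeling, verifies that every district induces a connected subgraph using union-find, and computes the largest fractional deviation from ideal district population.

```python
# A minimal sketch of the dual-graph formalism: units are nodes, shared
# borders are edges, and a districting plan is a labeling of the nodes.
def districts_connected(edges, assignment):
    """Check that every district induces a connected subgraph (union-find)."""
    parent = {u: u for u in assignment}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for u, v in edges:
        if assignment[u] == assignment[v]:
            parent[find(u)] = find(v)  # merge same-district neighbors
    roots = {}
    for u, label in assignment.items():
        r = find(u)
        if roots.setdefault(label, r) != r:
            return False  # a district with two components
    return True

def max_population_deviation(populations, assignment, k):
    """Largest fractional deviation of any district from the ideal population."""
    ideal = sum(populations.values()) / k
    totals = {}
    for u, label in assignment.items():
        totals[label] = totals.get(label, 0) + populations[u]
    return max(abs(t - ideal) / ideal for t in totals.values())

# Toy "state": a path of four units split into two districts of two units each
edges = [("a", "b"), ("b", "c"), ("c", "d")]
assignment = {"a": 1, "b": 1, "c": 2, "d": 2}
populations = {"a": 100, "b": 90, "c": 110, "d": 100}
ok = districts_connected(edges, assignment)
dev = max_population_deviation(populations, assignment, k=2)
```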
\subsubsection{Generating seed plans}
To actually run our Markov chains, we need a valid initial state---or \emph{seed}---in addition to the proposal method.
Although in some situations we may want to start chains from the currently enacted plan,
we will need other seed plans
if we want to demonstrate that our ensembles are adequately independent of starting point.
Thus, it is useful to be able to construct starting plans that are at least contiguous and tolerably population balanced.
Flood-fill methods (see \S\ref{sec:computationalreview} above) or spanning tree methods (see \S\ref{sec:recom} below) can be used
for plan generation, and both are implemented in our codebase.
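A toy sketch of the greedy flood-fill strategy (ours; the restart logic, grid graph, and seed are illustrative) grows $k$ districts outward from random seed units until the whole graph is assigned:

```python
# A minimal sketch of flood-fill seed generation: grow k districts from
# random seed nodes by repeatedly annexing an unassigned neighbor.
import random

def flood_fill_seed(adj, k, rng, max_tries=100):
    nodes = sorted(adj)
    for _ in range(max_tries):
        seeds = rng.sample(nodes, k)
        assignment = {s: i for i, s in enumerate(seeds)}
        frontier = list(seeds)
        while frontier:
            u = rng.choice(frontier)
            free = [v for v in adj[u] if v not in assignment]
            if free:
                v = rng.choice(free)
                assignment[v] = assignment[u]  # annex a neighboring unit
                frontier.append(v)
            else:
                frontier.remove(u)  # u is surrounded; stop growing from it
        if len(assignment) == len(nodes):
            return assignment  # every unit assigned: a contiguous k-partition
    return None  # abandoned after repeated restarts

# Toy example: a 3x3 grid graph split into k=2 districts
adj = {(i, j): {(i + di, j + dj) for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                if 0 <= i + di < 3 and 0 <= j + dj < 3}
       for i in range(3) for j in range(3)}
plan = flood_fill_seed(adj, k=2, rng=random.Random(0))
```

Districts built this way are contiguous by construction, but population balance generally requires further adjustment, e.g., by running a chain from the seed.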
\subsection{Sampling from the space of valid plans}
\label{sec:svp}
Increasing availability of computational resources has fundamentally changed the analysis and design of districting plans
by making it possible to explore the space of valid districting plans much more efficiently and fully.
It is now clear that any literal reading of the requirements governing redistricting permits an enormous number of potential plans for each state, far too many to build by hand or to consider systematically. The space of valid plans only grows if we account for the many possible readings of the criteria.
To illustrate this, consider the redistricting rule present in ten states that dictates that state House districts should nest
perfectly inside state Senate districts, either two-to-one (AK, IA, IL, MN, MT, NV, OR, WY) or three-to-one (OH, WI).
A
constraining way to interpret this mandate would be to fix the House districts
in advance and admit only those Senate plans that group appropriate numbers of adjacent House districts.
Even under this
narrow interpretation, a perfect matching analysis indicates that there are still
6,156,723,718,225,577,984, or over $6\times 10^{18}$, ways to form valid state Senate plans just by pairing the current
House districts in Minnesota \cite{Alaska}.
The actual choice left to redistricters, who in reality control House and Senate lines simultaneously, is far more open,
and $10^{100}$ would not be an unreasonable guess.
\subsubsection{Operationalizing the rules}
\label{sec:operation}
Securing \emph{operational} versions of rules and priorities governing the redistricting process requires a sequence of modeling decisions, with major consequences for the properties of the ensemble.
Constitutional and statutory provisions governing redistricting are never precise enough to admit a single unambiguous mathematical interpretation.
We briefly survey the operationalization of important redistricting rules.
\begin{itemize}
\item \textbf{Population balance:} For each district,
we can limit its percentage deviation from the ideal size (state population divided by $k$, the number of districts).%
\footnote{The case law around tolerated population deviation is thorny and still evolving \cite[Chapter 1]{Realists}.
For years, the basis of apportionment has been the raw
population count from the decennial Census, but there are clear moves to change to a more restrictive population
basis, such as by citizenship.}\footnote{
Excessively tight requirements for population balance can spike the rejection rate of the Markov chain and impede
its efficiency. Even for Congressional districts, which are often balanced to near-perfect equality in enacted plans, a precinct-based ensemble with
$\le 1\%$ deviation can still provide a good comparator, because those plans typically can be quickly tuned by a mapmaker
at the block level without breaking their other measurable features.}
\item \textbf{Contiguity:} Most states require district contiguity by law, and it is the standard practice even when not formally required. But even contiguity has subtleties in practice, because of water, corner adjacency, and the presence of
disconnected pieces. Unfortunately, contiguity must be handled by building and cleaning dual graphs for each state
on a case-by-case basis.
\item \textbf{Compactness:} Most states have a ``compactness" rule preferring regular district shapes, but very few attempt
a definition, and several of the attempted definitions are unusable.\footnote{There are several standard scores in litigation,
especially an isoperimetric score (``Polsby-Popper") and a comparison to the circumscribed circle (``Reock"), each
one applied to single districts.
It is easy to critique these scores, which are readily seen to be underdefined, unstable, and inconsistent \cite{duchin-tenner, bar2019gerrymandering,barnes,deford_total_2018}.
In practice, compactness is almost everywhere ruled by the eyeball test.}
We will handle it in a mathematically natural manner for a discrete model: we count the number of {\em cut edges}
in a plan, i.e., the number of edges in the dual graph whose endpoints belong to different districts (see \S\ref{sec:theory}).
This gives a notion of the discrete perimeter of a plan, and it corresponds well
to informal visual standards of regular district shapes (the ``eyeball test" that is used in practice much more heavily
than any score).
\item \textbf{Splitting rules:} Many states express a preference for districting plans that ``respect" or ``preserve" areas that are larger than the basic units of the plan, such as counties, municipalities, and (also underdefined) {\em communities of interest}.
There is no consensus on best practices for quantifying the relationship of a plan to a sparse set of geographical boundary curves.
Simply counting the number of units split (e.g., counties touching more than one district), or employing an entropy-like splitting score, are two alternatives that have been used in prior studies \cite{Mattingly,VA-criteria}.
\item \textbf{Voting Rights Act (VRA):} The Voting Rights Act of 1965 is standing federal law that requires districts to be drawn to provide
qualifying minority groups with the opportunity to elect candidates of choice \cite[Ch3-5]{Realists}.
Here, a modeler might reasonably choose to create a comparator ensemble made up of new plans that provide at least
as many districts with a substantial minority share of voting age population as the previous plan.%
\footnote{Since the VRA legal test involves assessing ``the totality of the circumstances," including local histories of discrimination and patterns of racially polarized voting, this is extraordinarily difficult to model in a Markov chain.
However, the
percentage of a minority group in the voting age population is frequently used as a proxy.
For instance, in Virginia, there are two current Congressional districts with over 40\% Black Voting Age Population,
and a plausible comparator ensemble should contain many plans that preserve that property.}
\item \textbf{Neutrality:} Often state rules will dictate that certain considerations should not be taken into account
in the redistricting process, such as partisan data or incumbency status. This is easily
handled in algorithm design
by not recording or inputting associated data, like election results or incumbent addresses.
\end{itemize}
Finally, most of these criteria are subject to an additional decision about
\begin{itemize}
\item \textbf{Aggregation and combination:} Many standard metrics used to analyze districting plans (as described above) are computed on a district-by-district basis, without specifying a scheme to aggregate scores across districts
to make plans mutually comparable.%
\footnote{If for instance we use an $L^\infty$ or supremum norm to summarize the compactness scores of the individual districts, then
all but the worst district can be altered with no penalty. Choosing $L^1$ or $L^2$ aggregation takes all scores into account,
but to some extent allows better districts to cover for worse ones. Pegden has argued for $L^{-1}$ to heavily
penalize the worst abuses for scores measured on a $[0,1]$ scale\cite{pegden,pegden1}.}
A modeler with multiple objective functions must also decide whether to try to combine them into a fused
objective function, whether to threshold them at different levels, how to navigate a Pareto front of possible trade-offs, and so on.
\end{itemize}
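To make the aggregation choices concrete, here is a small illustrative sketch (ours, not drawn from any statute or cited study) that computes generalized power means of hypothetical per-district scores on a $(0,1]$ scale; $p=1,2,\infty$ correspond to the $L^1$, $L^2$, and $L^\infty$ summaries above, and $p=-1$ to the harmonic-type aggregation:

```python
def aggregate(scores, p):
    """Generalized power mean of per-district scores in (0, 1]."""
    n = len(scores)
    if p == float("inf"):
        return max(scores)  # only the extreme district matters
    return (sum(s ** p for s in scores) / n) ** (1.0 / p)

# Two hypothetical plans with identical L^1 (average) scores.
plan_a = [0.5, 0.5, 0.5]
plan_b = [0.9, 0.5, 0.1]  # one badly scoring district

print(aggregate(plan_a, 1), aggregate(plan_b, 1))    # same average
print(aggregate(plan_a, -1), aggregate(plan_b, -1))  # p = -1 punishes the 0.1
```

Under $L^1$ the two plans tie, while the $p=-1$ aggregate drops sharply for the plan containing one badly scoring district, matching the motivation for heavily penalizing the worst abuses.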
Our discussion in \S\ref{sec:experiments} provides details of how we approached some of the decisions above in our experiments.
\section{The flip and recombination chains}
\label{sec:proposals}
\subsection{Notation}\label{sec:notation}
Given a dual graph $G=(V,E)$, a $k$--partition of $G$ is a collection of disjoint subsets $P=\{V_1, V_2 \ldots, V_k\}$ such that $\bigsqcup V_i = V$. The full set of $k$--partitions of $G$ will be denoted $\mathcal{P}_k(G)$. An element of $\mathcal P_k(G)$
may also be called a districting plan, or simply a plan, and the block of the partition with vertex set $V_i$ is sometimes
called the $i$th district of the plan.
We may abuse notation by using the same symbol $P$ to denote the labeling function
$P:V\to \{1,\dots,k\}$.
That is, $P(u) = i$ means that $u\in V_i$ for the plan $P$.
In a further notational shortcut, we will sometimes write $P(u)=V_i$
to emphasize that the labels index districts.
This labeling function allows us to represent the set of cut edges in the plan as
$\partial P = \{(u,v)\in E : \ P(u)\neq P(v)\}$. We denote the set of boundary nodes by
$\partial_V P = \{u \in V : u \in e \textrm{ for some } e \in \partial P \}$.
In the dual graphs derived from real-world data, our nodes are weighted with populations or other demographic data, which we represent with functions $V\to\mathbb{R}$.
This notation allows us to efficiently express constraints on the districts. For example, contiguity can be enforced by requiring that the induced subgraph on each $V_i$ is connected. The cut edge count described above as a measure of compactness
is written $|\partial P|$. A condition that bounds population deviation can be written as
$$(1-\varepsilon) \frac{\sum_{v\in V} w(v) }{k}
\leq \sum_{v\in V_i} w(v) \leq (1+\varepsilon)\frac{\sum_{v\in V} w(v) }{k}.$$
For a given analysis or experiment, once the constraints have been set and fixed,
we will make use of a function $C:\mathcal{P}_k(G)\to \{\texttt{True}, \texttt{False}\}$ to denote the validity check.
This avoids cumbersome notation to make explicit all of the individual constraints.
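These definitions translate directly into code. The following minimal sketch (ours) computes $\partial P$, $\partial_V P$, and a population-balance validity check for a small grid graph with unit node weights:

```python
from itertools import product

def grid_graph(n):
    """Nodes and edges of an n x n grid; nodes are (row, col) pairs."""
    nodes = list(product(range(n), repeat=2))
    edges = [((r, c), (r, c + 1)) for r, c in nodes if c + 1 < n]
    edges += [((r, c), (r + 1, c)) for r, c in nodes if r + 1 < n]
    return nodes, edges

def cut_edges(edges, plan):
    """Edges whose endpoints lie in different districts."""
    return [(u, v) for u, v in edges if plan[u] != plan[v]]

def boundary_nodes(edges, plan):
    """Nodes incident to at least one cut edge."""
    return {x for e in cut_edges(edges, plan) for x in e}

def balanced(plan, w, k, eps):
    """Validity check: every district's weight within eps of the ideal size."""
    ideal = sum(w.values()) / k
    totals = {}
    for v, d in plan.items():
        totals[d] = totals.get(d, 0) + w[v]
    return len(totals) == k and all(
        (1 - eps) * ideal <= t <= (1 + eps) * ideal for t in totals.values())

nodes, edges = grid_graph(4)
plan = {(r, c): 0 if c < 2 else 1 for r, c in nodes}  # vertical split
w = {v: 1 for v in nodes}                             # unit populations
print(len(cut_edges(edges, plan)),       # 4: a straight cut of the 4x4 grid
      len(boundary_nodes(edges, plan)),  # 8 boundary nodes
      balanced(plan, w, 2, 0.05))        # True: exact 8-8 split
```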
We next turn to setting out proposal methods for comparison.
A proposal method is a procedure for transitioning between states of $\mathcal P_k(G)$ according to a proposal
distribution.
Formally, each $X_P$ is a probability vector in $[0,1]^{ \mathcal{P}_k(G)}$ with coordinates summing to one, describing the transition probabilities out of the state $P$.
Since $\mathcal P_k(G)$ is a gigantic but finite state space, the proposal distribution can be viewed as a stochastic matrix with rows and columns indexed by the states $P$, such that the $(P,Q)$ entry $X_P(Q)$ is the probability of transitioning from $P$ to $Q$
in a single move.
The resulting process is a Markov chain: each successive state is drawn according to $X_P$, where $P$ is the current state.
Since these matrices are far too large to build, we may prefer to think of the proposal distribution as a stochastic algorithm for modifying the assignment of some subset of $V$. This latter perspective does not require computing transition probabilities explicitly, but rather leaves them implicit in the stochastic algorithm for modifying a partition.
In this section, we introduce the main {\sffamily Flip}\xspace and {\sffamily ReCom}\xspace proposals analyzed in the paper and describe some of their qualitative properties. We also devote some attention to the spanning tree method that we employ in our empirical analysis.
\subsection{{\sffamily Flip}\xspace proposals}\label{sec:flipproposal}
\subsubsection{Node choice, contiguity, rejection sampling}
At its simplest, a {\sffamily Flip}\xspace proposal changes the assignment of a single node at each step in the chain in a manner that
preserves the contiguity of the plan. See Figure \ref{fig:flipschematic} for a sequence of steps in this type of Markov chain and a randomly generated 2-partition of a $50\times 50$ grid, representative of the types of partitions generated by {\sffamily Flip}\xspace and its variants. This procedure provides a convenient vehicle for exploring the complexity of the partition sampling problem.
\begin{figure}[!h]
\centering
\subfloat[Sequence of four flip steps]{\includegraphics[width = 5in]{New_Figures/flip2.png}}\\
\subfloat[Outcome of 500,000 flip steps]{\includegraphics[width=2.3in]{Finalish_Figures/Snakes0.png}
\includegraphics[width=2.3in]{Finalish_Figures/Snakes50.png}}
\caption{At each flip step, a single node on the boundary changes assignment, preserving contiguity.
This is illustrated schematically on a $5\times 4$ grid and then the end state of a long run is depicted on a $50\times 50$ grid.}
\label{fig:flipschematic}
\end{figure}
To implement {\sffamily Flip}\xspace, we must decide how to select a node whose assignment will change, for which we define an intermediate process called \hbox{\sffamily Node Choice}\xspace.
To ensure contiguity, it is intuitive to begin by choosing a vertex of $\partial_V P$ or an edge of $\partial P$, but because
degrees vary, this can introduce non-uniformity to the process.
To construct a {\em reversible} Markov chain we follow \cite{pegden} and instead sample uniformly from the set of (node, district) pairs $(u,V_i)$ where $u\in \partial_V P$ and there exists a cut edge $(u,v)\in \partial P$ with $P(v) = V_i$. This procedure amounts to making a uniform choice among the partitions that differ only by the assignment of a single boundary node. Pseudocode for this method is presented in Algorithm \ref{alg:bflip}.
The associated Markov chain has transition probabilities given by
$$X_P(Q) = \begin{cases}\frac1{|\{(v,P(w)):(v,w)\in \partial P \}|}& |\{u\in \partial_V P : P(u) \neq Q(u) \}|=1\ \textrm{and}\ |\{u\notin \partial_V P : P(u) \neq Q(u) \}|=0;\\
0&\textrm{otherwise}.
\end{cases} $$
This can be interpreted as a simple random walk on $\mathcal{P}_k(G)$ where two partitions are connected if they differ at a single boundary node. Thus, the Markov chain is reversible. Its stationary distribution is non-uniform,
since each plan is weighted proportionally to the number of (node, district) pairs in its boundary.
\noindent\begin{minipage}{\textwidth}
\centering
\begin{minipage}{.45\textwidth}
\centering
\begin{algorithm}[H]
\caption{\texttt{Node Choice}}\label{alg:bflip}
\textbf{Input:} Dual graph $G=(V,E)$ and current partition $P$\\
\textbf{Output:} A new partition $Q$\\
\hfill \\
\SetAlgoLined
{\bf Select:} A (node, district) pair $(u, V_i)$ uniformly from $\{(v,P(w)):(v,w)\in \partial P \}$\\
{\bf Define:} $Q(v)=\begin{cases} V_i & \textrm{if}\
u = v\\
P(v)&\textrm{otherwise.}
\end{cases}$\\
\hfill \\
{\bf Return:} $Q$
\end{algorithm}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\begin{algorithm}[H]
\caption{\texttt{Flip}}\label{alg:flip}
\textbf{Input:} Dual graph $G=(V,E)$ and the current partition $P$\\
\textbf{Output:} A new partition $Q$\\
\hfill \\
\SetAlgoLined
{\bf Initialize:} \textit{Allowed} = \texttt{False}\\
\While{\textit{Allowed} = \texttt{False}}{
$Q = \hbox{\sffamily Node Choice}\xspace(G,P)$ \\
$\textit{Allowed} = C(Q)$\\
}
{\bf Return:} $Q$
\end{algorithm}
\end{minipage}
\end{minipage}
At each step, the \hbox{\sffamily Node Choice}\xspace algorithm grows one district by a node and shrinks another.
One can quickly verify that a \hbox{\sffamily Node Choice}\xspace step maintains contiguity
in the district that grows but may break contiguity in the district that shrinks.
In fact, after many steps it is likely to produce a plan with no contiguous districts at all.
To address this, we adopt a rejection sampling approach, only accepting contiguous proposals.
This produces our basic {\sffamily Flip}\xspace chain (see Algorithm \ref{alg:flip} for pseudocode and Figures \ref{fig:bchoice},\ref{fig:flipschematic} for visuals).
The rejection setup does not break reversibility of the associated Markov chain, since it now amounts
to a simple random walk on the restricted state space.
Rejection sampling is practical because it is far more efficient to evaluate whether or not a particular plan is permissible than to determine the full set of adjacent plans at each step. Both the size of the state space and the relatively expensive computations that are required at the scale of real-world dual graphs contribute to this issue. If the proposal fails contiguity or another constraint check, we simply generate new proposed plans from the previous state until one
passes the check.
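A compact Python rendering of Algorithms \ref{alg:bflip} and \ref{alg:flip} looks as follows (our sketch: a simple DFS connectivity test plus a loose population bound stand in for the validity check $C$):

```python
import random

def connected_districts(adj, plan):
    """C(Q): every district induces a connected subgraph."""
    parts = {}
    for v, d in plan.items():
        parts.setdefault(d, set()).add(v)
    for part in parts.values():
        start = next(iter(part))
        seen, stack = {start}, [start]
        while stack:
            for w in adj[stack.pop()]:
                if w in part and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) != len(part):
            return False
    return True

def flip_step(adj, plan, is_valid, rng):
    """One Flip step: uniform over boundary (node, district) pairs,
    re-proposed until the validity check passes (rejection sampling)."""
    pairs = sorted({(u, plan[v]) for u in adj for v in adj[u]
                    if plan[u] != plan[v]})
    while True:
        u, d = rng.choice(pairs)
        proposal = dict(plan)
        proposal[u] = d
        if is_valid(proposal):
            return proposal

# Demo on a 4x4 grid, two districts split down the middle.
cells = {(r, c) for r in range(4) for c in range(4)}
adj = {v: [(v[0] + dr, v[1] + dc)
           for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
           if (v[0] + dr, v[1] + dc) in cells] for v in cells}
plan = {(r, c): 0 if c < 2 else 1 for r, c in cells}
ok = lambda q: connected_districts(adj, q) and \
    all(4 <= sum(1 for v in q if q[v] == d) <= 12 for d in (0, 1))
rng = random.Random(0)
for _ in range(50):
    plan = flip_step(adj, plan, ok, rng)
print(connected_districts(adj, plan))  # True: contiguity preserved
```

Because each proposal changes one assignment, the boundary-pair set could be updated incrementally rather than recomputed; the version above favors clarity over speed.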
These methods have the advantage of explainability in court and step-by-step efficiency for computational purposes, since each new proposed plan is only a small perturbation of the previous one.
The same property that allows this apparent computational advantage, however,
also makes
it difficult for {\sffamily Flip}\xspace-type proposals to explore the space of permissible plans efficiently. Figure \ref{fig:bchoice} shows that after 1 million steps the structure of the initial state is still clearly visible, and we will discuss evidence below that 1 billion steps is enough to improve matters significantly, but not to the point of mixing. Thus, the actual computational advantage is less clear, as it may take a substantially larger number of steps of the chain to provide reliable samples.
This issue is exacerbated when legal criteria impose strict constraints on the space of plans, which may easily cause disconnectedness under this proposal.\footnote{A user can choose
to ensure connectivity by relaxing even hard legal constraints during the run and winnowing to a valid sample later, which requires additional choices and tuning.}
Attempts have been made to address this mixing issue in practice with simulated annealing or parallel tempering \cite{herschlag_evaluating_2017,herschlag_quantifying_2018,Fifield_A_2018}, but we will show in \S\ref{sec:experiments}
that on the scale of real-world problems, these fixes do not immediately overcome the fundamental barrier to successful sampling
that is caused by the combination of extremely slow mixing and the domination of distended shapes.
\subsubsection{Uniformizing}
For practical interpretation of a sample, it can be useful to have a simple description of the sampling distribution.
Although Algorithm \ref{alg:flip} does not have a uniform steady state distribution, it is possible to re-weight the transition probabilities to obtain a {\em uniform} distribution, as in the work of Chikina--Frieze--Pegden \cite{pegden}. This can be done by adding self-loops to each plan in the state space to equalize the degree; the resulting technique is given in Algorithm \ref{alg:uflip}.
To see that this gives the uniform distribution over the permissible partitions of $\mathcal{P}_k(G)$, we note that with $M$ the maximum node degree in $G$ (so that $M\cdot|V|$ bounds the number of partitions adjacent to any state) and
$p= \dfrac{|\{(u,P(v)) : (u,v) \in \partial P\}|}{M\cdot|V|}$ we have
$$X_P(Q) = \begin{cases} 1-p & Q=P\\
\frac{1}{M\cdot|V|} & |\{u\in V : P(u) \neq Q(u) \}|=1\\
0&\textrm{otherwise},
\end{cases} $$
so every neighboring partition receives the same transition probability.
Continuing to follow
Chikina et al.,
we can accelerate the \texttt{Uniform Flip} algorithm without changing its proposal distribution by employing a function that returns an appropriate number of steps to wait at the current state before transitioning, so as to simulate the
expected self-loops traversed before a non-loop edge is chosen.
This variant is in Algorithm \ref{alg:uwflip}. Since the geometric variable samples the waiting time before selecting
a node from $\partial_V P$, this recovers the same walk and distribution with many fewer calls to the proposal function.
\noindent\begin{minipage}{\textwidth}
\centering
\begin{minipage}{.45\textwidth}
\centering
\begin{algorithm}[H]
\caption{\texttt{Uniform Flip}}\label{alg:uflip}
\textbf{Input:} Dual graph $G=(V,E)$ and current partition $P$\\
\textbf{Output:} New partition $Q$\\
\hfill \\
\SetAlgoLined
{\bf Initialize:} $p= \dfrac{|\{(u,P(v)): (u,v) \in \partial P\}|}{M\cdot|V|}$\\
\hfill\\
\eIf{\textrm{Bernoulli}(p) = 0}{
{\bf Return:} P}
{
\textit{Allowed} = \texttt{False}\\
\While{\textit{Allowed} = \texttt{False}}{
$Q = \hbox{\sffamily Node Choice}\xspace(G,P)$ \\
$\textit{Allowed} = C(Q)$\\
}
{\bf Return:} $Q$
}
\end{algorithm}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\begin{algorithm}[H]
\caption{\texttt{Uniform Flip (Fast)}}\label{alg:uwflip}
\textbf{Input:} Dual graph $G=(V,E)$ and current partition $P$\\
\textbf{Output:} Number of steps to wait in the current state ($\sigma$) and next partition ($Q$)\\
\hfill \\
\hfill\\
\SetAlgoLined
{\bf Initialize:} $p= \dfrac{|\{(u,P(v)): (u,v) \in \partial P\}|}{M\cdot|V|}$\\
\vspace{.42em}
$\sigma = \textrm{Geometric}(p)$
\hfill\\
\hfill\\
$Q = \hbox{\sffamily Node Choice}\xspace(G,P)$ \\
\eIf{$C(Q) = \texttt{False}$}{{\bf Return:} $(\sigma, P)$\\}{{\bf Return:} $(\sigma, Q)$}
\end{algorithm}
\end{minipage}
\end{minipage}
\vspace{1em}
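The waiting-time trick can be sketched in a few lines (our illustration; here \texttt{geometric} samples, by inversion, the number of self-loops before a move is attempted with success probability $p$, and $M\cdot|V|$ is taken as an upper bound on the number of partitions adjacent to any state):

```python
import math
import random

def geometric(p, rng):
    """Sample the number of self-loops before the first success (prob. p)."""
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

def uniform_flip_fast(adj, plan, is_valid, max_degree, rng):
    """One accelerated Uniform Flip move, returning (wait, next_plan).
    Weighting each returned plan by wait + 1 reproduces the lazy,
    degree-equalized chain without simulating its self-loops one by one."""
    pairs = sorted({(u, plan[v]) for u in adj for v in adj[u]
                    if plan[u] != plan[v]})
    p = len(pairs) / (max_degree * len(adj))  # probability of leaving the state
    wait = geometric(p, rng)
    u, d = rng.choice(pairs)
    proposal = dict(plan)
    proposal[u] = d
    return (wait, proposal) if is_valid(proposal) else (wait, plan)

# Demo: one accelerated move on a 4x4 grid with a vertical two-district split.
cells = {(r, c) for r in range(4) for c in range(4)}
adj = {v: [(v[0] + dr, v[1] + dc)
           for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
           if (v[0] + dr, v[1] + dc) in cells] for v in cells}
plan = {(r, c): 0 if c < 2 else 1 for r, c in cells}
wait, nxt = uniform_flip_fast(adj, plan, lambda q: True, 4, random.Random(1))
print(wait, sum(1 for v in cells if nxt[v] != plan[v]))
```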
On the other hand, attempting to sample from the uniform distribution causes problems for at least two reasons.
First, sampling from uniform distributions over partitions runs into complexity obstructions (see \S\ref{sec:RP=NP} below), implying as a corollary that we should not expect these chains to mix rapidly.
And even though abstract complexity results do not always dictate practical performance, in practice we will observe that
extremely long runs are needed to produce substantial change in the map.
We will demonstrate slow mixing on problems approximating real scale by showing that even the projection to summary statistics remains strongly correlated with the initial state (Figure~\ref{fig:grid_plots}).
Secondly, even if we had a uniform sampling oracle, the distribution is massively concentrated in non-compact plans:
generic connected partitions are remarkably snaky, with long tendrils and complex boundaries (see \S\ref{sec:fractal}).
The erratic shapes typical of flip ensembles are undesirable from the perspective of districting, which places a premium on well-behaved boundaries. This also means that it is difficult for these chains to move effectively in the state space when compactness constraints are enforced, since generic steps increase the boundary length, leading to high rejection probabilities or disconnected state spaces.
We evaluate some standard techniques for ameliorating this issue in our experiments below.
Correcting the shape problem is not straightforward and introduces a collection of parameters that interact in
complicated ways with the other districting rules and criteria.
\subsection{{\sffamily ReCom}\xspace proposals}\label{sec:recom}
The slow mixing and poor qualitative behavior
of the {\sffamily Flip}\xspace chain leads us to introduce a new Markov chain on partitions, which changes the assignment of many vertices at once while preserving contiguity. Our new proposal is more computationally costly than {\sffamily Flip}\xspace at each step in the Markov chain, but this tradeoff is net favorable thanks to superior mixing and distributional properties.
At each step of our new chain, we select a number of districts of the current plan and form the induced subgraph of the dual graph on the nodes of those districts.
We then partition this new region according to an algorithm that preserves contiguity of the districts. We call this procedure {\em recombination} ({\sffamily ReCom}\xspace), motivated by the biological metaphor of recombining genetic information. A general version of this approach is summarized in Algorithm \ref{alg:generalrecom}; Figure \ref{fig:treerecom} shows a schematic of a single step with this proposal.
\begin{algorithm}[!h]
\caption{\texttt{Recombination (General)}}\label{alg:generalrecom}
\textbf{Input:} Dual graph $G=(V,E)$, the current partition $P$, the number of districts to merge $\ell$\\
\textbf{Output:} The next partition $Q$\\
\hfill \\
\SetAlgoLined
Select $\ell$ districts $W_1, W_2,\ldots, W_\ell$ from $P$.\\
Form the induced subgraph $H$ of $G$ on the nodes of $W=\bigcup_{i=1}^\ell W_i$.\\
Create a partition $R=\{U_1, U_2, \ldots, U_\ell\}$ of $H$\\
Define $Q(v) = \begin{cases} R(v) & \textrm{if}\ v\in H\\
P(v)&\textrm{otherwise}
\end{cases}$
\hfill\\
{\bf Return:} $Q$
\end{algorithm}
The {\sffamily ReCom}\xspace procedure in Algorithm~\ref{alg:generalrecom} is extremely general. There are two algorithmic design decisions that are required to specify the details of a {\sffamily ReCom}\xspace chain:
\begin{itemize}
\item The first parameter in the {\sffamily ReCom}\xspace method is how to \textbf{choose which districts are merged} at each step. By fixing the partitioning method, we can create entirely new plans as in \S\ref{sec:redistrictingdiscrete} by merging all of the districts at each step ($\ell=k$).
For most of our use cases, we work at the other extreme, taking two districts at a time ($\ell=2$), and we
select our pair of adjacent districts to be merged proportionally to the length of the boundary between them, which improves compactness quickly, as we will discuss in \S\ref{sec:fractal}.\footnote{Bipartitioning is usually easier to study than $\ell$-partitioning for $\ell>2$. More importantly for this work,
the slow step in a recombination chain is the selection of a spanning tree. Drawing spanning trees for the
$\ell=k$ case (the full graph) is many times slower than for $\ell=2$ when $k$ is large, making bipartitioning a better
choice for chain efficiency.
This approach also generalizes in a second way: we can take a (maximal) matching on the dual graph of districts and bipartition each merged pair independently, taking advantage of the well-developed and effective theory of matchings.}
\item The choice of \textbf{(re)partitioning method} offers more freedom. Desirable features include full support
over contiguous partitions, ergodicity of the underlying chain, ability to control the distribution with respect to legal features (particularly population balance), computational efficiency, and ease of explanation in non-academic contexts like court cases and reform efforts. Potential examples include standard graph algorithms, like the spanning tree partitioning method we will introduce in \S\ref{sec:spanningtreerecom}, as well as methods based on minimum cuts, spectral clustering, or shortest paths between boundary points.
\end{itemize}
With these two choices, we have a well-defined Markov chain to study.
The experiments shown in the present paper are conducted with a spanning tree method of bipartitioning,
which we now describe.
\subsubsection{Spanning tree recombination}\label{sec:spanningtreerecom}
In all experiments below, we focus on a particular method of bipartitioning that creates a recombination chain
whose behavior is well-aligned to redistricting. This method merges two adjacent districts, selects a spanning
tree of the merged subgraph, and cuts it to form two new districts.
\begin{figure}[!h]
\centering
\includegraphics[width = 5in]{New_Figures/recom2.png}
\caption{A schematic of a {\sffamily ReCom}\xspace spanning tree step for a small grid with $k=2$ districts. Deleting the indicated edge from the spanning tree leaves two connected components with an equal number of nodes. }
\label{fig:treerecom}
\end{figure}
\begin{itemize}
\item First, draw a spanning tree uniformly at random from among all of the spanning trees of the merged region. Our implementation uses the loop-erased random walk method of Wilson's algorithm \cite{Wilson}.\footnote{Wilson's algorithm
is notable in that it samples uniformly from all possible spanning trees in polynomial time.}
\item Next, seek an edge to cut that balances the population within the permitted tolerance. For an arbitrary spanning tree, it is not always possible to find such an edge, in which case we draw a new tree; this is another example of rejection sampling in our implementation. In practice, the rejection rate is low enough that this step runs efficiently. If there are multiple edges that could be cut to generate partitions with the desired tolerance, we sample uniformly from among them.
\end{itemize}
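Wilson's algorithm itself is short in code. A sketch (ours) using the standard next-pointer formulation, in which overwriting the successor of a revisited node performs the loop erasure implicitly:

```python
import random

def wilson_tree(nodes, adj, rng):
    """Uniform random spanning tree via Wilson's loop-erased random walks.
    Returns a parent map rooted at nodes[0]; tree edges are (u, parent[u])."""
    root = nodes[0]
    in_tree = {root}
    parent = {}
    for start in nodes[1:]:
        nxt = {}
        u = start
        while u not in in_tree:          # random walk until hitting the tree;
            nxt[u] = rng.choice(adj[u])  # revisits overwrite = loop erasure
            u = nxt[u]
        u = start
        while u not in in_tree:          # retrace the loop-erased path
            in_tree.add(u)
            parent[u] = nxt[u]
            u = nxt[u]
    return parent

# Sample a spanning tree of the 3x3 grid.
cells = [(r, c) for r in range(3) for c in range(3)]
cellset = set(cells)
adj = {v: [(v[0] + dr, v[1] + dc)
           for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
           if (v[0] + dr, v[1] + dc) in cellset] for v in cells}
tree = wilson_tree(cells, adj, random.Random(7))
print(len(tree))  # 8 edges: a spanning tree of 9 nodes
```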
Pseudocode for this technique is provided in Algorithm \ref{alg:treerecom}.
\begin{algorithm}[!h]
\caption{\texttt{ReCom} (Spanning tree bipartitioning)}\label{alg:treerecom}
\textbf{Input:} Dual graph $G=(V,E)$, the current partition $P$, population tolerance $\varepsilon$\\
\textbf{Output:} The next partition $Q$\\
\hfill \\
\SetAlgoLined
{\bf Select:} $(u,v)\in \partial P$ uniformly\\
Set $W_1=P(u)$ and $W_2=P(v)$\\
Form the induced subgraph $H$ of $G$ on the nodes of $W_1\cup W_2$.\\
{\bf Initialize:} Cuttable = \texttt{False}\\
\While{Cuttable = \texttt{False}}{
Sample a spanning tree $T$ of $H$\\
Let EdgeList = []\\
\For{edge in T}{
Let $T_1, T_2 = T \setminus edge$ \\
\If{$\bigl| |T_1|-|T_2| \bigr| <\varepsilon|T|$}{Add edge to EdgeList\\ Cuttable = \texttt{True}\\ }
}
}
Select cut uniformly from EdgeList\\
Let $R$ be the bipartition of $H$ into the two components of $T \setminus \textit{cut}$\\
Define $Q(v) = \begin{cases} R(v) & v\in H\\
P(v)&\textrm{otherwise}
\end{cases}$
\hfill\\
{\bf Return:} $Q$
\end{algorithm}
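In Python, the tree-cutting step of Algorithm \ref{alg:treerecom} can be sketched as follows (our simplification: node counts stand in for population, and a randomized Kruskal tree replaces the uniform Wilson sampler used in our implementation). The key trick is that accumulating subtree sizes from the leaves upward locates all balance edges in a single pass:

```python
import random

def random_spanning_tree(nodes, edges, rng):
    """Randomized Kruskal: a random (NOT uniformly distributed) spanning
    tree, returned as an adjacency map."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = {v: [] for v in nodes}
    pool = list(edges)
    rng.shuffle(pool)
    for u, v in pool:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
    return tree

def tree_bipartition(nodes, edges, eps, rng, max_tries=100):
    """Cut a random spanning tree at an edge leaving two components whose
    node counts differ by less than eps * |nodes| (rejection sampling)."""
    n = len(nodes)
    for _ in range(max_tries):
        tree = random_spanning_tree(nodes, edges, rng)
        root = nodes[0]
        par, order, stack = {root: None}, [], [root]
        while stack:                       # DFS to orient the tree
            u = stack.pop()
            order.append(u)
            for w in tree[u]:
                if w != par[u]:
                    par[w] = u
                    stack.append(w)
        size = {u: 1 for u in nodes}       # subtree sizes, leaves upward
        for u in reversed(order):
            if par[u] is not None:
                size[par[u]] += size[u]
        balance = [u for u in nodes if par[u] is not None
                   and abs(2 * size[u] - n) < eps * n]
        if balance:                        # uniform choice among balance edges
            cut = rng.choice(balance)
            side, stack = set(), [cut]
            while stack:                   # collect the subtree below the cut
                u = stack.pop()
                side.add(u)
                stack += [w for w in tree[u] if w != par[u] and w not in side]
            return side, set(nodes) - side
    raise RuntimeError("no balanced edge found")

cells = [(r, c) for r in range(4) for c in range(4)]
cellset = set(cells)
edges = [(v, (v[0] + dr, v[1] + dc)) for v in cells
         for dr, dc in ((0, 1), (1, 0)) if (v[0] + dr, v[1] + dc) in cellset]
a, b = tree_bipartition(cells, edges, 0.25, random.Random(3))
print(len(a), len(b))  # near-equal halves of the 4x4 grid
```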
A similar spanning tree approach to creating initial seeds is available: draw a spanning tree for the entire
graph $G$, then recursively seek edges to cut that leave one complementary component of appropriate population
for a district.
\section{Theoretical comparison}
\label{sec:theory}
Below, we will conduct experiments
that provide intuition for qualitative behavior of the {\sffamily Flip}\xspace and {\sffamily ReCom}\xspace
chains. However,
precise mathematical characterization of their stationary distributions appears to be extremely challenging and is the subject
of active research. In this section, we provide high-level explanations of the two main phenomena that can
be gleaned from experiments: {\sffamily ReCom}\xspace samples preferentially
from fairly compact districting plans while simple {\sffamily Flip}\xspace ensembles are composed of plans with long and winding boundaries;
and {\sffamily ReCom}\xspace seems to mix efficiently while {\sffamily Flip}\xspace mixes slowly.
\subsection{Distributional design: compactness}\label{sec:compactnessdesign}\label{sec:fractal}
``Compactness" is a vague but important term in redistricting: compact districts are those with
tamer or plumper shapes.
This can refer to having high area relative to perimeter, shorter boundary length, fewer spikes or necks or tentacles,
and so on. In this treatment, we focus on the discrete perimeter
as a way to measure compactness.
Recall from \S\ref{sec:notation} that
for a plan $P$ that partitions a graph $G=(V,E)$, we denote by $\partial P\subset E$ its set of cut edges, or the edges of $G$ whose endpoints are in different districts of $P$. A slight variant is
to count the number of boundary nodes $\partial_V P\subset V$ (those nodes at the endpoint of some cut edge).
There is a great deal of mathematical literature connected to combinatorial perimeter, from the minimum cut problem
to the Cheeger constant to expander graphs. Though we focus on the discrete compactness scores here,
a dizzying array of compactness metrics has been proposed in connection to redistricting, and the analysis below---that {\sffamily Flip}\xspace must contend with serious compactness problems---would apply to any reasonable score, as the figures illustrate.
\begin{figure}[!h]
\centering
\subfloat{\includegraphics[height=1.25in]{New_Figures/Section3/spectral/Spectral_0.png}}
\subfloat{\includegraphics[height=1.25in]{New_Figures/Section3/spectral/Spectral_1.png}} \subfloat{\includegraphics[height=1.25in]{New_Figures/Section3/spectral/Spectral_2.png}} \subfloat{\includegraphics[height=1.25in]{New_Figures/Section3/spectral/Spectral_3.png}}
\caption{The {\sffamily ReCom}\xspace proposal tends to produce compact or geometrically tame districts, with favorable isoperimetric ratios. Each of these plans was selected after 100 {\sffamily ReCom}\xspace steps starting from the same vertical-stripes partition. Unlike the {\sffamily Flip}\xspace samples, these partitions have relatively short boundaries in addition to displaying low correlation with the initial state.}
\label{fig:treeexample}
\end{figure}
The reason that the uniform distribution is so dominated by non-compact districts is a simple matter of numbers:
there are far more chaotic than regular partitions. As an illustration, consider bipartitioning an $n\times n$ square grid
into pieces of nearly the same number of nodes. If the budget of edges you are allowed to cut is
roughly $n$, there are a polynomial
number of ways to bipartition, but the number grows exponentially as you relax the limitation on the boundary size.
This exponential growth
also explains why the imposition of any strict limit on boundary length will leave almost
everything at or near the limit.
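This counting phenomenon is easy to verify exhaustively at toy scale. The sketch below (ours) enumerates every connected 8--8 bipartition of the $4\times 4$ grid and tallies them by number of cut edges; only a handful of partitions achieve the minimum boundary, while the counts grow rapidly as the boundary budget is relaxed:

```python
from itertools import combinations

N = 4
def neighbors(cell):
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
            if 0 <= r + dr < N and 0 <= c + dc < N]

def connected(cells):
    """DFS check that a set of grid cells induces a connected subgraph."""
    start = next(iter(cells))
    seen, stack = {start}, [start]
    while stack:
        for w in neighbors(stack.pop()):
            if w in cells and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(cells)

grid = [(r, c) for r in range(N) for c in range(N)]
tally = {}
# Fixing (0, 0) in the first block counts each unordered bipartition once.
for rest in combinations(grid[1:], 7):
    side = set(rest) | {(0, 0)}
    other = set(grid) - side
    if connected(side) and connected(other):
        cuts = sum(1 for u in side for v in neighbors(u) if v in other)
        tally[cuts] = tally.get(cuts, 0) + 1
print(dict(sorted(tally.items())))
```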
Spanning trees are a useful mechanism to produce contiguous partitions, since the deletion of any single edge from a tree leaves two connected subgraphs. Furthermore, the tendency of the spanning tree process will be to produce
districts without skinny necks or tentacles. To see this, consider the $k=2$ case first. The number of ways for the spanning tree step to produce a
bipartition of a graph $G$ into subgraphs $H_1$ and $H_2$ is the number of spanning trees of $H_1$ times the number
of spanning trees of $H_2$ times the number of edges between $H_1$ and $H_2$ that exist in $G$.\footnote{The idea that one can cut spanning trees to create partitions,
and that the resulting distribution will have factors proportional to the number of trees in a block, is a very natural one and appears for instance in the ArXiv note
\url{https://arxiv.org/pdf/1808.00050.pdf}.}
Now we consider why more compact districts are up-weighted by this process.
Suppose $G$ is a graph on $N$ nodes appearing as a connected subgraph of a square lattice.
Kirchhoff's remarkable
counting formula for spanning trees tells us that the precise number of spanning trees
of any graph on $N$ nodes is $\det(\Delta')$, where $\Delta'$ is any $(N-1)\times (N-1)$
minor of the combinatorial Laplacian $\Delta$ of $G$.
For instance, for an $n\times n$ grid, the number of spanning trees is asymptotic to $C^{n^2}=C^N$, where $C$ is a constant
whose value is roughly $3.21$ \cite{Temperley}. There are difficult mathematical theorems that suggest that
squares have more spanning trees than any other subgraphs of grids with the same number of nodes
\cite{Kenyon,Karlsson}. But this means that if a district has a simple ``neck" or ``tentacle" with just two or three nodes,
it could reduce the number of possible spanning trees by a factor of $C^2$ or $C^3$, making the district ten or thirty
times less likely to be selected by a spanning tree process. The long snaky districts that are observed in the {\sffamily Flip}\xspace ensembles
are nearly trees themselves, and are therefore dramatically down-weighted by {\sffamily ReCom}\xspace because they admit
far fewer spanning trees than their plumper cousins.
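At small scale, Kirchhoff's formula makes these weights directly computable. The sketch below (ours) compares three 16-node subgraphs of the square lattice---a $4\times 4$ block, a $2\times 8$ strip, and a $1\times 16$ path---via a minor of the graph Laplacian:

```python
import numpy as np

def grid_laplacian(rows, cols):
    """Combinatorial Laplacian of a rows x cols grid graph."""
    n = rows * cols
    L = np.zeros((n, n))
    def idx(r, c):
        return r * cols + c
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    i, j = idx(r, c), idx(rr, cc)
                    L[i, j] -= 1
                    L[j, i] -= 1
                    L[i, i] += 1
                    L[j, j] += 1
    return L

def spanning_trees(L):
    """Matrix-tree theorem: any (n-1) x (n-1) minor of the Laplacian."""
    return round(np.linalg.det(L[1:, 1:]))

square = spanning_trees(grid_laplacian(4, 4))   # plump:     100352 trees
strip = spanning_trees(grid_laplacian(2, 8))    # elongated:  10864 trees
path = spanning_trees(grid_laplacian(1, 16))    # a "snake": exactly 1 tree
print(square, strip, path)
```

On the same 16 nodes, the square admits roughly ten times as many trees as the strip, and the fully snaky path admits just one, so a spanning tree process would almost never select the path shape.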
For example, the initial partition of the $50\times 50$ grid in Figure~\ref{fig:flipschematic} has roughly $10^{1210}$ spanning
trees that project to it while the final partition has roughly $10^{282}$. That means that the tame partition is over
$10^{900}$ times more likely to be selected by a spanning tree {\sffamily ReCom}\xspace step than the snaky partition, while
uniform {\sffamily Flip}\xspace weights them exactly the same.%
\footnote{A similar argument should be applicable to $k>2$ districts since the basic {\sffamily ReCom}\xspace move handles them two at a time.}
\subsection{Complexity and mixing}\label{sec:RP=NP}
Flip distributions and uniform distributions have another marked disadvantage for sampling: computational
intractability. In the study of computational complexity, $P \subseteq RP \subseteq NP$ are complexity classes
(polynomial time, randomized polynomial time, and nondeterministic polynomial time), and it is widely believed that $P = RP$ and $RP \neq NP$.
Recent theoretical work of DeFord--Najt--Solomon \cite{lorenzo}
shows that flip and uniform flip procedures mix exponentially slowly on several families of graphs, including
planar triangulations of bounded degree.
The authors also show that sampling proportionally to $x^{|\partial P|}$ for any $0<x\le 1$ is intractable, in the sense that an efficient solution would imply $RP=NP$. Note that the $x=1$ case covers uniform sampling.
This complexity analysis implies that methods targeting the uniform distribution, and natural variants weighted to favor shorter boundary lengths, are likely to face complexity obstacles, particularly in worst-case scenarios, raising the need for reassurance about the quality of sampling.
Our experiments in \S\ref{sec:experiments} highlight some of these challenges in a practical setting by showing that
{\sffamily Flip}\xspace ensembles continue to give unstable results---with respect to starting point, run length, and summary statistics---at
lengths in the many millions. Practitioners must opt for fast implementations and very large subsampling time; even then, the flip approach requires dozens of tuning decisions, which undermines any sense in which the associated stationary distribution
is canonical.
Unlike {\sffamily Flip}\xspace, the {\sffamily ReCom}\xspace chain is designed so
that each step completely destroys the boundary between two districts, in the sense that the previous pairwise boundary has no impact on the next step. As there are at most $\binom{k}{2}$ boundaries
in a given $k$-partition, this observation suggests that we can lose {\em most} memory
of our starting point in a number of steps that is polynomial in $k$ and does not depend on $n$ at all.
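Concretely, a single recombination step can be sketched as follows. This is a simplified illustration rather than the GerryChain implementation: the spanning tree is drawn by shuffling edges into a Kruskal-style union-find, which only approximates the uniform spanning-tree distribution, and the toy graph, populations, and tolerance are hypothetical.

```python
import random

def tree_side(tree, u, v):
    """Nodes on u's side of the tree after deleting tree edge (u, v)."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in tree[x]:
            if {x, y} == {u, v}:
                continue  # skip the deleted edge
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def recom_step(edges, dist_a, dist_b, pop, tol=0.05):
    """One simplified ReCom step: merge two districts, draw a random
    spanning tree of the merged subgraph, and cut a tree edge whose
    two sides are population-balanced (within tol of half the total)."""
    nodes = dist_a | dist_b
    sub = [e for e in edges if e[0] in nodes and e[1] in nodes]
    random.shuffle(sub)
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = {v: [] for v in nodes}
    tree_edges = []
    for u, v in sub:  # Kruskal-style random spanning tree
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
            tree_edges.append((u, v))
    total = sum(pop[v] for v in nodes)
    for u, v in tree_edges:
        side = tree_side(tree, u, v)
        if abs(sum(pop[x] for x in side) - total / 2) <= tol * total:
            return side, nodes - side
    return dist_a, dist_b  # no balanced cut found; resample in practice

# Toy example: a 2x2 grid with unit populations, split left/right.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
new_a, new_b = recom_step(edges, {0, 2}, {1, 3}, {v: 1 for v in range(4)}, tol=0)
```

Note that the returned pair of districts carries no memory of the previous boundary between the merged pair, which is the source of the memory-loss heuristic above.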
The size of the full state space of balanced $k$-partitions of an $n\times n$ grid is easily seen to be larger than exponential in $n$.
Based on a mixture of experiments and theoretical exploration, we conjecture that the full {\sffamily ReCom}\xspace diameter of the state space---the most steps that might be required to connect any two partitions---is sublinear (in fact, logarithmic) in $n$.
We further conjecture that {\sffamily ReCom}\xspace is rapidly mixing (in the technical sense) on this family of examples,
with mixing time at worst polynomial in $n$.
(By contrast, we expect that the {\sffamily Flip}\xspace diameter of the state space, and its mixing time, grow exponentially or faster
in $n$.)
The Markov chain literature has examples of processes on grids with constant scaling behavior, such
as the Square Lattice Shuffle \cite{haastad2006square}.
That chain has arrangements of $n^2$ different objects in an $n\times n$ grid as its set of states; a move consists of randomly
permuting the elements of each row, then of each column---or just one of those, then transposing.
Its mixing time is constant, i.e., independent of $n$.
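For concreteness, one move of that chain can be written in a few lines (a NumPy sketch of the row-then-column variant; the transpose variant mentioned above is omitted):

```python
import numpy as np

def square_lattice_shuffle(grid, rng):
    """One Square Lattice Shuffle move: independently permute the
    entries within each row, then within each column."""
    grid = grid.copy()
    for r in range(grid.shape[0]):
        grid[r, :] = rng.permutation(grid[r, :])
    for c in range(grid.shape[1]):
        grid[:, c] = rng.permutation(grid[:, c])
    return grid

rng = np.random.default_rng(0)
state = square_lattice_shuffle(np.arange(16).reshape(4, 4), rng)
```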
Chains with logarithmic mixing time are common in statistical mechanics: a typical fast-mixing model, like the discrete hard-core model at high temperature, mixes in time $n \log n$ with local moves (because it essentially reduces to the classic coupon collector problem), but just $\log n$ with global moves. The global nature of {\sffamily ReCom}\xspace moves leaves open the possibility
of this level of efficiency.
Our experiments below support the intuition that the time needed for effective sampling has moderate growth; tens of thousands
of recombination steps give stable results on practical-scale problems whether we work with the roughly
9000 precincts of Pennsylvania or the roughly 100,000 census blocks in our Virginia experiments. Note that these observations do not contradict the theoretical obstructions in \cite{lorenzo}, since {\sffamily ReCom}\xspace is not designed to target the uniform distribution or
any other distribution known to be intractable.
While {\sffamily ReCom}\xspace is decidedly nonuniform, the arguments in \S\ref{sec:compactnessdesign} indicate that this nonuniformity is desirable, as the chain preferentially samples from plans that comport with traditional districting principles.
\section{Experimental comparison}
\label{sec:experiments}
In this section, we will run experiments on the standard toy examples for graph problems, $n\times n$ grids, as well as on empirical dual graphs generated from census data. The real-world graphs can be large but they share key properties with lattices---they tend to admit planar embeddings, with most faces triangles or squares. Figure~\ref{fig:dgsquares} shows the state of Missouri at four different levels of census geography, providing good examples of the characteristic structures we see in our applications.
\begin{figure}[!h]
\centering
\subfloat[County]{\includegraphics[height=1.23in,width=1.5in]{Finalish_Figures/51Census/MOCO.png}}\quad
\subfloat[County Subunit
]{\includegraphics[height=1.23in,width=1.5in]{Finalish_Figures/51Census/MOCS.png}}\quad
\subfloat[Census Tract
]{\includegraphics[height=1.23in,width=1.5in]{Finalish_Figures/51Census/MOTR.png}}\quad
\subfloat[Census Block
]{\includegraphics[height=1.23in,width=1.5in]{Finalish_Figures/51Census/MOBK.png}}
\caption{Four dual graphs for Missouri at different levels of geography in the Census hierarchy. The graphs have 115, 1,395, 1,393, and 343,565 nodes respectively. }
\label{fig:dgsquares}
\end{figure}
All of our experiments were carried out using the {\sf GerryChain} software \cite{gerrychain}, with additional
source code available
for inspection \cite{replication}. The state geographic and demographic data was obtained from the census TIGER/Line geography program accessed through NHGIS \cite{nhgis}.
\subsection{Sampling distributions, with and without tight constraints}
We begin by noting that the tendency of {\sffamily Flip}\xspace chains to draw non-compact plans is not limited to grid graphs
but occurs on geographic dual graphs just as clearly. The first run in Figure~\ref{fig:AR_IA} shows that
Arkansas's block groups admit the same behavior, with upwards of 90\% of nodes on the boundary of a district,
and roughly 45\% of edges cut, for essentially the entire length of the run.
The initial plan has under 20\% boundary nodes, and around 5\% of edges cut;
the basic recombination chain (Run 3) stays right in range of those statistics.
\begin{figure}
\centering
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB9900P5start.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB9900P5middle25000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB9900P5middle50000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB9900P5middle75000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB9900P5middle100000.png}}
{\small Run 1: 100K {\sffamily Flip}\xspace steps, shown every 25K, no compactness constraint}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB500P5start.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB500P5middle25000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB500P5middle50000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB500P5middle75000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/dfBGB500P5middle100000.png}}
{\small Run 2: 100K {\sffamily Flip}\xspace steps, shown every 25K, limited to 5\% total cut edges}
\subfloat{\includegraphics[height=1.2in]{AR-figs/BGB9900P5bnodesprop01.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/BGB9900P5cutsprop05.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/BGB500P5bnodesprop01.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/BGB500P5cutsprop05.png}}
{\small Run 1 boundary statistics} \hspace{1.5in} {\small Run 2 boundary statistics}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB9900P5start.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB9900P5middle2500.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB9900P5middle5000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB9900P5middle7500.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB9900P5middle10000.png}}
\small{Run 3: 10K {\sffamily ReCom}\xspace steps, shown every 2500, no compactness constraint}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB500P5start.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB500P5middle2500.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB500P5middle5000.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB500P5middle7500.png}}
\subfloat{\includegraphics[height=1in]{AR-figs/REdfBGB500P5middle10000.png}}
\small{Run 4: 10K {\sffamily ReCom}\xspace steps, shown every 2500, limited to 5\% total cut edges}
\subfloat{\includegraphics[height=1.2in]{AR-figs/REBGB9900P5bnodesprop01.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/REBGB9900P5cutsprop05.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/REBGB500P5bnodesprop01.png}}
\subfloat{\includegraphics[height=1.2in]{AR-figs/REBGB500P5cutsprop05.png}}
{\small Run 3 boundary statistics} \hspace{1.5in} {\small Run 4 boundary statistics}
\caption{Arkansas block groups partitioned into 4 districts, with population deviation limited to
5\% from ideal.
Imposing a compactness constraint makes the {\sffamily Flip}\xspace chain unable to move very far.}
\label{fig:AR_IA}
\end{figure}
Using thresholds or constraints to ensure that the {\sffamily Flip}\xspace proposals remain reasonably
criteria-compliant requires a major tradeoff. While this enforces validity, it is difficult for {\sffamily Flip}\xspace Markov chains to generate substantively distinct partitions under tight constraints. Instead, the chain
can easily flip the same set of boundary nodes back and forth and remain in a small neighborhood around the initial plan. See the second run in Figure \ref{fig:AR_IA} for an example. Sometimes, this is because an overly tight constraint disconnects the state space
entirely and leaves the chain exploring a small connected component.\footnote{An example of this behavior was presented in
\cite[Fig 2]{SPP1}, though its significance was misinterpreted by the authors with respect to the test in \cite{pegden}.}
Recombination responds better to sharp constraints, and {\sffamily ReCom}\xspace chains do not tend to run at the limit values
when constrained. The interactions between various choices of constraints and priorities are so far vastly under-explored. In \S\ref{sec:metropolis}, we will consider the use
of preferentially weighting steps rather than constraining the chains.
\subsection{Projection to summary statistics}
The space of districting plans is wildly complicated and high-dimensional. For the redistricting application,
we are seeking a way to understand the measurable properties of plans that have political or legal relevance,
such as their partisan and racial statistics; this amounts
to projection to a much lower-dimensional space.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\node at (0,1.5) {\includegraphics[height=3.3cm,width=4.3cm]{Finalish_Figures/52grids/0B100P5start.png}};
\draw (3,0) rectangle (7,3);
\draw (8,0) rectangle node {$A'$} (9.6,3);
\draw (9.6,0) rectangle node {$B'$} (12,3);
\draw (8,0) rectangle (12,3);
\draw (3,0) rectangle node {$B$} (7,1.8);
\draw (3,1.8) rectangle node {$A$} (7,3);
\node at (-1,-2) {\includegraphics[height=1.25in]{Finalish_Figures/52grids/0B100P5GYBP.png}};
\node at (-1,-4) {{\sffamily Flip}\xspace, $A$ share};
\node at (3,-2) {\includegraphics[height=1.25in]{Finalish_Figures/52grids/0B100P5PPBP.png}};
\node at (3,-4) {{\sffamily Flip}\xspace, $A'$ share};
\node at (7,-2) {\includegraphics[height=1.25in]{Finalish_Figures/52grids/2RE0B100P1GYBP.png}};
\node at (7,-4) {{\sffamily ReCom}\xspace, $A$ share};
\node at (11,-2) {\includegraphics[height=1.25in]{Finalish_Figures/52grids/2RE0B100P1PPBP.png}};
\node at (11,-4) {{\sffamily ReCom}\xspace, $A'$ share};
\node at (5,-7) { \begin{tabular}{|c|c|c|c|c|}
\hline
{}& \multicolumn{2}{|c|}{{\sffamily Flip}\xspace}&\multicolumn{2}{|c|}{{\sffamily ReCom}\xspace}\\
\hline
\hline
\# Seats & $A$&$A'$ &$A$&$A'$ \\
\hline
0 &995,158 &0 &1 &0 \\
\hline
1 &4,842 &0 &1 & 0\\
\hline
2 & 0&0 &1 &0 \\
\hline
3 &0 &0 &1,652 &1,574 \\
\hline
4 &0 &1,000,000 & 6,993&7,561 \\
\hline
5 & 0&0 &1,352 &865 \\
\hline
$\ge$ 6 & 0&0&0 &0 \\
\hline
\end{tabular}};
\end{tikzpicture}
\caption{Boxplots and a table of push-forward statistics for two synthetic elections on a grid, with one million {\sffamily Flip}\xspace steps
and 10,000 {\sffamily ReCom}\xspace steps. The boxplots show the proportion of the district made up of the group $A$ or $A'$ across
the ten districts of the plan. The table records the number of districts with an $A$ or $A'$ majority for each plan.}
\label{fig:grid_plots}
\end{figure}
\noindent Many of the metrics of interest on districting plans are formed by summing some value at each node of each district. For example, the winner of an election is determined by summing the votes for each party in each geographic unit that is assigned to a given district, and so ``Democratic seats won" is a summary statistic that is real- (in fact integer-) valued.
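As a concrete illustration of such a push-forward statistic, the seat count is just a nested sum over units and districts (a sketch with hypothetical node-level vote counts, not the paper's data):

```python
from collections import defaultdict

def seats_won(assignment, votes_a, votes_b):
    """Summary statistic: number of districts in which party A's
    summed votes exceed party B's."""
    tally = defaultdict(lambda: [0, 0])
    for node, district in assignment.items():
        tally[district][0] += votes_a[node]
        tally[district][1] += votes_b[node]
    return sum(1 for a, b in tally.values() if a > b)

# Four units assigned to two districts:
assignment = {0: "d1", 1: "d1", 2: "d2", 3: "d2"}
votes_a = {0: 30, 1: 10, 2: 5, 3: 5}
votes_b = {0: 20, 1: 15, 2: 40, 3: 30}
print(seats_won(assignment, votes_a, votes_b))  # 1
```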
It is entirely plausible that chains which may mix slowly in the space
of partitions will converge much more quickly in their projection to some summary statistics.
To investigate this possibility, we begin with a toy example with synthetic vote data on a grid,
comparing the behavior of the {\sffamily Flip}\xspace and {\sffamily ReCom}\xspace proposals (Figure \ref{fig:grid_plots}).
For each Markov chain, we evaluate two vote distributions where each node is assigned to vote for a single party. In the first election, the votes for Party $A$ are placed in the top 40 rows of the $100\times 100$ grid, while in the second election, the votes for Party $A'$ are placed in the leftmost 40 columns of the grid. We use the familiar vertical-stripes partition as our initial districting plan. The underlying vote data and initial partition are shown in the top row of Figure \ref{fig:grid_plots}.
The results confirm that in this extreme example the {\sffamily Flip}\xspace chain is unable to produce diverse election outcomes for either
vote distribution.
Over 1,000,000 steps the {\sffamily Flip}\xspace ensemble primarily reported one seat outcome in each scenario: zero seats in the first setup and
four seats in the second. The {\sffamily ReCom}\xspace ensemble saw outcomes of three, four, or five seats, and the histograms are in qualitative agreement after only 10,000 steps.\footnote{We have carried out
longer runs in order to see when this observed obstruction to mixing in this particular experiment is overcome. One billion steps seems to suffice on a $60\times 60$ grid, but not on a $70\times 70$.}
The corresponding boxplots show a more detailed version of this story, highlighting the ways in which each ensemble captures the spatial distribution of voters.
In both cases, the flip walk has trouble moving from its initial push-forward
statistics, which causes it to return very different answers in the two scenarios. The recombination walk takes just a few steps
to forget its initial position and then returns consonant answers for the two cases.
We note that this {\sffamily Flip}\xspace chain is far from mixed after a million steps, so the evidence here does not help us compare
its stationary distribution to that of {\sffamily ReCom}\xspace.
\subsection{Weighting, simulated annealing, and parallel tempering}\label{sec:annealing}\label{sec:metropolis}
As we have shown above, the {\sffamily Flip}\xspace proposal tends to create districts with extremely long boundaries, which does not produce a comparison ensemble that is practical for our application. To overcome this issue, we could attempt to modify the proposal to favor districting plans with shorter boundaries. As noted above, this is often done with a standard technique
in MCMC called the \emph{Metropolis--Hastings} algorithm: fix a compactness score, such as a
notion of boundary length $|\partial P|$, prescribe a distribution proportional to $x^{|\partial P|}$ on the state space,
and use the Metropolis--Hastings rule to preferentially accept more compact plans.
As discussed above in \S\ref{sec:RP=NP}, there are computational obstructions to
sampling proportionally to $x^{|\partial P|}$ \cite{lorenzo}. Even if we are unable to achieve a perfect sample from this distribution, however, it could be the case that this strategy generates a suitably diverse ensemble in reasonable time for our applications.
The {\sffamily Flip}\xspace distribution was already slow to mix, and Metropolis--Hastings adds an additional score computation and accept/reject decision at every step to determine whether to keep a sample; this typically implies that this variant runs more slowly than the unweighted proposal distribution. To aid in getting reliable results from slow-mixing systems, it is common practice to employ another MCMC variant called \emph{simulated annealing}, which iteratively tightens the prescribed distribution toward the desired target---effectively taking larger and wilder steps initially to promote randomness, then becoming gradually more restrictive.
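The acceptance rule just described is simple to state in code. Writing $x = 2^{-\beta}$ so that shorter boundaries are favored, a schematic sketch (the helper name and interface are our own, not GerryChain's) looks like:

```python
import random

def mh_accept(cuts_old, cuts_new, beta, rng=random):
    """Metropolis--Hastings acceptance for a target proportional to
    x**cut_edges with x = 2**(-beta): moves that shorten the boundary
    are always accepted; lengthening moves are accepted with
    probability 2**(-beta * increase)."""
    ratio = 2.0 ** (-beta * (cuts_new - cuts_old))
    return rng.random() < min(1.0, ratio)
```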
To test the properties of a simulated annealing run based on a Metropolis-style weighting,
we run chains to partition Tennessee and Kentucky block groups into nine and six Congressional districts, respectively.
We run the {\sffamily Flip}\xspace walk for 500,000 steps beginning at a random seed drawn by the recursive tree method.
The first 100,000 steps use an unmodified {\sffamily Flip}\xspace proposal; Figure \ref{fig:anneal_plots} shows that after this many steps,
the perimeter statistics are comparable to the Arkansas outputs above, with over 90\% boundary nodes and nearly
50\% cut edges.
This initial phase is equivalent to using an acceptance function proportional to $2^{-\beta |\partial P|}$ with $\beta=0$.
The remainder of the chain linearly interpolates $\beta$ from 0 to 3 along the steps of the run.
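The annealing schedule just described amounts to a piecewise-linear function of the step index (a sketch mirroring the run parameters stated above):

```python
def beta_schedule(step, total=500_000, warmup=100_000, beta_max=3.0):
    """beta = 0 during the 100K-step warmup phase, then linear
    interpolation from 0 up to beta_max over the remaining steps."""
    if step < warmup:
        return 0.0
    return beta_max * (step - warmup) / (total - warmup)
```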
\begin{figure}[!h]
\centering
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5end.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5middle100000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5middle200000_second.png}}\\
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5middle300000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5middle400000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/TN/dfBGB200P5middle500000_second.png}}
\subfloat[TN Nodes]{\includegraphics[height=1.25in]{Finalish_Figures/53states/TN/BGB200P5bnodesprop_second.png}}
\subfloat[TN Edges]{\includegraphics[height=1.25in]{Finalish_Figures/53states/TN/BGB200P5cutsprop_second.png}}
\subfloat[KY Nodes]{\includegraphics[height=1.25in]{Finalish_Figures/53states/KY/BGB200P5bnodesprop_second.png}}
\subfloat[KY Edges]{\includegraphics[height=1.25in]{Finalish_Figures/53states/KY/BGB200P5cutsprop_second.png}}
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5end.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5middle100000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5middle200000_second.png}}\\
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5middle300000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5middle400000_second.png}}\quad
\subfloat{\includegraphics[width=.3\textwidth]{Finalish_Figures/53states/KY/dfBGB200P5middle500000_second.png}}
\caption{Snapshots of the TN and KY annealing ensembles after each 100,000 steps. Comparing the starting and ending states shows only slight changes to the plans as a result of the boundary segments mostly remaining fixed throughout the chain. }
\label{fig:anneal_plots}
\end{figure}
Figure \ref{fig:anneal_plots} shows how these Tennessee and Kentucky chains evolved. Ultimately, there is a relatively small difference between the initial and final states in both examples: the simulated annealing has caused the random walk to return to very near its start point. This is due to the properties of the {\sffamily Flip}\xspace proposal. The districts grow tendrils into each other, but the boundary segments rarely change assignment. Thus, when the annealing forces the tendrils to retract, they collapse near the original districts, and this modified {\sffamily Flip}\xspace walk has failed to move effectively through the space of partitions.
Other ensemble generation approaches such as \cite{Fifield_A_2018} use \emph{parallel tempering} (also known as
{\em replica exchange}), a related technique in MCMC also aimed at accelerating its dynamics. In this algorithm, chains are run
in parallel from different start points at different temperatures, then the temperatures are occasionally exchanged. Exactly the issues highlighted above apply to the individual chains in a parallel tempering run, making this strategy struggle to
introduce meaningful new diversity.
These experiments suggest that the tendency of {\sffamily Flip}\xspace chains to produce fractal shapes is extremely difficult to remediate
and that direct attempts to do so end up impeding any progress of the chain through the state space.
On moderate-sized problems, this can conceivably be countered with careful tuning and extremely long runs.
By contrast, {\sffamily ReCom}\xspace generates plans with relatively few cut edges (usually comparable to human-made plans) by default, and our experiments indicate that its samples are uncorrelated after far fewer steps of the chain---hundreds rather than many
billions.
Weighted variants of {\sffamily ReCom}\xspace can then be tailored to meet other principles by modifying the acceptance probabilities
to favor higher or lower compactness scores, or the preservation of larger units like counties and communities of interest.
With the use of constraints and weights, one can effectively use {\sffamily ReCom}\xspace to impose and compare all
of the redistricting rules and
priorities described above \cite{Alaska,VA-report,VA-criteria}.\footnote{The weighting of a spanning tree {\sffamily ReCom}\xspace
chain is not implemented with a full (reversible) Metropolis--Hastings algorithm for the same reason that the chain is not reversible
in the first place: it is not practical to compute all of the transition probabilities from a given state in this implementation. Nevertheless
a weighting scheme preserves the Markov property and passes the same heuristic convergence tests as before. Several teams
are now producing Recombination-like algorithms that are reversible and still fairly efficient (references to be added when available).}
\newpage
\section{Case study: Virginia House of Delegates}
Finally, we demonstrate the assessment of convergence diagnostics
and the analysis enabled by a high-quality comparator ensemble in a redistricting problem of current legal interest.
For details, see \cite{VA-report}; we include a brief discussion with updated data here in Appendix~\ref{sec:appendix}.
The districting plan for Virginia's 100-member House of Delegates was commissioned and enacted by its
state legislature in 2011, following the 2010 Census. That plan was challenged in
complicated litigation that went before multiple federal courts before reaching the Supreme Court earlier this year,
with the ultimate finding that the plan was an unconstitutional racial gerrymander.
The core of the courts' reasoning was that it is impermissible for the state to have constructed the districts in such a way
that Black Voting Age Percentage (or BVAP) hit the 55\% mark in twelve of the districts. Defending the enacted plan,
the state variously claimed that the high BVAP
was necessary for compliance with the Voting Rights Act and that it was a natural consequence of the
state's geography and demographics.
The courts disagreed, finding that 55\% BVAP was
unnecessarily elevated in 11 of 12 districts, and that it caused dilution of the Black vote elsewhere in the state.
In Appendix~\ref{sec:appendix}, we present various kinds of evidence, focusing on the portion of the state
covered by the invalidated districts and their neighbors.
To assess the possibility that the BVAP is excessively elevated in the top 12 districts without algorithmic output to help,
other human-made plans can be used for comparison, even though these are limited in number and their designers
may have had their own agendas.\footnote{Besides the original enacted
plan, a sequence of replacement proposals introduced in the legislature, reform plans proposed by the NAACP and the
Princeton Gerrymandering Project, and finally the plan drawn by a court-appointed special master.}
Alternately, one could make the observation that the enacted
plan's BVAP values suspiciously jump over the 37--55\% BVAP range, the same range that expert reports indicate might be plausibly necessary for Black residents to elect candidates of choice. (See Figure~\ref{fig:proposed_plans}, which shows the alternative
proposed plans and this key BVAP range on the same plot.) But neither of these
adequately controls for the effects of the actual distribution of Black population across the state geography.
For this task, we can employ a large, diverse ensemble of alternatives made without consideration of racial statistics.
Figures~\ref{fig:compare}--\ref{fig:mms} demonstrate that for all the reasons shown in the simpler experiments above---compactness, failure of convergence in projection to racial or partisan statistics---individual {\sffamily Flip}\xspace chains do not produce diverse ensembles, while {\sffamily ReCom}\xspace chains pass tests of quality.\footnote{The metric used in Figure~\ref{fig:mms}
is called the mean-median score; it is a signed measure of party advantage that is one of the leading partisan metrics
in the political science literature.}
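For reference, the mean-median score in the footnote is elementary to compute from a plan's district vote shares (a sketch; the example shares are hypothetical):

```python
def mean_median(shares):
    """Mean-median score for one party: median district vote share
    minus mean district vote share; the sign indicates which party
    the plan advantages."""
    s = sorted(shares)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return median - sum(s) / n

print(mean_median([0.30, 0.45, 0.80]))  # median 0.45 vs. mean of about 0.52
```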
In Figure~\ref{fig:VA-ensembles}, we apply the {\sffamily ReCom}\xspace outputs, studying the full ensemble (top plot)
and the winnowed subset of the ensemble containing only plans in which no district exceeds 60\% BVAP (bottom).
This finally allows us to address two key points with the use of an appropriate counterfactual.
First, the BVAP pattern in the enacted plan is not explained by the human geography of Virginia.
Also, since the top 12 districts have elevated BVAP compared to the neutral plans, we can locate the costs across
the remaining districts: it is the next four districts and even the nine after that that exhibit depressed BVAP, supporting claims of
vote dilution. This gives us evidence of the classic gerrymandering pattern of ``packing and cracking"---overly
concentrated population in some districts and dispersed population in other districts that were near to critical mass.
We emphasize that ensemble analysis does not stand alone in the study of gerrymandering, but it provides a unique
ability to identify outliers against a suitable counterfactual of alternative valid plans, holding political and physical
geography constant.
\section{Discussion and Conclusion}
Ensemble-based analysis provides much-needed machinery for understanding districting plans in the context of viable
alternatives: by assembling a diverse and representative collection of plans,
we can learn about the range of possible district properties along several axes, from partisan balance to shape to demographics. When a proposed plan is shown to be an extreme outlier relative to a population of alternatives, we may infer that the plan is better explained by goals and principles that were not incorporated in the model design.
Due to the extremely large space of possible districting plans for most states, we can come nowhere close to complete
enumeration of alternatives.
For this reason, the design of an ensemble generation algorithm is a subtle task fraught with mathematical, statistical, and computational challenges.
Comparator plans must be legally viable and pragmatically plausible to draw any power from the conclusion
that a proposed plan has very different properties.
Moreover, to promote consistent and reliable analysis, it is valuable to connect the sampling method to a well-defined distribution over plans that not only has favorable qualitative properties but also can be sampled tractably. This consideration leads us to study mixing times, which bolster confidence that a sample drawn using MCMC comes from the prescribed stationary distribution.
Across a range of small and large experiments with synthetic and observed data, we find that a run assembled in several
days on a standard laptop produces {\sffamily ReCom}\xspace ensembles whose measurements do not vary substantially between trials,
whether re-running to vary the sample path through the state space or re-seeding at a new starting point.
Many interesting questions remain to be explored. Here is a selection of open questions and research directions.
\paragraph*{Mathematics}
\begin{itemize}
\item Explore the mathematical properties of spanning tree bipartitioning. For instance, what proportion of spanning trees in a grid have an edge whose complementary components have the same number of nodes? Describe the distribution on
2-partitions induced by trees thoroughly enough to specify a reversible version of {\sffamily ReCom}\xspace.
\item Describe the stationary distribution for spanning tree {\sffamily ReCom}\xspace. In particular, many experiments show that
the number of cut edges appears to be normally distributed in a {\sffamily ReCom}\xspace ensemble (see Fig~\ref{fig:chain_ce}(b)).
Prove a central limit theorem for boundary length in
{\sffamily ReCom}\xspace sampling of $n\times n$ grids into $k$ districts, with parameters depending on $n$ and $k$.
\item Prove rapid mixing of {\sffamily ReCom}\xspace for the grid case. Even ergodicity of the chain and diameter bounds for the state
space are difficult open questions.
\item Find conditions on summary statistics that suffice for faster mixing in projection than in the full
space of plans. Relatedly, find a class of ensemble generation techniques that would suffice to get repeatable results
in projection even if the ensembles are not similar in the space of plans.
\end{itemize}
\paragraph*{Computation}
\begin{itemize}
\item Propose other balanced bipartitioning methods to replace spanning trees, supported by fast algorithms.
Subject these methods to similar tests of quality: adaptability to districting principles, convergence in projection
to summary statistics independent of seed, etc.
\item Find effective parallelizations to multiple CPUs while retaining control of the sampling distribution.
\end{itemize}
\paragraph*{Modeling}
\begin{itemize}
\item Study the stability of {\sffamily ReCom}\xspace summary statistics to perturbations of the underlying graph. This ensures that ensemble analysis is robust to some of the implementation decisions made when converting geographical data to a dual graph.
\item Identify sources of voting pattern data (e.g., recent past elections) and summary statistics (e.g., metrics in the political
science literature) that best capture the signatures of racial and partisan
gerrymandering.
\item Consider whether these analyses can be gamed:
could an adversary with knowledge of a Markov proposal create plans that are extreme in a way that is hidden, avoiding
an outlier finding?
\end{itemize}
{\sffamily ReCom}\xspace is available for use as an open-source software package, accompanied by a suite of tools to process maps and facilitate MCMC-based analysis of plans. Beyond promoting adoption of this methodology for ensemble generation, we aim to use this release as a model for open and reproducible development of tools for redistricting. By making code and data public, we can promote public trust in expert analysis and facilitate broader engagement among the many interested parties in the redistricting process.
\section*{Acknowledgments}
We are grateful to the many individuals and organizations whose discussion and input informed our approach to this work. We thank Sarah Cannon, Sebastian Claici, Lorenzo Najt, Wes Pegden, Zach Schutzman, Matt Staib, Thomas Weighill, and Pete Winkler for wide-ranging conversations about spanning trees, Markov chain theory, MCMC dynamics,
and the interpretation of ensemble results. We are grateful to Brian Cannon for his help and encouragement
in making our Virginia analysis relevant to the practical reform effort.
The GerryChain software accompanying this paper was initiated by participants in the Voting Rights Data Institute (VRDI) 2018 at Tufts and MIT,
and we are deeply grateful for their hard work, careful software development, and ongoing involvement.
Max Hully and Ruth Buck were deeply involved in the data preparation and software development that made the
experiments possible. Finally, we acknowledge the generous support of the Prof.\ Amar G.\ Bose Research Grant
and the Jonathan M.\ Tisch College of Civic Life.
\section{INTRODUCTION}
Teleoperation systems with haptic feedback allow a human user to immerse into a distant or
inaccessible environment to perform complex tasks. A typical teleoperation system comprises a slave and a master device, which exchange haptic information (forces, torques,
position, velocity), video signals, and audio signals over a communication network \cite{ferrell1967supervisory}.
In particular, the communication of haptic information (position/velocity and force/torque signals) imposes strong demands on the communication network as it closes a global control loop between the operator and the remote robot. As a result, the system stability is highly sensitive to communication delay \cite{lawrence1993stability}. In addition, high-fidelity teleoperation requires a high sampling rate for the haptic signals of 1 kHz or even higher \cite{colgate1997passivity} to ensure a high quality interaction and system stability. Teleoperation systems, hence, require 1000 or more data packets per second to be transmitted between the master and the slave device. For Internet-based communication, such high packet rates are hard to maintain.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{abs_vs_delay.pdf}
\vspace{-0.1 in}
\caption{The level of abstraction and data complexity in teleoperation systems with update rates and robustness to delays (adapted from \cite{mitra2008model}).}
\label{Fig::abs_vs_delay}
\end{figure}
State-of-the-art solutions that address the aforementioned teleoperation challenges (sensitivity to delay and high packet rate) focus on combining different stability-ensuring control schemes with haptic packet rate reduction methods such as \cite{Xu2016_IEEEToH, Xu2014_TIM}.
On the other hand, since the communication delay can range from a few milliseconds up to several hundred milliseconds, teleoperation systems require a control scheme which stabilizes the teleoperation system in the presence of end-to-end communication delays that are larger than a couple of milliseconds. Fig. \ref{Fig::abs_vs_delay} presents a qualitative analysis of the trade-off between the level of communication delay and the level of abstraction in control schemes for teleoperation \cite{mitra2008model}. We can observe from Fig. \ref{Fig::abs_vs_delay} that passivity-based control, e.g. the time-domain passivity approach (TDPA) described in \cite{Ryu2007, Ryu2010, Artigas2011}, is suitable for short-distance (low-latency) teleoperation with dynamic scenes and a high level of interaction between the master and the remote environment (i.e. high update rate); the model-mediated teleoperation (MMT) \cite{mitra2008model, Hannaford1989, Willaert2012} approach is able to deal with relatively larger communication delays (i.e. for medium or long distance application scenarios), but is unsuitable for quickly changing environments. Teleoperation for very large delay is typically performed using supervisory control and will not be further considered in this paper.
Although all control schemes, in theory, are able to ensure system stability for arbitrary delays, different control schemes have different delay tolerance, introduce different artifacts which degrade the user experience, and require different amounts of resources from the communication network. In short, the teleoperation performance varies with respect to different communication and control schemes. For all considered control schemes, the user's quality-of-experience (QoE) degrades for increasing communication delay. In literature, the impact of communication impairments on the teleoperation performance has been studied objectively and subjectively using the concept of transparency \cite{lawrence1993stability, hannaford1989stability, hashtrudi2000analysis} and perceptual transparency \cite{Hirche2007b, Hirche2012}. In addition, the authors of \cite{arcara2002control} compared 10 different control schemes in terms of stability region, position and force tracking performance, displayed impedance, position drift, etc. However, the control schemes in \cite{arcara2002control} were developed before 2002 and the authors did not consider the impact of haptic data reduction schemes.
Motivated by the above analysis, we propose in this paper a novel solution for time-delayed teleoperation systems based on the state-of-the-art control schemes with perceptual haptic data reduction. The proposed solution maximizes the user's QoE by dynamic switching among different control schemes with respect to different round-trip communication latencies. In order to validate the feasibility of the proposed strategy, we perform a dedicated case study for a virtual teleoperation environment consisting of a one-dimensional spring-damper system.
The remainder of this paper is organized as follows. In Section II, we give a brief introduction to the considered control schemes for time-delayed teleoperation. In Section III, we describe the proposed dynamic control scheme switching strategy, which is then validated with a spring-damper experimental setup in Section IV. This case study also demonstrates the tight relationship among QoE, communication delay, and the used control schemes. Finally, Section V concludes the paper with a summary.
\section{Characteristics of Time-delayed Teleoperation with Different Control Schemes}
In order to address the two major communication-related challenges (i.e. time delay and high packet rate) of teleoperation systems, various schemes have been developed by combining different stability-ensuring control approaches with haptic data reduction algorithms. The most representative solutions from the literature are the combination of the TDPA from \cite{Ryu2010} with the perceptual deadband-based (PD) haptic data reduction solution from \cite{hinterseer2008perception}, denoted as TDPA+PD \cite{Xu2016_IEEEToH} in the following, and the combination of the MMT method from \cite{Hannaford1989, Willaert2012} with the haptic data reduction solution from \cite{hinterseer2008perception}, denoted as MMT+PD \cite{Xu2014_TIM} in the following. We will briefly introduce these two approaches in the following two sections.
\subsection{TDPA with Perceptual Haptic Data Reduction}
The TDPA \cite{Ryu2007, Ryu2010, Artigas2011} is a typical passivity-based control scheme for time-delayed teleoperation. The stability arguments are based on the passivity concept, which characterizes the energy exchange over a two-port network and provides a sufficient condition for the input/output stability. The stability of TDPA-based teleoperation systems is guaranteed in the presence of arbitrary communication delays with the help of passivity observers (PO) and passivity controllers (PC). The PO computes the current system energy. The PC adaptively adjusts the customized dampers $\alpha$ and $\beta$ to dissipate energy and thus guarantees the passivity of the system. In \cite{Xu2016_IEEEToH}, perceptual data reduction is incorporated into the TDPA approach. The resulting scheme is called TDPA+PD in the following.
The haptic data reduction blocks are placed after the POs to irregularly downsample the transmission of haptic packets using perceptual thresholds (see Fig.~\ref{Fig::controlScheme} (a)).
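For intuition, the PO/PC interplay can be sketched in a few lines of discrete-time code. This is a didactic simplification for an impedance-type port with a single adjustable damper, not the full TDPA algorithm of the cited works:

```python
# Minimal sketch of a time-domain passivity observer (PO) and controller (PC)
# for an impedance-type port: f is the force output, v the velocity input.
# Illustrative simplification only; the published TDPA is more elaborate.

def tdpa_step(state, f, v, dt):
    """One sample step: observe the port energy, dissipate if it turns active."""
    W = state["W"] + f * v * dt          # PO: accumulated port energy
    alpha = 0.0
    if W < 0.0 and v != 0.0:             # port generated energy -> activate PC
        alpha = -W / (dt * v * v)        # adjustable damper dissipates excess
        f = f + alpha * v                # modified (damped) force output
        W += alpha * v * v * dt          # energy balance restored to zero
    state["W"] = W
    return f, alpha

state = {"W": 0.0}
# An "active" force/velocity sequence: f*v < 0 means energy is generated.
outputs = [tdpa_step(state, f, v, dt=0.001)
           for f, v in [(-1.0, 1.0), (-1.0, 1.0), (0.5, 1.0)]]
```

After each step the observed energy is non-negative, i.e. the sketch enforces passivity sample by sample.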
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{system_TDPA.pdf} \\
\footnotesize (a) TDPA+PD, adopted from \cite{Xu2016_IEEEToH}
\vspace{0.05 in}
\vfil
\includegraphics[width=0.45\textwidth]{system_MMT.pdf} \\
\footnotesize (b) MMT+PD, adopted from \cite{Xu2014_TIM}
\caption{Overview of different control schemes with haptic data reduction. }
\label{Fig::controlScheme}
\end{figure}
The TDPA is a conservative control design. With increasing delay, it leads to larger distortions in the displayed environment properties (e.g., hard objects are displayed softer than they actually are). In addition, the TDPA+PD method can lead to sudden force changes when the PCs are activated to dissipate the system output energy. This effect becomes stronger with increasing communication delays \cite{ Ryu2010, Xu2016_IEEEToH}.
Therefore, the TDPA+PD approach is suitable for short-distance teleoperation applications which may operate at the edge of the communication network in order to meet the requirement of frequent haptic updating between the master and the slave. Thus, it can deal with a high level of dynamics of the objects (motion, deformation, etc.) and interaction patterns.
\subsection{MMT with Perceptual Haptic Data Reduction}
\label{subsec::MMT}
One major issue of passivity-based control schemes is that the system passivity and transparency are conflicting objectives. This means that the system gains stability at the cost of degraded transparency \cite{lawrence1993stability}. In order to overcome this issue, model-mediated teleoperation (MMT) was proposed to guarantee both stability and transparency in the presence of communication delays \cite{Hannaford1989, Willaert2012}. In the MMT approach, a local object model is employed on the master side to approximate the slave environment. The model parameters describing the object in the slave environment are continuously estimated in real time and transmitted back to the master whenever the slave obtains a new model. On the master side, the local model is reconstructed or updated on the basis of the received model parameters, and the haptic feedback is computed on the basis of the local model without noticeable delay. If the estimated model is an accurate approximation of the remote environment, both stable and transparent teleoperation can be achieved \cite{mitra2008model, Passenberg2010}. In \cite{Xu2014_TIM}, perceptual data reduction is incorporated into the MMT approach. The resulting scheme is called MMT+PD in the remainder of this paper. The data reduction scheme is used to irregularly downsample the velocity signals in the forward channel and the model parameters in the backward channel, using perceptual thresholds (see Fig.~\ref{Fig::controlScheme} (b)). For the model parameters, these thresholds determine whether a model update leads to a perceivable difference in the displayed signal. If not, the model change does not need to be transmitted.
Using the MMT approach, hard objects will not be displayed softer with increasing delay as for the TDPA. Therefore, the MMT approach has better teleoperation quality than the TDPA when the delay is relatively large. However, keeping the local model consistent with the environment at the remote side is challenging for dynamic scenes. This way, the MMT approach is favorable for medium/long distance teleoperation applications and scenarios which are characterized by a low level of scene dynamics.
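The master-side linear model requires an online stiffness estimate computed from recent position/force samples. A minimal least-squares sketch is given below; the estimators used in the cited works may differ in detail:

```python
def estimate_stiffness(penetrations, forces):
    """Least-squares fit of f = K*x over a window of recent contact samples."""
    num = sum(f * x for x, f in zip(penetrations, forces))
    den = sum(x * x for x in penetrations)
    return num / den if den > 0.0 else 0.0

# For data generated by an exactly linear environment f = 200*x,
# the estimate recovers K = 200 N/m.
xs = [0.005, 0.010, 0.015]
fs = [200.0 * x for x in xs]
K_hat = estimate_stiffness(xs, fs)
```

With a non-linear environment, the same estimator yields a penetration-dependent stiffness, which is the source of the model mismatch discussed later.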
Although the two approaches address different scenarios, it holds for both that the lower the end-to-end delay, the better the system transparency and hence the QoE.
\section{Proposed QoE-driven Dynamic Control Scheme Switching Strategy}
Based on the analysis of Section II, we can conclude that different control and communication approaches introduce different types of artifacts into the system. Their performance varies between tasks (e.g. free space versus contact, soft objects versus rigid surface, etc.), and also differs in the robustness towards different network delays, which is the focus of this paper.
\begin{figure}[!t]
\centering
\includegraphics[height=2.8cm,width=0.34\textwidth]{qualityMeasuresHypo.pdf}
\vspace{-0.1 in}
\caption{Hypothetical performance metric as a function of the round-trip delay and control schemes.} \vspace{-0.15in}
\label{ch5:Fig:JointOpt}
\end{figure}
For example, the performance of the TDPA+PD method is mainly influenced by communication delay. The larger the delay, the softer the displayed impedance (stiffness), and the stronger the distortion introduced in the force signals. On the other hand, the performance of the MMT+PD method depends strongly on the model accuracy, which is barely affected by the communication delay for services with low object dynamics. Hence, the TDPA+PD and the MMT+PD methods are the best choice for different delay ranges. In addition, these two methods transmit different types of data in the backward channel, which can lead to very different traffic characteristics over the communication network.
Therefore, the teleoperation system needs to adaptively switch between different control schemes according to the current communication delay in order to achieve the best possible performance. To this end, we propose the following joint optimization problem for the control scheme $\gamma_c$ to be used:
\begin{equation}
\gamma_c = \mathrm{arg}\max\limits_{\gamma_i \in \Gamma} Q(\tau, \gamma_i)
\end{equation}
where $\tau$ represents the round-trip communication delay, $\gamma_i$ represents the $i$-th control scheme from the set of available control schemes $\Gamma$, and $Q(\tau, \gamma_i)$ represents the teleoperation performance (e.g., expressed in terms of the user's QoE) which is a function of both communication delay and the control scheme. In the next section, we will derive $Q(\tau, \gamma_i)$ for the TDPA+PD and MMT+PD methods through subjective tests.
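In code form, Eq. (1) is simply an argmax over the available schemes. A minimal sketch follows; the placeholder QoE curves here are illustrative assumptions, to be replaced by measured functions:

```python
def select_control_scheme(delay_ms, qoe_models):
    """Return the scheme gamma_i maximizing Q(tau, gamma_i) at the current delay.
    qoe_models maps scheme names to callables Q(delay_ms) -> QoE score."""
    return max(qoe_models, key=lambda scheme: qoe_models[scheme](delay_ms))

# Placeholder QoE curves for illustration only (monotone decreasing in delay):
qoe_models = {
    "TDPA+PD": lambda tau: 4.6 - 0.02 * tau,   # degrades quickly with delay
    "MMT+PD":  lambda tau: 3.6 - 0.002 * tau,  # nearly delay-independent
}
```

With these placeholder curves, the rule selects TDPA+PD at small delays and MMT+PD at large delays, mirroring the qualitative picture in the figure below.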
Hypothetically, the QoE as a function of the communication delay for the two control schemes (TDPA+PD and MMT+PD) can be illustrated as shown in Fig.~\ref{ch5:Fig:JointOpt} (which will be verified with an experimental case study in the next section). We should point out that Fig.~\ref{ch5:Fig:JointOpt} reveals the fundamental trade-off among QoE, communication delay and control approaches. This way, the optimal solution of the proposed optimization problem can be obtained through dynamic switching between control schemes based on the current delay conditions.
\section{Feasibility Validation of the Proposed Design: A Case Study}\label{sec:traffic}
In this section, we use a simple one-dimensional spring-damper setup (as shown in Fig. \ref{Fig::expSetup}) to validate the feasibility of the proposed switching solution. In particular, we evaluate and compare the performance of the two previously discussed control methods in terms of user preference and the generated haptic data traffic. The user's QoE for both control schemes in the presence of different communication delays is evaluated using subjective tests.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{expSetup.pdf}
\caption{Experimental setup. The communication network, the slave (represented by a haptic interaction point), and the virtual environment are simulated on a PC using the CHAI3D library.} \vspace{-0.1in}
\label{Fig::expSetup}
\end{figure}
\subsection{Experimental Setup}
The experiment was conducted in a virtual environment (VE) developed based on the CHAI3D library (www.chai3d.org). The Phantom Omni haptic device was used as the master, while the slave in the VE was designed as a single haptic interaction point (HIP) with negligible mass. The communication network with adjustable delay was simulated on a local PC.
The VE contained a 1D non-linear soft object, whose interaction force $f_e$ is computed using the Hunt-Crossley model \cite{Hunt1975}
\begin{equation}
\label{Eq::HuntCrossley}
f_e=\begin{cases}
Kx^n+Bx^n\dot{x} + \Delta f & x \ge 0 \\
0 & x < 0
\end{cases}
\end{equation}
where $\Delta f$ is Gaussian distributed measurement noise with a mean of 0 $N$ and a standard deviation of 0.1 $N$. $x$ denotes the penetration (compressed displacement).
The corresponding parameters were set to $K=200$ N/m, $n=1.5$, and $B=0.5$ N/ms. For the MMT+PD method, a simple linear spring model ($\hat{f}_e = \hat{K}x$) was employed to approximate the environment. This leads to model mismatch and frequent changes in the estimated model stiffness $\hat{K}$ during the interaction. The passivity-based model update scheme introduced in \cite{xu2015passivity} was applied to ensure stable model updates on the master side. All experiments were conducted on a PC with an Intel Core i7 CPU and 8~GB memory. The whole experimental setup is illustrated in Fig.~\ref{Fig::expSetup}.
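For reference, the environment force law in Eq. (2) can be evaluated directly. The sketch below uses the parameters above; the measurement noise is exposed as an argument but set to zero so the example is reproducible:

```python
def hunt_crossley_force(x, x_dot, K=200.0, n=1.5, B=0.5, noise=0.0):
    """Hunt-Crossley contact force: f = K*x^n + B*x^n*x_dot for x >= 0, else 0."""
    if x < 0.0:
        return 0.0                      # no contact: zero interaction force
    return K * x**n + B * x**n * x_dot + noise

# 10 mm penetration at zero velocity: f = 200 * 0.01^1.5 = 0.2 N
f = hunt_crossley_force(0.01, 0.0)
```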
The round-trip delays were set to 0 ms, 10 ms, 25 ms, 50 ms, 100 ms, and 200 ms, respectively. For each delay, the subjects tested three systems: the TDPA+PD method, the MMT+PD method, and the 0-delay reference without using any control and data reduction schemes. The reference scenario was shown to the subjects before the real experiments. The original environment impedance was displayed and the best performance (uncompressed, non-delayed) for this setup was provided.
The subjects interacted with the virtual object by pressing its surface and slowly varying the applied force.
The subjects were asked to give a rating by comparing the interaction quality between each control scheme and the reference scenario. They were asked to take all perceivable distortions (e.g. force vibrations, force jumps, perceived impedance variations, etc.) into account when evaluating the interaction quality. The rating scheme was based on Table~\ref{Tab::subjectiveRating}. The reference, designated level 5, was considered the best performance and could be recalled by the subjects at any time during the experiment.
Each delay case was repeated four times. The order of the tested delay as well as the order of the tested control methods were randomly selected.
Twelve subjects (all right-handed, aged 25 to 33) participated in the experiment. The participants were asked to wear a headset with active noise cancellation to shield them from ambient noise. During the experiment, they were first provided with a training session and started the test as soon as they felt familiar with the system setup and the experimental procedure.
\begin{table}[t]
\centering
\caption{Rating scheme for subjective evaluation.}\vspace{-0.1 in}
\begin{tabular}{|c||c|} \hline
Rating level & Description \\ \hline
5 & no difference to the undisturbed signal (perfect) \\ \hline
4 & slightly perceptible disturbing (high quality) \\ \hline
3 & disturbed (low quality) \\ \hline
2 & strongly disturbed (very low quality) \\ \hline
1 & completely distorted (unacceptable) \\ \hline
\end{tabular}
\label{Tab::subjectiveRating} \vspace{-0.1 in}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[height=2.2cm,width=0.35\textwidth]{pos10ms.pdf} \\
\footnotesize (a) The master and slave position signals for 10 ms delay.
\vspace{0.01in}
\vfil
\includegraphics[height=2.2cm,width=0.35\textwidth]{pos100ms.pdf} \\
\footnotesize (b) The master and slave position signals for 100 ms delay.
\vspace{0.01in}
\vfil
\includegraphics[height=2.2cm,width=0.35\textwidth]{force10ms.pdf} \\
\footnotesize (c) The master and slave force signals for 10 ms delay.
\vspace{0.01in}
\vfil
\includegraphics[height=2.2cm,width=0.35\textwidth]{force100ms.pdf} \\ \footnotesize (d) The master and slave force signals for 100 ms delay.
\vspace{0.01in}
\vfil
\includegraphics[height=2.3cm,width=0.35\textwidth]{k10ms.pdf} \\
\footnotesize (e) Estimated, transmitted, and applied stiffness values for 10 ms delay.
\vfil
\includegraphics[height=2.3cm,width=0.35\textwidth]{k100ms.pdf} \\
\footnotesize (f) Estimated, transmitted, and applied stiffness values for 100 ms delay.
\vspace{-0.1 in}
\caption{Measurements of the position and force signals and the estimated stiffness for the MMT+PD method. }
\label{ch5:Fig:modeling}
\end{figure}
\subsection{Environment Modeling for MMT+PD}
Before discussing the subjective evaluation, it is necessary to pay attention to the modeling results of the MMT+PD method. Different from the TDPA+PD method, in which the communication delay has a dominant influence on the system performance, the modeling accuracy is the key factor that affects the system performance of the MMT+PD method. This means that once the model of the MMT+PD method is fixed for a static or slowly varying environment, the teleoperation quality degrades only slowly with increasing delay.
As an example, the measurements of the position, force, and estimated stiffness for delays of 10 ms and 100 ms are shown in Fig.~\ref{ch5:Fig:modeling}. For both delays, similar master position inputs lead to similar force signals and parameter estimates. This verifies that the communication delay in the tested range has only minor effects on the system performance.
In contrast, for the TDPA+PD method, a significant difference between the force signals for 10 ms delay and the force signals for 100 ms delay can be observed from Fig.~\ref{ch5:Fig:TDPAforce}. This confirms that the communication delay has a higher influence on the performance of the TDPA+PD method than of the MMT+PD method.
From Fig.~\ref{ch5:Fig:modeling} (c) and (d), we can observe unexpected peaks in the master force signals. These are caused by overshooting in the stiffness estimation (unstable estimation) at every initial contact instant. After the estimates converge to the correct values, the master force, which is computed locally based on the applied linear spring model, follows the slave force without much deviation.
The estimated, transmitted, and applied stiffness values for 10 ms and 100 ms delays are shown in Fig.~\ref{ch5:Fig:modeling} (e) and (f). The use of the perceptual deadband-based (PD) data reduction approach avoids excessive transmission of the estimated stiffness data (see the green bars). The initial stiffness value for the local model is set to 100 N/m before the slave's first contact with the object. Except for the periods of overshooting and free-space motion, the estimates vary slightly during the compression and release phases, leading to frequent packet transmission. Since a linear spring model is used to approximate the non-linear soft object, the estimated stiffness cannot be constant during the interaction: the more the object is compressed, the higher the estimated stiffness. In addition, the passivity-based model update scheme proposed in \cite{Xu2014_TIM} is employed to guarantee stable and smooth changes in the applied stiffness values on the master side (represented by the blue solid lines).
Note that the strongly varying estimated stiffness at each initial contact leads to a mismatch between the master and slave force. This can disturb the subject's perception of the object stiffness and jeopardizes the teleoperation quality.
\begin{figure}[t]
\centering
\includegraphics[height=2.2cm,width=0.35\textwidth]{TDPA10ms.pdf} \\
\footnotesize (a) Delay of 10 ms.
\vspace{0.05in}
\vfil
\includegraphics[height=2.2cm,width=0.35\textwidth]{TDPA100ms.pdf} \\
\footnotesize (b) Delay of 100 ms.
\vspace{-0.05in}
\caption{Measurements of the force signals for the TDPA+PD method. }
\label{ch5:Fig:TDPAforce} \vspace{-0.0 in}
\end{figure}
\subsection{Packet Rate}
Packet rate reduction over the network without introducing significant distortion is an important system capability to adaptively deal with different network conditions. It can be achieved by using a proper deadband parameter (DBP) for the TDPA+PD method as discussed in \cite{Xu2016_IEEEToH}. Furthermore, the MMT+PD method does not require a high update rate, especially for static or slowly varying environments. Model parameters are only updated when a significant model mismatch is detected. For the MMT+PD method, the stiffness is estimated every 1 ms based on the most recent 100 samples (position and force). The DBP in this experiment is set to 0.1 for both control schemes, indicating that a packet transmission is triggered when the change in force or estimated stiffness value is larger than 10\%.
The packet rates for the two control schemes averaged over all subjects during their interaction with the soft object are shown in Fig.~\ref{ch5:Fig:packRate}. Although the applied local model mismatches the environment model, the average packet rates of the MMT+PD method are still much lower than for the TDPA+PD method. This is one of the strengths of the MMT+PD method compared to the TDPA+PD method.
For the MMT+PD method, triggering of packet transmission is concentrated in the periods of unstable estimation (e.g. at every initial contact and during the release). In addition, if the applied local model for the MMT method is more accurate, the packet rate during the interaction can be additionally reduced.
For the TDPA+PD method, the average packet rates over the tested delay range are 30$-$50 packets/s. It is noted that the packet rate of the TDPA+PD method is highly influenced by the interaction frequency. A larger interaction frequency will lead to more quickly varying velocity and force signals, resulting in higher packet rate. In this experiment, the subjects controlled their interaction frequency to be lower than 1 Hz with the help of a visual guide.
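The deadband-triggered transmission underlying both schemes can be sketched as follows: a sample is transmitted only when it deviates from the last transmitted value by more than the deadband fraction (a relative threshold of 10\%, matching the DBP used in this experiment):

```python
def deadband_filter(samples, dbp=0.1):
    """Return the subsequence of samples that a PD scheme would transmit."""
    transmitted = []
    last_sent = None
    for s in samples:
        # Transmit the first sample, and any sample whose change relative to
        # the last transmitted value exceeds the deadband parameter.
        if last_sent is None or abs(s - last_sent) > dbp * abs(last_sent):
            transmitted.append(s)
            last_sent = s
    return transmitted

forces = [1.0, 1.05, 1.2, 1.25, 1.35]
sent = deadband_filter(forces)   # only 1.0, 1.2 and 1.35 cross the threshold
```

The packet rate thus adapts to the signal: slowly varying forces or stiffness estimates trigger few transmissions, while rapid changes are forwarded immediately.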
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{packRate.pdf}
\vspace{-0.1 in}
\caption{Average packet rate over all subjects during the interaction with the soft object.}
\vspace{-0.1in}
\label{ch5:Fig:packRate} \vspace{-0.1 in}
\end{figure}
\vspace{-0.1 in}
\subsection{QoE vs. Communication Delay} \label{subsec::QoE}
A quantitative evaluation of the subjective ratings (QoE) for the two control methods is illustrated in Fig.~\ref{Fig::expRes}. The QoE for both control methods degrades with increasing communication delay. For the MMT+PD method, the QoE is fairly stable, which confirms that the QoE of the MMT+PD approach mainly depends on the model accuracy, rather than the communication delay. In contrast, the TDPA+PD method is more sensitive to delay, as discussed in Section II.
According to Fig.~\ref{Fig::expRes}, the TDPA+PD method is able to provide relatively higher QoE than the MMT+PD method when the communication delay is small. However, the QoE of the TDPA+PD approach decreases quickly with increasing delay. This is because the subjects will perceive more vibrations and force jumps, as well as softer environment impedance when the delay grows. The overall rating results show that the subjects prefer the TDPA+PD method for small delays, and the MMT+PD method for large delays.
Based on the four-parameter logistic (4PL) \cite{campbell1994methods} curve fitting algorithm, we can obtain the QoE performance function with respect to the round-trip delay for both TDPA+PD and MMT+PD methods as:
\begin{equation}
Q(\tau,\gamma_i)=D_{\gamma_i}+\frac{A_{\gamma_i}-D_{\gamma_i}}{1+(\frac{\tau}{C_{\gamma_i}})^{B_{\gamma_i}}}
\end{equation}
where $\gamma_1$ and $\gamma_2$ denote the TDPA+PD and MMT+PD methods, respectively. The fitted parameters are $A_{\gamma_1}=2.088, B_{\gamma_1}=-1.82, C_{\gamma_1}= 58.48, D_{\gamma_1}=4.585$, and $A_{\gamma_2}=0, B_{\gamma_2}=-1.187, C_{\gamma_2}= 793.7, D_{\gamma_2}=3.64$. An illustration of the curve-fitted QoE performance is presented in Fig. \ref{Fig::qoeFitting}.
By combining Eqns. (1) and (3), we can implement the proposed control scheme switching approach for this exemplary teleoperation use case. With a critical switching point of 50 ms (observed from Figs. \ref{Fig::expRes}-\ref{Fig::qoeFitting}), the TDPA+PD method should be adopted for short communication delays; otherwise, the MMT+PD method should be used.
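A minimal numerical sketch of the four-parameter logistic QoE curves with the fitted parameters above: a bisection on the difference of the two curves recovers the switching delay near 50 ms.

```python
def q_4pl(tau, A, B, C, D):
    """Four-parameter logistic curve Q(tau) = D + (A - D) / (1 + (tau/C)^B)."""
    return D + (A - D) / (1.0 + (tau / C) ** B)

# Fitted parameters from the subjective tests (gamma_1 = TDPA+PD, gamma_2 = MMT+PD)
def q_tdpa(tau): return q_4pl(tau, A=2.088, B=-1.82, C=58.48, D=4.585)
def q_mmt(tau):  return q_4pl(tau, A=0.0, B=-1.187, C=793.7, D=3.64)

def crossover_delay(lo=10.0, hi=200.0, iters=60):
    """Bisection on q_tdpa - q_mmt; the sign changes between lo and hi."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_tdpa(mid) - q_mmt(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau_switch = crossover_delay()   # switching point, close to 50 ms
```

Below the crossover the TDPA+PD curve dominates; above it the MMT+PD curve does, which is exactly the switching rule stated above.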
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{subRating.pdf} \vspace{-0.1 in}
\caption{Subjective ratings vs. communication delay for the two control schemes.} \vspace{-0.1in}
\label{Fig::expRes}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.34\textwidth]{Jqoe_curvefitting.pdf} \vspace{-0.1 in}
\caption{QoE performance of different teleoperation systems with respect to delay (obtained via curve fitting).} \vspace{-0.1in}
\label{Fig::qoeFitting}
\end{figure}
\section{CONCLUSIONS}
In this paper, we proposed a novel QoE-driven control scheme switching strategy for teleoperation systems. It was shown that dynamic switching among different control schemes is essential in order to achieve the best QoE performance under different network conditions. We validated the feasibility of the proposed design with a dedicated case study. The experimental results confirm the effectiveness of the proposed approach.
More importantly, this research revealed the intrinsic relationship among human perception, communication, and control for time-delayed teleoperation systems. Therefore, we believe this paper can serve as a fundamental reference for future haptic communication research, especially in the area of joint optimization of communication and control.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
Semiconductor-based quantum dots and transition-metal oxides are both subjects of current interest in condensed matter physics, yet with seemingly little connection to each other.
Transition-metal oxides are studied typically as bulk materials in which atomic-scale Coulomb interactions profoundly shape the electronic structure and induce phenomena that range from high temperature superconductivity to colossal magnetoresistance.
Quantum dots, on the other hand, are typically built from semiconductors described by relatively simple band structures, with
atomic-scale interactions folded into Fermi liquid parameters such as effective mass.
Here we show that a simple mesoscopic structure involving two coupled semiconductor-based quantum dots can provide a window into the behavior of bulk strongly-correlated systems.
Specifically, we focus on the non-trivial role that disorder plays in altering the electronic properties of transition-metal oxides.
A well-known effect of disorder in interacting systems is the appearance of a feature in the density of states pinned to the Fermi level, referred to as a zero-bias anomaly.
The properties of such anomalies can be used as a probe of electronic structure.
Well established theoretical frameworks exist for understanding the energy dependence of this feature in weakly correlated metals, which is known as the Altshuler-Aronov zero-bias anomaly\cite{Altshuler1985}, and of a similar feature in insulators, known as the Efros-Shklovskii Coulomb gap\cite{Efros1975}. A crossover between these metallic and insulating limits in the weakly correlated regime has been observed in experiments\cite{Butko2000}.
When similar measurements are made in transition metal oxides, zero-bias anomalies are found, but they generally do not match either the Altshuler-Aronov or the Efros-Shklovskii pictures.\cite{Sarma1998}
Progress has been made in constructing an alternative framework for understanding zero-bias anomalies in strongly-correlated systems using the Anderson-Hubbard model.\cite{Chiesa2008,Wortis2010,Chen2010,Wortis2011,Chen2011,Chen2012}
To use the disorder-driven zero-bias anomaly that is observed in measurements as a probe of strong correlations in real materials, more robust contact between theory and experiment is needed.
Theoretical work suggests that key physics responsible for the zero-bias anomaly in strongly-correlated systems is captured in
an ensemble of two-site Anderson-Hubbard systems.\cite{Wortis2010}
Here, we build on this insight to describe a link between the incompletely-understood zero-bias anomaly in bulk disordered strongly-correlated systems, and measurements in double quantum dots (DQDs) that have been commonplace for decades in the mesoscopics community.\cite{RevModPhys.75.1}
To researchers in bulk strongly-correlated materials, the message of this work is that DQDs provide a controlled environment in which to see the kinetic-energy-driven zero-bias anomaly unique to disordered strongly-interacting systems.
To researchers in the semiconductor mesoscopics community, the message is that an unconventional analysis of transport data through two parallel-coupled quantum dots displays a zero-bias anomaly that is in fact a fingerprint of strong correlations.
Whereas transition metal oxides can generically be described within a many-site version of the Anderson-Hubbard model, DQDs can be described by a two-site Anderson-Hubbard model with each dot corresponding to a single site.
In order to connect bulk tunneling measurements with DQD measurements, we consider DQDs in the parallel configuration in which both dots couple to both the source and the drain such that the tunneling process may occur through both dots simultaneously.\cite{Chan2002,holleitner2003,Chen2004,hubel2008,keller2014} This is in contrast to the more conventional series coupling in which electrons tunnel sequentially through the two dots.
Disorder that would be present in a bulk material is mimicked in the DQD system by averaging over gate-voltage-induced energy level offsets.
The resulting average conductance through the DQD system contains, under some conditions, a zero-bias anomaly that has not been recognized previously.
This paper begins with a brief presentation of the two-site Anderson-Hubbard model and its realization in DQDs, followed by an overview of past work on the zero-bias anomaly in the ensemble-averaged density of states.
We include discussion of the effect of nearest-neighbor interactions (electrostatic coupling), a Hamiltonian term that has been explored in bulk systems\cite{Song2009,Chen2012,Wortis2014} but which was not included in prior discussion of ensembles of two-site systems\cite{Wortis2010,Chen2010,Wortis2011}
and is relevant to DQDs.
We then explore how the zero-bias anomaly is reflected in DQD measurements, proposing an experimental approach to identify the kinetic-energy-driven effect that is unique to strongly-correlated systems.
\section{The two-site Anderson-Hubbard model and a double quantum dot}
\label{system}
We consider systems consisting of two sites, each of which has a variable site potential and between which tunneling may occur (Fig.~\ref{dqd}a).
A two-site Anderson-Hubbard model is described by the following Hamiltonian:
\begin{eqnarray}
{\hat H} &=&
- t \sum_{\sigma=\uparrow, \downarrow} \left( {\hat c}_{1\sigma}^{\dag} {\hat c}_{2\sigma} + {\hat c}_{2\sigma}^{\dag} {\hat c}_{1\sigma} \right) \nonumber \\
& & + \sum_{i=1,2} \left( \epsilon_i {\hat n}_i + U {\hat n}_{i\uparrow} {\hat n}_{i\downarrow} \right)
\label{ahm}
\end{eqnarray}
${\hat c}_{i\sigma}$ and ${\hat n}_{i\sigma}$ are the annihilation and number operators for lattice site $i$ and spin $\sigma$.
$t$ is the hopping amplitude and $U$ the strength of the on-site Coulomb repulsion.
Disorder is modelled by choosing $\epsilon_i$, the energy of the orbital at site $i$, from a uniform distribution of width $\Delta$.
DQDs offer an approach to realize this Hamiltonian in experiment (Fig.~\ref{dqd}b).
DQDs are commonly built in semiconductor-based two-dimensional electron gases using metal gates to confine and control the electrons\cite{RevModPhys.75.1}. The energy of an orbital on dot 1, $\epsilon_1$, can be tuned by applying a voltage $V_{g1}$ to an adjacent gate.
Likewise for dot 2, $V_{g2}$ tunes $\epsilon_2$. $U$ effectively represents the single-dot charging energies $e^2/C$, where $C$ is the self capacitance of each dot, assumed to be the same. In the measurements described here, site potentials $\epsilon_1$ and $\epsilon_2$ are varied continuously over a range $\Delta>U$, large enough to cause the ground-state occupation of each dot to vary by $\pm 1$.
The height of the tunnel barrier between the two dots, and hence the hopping amplitude $t$, is typically controlled by an additional gate.
To observe the effects of interest here, the size of each individual dot must be small enough that the orbital level spacing is large relative to the thermal energy and to the hopping amplitude between the dots. Furthermore, we assume the dots contain just 0, 1, or 2 electrons so higher orbitals may be neglected. We remind the reader that $\Delta$ in this paper refers to the disorder distribution width, not to orbital level spacing (as is common in DQD literature) as the orbital spacing is assumed to be so large as never to play a role.
In DQD experiments, the two dots are generally close enough that the presence of an electron on dot 1 increases the potential at dot 2.
In the tight-binding model (1) this electrostatic coupling can be represented by adding a nearest-neighbor interaction term
$V{\hat n}_1 {\hat n}_2$ in the Hamiltonian.
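For concreteness, the Hamiltonian above, including the $V{\hat n}_1 {\hat n}_2$ term, can be diagonalized exactly in the 16-dimensional Fock space of the two sites. The following Python sketch is illustrative only (the parameter values are arbitrary, and this is not the code used to produce the figures):

```python
# Exact diagonalization of the two-site Anderson-Hubbard model,
# H = -t sum_s (c1s^+ c2s + h.c.) + sum_i (eps_i n_i + U n_iu n_id) + V n1 n2.
# Fock space: 4 fermionic modes (1up, 1dn, 2up, 2dn) -> 16 basis states.
import numpy as np

N_MODES = 4          # mode ordering: (site1,up), (site1,dn), (site2,up), (site2,dn)
DIM = 1 << N_MODES   # 16 Fock states, labelled by occupation bit patterns

def annihilator(m):
    """Matrix of c_m, with Jordan-Wigner fermionic signs from lower modes."""
    c = np.zeros((DIM, DIM))
    for s in range(DIM):
        if s & (1 << m):
            sign = (-1) ** bin(s & ((1 << m) - 1)).count("1")
            c[s ^ (1 << m), s] = sign
    return c

c = [annihilator(m) for m in range(N_MODES)]
n = [ci.T @ ci for ci in c]                      # number operators n_m

def hamiltonian(t, U, V, eps1, eps2):
    n1, n2 = n[0] + n[1], n[2] + n[3]            # total occupation per dot
    H = -t * (c[0].T @ c[2] + c[2].T @ c[0]      # spin-up hopping
              + c[1].T @ c[3] + c[3].T @ c[1])   # spin-down hopping
    H += eps1 * n1 + eps2 * n2                   # site potentials
    H += U * (n[0] @ n[1] + n[2] @ n[3])         # on-site repulsion
    H += V * (n1 @ n2)                           # interdot electrostatic coupling
    return H

if __name__ == "__main__":
    H = hamiltonian(t=0.6, U=8.0, V=2.0, eps1=-3.0, eps2=-4.0)
    print("lowest eigenvalues:", np.linalg.eigvalsh(H)[:3])
```

A convenient consistency check: with $U=V=\epsilon_i=0$ the spectrum reduces to fillings of the bonding and anti-bonding orbitals at $\mp t$, so the ground-state energy over all fillings is $-2t$.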
\section{The zero-bias anomaly in the ensemble-average density of states}
\label{III}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig1_JF.pdf}
\caption{\label{dqd}
(Color online.) (a) Schematic diagram of a single two-site system.
(b) The DQD realization of the system in (a), with tunnel coupling $t$ between sites and interdot electrostatic coupling $V$. The dots are connected to source and drain leads in the parallel configuration of relevance to this paper.
}
\end{figure}
The density of states (DOS) of an ensemble of two-site Anderson-Hubbard systems (Fig.~1a) has previously been shown to be suppressed at low energy.\cite{Wortis2010,Chen2010,Wortis2011} In order to understand the structure and origin of this zero-bias anomaly, we focus first on a single two-site system and then an ensemble average, showing in each case the effect of inter-site hopping on the single-particle DOS.
When the system is non-interacting ($U=V=0$), the DOS is just a histogram of the single-particle states as a function of energy.
For a single two-site system, the DOS will consist of just two peaks of equal weight. Without hopping (tunneling) between the dots, these peaks will be at the site energies $\epsilon_1$ and $\epsilon_2$. With hopping they will be at the energies of the bonding and anti-bonding states.
When interactions are present, the state of the system can no longer be described in terms of single-particle states, and the single-particle DOS is the density of single-particle \emph{transitions}
-- transitions from the many-body ground state (with minimum grand potential $\Omega=E-\mu N$) to many-body excited states for which the particle number is one more or one less than that of the ground state.
With multiple many-body states available, the single-particle DOS of a single interacting two-site system consists of between two and five sharp peaks (depending on the number of possible transitions from the ground state) with varying heights (depending on the magnitude of the corresponding matrix elements).
Fig.\ \ref{zba}(b) shows an example of the DOS, both without and with hopping, for a specific two-site system with site energies as shown in Fig.\ \ref{zba}(a).
\begin{figure}
\includegraphics[width=\columnwidth]{Fig_dos_v5.pdf}
\caption{\label{zba}
(Color online.) (a) Diagram of one possible arrangement of Hubbard orbitals relative to the chemical potential.
(b) The DOS of the system shown in (a) without hopping (black) and with hopping (red).
(c) The ensemble-average DOS with $\Delta=12$, $U=8$, $V=0$ and $t$ as indicated.
(d) The ensemble average DOS with $\Delta=12$, $U=8$, $t=0$ and $V$ as indicated.
(e) The ensemble-average DOS with $\Delta=12$, $U=8$, $V=2$ and $t$ as indicated.
In all cases, $\mu=V+U/2$, corresponding to half filling.
}
\end{figure}
An average over infinitely many disorder configurations (an ensemble of systems) will result in a smooth DOS, because, for each pair of site potentials, the peaks appear at different energies.
The case of purely on-site interactions has been explored in detail elsewhere.\cite{Wortis2010,Chen2010,Wortis2011}
Two central results from those references are (i) without hopping there is no zero-bias anomaly, and
(ii) with hopping, and for sufficiently strong on-site interaction and disorder, there is a zero-bias anomaly the width of which grows linearly with $t$, as shown in Fig.\ \ref{zba}(c).
The DOS suppression can ultimately be traced to the simple lowering of kinetic energy when electrons can spread themselves over both sites.\cite{Wortis2010}
This generic effect manifests in a unique way in systems in which both disorder and interactions are strong, because in these systems the difference in potential between neighboring sites can be roughly equal to $U$ and much greater than $t$.
In order to understand how this leads to suppression of the DOS, consider again Fig.\ \ref{zba}(a), where the alignment of the levels is such that $\epsilon_2 \lesssim \mu \lesssim \epsilon_1+U$. For this system, the $t=0$ ground state contains two particles, one on each site.
When hopping is added, the energy of this ground state is lowered by an amount linear in $t$ due to the spreading of the second electron over both sites.
To linear order in $t$, the 1-particle and 3-particle excited states are not shifted.
Therefore, the energies of the {\it transitions} from the 2-particle ground state to both 1-particle and 3-particle excited states are increased: In Fig.\ \ref{zba}(b) the red peaks (with hopping) are farther from the origin than the black peaks (without hopping) by an amount of order $t$. This shift of the transitions away from zero energy demonstrates the key physics in one system.
The zero-bias anomaly is in the DOS of an ensemble of systems.
One might expect the shifts in transition energies in different systems to cancel each other. Indeed, changes in peak position and peak height with $t$ do depend on the specific site potentials and some cancelation does occur. Taken together, however, the net effect is a systematic shift of weight away from the Fermi level.\cite{Wortis2010}
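The ensemble average described above can be sketched numerically: for each random pair $(\epsilon_1,\epsilon_2)$ one finds the many-body ground state of $K=H-\mu N$ and histograms the particle-addition and -removal transitions with their Lehmann weights. The disorder box is assumed here to be centered on zero; this is a hedged illustration, not the production calculation behind Fig.~\ref{zba}.

```python
# Ensemble-averaged single-particle DOS of two-site Anderson-Hubbard systems.
# For each disorder realization (eps1, eps2 from a box of width Delta, assumed
# centered on zero), diagonalize K = H - mu*N, take the many-body ground
# state, and histogram addition/removal transitions with Lehmann weights.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16                                  # 4 fermionic modes: 1up,1dn,2up,2dn

def annihilator(m):
    c = np.zeros((DIM, DIM))
    for s in range(DIM):
        if s & (1 << m):                  # Jordan-Wigner sign from lower modes
            c[s ^ (1 << m), s] = (-1) ** bin(s & ((1 << m) - 1)).count("1")
    return c

c = [annihilator(m) for m in range(4)]
n = [ci.T @ ci for ci in c]
N_tot = n[0] + n[1] + n[2] + n[3]

def transitions(t, U, V, eps1, eps2, mu):
    """Transition energies (relative to mu) and Lehmann weights."""
    n1, n2 = n[0] + n[1], n[2] + n[3]
    H = -t * (c[0].T @ c[2] + c[2].T @ c[0] + c[1].T @ c[3] + c[3].T @ c[1])
    H += eps1 * n1 + eps2 * n2 + U * (n[0] @ n[1] + n[2] @ n[3]) + V * n1 @ n2
    E, psi = np.linalg.eigh(H - mu * N_tot)
    g = psi[:, 0]                         # ground state of K = H - mu*N
    om, wt = [], []
    for m in range(4):
        om += list(E - E[0]) + list(E[0] - E)   # addition, then removal
        wt += list((psi.T @ (c[m].T @ g)) ** 2) + list((psi.T @ (c[m] @ g)) ** 2)
    return np.array(om), np.array(wt)

def ensemble_dos(t, U, V, Delta, mu, n_samples=500, bins=81):
    edges = np.linspace(-Delta, Delta, bins + 1)
    hist = np.zeros(bins)
    for _ in range(n_samples):
        e1, e2 = rng.uniform(-Delta / 2, Delta / 2, size=2)
        om, wt = transitions(t, U, V, e1, e2, mu)
        hist += np.histogram(om, bins=edges, weights=wt)[0]
    return edges, hist / n_samples        # unnormalized average DOS

if __name__ == "__main__":
    U, V = 8.0, 0.0
    edges, dos = ensemble_dos(t=0.6, U=U, V=V, Delta=12.0, mu=V + U / 2)
    print("DOS in central bin vs. mean bin:", dos[len(dos) // 2], dos.mean())
```

The anticommutation relation guarantees that the addition and removal weights for each of the four modes sum to one, so the total spectral weight per realization is exactly four, a useful check on the implementation.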
When nearest-neighbor interactions are included, there is a minimum in the DOS at zero energy even when $t=0$.
Fig.\ \ref{zba}(d) shows the evolution of the $t=0$ DOS as nearest-neighbor interactions are turned on, favoring charge ordering.\cite{Wortis2014}
The effect of nonzero $V$ is important for comparison with DQD experiments in which electrostatic coupling will always be present.
However the resulting $V$-dependent ZBA is of less interest because it is independent of the strength of the on-site interactions.
In contrast, the $t$-dependent ZBA is unique to the strongly-correlated regime. Taking both effects together, Fig.~\ref{zba}(e) shows the evolution of the DOS for increasing $t$ at nonzero $V$.
It is this last panel that is most relevant to experiments because, in DQDs, $V$ will always be nonzero and $t$ is relatively easy to control.
\section{Experimental signatures of the zero-bias anomaly}
We discuss here two ways in which the physics of the zero-bias anomaly is reflected in the DQD transport measurement sketched in Fig.\ \ref{dqd}(b).
We begin by describing the connection between a suppression in the ensemble DOS near the Fermi level and the stability diagrams often produced in DQD measurements.\cite{Chen2004}
We then discuss how the energy dependence of the zero-bias anomaly can be measured, to identify its physical origin. In particular, our focus is on distinguishing the unique effect of $t$, in a system with both strong interactions and strong disorder, from other effects such as the charge ordering associated with $V$.
\subsection{Seeing the reduction of the DOS near the Fermi level}
\label{IVa}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Fig_levels+maps_v8}
\caption{\label{levels+maps}(Color online.)
Stability diagrams for a parallel-coupled double quantum dot system with
(a) no hopping, no electrostatic coupling;
(b) no hopping, $V=2$; and
(e) $t=0.6$, $V=2$.
The color scale indicates the magnitude of the current as a function of $V_{g1}$ and $V_{g2}$.
The ground-state occupation of each dot is indicated in the two $t=0$ plots, panels (a) and (b), while in panel (e) the numbers in parentheses indicate the overall ground-state occupation.
In all cases $\Delta=12$, $U=8$, $\mu=V+U/2$ and $\ensuremath{V_{sd}}=0.2$.
Transition spectrum diagrams for $(eV_1,eV_2)=(-3,-4.01)$ (c) $t=0$ and (f) $t\ne 0$;
and for $(eV_1,eV_2)=(-3,-4.03)$ (d) $t=0$ and (g) $t\ne 0$.
Diagrams showing $\ensuremath{\mu_{s}}$ on the left, $\ensuremath{\mu_{d}}$ on the right and the available single-particle-addition transitions. The height of the transition lines indicates energy, and the length indicates the weight of the transition.
}
\end{figure}
It is common practice to plot the current through a double quantum dot system as a function of the gate voltages on the two dots, resulting in what is often called a stability diagram. This is effectively a plot of the DOS near the Fermi level for an ensemble of two-site systems. Examples are shown in Fig.\ \ref{levels+maps} panels (a), (b), (e), and (f) for different settings of $t$ and $V$.
Here we address how these plots reflect the suppression of the ensemble-average DOS near the Fermi level shown in Fig.\ \ref{zba}.
To understand the connection between the DOS and the stability diagram, consider first the simplest case of two independent dots. The conductance through both dots in parallel is shown in Fig.\ \ref{levels+maps}(a), with each dot coupled to the source and drain with applied bias $\ensuremath{V_{sd}} \equiv \ensuremath{\mu_{s}}-\ensuremath{\mu_{d}}$.
We will focus here on a system in which the barrier to the drain is much lower than that to the source, such that $\ensuremath{\mu_{d}}$ sets the ground state of each dot.
Current flows through a dot when a single-particle transition from the ground state has an energy less than $\ensuremath{V_{sd}}$.
These low energy transitions occur only for parameters very close to a change in the ground state occupation, so nonzero current flow appears in Fig.\ \ref{levels+maps}(a) only along lines marking the boundaries between different ground states.
Note that, in contrast to most theoretical work, we adopt the experimental convention that the axes of the stability diagram correspond to increasing occupancy up and to the right.
When the two parallel-coupled dots are nearby to each other and therefore not fully independent, but still without tunneling, electrostatic coupling affects the ground state occupancy of each dot and hence the stability diagram, Fig.\ \ref{levels+maps}(b). Fig.\ \ref{levels+maps}(c) shows an example of the transition spectrum (single-particle DOS) corresponding to a set of parameters for which there is a transition with energy less than $\ensuremath{V_{sd}}$, so current does flow. Indeed there are two such transitions, from the 1-particle ground state $\ket{\uparrow 0}$ to two 2-particle excited states $\ket{\uparrow,\uparrow}$ and $\ket{\uparrow \downarrow}$, which are degenerate when $t=0$ because there is no singlet-triplet splitting.
In contrast, Fig.\ \ref{levels+maps}(d) shows an example of the transition spectrum for a system in which current does not flow: Here the same transitions discussed above have an energy outside the window between source and drain.
As an example, Fig.\ 2(c) of Ref.\ \onlinecite{Chan2002} shows a measured stability diagram of the type shown in Fig.\ \ref{levels+maps}(b) with regions of fixed site occupancy separated by straight lines where current flows.
Note that, while in a calculation each parameter may be tuned independently, in experiments they are often interdependent. In particular, increasing $V_{g1}$ will also slightly increase the potential on dot 2, resulting in lines of current in the experimental diagram that are slanted rather than strictly horizontal and vertical lines as in the theory diagram.
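A deliberately simplified, classical $t=0$ sketch of such a stability diagram follows: each dot holds 0, 1, or 2 electrons, the drain potential $\mu_d$ selects the ground state, and a point in the $(\epsilon_1,\epsilon_2)$ plane carries current when some one-electron transition energy lies inside the bias window. The direct identification of gate voltages with site energies, and all parameter values, are illustrative assumptions.

```python
# Toy t = 0 stability diagram: classical charge states with on-site U and
# interdot V; the drain potential mu_d fixes the ground state, and current
# flows wherever a one-electron transition lies in [mu_d, mu_d + eVsd].
import itertools
import numpy as np

def energy(n1, n2, e1, e2, U, V):
    # U is paid only on double occupancy; V couples the two dot charges.
    return e1 * n1 + e2 * n2 + U * ((n1 == 2) + (n2 == 2)) + V * n1 * n2

def current_flows(e1, e2, U, V, mu_d, vsd):
    occs = list(itertools.product(range(3), repeat=2))   # 0,1,2 per dot
    omega = {o: energy(o[0], o[1], e1, e2, U, V) - mu_d * sum(o) for o in occs}
    g = min(occs, key=omega.get)                         # ground-state occupation
    return any(abs(sum(o) - sum(g)) == 1 and 0.0 <= omega[o] - omega[g] <= vsd
               for o in occs)

def stability_map(U=8.0, V=2.0, mu_d=6.0, vsd=0.2, Delta=12.0, npts=120):
    eps = np.linspace(-Delta / 2, Delta / 2, npts)
    return np.array([[current_flows(e1, e2, U, V, mu_d, vsd) for e1 in eps]
                     for e2 in eps])

if __name__ == "__main__":
    m = stability_map()
    print("fraction of the gate plane carrying current:", m.mean())
```

With $V=0$ this reproduces the independent-dot picture: current is confined to narrow bands around the charge-degeneracy conditions $\epsilon_i \approx \mu_d$ and $\epsilon_i + U \approx \mu_d$.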
Now consider the case of non-zero $t$, when hopping is allowed between the two dots.
The transitions giving rise to current flow are between many-body states of the DQD system, which here have non-integer weight on each dot.
The current through the system is proportional to the DOS of the DQD system
\begin{eqnarray}
\rho(\omega) &=& {\rho_1(\omega) +\rho_2(\omega) \over 2}
\end{eqnarray}
where $\rho_i(\omega)$ is the local DOS on dot $i$, given by the imaginary part of the local single-particle Green's function.
\begin{eqnarray}
\rho_i(\omega) &=& -{1 \over \pi} {\rm Im}\ G_{ii}(\omega)
\end{eqnarray}
Fig.\ \ref{levels+maps}(e) and (f) show examples of
the resulting stability diagram, without and with electrostatic coupling respectively. Fig.\ \ref{levels+maps}(g) and (h) show examples of the transition spectra (single-particle DOS) for two different parameter settings that would and would not give rise to current flow, respectively.
An experimental example of the curved lines of current shown in Fig.\ \ref{levels+maps}(f) can be seen in Fig.\ 1(b) of Ref.\ \onlinecite{Chen2004}.
The calculated zero-bias anomalies shown in Figs.\ \ref{zba}(c)-(e) emerge from an ensemble of two-site systems with site potentials chosen from a flat distribution as described in Section \ref{system}. The experimental equivalent, measured in a DQD device for which the parameters may be tuned in situ, is to scan gates $V_{g1}$ and $V_{g2}$ over an appropriate region of the stability diagram.
The integrated current over such a scan then reflects the ensemble-average DOS for two-site systems, and a decline in the value of this integral as the barrier height is lowered (and hence as $t$ grows) is consistent with the development of a kinetic-energy-driven zero-bias anomaly.
The suppression of ensemble-average DOS at zero bias emerges from two characteristics of the stability diagrams in Fig.\ \ref{levels+maps}:
both the length and the color of the lines depend on $V$ and $t$.
The most obvious effect giving rise to the suppression is that the total length of the lines decreases from panel (a), to (b), to (f).
Because the width of the lines is fixed by $\ensuremath{V_{sd}}$, shorter lines mean a smaller average DOS near the Fermi energy and therefore a smaller integrated current, in the absence of other effects.
Comparing panel (b) with panel (a), we see the DOS suppression associated with inter-site interactions, and comparing panel (f) with panel (a), we see the line-length contribution to DOS suppression associated with both inter-site interactions ($V$) and inter-site hopping ($t$).
However, line length is not the whole story. The zero-bias anomaly is also due to changes in the magnitude of the contributions (and hence current, indicated by colour) within each line. Comparing panels (c) and (g) in Fig.\ \ref{levels+maps}, one can see that the weight of the transition that allows the current is much less when hopping is present. This change in weight is indicated in the stability diagram by the much darker color (reduced current) of the line separating the 1-particle and 2-particle ground state regions in panel (f) relative to the corresponding line in panel (b).
In contrast, the weight of the transition facilitating current at the 2-particle to 3-particle boundary is increased, but not by as much as the other is decreased.
The difference in behavior at the two boundaries arises from the inherent asymmetry of the experiment--stronger coupling to the drain, with $\ensuremath{\mu_{s}}$ above $\ensuremath{\mu_{d}}$--which thereby enables transitions associated with the addition, not the removal, of a particle.
The combined effect of the changes in the weight (current) at the two boundaries when hopping is added is a further reduction of the ensemble-average DOS near the Fermi level, beyond the reduction due to reduced line length.
\subsection{Seeing the energy dependence of the zero-bias anomaly}
\label{IVb}
The analysis above, illustrating the suppression of integrated current in the DQD system due to finite $t$ and $V$, cannot distinguish between the kinetic-energy-driven zero-bias anomaly and other sources of DOS suppression near the Fermi level, such as zero-bias anomalies driven by alternative physical mechanisms or broader reduction in the DOS.
To make matters worse, measuring the drop in current due to a change in $t$ would be very challenging from an experimental point of view, as gates to change $t$ would typically also affect coupling from the dots to source and drain. A more effective approach is to measure the zero-bias anomaly by the rise in current at finite energy, that is, with finite applied bias ($\ensuremath{V_{sd}}$).
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Fig_Vbias_top_v210628} \\ \vskip 0.1 in
\includegraphics[width=0.8\columnwidth]{Fig_Vbias_bot_v210628}
\caption{\label{Vbias}(Color online.)
The ensemble-average density of states as a function of frequency near the Fermi level for (a) $U=8$ and (f) $U=1$. In all cases, $\Delta=12$, $V=0.6$, $\mu=V+U/2$.
(b)-(e) stability diagrams for $U=8$ and
(g)-(j) stability diagrams for $U=1$ with $t$ and $\ensuremath{V_{sd}}$ as indicated and the same color scale as shown in Fig.\ \ref{levels+maps}.
}
\end{figure}
Differential conductance is a common probe of the DOS in DQD structures, and can readily be measured also as function of $\ensuremath{V_{sd}}$.
To probe the ensemble-average DOS, the differential conductance across the stability diagram can be averaged over $V_{g1}$ and $V_{g2}$, as discussed above.
Fig.\ \ref{Vbias} shows the calculated results from such a gate average over DQD systems with two values of on-site interactions: $U=1$ and $U=8$.
All other parameters are the same for both figures: disorder $\Delta=12$, nearest-neighbor interactions $V=0.6$, and four values of hopping $t$.
Panel (a) shows the case of strong correlations, in the sense that $t \ll U$ for all hopping values shown, while in panel (f) $t$ approaches the value of $U$ because $U$ is much smaller.
Although implementing $U\ll\Delta$ as in panel (f) in a DQD system while remaining in the single-electron regime is impossible, the small $U$ case is relevant to bulk systems with screening, and the comparison highlights what is unique about the strongly correlated regime.
Panels (a) and (f) show the ensemble-average DOS close to the Fermi level for the large and small $U$ cases.
The curves without hopping illustrate the DOS suppression caused by nearest-neighbor interactions, resulting in linear dependence on energy out to the energy $V$ in both panels (a) and (f).
With hopping, there is a clear difference in the response of the ensembles with large and small $U$.
In the strongly-correlated (large $U$) case a conspicuous zero-bias anomaly opens as tunneling between the dots is increased, while in the weakly-correlated case no such feature appears.
Why are the two limits so different?
Fig.\ \ref{Vbias} panels (b) and (c) show the change in the stability diagram in the strongly-correlated case when hopping is added, while panels (g) and (h) show the corresponding comparison with weak onsite interactions.
Two energy ratios are relevant -- the relative strength of hopping versus interactions $t/U$ and the relative strength of interactions versus disorder $U/\Delta$.
The ratio $t/U$ is important because
the kinetic-energy-driven zero-bias anomaly arises from an alignment in energy of the potential on one site with the potential on the other site plus $U$ as shown in Fig.\ \ref{zba}(a).\cite{Wortis2010}
As described in Section \ref{III}, the linear-$t$ behaviour is generated by a mixing of just two atomic orbitals $\epsilon_2$ and $\epsilon_1+U$, but not all four orbitals.
This is only possible with $t/U \ll 1$.
The linear dependence on $t$ is visible in Fig.\ \ref{zba}(c), specifically in the distance between the sharp maxima in the DOS.\cite{Wortis2010}
The influence of nearest-neighbor interaction $V$ obscures this linearity in Fig.\ \ref{zba}(e) and Fig.\ \ref{Vbias}(a).
The ratio $U/\Delta$ is also important because it determines over what fraction of the ensemble the changes just discussed will manifest.
In the $U=1$ case shown here, there is very little change in the stability diagram at most values of the site potentials.
Other than at the center, both the locations of the lines, indicating nonzero DOS contribution, and their color, indicating the magnitude of the contribution, are very similar in panels (g) and (h).
The $U=8$ case, in contrast, shows significant changes in both the position and the color of the lines between panels (b) and (c).
Because $U/\Delta$ is small in the $U=1$ case shown, even when $t=0.2$ (so that $t/U$ is small) the center of the diagram shows changes similar to those in the $U=8$ case, yet no dramatic zero-bias anomaly develops: the addition of hopping has little effect for most values of the site potentials.
A closer examination of changes in the stability diagram with bias can help to understand the origin of the sharp edge in the zero-bias anomaly, for $U=8$.
Panels (c)-(e) and (h)-(j) of Fig.\ \ref{Vbias} show how the stability diagram evolves with $\ensuremath{V_{sd}}$ for the strongly and weakly correlated cases respectively.
In the weakly correlated case ((h)-(j)), the only change is the width of the region in which nonzero current flows.
In the strongly correlated case ((c)-(e)), on the other hand, new transitions--specifically those to the two triplet states--begin to contribute at higher values of $\ensuremath{V_{sd}}$, sharply increasing the current flow for specific potential configurations.
It is the entrance of these new transitions that creates the sharp edge to the zero-bias anomaly in the ensemble-average DOS.
However, while the sharp edge is visually striking, the connection with bulk systems is in the largest energy scale of the anomaly rather than its detailed shape.
\section{Conclusion}
In summary, we have pointed out two ways in which a characteristic transport signature of disordered strongly-correlated bulk materials can be seen in DQD measurements.
Comparing stability diagrams at fixed $\ensuremath{V_{sd}}$, without and with hopping, shows that hopping suppresses the DOS.
While such diagrams have been observed in DQD systems, they have not previously been connected with the bulk zero-bias anomaly.
Moreover, comparing stability diagrams at fixed hopping as $\ensuremath{V_{sd}}$ is increased allows exploration of the energy dependence of the zero-bias anomaly. Such an analysis, including averaging over gate voltages, has not, to our knowledge, been performed.
The presence of a zero-bias anomaly that depends only on hopping, and for which the maximum energy scale grows linearly with $t$, is a unique signature of strongly-correlated systems.
Observing this effect in DQD systems would establish a first solid point of contact between theory and experiment in the study of the disorder-driven zero-bias anomaly in strongly correlated systems.
\section{Acknowledgements}
R.W. gratefully acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) of Canada and thanks the Pacific Institute for Theoretical Physics for hosting her. J.F. and S.L. acknowledge support from NSERC, the Canada Foundation for Innovation, the Canadian Institute for Advanced Research, and the Stewart Blusson Quantum Matter Institute.
In a recent paper~\cite{Gilkes1994} the EOS collaboration reported
some very interesting results on the critical point behavior of
nuclear matter. The analysis of the data is based on percolation
model predictions due to Campi~\cite{Campi1986}.
In this analysis, three parameters
appear, one being the expected critical multiplicity $\langle m
\rangle_{c}$ and the other
two critical exponents $\gamma$ and $\beta$. The $\gamma$ and $\beta$
exponents are then used to obtain the critical exponent $\tau$. The
purpose of this paper is to show there is a simple connection between
the critical exponent $\tau$ and the critical multiplicity
$\langle m \rangle_{c}$.
The development of this
relation is based on an exactly solvable statistical model of hadronic
matter which shares some features with percolation models. Namely, in
the infinite mass number $A$ limit, an infinite cluster can appear below
a critical point in a variable $x$ called the tuning parameter, just
as in percolation models an infinite cluster may exist only at and above
a critical site or bond probability $p_{c}$.
Thermodynamic and statistical arguments relate the tuning parameter
$x$ to the volume, temperature, binding energy coefficient $a_{V}$,
and level spacing parameter $\varepsilon_{0}$,
namely
\begin{equation}
x = {V \over v_{0}(T)} \exp \left\{-{a_{V} \over k_{B}T} - {k_{B}T \over
\varepsilon_{0}} {T_{0} \over T+T_{0}} \right\} \;.
\end{equation}
Further details can be found in Ref.~\cite{Chase1994}.
A parallel of the exactly solvable model of hadronic matter with Bose
condensation and Feynman's approach to the $\lambda$ transition in
liquid helium was also noted in previous studies~\cite{Chase1994}.
In this parallel, the cluster size in fragmentation is equivalent to
the cycle length in the cycle class decomposition of the symmetric
group for the Bose problem. The number of clusters of size $k$ is
the same as the number of cycles of length $k$. The complete
fragmentation into nucleons corresponds to all unit cycles, which is
the identity permutation. Bose condensation corresponds to the
appearance of longer and longer cycles in the cycle class
decomposition of the symmetric group.
In the statistical model of Ref.~\cite{Chase1994}
each fragmentation outcome happens
with a probability proportional to the Gibbs weight
\begin{equation}
P(\vec{n}) = {1 \over Z_{A}(\vec{x})}
\prod_{k \ge 1} {x_{k}^{n_{k}} \over n_{k}!}
\end{equation}
where $Z_{A}(\vec{x})$ is the canonical partition function.
Here $x_{k} = x/k^{\tau}$ with $x$ as given above and
$\tau$ the critical exponent
originally introduced in nuclear fragmentation by Finn et al.~\cite{Finn1982}.
Cluster distributions can be obtained from the partition functions
$Z_{A}(\vec{x})$ using
\begin{equation}
\langle n_{k} \rangle
= x_{k} {Z_{A-k}(\vec{x}) \over Z_{A}(\vec{x})}
\end{equation}
In turn the $Z_{A}(\vec{x})$ can be obtained by recursive techniques,
and in particular
\begin{equation}
Z_{A}(\vec{x})
= {1 \over A} \sum_{k \ge 1} k x_{k} Z_{A-k}(\vec{x})
\end{equation}
with $Z_{0}(\vec{x}) = 1$. If $A$ is very large we can work in
the grand canonical limit, where
$\langle n_{k} \rangle = x z^{k}/k^{\tau}$,
$\langle m \rangle = x g_{\tau}(z)$ and $A = x g_{\tau-1}(z)$.
Here $z \le 1$ is the fugacity and $g_{\tau}(z) = \sum_{k} z^{k}/k^{\tau}$.
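The recursion for $Z_A$ and the resulting cluster distribution are simple to evaluate numerically; the sketch below (with illustrative parameter values) also verifies the mass sum rule $\sum_k k \langle n_k \rangle = A$, which follows directly from the recursion.

```python
# Numerical evaluation of the recursion Z_A = (1/A) sum_k k x_k Z_{A-k}
# with x_k = x / k**tau, and of the cluster distribution
# <n_k> = x_k Z_{A-k} / Z_A.  Parameter values are illustrative.
def partition_functions(A, x, tau):
    xk = [0.0] + [x / k**tau for k in range(1, A + 1)]
    Z = [1.0] + [0.0] * A                     # Z_0 = 1
    for a in range(1, A + 1):
        Z[a] = sum(k * xk[k] * Z[a - k] for k in range(1, a + 1)) / a
    return xk, Z

def cluster_distribution(A, x, tau):
    xk, Z = partition_functions(A, x, tau)
    return [xk[k] * Z[A - k] / Z[A] for k in range(1, A + 1)]

if __name__ == "__main__":
    A = 100
    nk = cluster_distribution(A, x=20.0, tau=2.5)
    print("<m> =", sum(nk))
    print("mass check, sum_k k<n_k> =", sum(k * nk[k - 1] for k in range(1, A + 1)))
```

For small $A$ the recursion can be checked against the explicit Gibbs sum, e.g.\ $Z_2 = x_1^2/2! + x_2$.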
At $z = 1$, $g_{\tau-1}(z)$ is finite only if $\tau > 2$, which indicates
a critical point at $A = x_{c} g_{\tau-1}(1)$.
At this point the expected multiplicity $\langle m \rangle_{c}$
is given by
\begin{equation}
{\langle m \rangle_{c} \over A} = {\zeta(\tau) \over \zeta(\tau-1)}
\end{equation}
where $\zeta(x)$ is the Riemann zeta function.
The EOS collaboration recently reported independent determinations of
$\langle m \rangle_{c}$ and $\tau$. Specifically, they found
$\langle m \rangle_{c} = 26 \pm 1$ and $\tau = 2.14 \pm 0.06$
for the fragmentation of gold nuclei by using a percolation model analysis.
For our model, $\langle m \rangle_{c} = 26 \pm 1$ implies
a critical exponent $\tau = 2.262 \pm 0.013$, which is consistent with the
EOS result but is based on a different model.
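The relation between $\langle m \rangle_c$ and $\tau$ can be inverted numerically. In the sketch below the system size $A=79$, the charge of gold, is our own illustrative assumption (appropriate only if the measured multiplicity counts charged fragments); with it, $\langle m \rangle_c = 26$ yields $\tau \approx 2.26$, close to the value quoted above.

```python
# Inverting  <m>_c / A = zeta(tau) / zeta(tau - 1)  for tau by bisection.
# zeta is evaluated by direct summation plus a midpoint integral tail.
# A = 79 (the charge of gold) is an assumption made for illustration only.
def zeta(s, nterms=20000):
    assert s > 1.0
    partial = sum(k ** -s for k in range(1, nterms + 1))
    return partial + (nterms + 0.5) ** (1.0 - s) / (s - 1.0)  # tail estimate

def critical_ratio(tau):
    return zeta(tau) / zeta(tau - 1.0)     # monotonically increasing in tau

def solve_tau(ratio, lo=2.05, hi=3.5, tol=1e-8):
    while hi - lo > tol:                   # simple bisection
        mid = 0.5 * (lo + hi)
        if critical_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    print("tau for <m>_c/A = 26/79:", round(solve_tau(26.0 / 79.0), 3))
```

The ratio $\zeta(\tau)/\zeta(\tau-1)$ vanishes as $\tau \rightarrow 2^{+}$ (where $\zeta(\tau-1)$ diverges) and approaches one at large $\tau$, so the inversion is well defined.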
The critical behavior of this model is most clearly seen from the
behavior of the multiplicity fluctuations
$\langle m^{2} \rangle - \langle m \rangle^{2} \equiv \langle m \rangle_{2}$.
In the grand canonical limit, these fluctuations are given by
\begin{equation}
\langle m \rangle_{2} = \left\{
\begin{array}{ll}
\langle m \rangle & \langle m \rangle \le \langle m \rangle_{c} \\
\langle m \rangle - x {g_{\tau-1}(z)^{2} \over g_{\tau-2}(z)}
& \langle m \rangle > \langle m \rangle_{c}
\end{array} \right. \;,
\end{equation}
which is a continuous function with a maximum at the transition point.
However, the slope of $\langle m \rangle_{2}$ is discontinuous at the
transition point,
i.e.\ the peak at the critical point is a cusp. Specifically,
\begin{equation}
\left. x {\partial \over \partial x} \langle m \rangle_{2}
\right|_{x \rightarrow x_{c}^{+}} -
\left. x {\partial \over \partial x} \langle m \rangle_{2}
\right|_{x \rightarrow x_{c}^{-}} = -2 A x_{c}
\end{equation}
Such a discontinuity is indicative of a phase transition.
For finite $A$, the exactly solvable model developed in
Ref.~\cite{Chase1994} leads to a rounded peak, as shown in
Fig.~\ref{fig:m2vsm}, which plots $\langle m \rangle_{2}/A$ vs.\
$\langle m \rangle/A$ for $\tau = 2.5$.
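The rounded finite-$A$ peak can also be reproduced directly from the canonical recursion: writing $Z_A(x) = \sum_m c_m x^m$, where $m$ is the cluster multiplicity, the moments $\langle m \rangle$ and $\langle m^2 \rangle$ follow from the coefficients $c_m$. The Python sketch below is our own illustration, again assuming the weights $x_k = x/k^{\tau}$, and locates an interior maximum of the fluctuation as a function of $x$.

```python
def multiplicity_polynomial(A, tau):
    # Coefficients c_m of Z_A(x) = sum_m c_m x**m for weights x_k = x / k**tau:
    # the recursion Z_a = (1/a) * sum_k k**(1 - tau) * x * Z_{a-k} shows that
    # each step multiplies by one factor of x, i.e. raises m by one.
    polys = [[1.0]]                              # Z_0 = 1
    for a in range(1, A + 1):
        p = [0.0] * (a + 1)
        for k in range(1, a + 1):
            w = k ** (1 - tau) / a
            for m, c in enumerate(polys[a - k]):
                p[m + 1] += w * c
        polys.append(p)
    return polys[A]

def multiplicity_fluctuation(coeffs, x):
    # Returns (<m>, <m^2> - <m>^2) at parameter x.
    w = [c * x ** m for m, c in enumerate(coeffs)]
    Z = sum(w)
    m1 = sum(m * wi for m, wi in enumerate(w)) / Z
    m2 = sum(m * m * wi for m, wi in enumerate(w)) / Z
    return m1, m2 - m1 * m1
```

Scanning $x$ over a range containing the critical value $x_c \approx A/\zeta(\tau-1)$ exhibits the rounded peak between the two endpoint regimes.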
This behavior of the multiplicity fluctuation arises from the
sudden appearance of an infinite cluster below the critical point, a
situation that parallels a similar result in percolation models.
In percolation theory an infinite cluster exists
only when the site or bond probability exceeds a critical value as
noted above. For
the nuclear fragmentation model, the critical condition
and the sudden presence of an infinite cluster also parallels the
discussion of Bose condensation. Specifically, the Bose condensation
phenomenon is equivalent to a condensation of nucleons into the
largest cluster and the argument for the presence of the infinite
cluster is the same as that used to discuss the appearance of the Bose
condensed state. Namely, $A/x = \sum_{k} z^{k} k^{1-\tau}$ with
$x \propto V$, and for $z<1$ the sum $\sum_{k} z^{k} k^{1-\tau}$
is finite. At $z=1$, the sum is finite only for $\tau > 2$, while for
$z > 1$ the sum diverges; hence the sum must be truncated.
This truncation implies that the expected number of certain large
clusters must be zero. For $\tau \le 2$
an infinite cluster does not exist, which is equivalent to the
condition that Bose condensation does not exist in two or fewer
dimensions.
In summary, an exactly solvable statistical model of heated hadronic
matter leads to a relationship between the critical multiplicity
$\langle m \rangle_{c}$ and critical exponent $\tau$ which
characterizes the power law
fall off of cluster yields with mass number. The critical
multiplicity can be obtained from the peak in the multiplicity
fluctuations, which in the infinite $A$ limit has a cusp behavior due
to the sudden appearance of an infinite cluster in the theory.
This work was supported in part by the National Science Foundation
Grant No. NSFPHY 92-12016.
\section{Introduction}
The goal of the present article is to investigate homotopy theoretic properties of L-spectra of the integers. We will concentrate on two particular flavours: On the one hand, we shall consider the classical quadratic L-spectrum $\L^{\mathrm{q}}(\mathbb{Z})$, whose homotopy groups arise as receptacles for surgery obstructions in geometric topology, along with its symmetric companion $\L^{\mathrm{s}}(\mathbb{Z})$ as introduced by Ranicki. On the other, we will study the genuine L-spectra $\L^{\mathrm{gq}}(\mathbb{Z})$ and $\L^\mathrm{gs}(\mathbb{Z})$, that more intimately relate to classical Witt-groups of unimodular forms and are part of joint work with Calm\`es, Dotto, Harpaz, Moi, Nardin and Steimle on the homotopy type of Grothendieck--Witt spectra. We will treat each case in turn.
\subsection{The classical variants}
As indicated above, the main motivation for studying the classical L-groups comes from their relation to the classification of manifolds as established by Wall: One of the main results of (topological) surgery theory associates to an $n$-dimensional closed, connected, and oriented topological manifold $M$ the \emph{surgery exact sequence}
\[ \dots \mathcal{N}_\partial(M\times I) \stackrel{\sigma}{\longrightarrow} \L_{n+1}^\mathrm{q}(\mathbb{Z}\pi_1(M)) \longrightarrow \mathcal{S}(M) \longrightarrow \mathcal{N}(M) \stackrel{\sigma}{\longrightarrow} \L_n^\mathrm{q}(\mathbb{Z}\pi_1(M)). \]
Here, we have denoted by $\mathcal S(M)$, the main object of interest, the set of $h$-cobordism classes of homotopy equivalences to $M$ and by $\mathcal N(M)$ the set of cobordism classes of degree $1$ normal maps to $M$. Finally, $\L_n^\mathrm{q}(\mathbb{Z}\pi_1(M))$ consists of bordism classes of $n$-dimensional quadratic Poincar\'e chain complexes. The classification of manifolds homotopy equivalent to $M$ therefore boils down to understanding the \emph{surgery obstruction} $\sigma$, assigning to a degree $1$ normal map $f \colon N \rightarrow M$ its surgery kernel $\sigma(f) = \fib\big(\mathrm{C}^*(M) \rightarrow \mathrm{C}^*(N)\big)[1]$ with the Poincar\'e structure induced from the Poincar\'e duality of $M$ and $N$.
Now, celebrated work of Ranicki and Sullivan shows that $\sigma$ can be described entirely in terms of L-spectra: It is identified with the composite
\[ (\tau_{\geq 1}\L^{\mathrm{q}}(\mathbb{Z}))_*(M) \longrightarrow \L^{\mathrm{q}}(\mathbb{Z})_*(M) \stackrel{\mathrm{asbl}}{\longrightarrow} \L^{\mathrm{q}}_{*}(\mathbb{Z}\pi_1(M)),\]
where $\tau_{\geq 1}$ denotes connected cover of a spectrum, and $\mathrm{asbl}$ is the $\L$-theoretic assembly map. As an application of this technology let us mention recent progress on Borel's famous rigidity conjecture, which states that any homotopy equivalence between aspherical manifolds is homotopic to a homeomorphism. {In dimensions greater than $4$ the topological $s$-cobordism theorem translates the Borel conjecture to the statement that $\mathcal S^s(M) = \{\mathrm{id}_M\}$, whenever $M$ is aspherical; here $\mathcal S^s(M)$ denotes the set of $s$-cobordism classes of simple homotopy equivalences to $M$. That the assembly map in L-theory is an isomorphism implies that $\mathcal{S}(M) = \{\mathrm{id}_M\}$ and that a similar assembly map in K-theory is an isomorphism yields $\mathcal S^s(M) = \mathcal S(M)$. The combined statement about the assembly maps is known as the Farrell--Jones conjecture, which has been attacked with remarkable success, leading to a proof of Borel's conjecture for large classes of aspherical manifolds; see for example \cite{FJ3,BL2,BLRR}.}\\
It is therefore of great interest to investigate L-spectra, and not just their homotopy groups. Our first set of results relates the symmetric and quadratic $\L$-spectra of the integers, $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L^{\mathrm{q}}(\mathbb{Z})$ to the $\L$-spectrum $\L(\mathbb{R})$ of the real numbers; since symmetric bilinear forms over the real numbers admit unique quadratic refinements, there is no difference between the quadratic and symmetric $\L$-spectra of $\mathbb{R}$, as reflected in our notation. To state our result we note, finally, that by work of Laures and McClure \cite{LauresMCCII} the tensor product of symmetric forms induces an $\mathbb{E}_\infty$-ring structure on symmetric L-spectra, over which the quadratic variants form modules. We show:
\begin{introthm}\label{thmA}
As $\L^{\mathrm{s}}(\mathbb{Z})$-module spectra, $\L^{\mathrm{q}}(\mathbb{Z})$ and $\L^{\mathrm{s}}(\mathbb{Z})$ are Anderson dual to each other. Furthermore, there is a canonical $\mathbb{E}_1$-ring map $\L(\mathbb{R}) \to \L^{\mathrm{s}}(\mathbb{Z})$ splitting the $\mathbb{E}_\infty$-ring map $\L^{\mathrm{s}}(\mathbb{Z}) \rightarrow \L(\mathbb{R})$ induced by the homomorphism $\mathbb{Z} \rightarrow \mathbb{R}$. Using it to regard $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L^{\mathrm{q}}(\mathbb{Z})$ as $\L(\mathbb{R})$-modules there are then canonical equivalences
\[
\L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1] \quad \text{and}\quad
\L^{\mathrm{q}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[-2] \]
of $\L(\mathbb{R})$-modules. Finally, as a plain spectrum, $\L(\mathbb{R})/2 \simeq \bigoplus_{k \in \mathbb Z} \H\mathbb{Z}/2[4k]$.
\end{introthm}
Some comments are in order:
\begin{enumerate}
\item As we recall in \cref{sectiona}, Anderson duality is a functor $\mathrm{I} \colon \mathrm{Sp}^{\mathrm{op}} \rightarrow \mathrm{Sp}$ on the category of spectra which is designed to admit universal coefficient sequences of the form
\[0 \longrightarrow \mathrm{Ext}(E_{n-1}(Y),\mathbb{Z}) \longrightarrow \mathrm{I}(E)^n(Y) \longrightarrow \mathrm{Hom}(E_n(Y),\mathbb{Z}) \longrightarrow 0\]
for any two spectra $E$ and $Y$. Thus, the first statement of \cref{thmA} gives a useful relation between $\L^{\mathrm{q}}(\mathbb{Z})$-cohomology and $\L^{\mathrm{s}}(\mathbb{Z})$-homology, and vice versa.
\item The homotopy groups of the $\L$-theory spectra occurring in the theorem are of course well-known by work of Milnor and Kervaire for the quadratic case, and Mishchenko and Ranicki for the symmetric one.
Furthermore, Sullivan showed that, after inverting $2$, they are all equivalent to the real $K$-theory spectrum $\mathrm{KO}$, albeit with a non-canonical equivalence; see \cite{Sullivan, MST}. It is also well-known that, after localization at $2$, $\L$-spectra become generalised Eilenberg-Mac Lane spectra; see \cite{TW}. As a result, the splittings in \cref{thmA} are fairly easy to deduce at the level of underlying spectra.
\item Refining the results of the previous point, the homotopy type of $\L(\mathbb{R})$ is quite well understood, even multiplicatively: For instance, by \cite{KandL} there is a canonical equivalence of $\mathbb{E}_\infty$-rings $\L(\mathbb{R})\mathopen{}\left[\tfrac 1 2\right]\mathclose{} \simeq \mathrm{KO}\mathopen{}\left[\tfrac 1 2\right]\mathclose{}$ and as an $\mathbb{E}_1$-ring $\L(\mathbb{R})_{(2)}$ is the free $\H\mathbb{Z}_{(2)}$-algebra on an invertible generator $t$ of degree $4$. Its $2$-local fracture square thus displays the $\mathbb{E}_1$-ring underlying $\L(\mathbb{R})$ as the pullback
\[\begin{tikzcd}
\L(\mathbb{R}) \ar[r] \ar[d] & \mathrm{KO}\mathopen{}\left[\tfrac 1 2\right]\mathclose{}\ar[d,"{\mathrm{ch}}"] \\
\H\mathbb{Z}_{(2)}\mathopen{}\left[t^{\pm 1}\right]\mathclose{} \ar[r,"{t \mapsto \frac{t}{2}}"] &\H\mathbb Q\mathopen{}\left[t^{\pm 1}\right]\mathclose{},
\end{tikzcd}\]
where $\mathrm{ch}$ is the Chern character.
\item One might hope that a splitting of $\L^{\mathrm{q}}(\mathbb{Z})$ as in the theorem might hold even as $\L^{\mathrm{s}}(\mathbb{Z})$-modules. This is, however, ruled out by the Anderson duality statement since $\L^{\mathrm{s}}(\mathbb{Z})$ is easily checked to be indecomposable as an $\L^{\mathrm{s}}(\mathbb{Z})$-module.
\end{enumerate}
Finally, let us mention that the Anderson duality between quadratic and symmetric L-theory as described in \cref{thmA} was anticipated by work on Sullivan's characteristic variety theorem initiated by Morgan and Pardon \cite{Pardon}: Pardon investigated the intersection homology Poincar\'e bordism spectrum $\Omega^{\mathrm{IP}}$, and showed that its homotopy groups are isomorphic to those of $\L^{\mathrm{s}}(\mathbb{Z})$ in all non-negative degrees except $1$. By an ad hoc procedure reminiscent of Anderson duality compounded with periodisation, he furthermore produced a $4$-periodic cohomology theory from $\Omega^{\mathrm{IP}}$ equivalent to that obtained from the spectrum $\L^{\mathrm{q}}(\mathbb{Z})$.
The expectation that the isomorphisms $\Omega_n^{\mathrm{IP}} \cong \L_n^s(\mathbb{Z})$ are induced by a map of spectra $\Omega^{\mathrm{IP}} \to \L^{\mathrm{s}}(\mathbb{Z})$, was finally implemented by Banagl, Laures and McClure \cite{BLM}, informally by sending an IP-space to its intersection cochains of middle perversity.
This article partially arose from an attempt to understand Pardon's constructions directly from an L-theoretic perspective.
\subsection{The genuine variants}
We also investigate the genuine L-spectra $\L^{\mathrm{gs}}(\mathbb{Z})$ and $\L^{\mathrm{gq}}(\mathbb{Z})$ introduced in \cite{CDHII}. These variants of $\L$-theory are designed to fit into fibre sequences
\[\mathrm K(R)_{\mathrm{hC}_2} \longrightarrow \mathrm{GW}^{s}(R) \longrightarrow \L^{\mathrm{gs}}(R) \quad \text{and} \quad \mathrm K(R)_{\mathrm{hC}_2} \longrightarrow \mathrm{GW}^{q}(R) \longrightarrow \L^{\mathrm{gq}}(R),\]
where the middle terms denote the symmetric and quadratic Grothendieck--Witt spectra, variously also denoted by $\mathrm{KO}(R)$ and $\mathrm{KO}^q(R)$, respectively. In particular, the groups $\L_0^\mathrm{gs}(R)$ and $\L_0^\mathrm{gq}(R)$ are exactly the classical Witt groups of unimodular symmetric and quadratic forms over $R$, respectively; recall that Witt groups are given by isomorphism classes of such forms, divided by those that admit Lagrangian subspaces.
In fact, one of the main results of \cite{CDHIII} is an identification of the homotopy groups of the spectrum $\L^\mathrm{gs}(R)$ with Ranicki's initial definition of symmetric L-groups in \cite{Ranicki4}, which lack the $4$-periodicity exhibited by $\L^{\mathrm{q}}(R)$ and $\L^{\mathrm{s}}(R)$. The spectra $\L^\mathrm{gs}(\mathbb{Z})$ and $\L^\mathrm{gq}(\mathbb{Z})$ are thus hybrids of the classical quadratic and symmetric $\L$-spectra:
For any commutative ring, there are canonical maps
\[\L^{\mathrm{q}}(R) \longrightarrow \L^\mathrm{gq}(R) \stackrel{\mathrm{sym}}{\longrightarrow} \L^\mathrm{gs}(R) \longrightarrow \L^{\mathrm{s}}(R),\]
of which the middle one forgets the quadratic refinement and the left hand map induces an isomorphism on homotopy groups in degrees below $2$. For Dedekind rings, like the integers, the right hand map is an isomorphism in non-negative degrees and the middle map in degrees outside $[-2,3]$.
We also recall from \cite{CDHIV} that $\L^\mathrm{gs}(R)$ is an $\mathbb{E}_\infty$-ring spectrum, the right hand map refines to an $\mathbb{E}_\infty$-ring maps and the entire displayed sequence consists of $\L^\mathrm{gs}(R)$-module spectra. By the calculations described above, the periodicity generator $x \in \L_4^\mathrm{s}(\mathbb{Z})$ pulls back from $\L^\mathrm{gq}_4(\mathbb{Z})$ and the $\L^\mathrm{gs}(\mathbb{Z})$-module structure of $\L^\mathrm{gq}(\mathbb{Z})$ then determines a map $\L^\mathrm{gs}(\mathbb{Z})[4] \to \L^\mathrm{gq}(\mathbb{Z})$,
which is an equivalence.
As the second result of this paper, we then have the following analogue of \cref{thmA}, where we denote by $\ell(\mathbb{R})$ the connective cover of $\L(\mathbb{R})$:
\begin{introthm}\label{thmB}
As $\L^{\mathrm{gs}}(\mathbb{Z})$-module spectra, $\L^{\mathrm{gq}}(\mathbb{Z})$ and $\L^{\mathrm{gs}}(\mathbb{Z})$ are Anderson dual to each other. Furthermore, there is a canonical $\mathbb{E}_1$-map $\ell(\mathbb{R}) \to \L^{\mathrm{gs}}(\mathbb{Z})$ and an equivalence
\[ \L^{\mathrm{gs}}(\mathbb{Z}) \simeq \mathscr{L} \oplus (\ell(\mathbb{R})/2)[1] \oplus (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2]\]
of $\ell(\mathbb{R})$-modules, where $\mathscr{L}$ is given by the pullback
\[\begin{tikzcd}
\mathscr{L} \ar[r] \ar[d] & \L(\mathbb{R}) \ar[d] \\
\ell(\mathbb{R})/8 \ar[r] & \L(\mathbb{R})/8
\end{tikzcd}\]
of $\ell(\mathbb{R})$-modules. Finally, as plain spectra, we have
\[\ell(\mathbb{R})/2 \simeq \bigoplus_{k \geq 0} \H\mathbb{Z}/2[4k] \quad \text{and} \quad \L(\mathbb{R})/(\ell(\mathbb{R}),2) \simeq \bigoplus_{k < 0} \H\mathbb{Z}/2[4k].\]
\end{introthm}
Again some comments are in order.
\begin{enumerate}
\item The homotopy groups of $\mathscr{L}$ are isomorphic to those of $\L(\mathbb{R})$, but the induced map $\pi_*(\mathscr{L}) \rightarrow \L_*(\mathbb{R})$ is an isomorphism only in non-negative degrees, whereas it is multiplication by $8$ below degree $0$.
\item By the discussion above, the canonical $\mathbb{E}_\infty$-map $\L^\mathrm{gs}(\mathbb{Z}) \to \L^{\mathrm{s}}(\mathbb{Z})$ induces an equivalence on connective covers and the map $\ell(\mathbb{R}) \to \L^\mathrm{gs}(\mathbb{Z})$ is then obtained from the ring map $\L(\mathbb{R}) \to \L^{\mathrm{s}}(\mathbb{Z})$ of \cref{thmA} by passing to connective covers.
\item Conversely, denoting by $x$ a generator of $\L^\mathrm{gs}_4(\mathbb{Z}) = \L^{\mathrm{s}}_4(\mathbb{Z})$ there are canonical equivalences
\[\L^\mathrm{gs}(\mathbb{Z})[x^{-1}] \simeq \L^{\mathrm{s}}(\mathbb{Z}) \quad \text{and} \quad \ell(\mathbb{R})[x^{-1}] \simeq \mathscr{L}[x^{-1}] \simeq \L(\mathbb{R}),\]
which translate the splitting of \cref{thmB} into that of \cref{thmA}.
\item Similarly, the Anderson duality statement of \cref{thmB} has that of \cref{thmA} as an immediate consequence, since $\L^{\mathrm{q}}(\mathbb{Z}) = \mathrm{div_x}\L^\mathrm{gq}(\mathbb{Z})$ consists of the $x$-divisible part of $\L^\mathrm{gq}(\mathbb{Z})$, and Anderson duality generally takes the inversion of an element $x$ to the $x$-divisible part of the dual, as it sends colimits to limits. We will, however, deduce the Anderson duality in \cref{thmB} from that of the classical $\L$-spectra in \cref{thmA}.
\end{enumerate}
\subsection{A possible multiplicative splitting of $\L^{\mathrm{s}}(\mathbb{Z})$}\label{question}
While our results explain the precise relation of the spectra $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L(\mathbb{R})$, they give no information about the multiplicative structures carried by them. Through various lingering questions we are led to wonder:
\begin{conj}
Is the ring map $\L^{\mathrm{s}}(\mathbb{Z}) \rightarrow \L(\mathbb{R})$, induced by the homomorphism $\mathbb{Z} \rightarrow \mathbb{R}$, a split square-zero extension?
\end{conj}
\cref{thmA}, in particular, supplies the requisite split in the realm of $\mathbb{E}_1$-spectra, though our question can equally well be considered at the level of $\mathbb{E}_k$-rings for any $k \in [1,\infty]$. For instance, we do not know whether our splitting can be promoted to an $\mathbb{E}_\infty$- or even an $\mathbb{E}_2$-map.
\subsection*{Organisation of the paper}
In \cref{sectiona,sectionl} we recall relevant facts about Anderson duality and $\L$-spectra, respectively.
In \cref{sections,sectionq,sectiongs} we then analyse the spectra $\L^{\mathrm{s}}(\mathbb{Z})$, $\L^{\mathrm{q}}(\mathbb{Z})$ and $\L^\mathrm{gs}(\mathbb{Z})$ in turn, proving \cref{thmA} in \cref{lstype}, \cref{adlslq}, \cref{symmetric case} and \cref{thmB} in \cref{AD for genuine L}, \cref{abc}.
The appendix contains the computation of the homotopy ring of the cofibre $\L^{\mathrm{n}}(\mathbb{Z})$ of the symmetrisation map $\L^{\mathrm{q}}(\mathbb{Z}) \rightarrow \L^{\mathrm{s}}(\mathbb{Z})$, which we use at several points. The result is well-known, but we had difficulty locating a proof in the literature.
\subsection*{Notation and Conventions}
Firstly, as already visible in the introduction, we will adhere to the naming scheme for $\L$-theory spectra introduced by Lurie in \cite{LurieL}, e.g.\ writing $\L^{\mathrm{s}}(R)$ and $\L^{\mathrm{q}}(R)$ rather than Ranicki's $\L^\bullet(R)$ and $\L_\bullet(R)$. We hope that this will cause no confusion, but explicitly warn the reader of the clash that $\L^{\mathrm{n}}(R)$ for us will mean the normal $\L$-spectrum of a ring $R$, variously called $\widehat\L_\bullet(R)$ and $\mathrm{NL}(R)$ by Ranicki, and never the $n$-th symmetric $\L$-group of $R$.
Secondly, essentially all of our constructions will only yield connected (as opposed to contractible) spaces of choices. Contrary to standard use in homotopical algebra, we will therefore use the term \emph{unique} to mean unique up to homotopy throughout in order to avoid awkward language. Similarly, we will say that the choices form a $G$-torsor if the components of the space of choices are acted upon transitively and freely by the group $G$; we wish it understood that no discreteness assertion is contained in the statement.
Finally, while most of the ring spectra we will encounter carry $\mathbb{E}_\infty$-structures, we will need to consider $\mathbb{E}_1$-ring maps among them and therefore have to distinguish left and right modules. We make the convention that all module spectra occurring in the text will be considered as left module spectra.
\subsection*{Acknowledgements}
We wish to thank Felix Janssen, Achim Krause, Lennart Meier and Carmen Rovi for helpful discussions, and Baptiste Calm\`es, Emanuele Dotto, Yonatan Harpaz, Kristian Moi, Denis Nardin and Wolfgang Steimle for the exciting collaboration that led to the discovery of the genuine $\L$-spectra. We also thank Mirko Stappert for comments on a draft.
Finally, we wish to heartily thank Michael Weiss and the (sadly now late) Andrew Ranicki for a thoroughly enjoyable eMail correspondence whose content makes up the appendix. \\
FH is a member of the Hausdorff Center for Mathematics at the University of Bonn funded by the German Research Foundation (DFG) under GZ 2047/1 project ID 390685813. ML was supported by the CRC/SFB 1085 `Higher Invariants' at University of Regensburg, by the research fellowship DFG 424239956, and by the Danish National Research Foundation through the Center for Symmetry and Deformation (DNRF92) and the Copenhagen Centre for Geometry and Topology. TN was funded by the DFG through grant no.\ EXC 2044 390685587, `Mathematics M\"unster: Dynamics--Geometry--Structure'.
\section{Recollections on Anderson duality}\label{sectiona}
We recall the definition of the Anderson dual of a spectrum and collect some basic properties needed for our main theorem.
\begin{Def}
Let $M$ be an injective abelian group. Consider the functor
\[\begin{tikzcd}[row sep=tiny]
(\mathrm{Sp})^{\mathrm{op}} \ar[r] & \mathrm{Ab}_\mathbb{Z} \\
X \ar[r, mapsto] & \mathrm{Hom}(\pi_{-*}(X),M)
\end{tikzcd}\]
Since $M$ is injective, this is a cohomology theory, and thus represented by a spectrum $\mathrm{I}_M(\S)$.
Define $\mathrm{I}_M(X) = \mathrm{map}(X,\mathrm{I}_M(\S))$.
\end{Def}
For instance, the spectrum $\mathrm{I}_{\mathbb{Q}/\mathbb{Z}}(X)$ is known as the Brown--Comenetz dual of $X$.
Clearly, one obtains an isomorphism $\pi_*(\mathrm{I}_M(X)) \cong \mathrm{Hom}(\pi_{-*}(X),M)$, and homomorphisms $M \to M'$ of injective abelian groups induce maps $\mathrm{I}_M(X) \to \mathrm{I}_{M'}(X)$.
\begin{Def}
Define the Anderson dual $\mathrm{I}(X)$ of a spectrum $X$ to fit into the fibre sequence
\[ \mathrm{I}(X) \longrightarrow \mathrm{I}_\mathbb{Q}(X) \longrightarrow \mathrm{I}_{\mathbb{Q}/\mathbb{Z}}(X) .\]
\end{Def}
One immediately obtains:
\begin{Lemma}\label{anderson dual as mapping spectrum}
For any two spectra $X,Y$ there is a canonical equivalence
\[\mathrm{map}(Y,\mathrm{I}(X)) \simeq \mathrm{I}(X\otimes Y).\]
In particular, $\mathrm{I}(X) \simeq \mathrm{map}(X,\mathrm{I}(\S))$, so that $\mathrm{I}(X)$ canonically acquires the structure of an $R^\mathrm{op}$-module spectrum, if $X$ is an $R$-module spectrum for some ring spectrum $R$.
\end{Lemma}
\begin{proof}
The second statement is immediate from the definitions since $\mathrm{map}(X,-)$ preserves fibre sequences. The first statement then follows by adjunction:
\[ \mathrm{map}(Y,\mathrm{I}(X)) \simeq \mathrm{map}\big(Y,\mathrm{map}(X,\mathrm{I}(\S))\big) \simeq \mathrm{map}(X\otimes Y,\mathrm{I}(\S)) \simeq \mathrm{I}(X\otimes Y).\]
\end{proof}
One can calculate the homotopy groups of the Anderson dual by means of the following exact sequence.
\begin{Lemma}\label{homotopy groups of anderson dual}
Let $X \in \mathrm{Sp}$ be a spectrum. Then the fibre sequence defining $\mathrm{I}(X)$ induces exact sequences
\[ 0 \longrightarrow \mathrm{Ext}\big(\pi_{-n-1}(X),\mathbb{Z}\big) \longrightarrow \pi_{n}(\mathrm{I}(X)) \longrightarrow \mathrm{Hom}\big(\pi_{-n}(X),\mathbb{Z}\big) \longrightarrow 0\]
of abelian groups that split non-canonically. If $X$ is an $R$-module spectrum then this sequence is compatible with the $\pi_*(R)^\mathrm{op}$-module structure on the three terms.
\end{Lemma}
\begin{proof}
From the fibre sequence
\[ \mathrm{I}(X) \longrightarrow \mathrm{I}_\mathbb{Q}(X) \longrightarrow \mathrm{I}_{\mathbb{Q}/\mathbb{Z}}(X) \]
we obtain a long exact sequence of homotopy groups. Since $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$ are injective abelian groups this sequence looks as follows
\begin{multline*}
\mathrm{Hom}(\pi_{-n-1}(X),\mathbb{Q}) \longrightarrow \mathrm{Hom}(\pi_{-n-1}(X),\mathbb{Q}/\mathbb{Z}) \\ \longrightarrow \pi_{n}(\mathrm{I}(X)) \longrightarrow \mathrm{Hom}(\pi_{-n}(X),\mathbb{Q}) \longrightarrow \mathrm{Hom}(\pi_{-n}(X),\mathbb{Q}/\mathbb{Z})
\end{multline*}
Since the sequence
\[ 0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0\]
is an injective resolution of $\mathbb{Z}$, it follows that the cokernel of the leftmost map is given by $\mathrm{Ext}(\pi_{-n-1}(X),\mathbb{Z})$ and the kernel of the rightmost map is given by $\mathrm{Hom}(\pi_{-n}(X),\mathbb{Z})$, as needed.
{To see that the sequence splits simply note that $\mathrm{Ext}(B,C)$ is always a cotorsion group, which by definition means that every extension of any torsionfree group $A$ (here $A = \mathrm{Hom}\big(\pi_{-n}(X),\mathbb{Z}\big)$) by $\mathrm{Ext}(B,C)$ splits; see for example \cite[Chapter 9, Theorem 6.5]{Fuchs}. For the reader's convenience we give the short proof: We generally have
\begin{align*}
\mathrm{Ext}(A, \mathrm{Ext}(B,C)) & = \pi_{-2}\mathrm{map}_{\H\mathbb{Z}}(\H A, \mathrm{map}_{\H\mathbb{Z}}(\H B, \H C)) \\
& = \pi_{-2}\mathrm{map}_{\H\mathbb{Z}}(\H A \otimes_{\H\mathbb{Z}} \H B, \H C) \\
& = \mathrm{Ext}\left( \mathrm{Tor}(A, B), C\right).
\end{align*}
Now if $A$ is torsionfree it is flat, so that the $\mathrm{Tor}$-term vanishes.}
\end{proof}
\begin{Example}
From this, one immediately obtains canonical equivalences $\mathrm{I}(\H F) \simeq \H(\mathrm{Hom}(F,\mathbb Z))$ for every free abelian group $F$ and for every torsion abelian group $T$ we find that $\mathrm{I}(\H T) \simeq \H(\mathrm{Hom}(T,\mathbb Q/\mathbb Z))[-1]$. In particular, $\H\mathbb{Z}$ is Anderson self-dual and $\mathrm{I}(\H\mathbb{Z}/n) \simeq \H\mathbb{Z}/n[-1]$.
\end{Example}
Applying \cref{homotopy groups of anderson dual} to $X \otimes Y$ we find:
\begin{Lemma}\label{coefficient sequence for anderson dual}
Let $X,Y \in \mathrm{Sp}$ be spectra. Then there is a canonical short exact sequence
\[ 0 \longrightarrow \mathrm{Ext}(X_{n-1}(Y),\mathbb{Z}) \longrightarrow \mathrm{I}(X)^n(Y) \longrightarrow \mathrm{Hom}_\mathbb{Z}(X_n(Y),\mathbb{Z}) \longrightarrow 0\]
\end{Lemma}
\begin{proof}
Note only that $X_{n}(Y) = \pi_n(X \otimes Y)$ by definition and
\[\mathrm{I}(X)^n(Y) \cong \pi_{-n}\big(\mathrm{map}(Y,\mathrm{I}(X))\big) \cong \pi_{-n}\mathrm{I}(X \otimes Y)\]
by \cref{anderson dual as mapping spectrum}.
\end{proof}
Applied to $X=\H\mathbb{Z}$ this sequence reproduces the usual universal coefficient sequence for integral cohomology, but it has little content for example for $X=\H\mathbb{Z}/n$.
The following standard lemma will be useful to identify Anderson duals:
\begin{Lemma}\label{free homotopy implies free module}
Let $R$ be a ring spectrum and let $M$ be a module spectrum over $R$. If $\pi_*(M)$ is a free module over $\pi_*(R)$, then $M$ is a sum of shifts of the free rank 1 module $R$.
\end{Lemma}
\begin{proof}
Choose an isomorphism $\pi_*(M) \cong \bigoplus\pi_*(R)[d_i]$. The basis elements on the right correspond to maps $\S^{d_i} \to M$, which by linearity extend to a map \[\bigoplus R[d_i] \longrightarrow M.\]
One readily checks that this map induces an isomorphism on homotopy groups.
\end{proof}
\begin{Example}
\begin{enumerate}
\item Using \cref{free homotopy implies free module}, one immediately concludes that $\mathrm{KU}$ is Anderson self-dual, which was Anderson's original observation in \cite{Anderson}.
\item It follows that $\mathrm{I}(\mathrm{KO}) \simeq \mathrm{KO}[4]$: \cref{homotopy groups of anderson dual} implies that the homotopy groups of $\mathrm{I}(\mathrm{KO})$ agree with those of $\mathrm{KO}[4]$. To see that the $\mathrm{KO}_*$-module structure is free of rank one on the generator in degree $4$, observe that the sequence of \cref{homotopy groups of anderson dual} is one of $\mathrm{KO}_*$-modules. This determines the entire module structure, except the $\eta$-multiplication $\pi_{8k+4}\mathrm{I}(\mathrm{KO}) \rightarrow \pi_{8k+5}\mathrm{I}(\mathrm{KO})$, where the groups are $\mathbb{Z}$ and $\mathbb{Z}/2$, respectively. To see that this $\eta$-multiplication is surjective as desired, one can employ the following argument, that we will use again in the proof of \cref{thmA}. From the long exact sequence associated to the multiplication by $\eta$, the surjectivity of the map in question is implied by $\pi_{8k+5}(\mathrm{I}(\mathrm{KO})/\eta) = 0$. Applying Anderson duality to the fibre sequence defining $\mathrm{KO}/\eta$ we find an equivalence $\mathrm{I}(\mathrm{KO})/\eta\simeq\mathrm{I}(\mathrm{KO}/\eta)[2]$. But by Wood's equivalence $\mathrm{KO}/\eta \simeq \mathrm{KU}$, and the Anderson self-duality of $\mathrm{KU}$, these spectra are even, which gives the claim; for a proof of Wood's theorem see \cite[Theorem 3.2]{AkhilWood}, but note that this was cut from the published version \cite{AkhilnoWood}.
A slightly different argument, based on the computation of the $\mathrm C_2$-equivariant Anderson dual of $\mathrm{KU}$ was recently given by Heard and Stojanoska in \cite{Drews}.
\item Stojanoska also announced a proof that $\mathrm{I}(\mathrm{Tmf}) \simeq \mathrm{Tmf}[21]$ in \cite{Stoja}, having established the corresponding statement after inverting $2$ in \cite{Stojanoska}.
\end{enumerate}
\end{Example}
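The bookkeeping in the $\mathrm{KO}$ example can be verified mechanically. In the following Python sketch (an illustration of \cref{homotopy groups of anderson dual}, not part of the mathematical argument) a finitely generated abelian group is encoded as its free rank together with its torsion orders, so that $\mathrm{Hom}(-,\mathbb{Z})$ retains the rank and $\mathrm{Ext}(-,\mathbb{Z})$ the torsion; the split sequence then determines $\pi_*\mathrm{I}(\mathrm{KO})$, which indeed agrees with $\pi_*\mathrm{KO}[4]$.

```python
def dual_homotopy(pi, n):
    # pi_n of the Anderson dual of X, from pi_*(X): by the split short exact
    # sequence, the free rank comes from Hom(pi_{-n}, Z) and the torsion
    # from Ext(pi_{-n-1}, Z).  Groups are encoded as (rank, torsion orders).
    return (pi(-n)[0], pi(-n - 1)[1])

# pi_*(KO), 8-periodic: Z, Z/2, Z/2, 0, Z, 0, 0, 0
KO = [(1, ()), (0, (2,)), (0, (2,)), (0, ()), (1, ()), (0, ()), (0, ()), (0, ())]
pi_KO = lambda n: KO[n % 8]
```

Of course, this only compares homotopy groups; the module structure over $\mathrm{KO}_*$ is pinned down by the argument in the example above.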
Finally, let us record:
\begin{Thm}\label{andersondualdual}
For every spectrum $X$ all of whose homotopy groups are finitely generated, the natural map
\[X \longrightarrow \mathrm{I}^2(X)\]
adjoint to the evaluation
$X \otimes \mathrm{map}(X,\mathrm{I}(\S)) \longrightarrow \mathrm{I}(\S)$
is an equivalence.
\end{Thm}
The proof is somewhat lengthy and can be found for example in \cite[Theorem 4.2.7]{DagXIV}.
Note that this statement really fails for general $X$, for example in the cases $\H\mathbb{Q}$ or $\H F$, for $F$ a free abelian group of countable rank.
\section{Recollections on L-theory}\label{sectionl}
As many of the details will be largely irrelevant for the present paper, let us only mention that the various flavours of $\L$-groups are defined as cobordism groups of corresponding flavours of Poincar\'e chain complexes; see for example \cite{Ranicki} or \cite{LurieL} for details. Let us, however, mention that, in agreement with \cite{CDHI} and \cite{LurieL}, we will describe such Poincar\'e chain complexes by a hermitian form rather than the hermitian tensor preferred by Ranicki. Consequently, the $i$-th $\L$-group will admit those complexes as cycles whose Poincar\'e duality is $-i$-dimensional, i.e.\ makes degrees $k$ and $-k-i$ correspond.
$\L$-theory spectra are then built by realising certain simplicial objects of Poincar\'e ads, the most highly structured result currently available being \cite[Theorem 18.1]{LauresMCCII}, where Laures and McClure produce a lax symmetric monoidal functor
\[\L^{\mathrm{s}} \colon \mathrm{Ring}^{\mathrm{inv}} \longrightarrow \mathrm{Sp}\]
assigning to a ring with involution its (projective) symmetric $\L$-theory; both source and target are regarded as symmetric monoidal via the respective tensor products. In particular, for a commutative ring $R$, the spectrum $\L^{\mathrm{s}}(R)$ is an $\mathbb{E}_\infty$-ring spectrum; here and throughout we suppress the involution from notation if it is given by the identity of $R$. Together with the fact that the (projective) quadratic and normal $\L$-spectra reside in a fibre sequence
\[\L^{\mathrm{q}}(R) \xrightarrow{\mathrm{sym}} \L^{\mathrm{s}}(R) \longrightarrow \L^{\mathrm{n}}(R)\]
with $\L^{\mathrm{n}}(R)$ an $\L^{\mathrm{s}}(R)$-algebra and $\L^{\mathrm{q}}(R)$ an $\L^{\mathrm{s}}(R)$-module, this will suffice for our results concerning the spectra $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L^{\mathrm{q}}(\mathbb{Z})$ in \cref{sections,sectionq}; note that, in the current literature it is only established that $\L^{\mathrm{n}}(R)$ is an algebra up to homotopy and $\L^{\mathrm{q}}(R)$ a module up to homotopy and this will be enough for the current paper. Highly structured refinements are contained in \cite{CDHI, CDHIV}.
Let us also mention, that by virtue of the displayed fibre sequence, elements in $\L^{\mathrm{n}}_{k+1}(R)$ can be represented by a $k$-dimensional quadratic Poincar\'e chain complex equipped with a symmetric null-cobordism.
The genuine $\L$-theory spectra, however, are not covered by this regime (for example, they do not admit $\L^{\mathrm{s}}(\mathbb{Z})$-module structures in general), and we recall the necessary additions from \cite{CDHI,CDHII,CDHIII} at the beginning of \cref{sectiongs}.\\
Largely to fix notation, let us recall the classical computation of the quadratic and symmetric $\L$-groups of the integers; full proofs appear for example in \cite[Section 4.3.1]{phonebook}, see also \cite[Lectures 15 \& 16]{LurieL}.
\begin{Thm}[Kervaire-Milnor, Mishchenko]\label{L(Z)}
The homotopy ring of $\L^{\mathrm{s}}(\mathbb{Z})$ and the $\L_*^\mathrm{s}(\mathbb{Z})$-module $\L^{\mathrm{q}}_*(\mathbb{Z})$ are given by
\[\L^{\mathrm{s}}_*(\mathbb{Z}) = \mathbb{Z}[x^{\pm 1},e]/(2e,e^2) \quad \text{and} \quad \L_*^{\mathrm{q}}(\mathbb{Z}) = \L^{\mathrm{s}}_*(\mathbb{Z})/(e) \oplus \big(\L^{\mathrm{s}}_*(\mathbb{Z})/(2,e)\big)[-2]\]
with $|x|= 4$ and $|e|=1$.
\end{Thm}
Here, $x \in \L^{\mathrm{s}}_4(\mathbb{Z})$ is represented by $\mathbb{Z}[-2]$ equipped with the standard symmetric form of signature $1$, and $e \in \L^{\mathrm{s}}_1(\mathbb{Z})$ is the class of $\mathbb{Z}/2[-1]$ with its unique non-degenerate symmetric form of degree $1$. The element $(1,0)$ on the right is represented by the $E_8$-lattice and $(0,1)$ is the class of $\mathbb{Z}^2[1]$ with its standard skew-symmetric hyperbolic form, equipped with the quadratic refinement $(a,b) \mapsto a^2+b^2+ab$. The symmetrisation map sends $(1,0)$ to $8x$ and $(0,1)$ to $0$.
In fact, $\L^{\mathrm{q}}_*(\mathbb{Z})$ obtains the structure of a non-unital ring through the symmetrisation map, which is most easily described as
\[\L^{\mathrm{q}}_*(\mathbb{Z}) = 8\mathbb{Z}[t^{\pm1},g]/(16g,64g^2)\]
with $8t \in \L_4^\mathrm{q}(\mathbb{Z})$ and $8g \in \L_{-2}^\mathrm{q}(\mathbb{Z})$ corresponding to $(x,0)$ and $(0,1)$, respectively.
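Spelled out degreewise, \cref{L(Z)} thus gives
\[\L^{\mathrm{s}}_n(\mathbb{Z}) \cong \begin{cases} \mathbb{Z} & n \equiv 0 \pmod 4 \\ \mathbb{Z}/2 & n \equiv 1 \pmod 4 \\ 0 & \text{else} \end{cases} \qquad \text{and} \qquad \L^{\mathrm{q}}_n(\mathbb{Z}) \cong \begin{cases} \mathbb{Z} & n \equiv 0 \pmod 4 \\ \mathbb{Z}/2 & n \equiv 2 \pmod 4 \\ 0 & \text{else,} \end{cases}\]
the infinite cyclic groups being detected by the signature (divided by $8$ in the quadratic case) and the torsion groups by the de Rham and Arf invariants, respectively.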
\begin{Rmk}
Note that the element $e$ is not the image of $\eta$ under the unit $\S \to \L^{\mathrm{s}}(\mathbb{Z})$. In fact, the unit map $\S_* \rightarrow \L_*^\mathrm{s}(\mathbb{Z})$ is trivial outside degree $0$, as follows for example by combining \cref{sigsquare} and \cref{thommaps} below.
\end{Rmk}
As a consequence of the above, one can immediately deduce the additive structure of the normal $\L$-groups of the integers. Including the ring structure, the result reads:
\begin{Prop}\label{ringln}
The homotopy ring of $\L^{\mathrm{n}}(\mathbb{Z})$ is given by
\[\L^{\mathrm{n}}_*(\mathbb{Z}) = \mathbb{Z}/8[x^{\pm 1},e,f]/\Big( 2e=2f=0, e^2=f^2 =0, ef= 4 \Big).\]
\end{Prop}
Here, $e$ and $x$ are the images of the corresponding classes in $\L_*^\mathrm{s}(\mathbb{Z})$, and $f \in \L_{-1}^{\mathrm{n}}(\mathbb{Z})$ is represented by the pair consisting of $\mathbb{Z}^2[1]$ equipped with the quadratic form representing $g$ together with the symmetric Lagrangian given by the diagonal. Under this identification, the boundary map $\L^{\mathrm{n}}_{4i-1}(\mathbb{Z}) \rightarrow \L^{\mathrm{q}}_{4i-2}(\mathbb{Z})$ sends $x^if$ to $8x^ig$ and necessarily vanishes in all other degrees.
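Additively, the proposition amounts to
\[\L^{\mathrm{n}}_n(\mathbb{Z}) \cong \begin{cases} \mathbb{Z}/8 & n \equiv 0 \pmod 4 \\ \mathbb{Z}/2 & n \equiv 1 \pmod 4 \\ 0 & n \equiv 2 \pmod 4 \\ \mathbb{Z}/2 & n \equiv 3 \pmod 4, \end{cases}\]
with $\mathbb{Z}/8$-generators $x^k$ in degree $4k$, and $\mathbb{Z}/2$-generators $x^ke$ and $x^{k+1}f$ in degrees $4k+1$ and $4k+3$, respectively.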
The only part of the statement not formally implied by \cref{L(Z)} is the equality $ef = 4$. As we had a hard time extracting a proof from the literature, but will crucially use this fact later, we supply a proof in the appendix.\\
Now, the $2$-local homotopy type of the fibre sequence
\[\L^{\mathrm{q}}(\mathbb{Z}) \xrightarrow{\mathrm{sym}} \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{n}}(\mathbb{Z})\]
is well-known, the results first appearing in \cite{TW}. As we will have to use them, we briefly give a modern account of the relevant parts. The starting point of the analysis is Ranicki's signature square; see for example \cite[p. 290]{RanickiTSO}.
\begin{Thm}[Ranicki]\label{sigsquare}
The symmetric and normal signatures give a commutative square of homotopy ring spectra
\[\begin{tikzcd}
\mathrm{MSO} \ar[r,"{\sigma^s}"] \ar[d,"{\mathrm{MJ}}"'] & \L^{\mathrm{s}}(\mathbb{Z}) \ar[d] \\
\mathrm{MSG} \ar[r,"{\sigma^n}"] & \L^{\mathrm{n}}(\mathbb{Z}),
\end{tikzcd}\]
where $\mathrm{MSG}$ is the Thom spectrum of the universal stable spherical fibration of rank $0$ over $\mathrm{BS}\mathcal{G} = \mathrm{BSl}_1(\S)$ and $\mathrm{MSO}$ that of its pullback along $\mathrm J \colon \mathrm{BSO} \rightarrow \mathrm{BS}\mathcal{G}$.
\end{Thm}
\begin{Rmk}
As part of their work in \cite{LauresMCCI, LauresMCCII} the top horizontal map was refined to an $\mathbb{E}_\infty$-map by Laures and McClure. Refining the whole square to one of $\mathbb{E}_\infty$-ring spectra is work in progress
by the first two authors with Laures;
we do not, however, have to make use of these extensions here.
\end{Rmk}
As a second ingredient we make use of the following result, the $2$-primary case of which originally appeared in \cite{HZThom}. A full proof appears for example in \cite[Theorem 5.7]{OmarTobi}.
\begin{Thm}[Hopkins-Mahowald]
For $p$ a prime, the Thom spectrum of the $\mathbb{E}_2$-map
\[\eta_p \colon \tau_{\geq 2}(\Omega^2 S^3) \longrightarrow \mathrm{BSl}_1(\S_{(p)})\]
induced by the generator $1-p \in \mathbb{Z}_{(p)}^\times = \pi_1\mathrm{BGl}_1(\S_{(p)})$ is given by $\H\mathbb{Z}_{(p)}$.
\end{Thm}
Here, $\mathrm{BGl}_1(\S_{(p)})$ is the classifying space for $p$-local stable spherical fibrations, and $\mathrm{BSl}_1(\S_{(p)})$ is its universal cover. Note also that the $\mathbb{E}_2$-structure on $\H\mathbb{Z}$ resulting from the theorem is the restriction of its canonical $\mathbb{E}_\infty$-structure (since for any $k$ the spectrum $\H R$ admits a unique $\mathbb{E}_k$-ring structure refining the ring structure on $R$).
For $p=2$ the map occurring in the theorem evidently factors as
\[\tau_{\geq 2}(\Omega^2 S^3) \stackrel{\eta}{\longrightarrow} \mathrm{BSO} \stackrel{J}{\longrightarrow} \mathrm{BS}\mathcal{G} = \mathrm{BSl}_1(\S) \longrightarrow \mathrm{BSl}_1(\S_{(2)}),\]
but the Thom spectrum of this integral refinement is neither $\H\mathbb{Z}$ nor $\H\mathbb{Z}_{(2)}$ before localisation at $2$. Taking on the other hand the product over all primes we arrive at an $\mathbb{E}_2$-map
\[\eta_\mathbb P \colon \tau_{\geq 2}(\Omega^2 S^3) \longrightarrow \mathrm{BS}\mathcal{G},\]
since its target is the product over all the $\mathrm{BSl}_1(\S_{(p)})$ by the finiteness of $\pi_*\S$ in positive degrees. The Thom spectrum of $\eta_\mathbb P$ is then necessarily $\H\mathbb{Z}$. We obtain:
\begin{Cor}\label{thommaps}
Applying Thom spectra to $\eta_2$ and $\eta_\mathbb P$ results in a commutative square
\[\begin{tikzcd}
\H\mathbb{Z} \ar[r,"{\mathrm M\eta_2}"] \ar[d,"{{\mathrm M\eta_\mathbb P}}"] & \mathrm{MSO}_{(2)} \ar[d] \\
\mathrm{MSG} \ar[r] & \mathrm{MSG}_{(2)}
\end{tikzcd}\]
of $\mathbb{E}_2$-ring spectra.
\end{Cor}
Putting these together we obtain:
\begin{Cor}\label{waystosplit}
Any $2$-local (homotopy) module spectrum $E$ over $\tau_{\geq 0}\L^{\mathrm{s}}(\mathbb{Z})$ acquires a preferred (homotopy) $\H\mathbb{Z}$-module structure through the composite
\[\H\mathbb{Z} \xrightarrow{\mathrm M\eta_2} \mathrm{MSO}_{(2)} \xrightarrow{\sigma^s}\tau_{\geq 0}\L^{\mathrm{s}}(\mathbb{Z})_{(2)}. \]
In particular, there is a $\prod_{i \in \mathbb{Z}}\mathrm{Ext}(\pi_i(E),\pi_{i+1}(E))$-torsor of $\H\mathbb{Z}$-linear equivalences
\[E \simeq \bigoplus_{i \in \mathbb{Z}} \H\pi_i(E)[i]\]
inducing the identity map on homotopy groups.
\end{Cor}
\begin{proof}
Recall that an $\H\mathbb{Z}$-module spectrum always admits a splitting as displayed, by first choosing maps $\mathrm M(\pi_i(E))[i] \rightarrow E$ from the Moore spectrum inducing the identity on $\pi_i$, and then forming
\[\H\pi_i(E)[i] \simeq \H\mathbb{Z} \otimes \mathrm M(\pi_i(E))[i] \longrightarrow E\]
using the $\H\mathbb{Z}$-module structure. The indeterminacy of such an equivalence is clearly the direct product over all $i$ of those components in $\pi_0\mathrm{map}_{\H\mathbb{Z}}(\H\pi_i(E)[i], E)$ inducing the identity in homotopy. Since
\[\pi_0\mathrm{map}_{\H\mathbb{Z}}(\H\pi_i(E)[i], E) \cong \mathrm{Hom}(\pi_iE,\pi_iE) \oplus \mathrm{Ext}(\pi_i(E),\pi_{i+1}(E))\]
the claim follows.
\end{proof}
Applying \cref{waystosplit} to the $\L^{\mathrm{s}}(\mathbb{Z})$-module $\L^{\mathrm{n}}(\mathbb{Z})$, one obtains:
\begin{Cor}[Taylor-Williams]\label{splitln}
The equivalences
\[\L^{\mathrm{n}}(\mathbb{Z}) \simeq \bigoplus_{k \in \mathbb Z} \Big[\H\mathbb{Z}/8[4k] \oplus \H\mathbb{Z}/2[4k+1] \oplus \H\mathbb{Z}/2[4k+3]\Big]\]
of $\H\mathbb{Z}$-modules compatible with the action of the periodicity operator $x \in \L^{\mathrm{s}}_4(\mathbb{Z})$ form a $(\mathbb{Z}/2)^2$-torsor.
\end{Cor}
\begin{proof}
One easily checks that $x$ acts on the indeterminacy
\[\prod_{k \in \mathbb{Z}}\Big[\mathrm{Ext}(\L^{\mathrm{n}}_{4k}(\mathbb{Z}),\L^{\mathrm{n}}_{4k+1}(\mathbb{Z})) \oplus \mathrm{Ext}(\L^{\mathrm{n}}_{4k+3}(\mathbb{Z}),\L^{\mathrm{n}}_{4k+4}(\mathbb{Z}))\Big] \cong \prod_{k \in \mathbb{Z}} (\mathbb{Z}/2)^2\]
by shifting, which immediately gives the result.
\end{proof}
\begin{Rmk}\label{E3}
\begin{enumerate}
\item Hopkins observed that the map $\eta_2$, and therefore also $M\eta_2$, are in fact $\mathbb{E}_3$-maps via the composite
\[\mathrm{B} S^3 \simeq \mathrm{B} \mathrm{Sp}(1) \longrightarrow \mathrm{B}\mathrm{Sp} \xrightarrow\beta \mathrm{B}^5\mathrm O \xrightarrow{\eta} \mathrm{B}^4 \mathrm O,\]
where $\beta$ denotes the Bott map.
\item The Thom spectrum of the map $\Omega^2 S^3 \longrightarrow \mathrm{BGl}_1(\S_{(p)})$ is $\H\mathbb{F}_p$ by another computation of Mahowald in \cite{HZThom}. In particular, there is also an $\mathbb{E}_3$-map $\mathrm M \eta_2 \colon \H\mathbb{F}_2 \to \mathrm{MO}$
fitting into the evident diagrams (involving $\mathrm M\mathcal{G}$) with the maps from \cref{thommaps}.
\item Note that not even an $\mathbb{E}_1$-map $\H\mathbb{Z} \rightarrow \L^{\mathrm{s}}(\mathbb{Z})$ can exist before localisation at $2$: By \cite[Theorem 5.2]{KandL} there is a canonical equivalence $\L(\mathbb{R})\mathopen{}\left[\tfrac 1 2\right]\mathclose{} \simeq \mathrm{KO}\mathopen{}\left[\tfrac 1 2\right]\mathclose{}$, and $\Omega^{\infty}\mathrm{KO}\mathopen{}\left[\tfrac 1 2\right]\mathclose{}$ is not a generalised Eilenberg-Mac Lane space, for example on account of its $\mathbb F_p$-homology.
\item Using the ring structure on $\L^{\mathrm{n}}_*(\mathbb{Z})$, Taylor and Williams \cite{TW} in fact also determine the homotopy class of the multiplication map of $\L^{\mathrm{n}}(\mathbb{Z})$ under the splitting in \cref{splitln}, but we shall not need that result.
\end{enumerate}
\end{Rmk}
\section{The homotopy type of $\L^{\mathrm{s}}(\mathbb Z)$}\label{sections}
To give the strongest forms of our results concerning the $\L$-spectra of the integers, we need to analyse their relation to the ring spectrum $\L(\mathbb{R})$. From \cite[Proposition 22.16 (i)]{Ranicki} we first find (see also \cite[Lecture 13]{LurieL}):
\begin{Prop}\label{pilr}
The homotopy ring of $\L(\mathbb{R})$ is given by
\[\L_*(\mathbb{R}) = \mathbb{Z}[x^{\pm1}], \]
where $x \in \L_4(\mathbb{R})$ denotes the image of the class in $\L_4^\mathrm{s}(\mathbb{Z})$ of the same name.
\end{Prop}
Recall from \cref{thommaps} that $\L(\mathbb{R})_{(2)}$ receives a preferred $\mathbb{E}_2$-map from $\H\mathbb{Z}$, and is therefore, in particular, an $\mathbb{E}_1$-$\H\mathbb{Z}$-algebra.
\begin{Cor}\label{free guy}\label{section of L-theories}
The localisation $\L(\mathbb{R})_{(2)}$ is a free $\mathbb{E}_1$-$\H\mathbb{Z}_{(2)}$-algebra on an invertible generator of degree $4$, namely $x \in \L_4(\mathbb{R})_{(2)}$.
\end{Cor}
Recall that the free $\mathbb{E}_1$-$\H\mathbb{Z}_{(2)}$-algebra on an invertible generator $t$ is given as the localisation of
\[\H\mathbb{Z}[t] = \bigoplus_{n \in \mathbb N} \H\mathbb{Z}[4]^{\otimes_{\H\mathbb{Z}} n}\]
at $2$ and $t$. In particular, its homotopy ring is given by the ring of Laurent polynomials over $\mathbb{Z}_{(2)}$.
\begin{proof}
By definition of the source there exists a canonical map $\H\mathbb{Z}[t] \to \L(\mathbb{R})_{(2)}$ which sends $t$ to $x$. Since $x$ is invertible, this map factors through the localisation $\H\mathbb{Z}[t^{\pm 1}]$ and the resulting map is an equivalence by \cref{pilr}.
\end{proof}
\begin{Cor}\label{lzlrsplit}
There is a unique $\mathbb{E}_1$-section of the canonical map $\L^{\mathrm{s}}(\mathbb{Z}) \to \L(\mathbb{R})$ that becomes a section of $\H\mathbb{Z}$-algebras after localisation at $2$.
\end{Cor}
In particular, every $\L^{\mathrm{s}}(\mathbb{Z})$-module acquires a canonical $\L(\mathbb{R})$-module structure.
\begin{proof}
We denote by $\Gamma(A \to B)$ the space of sections of a map $A \to B$ of $\mathbb{E}_1$-algebras. Then from the usual fracture square, or its formulation as a pullback of $\infty$-categories, one gets a cartesian square
\[\begin{tikzcd}
\Gamma\left(\L^{\mathrm{s}}(\mathbb{Z}) \to \L(\mathbb{R})\right) \ar[r]\ar[d] & \Gamma(\L^{\mathrm{s}}(\mathbb{Z})_{(2)} \to \L(\mathbb{R})_{(2)}) \ar[d] \\
\Gamma\left(\L^{\mathrm{s}}(\mathbb{Z})\!\left[\tfrac 1 2\right] \to \L(\mathbb{R})\!\left[\tfrac 1 2\right]\right) \ar[r] & \Gamma\left(\L^{\mathrm{s}}(\mathbb{Z})_\mathbb{Q} \to \L(\mathbb{R})_\mathbb{Q}\right) \ .
\end{tikzcd}\]
But the lower maps $\L^{\mathrm{s}}(\mathbb{Z})\!\left[\tfrac 1 2\right] \to \L(\mathbb{R})\!\left[\tfrac 1 2\right]$ and $\L^{\mathrm{s}}(\mathbb{Z})_\mathbb{Q} \to \L(\mathbb{R})_\mathbb{Q}$ are equivalences so that the associated spaces of sections are contractible. We conclude that
the upper horizontal map is an equivalence. Thus a section is completely determined by its 2-local behaviour. If we require this 2-local section to be an $\H\mathbb{Z}$-algebra map then it follows from the freeness of $\L(\mathbb{R})_{(2)}$ as an $\H\mathbb{Z}$-algebra (see \cref{free guy} above) that the section is unique. In fact the space of $\H\mathbb{Z}$-algebra sections of $\L^{\mathrm{s}}(\mathbb{Z})_{(2)} \to \L(\mathbb{R})_{(2)}$ is equivalent to the fibre of
\[\Omega^{\infty+4}\L^{\mathrm{s}}(\mathbb{Z})_{(2)} \longrightarrow \Omega^{\infty+4}\L(\mathbb{R})_{(2)},\]
over $x$, which is connected by \cref{pilr}.
\end{proof}
For later use we also record:
\begin{Prop}\label{self dual}
The spectrum $\L(\mathbb{R})$ is canonically Anderson self-dual as a module over itself. In fact, the same is true for $\L(\mathbb{C},c)$, where $c$ denotes complex conjugation on $\mathbb{C}$, and $\mathrm{I}(\L(\mathbb{C})) \simeq \L(\mathbb{C})[-1]$.
\end{Prop}
\begin{proof}
The homotopy rings in the latter two cases are $\L_*(\mathbb{C}) \cong \mathbb F_2[x^{\pm1}]$ for $x$ the image of the periodicity element in $\L^{\mathrm{s}}_4(\mathbb Z)$ and $\L_*(\mathbb{C},c) \cong \mathbb Z[s^{\pm 1}]$ where $s^2 = x$; see \cite[Proposition 22.16]{Ranicki}. Using \cref{homotopy groups of anderson dual} one calculates that $\pi_*\mathrm{I}(R)$ is a free $\pi_*R$-module of rank 1 in each case. \cref{free homotopy implies free module} then finishes the proof.
\end{proof}
\begin{Rmk}
Recall from \cite{KandL} that the spectra $\L(\mathbb{C},c)$ and $\mathrm{KU}$ are closely related, but only equivalent after inverting $2$, despite having isomorphic homotopy rings.
\end{Rmk}
We now start with the proof of \cref{thmA}. As preparation we set:
\begin{Def}
Denote by $\mathrm{dR}$ the $\L^{\mathrm{s}}(\mathbb{Z})$-module spectrum fitting into the fibre sequence
\[\mathrm{dR} \longrightarrow \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \L(\mathbb{R}).\]
\end{Def}
We shall refer to $\mathrm{dR}$ as the de Rham spectrum, since its homotopy groups are detected by the de Rham invariant $\L_{4k+1}^{\mathrm{s}}(\mathbb{Z}) \rightarrow \mathbb{Z}/2$.
\begin{Cor}\label{drtype}
The spectrum $\mathrm{dR}$ is a $2$-local $\L^{\mathrm{s}}(\mathbb{Z})$-module, whose underlying $\L(\mathbb{R})$-module admits a unique equivalence to $(\L(\mathbb{R})/2)[1]$ and whose underlying $\H\mathbb{Z}$-module admits a splitting
\[\mathrm{dR} \simeq \bigoplus_{k \in \mathbb{Z}} \H\mathbb{Z}/2[4k+1]\]
unique up to homotopy.
\end{Cor}
Note that we do not claim here that $\mathrm{dR} \simeq (\L(\mathbb{R})/2)[1]$ as $\L^{\mathrm{s}}(\mathbb{Z})$-modules, though this is clearly required for a positive answer to our question in \cref{question}.
\begin{proof}
It is immediate from the computation of the homotopy groups of $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L(\mathbb{R})$ that multiplication with $e \in \L_1^s(\mathbb{Z})$ gives an isomorphism
\[\big(\L_*^\mathrm{s}(\mathbb{Z})/(2,e)\big)[1] \longrightarrow \pi_*\mathrm{dR}\]
of $\L^{\mathrm{s}}_*(\mathbb{Z})$-modules. It then follows from the exact sequence induced by multiplication with $2$ that there is a unique homotopy class $\S^1/2 \rightarrow \mathrm{dR}$ representing $e$. Extending $\L(\mathbb{R})$-linearly using \cref{lzlrsplit}, one obtains a canonical $\L(\mathbb{R})$-module map
\[(\L(\mathbb{R})/2)[1] \longrightarrow \mathrm{dR}.\]
This is readily checked to be an equivalence. The statement about the underlying $\H\mathbb{Z}$-module is immediate from \cref{waystosplit}.
\end{proof}
We obtain:
\begin{Cor}\label{lstype}
There is a unique equivalence
\[\L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]\]
of $\L(\mathbb{R})$-modules, whose composition with the projection to the first summand agrees with the canonical map. In particular, there is a preferred equivalence
\[\L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus \bigoplus_{k \in \mathbb{Z}} \H\mathbb{Z}/2[4k+1]\]
of underlying spectra.
\end{Cor}
\begin{proof}
For the existence statement note simply that the fibre sequence
\[\mathrm{dR} \longrightarrow \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \L(\mathbb{R})\]
is $\L(\mathbb{R})$-linearly split, since its boundary map $\L(\mathbb{R}) \rightarrow \mathrm{dR}[1]$ vanishes as an $\L(\mathbb{R})$-linear map because $\pi_{-1}\mathrm{dR} = 0$. Then the previous corollary gives the claim about the underlying spectrum.
For the uniqueness assertion we compute
\begin{align*}
\pi_0\mathrm{end}_{\L(\mathbb{R})}(\L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]) =\ &\pi_0\mathrm{end}_{\L(\mathbb{R})}(\L(\mathbb{R})) \oplus \pi_0\mathrm{map}_{\L(\mathbb{R})}(\L(\mathbb{R}), (\L(\mathbb{R})/2)[1]) \ \oplus \ \\ & \pi_0\mathrm{map}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[1], \L(\mathbb{R})) \oplus \pi_0\mathrm{end}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[1]) \\
=\ & \mathbb{Z} \oplus 0 \oplus 0 \oplus \mathbb{Z}/2
\end{align*}
using the exact sequences associated to multiplication by $2$. The two non-trivial summands are detected by the effect on $\pi_0$ and $\pi_1$, respectively, and thus determined by compatibility with the map $\L^{\mathrm{s}}(\mathbb{Z}) \rightarrow \L(\mathbb{R})$ and by being an equivalence.
\end{proof}
\begin{Rmk}\label{generalsplits}
In fact, \cite[Theorem 4.2.3]{Patchkoria} gives a triangulated equivalence \[\mathrm{ho}(\mathrm{Mod}(\L(\mathbb{R}))) \simeq \mathrm{ho}(\mathcal D(\mathbb{Z}[t^{\pm1}])),\] under which the homotopy groups of a module over $\L(\mathbb{R})$ are given by the homology groups of the corresponding chain complex over $\mathcal D(\mathbb{Z}[t^{\pm1}])$. In particular, any $\L(\mathbb{R})$-module $M$ splits (non-canonically in general) into
\[\bigoplus_{i=0}^3\L(\mathbb{R}) \otimes \mathrm M(\pi_i(M))[i],\]
where $\mathrm M(A)$ is the Moore spectrum of the abelian group $A$, though this statement is also easy to see by hand.
Note, however, that the $\infty$-categories $\mathrm{Mod}(\L(\mathbb{R}))$ and $\mathcal D(\mathbb{Z}[t^{\pm1}])$ themselves are not equivalent, as the right-hand side comes equipped with an $\H\mathbb{Z}$-linear structure, whereas the left-hand side cannot admit one: $\L(\mathbb{R})$ itself occurs as a morphism spectrum and does not admit an $\H\mathbb{Z}$-module structure as observed in the comments preceding \cref{lzlrsplit}.
\end{Rmk}
\section{The homotopy type of $\L^{\mathrm{q}}(\mathbb Z)$}\label{sectionq}
We start with the first part of \cref{thmA}:
\begin{Thm}\label{adlslq}
As modules over $\L^{\mathrm{s}}(\mathbb{Z})$, there is a preferred homotopy class of equivalences
\[ \mathrm{I}(\L^{\mathrm{s}}(\mathbb{Z})) \simeq \L^{\mathrm{q}}(\mathbb{Z}).\]
\end{Thm}
\begin{proof}
Since the homotopy groups of $\L^{\mathrm{q}}(\mathbb{Z})$ and $\L^{\mathrm{s}}(\mathbb{Z})$ are finitely generated, this is equivalent to showing that $\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z})$ as $\L^{\mathrm{s}}(\mathbb{Z})$-modules by \cref{andersondualdual}. By \cref{free homotopy implies free module} it suffices to prove that $\pi_*(\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})))$ is a free module of rank 1 over $\pi_*(\L^{\mathrm{s}}(\mathbb{Z}))$ with a preferred generator.
Now, one readily calculates that the homotopy groups of $\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))$ are isomorphic to those of $\L^{\mathrm{s}}(\mathbb{Z})$ using \cref{homotopy groups of anderson dual} and $[E_8] \in \L_0^\mathrm{q}(\mathbb{Z})$ determines a generator of $\pi_0(\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})))$.
It remains to determine the module action of $\pi_*(\L^{\mathrm{s}}(\mathbb{Z}))$ on $\pi_*\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))$, which is somewhat tricky (just as in the case of $\mathrm{KO}$), as the latter has contributions from both sides of the exact sequence of \cref{homotopy groups of anderson dual}.
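For the reader's convenience, recall that this sequence takes the form
\[0 \longrightarrow \mathrm{Ext}(\pi_{-n-1}X,\mathbb{Z}) \longrightarrow \pi_n\mathrm{I}(X) \longrightarrow \mathrm{Hom}(\pi_{-n}X,\mathbb{Z}) \longrightarrow 0\]
for any spectrum $X$ and any $n \in \mathbb{Z}$.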
Since $x$ is invertible, its action is determined by the structure of the homotopy groups (up to an irrelevant sign), so the claim holds if and only if the map
\begin{equation}\label{times e}
\pi_{4k}\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \stackrel{\cdot e}{\longrightarrow} \pi_{4k+1}\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \tag{$\ast$}
\end{equation}
is surjective. For this we consider the cofibre sequence
\[ \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))[1] \stackrel{\cdot e}{\longrightarrow} \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \longrightarrow \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))/e \]
and see that the surjectivity of \eqref{times e} is equivalent to the statement that
\[ \pi_{4k+1}\big(\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))/e\big) = 0.\]
Applying Anderson duality to the cofibre sequence
\[ \L^{\mathrm{q}}(\mathbb{Z})[1] \stackrel{\cdot e}{\longrightarrow} \L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{q}}(\mathbb{Z})/e \]
one finds that
\[ \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))/e \simeq \mathrm{I}\big(\L^{\mathrm{q}}(\mathbb{Z})/e\big)[2]\]
and thus that
\[ \pi_{4k+1}\big(\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z}))/e\big) \cong \pi_{4k-1}\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})/e), \]
which can be computed by \cref{homotopy groups of anderson dual} as soon as we know the homotopy groups of $\L^{\mathrm{q}}(\mathbb{Z})/e$. Spelling this out, we find that the map \eqref{times e} is surjective if and only if
\begin{enumerate}
\item[(i)] $\pi_{4k+1}(\L^{\mathrm{q}}(\mathbb{Z})/e)$ is torsion, and
\item[(ii)] $\pi_{4k}(\L^{\mathrm{q}}(\mathbb{Z})/e)$ is torsionfree.
\end{enumerate}
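To wit, \cref{homotopy groups of anderson dual} provides short exact sequences
\[0 \longrightarrow \mathrm{Ext}\big(\pi_{4k}(\L^{\mathrm{q}}(\mathbb{Z})/e),\mathbb{Z}\big) \longrightarrow \pi_{-4k-1}\mathrm{I}\big(\L^{\mathrm{q}}(\mathbb{Z})/e\big) \longrightarrow \mathrm{Hom}\big(\pi_{4k+1}(\L^{\mathrm{q}}(\mathbb{Z})/e),\mathbb{Z}\big) \longrightarrow 0,\]
whose outer terms vanish for all $k \in \mathbb{Z}$ precisely under conditions (i) and (ii), since all homotopy groups in sight are finitely generated.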
We can calculate the homotopy groups of $\L^{\mathrm{q}}(\mathbb{Z})/e$ by means of the following two cofibre sequences
\[\begin{tikzcd}[row sep=tiny]
\L^{\mathrm{q}}(\mathbb{Z})[1] \ar[r,"{\cdot e}"] & \L^{\mathrm{q}}(\mathbb{Z}) \ar[r] & \L^{\mathrm{q}}(\mathbb{Z})/e \\
\L^{\mathrm{q}}(\mathbb{Z})/e \ar[r] & \L^{\mathrm{s}}(\mathbb{Z})/e \ar[r] & \L^{\mathrm{n}}(\mathbb{Z})/e.
\end{tikzcd}\]
Since in the homotopy groups of $\L^{\mathrm{q}}(\mathbb{Z})$ the multiplication by $e$ is trivial (simply for degree reasons) we obtain short exact sequences
\[0 \longrightarrow \pi_*(\L^{\mathrm{q}}(\mathbb{Z})) \longrightarrow \pi_*(\L^{\mathrm{q}}(\mathbb{Z})/e) \longrightarrow \pi_{*-2}(\L^{\mathrm{q}}(\mathbb{Z})) \longrightarrow 0, \]
which give
\[\pi_i(\L^{\mathrm{q}}(\mathbb{Z})/e) \cong \begin{cases} M & \text{ if } i \equiv 0 \pmod 4 \\
0 & \text{ if } i \equiv 1 \pmod 4 \\
\mathbb{Z}\oplus \mathbb{Z}/2 & \text{ if } i \equiv 2 \pmod 4 \\
0 & \text{ if } i \equiv 3 \pmod 4 \end{cases}\]
where $M$ is an extension of $\mathbb{Z}/2$ by $\mathbb{Z}$. In particular, we see that (i) in the above list holds true. To see (ii), we will calculate the extension $M$ by means of the second fibre sequence. A quick calculation in symmetric $\L$-theory reveals that the natural map $\pi_{4i}(\L^{\mathrm{s}}(\mathbb{Z})) \to \pi_{4i}(\L^{\mathrm{s}}(\mathbb{Z})/e)$ is an isomorphism between groups isomorphic to $\mathbb{Z}$, and that $\pi_{4i+1}(\L^{\mathrm{s}}(\mathbb{Z})/e) = 0$. We then consider the exact sequence
\[ 0 \longrightarrow \pi_{4i+1}(\L^{\mathrm{n}}(\mathbb{Z})/e) \longrightarrow \pi_{4i}(\L^{\mathrm{q}}(\mathbb{Z})/e) \longrightarrow \pi_{4i}(\L^{\mathrm{s}}(\mathbb{Z})/e) \longrightarrow \pi_{4i}(\L^{\mathrm{n}}(\mathbb{Z})/e) \longrightarrow 0.\]
Hence $M$ is torsionfree if and only if the torsion group $\pi_{4i+1}(\L^{\mathrm{n}}(\mathbb{Z})/e)$ is trivial.
To finally calculate this group we consider the exact sequence
\[\pi_{4k}(\L^{\mathrm{n}}(\mathbb{Z})) \stackrel{\cdot e}{\longrightarrow} \pi_{4k+1}(\L^{\mathrm{n}}(\mathbb{Z})) \longrightarrow \pi_{4k+1}(\L^{\mathrm{n}}(\mathbb{Z})/e) \longrightarrow \pi_{4k-1}(\L^{\mathrm{n}}(\mathbb{Z})) \stackrel{\cdot e}{\longrightarrow} \pi_{4k}(\L^{\mathrm{n}}(\mathbb{Z})). \]
The leftmost arrow is surjective since $xe \neq 0$ in $\pi_*(\L^{\mathrm{n}}(\mathbb{Z}))$. We thus find that
\[ \pi_{4k+1}(\L^{\mathrm{n}}(\mathbb{Z})/e) = \ker( \L^{\mathrm{n}}_{4k-1}(\mathbb{Z}) \stackrel{\cdot e}{\to} \L^{\mathrm{n}}_{4k}(\mathbb{Z}) ). \]
This kernel is trivial: by \cref{ringln} the generator $x^kf$ of $\L^{\mathrm{n}}_{4k-1}(\mathbb{Z})$ satisfies $e \cdot x^kf = 4x^k \neq 0$ in $\L^{\mathrm{n}}_{4k}(\mathbb{Z})$.
\end{proof}
Before finishing the proof of \cref{thmA} by splitting $\L^{\mathrm{q}}(\mathbb{Z})$, we collect a few results that allow us to describe the behaviour of the entire fibre sequence
\[\L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{n}}(\mathbb{Z})\]
under Anderson duality.
\begin{Cor}\label{bla cor}
For any $\L^{\mathrm{s}}(\mathbb{Z})$-module $X$, there is a canonical equivalence
\[ \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(X,\L^{\mathrm{q}}(\mathbb{Z})) \simeq \mathrm{I}(X)\]
as $\L^{\mathrm{s}}(\mathbb{Z})$-modules.
\end{Cor}
\begin{proof}
We calculate
\begin{align*}
\mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(X,\L^{\mathrm{q}}(\mathbb{Z})) & \simeq \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(X,\mathrm{map}(\L^{\mathrm{s}}(\mathbb{Z}),\mathrm{I}(\S))) \\
& \simeq \mathrm{map}(\L^{\mathrm{s}}(\mathbb{Z})\otimes_{\L^{\mathrm{s}}(\mathbb{Z})} X,\mathrm{I}(\S)) \\
& \simeq \mathrm{map}(X,\mathrm{I}(\S)) \\
& \simeq \mathrm{I}(X).
\end{align*}
using \cref{anderson dual as mapping spectrum} and \cref{adlslq}.
\end{proof}
\begin{Prop}\label{lnand}
There is a preferred equivalence $\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z})) \simeq \L^{\mathrm{n}}(\mathbb{Z})[-1]$ of $\L^{\mathrm{n}}(\mathbb{Z})$-module spectra.
\end{Prop}
\begin{proof}
We will again show that $\pi_*\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))$ is a free $\L^{\mathrm{n}}_*(\mathbb{Z})$-module on a generator in degree $-1$ and apply \cref{free homotopy implies free module}.
The additive structure is immediate. Also, the action of $x \in \L^{\mathrm{n}}_4(\mathbb{Z})$ is through isomorphisms as needed. We will show that $\pi_{4k}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))/e) = 0$. This implies that multiplication by $e$ induces a surjection $\pi_{4k-1}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))) \to \pi_{4k}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z})))$ and an injection $\pi_{4k-2}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))) \to \pi_{4k-1}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z})))$. Using $ef=4$ this is easily checked to force the multiplication by all other elements to take the desired form.
As before, we have $\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))/e \simeq \mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z})/e)[2]$. \cref{homotopy groups of anderson dual}, together with the fact that all homotopy groups involved are torsion, gives
\[ \pi_{4k}(\mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z}))/e) \cong \mathrm{Ext}(\pi_{-4k+1}(\L^{\mathrm{n}}(\mathbb{Z})/e),\mathbb{Z}).\]
However, it follows from the description of the $e$-multiplication on $\L^{\mathrm{n}}(\mathbb{Z})$ in \cref{ringln} that $\pi_{-4k+1}(\L^{\mathrm{n}}(\mathbb{Z})/e) = 0$ as needed.
\end{proof}
\begin{Cor}\label{endos of Lq}
There are canonical equivalences
\[\mathrm{end}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z}) \quad \text{and} \quad \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{n}}(\mathbb{Z}),\L^{\mathrm{q}}(\mathbb{Z})[1]) \simeq \L^{\mathrm{n}}(\mathbb{Z}),\]
the left hand one of $\mathbb{E}_1$-$\L^{\mathrm{s}}(\mathbb{Z})$-algebras, the right hand one of $\L^{\mathrm{s}}(\mathbb{Z})$-modules. Under the second equivalence the element $1 \in \pi_0\L^{\mathrm{n}}(\mathbb{Z})$ corresponds to the boundary map of the symmetrisation fibre sequence.
\end{Cor}
Given the equivalence $\mathrm{end}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z}) $ in this Corollary, one may wonder whether $\L^{\mathrm{q}}(\mathbb{Z})$ is an invertible $\L^{\mathrm{s}}(\mathbb{Z})$-module. All that remains is to decide whether it is a compact $\L^{\mathrm{s}}(\mathbb{Z})$-module, but alas, we were unable to do so; it is clearly related to our question in \cref{question}.
\begin{proof}
The displayed equivalences are direct consequences of \cref{bla cor}. To see that the left hand one preserves the algebra structures, simply note that the module structure of $\L^{\mathrm{q}}(\mathbb{Z})$ provides an $\mathbb{E}_1$-map $\L^{\mathrm{s}}(\mathbb{Z}) \rightarrow \mathrm{end}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}))$, whose underlying module map is necessarily inverse to the equivalence of \cref{bla cor} (by evaluating on $\pi_0$).
To obtain the final statement let $\varphi \colon \L^{\mathrm{n}}(\mathbb{Z}) \to \L^{\mathrm{q}}(\mathbb{Z})[1]$ correspond to $1$ under the equivalence and let $F$ be its fibre. Let $k \in \mathbb{Z}$ be such that $k\varphi = \partial$, where $\partial$ is the boundary map $\L^{\mathrm{n}}(\mathbb{Z}) \to \L^{\mathrm{q}}(\mathbb{Z})[1]$. Then we find a commutative diagram
\[ \begin{tikzcd}
\L^{\mathrm{q}}(\mathbb{Z}) \ar[r] \ar[d,"\cdot k"] & F \ar[r] \ar[d] & \L^{\mathrm{n}}(\mathbb{Z}) \ar[r,"\varphi"] \ar[d,equal] & \L^{\mathrm{q}}(\mathbb{Z})[1] \ar[d,"\cdot k"] \\
\L^{\mathrm{q}}(\mathbb{Z}) \ar[r,"\cdot 8"] & \L^{\mathrm{s}}(\mathbb{Z}) \ar[r] & \L^{\mathrm{n}}(\mathbb{Z}) \ar[r,"\partial"] & \L^{\mathrm{q}}(\mathbb{Z})[1]
\end{tikzcd}\]
Looking at the sequence of homotopy groups in degrees divisible by $4$, we find from the snake lemma that $\pi_{4i}(F) \to \pi_{4i}(\L^{\mathrm{s}}(\mathbb{Z}))$ is injective and has cokernel $\mathbb{Z}/k$. In other words, the map identifies with the multiplication by $k$ on $\mathbb{Z}$, and it follows that $k$ must be congruent to $1$ modulo $8$, which gives the second claim.
\end{proof}
\begin{Prop}\label{xyz}
The equivalences of \cref{adlslq} and \cref{lnand} extend to an equivalence of the $\L^{\mathrm{s}}(\mathbb{Z})$-linear fibre sequences
\[\L^{\mathrm{q}}(\mathbb{Z}) \xrightarrow{\mathrm{sym}} \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{n}}(\mathbb{Z})\]
and
\[\mathrm{I}(\L^{\mathrm{s}}(\mathbb{Z})) \xrightarrow{\mathrm{I}(\mathrm{sym})} \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \xrightarrow{\mathrm{I}(\partial)} \mathrm{I}(\L^{\mathrm{n}}(\mathbb{Z})[-1]).\]
In fact, the space of such extensions forms a torsor for $\Omega^{\infty+1}(\L^{\mathrm{s}}(\mathbb{Z})\oplus \L^{\mathrm{n}}(\mathbb{Z}))$ and so the indeterminacy of such an identification of fibre sequences is a $(\mathbb{Z}/2)^2$-torsor.
\end{Prop}
We remark that the $(\mathbb{Z}/2)^2$ appearing as indeterminacy in this statement seems unrelated to those appearing in \cref{splitln} and \cref{symhop}. In particular, it is not difficult to check that it acts trivially on the $\L(\mathbb{R})$-linear self-homotopies of the symmetrisation map.
\begin{proof}
We first show that the square involving the right hand maps commutes. To this end we compute that
\[\mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})),\L^{\mathrm{n}}(\mathbb{Z})) \simeq \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{s}}(\mathbb{Z}),\L^{\mathrm{n}}(\mathbb{Z})) \simeq \L^{\mathrm{n}}(\mathbb{Z})\]
via the equivalence of \cref{adlslq}. In particular, such maps are detected by their behaviour on $\pi_0$, where both composites are readily checked to induce the projection $\mathbb{Z} \rightarrow \mathbb{Z}/8$. Choosing a homotopy between the composites, we obtain an equivalence of fibre sequences and it remains to check that the induced map on fibres is the Anderson dual of the middle one. But we have
\[\mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\mathrm{I}(\L^{\mathrm{s}}(\mathbb{Z})), \L^{\mathrm{q}}(\mathbb{Z})) \simeq \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}), \L^{\mathrm{q}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z})\]
by \cref{endos of Lq}, so again such maps are detected by their effect on components. In the case at hand this is easily checked to be the canonical isomorphism by means of the long exact sequences associated to the fibre sequences.
To obtain the statement about the automorphisms, recall that endomorphisms of fibre sequences are equally well described by the endomorphisms of their defining arrow.
By \cite[Proposition 5.1]{GHN} these endomorphisms are given by
\[\mathrm{Hom}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{s}}(\mathbb{Z}),\L^{\mathrm{s}}(\mathbb{Z})) \times_{\mathrm{Hom}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{s}}(\mathbb{Z}),\L^{\mathrm{n}}(\mathbb{Z}))} \mathrm{Hom}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{n}}(\mathbb{Z}),\L^{\mathrm{n}}(\mathbb{Z})),\]
in the present situation. We were unable to identify the right hand term, but requiring the endomorphism to be the identity termwise, results in the space
\[\{\mathrm{id}_{\L^{\mathrm{q}}(\mathbb{Z})}\} \times_{\mathrm{Hom}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}),\L^{\mathrm{q}}(\mathbb{Z}))} \{\mathrm{id}_{\L^{\mathrm{s}}(\mathbb{Z})}\} \times_{\mathrm{Hom}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{s}}(\mathbb{Z}),\L^{\mathrm{n}}(\mathbb{Z}))} \{\mathrm{id}_{\L^{\mathrm{n}}(\mathbb{Z})}\}\]
which by \cref{endos of Lq} evaluates as claimed.
\end{proof}
\begin{Rmk}
\cref{lnand} and the existence part of \cref{xyz} can also be shown using the following argument.
The equivalences
\[\mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}),\L^{\mathrm{s}}(\mathbb{Z})) \simeq \mathrm{map}_{\L^{\mathrm{s}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}),\mathrm{map}(\L^{\mathrm{q}}(\mathbb{Z}),\mathrm{I}(\S))) \simeq \mathrm{map}(\L^{\mathrm{q}}(\mathbb{Z})\otimes_{\L^{\mathrm{s}}(\mathbb{Z})}\L^{\mathrm{q}}(\mathbb{Z}), \mathrm{I}(\S))\]
give a $\mathrm{C}_2$-action to the left hand spectrum via the flip action on the right. Decoding definitions, the action map on the left hand spectrum is given by applying Anderson duality and then conjugating with the equivalences of \cref{adlslq}.
The symmetrisation map is a homotopy fixed point for this action, since unwinding definitions shows that it is sent to the composite
\[\L^{\mathrm{q}}(\mathbb{Z})\otimes_{\L^{\mathrm{s}}(\mathbb{Z})}\L^{\mathrm{q}}(\mathbb{Z}) \stackrel{\mu}{\longrightarrow} \L^{\mathrm{q}}(\mathbb{Z}) \simeq \mathrm{I}(\L^{\mathrm{s}}(\mathbb{Z})) \longrightarrow \mathrm{I}(\S),\]
where the final map is induced by the unit of $\L^{\mathrm{s}}(\mathbb{Z})$; this composite is a homotopy fixed point since the multiplication on $\L^{\mathrm{q}}(\mathbb{Z})$ makes it a non-unital $\mathbb{E}_\infty$-ring spectrum by \cite{HLLII}.
\end{Rmk}
Finally, we complete the proof of \cref{thmA}. Consider the map $u\colon \L^{\mathrm{s}}(\mathbb{Z}) \to \L(\mathbb{R})$ induced by the evident ring homomorphism and its Anderson dual $\mathrm{I}(u) \colon \L(\mathbb{R}) \to \L^{\mathrm{q}}(\mathbb{Z})$ arising from \cref{self dual} and \cref{adlslq}.
\begin{Cor}\label{symmetric case}
There exists a unique equivalence
\[\L^{\mathrm{q}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[-2]\]
of $\L(\mathbb{R})$-modules under which the map $\mathrm{I}(u)$ corresponds to the inclusion of the first summand. In particular, there is a preferred equivalence
\[\L^{\mathrm{q}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus \bigoplus_{k \in \mathbb{Z}} \H\mathbb{Z}/2[4k-2]\]
of underlying spectra.
\end{Cor}
Recall the analogous equivalence
\[\L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]\]
from \cref{lstype}.
\begin{proof}
The existence of such a splitting is for example immediate from that for $\L^{\mathrm{s}}(\mathbb{Z})$, the self duality of $\L(\mathbb{R})$ from \cref{self dual} and the equivalence
\[\mathrm{I}(\L(\mathbb{R})/2) \simeq (\L(\mathbb{R})/2)[-1]\]
obtained by applying Anderson duality to the fibre sequence defining $\L(\mathbb{R})/2$.
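In more detail, Anderson duality carries the defining cofibre sequence $\L(\mathbb{R}) \xrightarrow{\cdot 2} \L(\mathbb{R}) \rightarrow \L(\mathbb{R})/2$ to a fibre sequence
\[\mathrm{I}(\L(\mathbb{R})/2) \longrightarrow \mathrm{I}(\L(\mathbb{R})) \xrightarrow{\cdot 2} \mathrm{I}(\L(\mathbb{R})),\]
and since $\mathrm{I}(\L(\mathbb{R})) \simeq \L(\mathbb{R})$ by \cref{self dual}, this identifies $\mathrm{I}(\L(\mathbb{R})/2)$ with $\mathrm{fib}(\cdot 2 \colon \L(\mathbb{R}) \rightarrow \L(\mathbb{R})) \simeq (\L(\mathbb{R})/2)[-1]$.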
For the uniqueness we compute
\begin{align*}
\pi_0\mathrm{end}_{\L(\mathbb{R})}(\L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[-2]) =\ &\pi_0\mathrm{end}_{\L(\mathbb{R})}(\L(\mathbb{R})) \oplus \pi_0\mathrm{map}_{\L(\mathbb{R})}(\L(\mathbb{R}), (\L(\mathbb{R})/2)[-2]) \ \oplus \ \\ & \pi_0\mathrm{map}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[-2], \L(\mathbb{R})) \oplus \pi_0\mathrm{end}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[-2]) \\
=\ & \mathbb{Z} \oplus 0 \oplus 0 \oplus \mathbb{Z}/2
\end{align*}
using the exact sequences associated to multiplication by $2$. The two factors are again detected by the effect on $\pi_0$ and $\pi_2$, respectively, and thus determined by compatibility with $\mathrm{I}(u)$ and by being an equivalence.
The statement about the underlying spectra is immediate from \cref{drtype}.
\end{proof}
\begin{Prop}\label{symhop}
Under the equivalences
\[\L^{\mathrm{q}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[-2] \quad \text{and} \quad \L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]\]
of \cref{lstype} and \cref{symmetric case} the symmetrisation map is $\L(\mathbb{R})$-linearly homotopic to the matrix
\[\begin{pmatrix}8 & 0 \\ 0 & 0\end{pmatrix},\]
moreover the $\L(\mathbb{R})$-linear homotopies form a $(\mathbb{Z}/2)^2$-torsor. Each induces a splitting
\[\L^{\mathrm{n}}(\mathbb{Z}) \simeq \L(\mathbb{R})/8 \oplus (\L(\mathbb{R})/2)[1] \oplus (\L(\mathbb{R})/2)[-1]\]
of $\L(\mathbb{R})$-modules and the underlying splittings of spectra are those from \cref{splitln}.
\end{Prop}
\begin{proof}
We analyse the components individually. The induced map on the $\L(\mathbb{R})$-factors is multiplication by $8$ on $\pi_0$ and therefore homotopic to multiplication by $8$ as the source is a free module, and since $\L_1(\mathbb{R}) = 0$, a witnessing homotopy is unique. For the other parts it is a simple computation that all three of
\[\mathrm{map}_{\L(\mathbb{R})}(\L(\mathbb{R}),(\L(\mathbb{R})/2)[1]),\quad \mathrm{map}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[-2],\L(\mathbb{R})) \] \[\quad \text{and} \quad \mathrm{map}_{\L(\mathbb{R})}((\L(\mathbb{R})/2)[-2],(\L(\mathbb{R})/2)[1])\]
have vanishing components. This gives the existence of a homotopy as claimed.
For the indeterminacy one computes that the first homotopy groups of the three terms are given by $\mathbb{Z}/2, \mathbb{Z}/2$ and $0$, respectively. Together with the uniqueness statement for the $\L(\mathbb{R})$-part this gives the claim.
The comparison to \cref{splitln} follows by simply unwinding the definitions.
\end{proof}
\begin{Rmk}
\begin{enumerate}
\item \cref{symmetric case} can also easily be obtained directly from \cref{lzlrsplit}: One considers the map $v \colon \L(\mathbb{R}) \rightarrow \L^{\mathrm{q}}(\mathbb{Z})$ obtained by extending the generator $\S \rightarrow \L^{\mathrm{q}}(\mathbb{Z})$ of $\L_0^\mathrm{q}(\mathbb{Z})$, and observes that the resulting fibre sequence splits $\L(\mathbb{R})$-linearly, see \cref{generalsplits}. The underlying equivalence of spectra can also be obtained even more directly from the $2$-local fracture square of $\L^{\mathrm{q}}(\mathbb{Z})$.
\item With such an independent argument \cref{symmetric case} can then be used in conjunction with \cref{lstype} to deduce that $\L^{\mathrm{q}}(\mathbb{Z})$ and $\L^{\mathrm{s}}(\mathbb{Z})$ are Anderson dual to one another, however, only as $\L(\mathbb{R})$- and not as $\L^{\mathrm{s}}(\mathbb{Z})$-modules. Similarly, using this approach it is not clear that $v = \mathrm{I}(u)$ is in fact $\L^{\mathrm{s}}(\mathbb{Z})$-linear.
\item
By \cref{symhop} the composite
\[\L(\mathbb{R}) \stackrel{\mathrm{I}(u)}{\longrightarrow} \L^{\mathrm{q}}(\mathbb{Z}) \stackrel{\mathrm{sym}}{\longrightarrow} \L^{\mathrm{s}}(\mathbb{Z}) \stackrel{u}{\longrightarrow} \L(\mathbb{R})\]
is $\L(\mathbb{R})$-linearly homotopic to multiplication by $8$. However, by construction this map is $\L^{\mathrm{s}}(\mathbb{Z})$-linear. We do not know whether it is homotopic to multiplication by $8$ as such, largely since we do not understand $\L(\mathbb{R})$ as an $\L^{\mathrm{s}}(\mathbb{Z})$-module, compare to our question in \cref{question}.
\end{enumerate}
\end{Rmk}
As a final application of the results above, we consider the space $\mathcal{G}/\mathrm{Top}$. It is the fibre of the topological $J$-homomorphism $\mathrm{B}\mathrm{Top} \to \mathrm{B}\mathcal{G}$ and is therefore endowed with a canonical $\mathbb{E}_\infty$-structure usually referred to as the Whitney-sum structure. As mentioned in the introduction one of the principal results of surgery theory is an equivalence of \emph{spaces} $\mathcal{G}/\mathrm{Top} \to \Omega^\infty_0\L^{\mathrm{q}}(\mathbb{Z})$; see \cite[Theorem C.1]{KS}. We therefore find:
\begin{Cor}
There is a preferred equivalence of spaces
\[ \mathcal{G}/\mathrm{Top} \simeq \Omega^\infty_0 \L(\mathbb{R}) \times \prod\limits_{k \geq 0} \mathrm K(\mathbb{Z}/2,4k+2).\]
\end{Cor}
Let us emphatically warn the reader that the equivalence $\mathcal{G}/\mathrm{Top} \to \Omega^\infty_0 \L^{\mathrm{q}}(\mathbb{Z})$ is not one of $\mathbb{E}_\infty$-spaces or even H-spaces, when equipping $\mathcal{G}/\mathrm{Top}$ with the Whitney-sum structure. By computations of Madsen \cite{MadsenGTop}, $\mathrm{B}^3(\mathcal{G}/\mathrm{Top})_{(2)}$ is not even an Eilenberg-Mac Lane space, in contrast to $\Omega^{\infty-3}\L^{\mathrm{q}}(\mathbb{Z})_{(2)}$.
One can nevertheless describe the Whitney-sum $\mathbb{E}_\infty$-structure on $\mathcal{G}/\mathrm{Top}$ purely in terms of the algebraic L-theory spectrum $\L^{\mathrm{q}}(\mathbb{Z})$: one finds an equivalence of $\mathbb{E}_\infty$-spaces $\mathcal{G}/\mathrm{Top} \simeq \mathrm{Sl}_1\L^{\mathrm{q}}(\mathbb{Z})$, see the forthcoming \cite{HLLII}.
\section{The homotopy type of $\L^{\mathrm{gs}}(\mathbb{Z})$}\label{sectiongs}
To address the case of the genuine $\L$-spectra, we need to briefly recall Lurie's setup for $\L$-spectra from \cite{LurieL}, which forms the basis for their construction and analysis in the series \cite{CDHI, CDHII, CDHIII, CDHIV}. In the terminology of those papers the input for an $\L$-spectrum is a Poincar\'e $\infty$-category, which is a small stable $\infty$-category $\mathscr{C}$ equipped with a quadratic functor $\Qoppa \colon \mathscr{C}^\mathrm{op} \to \mathrm{Sp}$, satisfying a non-degeneracy condition. These objects form an $\infty$-category ${\mathrm{Cat}_\infty^\mathrm{p}}$ and Lurie constructs a functor
\[\L \colon {\mathrm{Cat}_\infty^\mathrm{p}} \longrightarrow \mathrm{Sp},\]
see \cite[Section 4.4]{CDHII}. The examples considered before, that is symmetric and quadratic $\L$-spectra of a ring with involution $R$, arise by considering $\mathscr{C} = {\mathscr{D}^{\mathrm{perf}}}(R)$, the category of perfect complexes over $R$, equipped with the quadratic functors
\[\Qoppa^s(C) = \mathrm{map}_{R \otimes R}(C \otimes C,R)^{\mathrm{hC}_2}\quad \text{and} \quad \Qoppa^q(C) = \mathrm{map}_{R \otimes R}(C \otimes C,R)_{\mathrm{hC}_2},\]
respectively, where $\mathrm{C}_2$ acts by the involution on $R$, by flipping the factors on $C \otimes C$ and through conjugation on the mapping spectrum.
Now, the animation\footnote{Here, we use novel terminology suggested by Clausen and first introduced in \cite[5.1.4]{CS}.} (in more classical terminology the non-abelian derived functor)
of any quadratic functor
$\Lambda \colon \mathrm{Proj}(R)^\mathrm{op} \to \mathrm{Ab}$
gives rise to a genuine quadratic functor
\[\Qoppa^{g\Lambda} \colon {\mathscr{D}^{\mathrm{perf}}}(R)^\mathrm{op} \longrightarrow \mathrm{Sp}\]
and we set
\[\L^{g\Lambda}(R) = \L({\mathscr{D}^{\mathrm{perf}}}(R),\Qoppa^{g\Lambda}),\]
whenever $({\mathscr{D}^{\mathrm{perf}}}(R),\Qoppa^{g\Lambda})$ is a Poincar\'e category (which can be read off from a non-degeneracy condition on $\Lambda$).
This can be applied for example to the functors
\[P \longmapsto \mathrm{Hom}_{R \otimes R}(P \otimes P,R)^{\mathrm{C}_2} \quad \text{and} \quad P \longmapsto \mathrm{Hom}_{R \otimes R}(P \otimes P,R)_{\mathrm{C}_2}\]
giving Poincar\'e structures $\Qoppa^{\mathrm{gs}}$ and $\Qoppa^\mathrm{gq}$ on ${\mathscr{D}^{\mathrm{perf}}}(R)$ which are studied in detail in \cite[Section 4.2]{CDHI}. From the universal property of homotopy orbits and fixed points, there are then comparison maps
\[\Qoppa^q \Longrightarrow \Qoppa^\mathrm{gq} \Longrightarrow \Qoppa^\mathrm{gs} \Longrightarrow \Qoppa^s\]
giving rise to maps
\[\L^{\mathrm{q}}(R) \longrightarrow \L^\mathrm{gq}(R) \longrightarrow \L^\mathrm{gs}(R) \longrightarrow \L^{\mathrm{s}}(R),\]
whose composition is the symmetrisation map considered before. All of these are equivalences if $2$ is invertible in $R$, but in general these are four different spectra. An entirely analogous definition gives the skew-symmetric variants by introducing a sign into the action of $\mathrm{C}_2$.
The final formal property of this version of the $\L$-functor that we need to recall is that it admits a canonical lax symmetric monoidal refinement: By the results of \cite[Section 5.2]{CDHI} the category ${\mathrm{Cat}_\infty^\mathrm{p}}$ admits a symmetric monoidal structure that lifts Lurie's tensor product on $\mathrm{Cat}^\mathrm{ex}_\infty$, the category of stable categories. In \cite{CDHIV} we then show that the functor $\L \colon {\mathrm{Cat}_\infty^\mathrm{p}} \rightarrow \mathrm{Sp}$ admits a canonical lax symmetric monoidal refinement, and for a commutative ring with trivial involution the functors $\Qoppa^s$ and $\Qoppa^\mathrm{gs}$ enhance ${\mathscr{D}^{\mathrm{perf}}}(R)$ with its tensor product to an $\mathbb{E}_\infty$-object of ${\mathrm{Cat}_\infty^\mathrm{p}}$. Furthermore, the forgetful transformation $\Qoppa^\mathrm{gs} \Rightarrow \Qoppa^s$ refines to an $\mathbb{E}_\infty$-map and $({\mathscr{D}^{\mathrm{perf}}}(R),\Qoppa^q)$ and $({\mathscr{D}^{\mathrm{perf}}}(R),\Qoppa^\mathrm{gq})$ admit canonical module structures over their respective symmetric counterparts, such that the forgetful map $\Qoppa^q \Rightarrow \Qoppa^\mathrm{gq}$ becomes $\Qoppa^\mathrm{gs}$-linear, see \cite[Section 5.4]{CDHI}. By the monoidality of the $\L$-functor these structures persist to the $\L$-spectra. However, neither $\L^\mathrm{gs}(R)$ nor $\L^\mathrm{gq}(R)$ are generally modules over $\L^{\mathrm{s}}(R)$.
Combining work of Ranicki with the main results of \cite{CDHIII} we obtain:
\begin{Thm}\label{homotopylgs}
Of the comparison maps
\[\L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^\mathrm{gq}(\mathbb{Z}) \longrightarrow \L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \L^{\mathrm{s}}(\mathbb{Z})\]
the left, middle, and right one induce isomorphisms on homotopy groups below degree $2$, outside degrees $[-2,1]$, and in degrees $0$ and above, respectively. Furthermore, the preimage of the element $x \in \L_4^\mathrm{s}(\mathbb{Z})$ in $\L_4^\mathrm{gq}(\mathbb{Z})$ determines an equivalence
\[\L^\mathrm{gs}(\mathbb{Z})[4] \longrightarrow \L^\mathrm{gq}(\mathbb{Z}).\]
\end{Thm}
In particular, the homotopy groups of $\L^\mathrm{gs}(\mathbb{Z})$ evaluate to
\[\L_n^\mathrm{gs}(\mathbb{Z}) = \begin{cases} \mathbb{Z} & n \equiv 0 \ (4) \\
\mathbb{Z}/2 & n \equiv 1 \ (4) \text{ and } n \geq 0 \\
\mathbb{Z}/2 & n \equiv 2 \ (4) \text{ and } n \leq -4 \\
0 & \text{else.}\end{cases}\]
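For orientation, we recall the classical computations
\[\L^{\mathrm{q}}_n(\mathbb{Z}) = \begin{cases} \mathbb{Z} & n \equiv 0 \ (4) \\
\mathbb{Z}/2 & n \equiv 2 \ (4) \\
0 & \text{else}\end{cases} \qquad \text{and} \qquad \L^{\mathrm{s}}_n(\mathbb{Z}) = \begin{cases} \mathbb{Z} & n \equiv 0 \ (4) \\
\mathbb{Z}/2 & n \equiv 1 \ (4) \\
0 & \text{else,}\end{cases}\]
which via \cref{homotopylgs} account for the values above in all degrees $\geq 0$ and $\leq -3$: in these ranges $\L^\mathrm{gs}_*(\mathbb{Z})$ agrees with $\L^{\mathrm{s}}_*(\mathbb{Z})$ and $\L^{\mathrm{q}}_*(\mathbb{Z})$, respectively.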
\begin{Cor}\label{truncsym}
The symmetrisation map determines a cartesian square
\[\begin{tikzcd}
\L^\mathrm{gs}(\mathbb{Z}) \ar[r] \ar[d] & \L^{\mathrm{s}}(\mathbb{Z}) \ar[d] \\
\tau_{\geq -1} \L^{\mathrm{n}}(\mathbb{Z}) \ar[r] & \L^{\mathrm{n}}(\mathbb{Z})
\end{tikzcd}\]
of $\L^\mathrm{gs}(\mathbb{Z})$-modules and the horizontal maps are localisations at $x \in \L_4^{\mathrm{gs}}(\mathbb{Z})$.
\end{Cor}
Here the module structure on the lower left corner is induced by the fibre sequence
\[\L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \tau_{\geq -1} \L^{\mathrm{n}}(\mathbb{Z}), \]
which also proves the Corollary.
\begin{Rmk}\label{weirdring}
\begin{enumerate}
\item
The homotopy ring of $\L^\mathrm{gs}(\mathbb{Z})$ is rather complicated to spell out in full. It is entirely determined by the assertion that the multiplication with $x \in \L_4^\mathrm{gs}(\mathbb{Z})$ is an isomorphism in all degrees in which it can be, with one exception: The map $\cdot x \colon \L_{-4}^\mathrm{gs}(\mathbb{Z}) \to \L_0^\mathrm{gs}(\mathbb{Z})$ is multiplication by $8$.
Using the evident generators, this results in $\L_*^\mathrm{gs}(\mathbb{Z}) = \mathbb{Z}[x,e,y_i,z_i | i \geq 1]/I$ with $|x| = 4, |e| = 1, |y_i| = -4i$ and $|z_i| = -4i-2$, where $I$ is the ideal spanned by the elements
\[2e,e^2, ey_i,ez_i,xy_1 - 8, xy_{i+1} - y_i, xz_1, xz_{i+1} - z_i, y_iy_j - 8y_{i+j},y_iz_j, 2z_i, z_iz_j.\]
In particular, $\L_*^\mathrm{gs}(\mathbb{Z})$ is not finitely generated as an algebra over $\mathbb{Z}$ or even $\L^\mathrm{gs}_{\geq 0}(\mathbb{Z}) = \mathbb{Z}[x,e]/(2e,e^2)$.
\item One can show that the square in \cref{truncsym} is in fact one of $\mathbb{E}_\infty$-ring spectra, which is somewhat surprising for the lower left term. To see this, one has to show that the maps $\Qoppa^\mathrm{q} \Rightarrow \Qoppa^{\mathrm{gs}} \Rightarrow \Qoppa^\mathrm{s}$ both exhibit the source as an ideal object of the target with respect to the tensor product of quadratic functors established in \cite[Section 5.3]{CDHI}. Then one can use the monoidality of $\L$-theory. We refrain from giving the details here.
\end{enumerate}
\end{Rmk}
We begin with the proof of \cref{thmB}:
\begin{Thm}\label{AD for genuine L}
The Anderson dual of $\L^{\mathrm{gs}}(\mathbb{Z})$ is given by $\L^{\mathrm{gq}}(\mathbb{Z})$, or equivalently by $\L^{\mathrm{gs}}(\mathbb{Z})[4]$, as an $\L^{\mathrm{gs}}(\mathbb{Z})$-module.
\end{Thm}
\begin{proof}
Consider the fibre sequence
\[ \L^{\mathrm{gs}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow \tau_{\leq -3}\L^{\mathrm{n}}(\mathbb{Z})\]
from \cref{truncsym} and observe that \cref{lnand} provides a canonical equivalence
\[\mathrm{I}(\tau_{\leq -3}\L^{\mathrm{n}}(\mathbb{Z})) \simeq (\tau_{\geq 3}\L^{\mathrm{n}}(\mathbb{Z}))[-1]\]
of $\L^\mathrm{gs}(\mathbb{Z})$-modules. Rotating the above fibre sequence once to the left and applying Anderson duality in combination with \cref{adlslq}, we obtain a fibre sequence
\[ \L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \mathrm{I}(\L^{\mathrm{gs}}(\mathbb{Z})) \longrightarrow \tau_{\geq 3}\L^{\mathrm{n}}(\mathbb{Z}).\]
From \cref{truncsym} we also have a fibre sequence
\[ \L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^{\mathrm{gs}}(\mathbb{Z})[4] \longrightarrow \tau_{\geq 3}\L^{\mathrm{n}}(\mathbb{Z})\]
and both are sequences of $\L^{\mathrm{gs}}(\mathbb{Z})$-modules. Now
\[\pi_4\mathrm{I}(\L^{\mathrm{gs}}(\mathbb{Z})) \cong \mathrm{Hom}(\L^{\mathrm{gs}}_{-4}(\mathbb{Z}),\mathbb{Z}) \cong \mathbb{Z}\]
with the $E_8$-form providing a generator. It determines an $\L^\mathrm{gs}(\mathbb{Z})$-module map
\begin{equation}\label{map-to-be-equivalence}
\L^\mathrm{gs}(\mathbb{Z})[4] \longrightarrow \mathrm{I}(\L^{\mathrm{gs}}(\mathbb{Z})) \tag{$\ast$}
\end{equation}
which we wish to show is an equivalence. To see this consider the diagram
\[\begin{tikzcd}
\L^{\mathrm{q}}(\mathbb{Z}) \ar[r] \ar[d,equal] & \mathrm{I}(\L^{\mathrm{gs}}(\mathbb{Z})) \ar[r] & \tau_{\geq 3}\L^{\mathrm{n}}(\mathbb{Z}) \ar[d, equal] \\
\L^{\mathrm{q}}(\mathbb{Z}) \ar[r] & \L^{\mathrm{gs}}(\mathbb{Z})[4] \ar[r] \ar[u] & \tau_{\geq 3}\L^{\mathrm{n}}(\mathbb{Z})
\end{tikzcd}
\]
which we claim commutes up to homotopy and consists of horizontal fibre sequences; let us mention explicitly that we do not claim here that it is a diagram of fibre sequences, which would require us to identify the witnessing homotopies; although this follows a posteriori, it is not necessary for the argument. The commutativity of the above diagram on homotopy groups implies that the map \eqref{map-to-be-equivalence} is indeed an equivalence by a simple diagram chase.
We now establish the claim that the above diagram commutes up to homotopy.
We discuss first the right square. Since the source of the composites is a free $\L^{\mathrm{gs}}(\mathbb{Z})$-module in this case, it suffices to show that this diagram commutes on $\pi_4$. By construction, both composites induce the projection $\mathbb{Z} \rightarrow \mathbb{Z}/8$.
For the left hand square, we use the calculation proving \cref{bla cor} together with \cref{adlslq} to obtain an equivalence
\[ \mathrm{map}_{\L^{\mathrm{gs}}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}),\mathrm{I}(\L^{\mathrm{gs}}(\mathbb{Z}))) \simeq \mathrm{I}(\L^{\mathrm{q}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z}).\]
In particular, $\pi_0$ of this spectrum is $\mathbb{Z}$, and clearly we can distinguish all the possible maps by their effect on $\pi_{4}$, as there is one that is non-zero. It follows that it again suffices to argue that the diagram commutes on $\pi_{4}$. In this case, both composites induce multiplication by $8$, and so are homotopic as needed.
\end{proof}
\begin{Rmk}
One can also hope to prove \cref{AD for genuine L} directly using \cref{free homotopy implies free module}, instead of reducing it to \cref{adlslq}. However, we did not manage to compute the module structure of $\mathrm{I}(\L^\mathrm{gs}(\mathbb{Z}))$ over $\L^\mathrm{gs}(\mathbb{Z})$ without referring to \cref{adlslq}: Everything can be formally reduced to the statement that multiplication by the element $z_1 \in \L^\mathrm{gs}_{-6}(\mathbb{Z})$ induces a surjection $\pi_4(\mathrm{I}(\L^\mathrm{gs}(\mathbb{Z}))) \rightarrow \pi_{-2}(\mathrm{I}(\L^\mathrm{gs}(\mathbb{Z})))$, but we did not find a direct argument for this.
If one had a direct argument for \cref{AD for genuine L} one could conversely deduce \cref{adlslq} as there are equivalences
\[\L^{\mathrm{s}}(\mathbb{Z}) \simeq \colim\limits_{\cdot x} \L^{\mathrm{gs}}(\mathbb{Z}) \quad \text{and}\quad \L^{\mathrm{q}}(\mathbb{Z}) \simeq \lim\limits_{\cdot x} \L^{\mathrm{gq}}(\mathbb{Z})\]
as a result of \cref{homotopylgs}.
\end{Rmk}
\begin{Cor}\label{canonical maps homotopic}
Under the equivalences of \cref{adlslq} and \cref{AD for genuine L}, the Anderson dual of the canonical map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \L^{\mathrm{s}}(\mathbb{Z})$ is homotopic to the canonical map $\L^{\mathrm{q}}(\mathbb{Z}) \rightarrow \L^\mathrm{gq}(\mathbb{Z})$. In fact, the $\L^{\mathrm{gs}}(\mathbb{Z})$-linear homotopies between these maps form a $\mathbb{Z}/2$-torsor.
\end{Cor}
As the symmetrisation map $\mathrm{sym} \colon \L^{\mathrm{q}}(\mathbb{Z}) \rightarrow \L^{\mathrm{s}}(\mathbb{Z})$ factors through the above maps and the canonical one $\L^\mathrm{gq}(\mathbb{Z}) \rightarrow \L^\mathrm{gs}(\mathbb{Z})$, which \cref{homotopylgs} identifies with the multiplication by $x$ on $\L^\mathrm{gs}(\mathbb{Z})$, the present corollary provides another proof that $\mathrm{I}(\mathrm{sym}) \simeq \mathrm{sym}$ under the equivalences provided by \cref{adlslq}.
\begin{proof}
By duality we have equivalences
\[\mathrm{map}_{\L^\mathrm{gs}(\mathbb{Z})}(\L^{\mathrm{q}}(\mathbb{Z}), \L^\mathrm{gq}(\mathbb{Z})) \simeq \mathrm{map}_{\L^\mathrm{gs}(\mathbb{Z})}(\L^\mathrm{gs}(\mathbb{Z}), \L^{\mathrm{s}}(\mathbb{Z})) \simeq \L^{\mathrm{s}}(\mathbb{Z}),\]
so such maps are detected on homotopy. Both claims are now immediate.
\end{proof}
Finally, we address the second half of \cref{thmB} and split $\L^{\mathrm{gs}}(\mathbb{Z})$ into a torsionfree part and a torsion part. For the precise statement, we put $\ell(\mathbb{R}) = \tau_{\geq 0}\L(\mathbb{R})$ and note that the equivalence $\tau_{\geq 0} \L^\mathrm{gs}(\mathbb{Z}) \to \tau_{\geq 0} \L^{\mathrm{s}}(\mathbb{Z})$ together with \cref{lzlrsplit} provides an $\mathbb{E}_1$-map $\ell(\mathbb{R}) \to \L^\mathrm{gs}(\mathbb{Z})$ splitting the canonical map on connective covers. We will use this map to regard $\L^\mathrm{gs}(\mathbb{Z})$ as an $\ell(\mathbb{R})$-module.
The torsionfree part of $\L^\mathrm{gs}(\mathbb{Z})$ is given by the following spectrum:
\begin{Def}
Define the $\ell(\mathbb{R})$-module $\mathscr{L}$ by the cartesian square
\[\begin{tikzcd}
\mathscr{L} \ar[r] \ar[d] & \L(\mathbb{R})\ar[d] \\
\ell(\mathbb{R})/8 \ar[r] & \L(\mathbb{R})/8.
\end{tikzcd}\]
\end{Def}
One easily checks that $\pi_{4k}\mathscr{L} \cong \mathbb{Z}$ for all $k \in \mathbb{Z}$, whereas all other homotopy groups vanish.
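For instance, the Mayer--Vietoris sequence of the defining square reads
\[\cdots \longrightarrow \pi_n\mathscr{L} \longrightarrow \pi_n\L(\mathbb{R}) \oplus \pi_n(\ell(\mathbb{R})/8) \longrightarrow \pi_n(\L(\mathbb{R})/8) \longrightarrow \pi_{n-1}\mathscr{L} \longrightarrow \cdots\]
and the right hand map is surjective for every $n$: for $n = 4k$ with $k \geq 0$ it is the map $\mathbb{Z} \oplus \mathbb{Z}/8 \rightarrow \mathbb{Z}/8$, $(a,b) \mapsto \bar{a} - b$, whose kernel is isomorphic to $\mathbb{Z}$, and for $k < 0$ it is the reduction $\mathbb{Z} \rightarrow \mathbb{Z}/8$, whose kernel is $8\mathbb{Z}$. In particular, for $k < 0$ the subgroup $\pi_{4k}\mathscr{L} \subset \pi_{4k}\L(\mathbb{R}) \cong \mathbb{Z}$ identifies with $8\mathbb{Z}$.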
\begin{Rmk}\label{Tisweird}
The diagram defining $\mathscr{L}$ in fact refines to one of $\mathbb{E}_2$-ring spectra: The equivalence
\[\ell(\mathbb{R})/8 \simeq \ell(\mathbb{R})_{(2)} \otimes_{\H\mathbb{Z}} \H\mathbb{Z}/8\]
from \cref{waystosplit} exhibits the source as an $\mathbb{E}_2$-ring by \cref{E3}, and similarly for the periodic case. The homotopy ring of $\mathscr{L}$ is then described as
$\mathbb{Z}[x] \oplus_{8\mathbb{Z}[x]} 8\mathbb{Z}[x^{\pm1}] \subset \mathbb{Z}[x^{\pm 1}]$.
Contrary to the case of $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L(\mathbb{R})$, we do not, however, know of sensible ring maps connecting $\L^\mathrm{gs}(\mathbb{Z})$ and $\mathscr{L}$: An $\mathbb{E}_1$-map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \mathscr{L}$ cannot be particularly compatible with the squares
\[\begin{tikzcd}
\mathscr{L} \ar[r] \ar[d] & \L(\mathbb{R})\ar[d] & & \L^\mathrm{gs}(\mathbb{Z}) \ar[r] \ar[d] & \L^{\mathrm{s}}(\mathbb{Z}) \ar[d] \\
\ell(\mathbb{R})/8 \ar[r] & \L(\mathbb{R})/8 & & \tau_{\geq -1} \L^{\mathrm{n}}(\mathbb{Z}) \ar[r] & \L^{\mathrm{n}}(\mathbb{Z}),
\end{tikzcd}\]
since by \cref{ringln} no ring map $\L^{\mathrm{n}}(\mathbb{Z}) \rightarrow \L(\mathbb{R})/8$ can induce an isomorphism on $\pi_4$. Conversely, to construct a ring map $\mathscr{L} \rightarrow \L^\mathrm{gs}(\mathbb{Z})$ from these squares, one would need a ring map $\L(\mathbb{R})/8 \rightarrow \L^{\mathrm{n}}(\mathbb{Z})$, and thus in turn an $\mathbb{E}_1$-factorisation of $\H\mathbb{Z} \rightarrow \L^{\mathrm{n}}(\mathbb{Z})$ through $\H\mathbb{Z}/8$. We do not know whether such a factorisation exists. If it does, \cref{abc} below could be upgraded to an $\mathscr{L}$-linear splitting of $\L^\mathrm{gs}(\mathbb{Z})$.
\end{Rmk}
We are ready to finish the proof of \cref{thmB}:
\begin{Thm}\label{abc}
There is a unique equivalence
\[ \L^{\mathrm{gs}}(\mathbb{Z}) \simeq \mathscr{L} \oplus (\ell(\mathbb{R})/2)[1] \oplus (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2]\]
of $\ell(\mathbb{R})$-modules such that the canonical map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \L(\mathbb{R})$ corresponds to the projection to the first summand followed by the canonical map $\mathscr{L} \rightarrow \L(\mathbb{R})$.
In particular, there is a preferred equivalence
\[ \L^{\mathrm{gs}}(\mathbb{Z}) \simeq \mathscr{L} \oplus \bigoplus_{k \geq 0} \H\mathbb{Z}/2[4k+1] \oplus \H\mathbb{Z}/2[-4k-6]\]
of underlying spectra.
\end{Thm}
We can also identify the induced maps involving the classical $\L$-spectra of the integers. For the statement recall from \cref{lstype} and \cref{symmetric case} that there are canonical equivalences
\[\L^{\mathrm{s}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1] \quad \text{and} \quad \L^{\mathrm{q}}(\mathbb{Z}) \simeq \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[-2]\]
of $\L(\mathbb{R})$-modules. Recall also that $\L^\mathrm{gq}(\mathbb{Z}) \simeq \L^\mathrm{gs}(\mathbb{Z})[4]$ via the multiplication by $x$.
\begin{add}\label{def}
Under these three equivalences the $\ell(\mathbb{R})$-linear homotopies between the canonical maps
\[\L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^\mathrm{gq}(\mathbb{Z}) \quad \text{and}\quad \L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \L^{\mathrm{s}}(\mathbb{Z})\]
and the maps represented by the matrices
\[\begin{pmatrix} \mathrm{I}(\mathrm{can}) &0 \\ 0&0 \\0 &\mathrm{pr} \end{pmatrix} \quad \text{and}\quad \begin{pmatrix} \mathrm{can} & 0&0 \\ 0&\mathrm{incl} &0 \end{pmatrix},\]
form a $\mathbb{Z}/2$-torsor each; here $\mathrm{can} \colon \mathscr{L} \rightarrow \L(\mathbb{R})$ is the map from the definition of $\mathscr{L}$, and
\[\mathrm{I}(\mathrm{can}) \colon \L(\mathbb{R}) \simeq \mathrm{I}(\L(\mathbb{R})) \rightarrow \mathrm{I}(\mathscr{L}) \simeq \mathscr{L}[4]\]
is its Anderson dual, uniquely determined from the map $\ell(\mathbb{R}) \rightarrow \mathscr{L}[4]$ arising from the generator $8x^{-1} \in \pi_{-4}(\mathscr{L})$.
\end{add}
The canonical map $\L^\mathrm{gq}(\mathbb{Z}) \rightarrow \L^\mathrm{gs}(\mathbb{Z})$, being identified with multiplication by $x \in \ell_4(\mathbb{R})$ on $\L^\mathrm{gs}(\mathbb{Z})$, is compatible with the splitting of the theorem. Composing with it, the four $\ell(\mathbb{R})$-linear homotopies of the addendum necessarily underlie the four $\L(\mathbb{R})$-linear homotopies of \cref{symhop}.
\begin{proof}[Proof of \cref{abc} \& \cref{def}]
For the existence part, we first claim that the canonical map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \L(\mathbb{R})$ factors through $\mathscr{L}$. From the definition we have a fibre sequence
\[\mathscr{L} \longrightarrow \L(\mathbb{R}) \longrightarrow \L(\mathbb{R})/(\ell(\mathbb{R}),8),\] so we need to check that the composite
\[\L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \L(\mathbb{R}) \longrightarrow \L(\mathbb{R})/(\ell(\mathbb{R}),8)\]
is null homotopic. Note that it induces the zero map on homotopy groups by \cref{homotopylgs}. We will argue that this implies the claim. Using \cref{AD for genuine L} we find
\begin{align*}
\mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), \L(\mathbb{R})/(\ell(\mathbb{R}),8)) &\simeq \mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}),\mathrm{I}((\ell(\mathbb{R})/8)[3])) \\
&\simeq \mathrm{map}_{\ell(\mathbb{R})}((\ell(\mathbb{R})/8)[3],\L^\mathrm{gs}(\mathbb{Z})[4]).
\end{align*}
The arising fibre sequence
\[ \L^\mathrm{gs}(\mathbb{Z}) \xrightarrow{\cdot 8} \L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), \L(\mathbb{R})/(\ell(\mathbb{R}),8)) \]
gives an isomorphism
\[\mathbb{Z}/8 \cong \pi_0\mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), \L(\mathbb{R})/(\ell(\mathbb{R}),8)),\]
which is necessarily detected on $\pi_{-4}$, as the composite
\[\tau_{\leq -4} \L^\mathrm{gs}(\mathbb{Z}) \simeq \tau_{\leq -4} \L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \tau_{\leq -4} \L(\mathbb{R}) \longrightarrow \tau_{\leq -4}\L(\mathbb{R})/8 \simeq \L(\mathbb{R})/(\ell(\mathbb{R}),8),\]
with the middle map the projection arising from \cref{symmetric case}, is clearly $\ell(\mathbb{R})$-linear and induces the projection $\mathbb{Z} \rightarrow \mathbb{Z}/8$ on $\pi_{-4}$. By \cref{homotopylgs} any lift $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \mathscr{L}$ induces an isomorphism on the torsionfree part of the homotopy groups of $\L^\mathrm{gs}(\mathbb{Z})$.
We next produce the map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow (\ell(\mathbb{R})/2)[1]$. We claim that the composite
\[\L^\mathrm{gs}(\mathbb{Z}) \longrightarrow \L^{\mathrm{s}}(\mathbb{Z}) \longrightarrow (\L(\mathbb{R})/2)[1],\]
factors as needed, where the second map comes from the splitting \cref{lstype}. To this end we compute just as before that
\begin{align*}
\mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[1]) &\simeq \mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}),\mathrm{I}((\ell(\mathbb{R})/2)[2])) \\
&\simeq \mathrm{map}_{\ell(\mathbb{R})}((\ell(\mathbb{R})/2)[2],\L^\mathrm{gs}(\mathbb{Z})[4]).
\end{align*}
This results in a fibre sequence
\[ \mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), \L(\mathbb{R})/(\ell(\mathbb{R}),2)[1]) \longrightarrow \L^\mathrm{gs}(\mathbb{Z})[2] \xrightarrow{\cdot 2} \L^\mathrm{gs}(\mathbb{Z})[2]\]
and thus
\[0 \cong \pi_0\mathrm{map}_{\ell(\mathbb{R})}(\L^\mathrm{gs}(\mathbb{Z}), (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[1]),\]
giving us a lift $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow (\ell(\mathbb{R})/2)[1]$, as desired. By construction it induces an isomorphism of the positive degree torsion part of the homotopy groups of $\L^\mathrm{gs}(\mathbb{Z})$.
The final map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2]$ is easier. It arises by expanding the composite
\[\tau_{\leq -6} \L^\mathrm{gs}(\mathbb{Z}) \simeq \tau_{\leq -6} \L^{\mathrm{q}}(\mathbb{Z}) \longrightarrow \tau_{\leq -6} (\L(\mathbb{R})/2)[2] \simeq (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2]\]
to all of $\L^\mathrm{gs}(\mathbb{Z})$ (by coconnectivity of the target), where the second map arises from the splitting of \cref{symmetric case}. It induces an equivalence on the negative degree torsion part of the homotopy groups of $\L^\mathrm{gs}(\mathbb{Z})$ by construction.
Combining these three maps gives the existence part of the theorem. \\
Next we prove the existence of the homotopies as in the addendum. To start, note that the claims about the first two columns of the second matrix are true by construction. For the last column simply observe that the mapping space $\mathrm{map}_{\ell(\mathbb{R})}(\L(\mathbb{R})/(\ell(\mathbb{R}),2),M)$ is contractible for any $\L(\mathbb{R})$-module $M$, since the source is $x$-torsion, whereas $x$ is invertible in the target. The claim about the first matrix is then immediate from \cref{canonical maps homotopic}. \\
To obtain uniqueness of the splitting, we again treat all parts separately. From the fibre sequence defining $\mathscr{L}$ we find a fibre sequence
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\mathscr{L}) \longrightarrow \mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})) \longrightarrow \mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})/(\ell(\mathbb{R}),8)).\]
Since $\mathscr{L}[x^{-1}] \simeq \L(\mathbb{R})$, the middle term evaluates to $\L(\mathbb{R})$, and for the latter, easy manipulations using \cref{self dual} give $\mathrm{I}(\mathscr{L}) \simeq \mathscr{L}[4]$ and $\mathrm{I}(\L(\mathbb{R})/(\ell(\mathbb{R}),8)) \simeq (\ell(\mathbb{R})/8)[3]$, so
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})/(\ell(\mathbb{R}),8)) \simeq \mathrm{map}_{\ell(\mathbb{R})}(\ell(\mathbb{R})/8,\mathscr{L}[1]).\]
From the fibre sequence defining the source of the latter term we find
\[\pi_1\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})/(\ell(\mathbb{R}),8)) = 0,\]
so $\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\mathscr{L})$ injects into the integers (with index $8$, but we do not need this), so is detected on homotopy.
Secondly, we have
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},(\ell(\mathbb{R})/2)[1]) \simeq \mathrm{map}_{\ell(\mathbb{R})}(\L(\mathbb{R})/(\ell(\mathbb{R}),8),\ell(\mathbb{R})/2) \simeq \mathrm{map}_{\ell(\mathbb{R})}(\ell(\mathbb{R})/8,\ell(\mathbb{R})/2)[-1]\]
by noting that for any connective $\ell(\mathbb{R})$-module $M$ we have
\begin{align*}
\mathrm{map}_{\ell(\mathbb{R})}(\L(\mathbb{R})/k,M) &\simeq \mathrm{map}_{\ell(\mathbb{R})}(\L(\mathbb{R}),M/k)[-1] \\
&\simeq \mathrm{map}_{\ell(\mathbb{R})}(\colim_{\cdot x}\ell(\mathbb{R}), M/k)[-1] \\
&\simeq \lim_{\cdot x} M/k[-1] \\
&\simeq 0
\end{align*}
and applying this to the fibre sequences defining the sources of the left two terms. The arising fibre sequence
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\ell(\mathbb{R})/2[1]) \longrightarrow (\ell(\mathbb{R})/2)[-1] \xrightarrow{\cdot 8} (\ell(\mathbb{R})/2)[-1]\]
shows that $\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\ell(\mathbb{R})/2[1])$ vanishes.
Thirdly, by duality we have
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})/(\ell(\mathbb{R}),2)[-2]) \simeq \mathrm{map}_{\ell(\mathbb{R})}((\ell(\mathbb{R})/2), \mathscr{L})[-1],\]
so from the long exact sequence associated to multiplication by $2$ we again find
\[\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L},\L(\mathbb{R})/(\ell(\mathbb{R}),2)[-2]) = 0.\]
Next up, both $\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\ell(\mathbb{R})/2[1],\mathscr{L})$ and $\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\ell(\mathbb{R})/2[1],\L(\mathbb{R})/(\ell(\mathbb{R}),2)[-2])$ vanish again by the long exact sequence of multiplication by $2$, whereas it gives
\[\pi_0\mathrm{map}_{\ell(\mathbb{R})}(\ell(\mathbb{R})/2[1],\ell(\mathbb{R})/2[1]) = \mathbb{Z}/2,\]
clearly detected on homotopy.
Finally, for $M$ an $\ell(\mathbb{R})$-module we find
$\mathrm{map}_{\ell(\mathbb{R})}((\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2],M)$ to be the total homotopy fibre of
\[\begin{tikzcd}
\lim\limits_{\cdot x} M[2] \ar[r] \ar[d,"{\cdot 2}"] & M[2] \ar[d,"{\cdot 2}"] \\
\lim\limits_{\cdot x} M[2] \ar[r] & M[2]
\end{tikzcd}\]
via the fibre sequence defining the source. For the three modules $\mathscr{L}, (\ell(\mathbb{R})/2)[1]$ and $(\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2]$ the components of this total fibre are given by $0,0$ and $\mathbb{Z}/2$, respectively, the last term clearly detected on homotopy again.
Putting these nine calculations together gives the uniqueness assertion of the theorem. \\
Finally, by \cref{canonical maps homotopic} it suffices to verify the (non)-uniqueness assertion of the addendum in the case of the map $\L^\mathrm{gs}(\mathbb{Z}) \rightarrow \L^{\mathrm{s}}(\mathbb{Z})$. So we need to compute the first homotopy group of
\[\mathrm{map}_{\ell(\mathbb{R})}(\mathscr{L} \oplus (\ell(\mathbb{R})/2)[1] \oplus (\L(\mathbb{R})/(\ell(\mathbb{R}),2))[-2], \L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]) \simeq \mathrm{end}_{\L(\mathbb{R})}(\L(\mathbb{R}) \oplus (\L(\mathbb{R})/2)[1]),\]
the equivalence given by inverting $x \in \L_4(\mathbb{R})$. The four components of the latter are given by
\[\L(\mathbb{R}),\ (\L(\mathbb{R})/2)[1],\ \fib(\L(\mathbb{R}) \xrightarrow{\cdot 2} \L(\mathbb{R}))[-1] \quad \text{and} \quad \fib(\L(\mathbb{R})/2 \xrightarrow{\cdot 2} \L(\mathbb{R})/2)\]
whose first homotopy groups are $0,\mathbb{Z}/2,0$ and $0$, respectively.
\end{proof}
\begin{Rmk}
As mentioned in the introduction to the present section, there are also skew-symmetric and skew-quadratic versions of the genuine $\L$-spectra, and the same is true for the non-genuine spectra as well. However, in the case of the integers it turns out that the sequence
\[\L^{-\mathrm{q}}(\mathbb{Z}) \longrightarrow \L^{-\mathrm{gq}}(\mathbb{Z}) \longrightarrow \L^{-\mathrm{gs}}(\mathbb{Z}) \longrightarrow \L^{-\mathrm{s}}(\mathbb{Z})\]
is merely a two-fold shift of the sequence considered in the present section. The equivalence is classical for the outer two terms and enters the proof of the $4$-fold periodicity of $\L^{\mathrm{s}}(\mathbb{Z})$ and $\L^{\mathrm{q}}(\mathbb{Z})$. The remaining identifications are explained for example around \cite[R.10 Corollary]{CDHIII}.
Note, in particular, that $\L^{-\mathrm{gs}}(\mathbb{Z})$ is Anderson self-dual.
\end{Rmk}
In this work, we present an architecture designed to infer bird's-eye-view representations from arbitrary camera rigs. Our model outperforms baselines on a suite of benchmark segmentation tasks designed to probe the model's ability to represent semantics in the bird's-eye-view frame without any access to ground truth depth data at training or test time. We present methods for training our model that make the network robust to simple models of calibration noise. Lastly, we show that the model enables end-to-end motion planning that follows the trajectory shooting paradigm. In order to meet and possibly surpass the performance of similar networks that exclusively use ground truth depth data from pointclouds, future work will need to condition on multiple time steps of images instead of a single time step as we consider in this work.
\section{Experiments and Results}
\label{sec:experiments}
We use the nuScenes \cite{nuscenes} and Lyft Level 5 \cite{lyftlevel5} datasets to evaluate our approach. nuScenes is a large dataset of point cloud data and image data from 1k scenes, each of 20 seconds in length. The camera rig in both datasets consists of 6 cameras which roughly point in the forward, front-left, front-right, back-left, back-right, and back directions. In both datasets, there is a small overlap between the fields-of-view of the cameras. The extrinsic and intrinsic parameters of the cameras shift throughout both datasets. Since our model conditions on the camera calibration, it is able to handle these shifts.
We define two object-based segmentation tasks and two map-based tasks. For the object segmentation tasks, we obtain ground truth bird's-eye-view targets by projecting 3D bounding boxes into the bird's-eye-view plane. Car segmentation on nuScenes refers to all bounding boxes of class \verb|vehicle.car| and vehicle segmentation on nuScenes refers to all bounding boxes of meta-category \verb|vehicle|. Car segmentation on Lyft refers to all bounding boxes of class \verb|car| and vehicle segmentation on Lyft refers to all bounding boxes with class $\in \{$ \verb|car, truck, other_vehicle, bus, bicycle| $\}$. For mapping, we transform map layers from the nuScenes map into the ego frame using the provided 6 DOF localization and rasterize.
For all object segmentation tasks, we train with binary cross entropy with positive weight 1.0. For lane segmentation, we set the positive weight to 5.0, and for road segmentation we use positive weight 1.0 \cite{fastdraw}. In all cases, we train for 300k steps using Adam \cite{adam} with learning rate $10^{-3}$ and weight decay $10^{-7}$. We use the PyTorch framework \cite{pytorch}.
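To make the loss weighting concrete, the following is a minimal NumPy sketch (not the authors' code; the function name and the toy data are ours) of binary cross entropy with a positive-class weight, mirroring the semantics of PyTorch's built-in positive weighting:

```python
import numpy as np

def weighted_bce(logits, targets, pos_weight=1.0):
    """Binary cross entropy over BEV pixels with a multiplicative
    weight on the positive class (5.0 for lanes, 1.0 otherwise)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical safety for the logs
    per_pixel = -(pos_weight * targets * np.log(p + eps)
                  + (1.0 - targets) * np.log(1.0 - p + eps))
    return per_pixel.mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 200))                    # toy BEV logits
targets = (rng.random((200, 200)) < 0.1).astype(float)  # sparse lane labels

loss_lane = weighted_bce(logits, targets, pos_weight=5.0)
loss_road = weighted_bce(logits, targets, pos_weight=1.0)
```

Upweighting positives raises the loss whenever positive pixels are present, which counteracts the heavy class imbalance of thin lane boundaries in the BEV grid.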
The Lyft dataset does not come with a canonical train/val split. We separate 48 of the Lyft scenes for validation to get a validation set of roughly the same size as nuScenes (6048 samples for Lyft, 6019 samples for nuScenes).
\subsection{Description of Baselines}
Unlike vanilla CNNs, our model comes equipped with 3-dimensional structure at initialization. We show that this structure is crucial for good performance by comparing against a CNN composed of standard modules. We follow an architecture similar to MonoLayout~\cite{krishna}, which also trains a CNN to output bird's-eye-view labels from images only, but does not leverage inductive bias in designing the architecture and trains on single cameras only. The architecture has an EfficientNet-B0 backbone that extracts features independently across all images. We concatenate the representations and perform bilinear interpolation to upsample into an $\mathbb{R}^{X \times Y}$ tensor as is output by our model. We design the network such that it has roughly the same number of parameters as our model. The weak performance of this baseline demonstrates how important it is to explicitly bake symmetry 3 from Sec~\ref{sec:intro} into the model in the multi-view setting.
To show that our model is predicting a useful implicit depth, we compare against our model where the weights of the pretrained CNN are frozen as well as to OFT \cite{oft}. We outperform these baselines on all tasks, as shown in Tables~\ref{tab:obj} and~\ref{tab:map}. We also outperform concurrent work that benchmarks on the same segmentation tasks \cite{zoox} \cite{roddick}. As a result, the architecture is learning both an effective depth distribution as well as effective contextual representations for the downstream task.
\subsection{Segmentation}
\label{subsec:super}
We demonstrate that our Lift-Splat model is able to learn semantic 3D representations given supervision in the bird's-eye-view frame. Results on the object segmentation tasks are shown in Table~\ref{tab:obj}, while results on the map segmentation tasks are in Table~\ref{tab:map}. On all benchmarks, we outperform our baselines. We believe the gains in performance from implicitly unprojecting into 3D are substantial, especially for object segmentation. We also include reported IOU scores for two concurrent works \cite{zoox} \cite{roddick}, although both of these papers use different definitions of the bird's-eye-view grid and a different validation split for the Lyft dataset, so a true comparison is not yet possible.
\begin{table}[t!]
\begin{minipage}{0.47\linewidth}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{|l|c|c||c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{2}{c||}{nuScenes} & \multicolumn{2}{c|}{Lyft}\\
\cline{2-5}
\multicolumn{1}{c|}{} & Car & Vehicles & Car & Vehicles\\
\hline
CNN & 22.78 & 24.25 & 30.71 & 31.91\\
\hline
Frozen Encoder & 25.51 & 26.83 & 35.28 & 32.42\\
\hline
OFT & 29.72 & 30.05 & 39.48 & 40.43\\
\hline
Lift-Splat (Us) & \textbf{32.06} & \textbf{32.07} & \textbf{43.09} & \textbf{44.64}\\
\hline
\hline
PON$^*$ \cite{roddick} & 24.7 & - & - & -\\
\hline
FISHING$^*$ \cite{zoox} & - & 30.0 & - & 56.0\\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{\footnotesize Segment. IOU in BEV frame}
\label{tab:obj}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{|l|c|c|}
\cline{2-3}
\multicolumn{1}{c|}{} & Drivable Area & Lane Boundary \\
\hline
CNN & 68.96 & 16.51 \\
\hline
Frozen Encoder & 61.62 & 16.95 \\
\hline
OFT & 71.69 & 18.07 \\
\hline
Lift-Splat (Us) & \textbf{72.94} & \textbf{19.96} \\
\hline
\hline
PON$^*$ \cite{roddick} & 60.4 & - \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{\footnotesize Map IOU in BEV frame}
\label{tab:map}
\end{minipage}
\end{table}
\subsection{Robustness}
\label{subsec:robustness}
Because the bird's-eye-view CNN learns from data how to fuse information across cameras, we can train the model to be robust to simple noise models that occur in self-driving, such as extrinsics being biased or cameras dying. In Figure~\ref{fig:robust}, we verify that by dropping cameras during training, our model handles dropped cameras better at test time. In fact, the best performing model when all 6 cameras are present is the model that is trained with 1 camera being randomly dropped from every sample during training. We reason that sensor dropout forces the model to learn the correlation between images on different cameras, similar to other variants of dropout \cite{dropout} \cite{blockdrop}. We show on the left of Figure~\ref{fig:robust} that training the model with noisy extrinsics can lead to better test-time performance. For low amounts of noise at test time, models that are trained without any noise in the extrinsics perform the best because the BEV CNN can trust the location of the splats with more confidence. For high amounts of extrinsic noise, our model sustains its good performance.
In Figure~\ref{fig:drop}, we measure the ``importance'' of each camera for the performance of car segmentation on nuScenes. Note that losing cameras on nuScenes implies that certain parts of the region local to the car have no sensor measurements, and as a result performance is strictly upper bounded by performance with the full sensor rig. Qualitative examples in which the network inpaints due to missing cameras are shown in Figure~\ref{fig:remove}. In this way, we measure the importance of each camera, suggesting where sensor redundancy matters most for safety.
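The camera-dropout augmentation described above can be sketched as follows (a toy illustration, not the authors' code; names and shapes are ours). Because the calibration matrices travel with the kept images, the model can consume any subset of the rig:

```python
import numpy as np

def drop_cameras(images, extrins, intrins, n_drop=1, rng=None):
    """Randomly remove n_drop cameras from one training sample.
    images: (n_cams, 3, H, W); extrins: (n_cams, 3, 4); intrins: (n_cams, 3, 3)."""
    rng = rng if rng is not None else np.random.default_rng()
    n = images.shape[0]
    keep = np.sort(rng.choice(n, size=n - n_drop, replace=False))
    return images[keep], extrins[keep], intrins[keep]

# Toy sample with a 6-camera nuScenes-style rig.
imgs = np.zeros((6, 3, 128, 352))
E = np.zeros((6, 3, 4))
K = np.zeros((6, 3, 3))
imgs_k, E_k, K_k = drop_cameras(imgs, E, K, n_drop=1)
```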
\begin{figure}[t]%
\centering
\subfloat[Test Time Extrinsic Noise]{{\includegraphics[width=5cm]{imgs/extrins} }}%
\qquad
\subfloat[Test Time Camera Dropout]{{\includegraphics[width=5cm]{imgs/dropout} }}%
\caption{\footnotesize We show that it is possible to train our network such that it is resilient to common sources of sensor error. On the left, we show that by training with a large amount of noise in the extrinsics (blue), the network becomes more robust to extrinsic noise at test time. On the right, we show that randomly dropping cameras from each batch during training (red) increases robustness to sensor dropout at test time.}%
\label{fig:robust}%
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth,trim=0 30 0 25,clip]{imgs/drop}
\end{center}
\caption{\footnotesize We measure intersection-over-union of car segmentation when each of the cameras is missing. The backwards camera on the nuScenes camera rig has a wider field of view so it is intuitive that losing this camera causes the biggest decrease in performance relative to performance given the full camera rig (labeled ``full'' on the right). }
\label{fig:drop}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{imgs/remove}
\end{center}
\caption{\footnotesize For a single time stamp, we remove each of the cameras and visualize how the loss of that camera affects the prediction of the network. The region covered by the missing camera becomes fuzzier in every case. When the front camera is removed (top middle), the network extrapolates the lane and drivable area in front of the ego and extrapolates the body of a car for which only a corner can be seen in the top right camera.}
\label{fig:remove}
\end{figure}
\subsection{Zero-Shot Camera Rig Transfer}
\label{subsec:zeroshot}
We now probe the generalization capabilities of Lift-Splat. In our first experiment, we measure performance of our model when only trained on images from a subset of cameras from the nuScenes camera rig but at test time has access to images from the remaining two cameras. In Table~\ref{tab:newcams}, we show that the performance of our model for car segmentation improves when additional cameras are available at test time without any retraining. %
\begin{table}[t!]
\begin{minipage}{0.23\linewidth}
\begin{center}
\begin{adjustbox}{width=0.9\textwidth}
\begin{tabular}{|c|c|}
\hline
& IOU \\
\hline
4 & 26.53 \\
\hline
$4 + 1_{fl}$ & 27.35 \\
\hline
$4 + 1_{bl}$ & 27.27 \\
\hline
$4 + 1_{bl} + 1_{fl}$ & \textbf{27.94}\\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\end{minipage}
\begin{minipage}{0.745\linewidth}
\caption{\footnotesize We train on images from only 4 of the 6 cameras in the nuScenes dataset. We then evaluate with the new cameras ($1_{bl}$ corresponds to the ``back left'' camera and $1_{fl}$ corresponds to the ``front left'' camera) and find that the performance of the model strictly increases as we add more sensors unseen during training.}
\label{tab:newcams}
\end{minipage}
\end{table}
We take the above experiment a step farther and probe how well our model generalizes to the Lyft camera rig if it was only trained on nuScenes data. Qualitative results of the transfer are shown in Figure~\ref{fig:zero} and the benchmark against the generalization of our baselines is shown in Table~\ref{tab:zeroshot}.
\begin{table}[t!]
\begin{minipage}{0.58\linewidth}
\caption{\footnotesize We train the model on nuScenes then evaluate it on Lyft. The Lyft cameras are entirely different from the nuScenes cameras but the model succeeds in generalizing far better than the baselines. Note that our model has widened the gap from the standard benchmark in Tables~\ref{tab:obj} and~\ref{tab:map}.}
\label{tab:zeroshot}
\end{minipage}
\begin{minipage}{0.4\linewidth}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{|c|c|c|}
\hline
& Lyft Car & Lyft Vehicle \\
\hline
CNN & 7.00 & 8.06 \\
\hline
Frozen Encoder & 15.08 & 15.82 \\
\hline
OFT & 16.25 & 16.27 \\
\hline
Lift-Splat (Us) & \textbf{21.35} & \textbf{22.59} \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\end{minipage}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{imgs/zeroshot}
\end{center}
\caption{\footnotesize We qualitatively show how our model performs given an entirely new camera rig at test time. Road segmentation is shown in orange, lane segmentation is shown in green, and vehicle segmentation is shown in blue.}
\label{fig:zero}
\end{figure}
\subsection{Benchmarking Against Oracle Depth}
\label{subsec:oracle}
We benchmark our model against the pointpillars \cite{pointpillars} architecture which uses ground truth depth from LIDAR point clouds. As shown in Table~\ref{tab:maplidar}, across all tasks, our architecture performs slightly worse than pointpillars trained with a single scan of LIDAR. However, at least on drivable area segmentation, we note that we approach the performance of LIDAR. In general, not all lanes in the world are visible in a lidar scan. We would like to measure performance in a wider range of environments in the future.
To gain insight into how our model differs from LIDAR, we plot how performance of car segmentation varies with two control variates: distance to the ego vehicle and weather conditions. We determine the weather of a scene from the description string that accompanies every scene token in the nuScenes dataset. The results are shown in Figure~\ref{fig:control}. We find that performance of our model is much worse than pointpillars on scenes that occur at night as expected. We also find that both models experience roughly linear performance decrease with increased depth.
\begin{table}[t!]
\begin{center}
\begin{adjustbox}{width=\textwidth}
\addtolength{\tabcolsep}{3.5pt}
\begin{tabular}{|l|c|c||c|c||c|c|}
\cline{4-7}
\multicolumn{3}{c|}{} & \multicolumn{2}{c||}{nuScenes} & \multicolumn{2}{c|}{Lyft}\\
\cline{2-7}
\multicolumn{1}{c|}{} & Drivable Area & Lane Boundary & Car & Vehicle & Car & Vehicle \\
\hline
Oracle Depth (1 scan) & 74.91 & 25.12 & 40.26 & 44.48 & 74.96 & 76.16 \\
\hline
Oracle Depth ($>1$ scan) & 76.96 & 26.80 & 45.36 & 49.51 & 75.42 & 76.49 \\
\hline
\hline
Lift-Splat (Us) & 70.81 & 19.58 & 32.06 & 32.07 & 43.09 & 44.64 \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{\footnotesize When compared to models that use oracle depth from lidar, there is still room for improvement. Video inference from camera rigs is likely necessary to acquire the depth estimates necessary to surpass lidar.}
\label{tab:maplidar}
\end{table}
\begin{figure}[t]%
\centering
\subfloat[IOU versus distance]{{\includegraphics[width=4cm]{imgs/dist} }}%
\qquad
\subfloat[IOU versus weather]{{\includegraphics[width=4cm]{imgs/weather} }}%
\caption{\footnotesize We compare how our model's performance varies over depth and weather. As expected, our model drops in performance relative to pointpillars at nighttime.}%
\label{fig:control}%
\end{figure}
\subsection{Motion Planning}
\label{subsec:planning}
Finally, we evaluate the capability of our model to perform planning by training the representation output by Lift-Splat to be a cost function. The trajectories that we generate are 5 seconds long spaced by 0.25 seconds. To acquire templates, we fit K-Means for $K=1000$ to all ego trajectories in the training set of nuScenes. At test time, we measure how well the network is able to predict the template that is closest to the ground truth trajectory under the L2 norm. This task is an important experiment for self-driving because the ground truth targets for this experiment are orders of magnitude less expensive to acquire than ground truth 3D bounding boxes. %
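The template-classification target can be sketched as follows (variable names and the random stand-in for the fitted K-Means centroids are ours): each ground-truth trajectory is labeled with the index of its nearest template under the L2 norm.

```python
import numpy as np

T = 20  # 5 s of trajectory at 0.25 s spacing -> 20 (x, y) waypoints

rng = np.random.default_rng(0)
# Stand-in for the K=1000 K-Means centroids fit on training ego-trajectories.
templates = rng.normal(size=(1000, T, 2))

def template_label(traj, templates):
    """Index of the template closest to traj under the L2 norm;
    this index is the classification target for the planner."""
    d2 = ((templates - traj[None]) ** 2).sum(axis=(1, 2))
    return int(np.argmin(d2))

# A ground-truth trajectory near template 42 receives label 42.
gt_traj = templates[42] + 0.01 * rng.normal(size=(T, 2))
label = template_label(gt_traj, templates)
```

At test time, top-$k$ accuracy then simply asks whether this label appears among the $k$ highest-scoring templates output by the network.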
This task is also important for benchmarking camera-based approaches against lidar-based approaches because, although the performance of 3D object detection from cameras alone is certainly upper bounded by detection from lidar alone, an optimal planner using cameras alone should in principle upper bound the performance of an optimal planner trained from lidar alone.
Qualitative results of the planning experiment are shown in Figure~\ref{fig:qtraj}. The empirical results benchmarked against pointpillars are shown in Table~\ref{tab:planning}. The output trajectories exhibit desirable behavior such as following road boundaries and stopping at crosswalks or behind braking vehicles.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{imgs/qtraj}
\end{center}
\caption{\footnotesize We display the top 10 ranked trajectories out of the 1k templates. Video sequences are provided on our \href{https://nv-tlabs.github.io/lift-splat-shoot/}{project page}. Our model predicts bimodal distributions and curves from observations from a single timestamp. Our model does not have access to the speed of the car so it is compelling that the model predicts low-speed trajectories near crosswalks and brake lights.}
\label{fig:qtraj}
\end{figure}
\begin{table}[t!]
\begin{minipage}{0.39\linewidth}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{|c|c|c|c|}
\hline
& Top 5 & Top 10 & Top 20 \\
\hline
Lidar (1 scan) & 19.27 & 28.88 & 41.93 \\
\hline
Lidar (10 scans) & 24.99 & 35.39 & 49.84 \\
\hline
\hline
Lift-Splat (Us) & 15.52 & 19.94 & 27.99 \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\end{minipage}
\begin{minipage}{0.6\linewidth}
\caption{\footnotesize Since planning is framed as classification among a set of 1K template trajectories, we measure top-5, top-10, and top-20 accuracy. We find that our model is still lagging behind lidar-based approaches in generalization. Qualitative examples of the trajectories output by our model are shown in Fig.~\ref{fig:qtraj}.}
\label{tab:planning}
\end{minipage}
\end{table}
\section{Introduction}
\label{sec:intro}
Computer vision algorithms generally take as input an image and output either a prediction that is coordinate-frame agnostic -- such as in classification~\cite{mnist,imagenet,cifar,alexnet} -- or a prediction in the same coordinate frame as the input image -- such as in object detection, semantic segmentation, or panoptic segmentation~\cite{maskrcnn,segnet,panoptic,scnn}.
This paradigm does not match the setting for perception in self-driving out-of-the-box. In self-driving, multiple sensors are given as input, each with a different coordinate frame, and perception models are ultimately tasked with producing predictions in a new coordinate frame -- the frame of the ego car -- for consumption by the downstream planner, as shown in Fig.~\ref{fig:compare}.
There are many simple, practical strategies for extending the single-image paradigm to the multi-view setting. For instance, for the problem of 3D object detection from $n$ cameras, one can apply a single-image detector to all input images individually, then rotate and translate each detection into the ego frame according to the intrinsics and extrinsics of the camera in which the object was detected. This extension of the single-view paradigm to the multi-view setting has three valuable symmetries baked into it:
\begin{enumerate}
\item \textbf{Translation equivariance} -- If pixel coordinates within an image are all shifted, the output will shift by the same amount. Fully convolutional single-image object detectors roughly have this property and the multi-view extension inherits this property from them \cite{trans_invariance} \cite{goodfellow_text}.
\item \textbf{Permutation invariance} -- the final output does not depend on a specific ordering of the $n$ cameras.
\item \textbf{Ego-frame isometry equivariance} -- the same objects will be detected in a given image no matter where the camera that captured the image was located relative to the ego car. An equivalent way to state this property is that the definition of the ego-frame can be rotated/translated and the output will rotate/translate with it.
\end{enumerate}
The downside of the simple approach above is that using post-processed detections from the single-image detector prevents one from differentiating all the way from predictions made in the ego frame back to the sensor inputs. As a result, the model cannot learn in a data-driven way what the best way is to fuse information across cameras. It also means backpropagation cannot be used to automatically improve the perception system using feedback from the downstream planner.
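The post-processing step in the simple strategy above amounts to applying each camera's extrinsics to its detections; a minimal sketch follows (the ego-from-camera convention for the extrinsic $[R \mid t]$ is our assumption, and conventions differ across datasets):

```python
import numpy as np

def cam_to_ego(points_cam, E):
    """Map 3D points (N, 3) from a camera frame into the ego frame,
    assuming E is the 3x4 ego-from-camera extrinsic [R | t]."""
    R, t = E[:, :3], E[:, 3]
    return points_cam @ R.T + t

# A detection 10 m ahead of a camera mounted 1 m forward of the
# ego origin (axes aligned) lands 11 m ahead in the ego frame.
E = np.hstack([np.eye(3), [[1.0], [0.0], [0.0]]])
center_ego = cam_to_ego(np.array([[10.0, 0.0, 0.0]]), E)
```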
We propose a model named ``Lift-Splat'' that preserves the 3 symmetries identified above by design while also being end-to-end differentiable. In Section~\ref{sec:method}, we explain how our model ``lifts'' images into 3D by generating a frustum-shaped point cloud of contextual features and ``splats'' all frustums onto a reference plane, as is convenient for the downstream task of motion planning. In Section~\ref{subsec:explan}, we propose a method for ``shooting'' proposal trajectories into this reference plane for interpretable end-to-end motion planning. In Section~\ref{sec:impl}, we identify implementation details for training lift-splat models efficiently on full camera rigs. We present empirical evidence in Sec~\ref{sec:experiments} that our model learns an effective mechanism for fusing information from a distribution of possible inputs.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.95\linewidth,trim=0 2 0 0,clip]{imgs/compare}
\end{center}
\caption{\footnotesize (left, from SegNet~\cite{segnet}) Traditionally, computer vision tasks such as semantic segmentation involve making predictions in the same coordinate frame as the input image. (right, from Neural Motion Planner~\cite{nmp}) In contrast, planning for self-driving generally operates in the bird's-eye-view frame. Our model directly makes predictions in a given bird's-eye-view frame for end-to-end planning from multi-view images.}
\label{fig:compare}
\end{figure}
\section{Method}
\label{sec:method}
In this section, we present our approach for learning bird's-eye-view representations of scenes from image data captured by an arbitrary camera rig. We design our model such that it respects the symmetries identified in Section~\ref{sec:intro}.
Formally, we are given $n$ images $\{\mathbf{X}_k \in \mathbb{R}^{3 \times H \times W}\}_n$ each with an extrinsic matrix $\mathbf{E}_k \in \mathbb{R}^{3 \times 4}$ and an intrinsic matrix $\mathbf{I}_k \in \mathbb{R}^{3 \times 3}$, and we seek to find a rasterized representation of the scene in the BEV coordinate frame $\mathbf{y} \in \mathbb{R}^{C \times X \times Y}$. The extrinsic and intrinsic matrices together define the mapping from reference coordinates $(x, y, z)$ to local pixel coordinates $(h, w, d)$ for each of the $n$ cameras. We do not require access to any depth sensor during training or testing.
\subsection{Lift: Latent Depth Distribution}
\label{subsec:lift}
The first stage of our model operates on each image in the camera rig in isolation. The purpose of this stage is to ``lift'' each image from a local 2-dimensional coordinate system to a 3-dimensional frame that is shared across all cameras.
The challenge of monocular sensor fusion is that we require depth to transform into reference frame coordinates but the ``depth'' associated to each pixel is inherently ambiguous. Our proposed solution is to generate representations at all possible depths for each pixel.
Let $\mathbf{X} \in \mathbb{R}^{3 \times H \times W}$ be an image with extrinsics $\mathbf{E}$ and intrinsics $\mathbf{I}$, and let $p$ be a pixel in the image with image coordinates $(h, w)$. We associate $|D|$ points $\{ (h, w, d) \in \mathbb{R}^3 \mid d \in D \}$ to each pixel where $D$ is a set of discrete depths, for instance defined by $\{ d_0 + \Delta,..., d_0 + |D|\Delta \}$. Note that there are no learnable parameters in this transformation. We simply create a large point cloud for a given image of size $D \cdot H \cdot W$. This structure is equivalent to what the multi-view synthesis community~\cite{tucker2020singleview,lighthouse} has called a multi-plane image except in our case the features in each plane are abstract vectors instead of $(r,g,b,\alpha)$ values.
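The fixed frustum of points can be built once per feature-map resolution; a sketch with illustrative sizes follows (the depth range and resolutions here are placeholders, not the values used in the paper):

```python
import numpy as np

H, W = 8, 22                 # feature-map resolution after downsampling
d0, delta, D = 4.0, 1.0, 41  # depth bins d0 + k * delta, k = 1..D

# Pair every feature-map pixel (h, w) with every depth bin d,
# yielding a parameter-free frustum of D*H*W points per image.
ds = d0 + delta * np.arange(1, D + 1)
dd, hh, ww = np.meshgrid(ds, np.arange(H), np.arange(W), indexing="ij")
frustum = np.stack([hh, ww, dd], axis=-1)  # (D, H, W, 3)
```

These $(h, w, d)$ coordinates are later unprojected into the ego frame using each camera's intrinsics and extrinsics.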
The context vector for each point in the point cloud is parameterized to match a notion of attention and discrete depth inference. At pixel $p$, the network predicts a context $\mathbf{c} \in \mathbb{R}^C$ and a distribution over depth $\alpha \in \triangle^{|D| - 1}$. The feature $\mathbf{c}_d \in \mathbb{R}^C$ associated to point $p_d$ is then defined as the context vector for pixel $p$ scaled by $\alpha_d$:
\begin{align}
\mathbf{c}_d = \alpha_d \mathbf{c}.
\end{align}
Note that if our network were to predict a one-hot vector for $\alpha$, context at the point $p_d$ would be non-zero exclusively for a single depth $d^*$ as in pseudolidar \cite{pseudolidar}. If the network predicts a uniform distribution over depth, the network would predict the same representation for each point $p_d$ assigned to pixel $p$ independent of depth as in OFT \cite{oft}. Our network is therefore in theory capable of choosing between placing context from the image in a specific location of the bird's-eye-view representation versus spreading the context across the entire ray of space, for instance if the depth is ambiguous.
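The scaling $\mathbf{c}_d = \alpha_d \mathbf{c}$ and the two extreme cases above can be checked numerically; this is a minimal sketch with assumed dimensions.

```python
import numpy as np

C, nD = 4, 5                            # context dim and |D| (illustrative)
rng = np.random.default_rng(0)
c = rng.normal(size=C)                  # predicted context vector at pixel p
logits = rng.normal(size=nD)
alpha = np.exp(logits) / np.exp(logits).sum()   # predicted depth distribution

features = alpha[:, None] * c[None, :]  # (|D|, C): c_d = alpha_d * c
assert np.allclose(features.sum(axis=0), c)     # the mass of c is spread over depths

# one-hot alpha: context placed at a single depth d* (pseudolidar-like behavior)
f1 = np.eye(nD)[2][:, None] * c[None, :]
assert np.allclose(f1[2], c) and np.allclose(np.delete(f1, 2, axis=0), 0.0)

# uniform alpha: identical (scaled) context at every depth (OFT-like behavior)
f2 = np.full(nD, 1.0 / nD)[:, None] * c[None, :]
assert np.allclose(f2, f2[0])
```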
In summary, ideally, we would like to generate a function $g_c : (x, y, z) \in \mathbb{R}^3 \rightarrow c \in \mathbb{R}^C$ for each image that can be queried at any spatial location and return a context vector. To take advantage of discrete convolutions, we choose to discretize space. For cameras, the volume of space visible to the camera corresponds to a frustum. A visual is provided in Figure~\ref{fig:explain}.
\subsection{Splat: Pillar Pooling}
\label{subsec:splat}
\begin{figure}[t]
\centering
\includegraphics[height=0.34\linewidth]{imgs/newsplat}
\caption{\footnotesize \textbf{Lift-Splat-Shoot Outline} Our model takes as input $n$ images (left) and their corresponding extrinsic and intrinsic parameters. In the ``lift'' step, a frustum-shaped point cloud is generated for each individual image (center-left). The extrinsics and intrinsics are then used to splat each frustum onto the bird's-eye-view plane (center-right). Finally, a bird's-eye-view CNN processes the bird's-eye-view representation for BEV semantic segmentation or planning (right).}
\label{fig:splat}
\end{figure}
We follow the pointpillars \cite{pointpillars} architecture to convert the large point cloud output by the ``lift'' step. ``Pillars'' are voxels with infinite height. We assign every point to its nearest pillar and perform sum pooling to create a $C \times H \times W$ tensor that can be processed by a standard CNN for bird's-eye-view inference. The overall lift-splat architecture is outlined in Figure~\ref{fig:splat}.
Just as OFT \cite{oft} uses integral images to speed up their pooling step, we apply an analogous technique to speed up sum pooling. Efficiency is crucial for training our model given the size of the point clouds generated. Instead of padding each pillar then performing sum pooling, we avoid padding by using packing and leveraging a ``cumsum trick'' for sum pooling. This operation has an analytic gradient that can be calculated efficiently to speed up autograd as explained in subsection~\ref{subsec:customback}.
\subsection{Shoot: Motion Planning}
\label{subsec:explan}
A key aspect of our Lift-Splat model is that it enables end-to-end cost map learning for motion planning from camera-only input. At test time, planning using the inferred cost map can be achieved by ``shooting'' different trajectories, scoring their cost, then acting according to the lowest-cost trajectory \cite{pkl}. In Sec~\ref{subsec:planning}, we probe the ability of our model to enable end-to-end interpretable motion planning and compare its performance to lidar-based end-to-end neural motion planners.
We frame ``planning'' as predicting a distribution over $K$ template trajectories for the ego vehicle
$$\mathcal{T} = \{\tau_i\}_K = \{\{ x_j, y_j, t_j \}_T\}_K$$
conditioned on sensor observations $p(\tau | o)$. Our approach is inspired by the recently proposed Neural Motion Planner (NMP)~\cite{nmp}, an architecture that conditions on point clouds and high-definition maps to generate a cost-volume that can be used to score proposed trajectories.
Instead of the hard-margin loss proposed in NMP, we frame planning as classification over a set of $K$ template trajectories. To leverage the cost-volume nature of the planning problem, we enforce the distribution over $K$ template trajectories to take the following form
\begin{align}
p(\tau_i | o) = \frac{\exp\left(-\sum\limits_{x_i, y_i \in \tau_i} c_o(x_i, y_i) \right)}{\sum\limits_{\tau \in \mathcal{T}}\exp\left(-\sum\limits_{x_i, y_i \in \tau} c_o(x_i, y_i) \right)}
\end{align}
where $c_o(x, y)$ is defined by indexing into the cost map predicted given observations $o$ at location $x,y$ and can therefore be trained end-to-end from data by optimizing for the log probability of expert trajectories. For labels, given a ground-truth trajectory, we compute the nearest neighbor in L2 distance to the template trajectories $\mathcal{T}$ then train with the cross entropy loss. This definition of $p(\tau_i | o)$ enables us to learn an interpretable spatial cost function without defining a hard-margin loss as in NMP \cite{nmp}.
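The classification over templates can be sketched as follows (random cost map and templates, purely illustrative): each template accumulates cost along its grid cells, and the softmax over negated accumulated costs makes the lowest-cost template the most likely one.

```python
import numpy as np

rng = np.random.default_rng(1)
cost_map = rng.uniform(size=(200, 200))       # stand-in for the predicted c_o(x, y)
K, T = 6, 10                                  # number of templates, points per template
trajs = rng.integers(0, 200, size=(K, T, 2))  # template (x, y) grid indices (illustrative)

# accumulated cost of each template, then a softmax over the negated costs
costs = np.array([cost_map[t[:, 0], t[:, 1]].sum() for t in trajs])
p = np.exp(-(costs - costs.min()))            # shift for numerical stability
p = p / p.sum()

assert abs(p.sum() - 1.0) < 1e-9
assert p.argmax() == costs.argmin()           # lowest-cost template is most likely
```

Training then amounts to cross entropy between $p$ and the index of the template nearest to the expert trajectory.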
In practice, we determine the set of template trajectories by running K-Means on a large number of expert trajectories. The set of template trajectories used for ``shooting'' onto the cost map that we use in our experiments are visualized in Figure~\ref{fig:trajs}.
\begin{figure}[t!]
\begin{minipage}{0.47\linewidth}
\begin{center}
\includegraphics[width=1\linewidth,trim= 0 25 0 35,clip]{imgs/clusters}
\end{center}
\end{minipage}
\begin{minipage}{0.525\linewidth}
\caption{\footnotesize We visualize the 1K trajectory templates that we ``shoot'' onto our cost map during training and testing. During training, the cost of each template trajectory is computed and interpreted as a 1K-dimensional Boltzmann distribution over the templates. During testing, we choose the argmax of this distribution and act according to the chosen template.}
\label{fig:trajs}
\end{minipage}
\end{figure}
\section{Implementation}
\label{sec:impl}
\subsection{Architecture Details}
The neural architecture of our model is similar to OFT \cite{oft}. As in OFT, our model has two large network backbones. One of the backbones operates on each image individually in order to featurize the point cloud generated from each image. The other backbone operates on the point cloud once it is splatted into pillars in the reference frame. The two networks are joined by our lift-splat layer as defined in Section~\ref{sec:method} and visualized in Figure~\ref{fig:splat}.
For the network that operates on each image in isolation, we leverage layers from an EfficientNet-B0 \cite{efficientnet} pretrained on Imagenet \cite{imagenet} in all experiments for all models including baselines. EfficientNets are network architectures found by exhaustive architecture search in a resource limited regime with depth, width, and resolution scaled up proportionally. We find that they enable higher performance relative to ResNet-18/34/50 \cite{resnet} across all models with a minor inconvenience of requiring more optimization steps to converge.
For our bird's-eye-view network, we use a combination of ResNet blocks similar to PointPillars \cite{pointpillars}. Specifically, after a convolution with kernel 7 and stride 2 followed by batchnorm \cite{batchnorm} and ReLU \cite{relu}, we pass through the first 3 meta-layers of ResNet-18 to get 3 bird's-eye-view representations at different resolutions $x_1, x_2, x_3$. We then upsample $x_3$ by a scale factor of 4, concatenate with $x_1$, apply a resnet block, and finally upsample by 2 to return to the resolution of the original input bird's-eye-view pseudo image. We count 14.3M trainable parameters in our final network.
There are several hyper-parameters that determine the ``resolution'' of our model. First, there is the size of the input images $H \times W$. In all experiments below, we resize and crop input images to size $128 \times 352$ and adjust extrinsics and intrinsics accordingly. Another important hyperparameter of the network is the resolution of the bird's-eye-view grid $X \times Y$. In our experiments, we set bins in both $x$ and $y$ from -50 meters to 50 meters with cells of size 0.5 meters $\times$ 0.5 meters. The resultant grid is therefore $200 \times 200$. Finally, there is the choice of $D$ that determines the resolution of depth predicted by the network. We restrict $D$ between 4.0 meters and 45.0 meters spaced by 1.0 meters. With these hyper-parameters and architectural design choices, the forward pass of the model runs at 35 Hz on a Titan V GPU.
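A small sanity check of this bookkeeping (the depth-bin endpoint handling is an assumption on our part):

```python
import numpy as np

xbound = ybound = (-50.0, 50.0, 0.5)    # (min, max, cell size) in meters
nx = int((xbound[1] - xbound[0]) / xbound[2])
ny = int((ybound[1] - ybound[0]) / ybound[2])
assert (nx, ny) == (200, 200)           # the 200 x 200 BEV grid

# depth bins from 4.0 m to 45.0 m spaced by 1.0 m (endpoint handling assumed)
depths = np.arange(4.0, 45.0, 1.0)
assert depths[0] == 4.0 and depths[-1] == 44.0
```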
\subsection{Frustum Pooling Cumulative Sum Trick}
\label{subsec:customback}
Training efficiency is critical for learning from data from an entire sensor rig. We choose sum pooling across pillars in Section~\ref{sec:method} as opposed to max pooling because our ``cumulative sum'' trick saves us from excessive memory usage due to padding. The ``cumulative sum trick'' is the observation that sum pooling can be performed by sorting all points according to bin id, performing a cumulative sum over all features, then subtracting the cumulative sum values at the boundaries of the bin sections. Instead of relying on autograd to backprop through all three steps, the analytic gradient for the module as a whole can be derived, speeding up training by 2x. We call the layer ``Frustum Pooling'' because it handles converting the frustums produced by $n$ images into a fixed dimensional $C\times H \times W$ tensor independent of the number of cameras $n$. Code can be found on our \href{https://nv-tlabs.github.io/lift-splat-shoot/}{project page}.
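The three steps of the trick can be sketched in a few lines (illustrative sizes; the real layer also supplies the analytic gradient, which is omitted here): sort points by pillar id, take a cumulative sum of the features, keep the last row of each pillar section, and subtract at the section boundaries to recover per-pillar sums without any padding.

```python
import numpy as np

rng = np.random.default_rng(2)
N, C, P = 1000, 4, 7                 # points, feature dim, number of pillars (assumed)
feats = rng.normal(size=(N, C))
bins = rng.integers(0, P, size=N)    # pillar id of each point

order = np.argsort(bins, kind="stable")
feats_s, bins_s = feats[order], bins[order]
csum = np.cumsum(feats_s, axis=0)
# indices of the last point in each occupied pillar section
last = np.nonzero(np.r_[bins_s[1:] != bins_s[:-1], True])[0]
pooled = csum[last]
pooled[1:] = pooled[1:] - pooled[:-1]   # subtract at the boundaries

# reference: naive per-pillar sums
naive = np.stack([feats[bins == b].sum(axis=0) for b in np.unique(bins)])
assert np.allclose(pooled, naive)
```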
\section{Related Work}
\label{sec:related}
Our approach for learning cohesive representations from image data from multiple cameras builds on recent work in sensor fusion and monocular object detection. Large scale multi-modal datasets from Nutonomy \cite{nuscenes}, Lyft \cite{lyftlevel5}, Waymo \cite{waymo}, and Argo \cite{argo}, have recently made full representation learning of the entire $360^\circ$ scene local to the ego vehicle conditioned exclusively on camera input a possibility. We explore that possibility with our Lift-Splat architecture.
\subsection{Monocular Object Detection}
Monocular object detectors are defined by how they model the transformation from the image plane to a given 3-dimensional reference frame. A standard technique is to apply a mature 2D object detector in the image plane and then train a second network to regress 2D boxes into 3D boxes \cite{ssd16,fastsingle,mair,monogr}. The current state-of-the-art 3D object detector on the nuScenes benchmark \cite{mair} uses an architecture that trains a standard 2D detector to also predict depth using a loss that seeks to disentangle error due to incorrect depth from error due to incorrect bounding boxes. These approaches achieve great performance on 3D object detection benchmarks because detection in the image plane factors out the fundamental cloud of ambiguity that shrouds monocular depth prediction.
An approach with recent empirical success is to train one network to do monocular depth prediction and a second network to do bird's-eye-view detection separately \cite{pseudolidar,pseudolidarpp}. These approaches go by the name of ``pseudolidar''. The intuitive reason for the empirical success of pseudolidar is that pseudolidar enables training a bird's-eye-view network that operates in the coordinate frame where the detections are ultimately evaluated and where, relative to the image plane, euclidean distance is more meaningful.
A third category of monocular object detectors uses 3-dimensional object primitives that acquire features based on their projection onto all available cameras. Mono3D \cite{mono3d} achieved state-of-the-art monocular object detection on KITTI by generating 3-dimensional proposals on a ground plane that are scored by projecting onto available images. Orthographic Feature Transform \cite{oft} builds on Mono3D by projecting a fixed cube of voxels onto images to collect features and then training a second ``BEV'' CNN to detect in 3D conditioned on the features in the voxels. A potential performance bottleneck of these models that our model addresses is that a pixel contributes the same feature to every voxel independent of the depth of the object at that pixel.
\subsection{Inference in the Bird's-Eye-View Frame}
Models that use extrinsics and intrinsics in order to perform inference directly in the bird's-eye-view frame have received a large amount of interest recently. MonoLayout~\cite{krishna} performs bird's-eye-view inference from a single image and uses an adversarial loss to encourage the model to inpaint plausible hidden objects. In concurrent work, Pyramid Occupancy Networks~\cite{roddick} proposes a transformer architecture that converts image representations into bird's-eye-view representations. FISHING Net \cite{zoox}, also concurrent work, proposes a multi-view architecture that both segments objects in the current timestep and performs future prediction. We show that our model outperforms prior work empirically in Section~\ref{sec:experiments}. These architectures, as well as ours, use data structures similar to ``multi-plane'' images from the machine learning graphics community~\cite{splatnet,lighthouse,tucker2020singleview,neuralvolume}.
A new technique called `polarization' has recently been introduced in \cite{ari} to develop efficient channel coding schemes. The codes resulting from this technique, called polar codes, have several nice attributes: (1) they are linear codes generated by a low-complexity deterministic matrix; (2) they can be analyzed mathematically, and bounds on the error probability (exponential in the square root of the block length) can be {\it proved}; (3) they have a low encoding and decoding complexity; (4) they achieve the Shannon capacity on any discrete memoryless channel (DMC). These codes are indeed the first codes with low decoding complexity that are provably capacity-achieving on any DMC.
The key result in the development of polar codes is the so-called `polarization phenomenon', initially shown in the channel setting in \cite{ari}. The same phenomenon admits a source setting formulation, as follows.
\begin{thm}[\cite{ari,ari3}]\label{thmari}
Let $X=[X_1,\dots,X_n]$ be i.i.d. Bernoulli($p$), $n$ be a power of 2, and $Y=X G_n$, where $G_n= \bigl[\begin{smallmatrix}
1 & 0 \\
1 & 1 \\
\end{smallmatrix}\bigr]^{\otimes \log_2(n)}$. Then, for any $\varepsilon \in (0,1)$,
\begin{align}
&\frac{1}{n} |\{j \in [n]: H(Y_j | Y^{j-1}) \geq 1-\varepsilon \}| \stackrel{n \to \infty}{\longrightarrow} H(p), \label{polar}
\end{align}
where $H(p)$ is the entropy of a Bernoulli($p$) distribution.
\end{thm}
Note that \eqref{polar} implies that the proportion of components $j$ for which $H(Y_j | Y^{j-1}) \in (\varepsilon,1-\varepsilon)$ tends to 0. Hence most of the randomness has been extracted into about $nH(p)$ components having conditional entropy close to 1 and indexed by
\begin{align}
R_\varepsilon(p)=\{ j \in [n] : H(Y_j |Y^{j-1}) \geq 1 -\varepsilon \} \label{defr}
\end{align}
and, besides $o(n)$ fluctuating components, the remaining components, about $n(1-H(p))$ of them, have conditional entropy below $\varepsilon$.
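The polarization statement can already be observed numerically at very small block lengths. The sketch below (illustrative, exact enumeration) computes the distribution of $Y = X G_n$ for $n=8$ and checks the chain rule $\sum_j H(Y_j \mid Y^{j-1}) = nH(p)$, while the individual conditional entropies already spread away from $H(p)$.

```python
import itertools, math
import numpy as np

def h2(p):
    """Binary entropy H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 8, 0.3
F = np.array([[1, 0], [1, 1]])
G = np.array([[1]])
while G.shape[0] < n:
    G = np.kron(G, F)               # G_n = F^{kron log2(n)}

# exact joint distribution of Y = X G_n (mod 2) for X i.i.d. Bernoulli(p)
py = {}
for x in itertools.product((0, 1), repeat=n):
    y = tuple(np.dot(x, G) % 2)
    prob = math.prod(p if b else 1 - p for b in x)
    py[y] = py.get(y, 0.0) + prob

def H(prefix_len):
    """Entropy of the first prefix_len components of Y."""
    marg = {}
    for y, pr in py.items():
        marg[y[:prefix_len]] = marg.get(y[:prefix_len], 0.0) + pr
    return -sum(pr * math.log2(pr) for pr in marg.values() if pr > 0)

cond = [H(j + 1) - H(j) for j in range(n)]      # H(Y_j | Y^{j-1})
# chain rule + invertibility of G_n: the conditional entropies sum to n*H(p)
assert abs(sum(cond) - n * h2(p)) < 1e-9
# polarization already visible: the terms spread around H(p)
assert max(cond) > h2(p) > min(cond)
```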
This theorem is extended in \cite{ari3} to $X=[X_1,\dots,X_n]$ being i.i.d.\ from an arbitrary distribution $\mu$ on $\F_q$, where $q$ is a prime, replacing $H(p)$ by $H(\mu)$ (and using the logarithm in base $q$). It is however mentioned that the theorem may fail when $q$ is not a prime but a power of a prime, with a counter-example provided for $q=4$. In Section \ref{galois} of this paper, we show a generalized version of the polarization phenomenon, i.e., of Theorem \ref{thmari}, for powers of primes (we show it explicitly for powers of 2, but the same holds for powers of arbitrary primes).
Also, the formulation of Theorem \ref{thmari} is slightly more general in \cite{ari3}, it includes an auxiliary random variable $Y$ (side-information), which is a random variable correlated with $X$ but not intended to be compressed, and which is introduced in the conditioning of each entropy term. Although this formulation is mathematically close to Theorem \ref{thmari}, it is more suitable for an application to the Slepian-Wolf coding problem (distributed data compression), by reducing the problem to single-user source coding problems. A direct approach for this problem using polar codes is left open for future work in \cite{ari3}; we investigate this here in Section \ref{sw}. Finally, we also generalize Theorem \ref{thmari} to a setting allowing dependencies within the source (non i.i.d.\ setting.)
This paper provides a unified treatment of the three problems mentioned above, namely, the compression of multiple correlated sources, non i.i.d.\ sources and non binary sources. The main result of this paper is Theorem \ref{main}, where a ``matrix polarization'' shows how not only randomness but also dependencies can be extracted using $G_n$.
Some results presented in this paper can be viewed as counter-parts of the results in \cite{mmac} for a source rather than channel setting. Reciprocally, some results presented here in the source setting can be extended to a channel setting (such as channels with memory, or non-prime input alphabets).
Finally, connections with extractors in computer science and the matrix completion problem in machine learning are discussed in Sections \ref{pexts} and \ref{discussion}.
\subsection*{Some notations}
\begin{itemize}
\item $[n]=\{1,2,\dots,n\}$
\item For $x \in \F_2^k$ and $S \subseteq [k]$, $x[S]=[x_i : i \in S]$
\item For $x \in \F_2^k$, $x^{i}=[x_1,\dots,x_{i}]$
\item $\{0,1,\dots,m\} \pm \varepsilon = [-\varepsilon,\varepsilon] \cup [1-\varepsilon,1+\varepsilon] \cup \dots \cup [m-\varepsilon,m+\varepsilon]$
\item $H(X|Y)= \sum_y (\sum_x p_{X|Y}(x|y) \log 1/p_{X|Y}(x|y)) p_Y(y)$
\item For a matrix $A$, the matrix $A^{\otimes k}$ is obtained by taking $k$ Kronecker products of $A$ with itself.
\end{itemize}
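For concreteness, $G_n$ can be built by iterated Kronecker products, and over $\F_2$ it is an involution, i.e., $G_n^{-1} = G_n$ (a sketch with integer matrices reduced mod 2):

```python
import numpy as np

def polar_transform_matrix(n):
    """Build G_n = [[1,0],[1,1]]^{kron log2(n)}; entries in {0,1}, arithmetic over F_2."""
    assert n >= 1 and n & (n - 1) == 0, "n must be a power of 2"
    F = np.array([[1, 0], [1, 1]], dtype=np.int64)
    G = np.array([[1]], dtype=np.int64)
    while G.shape[0] < n:
        G = np.kron(G, F)
    return G

G8 = polar_transform_matrix(8)
# G_n is its own inverse over F_2
assert ((G8 @ G8) % 2 == np.eye(8, dtype=np.int64)).all()
```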
\section{Results}\label{res}
\begin{definition}
A random variable $Z$ over $\F_2^k$ is $\varepsilon$-uniform if $H(Z) \geq k(1- \varepsilon)$, and it is $\varepsilon$-deterministic if $H(Z) \leq \varepsilon k$. We also say that $Z$ is $\varepsilon$-deterministic given $W$ if $H(Z|W) \leq \varepsilon k$.
\end{definition}
\begin{thm}\label{main}
(1) Let $n$ be a power of 2 and $X$ be an $m \times n$ random matrix with i.i.d.\ columns of arbitrary distribution $\mu$ on $\F_2^m$.
Let $Y=X G_n$ where $G_n=\bigl[\begin{smallmatrix}
1 & 0 \\
1 & 1 \\
\end{smallmatrix}\bigr]^{\otimes \log_2(n)}$. Then, for any $\varepsilon>0$, there exist two disjoint subsets of indices $R_\varepsilon, D_\varepsilon \subseteq [m] \times [n]$ with $| [m] \times [n] \setminus (R_\varepsilon \cup D_\varepsilon)|=o(n)$ such that the subset of entries $Y[R_{\varepsilon}]$ is $\varepsilon$-uniform and $Y[D_{\varepsilon}]$ is $\varepsilon$-deterministic given $Y[D_{\varepsilon}^c]$. (Hence $|R_\varepsilon| \stackrel{\cdot}{=} nH(\mu)$, $|D_\varepsilon| \stackrel{\cdot}{=} n(m-H(\mu))$.)
(2) Moreover, the computation of $Y$ as well as the reconstruction of $X$ from the non-deterministic entries of $Y$ can be done in $O(n \log n)$,
with an error probability of $O(2^{-n^\beta})$, $\beta < 1/2$, using the algorithm \texttt{polar-matrix-dec}.
\end{thm}
\noindent
{\bf Remarks.}
\begin{itemize}
\item The multiplication $X G_n$ is over $\F_2$
\item The sets $R_\varepsilon, D_\varepsilon$ depend on the distribution $\mu$ (and on the dimensions $m$ and $n$), but not on the realization of $Y$. These sets can be accurately computed in linear time (cf.\ Section \ref{discussion}).
\item To achieve an error probability of $O(2^{-n^\beta})$, one picks $\varepsilon=\varepsilon_n = 2^{-n^\alpha}$, for $\alpha < 1/2$.
\end{itemize}
The following lemma provides a characterization of the dependencies in the columns of $Y$, it is proved in Section \ref{proofs1}.
Recall that $Y_j$ denotes the $j$-th column of $Y$, $Y_j(i)$ the $(i,j)$-entry of $Y$, $Y_j[S]=[Y_j(i): i \in S]$ and $Y^{j}=[Y_1,\dots,Y_j]$.
\begin{lemma}\label{mainlemma}
For any $\varepsilon>0$, we have,
\begin{align*}
\frac{1}{n} |\{j \in [n]: H(Y_j[S]|Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon, \forall S \subseteq [m] \}| \\ \to 1
\end{align*}
\end{lemma}
This lemma implies the first part of Theorem \ref{main}, as shown in next section. The second part of the theorem is proved in Section \ref{proofs2}, together with the following result, which further characterizes the dependency structure of $Y$.
\begin{lemma}\label{null}
For any $\varepsilon>0$ and $j \in [n]$, let $A_j$ denote the binary matrix of maximal rank such that
\begin{align*}
H(A_j Y_j |Y^{j-1}) \leq \varepsilon.
\end{align*}
Note that $A_j$ can have zero rank, i.e., $A_j$ can be a matrix filled with zeros.
We then have,
\begin{align*}
\frac{1}{n} \sum_{j=1}^n \mathrm{nullity}(A_j) \to H(\mu).
\end{align*}
Moreover, the result still holds when $\varepsilon=\varepsilon_n = 2^{-n^{\alpha}}$, for $\alpha < 1/2$.
\end{lemma}
Note that, if $H(A_j Y_j |Y^{j-1}) \leq \varepsilon$, $A_j Y_j$ is $\varepsilon$-deterministic given $Y^{j-1}$, and if $A_j$ has rank $r_j$, by freezing $k_j=m-r_j$ components of $Y_j$ appropriately, say on $B_j$, we have that $A_j Y_j$ can be reduced to a full rank matrix multiplication $\tilde{A}_j Y_j[B_j^c]$, and hence $Y_j[B_j^c]$ is $\varepsilon$-deterministic given $Y^{j-1}$ and $Y_j[B_j]$. Hence the number of bits to freeze is exactly $\sum_j k_j$, and as stated in the lemma, this corresponds to the total entropy of $Y$ (up to an $o(n)$ term).
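The rank and nullity computations over $\F_2$ appearing in Lemma \ref{null} can be carried out by Gaussian elimination; a minimal sketch:

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    A = np.array(A, dtype=np.int64) % 2
    rank, rows, cols = 0, *A.shape
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]                 # third row = sum of the first two over F_2
assert gf2_rank(A) == 2
assert 3 - gf2_rank(A) == 1     # nullity(A) = m - rank(A)
```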
\subsection{Proof of Theorem \ref{main} (part 1) and how to set $R_\varepsilon$ and $D_\varepsilon$}\label{proof1}
Let $\varepsilon>0$ and let $E_n=E_n(\varepsilon)$ be the set of indices $j \in [n]$ for which
$H(Y_j[S]|Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon$, for any $S \subseteq [m].$
From Lemma \ref{mainlemma}, $n-|E_n|=o(n)$.
Note that for $j \in E_n$, there exists a minimal set (not necessarily unique) $T_j$ such that
\begin{align}
& H(Y_j[T_j]|Y^{j-1}) \geq H(Y_j |Y^{j-1}) - \varepsilon \label{max}
\end{align}
which also implies
\begin{align}
& H(Y_j[T_j]|Y^{j-1}) \geq |T_j| - \varepsilon \label{max2},
\end{align}
and, by the chain rule and defining $S_j:=T_j^c$,
\begin{align}
H(Y_j[S_j]|Y^{j-1} Y_j[S_j^c]) \leq \varepsilon. \label{corr}
\end{align}
(Note that if $H(Y_j |Y^{j-1}) \leq \varepsilon$, we define $T_j=\emptyset$ so that $S_j=[m]$.)
We then have
\begin{align*}
&H(\cup_{j \in E_n}Y_j[S_j]| (\cup_{j \in E_n}Y_j[S_j])^c) \\
& \leq \sum_{j \in E_n} H(Y_j[S_j]| Y^{j-1}Y_j[S_j^c] ) \leq \varepsilon n
\end{align*}
and $\cup_{j \in E_n}Y_j[S_j]$ is $\varepsilon$-deterministic given $(\cup_{j \in E_n}Y_j[S_j])^c$, so that $D_{\varepsilon}=\cup_{j \in E_n} S_j$.
Moreover, we have
\begin{align}
H(Y)&\geq H(\cup_{j \in E_n}Y_j[T_j])
\geq \sum_{j \in E_n} H(Y_j[T_j]|Y^{j-1}) \notag \\
& \geq \sum_{j \in E_n} H(Y_j |Y^{j-1}) -\varepsilon n \notag \\
& \geq \sum_{j=1}^n H(Y_j |Y^{j-1}) -\varepsilon n - o(n)\notag \\
& = H(Y) -\varepsilon n - o(n), \label{same}
\end{align}
where the third inequality uses \eqref{max}, and from \eqref{max2},
\begin{align*}
\sum_{j \in E_n} |T_j| \geq H(\cup_{j \in E_n}Y_j[T_j])& \geq \sum_{j \in E_n} |T_j| -\varepsilon n.
\end{align*}
Since $H(Y) = H(X) = n H(\mu)$,
we have
$$n H(\mu) +\varepsilon n \geq \sum_{j \in E_n} |T_j| \geq n H(\mu) -\varepsilon n -o(n)$$
and $\cup_{j \in E_n}Y_j[T_j]$ is $\frac{\varepsilon }{H(\mu)-2 \varepsilon}$-uniform, so that $R_{\varepsilon/(H(\mu)-2\varepsilon)}=\cup_{j \in E_n} T_j$.
\subsection{Decoding algorithm}
\begin{definition}\texttt{polar-matrix-dec}\\
Inputs: $D^c \subseteq [m] \times [n]$, $y[D^c] \in \F_2^{|D^c|}$.\\
Output: $y \in \F_2^{m n}$.\\
Algorithm:\\
0. Let $M=D$;\\
1. Find the smallest $j$ such that $S_j=\{i : (i,j) \in M\}$ is not empty; compute $$\hat{y}_j[S_j]=\arg\max_{u \in \F_2^{|S_j|}} \mathbb{P}\{ Y_j[S_j]= u \mid Y^{j-1}=y^{j-1}, Y_j[S_j^c]=y_j[S_j^c] \};$$
2. Update $M=M \setminus \{(i,j) : i \in S_j\}$ and set $y_j[S_j]=\hat{y}_j[S_j]$; \\
3. If $M$ is empty output $y$, otherwise go back to 1.
\end{definition}
Note that, using \eqref{max} for the definition of $S_j$ (and the corresponding $D_\varepsilon$), the realizations of $Y^{j-1}$ and $Y_j[S_j^c]$ are known, and with high probability one guesses $Y_j[S_j]$ correctly in step 1, because of \eqref{corr}.
Moreover, due to the Kronecker structure of $G_n$, and similarly to \cite{ari}, step 1.\ and the entire algorithm require only $O(n \log n)$ computations.
Finally, from the proof of Theorem \ref{main} part (2), it results that step 1.\ can also be performed slightly differently, by finding sequentially the entries $Y_j(i)$ for $i \in S_j$, reducing an optimization over all possible $u \in \F_2^{|S_j|}$, where $|S_j|$ can be as large as $m$, to only $m$ optimizations over $\F_2$ (which may be useful for large $m$).
\section{Three Applications}
We present now three direct applications of Theorem \ref{main}:
\begin{itemize}
\item Distributed data compression, i.e., Slepian-Wolf coding
\item Compression of sources on arbitrary finite fields
\item Compression of non i.i.d.\ sources
\end{itemize}
\subsection{Source polarization for correlated sources: Slepian-Wolf coding}\label{sw}
In \cite{ari3}, the two-user Slepian-Wolf coding problem is approached via polar codes by reducing the problem
to single-user source coding problems. A direct approach is left open for future work; we investigate this here, for arbitrarily many users.
Consider $m$ binary sources which are correlated with an arbitrary distribution $\mu$. We are interested in compressing
an i.i.d.\ output of these sources. That is, let $X_1,\dots,X_n$ be i.i.d.\ under $\mu$ on $\F_2^m$, i.e., $X_i$ is an $m$-dimensional binary random vector and, for example, $X_1[i],\dots,X_n[i]$ is the source output of user $i$.
If we are encoding these sources together, a rate $H(\mu)$ is sufficient (and it is the lowest achievable rate).
In \cite{slepian}, Slepian and Wolf showed that, even if the encoders are not able to cooperate, lossless compression can still be achieved at rate $H(\mu)$.
We now present how to use Theorem \ref{main} to achieve this rate with a polar coding scheme.
{\it Polar codes for distributed data compression:}\\
1. For a given $n$ and $\varepsilon$ (which sets the error probability),
since each user knows the joint distribution $\mu$, each user can compute the ``chart'' of the deterministic indices, i.e., the set $D_\varepsilon \subset [m] \times [n]$ and identify its own chart $D_\varepsilon(i,\cdot)$. \\
2. Each user computes $Y(i,\cdot)=X(i,\cdot) G_n$ and stores $Y(i,\cdot)[D_\varepsilon(i,\cdot)^c]$, so that the joint decoder is in possession of $Y[D_\varepsilon^c]$,
and can run \texttt{polar-matrix-dec} with $Y[D_\varepsilon^c]$ as input to get $Y$, with an error probability at most $\varepsilon n$. Since $G_n$ is invertible, indeed $G_n^{-1}=G_n$, one can then find $X=Y G_n$.
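The separability of the encoding, and the lossless inversion, can be checked directly (a sketch with random data and illustrative sizes): each user applies $G_n$ to its own row in isolation, the stacked result equals the joint transform $X G_n$, and $X$ is recovered as $Y G_n$ since $G_n^{-1}=G_n$ over $\F_2$.

```python
import numpy as np

F = np.array([[1, 0], [1, 1]])
G = np.array([[1]])
n, m = 8, 3                              # block length and number of users (illustrative)
while G.shape[0] < n:
    G = np.kron(G, F)                    # G_n

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(m, n))      # row i = output of user i

Y_joint = X @ G % 2                      # joint encoding
Y_rows = np.stack([X[i] @ G % 2 for i in range(m)])   # each user encodes alone
assert (Y_rows == Y_joint).all()         # no cooperation is needed to compute Y

assert (Y_joint @ G % 2 == X).all()      # G_n^{-1} = G_n over F_2: X = Y G_n
```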
From Theorem \ref{main}, we have the following result.
\begin{corol}[Distributed polar compression]\label{swc}
For $m$ correlated sources of joint distribution $\mu$, the previously described scheme allows one to perform lossless and distributed compression of the sources at sum-rate $H(\mu)$, with an error probability of $O(2^{-n^{\sqrt{\beta}}})$, $\beta < 1/2$, and an encoding and decoding complexity of $O(n \log n)$.
\end{corol}
Note that this result allows to achieve the sum-rate of the Slepian-Wolf region, i.e., a rate belonging to the dominant face of the Slepian-Wolf achievable rate region, it does not say that any rate in that region can be reached with the proposed scheme.
\subsection{Polarization for arbitrary finite fields}\label{galois}
In \cite{ari3}, the source polarization result is stated for sources that are i.i.d.\ and $q$-ary, where $q$ is prime.
It is also mentioned that if $q$ is not prime, the theorem may fail. In particular, an example for $q=4$ is provided where
the conclusion of Theorem \ref{thmari} does not hold. It is also mentioned that if additional randomness is introduced in the construction of the polar transformation (leading no longer to a deterministic matrix $G_n$), the result holds for arbitrary powers of primes.
We show here that a generalized polarization phenomenon still holds for arbitrary powers of primes (we formally show it for powers of 2 only but any prime would work) even for the deterministic polar transform $G_n$.
\begin{corol}[Polarization for finite fields]\label{galoisc}
Let $X=[X_1,\dots,X_n]$ be i.i.d.\ under $\mu$ on $\F_q$ where $q=2^m$, and let $Y=X G_n$ (computed over $\F_q$).
Then, although $Y$ may not polarize over $\F_{2^m}$, it polarizes over $\F_2^m$ in the sense of Theorem \ref{main}; more precisely:
Denote by $V$ an $\F_2^m$ representation of $\F_{2^m}$, by $\widetilde{\mu}$ the distribution on $\F_2^m$ induced by $\mu$ on $\F_{2^m}$, and set $\widetilde{Y}:=V(Y)$ (organized as an $m \times n$ matrix).
Then the conclusions of Theorem \ref{main} hold for $\widetilde{Y}$.
\end{corol}
Note: this result still holds when $q$ is a power of any prime, by combining it with the result in \cite{ari3} for prime alphabets. The case where $q=2^m$ is particularly interesting for complexity considerations (cf.\ Section \ref{discussion}).
{\it Interpretation of Corollary \ref{galoisc}:} When $q$ is a prime, $H(Y_j | Y^{j-1}) \in \{0,\log q\} \pm \varepsilon$, which means that $Y_j$ is either roughly uniform and independent of the past or roughly a deterministic function of the past. However, for $q$ being a power of 2 (or a power of a prime), we only get that $H(Y_j | Y^{j-1}) \in \{0,1,\dots,\log q\} \pm \varepsilon$, and the previous conclusion cannot be drawn, stressing indeed a different polarization phenomenon. However, Corollary \ref{galoisc} says that if we work with the vector representation of the elements in $\F_{q}$, we still have a `polarization phenomenon' in the sense of Theorem \ref{main}, i.e., for almost all $j \in [n]$, a subset of the components of $\widetilde{Y}_j$ are either roughly uniform and independent or deterministic functions of the past and the complementary components.
{\it Compression of $2^m$-ary i.i.d.\ sources:}
For a given $X=[X_1,\dots,X_n]$, compute $Y=X G_n$ and transform $Y$ into $\widetilde{Y}$ based on the representation of $\F_{2^m}$ by $\F_2^m$. Organize $\widetilde{Y}$ to be an $m \times n$ matrix. Note that one can equivalently map $X$ into $\widetilde{X}$ and then apply $G_n$ to get $\widetilde{Y}$. This is due to the fact that the $\F_{2^m}$ addition corresponds to the pointwise addition in $\F_2^m$. Finally, store $\widetilde{Y}$ on $D_\varepsilon(\widetilde{\mu})^c$, and run \texttt{polar-matrix-dec} to recover $\widetilde{Y}$, hence $Y$ and $X$.
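The commutation between the $\F_{2^m} \to \F_2^m$ mapping and the polar transform rests only on the facts that $G_n$ has $\{0,1\}$ entries and that addition in $\F_{2^m}$ is coefficient-wise XOR of the bit representations; a sketch for $q=4$ (illustrative data):

```python
import numpy as np

m, n = 2, 4                          # GF(4) symbols stored as 2-bit integers 0..3
F = np.array([[1, 0], [1, 1]])
G = np.kron(F, F)                    # G_4

rng = np.random.default_rng(4)
x = rng.integers(0, 2 ** m, size=n)  # a length-4 GF(4) source word (illustrative)

# (a) apply G_n over F_{2^m}: G_n has 0/1 entries, so each output symbol is a
#     field sum of input symbols, and GF(2^m) addition is bitwise XOR
y_field = np.zeros(n, dtype=np.int64)
for j in range(n):
    for i in range(n):
        if G[i, j]:
            y_field[j] ^= x[i]

# (b) map to the F_2^m representation first, then apply G_n per bit-plane over F_2
bits = (x[:, None] >> np.arange(m)) & 1      # (n, m): bit k of each symbol
y_bits = bits.T @ G % 2                      # (m, n): transformed bit-planes
y_back = (y_bits * (1 << np.arange(m))[:, None]).sum(axis=0)

assert (y_back == y_field).all()             # the two routes agree
```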
\subsection{Source polarization for non i.i.d.\ sources}\label{memory}
Let a binary source consist of i.i.d.\ blocks of length $m$, each block having an arbitrary distribution $\mu$. We can then compress the source as follows. From $n$ blocks $X_1,\dots,X_n$ each of length $m$, i.e., $m n$ outputs of the source, create the matrix $X=[X_1^t | \dots | X_n^t ]$ and apply the polar transform to get $Y=X G_n$. Then store the components of $Y$ which belong to $D_{\varepsilon}(\mu)^c$. To reconstruct $X$, reconstruct $Y$ from $Y[D_{\varepsilon}(\mu)^c]$ using \texttt{polar-matrix-dec} and find $X=Y G_n$.
If the source is not exactly block i.i.d.\ but is mixing, i.e., if $\lim_{n \to \infty} \mathbb{P} \{X_n = x | X_0 =x_0\} = \mathbb{P} \{X_n = x\}$, for any $x_0$, we can open windows of length $o(n^2)$ between the blocks and store these $o(n^2)$ inter-block bits without compression, which does not increase the compression rate. We are then left with a source formed by blocks which are `almost' i.i.d.\ and a similar procedure can be used.
From Theorem \ref{main}, we have the following.
\begin{corol}\label{memoryc}
For a binary source consisting of i.i.d.\ blocks of length $m$, each block having distribution $\mu$, the polar coding scheme described previously allows one to compress the source losslessly at rate $H(\mu)$, with an error probability of $O(2^{-n^\beta})$, $\beta < 1/2$, and an encoding and decoding complexity of $O(n \log n)$.
\end{corol}
As discussed previously, a similar result holds for sources that are mixing.
\section{Extractors in computer science}\label{pexts}
We have discussed in this paper a procedure to extract randomness, i.e., uniform bits, from non-uniform bits. The applications we considered are in compression and coding, but there are also numerous applications of randomness extraction problems in computer science. In particular, there is a notion of ``extractor'' in theoretical computer science, which aims at extracting uniform bits from sources under much more general assumptions than the ones considered here.
Phrased in our terminology, an extractor is roughly a map that extracts $m$ bits that are $\varepsilon$-uniform from $n$ bits that have a total entropy at least $k$, with the help of a seed of $d$ uniform bits. For more details and a survey on extractors see for example \cite{trevisan,shalt}.
The notion of $\varepsilon$-uniform, or $\varepsilon$-close to uniform, used in computer science is usually measured by the $l_1$-norm, rather than the entropy as used in this paper. Nevertheless, these two notions can be related and this is a minor distinction. Also, the entropy used in the computer science literature is the min-entropy rather than the Shannon-entropy, which is a stronger assumption, since the Shannon-entropy is an upper bound to the min-entropy.
On the other hand, the source for the extractor definition is only assumed to have min-entropy $k$, and no further assumptions are made on the distribution of $X_1,\dots,X_n$, whereas in our setting, we consider sources that are at least ergodic and with a known distribution.
One should also stress that we did not make use of any seed in our problems\footnote{Note that, as opposed to the compression problem, when only concerned with randomness extraction, the treatment of the deterministic bits and reconstruction algorithm may not matter.}.
In order to establish a more concrete connection between polar coding and formal extractors, we present here a result which takes into account one of the two caveats just mentioned: we only assume that the source has entropy at least $k$, without requiring the exact knowledge of the distribution, but we keep an i.i.d.\ setting. Using Section \ref{memory}, one can generalize this result to a setting where the source is mixing, but even then we do not make use of any seed. In particular, if one could use a seed, ideally of small size, e.g., $O(\log n)$, to turn an arbitrary source of lower-bounded entropy into a mixing source of comparable entropy, one could use the following result to construct real extractors (work in progress).
\begin{definition}
Let $(k,\varepsilon)$-$\mathrm{Pext}: \F_2^n \to \F_2^m$ be the matrix obtained by deleting the columns of $G_n$ that are not in $R_{\varepsilon^2/2n}(p(k))$, where $p(k)$ is one of the two binary distributions having entropy $H(p(k)) = k/n$ (and $R_\varepsilon(\cdot)$ as defined in \eqref{defr}).
\end{definition}
Note that $\mathrm{Pext}$ benefits from the low encoding complexity of $G_n$, namely $O(n \log n)$.
\begin{lemma}\label{pext}
Let $n$ be a power of two and $X=[X_1,\dots,X_n]$ be i.i.d.\ Bernoulli such that $H(X_1^n) \geq k$ (where $H$ denotes the Shannon or min-entropy). For any $\varepsilon \in (0,1)$, $\mathrm{Pext}(X)$ is $\varepsilon$-uniform (in the $l_1$ or entropy sense) and $$m = k + o(n).$$
\end{lemma}
This result is proved in Section \ref{proofext}, and using Section \ref{memory} it can be extended to a setting where the source is mixing. Note that even in a mixing setting, the source entropy is $\Omega(n)$, which is indeed a regime where good extractors are known \cite{zuck}.
\section{Discussion}\label{discussion}
We have treated in this paper three problems, namely, compression of correlated sources, sources with memory and sources on finite fields, with a unified approach using a matrix polarization (Theorem \ref{main}), and we provided polar coding schemes for each of these problems. The advantage of using polar coding schemes is that these schemes have low encoding and decoding complexity, and achieve the optimal performance (Shannon limit) while affording mathematical guarantees on the performance, as described in Corollaries \ref{swc}, \ref{galoisc} and \ref{memoryc}.
One can now also combine these different problems. Namely, for multiple sources that are defined on some finite fields, with some well-behaved correlations between and within themselves, one can, using the interleaving trick and the vector representation described respectively in Sections \ref{memory} and \ref{galois}, organize the source outputs in a matrix form so as to meet the hypotheses of Theorem \ref{main}, and hence obtain a polar compression scheme requiring the minimal compression rate.
One can also translate the results in this paper to a channel setting, such as $m$-user multiple access channels (already treated in \cite{mmac}), channels with memory or channels with non binary fields inputs, by using duality arguments.
Although the results in this paper are expected to hold when $m=o(n)$, one has to be careful with the complexity scaling when $m$ gets large. In that regard, an advantage of using finite fields of cardinality $q=2^m$ rather than modular fields of prime cardinality is that some operations required in the polar decoding algorithm are convolution-like operations over the underlying field; since the FFT algorithm reduces the computational cost of a convolution from $O(q^2)$ to $O(q \log_2 q)$ when $q$ is a power of 2, one can benefit from this fact.
We have assumed in this paper that the sets $D_\varepsilon(\mu)$ and $R_\varepsilon(\mu)$ can be computed, without discussing how. The first reason why we do not stress this aspect here, as in other papers in polar coding, is that these sets do not depend on the realization of the source(s). Namely, if one is able to compute these sets once for several values of interest of $\varepsilon$ and of the dimensions, one can then use the same sets for any output realization. This is fundamentally different from the decoding algorithm, which takes the source realization as an input. Yet, it is still crucial to be able to compute these sets once, for the parameters of interest. In order to do so, there are at least two possible approaches. The first one is via simulations, and is discussed in \cite{ari}: using the Kronecker structure of $G_n$, it is possible to run simulations and get accurate estimates of the conditional entropies $H(Y_j|Y^{j-1})$, in particular (from Section \ref{proof1}) of the sets $D_\varepsilon(\mu)$ and $R_\varepsilon(\mu)$. Another option is to use algorithms to approach the exact values of $H(Y_j|Y^{j-1})$ within a given precision, in linear time; this has been proposed in particular in \cite{vardy}. It would also be interesting to have mathematical characterizations of these sets. At the moment, this is an open problem, even for the simplest settings (single binary i.i.d.\ source, or in the channel setting, the binary erasure channel).
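To make the quantities being tabulated concrete, the following sketch (an illustration only, not the efficient estimators of \cite{ari} or \cite{vardy}) computes the conditional entropies $H(Y_j|Y^{j-1})$ exactly by brute-force enumeration for a toy block length and an i.i.d.\ Bernoulli source; thresholding these values yields the sets $D_\varepsilon(\mu)$ and $R_\varepsilon(\mu)$.

```python
import numpy as np
from itertools import product

def cond_entropies(p, n):
    """Exact H(Y_j | Y^{j-1}) for Y = X G_n with X_i i.i.d. Bernoulli(p),
    by full enumeration of the 2^n inputs (feasible for small n only)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    while G.shape[0] < n:
        G = np.kron(G, F) % 2
    # Joint distribution of Y.
    py = {}
    for x in product((0, 1), repeat=n):
        y = tuple((np.array(x, dtype=np.uint8) @ G) % 2)
        px = np.prod([p if b else 1.0 - p for b in x])
        py[y] = py.get(y, 0.0) + px
    def H_prefix(k):  # entropy of the prefix Y^k = (Y_1, ..., Y_k)
        m = {}
        for y, q in py.items():
            m[y[:k]] = m.get(y[:k], 0.0) + q
        return -sum(q * np.log2(q) for q in m.values() if q > 0)
    # Chain rule: H(Y_j | Y^{j-1}) = H(Y^j) - H(Y^{j-1}).
    return [H_prefix(j + 1) - H_prefix(j) for j in range(n)]
```

Even at $n=8$ the conditional entropies visibly spread away from the source entropy $h(p)$, while their sum stays equal to $n\,h(p)$ (the rate conservation noted after Lemma \ref{corresp}).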
Finally, this work could also apply to the matrix completion setting. For example, if $X$ is an $m \times n$ matrix where column $X_j$ contains the ratings of $m$ movies by user $j$, we can use Theorem \ref{main} to show that by applying the matrix\footnote{the matrix obtained by deleting the columns of $G_n$ that are not in $D_\varepsilon$} $G_n \times I_{(D_\varepsilon)^c}$ to $X$, we are left with fewer entries (the more correlations between the movie ratings, the fewer entries) that still allow one to recover the initial matrix. Hence, if we are given only a smaller set of appropriate entries (such sets can be characterized using Section \ref{proof1}), we can reconstruct the initial matrix using \texttt{polar-matrix-dec}.
\section{Proofs}\label{proofs}
\subsection{Proof of Lemma \ref{mainlemma}}\label{proofs1}
In order to prove Lemma \ref{mainlemma}, we need the following definition and lemmas.
\begin{definition}
For a random vector $V$ distributed over $\F_2^m$, define $V^{-}=V+V'$ and $V^{+}=V'$, where $V'$ is an i.i.d.\ copy of $V$.
Let $\{b_i\}_{i \geq 1}$ be i.i.d.\ binary random variables in $\{-,+\}$ with uniform probability distribution, and let
\begin{align*}
& \eta_k[S]=H(V^{b_1 \dots b_k}[S]| V^{c_1 \dots c_k}, \forall (c_1 \dots c_k) < (b_1 \dots b_k))
\end{align*}
for $S \subseteq [m]$, where the order between $(-,+)$-sequences is the lexicographic order (with $- < +$).
\end{definition}
Note that $$\{ V^{b_1 \dots b_k} : (b_1 \dots b_k) \in \{-,+\}^k \} \stackrel{(d)}{=} X G_{2^k}$$ where $X$ is the matrix whose columns are i.i.d.\ copies of $V$. The following lemma justifies the definition of the previous random processes.
\begin{lemma}\label{corresp}
Using $V \sim \mu$ in the definition of $\eta_k[S]$, we have for any $n$ and any set $D \subseteq [0,|S|]$
$$\frac{1}{n} \left|\{j \in [n] : H(Y_j[S]| Y^{j-1} ) \in D \}\right| = \mathbb{P}\{ \eta_{\log_2 (n)}[S] \in D \}.$$
\end{lemma}
The proof is a direct consequence of the fact that the $b_k$'s are i.i.d.\ uniform.
Using the invertibility of $\bigl[\begin{smallmatrix}
1 & 0 \\
1 & 1 \\
\end{smallmatrix}\bigr]$ and properties of the conditional entropy, we have the following.
\begin{lemma}
$\eta_k[S]$ is a super-martingale with respect to $b_k$ for any $S \subseteq [m]$ and a martingale for $S=[m]$.
\end{lemma}
\begin{proof}
For $n=2$, we have
\begin{align}
2 H(X_1 [S] ) &= H(X_1 [S] X_2[S] ) \notag \\
&= H(Y_1 [S] Y_2[S] ) \notag \\
&= H(Y_1 [S] ) + H(Y_2 [S] | Y_1[S] ) \notag \\
& \geq H(Y_1 [S] ) + H(Y_2 [S] | Y_1 ) \label{last}
\end{align}
with equality in \eqref{last} if $S=[m]$.
For $n\geq 2$, the same expansion holds including in the conditioning the appropriate ``past'' random variables.
\end{proof}
Note that because $\eta_k[S]$ is a martingale for $S=[m]$, the sum-rate $H(\mu)$ is conserved through the polarization process.
Now, using the previous lemma and the fact that $\eta_k[S] \in [0,|S|]$ for any $S$, the martingale convergence theorem implies the following.
\begin{corol}\label{conv}
For any $S \subseteq [m]$, $\eta_k[S]$ converges almost surely.
\end{corol}
The following lemma characterizes the possible values of the process $\eta_k[S]$ when it converges.
\begin{lemma}\label{invar}
For any $\varepsilon>0$, $X$ valued in $\F_2^{m}$, $Z$ arbitrary, $(X',Z')$ an i.i.d.\ copy of $(X,Z)$, $S \subseteq [m]$, there exists $\delta=\delta(\varepsilon)$ such that
$H( X'[S] | Z') - H(X' [S]| Z, Z',X[S]+X'[S]) \leq \delta$ implies $H(X'[S] | Z')-H(X'[S \setminus i] | Z') \in \{0,1 \} \pm \varepsilon$ for any $i \in S$.
\end{lemma}
\begin{proof}
We have
\begin{align}
&H( X'[S] | Z') - H(X' [S]| Z, Z',X[S]+X'[S]) \notag \\
&=I(X'[S]; X[S]+X'[S]| Z ,Z') \notag \\
& \geq I(X'[S]; X[i]+X'[i]| Z, Z') \notag \\
& \geq I(X'[i]; X[i]+X'[i]| Z ,Z', X'[S \setminus i]) \notag \\
& = H(X'[i]| Z', X'[S \setminus i]) - H(X[i]+X'[i]| Z ,Z', X'[S \setminus i]) . \label{squiz}
\end{align}
It is shown in \cite{ari} that if $A_1,A_2$ are binary random variables and $B_1,B_2$ are arbitrary such that
$\mathbb{P}_{A_1 A_2 B_1 B_2} (a_1,a_2,b_1,b_2) = \frac{1}{4} Q(b_1|a_1+ a_2) Q(b_2|a_2)$, for some conditional probability $Q$,
then, for any $a>0$, there exists $b >0$ such that $H(A_2|B_2) - H(A_2|B_1 B_2 A_1) \leq b$ implies $H(A_2|B_2) \in\{0,1\} \pm a$.
Using this result, we can pick $\delta$ small enough to lower bound \eqref{squiz} and show that $H(X'[i]| Z', X'[S \setminus i]) \in \{0,1\} \pm \varepsilon$.
From the chain rule, $H(X'[S] | Z') - H(X'[S \setminus i] | Z') = H(X'[i]| Z', X'[S \setminus i]) \in \{0,1\} \pm \varepsilon$, which concludes the proof.
\end{proof}
We then get the following using Corollary \ref{conv} and Lemma \ref{invar}.
\begin{corol}\label{integer}
With probability one, $\lim_{k \to \infty} \eta_k[S] \in \{0,1,\dots,|S|\}$.
\end{corol}
Finally, Lemma \ref{corresp} and Corollary \ref{integer} imply Lemma \ref{mainlemma}.
\subsection{Proof of Lemma \ref{null} and Theorem \ref{main} part (2)}\label{proofs2}
In order to prove Theorem \ref{main} part (2), we essentially need to show that part (1) still holds when $\varepsilon$ scales like $\varepsilon_n=2^{-n^{\alpha}}$ for $\alpha <1/2$, as in \cite{ari2}. We did not find a direct way to show that when $\eta_k[S]$ converges to $|S|$, it must do so that fast (the super-martingale characterization is too weak to apply the results of \cite{ari2} directly). This is why we looked into Lemma \ref{null}. By developing a correspondence between previous results and analogous results dealing with linear forms of the $X[S]$'s, we are able to use the convergence-speed results shown for the single-user setting and conclude. This approach was developed in \cite{mmac} for the multiple access channel; below is the counterpart for our source setting.
\begin{lemma}\label{tech1}
For a random vector $Y$ valued in $\F_2^m$, and an arbitrary random vector $Z$, if $$H(Y [S]| Z) \in \{0,1,\dots,|S|\} \pm \varepsilon$$ for any $S \subseteq [m]$, we have $$ H(\sum_{i \in S} Y[i]|Z) \in \{0,1\} \pm \delta(\varepsilon),$$
with $\delta(\varepsilon) \stackrel{\varepsilon \to 0}{\to}0$.
\end{lemma}
This lemma is proved in \cite{abbematroid}.
Using this result, we have that for $j \in E_n$, there exists a matrix $A_j$ of rank $r_j=|S_j|$, such that
$$H(A_j Y_j |Y^{j-1}) \leq m \delta(\varepsilon) .$$
This implies the first part of Lemma \ref{null}, and we now show how we can use this other characterization of the dependencies in $Y$ to obtain a convergence-speed result. We first need the following ``single-user'' result.
\begin{lemma}\label{tech2}
For any $\beta<1/2$ and $\varepsilon_n=2^{-n^{\beta}}$, we have,
\begin{align*}
\frac{1}{n} | \{j \in [n]: \varepsilon_n < H(\sum_{i \in S} Y_j[i]|Y^{j-1}) <\varepsilon , \forall S \subseteq [m] \} | \to 0 .
\end{align*}
\end{lemma}
\begin{proof}
We define the auxiliary family of random processes $\zeta_k[S]$, for $S \subseteq [m]$, by
\begin{align*}
& \zeta_k[S]=Z(\sum_{i \in S}V^{b_1 \dots b_k}[i]| V^{c_1 \dots c_k}, \forall (c_1 \dots c_k) < (b_1 \dots b_k))
\end{align*}
where, for a binary uniform random variable $A$ and an arbitrary random variable $B$, $Z(A|B)=2 \mathbb{E}_B (\mathbb{P}\{ A=0 | B \} \mathbb{P}\{ A=1 | B \})^{1/2}$ is the Bhattacharyya parameter. Note that
\begin{align}
Z(A|B) \geq H(A|B). \label{bata}
\end{align}
(This also follows from Proposition 2 in \cite{ari3}.)
We then have, using the chain rule and source polarization inequalities on the Bhattacharyya parameter, namely Proposition 1 in \cite{ari3}, that
\begin{align*}
& \zeta_{k+1}[S] \leq \zeta_{k}[S]^2 \text{ if } b_{k+1}=+,\\% \label{z1} \\
& \zeta_{k+1}[S] \leq 2 \zeta_k[S] \text{ if } b_{k+1}=-,
\end{align*}
and using Theorem 3 of \cite{ari2}, we conclude that for any $\alpha < 1/2$
$$\liminf_{k \rightarrow \infty} \mathbb{P}(\zeta_k[S] \leq 2^{-2^{\alpha k}}) \geq \mathbb{P}( \zeta_\infty[S]=0).$$
Finally, we conclude using \eqref{bata}.
\end{proof}
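The bounding process used in this proof can also be explored numerically. The sketch below is a Monte Carlo illustration (not part of the proof, with arbitrarily chosen parameters): it runs the recursion $\zeta \mapsto \zeta^2$ on a `$+$' step and $\zeta \mapsto \min(2\zeta,1)$ on a `$-$' step (the cap reflecting that Bhattacharyya parameters never exceed 1), and reports the fraction of sample paths below the doubly exponential threshold $2^{-2^{\alpha k}}$.

```python
import numpy as np

def fraction_fast_paths(z0=0.5, k=20, trials=20000, alpha=0.45, seed=0):
    """Monte Carlo for the bounding process: z -> z^2 on '+', z -> min(2z, 1)
    on '-', each with probability 1/2. Returns the fraction of paths with
    z_k <= 2^{-2^{alpha k}}, the decay rate appearing in Theorem 3 of [ari2]."""
    rng = np.random.default_rng(seed)
    z = np.full(trials, float(z0))
    for _ in range(k):
        plus = rng.random(trials) < 0.5
        z = np.where(plus, z * z, np.minimum(2.0 * z, 1.0))
    return float(np.mean(z <= 2.0 ** (-2.0 ** (alpha * k))))
```

The fraction of paths already below the threshold after a few tens of steps illustrates the doubly exponential speed, although the exact limiting fraction depends on the initial value $z_0$.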
We then use Lemmas \ref{tech1} and \ref{tech2} to conclude that
\begin{align}
\frac{1}{n} | \{j \in [n]:
&H(Y_j [S]| Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon, \forall S \subseteq [m], \notag \\
&\exists A_j \text{ with } \mathrm{rank} (A_j)= \mathrm{int}( m-H(Y_j | Y^{j-1})),\notag \\
&H(A_jY_j |Y^{j-1}) <\varepsilon_n \} | \to 1 , \label{set}
\end{align}
which implies Lemma \ref{null}.
To conclude the proof of Theorem \ref{main} part (2), let $\varepsilon_n=2^{-n^{\alpha}}$ and $E_n=E_n(\varepsilon_n)$ be the set defined through \eqref{set} (which, in view of previous results, is equivalent to the definition given in Section \ref{proof1}). We then have, for $j \in E_n$,
that the components $Y_j[S_j]$ to be decoded are incorrectly decoded with probability
$$P_e(j) \leq H(A_jY_j |Y^{j-1}) \leq \varepsilon_n,$$
and the block error probability is bounded as
$$P_e \leq \sum_{j \in E_n} P_e(j) \leq n \varepsilon_n,$$
so that taking $\alpha <1/2$ large enough, we can reach a block error probability of $O(2^{-n^{\beta}})$ for any $\beta < 1/2$.
\subsection{Proof of Lemma \ref{pext}}\label{proofext}
For $j \in R_{\varepsilon^2/2n}(p(k))$,
\begin{align*}
H(Y_j(p(k)) |Y^{j-1}(p(k))) \geq 1 - \tilde{\varepsilon}
\end{align*}
where $\tilde{\varepsilon}=\varepsilon^2/2n$ and $Y(p(k))=X(p(k)) G_n$ where $X(p(k))$ is i.i.d. under $p(k)$.
Moreover, for any distribution $p$ on $\F_2$ such that $H(p) \geq H(p(k))=k/n$, there exists a distribution $\nu$ on $\F_2$ such that
$p(k) \star \nu = p$, where $\star$ denotes the circular convolution.
Equivalently, there exists $Z \stackrel{\text{iid}}{\sim} \nu$ independent of $X(p(k))\stackrel{\text{iid}}{\sim} p(k)$, such that $X(p)=X(p(k)) \oplus Z \stackrel{\text{iid}}{\sim} p$.
Define $Y(p)=X(p) G_n$, $Y(p(k))=X(p(k)) G_n$ and $W=Z G_n$, hence $Y(p) = Y(p(k)) \oplus W$. We have
\begin{align}
H(Y(p)_j | Y(p)^{j-1}) &\geq H(Y(p)_j | Y(p)^{j-1}, W) \notag \\
&= H(Y(p(k))_j | Y(p(k))^{j-1}, W) \notag \\
&= H(Y(p(k))_j | Y(p(k))^{j-1}) \label{allo}
\end{align}
where the last equality follows from the fact that $Y(p(k))$ is independent of $W$, since $X(p(k))$ is independent of $Z$.
Therefore, for any $X(p)$ i.i.d.\ such that $H(p) \geq k/n$ and for any $j \in R_{\tilde{\varepsilon}}(p(k))$, we have
\begin{align}
H(Y(p)_j | Y(p)^{j-1}) &\geq 1- \tilde{\varepsilon}
\end{align}
and
\begin{align}
H(Y(p)[R_{\tilde{\varepsilon}}(p(k))])
&\geq \sum_{j \in R_{\tilde{\varepsilon}}(p(k))} H(Y_j(p)|Y^{j-1}(p)) \notag \\
& \geq |R_{\tilde{\varepsilon}}(p(k))| ( 1-\tilde{\varepsilon} ) .\notag
\end{align}
Hence, denoting by $\mu_R$ the distribution of $Y(p)[R_{\tilde{\varepsilon}}(p(k))]$ and by $U_R$ the uniform distribution on $\F_2^{|R_{\tilde{\varepsilon}}(p(k))|}$, we have
\begin{align}
D(\mu_R || U_R) &\leq H(U_R) - H(\mu_R) \notag \\
& \leq |R_{\tilde{\varepsilon}}(p(k))| \tilde{\varepsilon} \notag \\
& \leq n \tilde{\varepsilon} \label{Dbound}.
\end{align}
Using Pinsker's inequality and \eqref{Dbound}, and since $n \tilde{\varepsilon} = \varepsilon^2/2$, we obtain
\begin{align*}
\| \mu_R - U_R \|_1 \leq \sqrt{2 \ln 2 \, D(\mu_R || U_R)} \leq \varepsilon \sqrt{\ln 2} \leq \varepsilon .
\end{align*}
Finally, we have from Theorem \ref{thmari}
\begin{align*}
|R_{\tilde{\varepsilon}}(p(k))| = k + o(n).
\end{align*}
The intrinsic spin angular momentum is an important property not only in quantum mechanical descriptions of fundamental particles but also in polarization representations of general wave mechanics~\cite{long2018,burns2020acoustic,shi2019observation}. For electromagnetic and elastic waves, the transversely circular polarization has a direct correspondence to the spin-1 photons~\cite{banzer2012,eismann2020} and phonons~\cite{zhu2018observation,holanda2018detecting,shi2020,ruckriegel2020angular,an2020coherent}, respectively. This duality between the classical and quantum worlds has inspired a number of recent studies in optics~\cite{le2015,sollner2015,peng2019transverse,kim2012time}, gravitational waves~\cite{golat2020evanescent}, acoustics~\cite{wang2018topological,toftul2019acoustic,bliokh2019spin,bliokh2019acoustic,long2020symmetry} and solid mechanics~\cite{calderin2019,kumar2020,hasan2020experimental}. Many intriguing features have been demonstrated, including spin-orbit coupling~\cite{liu2017circular,fu2019}, spin-momentum locking~\cite{liu2019three} and topological edge states analogous to the quantum spin Hall effect~\cite{zhou2018quantum,wu2018topological,tuo2019twist,jin2020}.\\
\begin{figure}[b!]
\includegraphics[scale=0.3]{fig-1.png}
\caption{\label{fig:F1} Spin categories of bulk elastic waves: (a) linear longitudinal $(m,n,l)=(0,0,1)$ and (b) linear transverse $(1,0,0)$ are spin-less. (c) right circular $(1,i,0)$ and (d) left circular $(1,-i,0)$ carry paraxial spins. (e) rolling forward $(1,0,i)$ and (f) rolling backward $(1,0,-i)$ carry non-paraxial spins. Here $\boldsymbol{u}$ shows the displacement trajectory; $\boldsymbol{k}$ and $\boldsymbol{s}$ denote the wave vector and spin vector, respectively. }
\end{figure}
Focusing on elastic waves in solids, we note that there is naturally a \textit{traveling} longitudinal component (Fig.\,\ref{fig:F1}(a)), which can co-propagate with shear waves (Fig.\,\ref{fig:F1}(b)). This is in stark contrast with electromagnetic waves, for which the longitudinal component can only exist either due to localized interference~\cite{banzer2012,eismann2020}, with field couplings~\cite{sollner2015,le2015,peng2019transverse}, or as evanescent waves~\cite{kim2012time}. Consequently, in addition to the usual paraxial spin carried by shear waves (Figs.\,\ref{fig:F1}(c) and \ref{fig:F1}(d)), elastic waves may also carry spins corresponding to the displacement trajectory rolling forward (Fig.\,\ref{fig:F1}(e)) or rolling backward (Fig.\,\ref{fig:F1}(f)). Such special cases of non-paraxial spins are also referred to as ``transverse spins''~\cite{leykam2020edge,wei2020far}, as the spin vector here is perpendicular to the wave vector. Importantly, recent research has proposed hybrid spins of two elastic waves using interference patterns, i.e., a localized superposition of waves in different directions~\cite{long2018}. In contrast, here we report new results on \textit{traveling} rolling waves with \textit{propagating} non-paraxial spins, which are defined as follows: We consider the displacement field of a general plane wave $\boldsymbol{u}=\boldsymbol{\tilde{u}}\exp(i\boldsymbol{k \cdot r}-i\omega t)$ with
\begin{equation}
\boldsymbol{\tilde{u}} =\frac{A}{\sqrt{|m|^2+|n|^2+|l|^2}} \left( \begin{array}{c}
m \\ n \\ l
\end{array}\right),
\label{u_hat}
\end{equation}
where $A$ denotes the amplitude. Importantly, here $m$, $n$ and $l$ are complex-valued, so that they contain the information about not only relative amplitudes but also \textit{relative phase differences} among the displacement components.
The spin angular momentum density, as a real-valued vector, can be calculated as~\cite{long2018,burns2020acoustic,berry2009optical}:
\begin{equation}
\boldsymbol{s}= \frac{\rho \omega}{2} \langle\boldsymbol{\tilde{u}}|\boldsymbol{\hat{\textbf{S}}}|\boldsymbol{\tilde{u}}\rangle = \frac{\rho \omega}{2} {\rm Im}[\boldsymbol{\tilde{u}}^*\times \boldsymbol{\tilde{u}}],
\label{S1}\end{equation}
where $(\cdot)^*$ denotes complex conjugation, and the spin-1 operator is defined as
\begin{equation}
\boldsymbol{\hat{\textbf{S}}}=\left[\left( \begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -i\\
0 & i & 0
\end{array} \right),
\left( \begin{array}{ccc}
0 & 0 & i\\
0 & 0 & 0\\
-i & 0 & 0
\end{array} \right),
\left( \begin{array}{ccc}
0 & -i & 0\\
i & 0 & 0\\
0 & 0 & 0
\end{array} \right) \right].
\end{equation}
Hence, the spin density for a general traveling wave is
\begin{equation}
\boldsymbol{s}=\frac{\rho\omega A^2}{|m|^2+|n|^2+|l|^2}{\rm Im}\left( \begin{array}{c}
n^*l \\ l^*m \\ m^*n
\end{array}\right).
\label{S2}\end{equation}
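The six polarization states of Fig.\,\ref{fig:F1} can be checked directly against Eq.\,\eqref{S2}. The following sketch (illustrative only; the overall sign attached to ``rolling forward'' versus ``backward'' depends on the axis conventions of Fig.\,\ref{fig:F1}) evaluates the direction of ${\rm Im}[\boldsymbol{\tilde{u}}^*\times \boldsymbol{\tilde{u}}]$:

```python
import numpy as np

def spin_dir(m, n, l):
    """Unit direction of the spin density s ∝ Im[u* x u] for polarization
    (m, n, l), dropping the positive prefactor in Eq. (4); returns the zero
    vector for spin-less (linear) polarizations."""
    u = np.array([m, n, l], dtype=complex)
    s = np.imag(np.cross(np.conj(u), u))
    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s

# Linear polarizations carry no spin; circular ones carry paraxial spin
# along +/-z; rolling ones carry non-paraxial spin along -/+y.
assert np.allclose(spin_dir(0, 0, 1), [0, 0, 0])   # linear longitudinal
assert np.allclose(spin_dir(1, 1j, 0), [0, 0, 1])  # circular, s along +z
assert np.allclose(spin_dir(1, 0, 1j), [0, -1, 0]) # rolling, s along -y
```

Note that $\mathrm{Im}[\boldsymbol{u}^*\times\boldsymbol{u}]$ equals twice the vector $\mathrm{Im}(n^*l,\,l^*m,\,m^*n)$ of Eq.\,\eqref{S2}; the factor drops out after normalization.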
For stable propagation of rolling polarizations (Figs.\,\ref{fig:F1}(e) or \ref{fig:F1}(f)), the longitudinal and transverse components have to share the same phase velocity $c = \omega / k$.
While this condition may be satisfied due to the effects of boundaries and interfaces, e.g., acoustic waves in air ducts~\cite{long2020symmetry}, water waves in the ocean~\cite{li2018}, Rayleigh waves along solid surfaces~\cite{brule2020possibility,zhao2020non} and Lamb waves in elastic plates~\cite{jin2020}, we can show that it is never satisfied for bulk waves in isotropic media.\\
Consider an isotropic elastic material with shear modulus $\mu$, Poisson’s ratio $\nu$, and mass density $\rho$: The ratio between the transverse phase velocity, $c_\text{T}$, and the longitudinal phase velocity, $c_\text{L}$, is given by
\begin{equation}
\frac{c_\text{T}}{c_\text{L}} =
{\sqrt{\frac{\mu}{\rho}}} \Bigg/ {\sqrt{\frac{2\mu(1-\nu)}{(1-2\nu)\rho}}} =\sqrt{1-\frac{1}{2(1-\nu)}}.
\label{W1}
\end{equation}
For static stability~\cite{Landau1970}, we are constrained by $-1\leq \nu \leq 1/2 $ in 3D and $-1\leq \nu \leq 1$ in 2D, both of which imply ${c_\text{T}}/{c_\text{L}} \leq \sqrt{3}/2$. Even if we disregard these constraints and allow for arbitrary values of the Poisson’s ratio, the speed ratio ${c_\text{T}}/{c_\text{L}}$ can only asymptotically approach unity when $\nu \rightarrow \infty$. Therefore, as a frequency-independent material property, the \textit{equal-speed criterion}, ${c_\text{T}}={c_\text{L}}$, cannot be met by any isotropic medium.\\
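The bound ${c_\text{T}}/{c_\text{L}} \leq \sqrt{3}/2$ over the 3D stability range is easy to verify numerically from Eq.\,\eqref{W1}; the sketch below is purely illustrative:

```python
import numpy as np

def speed_ratio(nu):
    """c_T / c_L for an isotropic solid with Poisson's ratio nu, Eq. (5)."""
    return np.sqrt(1.0 - 1.0 / (2.0 * (1.0 - nu)))

# Scan the 3D static-stability range -1 <= nu < 1/2: the ratio is maximal
# at nu = -1, where it equals sqrt(3)/2, and never reaches unity.
nu = np.linspace(-1.0, 0.499, 2000)
ratios = speed_ratio(nu)
assert ratios.max() <= np.sqrt(3) / 2 + 1e-12
```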
As a consequence, we turn our attention to anisotropic media. For a plane wave with wave vector $\bm{k}$ and wave number $k=|\bm{k}|$, we write
\begin{equation}
\boldsymbol{\tilde{k}}=\bm{k}/k=l_1\boldsymbol{e}_1+l_2\boldsymbol{e}_2+l_3\boldsymbol{e}_3,
\label{eq:72}\end{equation}
where $l_1,l_2,l_3$ are the direction cosines of the wave vector with respect to the Cartesian coordinate axes. We define a matrix $\textbf{L}$ as
\begin{equation}
\textbf{L}=\left( \begin{array}{cccccc}
l_1 & 0 & 0 & 0 & l_3 & l_2\\
0 & l_2 & 0 & l_3 & 0 & l_1\\
0 & 0 & l_3 & l_2 & l_1 & 0
\end{array}
\right ),
\end{equation}
and introduce the Kelvin-Christoffel matrix~\cite{carcione2007wave,SI}:
\begin{equation}
\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T}\quad \text{or} \quad \mathit{\Gamma}_{ij}=L_{iI}C_{IJ}L_{Jj},
\end{equation}
where $C_{IJ}$ is the elastic stiffness in Voigt notation $(I,J=1,2,3...6)$.
Then, with the definition of phase velocity $c=\omega / k$, the wave equation can be written as~\cite{carcione2007wave,SI}:
\begin{equation}
\boldsymbol{\Gamma \cdot u}-\rho c^2 \boldsymbol{u}=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \cdot\boldsymbol{u}=\boldsymbol{0}.
\label{eq:75}\end{equation}
Therefore, the \textit{equal-speed criterion} is equivalent to the 3-by-3 matrix $\boldsymbol{\Gamma}$ having degenerate eigenvalues. For media with spatial symmetries, the criterion can be simplified further. Here we consider three examples for bulk waves propagating along the $z$-direction:
\begin{subequations}
\label{criterion}
\begin{align}
&\text{2D $xz$-plane strain: } C_{33} = C_{55} \text{ and } C_{35} = 0\label{criterion-2d}\\
&\text{Transversely $xy$-isotropic: } C_{33} = C_{44}\label{criterion-trans}\\
&\text{Cubic symmetric: } C_{11} = C_{44}\label{criterion-cubic}
\end{align}
\end{subequations}
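As a numerical illustration of criterion \eqref{criterion-trans} (with hypothetical stiffness values, not those of any design in this paper), the sketch below assembles the Kelvin-Christoffel matrix for propagation along $z$ in a transversely isotropic medium and checks that enforcing $C_{33}=C_{44}$ makes its eigenvalues, i.e., the values of $\rho c^2$, fully degenerate:

```python
import numpy as np

def christoffel(C, direction):
    """Kelvin-Christoffel matrix Gamma = L C L^T (Eq. 8) for unit direction
    cosines; C is the 6x6 stiffness matrix in Voigt notation."""
    l1, l2, l3 = direction
    L = np.array([[l1, 0, 0, 0, l3, l2],
                  [0, l2, 0, l3, 0, l1],
                  [0, 0, l3, l2, l1, 0]], dtype=float)
    return L @ C @ L.T

# Hypothetical transversely (xy) isotropic stiffness, C33 = C44 enforced:
C = np.zeros((6, 6))
C11, C12, C13, C33 = 4.0, 1.0, 0.8, 2.0
C[0, 0] = C[1, 1] = C11
C[0, 1] = C[1, 0] = C12
C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = C13
C[2, 2] = C33
C[3, 3] = C[4, 4] = C33              # equal-speed criterion: C33 = C44 (= C55)
C[5, 5] = (C11 - C12) / 2.0          # transverse-isotropy relation for C66

G = christoffel(C, (0.0, 0.0, 1.0))  # propagation along z
evals = np.linalg.eigvalsh(G)        # the three rho*c^2 branches
assert np.allclose(evals, C33)       # triple degeneracy: c_L = c_T
```

With a generic $C_{44} \neq C_{33}$ the same construction yields distinct eigenvalues, i.e., distinct longitudinal and transverse speeds along $z$.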
To the best of our knowledge, the criteria listed in Eq.\,(\ref{criterion}) are not satisfied by any existing materials, natural or synthetic. This motivates us to design metamaterials for this purpose. Aiming for structures that can be readily fabricated, we exclusively focus on architected geometries made from a single isotropic base material with Young's modulus $E$ and Poisson's ratio $\nu = 0.3$. To identify suitable designs that work in the long wavelength limit, we perform quasi-static calculations using unit cells with appropriate periodic boundary conditions~\cite{overvelde2014relating} on the finite element platform \textsc{abaqus} (element types \textsc{cpe4} and \textsc{c3d4}). From the numerical results, we extract the dimensionless effective elastic constants $\bar C_{IJ}=C_{IJ}/E$.\\
First, we consider the 2D $xz$-plane strain case for waves propagating along the $z$-direction. In order to arrive at a micro-structure satisfying the equal-speed criterion, we need to strengthen the shear stiffness of the material without increasing its normal stiffness. Fig.\,2(a) shows an example unit-cell, where the X-shaped center enhances the shear stiffness and the absence of vertical support reduces the normal stiffness. Using the numerical procedure described above, we calculate the dependence of the dimensionless effective elastic constants $\bar{C}_{33}$ and $\bar{C}_{55}$ on the geometry parameters $L_1$ and $L_2$. The results are presented as two surfaces shown in Fig.\,2(b). The equal-speed criterion is met at the line of intersection of the two surfaces. The geometry shown in Fig.\,2(a) corresponds to the circled point at $(L_1/a,L_2/a)=(0.2,0.1962)$.
Next, as shown in Fig.\,2(c), we present another 2D micro-structure satisfying Eq.\,(\ref{criterion}a). This design was adapted from a previous study on dilatational materials~\cite{buckmann2014three}, which in turn was based on earlier theoretical results~\cite{milton2013complete}. The unit cell entails $C_{11} = C_{33}$ due to symmetry, and, hence, supports the propagation of rolling waves along both the $x$- and $z$-direction. The circled point $(b_1/a,b_2/a)=(0.01878,0.005)$ in Fig.\,2(d) corresponds to the geometry in Fig.\,2(c) with $b_3=b_4=0.05a$ and $b_5=0.3221a$. While this geometry was previously designed for auxetic (i.e., negative Poisson's ratio) properties~\cite{buckmann2014three}, the parameters satisfying the equal-speed criterion actually result in a positive effective 2D Poisson's ratio between the principal directions, $\nu_{xz} = 0.996$. As this structure is strongly anisotropic, the fact that the principal Poisson's ratio approaches unity does not imply a large difference between $c_\text{T}$ and $c_\text{L}$.
We note that both 2D designs presented in Fig.\,2 are mirror-symmetric with respect to both the $x$- and $z$-axis. This symmetry implies the absence of normal-to-shear or shear-to-normal couplings in the effective constitutive relations. Thus, regardless of the geometry parameters, the condition $C_{35} = 0$ in Eq.\,(\ref{criterion-2d}) holds for both structures.\\
\begin{figure}[htb]
\centering
\includegraphics[scale=0.36]{fig-2.png}
\caption{\label{fig:F2} 2D metamaterials capable of hosting rolling waves: (a) Unit cell design obtained by taking mirror images of the green quarter. Each red straight line segment is of length $L_2$. (b) Numerically calculated effective elastic constants for the unit cell in (a) with varying geometric parameters. (c) Unit cell design adapted from \cite{buckmann2014three}. It is obtained by taking mirror images of the green quarter. (d) Numerical results for the unit cell in (c) with $b_3=b_4=0.05a$ and $b_5=0.3221a$. Geometries in (a) and (c) correspond to the circled points in (b) and (d), respectively.
}
\end{figure}
\section{Introduction}
The intrinsic spin angular momentum is an important property not only in quantum mechanical descriptions of fundamental particles but also in polarization representations of general wave mechanics~\cite{long2018,burns2020acoustic,shi2019observation}. For electromagnetic and elastic waves, the transversely circular polarization has a direct correspondence to the spin-1 photons~\cite{banzer2012,eismann2020} and phonons~\cite{zhu2018observation,holanda2018detecting,shi2020,ruckriegel2020angular,an2020coherent}, respectively. This duality between the classical and quantum worlds has inspired a number of recent studies in optics~\cite{le2015,sollner2015,peng2019transverse,kim2012time}, gravitational waves~\cite{golat2020evanescent}, acoustics~\cite{wang2018topological,toftul2019acoustic,bliokh2019spin,bliokh2019acoustic,long2020symmetry} and solid mechanics~\cite{calderin2019,kumar2020,hasan2020experimental}. Many intriguing features have been demonstrated, including spin-orbit coupling~\cite{liu2017circular,fu2019}, spin-momentum locking~\cite{liu2019three} and topological edge states analogous to the quantum spin Hall effect~\cite{zhou2018quantum,wu2018topological,tuo2019twist,jin2020}.\\
\begin{figure}[b!]
\includegraphics[scale=0.3]{fig-1.png}
\caption{\label{fig:F1} Spin categories of bulk elastic waves: (a) linear longitudinal $(m,n,l)=(0,0,1)$ and (b) linear transverse $(1,0,0)$ are spin-less. (c) right circular $(1,i,0)$ and (d) left circular $(1,-i,0)$ carry paraxial spins. (e) rolling forward $(1,0,i)$ and (f) rolling backward $(1,0,-i)$ carry non-paraxial spins. Here $\boldsymbol{u}$ shows displacement trajectory. $\boldsymbol{k}$ and $\boldsymbol{s}$ represent wave vector and spin vector, respectively. }
\end{figure}
Focusing on elastic waves in solids, we note that there is naturally a \textit{traveling} longitudinal component (Fig.\,\ref{fig:F1}(a)), which can co-propagate with shear waves (Fig.\,\ref{fig:F1}(b)). This is in stark contrast with electromagnetic waves, for which the longitudinal component can only exist either due to localized interference~\cite{banzer2012,eismann2020}, with field couplings~\cite{sollner2015,le2015,peng2019transverse}, or as evanescent waves~\cite{kim2012time}. Consequently, in addition to the usual paraxial spin carried by shear waves (Figs.\,\ref{fig:F1}(c) and \ref{fig:F1}(d)), elastic waves may also carry spins corresponding to the displacement trajectory rolling forward (Fig.\,\ref{fig:F1}(e)) or rolling backward (Fig.\,\ref{fig:F1}(f)). Such special cases of non-paraxial spins are also referred to as ``transverse spins''~\cite{leykam2020edge,wei2020far}, as the spin vector here is perpendicular to the wave vector. Importantly, recent research has proposed hybrid spins of two elastic waves using interference patterns, i.e., a localized superposition of waves in different directions~\cite{long2018}. In contrast, here we report new results on \textit{traveling} rolling waves with \textit{propagating} non-paraxial spins, which are defined as follows: We consider the displacement field of a general plane wave $\boldsymbol{u}=\boldsymbol{\tilde{u}}\exp(i\boldsymbol{k \cdot r}-i\omega t)$ with
\begin{equation}
\boldsymbol{\tilde{u}} =\frac{A}{\sqrt{|m|^2+|n|^2+|l|^2}} \left( \begin{array}{c}
m \\ n \\ l
\end{array}\right),
\label{u_hat}
\end{equation}
where $A$ denotes the amplitude. Importantly, here $m$, $n$ and $l$ are complex-valued, so that they contain the information about not only relative amplitudes but also \textit{relative phase differences} among the displacement components.
The spin angular momentum density, as a real-valued vector, can be calculated as~\cite{long2018,burns2020acoustic,berry2009optical}:
\begin{equation}
\boldsymbol{s}= \frac{\rho \omega}{2} \langle\boldsymbol{\tilde{u}}|\boldsymbol{\hat{\textbf{S}}}|\boldsymbol{\tilde{u}}\rangle = \frac{\rho \omega}{2} {\rm Im}[\boldsymbol{\tilde{u}}^*\times \boldsymbol{\tilde{u}}],
\label{S1}\end{equation}
where $(\cdot)^*$ denotes complex conjugation, and the spin-1 operator is defined as
\begin{equation}
\boldsymbol{\hat{\textbf{S}}}=\left[\left( \begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -i\\
0 & i & 0
\end{array} \right),
\left( \begin{array}{ccc}
0 & 0 & i\\
0 & 0 & 0\\
-i & 0 & 0
\end{array} \right),
\left( \begin{array}{ccc}
0 & -i & 0\\
i & 0 & 0\\
0 & 0 & 0
\end{array} \right) \right].
\end{equation}
Hence, the spin density for a general traveling wave is
\begin{equation}
\boldsymbol{s}=\frac{\rho\omega A^2}{|m|^2+|n|^2+|l|^2}{\rm Im}\left( \begin{array}{c}
n^*l \\ l^*m \\ m^*n
\end{array}\right).
\label{S2}\end{equation}
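For concreteness, the spin density of Eqs.\,(\ref{S1})-(\ref{S2}) can be checked numerically. The following minimal Python sketch (with the illustrative choice $\rho=\omega=A=1$, not tied to any material in this work) confirms that the cross-product and spin-1 operator forms agree, that the rolling polarization of Fig.\,\ref{fig:F1}(e) carries a non-paraxial spin along $-y$, and that the circular polarization of Fig.\,\ref{fig:F1}(c) carries a paraxial spin along $+z$.

```python
import numpy as np

# Spin-1 operator components (S_x, S_y, S_z) as defined in the text.
SX = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
SY = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
SZ = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

def spin_density(u, rho=1.0, omega=1.0):
    """s = (rho*omega/2) * Im(u^* x u) for a polarization vector u."""
    u = np.asarray(u, dtype=complex)
    return 0.5 * rho * omega * np.imag(np.cross(np.conj(u), u))

def spin_density_operator(u, rho=1.0, omega=1.0):
    """Equivalent operator form, s_i = (rho*omega/2) * <u|S_i|u>."""
    u = np.asarray(u, dtype=complex)
    return 0.5 * rho * omega * np.array(
        [np.real(np.conj(u) @ (S @ u)) for S in (SX, SY, SZ)])

# Rolling polarization (m, n, l) = (1, 0, i) of Fig. 1(e), normalized:
u_roll = np.array([1, 0, 1j]) / np.sqrt(2)
# Right-circular (paraxial) polarization (1, i, 0) of Fig. 1(c), normalized:
u_circ = np.array([1, 1j, 0]) / np.sqrt(2)
```

With these conventions both routines return $(0,-1/2,0)$ for the rolling case: the spin vector is perpendicular to the $z$-directed wave vector.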
For stable propagation of rolling polarizations (Figs.\,\ref{fig:F1}(e) or \ref{fig:F1}(f)), the longitudinal and transverse components have to share the same phase velocity $c = \omega / k$.
While this condition may be satisfied due to the effects of boundaries and interfaces, e.g., acoustic waves in air ducts~\cite{long2020symmetry}, water waves in the ocean~\cite{li2018}, Rayleigh waves along solid surfaces~\cite{brule2020possibility,zhao2020non} and Lamb waves in elastic plates~\cite{jin2020}, we can show that it is never satisfied for bulk waves in isotropic media.\\
Consider an isotropic elastic material with shear modulus $\mu$, Poisson’s ratio $\nu$, and mass density $\rho$: The ratio between the transverse phase velocity, $c_\text{T}$, and the longitudinal phase velocity, $c_\text{L}$, is given by
\begin{equation}
\frac{c_\text{T}}{c_\text{L}} =
{\sqrt{\frac{\mu}{\rho}}} \Bigg/ {\sqrt{\frac{2\mu(1-\nu)}{(1-2\nu)\rho}}} =\sqrt{1-\frac{1}{2(1-\nu)}}.
\label{W1}
\end{equation}
For static stability~\cite{Landau1970}, we are constrained by $-1\leq \nu \leq 1/2 $ in 3D and $-1\leq \nu \leq 1$ in 2D, both of which imply ${c_\text{T}}/{c_\text{L}} \leq \sqrt{3}/2$. Even if we disregard these constraints and allow for arbitrary values of Poisson's ratio, the speed ratio ${c_\text{T}}/{c_\text{L}}$ can only asymptotically approach unity as $\nu \rightarrow \pm\infty$. Therefore, as a frequency-independent material property, the \textit{equal-speed criterion}, ${c_\text{T}}={c_\text{L}}$, cannot be met by any isotropic medium.\\
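The behavior of the ratio in Eq.\,(\ref{W1}) is easy to verify numerically. The short Python sketch below (the sample values of $\nu$ are arbitrary illustrations) confirms the bound $\sqrt{3}/2$ on the stable range and the asymptotic approach to unity.

```python
import numpy as np

def speed_ratio(nu):
    """c_T / c_L in an isotropic medium: sqrt(1 - 1/(2(1-nu)))."""
    return np.sqrt(1.0 - 1.0 / (2.0 * (1.0 - nu)))

# Stable 3D range: -1 <= nu <= 1/2, where the ratio peaks at sqrt(3)/2.
nu_stable = np.linspace(-1.0, 0.49, 200)
# For nu > 1 (negative bulk modulus) the ratio exceeds 1,
# but it only approaches 1 asymptotically as nu -> +/- infinity.
```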
As a consequence, we turn our attention to anisotropic media. For a plane wave with wave vector $\bm{k}$ and wave number $k=|\bm{k}|$, we write
\begin{equation}
\boldsymbol{\tilde{k}}=\bm{k}/k=l_1\boldsymbol{e}_1+l_2\boldsymbol{e}_2+l_3\boldsymbol{e}_3,
\label{eq:72}\end{equation}
where $l_1,l_2,l_3$ are the direction cosines of the wave vector with respect to the Cartesian coordinate axes. We define a matrix $\textbf{L}$ as
\begin{equation}
\textbf{L}=\left( \begin{array}{cccccc}
l_1 & 0 & 0 & 0 & l_3 & l_2\\
0 & l_2 & 0 & l_3 & 0 & l_1\\
0 & 0 & l_3 & l_2 & l_1 & 0
\end{array}
\right ),
\end{equation}
and introduce the Kelvin-Christoffel matrix~\cite{carcione2007wave,SI}:
\begin{equation}
\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T}\quad \text{or} \quad \mathit{\Gamma}_{ij}=L_{iI}C_{IJ}L_{Jj},
\end{equation}
where $C_{IJ}$ is the elastic stiffness in Voigt notation $(I,J=1,2,\dots,6)$.
Then, with the definition of phase velocity $c=\omega / k$, the wave equation can be written as~\cite{carcione2007wave,SI}:
\begin{equation}
\boldsymbol{\Gamma \cdot u}-\rho c^2 \boldsymbol{u}=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \cdot\boldsymbol{u}=\boldsymbol{0}.
\label{eq:75}\end{equation}
Therefore, the \textit{equal-speed criterion} is equivalent to the 3-by-3 matrix $\boldsymbol{\Gamma}$ having degenerate eigenvalues. For media with spatial symmetries, the criterion can be simplified further. Here we consider three examples for bulk waves propagating along the $z$-direction:
\begin{subequations}
\label{criterion}
\begin{align}
&\text{2D $xz$-plane strain: } C_{33} = C_{55} \text{ and } C_{35} = 0\label{criterion-2d}\\
&\text{Transversely $xy$-isotropic: } C_{33} = C_{44}\label{criterion-trans}\\
&\text{Cubic symmetric: } C_{11} = C_{44}\label{criterion-cubic}
\end{align}
\end{subequations}
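These criteria follow directly from the Kelvin-Christoffel eigenproblem and are simple to check numerically. The Python sketch below uses hypothetical stiffness values in Voigt form (not our simulated metamaterial constants): lowering $C_{33}$ to $C_{44}$ in an otherwise isotropic matrix produces a triply degenerate eigenvalue for $z$-propagation, i.e., $c_\text{T}=c_\text{L}$.

```python
import numpy as np

def christoffel(C, direction):
    """Kelvin-Christoffel matrix Gamma = L . C . L^T for a unit propagation direction."""
    l1, l2, l3 = np.asarray(direction, dtype=float) / np.linalg.norm(direction)
    L = np.array([[l1, 0, 0, 0, l3, l2],
                  [0, l2, 0, l3, 0, l1],
                  [0, 0, l3, l2, l1, 0]])
    return L @ C @ L.T

def phase_speeds(C, direction, rho=1.0):
    """Sorted phase speeds c = sqrt(eigval/rho) of the acoustic eigenproblem."""
    return np.sqrt(np.linalg.eigvalsh(christoffel(C, direction)) / rho)

# Isotropic stiffness in Voigt form with illustrative lam = mu = 1:
lam, mu = 1.0, 1.0
C_iso = np.diag([2.0 * mu] * 3 + [mu] * 3)
C_iso[:3, :3] += lam

# Hypothetical metamaterial: identical except C33 is lowered to C44,
# enforcing the transversely isotropic criterion for z-propagation.
C_meta = C_iso.copy()
C_meta[2, 2] = C_meta[3, 3]
```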
To the best of our knowledge, the criteria listed in Eq.\,(\ref{criterion}) are not satisfied by any existing materials, natural or synthetic. This motivates us to design metamaterials for this purpose. Aiming for structures that can be readily fabricated, we exclusively focus on architected geometries made from a single isotropic base material with Young's modulus $E$ and Poisson's ratio $\nu = 0.3$. To identify suitable designs that work in the long wavelength limit, we perform quasi-static calculations using unit cells with appropriate periodic boundary conditions~\cite{overvelde2014relating} on the finite element platform \textsc{abaqus} (element types \textsc{cpe4} and \textsc{c3d4}). From the numerical results, we extract the dimensionless effective elastic constants $\bar C_{IJ}=C_{IJ}/E$.\\
First, we consider the 2D $xz$-plane strain case for waves propagating along the $z$-direction. In order to arrive at a micro-structure satisfying the equal-speed criterion, we need to strengthen the shear stiffness of the material without increasing its normal stiffness. Fig.\,2(a) shows an example unit-cell, where the X-shaped center enhances the shear stiffness and the absence of vertical support reduces the normal stiffness. Using the numerical procedure described above, we calculate the dependence of the dimensionless effective elastic constants $\bar{C}_{33}$ and $\bar{C}_{55}$ on the geometry parameters $L_1$ and $L_2$. The results are presented as two surfaces shown in Fig.\,2(b). The equal-speed criterion is met at the line of intersection of the two surfaces. The geometry shown in Fig.\,2(a) corresponds to the circled point at $(L_1/a,L_2/a)=(0.2,0.1962)$.
Next, as shown in Fig.\,2(c), we present another 2D micro-structure satisfying Eq.\,(\ref{criterion-2d}). This design was adapted from a previous study on dilatational materials~\cite{buckmann2014three}, which in turn was based on earlier theoretical results~\cite{milton2013complete}. By symmetry, the unit cell satisfies $C_{11} = C_{33}$ and hence supports the propagation of rolling waves along both the $x$- and $z$-directions. The circled point $(b_1/a,b_2/a)=(0.01878,0.005)$ in Fig.\,2(d) corresponds to the geometry in Fig.\,2(c) with $b_3=b_4=0.05a$ and $b_5=0.3221a$. While this geometry was previously designed for auxetic (i.e., negative Poisson's ratio) properties~\cite{buckmann2014three}, the parameters satisfying the equal-speed criterion actually result in a positive effective 2D Poisson's ratio between the principal directions, $\nu_{xz} = 0.996$. As this structure is strongly anisotropic, the fact that the principal Poisson's ratio approaches unity does not imply a large difference between $c_\text{T}$ and $c_\text{L}$.
We note that both 2D designs presented in Fig.\,2 are mirror-symmetric with respect to both the $x$- and $z$-axis. This symmetry implies the absence of normal-to-shear or shear-to-normal couplings in the effective constitutive relations. Thus, regardless of the geometry parameters, the condition $C_{35} = 0$ in Eq.\,(\ref{criterion-2d}) holds for both structures.\\
\begin{figure}[htb]
\centering
\includegraphics[scale=0.36]{fig-2.png}
\caption{\label{fig:F2} 2D metamaterials capable of hosting rolling waves: (a) Unit cell design obtained by taking mirror images of the green quarter. Each red straight line segment is of length $L_2$. (b) Numerically calculated effective elastic constants for the unit cell in (a) with varying geometric parameters. (c) Unit cell design adapted from \cite{buckmann2014three}. It is obtained by taking mirror images of the green quarter. (d) Numerical results for the unit cell in (c) with $b_3=b_4=0.05a$ and $b_5=0.3221a$. Geometries in (a) and (c) correspond to the circled points in (b) and (d), respectively.
}
\end{figure}
In the 3D case, we consider highly symmetric geometries exhibiting either transverse isotropy or cubic symmetry. Fig.\,3(a) shows the unit-cell design satisfying Eq.\,(\ref{criterion-trans}). The honeycomb geometry is chosen to guarantee isotropy in the $xy$-plane. Each out-of-plane wall is constructed by extruding the planar pattern in Fig.\,2(a). Besides $L_1$ and $L_2$, the wall thickness, $h$, is an additional parameter of the 3D structure. Numerical results for the dimensionless constants, $\bar C_{33}$ and $\bar C_{44}$, are shown in Fig.\,3(b) with $h/a=0.2$ fixed. Along the line where the two surfaces intersect, we obtain a set of designs satisfying the equal-speed criterion. The geometry in Fig.\,3(a) corresponds to the circled point $(L_1/a,L_2/a)=(0.216,0.1)$ in Fig.\,3(b).\\
For the cubic symmetric case, a unit cell satisfying Eq.\,(\ref{criterion-cubic}) is shown in Fig.\,3(c). This geometry was previously studied as an auxetic micro-structure~\cite{dirrenberger2013effective}. It has the symmetry of the point group m$\bar{3}$m, one of the cubic crystallographic point groups, which are characterized by four axes of three-fold rotational symmetry. The symmetry axes can be identified with the body diagonals of the cubic unit cell~\cite{authier2003international,buckmann2014three,dirrenberger2013effective}. The cubic symmetry guarantees that the resulting stiffness matrix has a specific form with three independent elastic constants~\cite{authier2003international,norris2006poisson,SI}. The unit cell in Fig.\,3(c) is composed of identical beams with a square cross-section of side length $L$. For each of the six cubic faces, four beams extend from the vertices and meet at an interior point at a distance $h_2$ from the planar face center. By varying both $L$ and $h_2$, as shown in Fig.\,3(d), we numerically find the line of intersection where the equal-speed criterion is satisfied. Parameters at the circled point $(L/a,h_2/a)=(0.06,0.1035)$ correspond to the geometry in Fig.\,3(c). While being auxetic in other directions, this structure actually has a positive effective Poisson's ratio along its principal directions, as also shown in \cite{dirrenberger2013effective}.\\
\begin{figure}[t]
\centering
\includegraphics[scale=0.36]{fig-3.png}
\caption{\label{fig:F3}
3D metamaterials capable of hosting rolling waves: (a) Honeycomb unit cell design based on the planar pattern shown in Fig.\,2(a), exhibiting isotropy in the $xy$-plane. (b) Numerical results of effective elastic constants for the unit cell in (a) with $h=0.2a$. (c) Unit cell design adapted from \cite{dirrenberger2013effective}, exhibiting cubic symmetry. (d) Numerical results of the cubic case. Geometries in (a) and (c) correspond to the circled points in (b) and (d), respectively.
}
\end{figure}
As an example of non-paraxial spin manipulation, we next investigate normal reflections of a rolling wave at a general elastic boundary. Considering a rolling wave along the $z$-direction normally incident on a flat surface, we have the instantaneous wave displacement fields at time $t=0$ as $\boldsymbol{u}^\text{I}=\boldsymbol{\tilde{u}}^\text{I}\exp(\text{i}kz)$ and $\boldsymbol{u}^\text{R}=\boldsymbol{\tilde{u}}^\text{R}\exp(-\text{i}kz)$ with
\begin{equation}
\boldsymbol{\tilde{u}}^\text{I}=\left( \begin{array}{c}
m^\text{I} \\ n^\text{I} \\ l^\text{I}
\end{array}\right) \quad \text{and} \quad
\boldsymbol{\tilde{u}}^\text{R}=\left( \begin{array}{c}
m^\text{R} \\ n^\text{R} \\ l^\text{R}
\end{array}\right),
\end{equation}
where the superscripts, I and R, denote the incident and reflected waves, respectively. For the surface at $z=0$, we have
\begin{equation}
\sigma^0_{zj}=K_ju^0_j,\ \ j=x,y,z,
\label{R1}\end{equation}
where $K_j$ represents the distributed stiffness of an elastic foundation. At the surface, the stress $\sigma^0_{zj}$ and displacement $u^0_j$ are the superpositions of the incident and reflected waves. By calculating the stresses and substituting them into the boundary condition \eqref{R1}, we obtain
\begin{equation}
m^\text{R}=R_x m^\text{I},\ \ n^\text{R}=R_y n^\text{I},\ \ l^\text{R}=R_z l^\text{I},
\end{equation}
with
\begin{equation}
R_j=\frac{C_{33} ik-K_j}{C_{33} ik+K_j},\ \ j=x,y,z.
\end{equation}
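The limiting cases of these reflection coefficients can be sketched numerically. In the Python snippet below (with the illustrative normalization $C_{33}=k=1$), $K\rightarrow 0$ gives $R=1$ (traction-free) and $K\rightarrow\infty$ gives $R=-1$ (rigid), while mixed limits reverse the sign of the incident $y$-spin.

```python
import numpy as np

def refl(K, C33=1.0, k=1.0):
    """Reflection coefficient R_j; the rigid limit K -> inf gives R = -1."""
    if np.isinf(K):
        return -1.0 + 0.0j
    return (1j * C33 * k - K) / (1j * C33 * k + K)

def spin_y(u):
    """y-spin of a polarization u = (m, n, l), up to the rho*omega*A^2 prefactor."""
    m, n, l = u
    return np.imag(np.conj(l) * m) / (abs(m) ** 2 + abs(n) ** 2 + abs(l) ** 2)

u_inc = (1.0, 0.0, 1j)  # incident rolling wave in the xz-plane, s_y < 0

def reflected(Kx, Kz):
    """Polarization of the reflected wave for foundation stiffnesses Kx, Kz."""
    m, _, l = u_inc
    return (refl(Kx) * m, 0.0, refl(Kz) * l)
```

Note that $|R_j|=1$ for any real $K_j$, so the elastic boundary is lossless and only redistributes phase between the components.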
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{fig-4aa.png}
\caption{\label{fig:F4} Reflections of a rolling wave (with $s^\text{I}_y=-1$) for normal incidence: (a) Illustration of the setup for the time-domain numerical simulations. Periodic boundary conditions are applied on the top and bottom edges (dotted lines). A total number of 70 unit cells are used along the $z$-direction. Insets of (b)-(e) show the time history of displacements at the position marked by the yellow dot (\textcolor{yellow}{$\bullet$}) in (a).
(b) rigid and (c) free boundaries are both spin-preserving. (d) free-fixed and (e) fixed-free boundaries are both spin-flipping.}
\end{figure}
We next focus on 2D rolling waves in the $xz$-plane where the wave amplitudes in the $y$-direction vanish, $n^\text{I}=n^\text{R}=0$. For $K_x=K_z=0$, the boundary becomes traction-free (Neumann type) and we have $\boldsymbol{\tilde{u}}^\text{R}=\boldsymbol{\tilde{u}}^\text{I}$ with no phase change. For $K_x=K_z=\infty$, the boundary becomes rigid (Dirichlet type) and we have an out-of-phase reflected wave with $\boldsymbol{\tilde{u}}^\text{R}=-\boldsymbol{\tilde{u}}^\text{I}$. In both cases, we can obtain from Eq.\,(\ref{S1}) that $\boldsymbol{s}^\text{R}=\boldsymbol{s}^\text{I}$, so the spin is preserved. In contrast, for hybrid boundaries ($K_x=0$, $K_z=\infty$) and ($K_x=\infty$, $K_z=0$), similar analyses~\cite{SI} result in $\boldsymbol{s}^\text{R}=-\boldsymbol{s}^\text{I}$, so the spin is flipped due to the difference in phase change between the longitudinal and transverse components during the reflection process. These behaviors are further demonstrated in time-domain simulations of a metamaterial made of unit cells shown in Fig.\,2(a) using the commercial software \textsc{comsol} (quadratic quadrilateral elements). Fig.\,4 shows the results for the incident rolling wave carrying a non-paraxial spin of $s_y^\text{I}=-1$. The reflection process is spin-preserving in both rigid and free boundaries, while being spin-flipping for both hybrid free-fixed and hybrid fixed-free boundary conditions. Detailed numerical procedures and additional results of time-domain simulations are available in the Supplemental Material~\cite{SI}.\\
In summary, we studied elastic waves carrying non-paraxial spins, which can propagate in special anisotropic media satisfying the \textit{equal-speed criterion}, $c_\text{T}=c_\text{L}$. We presented both 2D and 3D metamaterial designs satisfying this criterion. In addition, we analyzed the reflection of such rolling waves incident on elastic boundaries, demonstrating spin-preserving and spin-flipping behaviors. In contrast to scattering-based~\cite{he2018topological,deng2018metamaterials,celli2019bandgap,wang2020evanescent,xia2020experimental,rosa2020topological,ramakrishnan2020multistable} and resonance-based~\cite{palermo2019tuning,arretche2019experimental,wang2020robust,sugino2020,hussein2020thermal,ghanem2020nanocontact,nassar2020polar,bilal2020enhancement} metamaterials, our designs work in the non-resonant long-wavelength regime~\cite{zheng2019theory,patil20193d,behrou2020topology,zheng2020non,xu2020physical}, essentially using exotic quasi-static properties for wave manipulation. All features shown in this study are \textit{frequency independent} up to the cutoff threshold, which is only limited by how small we can make the unit cells. The tailored structures can be readily fabricated by existing techniques~\cite{liu2020maximizing,elder2020nanomaterial}. This work lays a solid foundation for the new field of broadband phononic spin engineering.\\
This work was supported by start-up funds of the Dept. Mechanical Engineering at Univ.\,Utah. CK is supported by the NSF through Grant DMS-1814854. The authors thank Graeme Milton (Univ.\,Utah), Stephano Gonella (Univ.\,Minnesota), Liyuan Chen (Harvard Univ. \& Westlake Univ.) and Bolei Deng (Harvard Univ.) for discussions. The support and resources from the Center for High Performance Computing at Univ. Utah are gratefully acknowledged.
\normalem
\input{ref_NoTitle.bbl}
\section{Other non-paraxial spins}
There is no limit to the directions of spin vectors of elastic waves, or equivalently, phonon spins in general. In addition to the examples shown in the main text, Fig.\,S1 shows more categories of traveling waves with various types of propagating non-paraxial spins. This full range of spin degrees of freedom offers great potential for future applications.\\
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig-1-SI.png}
\caption{\label{fig:F1} The extended spin categories: (a,b) Spin vector perpendicular to the wave vector with $(m,n,l) = (i,0,1)$ and $(-i,0,1)$, respectively. (c,d) Other non-paraxial spin directions with $(m,n,l) = (1+i,2,1-i)$ and $(1-i,1+i,2)$, respectively. }
\end{figure}
\clearpage
\section{Wave speeds in isotropic materials}
We consider the frequency-independent wave speeds in the long-wavelength (i.e., quasi-static) limit in an isotropic elastic material, since isotropy is often desirable in applications~\cite{Chen2020Isotropic}. The elasticity can be described by two independent constants~\cite{Landau1970}; for simplicity of discussion, here we use the shear modulus $\mu$ and Poisson's ratio $\nu$.\\
The transverse and longitudinal wave velocities, $c_\text{T}$ and $c_\text{L}$, are\\
\begin{equation}
\begin{split}
c_\text{T} &=\sqrt{\frac{\mu}{\rho}}\\
c_\text{L} &=\sqrt{\frac{2\mu(1-\nu)}{\rho(1-2\nu)}}\\
\end{split}
\end{equation}
where $\rho$ is the mass density of the material. The wave speed ratio is:\\
\begin{equation}
c_\text{T}/c_\text{L} = \sqrt{\frac{1-2\nu}{2(1-\nu)}} = \sqrt{1-\frac{1}{2(1-\nu)}}.
\label{ratio}\end{equation} \\
The classical upper bound given by Landau \& Lifshitz~\cite{Landau1970} is
\begin{equation}
c_\text{T}/c_\text{L} < \sqrt{3}/2 < 1.
\end{equation}
This bound is based on the limits of $\nu$:
\begin{equation}
-1 < \nu < 1/2
\end{equation}
which is obtained by requiring both bulk and shear moduli to be positive.\\
With modern metamaterial concepts in mind, we now know that it is possible to achieve a negative bulk modulus (e.g., with post-buckling structures, or active materials with energy sources or sinks). To explore the possibilities in the most general case, we assume that the ratio $c_\text{T}/c_\text{L}$ defined in Eq.\,(\ref{ratio}) is subject to no other constraints whatsoever. In Fig.\,\ref{fig:F2}, we plot possible speed-ratio values by varying $\nu$. It is clear from the graph that we can actually achieve $c_\text{T}/c_\text{L}>1$ for any $\nu>1$, which indeed implies a negative bulk modulus for any positive shear modulus.\\
However, the speed ratio can only asymptotically approach $1$ in the limits $\nu \rightarrow \pm \infty$. We note that, while the concept of ``infinite Poisson's ratio'' may be demonstrated as a dynamic-equivalency effective property in locally resonant metamaterials~\cite{ding2007metamaterial}, it is inherently frequency-dependent and narrow-band. Moreover, the resonance makes both group velocities vanish (i.e., flat bands), so a propagating non-paraxial phononic spin is still impossible. Therefore, for frequency-independent properties in the long-wavelength limit, while normal materials always have $c_\text{T}<c_\text{L}$ and exotic metamaterials may accomplish $c_\text{T}>c_\text{L}$, it is not possible to satisfy the \textit{equal-speed criterion}, $c_\text{T}=c_\text{L}$, with isotropy.\\
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{Ratio_speed.jpg}
\caption{\label{fig:F2} The ratio between the transverse and longitudinal wave speeds as a function of Poisson's ratio under the assumption of isotropy. The red dashed line represents the equal-speed criterion, $c_\text{T}/c_\text{L} = 1$.}
\end{figure}
\clearpage
\section{Wave speeds in anisotropic materials}
We follow the notations and conventions used in \cite{carcione2007wave} for discussions below. The equation of motion without body force is,
\begin{equation}
\boldsymbol{\nabla\cdot\sigma}=\rho\Ddot{\boldsymbol{u}},
\label{P1}\end{equation}
where $\boldsymbol{\sigma}$ is the stress vector in Voigt notation, $\rho$ is the mass density, $\boldsymbol{u}$ is the displacement, $\Ddot{\boldsymbol{u}}$ denotes its second time derivative, and $\boldsymbol{\nabla}$ is the spatial gradient operator in Voigt notation,\\
\begin{equation}
\bm{\nabla}=\left( \begin{array}{cccccc}
\partial_1 & 0 & 0 & 0 & \partial_3 & \partial_2\\
0 & \partial_2 & 0 & \partial_3 & 0 & \partial_1\\
0 & 0 & \partial_3 & \partial_2 & \partial_1 & 0
\end{array}
\right ).
\end{equation}
The stress vector can be obtained from the constitutive relation in terms of the displacement as
\begin{equation}
\boldsymbol{\sigma}=\boldsymbol{C \cdot \nabla}^\text{T}\boldsymbol{\cdot u},
\label{P2}\end{equation}
where $\bm{C}$ is the rank-2 ($6\times 6$) stiffness matrix in Voigt notation.\\
The general plane-wave solution is
\begin{equation}
\boldsymbol{u}=\boldsymbol{\tilde{u}} \exp[\text{i} (\boldsymbol{k\cdot r}-\omega t)],
\label{eq:71}\end{equation}
where $\boldsymbol{\tilde{u}}$ is the complex-valued displacement vector, $\boldsymbol{k}$ is the wave vector and $\boldsymbol{r}$ is the position vector. The wave propagates along the $\boldsymbol{\tilde{k}}$-direction,
\begin{equation}
\boldsymbol{\tilde{k}}=\frac{\bm{k}}{k}=l_1\boldsymbol{e}_1+l_2\boldsymbol{e}_2+l_3\boldsymbol{e}_3,
\label{eq:72}\end{equation}
where $l_1,l_2,l_3$ are the direction cosines of the wave vector and $k=|\bm{k}|$ is the wavenumber.\\
With the definition of $\textbf{L}$,
\begin{equation}
\textbf{L}=\left( \begin{array}{cccccc}
l_1 & 0 & 0 & 0 & l_3 & l_2\\
0 & l_2 & 0 & l_3 & 0 & l_1\\
0 & 0 & l_3 & l_2 & l_1 & 0
\end{array}
\right ),
\end{equation}
the spatial derivative operator is equivalent to,
\begin{equation}
\boldsymbol{\nabla} \leftrightarrow \text{i} k \textbf{L}.
\label{eq:73}\end{equation}
Therefore, substituting Eqs. \eqref{P2} and \eqref{eq:71} into \eqref{P1} and using \eqref{eq:72}--\eqref{eq:73}, we obtain
\begin{equation}
k^2 \boldsymbol{\Gamma \cdot u}=\rho \omega^2 \boldsymbol{u} \quad \text{or} \quad k^2 \Gamma_{ij}u_j=\rho \omega^2 u_i,
\label{eq:74}\end{equation}
where
\begin{equation}
\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T} \quad \text{or} \quad \Gamma_{ij}=L_{iI}C_{IJ}L_{Jj},
\end{equation}
is the Kelvin-Christoffel matrix with elastic stiffness $C_{IJ}$ in Voigt notation $(I,J=1,2,\dots,6)$. \comment{Here, the relation between the rank-4 elastic tensor $\mathbb{C}_{ijkl}$ and the rank-2 tensor $C_{IJ}$ is given by the mapping from $(i,j)$ to $I$ and from $(k,l)$ to $J$. For example, the correspondence $ij\rightarrow I$ is $(11,22,33,23,13,12)\rightarrow(1,2,3,4,5,6)$.}\\
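The index mapping described in the comment above can be encoded directly; the following Python sketch (the helper name \texttt{voigt} is ours) makes the correspondence explicit:

```python
# Voigt mapping: symmetric tensor index pair (i, j) -> single index I,
# with (11, 22, 33, 23, 13, 12) -> (1, 2, 3, 4, 5, 6)
VOIGT = {(1, 1): 1, (2, 2): 2, (3, 3): 3,
         (2, 3): 4, (1, 3): 5, (1, 2): 6}

def voigt(i, j):
    """Map a tensor index pair (i, j) to the Voigt index I (order-independent)."""
    return VOIGT[(min(i, j), max(i, j))]

# e.g. the tensor component C_2313 corresponds to C_45 in Voigt notation
print(voigt(2, 3), voigt(1, 3))  # 4 5
```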
More explicitly, the matrix \textbf{C} relates the stress and strain components in the following way:
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right).
\end{equation}
Denoting the phase velocity by $\boldsymbol{c}$:
\begin{equation}
\boldsymbol{c}=c \tilde{\boldsymbol{k}} \quad \text{with} \quad c=\frac{\omega}{k},
\end{equation}
Eq. \eqref{eq:74} can be rewritten as
\begin{equation}
\boldsymbol{\Gamma \cdot u}-\rho c^2 \boldsymbol{u}=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}=\boldsymbol{0}.
\label{eq:75}\end{equation}
Hence, the 3 eigenvalues of the 3-by-3 Kelvin-Christoffel matrix $\boldsymbol{\Gamma}$ have a one-to-one correspondence with the 3 wave speeds in the anisotropic solid.\\
We note that all the derivations are based on the Cartesian coordinate system, but it can be generalized to other coordinates in 3D, such as the cylindrical coordinates.\\
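The eigenvalue-to-speed correspondence can be checked numerically. The Python sketch below (function names are ours) assembles $\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T}$ for an arbitrary direction and, as a sanity check, recovers the familiar isotropic speeds $\sqrt{\mu/\rho}$ (twice) and $\sqrt{(\lambda+2\mu)/\rho}$:

```python
import numpy as np

def christoffel(C, khat):
    """Kelvin-Christoffel matrix Gamma = L . C . L^T for a 6x6 Voigt
    stiffness matrix C and a unit propagation direction khat = (l1, l2, l3)."""
    l1, l2, l3 = khat
    L = np.array([[l1, 0.0, 0.0, 0.0, l3, l2],
                  [0.0, l2, 0.0, l3, 0.0, l1],
                  [0.0, 0.0, l3, l2, l1, 0.0]])
    return L @ C @ L.T

def wave_speeds(C, khat, rho):
    """The three phase speeds c_j = sqrt(lambda_j / rho), in ascending order."""
    lam = np.linalg.eigvalsh(christoffel(C, khat))
    return np.sqrt(lam / rho)

# sanity check: isotropic solid with Lame constants lam0, mu
lam0, mu, rho = 2.0, 1.0, 1.0
C = np.zeros((6, 6))
C[:3, :3] = lam0
C[0, 0] = C[1, 1] = C[2, 2] = lam0 + 2.0 * mu
C[3, 3] = C[4, 4] = C[5, 5] = mu
print(wave_speeds(C, (0.0, 0.0, 1.0), rho))  # ~ [1., 1., 2.] = [c_T, c_T, c_L]
```

For the isotropic sanity case the speeds are direction-independent, as expected.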
\clearpage
\subsection{General (Triclinic)}
For the general anisotropic (a.k.a.\ triclinic) case, we have
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}
\end{array}
\right ).
\end{equation}
Then the expression for each element of the matrix $\boldsymbol{\Gamma}$ is,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{11}l_1^2+C_{66}l_2^2+C_{55}l_3^2+2C_{56}l_2l_3+2C_{15}l_1l_3+2C_{16}l_1l_2,\\
&\Gamma_{22}=C_{66}l_1^2+C_{22}l_2^2+C_{44}l_3^2+2C_{24}l_2l_3+2C_{46}l_1l_3+2C_{26}l_1l_2,\\
&\Gamma_{33}=C_{55}l_1^2+C_{44}l_2^2+C_{33}l_3^2+2C_{34}l_2l_3+2C_{35}l_1l_3+2C_{45}l_1l_2,\\
&\begin{split}\Gamma_{12}=&C_{16}l_1^2+C_{26}l_2^2+C_{45}l_3^2+(C_{46}+C_{25})l_2l_3+(C_{14}+C_{56})l_1l_3+(C_{12}+C_{66})l_1l_2,\end{split}\\
&\begin{split}\Gamma_{13}=&C_{15}l_1^2+C_{46}l_2^2+C_{35}l_3^2+(C_{45}+C_{36})l_2l_3+(C_{13}+C_{55})l_1l_3+(C_{14}+C_{56})l_1l_2,\end{split}\\
&\begin{split}\Gamma_{23}=&C_{56}l_1^2+C_{24}l_2^2+C_{34}l_3^2+(C_{44}+C_{23})l_2l_3+(C_{36}+C_{45})l_1l_3+(C_{25}+C_{46})l_1l_2.\end{split}
\end{split}
\label{P3}\end{equation}
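The closed-form entries above can be cross-checked against the product $\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T}$; a Python sketch for the $\Gamma_{12}$ entry, using a random symmetric stiffness matrix and a random direction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
C = A + A.T                              # generic symmetric (triclinic) stiffness
d = rng.standard_normal(3)
l1, l2, l3 = d / np.linalg.norm(d)       # random unit propagation direction

L = np.array([[l1, 0.0, 0.0, 0.0, l3, l2],
              [0.0, l2, 0.0, l3, 0.0, l1],
              [0.0, 0.0, l3, l2, l1, 0.0]])
Gamma = L @ C @ L.T

c = lambda I, J: C[I - 1, J - 1]         # 1-based Voigt indexing helper
Gamma12 = (c(1, 6) * l1**2 + c(2, 6) * l2**2 + c(4, 5) * l3**2
           + (c(4, 6) + c(2, 5)) * l2 * l3
           + (c(1, 4) + c(5, 6)) * l1 * l3
           + (c(1, 2) + c(6, 6)) * l1 * l2)
print(np.isclose(Gamma[0, 1], Gamma12))  # True
```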
Without loss of generality, we consider a wave propagating along a specific direction, e.g., the $z$-axis:
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\label{P4}\end{equation}
For other directions with different $(l_1,l_2,l_3)$, the analysis can be carried out similarly.\\
Plugging Eq. \eqref{P4} into \eqref{P3}, the Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=C_{45},\\
&\Gamma_{13}=C_{35},\\
&\Gamma_{23}=C_{34}.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{55}-\rho c^2 & C_{45} & C_{35}\\
C_{45} & C_{44}-\rho c^2 & C_{34}\\
C_{35} & C_{34} & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).\\
\end{split}
\label{eq:79}\end{equation}
Hence, the 3 wave speeds are determined by the 3 eigenvalues of the matrix $\boldsymbol{\Gamma}$:
\begin{equation}
c_j=\sqrt{\lambda_j/\rho},
\end{equation}
where $\lambda_j$ for $j=1,2,3$ are the eigenvalues of $\boldsymbol{\Gamma}$.\\
Eq. (\ref{eq:79}) may seem to indicate that all three waves are coupled together, preventing any spin polarizations, as we can no longer specify arbitrary phase differences freely.\\
However, we note that $\boldsymbol{\Gamma}$ is a real symmetric matrix, so it is diagonalizable by an orthogonal matrix $\textbf{Q}$:
\begin{equation}
\textbf{D} = \textbf{Q} \boldsymbol{\cdot} \boldsymbol{\Gamma} \boldsymbol{\cdot} \textbf{Q}^\text{T} =\left( \begin{array}{ccc}
\lambda_1 & 0 & 0\\
0 & \lambda_2 & 0\\
0 & 0 & \lambda_3
\end{array} \right ).
\label{diagonalization}\end{equation}
Since $\textbf{Q}$ is orthogonal, we have $\det{(\textbf{Q})} = \pm 1$. Then we can define:
\begin{equation}
\textbf{R} = \begin{cases}
\textbf{Q} &\text{if} \quad \det{(\textbf{Q})} = 1\\
-\textbf{Q} &\text{if} \quad \det{(\textbf{Q})} = -1
\end{cases}
\end{equation}
It is apparent that $\textbf{R}$ also diagonalizes $\boldsymbol{\Gamma}$. Since $\textbf{R}$ is an orthogonal matrix with determinant $+1$, it must be a rotation matrix in three-dimensional Euclidean space (i.e., an element of the special orthogonal group SO$(3)$). Therefore, by a rigid rotation of the material (or equivalently, by rotating the coordinate system), we can always find the directions in which the three waves, with mutually orthogonal displacement fields, are decoupled from each other. Thus, as long as $\boldsymbol{\Gamma}$ has degenerate (equal) eigenvalues, we still have the freedom to use phase differences between any two equal-speed modes to create a well-defined propagating spin angular momentum of the traveling wave.\\
In general, none of those decoupled modes needs to be parallel or perpendicular to the $z$-direction. Since the matrix $\boldsymbol{\Gamma}$ is built on the assumption of propagation along the $z$-direction: $\boldsymbol{\tilde{k}} = (l_1=0,l_2=0,l_3=1)$, we may not have pure longitudinal or pure transverse waves as independent and decoupled modes any more.\\
Next, we consider the special case of the ``ultimate" \textit{equal-speed criterion} for all three wave speeds to be the same. This needs all three eigenvalues of $\boldsymbol{\Gamma}$ to be equal ($\lambda=\lambda_1=\lambda_2 = \lambda_3$), then we have:\\
\begin{equation}
\textbf{D} = \lambda\textbf{I} \quad \Rightarrow \quad \boldsymbol{\Gamma} = \textbf{Q}^\text{T} \textbf{D} \textbf{Q}
= \lambda\textbf{Q}^\text{T} \textbf{Q} = \lambda\textbf{I},
\end{equation}
since $\textbf{Q}^\text{T}$\textbf{Q} = \textbf{I} always holds for any orthogonal matrix \textbf{Q}. Therefore, $\boldsymbol{\Gamma}$ must be a \textit{scalar matrix} of the form $\lambda\textbf{I}$. Consequently, the requirements become: \\
\begin{equation}\label{ultimate}
C_{33}=C_{44}=C_{55} \quad \text{and} \quad C_{35}=C_{45}=C_{34}=0.
\end{equation}
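This criterion can be verified numerically: imposing Eq. \eqref{ultimate} on an otherwise arbitrary triclinic stiffness matrix indeed collapses the $z$-direction Kelvin-Christoffel matrix to $\lambda\textbf{I}$. A Python sketch (the specific numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
C = A + A.T                              # generic triclinic stiffness matrix
lam = 3.0
for I in (2, 3, 4):                      # enforce C33 = C44 = C55 = lam
    C[I, I] = lam                        # (0-based indices 2,3,4 = Voigt 3,4,5)
for I, J in ((2, 3), (2, 4), (3, 4)):    # enforce C34 = C35 = C45 = 0
    C[I, J] = C[J, I] = 0.0

L = np.array([[0.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # direction cosines (0, 0, 1)
              [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])
Gamma = L @ C @ L.T
print(np.allclose(Gamma, lam * np.eye(3)))       # True: triple-degenerate speed
```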
\clearpage
\subsection{Orthotropic}
The stiffness matrix for the orthotropic case is,
\begin{equation}
\textbf{C}(\rm Orthotropic)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0\\
& C_{22} & C_{23} & 0 & 0 & 0\\
& & C_{33} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{55} & 0\\
& & & & & C_{66}
\end{array}
\right ), \quad (9 \ \rm constants).
\end{equation}
For the wave propagating along the $z$-direction,
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=0,\\
&\Gamma_{13}=0,\\
&\Gamma_{23}=0.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{55}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\end{equation}
Note that all three waves are decoupled. For the wave propagating along the $z$-direction, the requirement of equal-speed propagation is,
\begin{equation}
C_{33}=C_{55} \quad \text{in the $xz$-plane},
\end{equation}
\begin{equation}
C_{33}=C_{44} \quad \text{in the $yz$-plane}.
\end{equation}
\comment{
For the waves propagating along $x$- and $y$-directions, we can set the direction cosines accordingly, \\
\begin{equation}
l_1=1, \ l_2=0, \ l_3=0 \quad \text{for the $x$-direction}
\end{equation}
and
\begin{equation}
l_1=0, \ l_2=1, \ l_3=0 \quad \text{for the $y$-direction}
\end{equation}
Similarly, we can obtain the $\boldsymbol{\Gamma}$ matrices and then the corresponding governing equations. Because of the symmetry of the orthotropic material, we obtain very similar requirements for the rolling wave.\\
For waves propagating along the $x$-direction, the requirements are
\begin{equation}
C_{11}=C_{66}
\end{equation}
in the $xy$-plane and
\begin{equation}
C_{11}=C_{55}
\end{equation}
in the $xz$-plane.\\
For waves propagating along the $y$-direction, the requirements are
\begin{equation}
C_{22}=C_{66}
\end{equation}
in the $xy$-plane and
\begin{equation}
C_{22}=C_{44}
\end{equation}
in the $yz$-plane.
}
\clearpage
\subsection{Transversely Isotropic}
The stiffness matrix for a transversely isotropic material is defined by 5 independent elastic constants,
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0\\
& C_{11} & C_{13} & 0 & 0 & 0\\
& & C_{33} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{44} & 0\\
& & & & & C_{66}
\end{array}
\right ),
\end{equation}
where $C_{66}$ is not independent,
\begin{equation}
C_{66}=\frac{C_{11}-C_{12}}{2}.
\end{equation}
The transversely isotropic material defined above is isotropic in the $xy$-plane and anisotropic along the $z$-direction. \\
For the wave propagating along the $z$-direction,
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{44},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=0,\\
&\Gamma_{13}=0,\\
&\Gamma_{23}=0.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{44}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\label{eq:76}\end{equation}
All three waves are decoupled. The longitudinal wave is governed by $C_{33}$ and both shear waves by $C_{44}$. Therefore, the requirement of equal-speed propagation along the $z$-direction is,
\begin{equation}
C_{33}=C_{44}.
\end{equation}
\clearpage
\subsection{Cubic}
The stiffness matrix for cubic material is,
\begin{equation}
\textbf{C}(\rm Cubic)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{12} & 0 & 0 & 0\\
& C_{11} & C_{12} & 0 & 0 & 0\\
& & C_{11} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{44} & 0\\
& & & & & C_{44}
\end{array}
\right ), \quad \left(3 \ \rm constants\right ).
\end{equation}
All 3 coordinate-axis directions are equivalent due to the cubic symmetry. Therefore, it is sufficient to analyze the $x$-direction only.\\
For the wave propagating along the $x$-direction, the direction cosines are
\begin{equation}
l_1=1, \ l_2=0, \ l_3=0.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
\Gamma_{11}&=C_{11},\\
\Gamma_{22}&=C_{44},\\
\Gamma_{33}&=C_{44},\\
\Gamma_{12}=\Gamma_{13}&=\Gamma_{23}=0.\\
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{11}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{44}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\label{eq:86}\end{equation}
Here, $u_1$ represents the longitudinal wave and $u_2,u_3$ represent the transverse waves. Therefore, from Eq. \eqref{eq:86}, we obtain
\begin{equation}
c_\text{L}=\sqrt{\frac{C_{11}}{\rho}},\ \ c_\text{T}=\sqrt{\frac{C_{44}}{\rho}}.
\end{equation}
Thus, to generate the rolling wave in the $x$-direction, we only need to have,
\begin{equation}
C_{11}=C_{44}.
\label{cubic_criterion}
\end{equation}
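The cubic case can be checked with a short numerical sketch (the function name is ours): along a cube axis the speeds are $\sqrt{C_{44}/\rho}$ (twice) and $\sqrt{C_{11}/\rho}$, and they coincide once $C_{11}=C_{44}$:

```python
import numpy as np

def speeds_cubic(C11, C12, C44, rho, khat):
    """Phase speeds of a cubic solid along the unit direction khat."""
    C = np.diag([C11, C11, C11, C44, C44, C44]).astype(float)
    for i, j in ((0, 1), (0, 2), (1, 2)):
        C[i, j] = C[j, i] = C12
    l1, l2, l3 = khat
    L = np.array([[l1, 0.0, 0.0, 0.0, l3, l2],
                  [0.0, l2, 0.0, l3, 0.0, l1],
                  [0.0, 0.0, l3, l2, l1, 0.0]])
    return np.sqrt(np.linalg.eigvalsh(L @ C @ L.T) / rho)

# along the x-axis: c_T = sqrt(C44/rho) (twice) and c_L = sqrt(C11/rho)
print(speeds_cubic(4.0, 1.5, 1.0, 1.0, (1.0, 0.0, 0.0)))  # ~ [1., 1., 2.]
# with the criterion C11 = C44 all three speeds coincide
print(speeds_cubic(2.0, 1.5, 2.0, 1.0, (1.0, 0.0, 0.0)))  # all equal sqrt(2)
```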
\clearpage
\subsection{Plane Strain}
The plane-strain condition in the $xz$-plane for a general anisotropic material is,
\begin{equation}
\boldsymbol{\epsilon}=\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right)
=\left( \begin{array}{c}
\epsilon_{xx} \\ 0 \\ \epsilon_{zz} \\ 0 \\ 2\epsilon_{xz} \\ 0
\end{array}\right).
\end{equation}
The general elastic stiffness tensor in Voigt notation is,
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}
\end{array}
\right ).
\end{equation}
Then, by substituting into the constitutive relation $\boldsymbol{\sigma}=\boldsymbol{C\cdot\epsilon}$, we have
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ 0 \\ \epsilon_{zz} \\ 0 \\ 2\epsilon_{xz} \\ 0
\end{array}\right).
\end{equation}
Therefore, for the 2D plane strain cases, we only need to consider the following stress components,
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{zz} \\ \sigma_{xz}
\end{array}\right)=\left( \begin{array}{ccc}
C_{11} & C_{13} & C_{15}\\
C_{13} & C_{33} & C_{35}\\
C_{15} & C_{35} & C_{55}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{zz} \\ 2\epsilon_{xz}
\end{array}\right).
\end{equation}
The governing equation of plane strain situation becomes,
\begin{equation}
(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}=\boldsymbol{0},
\end{equation}
where $\boldsymbol{\Gamma}$ is the Kelvin-Christoffel matrix in $xz$-plane,
\begin{equation}
\boldsymbol{\Gamma}=\left( \begin{array}{cc}
\Gamma_{11} & \Gamma_{13}\\
\Gamma_{13} & \Gamma_{33}
\end{array} \right ),
\end{equation}
which is from Eq. \eqref{P3}.\\
For a wave propagating along the $z$-direction,
\begin{equation}
l_1=0,\ \ l_2=0,\ \ l_3=1.
\end{equation}
The corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{13}=C_{35}.
\end{split}
\end{equation}
The governing equation of plane strain situation becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \cdot\boldsymbol{u}\\
&=\left( \begin{array}{cc}
C_{55}-\rho c^2 & C_{35}\\
C_{35} & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_3
\end{array}\right).
\end{split}
\end{equation}
Hence, for $u_1$ and $u_3$ to be decoupled and to propagate at the same wave speed, the requirements are $C_{35} = 0$ and $C_{55} = C_{33}$.
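As a quick numerical check of this plane-strain requirement (the numbers are illustrative), the 2-by-2 eigenvalue problem gives a single shared speed when $C_{35}=0$ and $C_{55}=C_{33}$:

```python
import numpy as np

rho = 1.0
C33, C55, C35 = 2.0, 2.0, 0.0            # satisfies C35 = 0 and C55 = C33
Gamma = np.array([[C55, C35],            # 2x2 in-plane Christoffel matrix
                  [C35, C33]])           # for z-propagation in the xz-plane
speeds = np.sqrt(np.linalg.eigvalsh(Gamma) / rho)
print(speeds)                            # both decoupled modes share sqrt(2)
```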
\clearpage
\section{The Square Lattice}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{fig-square.png}
\caption{\label{fig:sq} (a) The unit cell of a 2D square lattice. It is constructed by taking mirror images of the green quarter. All red straight line segments are of length $L$. The structure illustrated here has $L/a=0.2$. The assumed propagation direction along the $z$-axis corresponds to the diagonal (i.e., 45 degree) direction in the square lattice. (b) The non-dimensional elastic constant results with varying $L/a$.}
\end{figure}\\
\noindent Some previous studies~\cite{phani2006,wang2015locally} used the Bloch-wave formulation to calculate dispersion relations for the square lattice of beams. The band structures presented in those studies seem to indicate nearly equal-speed propagation at the low-frequency long-wavelength limit in the diagonal (45 degree) direction of the square.\\
Unfortunately, our calculations, as presented in Fig.\,\ref{fig:sq}, show that the square lattice can only asymptotically approach the limit of $c_\text{L} = c_\text{T}$ (equivalent to $C_{33}=C_{55}$ according to Eq. (11a) in the main text) when the beam width $\rightarrow 0$. This fact renders square lattices with finite bending stiffness impractical for hosting rolling waves.\\
For completeness, we also perform the band structure calculations to match the results in \cite{phani2006,wang2015locally}, as well as the zoomed-in calculations in the long-wavelength limit. We note that the ``slenderness ratio" defined in both studies is equivalent to $(a/\sqrt{2})\sqrt{(E (\sqrt{2}L)^2)/(E\frac{(\sqrt{2}L)^4}{12})} = \sqrt{3}(a/L)$ for the parameters defined in Fig.\,\ref{fig:sq}(a), since we actually have the beam thickness = $\sqrt{2}L$ and the conventional square unit cell size = $a/\sqrt{2}$.
\comment{
The normalized frequency is defined by,
\begin{equation}
\Omega=\frac{\omega}{\omega_1},\ \ \text{with}\ \ \omega_1=\pi^2\sqrt{\frac{EI}{\rho (\sqrt{2}L)(a/\sqrt{2})^4}}.
\label{norm_freq}
\end{equation}
}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.42]{Band.png}
\caption{Dispersion relations of the square lattice with different beam thickness: (a) Same slenderness ratio of 20 as \cite{wang2015locally}. (b) Same slenderness ratio of 50 as \cite{phani2006}. (c) Schematic of the square lattice. (d) The Brillouin zone of the square lattice. Following the same convention as \cite{phani2006}, the normalized frequency is defined as $\Omega=\frac{\omega}{\omega_1}$
with
$\omega_1=\pi^2\sqrt{\frac{EI}{\rho_{A} (\sqrt{2}L)(a/\sqrt{2})^4}}$, where $\rho_{A}$ denotes mass per unit area, or equivalently the ``2D mass density".}
\label{fig:sq1}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.42]{Band-zoom-ab.png}
\caption{Zoomed-in dispersion relations along the $\text{M}\rightarrow\text{G}$ direction close to G: (a) Same slenderness ratio of 20 as \cite{wang2015locally}. (b) Same slenderness ratio of 50 as \cite{phani2006}.}
\label{fig:sq2}
\end{figure}
\clearpage
\section{An alternative 3D design}
We may also use the structure shown as Fig. 2(c) in the main text to build 3D designs. For example, we can directly use the 2D pattern as each of the six faces of a cube (Fig. \ref{fig:3D-alternative}(a)). This is very similar to the geometry used in \cite{buckmann2014three}, but here we have removed structures inside the unit cube for simplicity. As duly noted in \cite{buckmann2014three}, this 3D geometry itself does not have the symmetry of the cubic crystallographic point groups, which are characterised by the four threefold rotation axes along the body diagonals of a cube. Hence, there is no \textit{a priori} guarantee that it will result in an elasticity tensor with cubic symmetry.\\
On the other hand, our numerical calculations confirm that it can still meet the \textit{equal-speed criterion}. Here, each face of the unit cube has a uniform out-of-plane thickness $h_1$ (due to spatial periodicity the metamaterial has a wall thickness of $2h_1$). With the cube edge length being $a$, we fix the parameters as $2h_1=b_3=b_4=0.05a$ and $b_5=0.3221a$. Then we vary $b_1/a$ and $b_2/a$ to calculate the elastic constants in each case. The results are shown in Fig. \ref{fig:3D-alternative}(b). The equal-speed criterion is met at the intersection of the two (blue and yellow) surfaces in the parameter space.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{fig-3D-auxetic.png}
\caption{\label{fig:3D-alternative} (a) Auxetic unit cell design similar to the geometry used in \cite{buckmann2014three}. Colors are for visual distinction. All six faces are identical. (b) The non-dimensional elastic constants results with varying parameters $b_1/a$ and $b_2/a$. Geometry in (a) corresponds to the circled point in (b).}
\end{figure}
\clearpage
\section{Reflection of Rolling Wave at Elastic Boundary}
\subsection{Normal incidence and reflection}
Omitting the time-harmonic term $e^{-i\omega t}$ and assuming the principal wave displacement directions coincide with the coordinate axes, we consider a general plane wave propagating along the $z$-direction incident on a flat surface (the $xy$-plane at $z=0$),
\begin{equation}
\boldsymbol{u}^\text{I}=\left( \begin{array}{c}
m^\text{I} \\ n^\text{I} \\ l^\text{I}
\end{array}\right)e^{ikz}
\end{equation}
where I denotes the incident wave.\\
The backward reflection wave can be written as,
\begin{equation}
\boldsymbol{u}^\text{R}=\left( \begin{array}{c}
m^\text{R} \\ n^\text{R} \\ l^\text{R}
\end{array}\right)e^{-ikz}
\end{equation}
where R denotes the reflected wave.\\
The strains can be calculated from displacements by
\begin{equation}
\epsilon_{ij}=\frac{1}{2}(u_{i,j}+u_{j,i})
\end{equation}
where the comma ``,'' denotes the partial derivative.\\
The reflection occurs at $z=0$, so $e^{ikz} = e^{-ikz} = 1$ there, and the strain vectors in Voigt notation become,
\begin{equation}
\boldsymbol{\epsilon}^\text{I}=\left( \begin{array}{c}
\epsilon^\text{I}_{xx} \\ \epsilon^\text{I}_{yy} \\ \epsilon^\text{I}_{zz} \\ 2\epsilon^\text{I}_{yz} \\ 2\epsilon^\text{I}_{xz} \\ 2\epsilon^\text{I}_{xy}
\end{array}\right)=\left( \begin{array}{c}
0 \\ 0 \\ l^\text{I} ik \\ n^\text{I} ik \\ m^\text{I} ik \\ 0
\end{array}\right),\ \
\boldsymbol{\epsilon}^\text{R}=\left( \begin{array}{c}
\epsilon^\text{R}_{xx} \\ \epsilon^\text{R}_{yy} \\ \epsilon^\text{R}_{zz} \\ 2\epsilon^\text{R}_{yz} \\ 2\epsilon^\text{R}_{xz} \\ 2\epsilon^\text{R}_{xy}
\end{array}\right)=\left( \begin{array}{c}
0 \\ 0 \\ -l^\text{R} ik \\ -n^\text{R} ik \\ -m^\text{R} ik \\ 0
\end{array}\right).
\label{strain_IR}
\end{equation}
Here we assume the effective orthotropic constitutive relation in Voigt notation:
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{12} & C_{22} & C_{23} & 0 & 0 & 0 \\
C_{13} & C_{23} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{66}
\end{array}\right)\cdot
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right).
\end{equation}
\comment{with
\begin{equation}
\boldsymbol{\sigma}=\left( \begin{array}{ccc}
\sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{array}\right)
\end{equation}
denotes the stress tensor.}\\
It is easy to compute the corresponding stress components for the incident and reflected waves:
\begin{equation}
\left( \begin{array}{c}
\sigma^\text{I}_{xx} \\ \sigma^\text{I}_{yy} \\ \sigma^\text{I}_{zz} \\ \sigma^\text{I}_{yz} \\ \sigma^\text{I}_{xz} \\ \sigma^\text{I}_{xy}
\end{array}\right)=\left( \begin{array}{c}
C_{13}l^\text{I} ik \\ C_{23}l^\text{I} ik \\ C_{33}l^\text{I} ik \\ C_{44}n^\text{I} ik \\ C_{55}m^\text{I} ik \\ 0
\end{array}\right),\ \
\left( \begin{array}{c}
\sigma^\text{R}_{xx} \\ \sigma^\text{R}_{yy} \\ \sigma^\text{R}_{zz} \\ \sigma^\text{R}_{yz} \\ \sigma^\text{R}_{xz} \\ \sigma^\text{R}_{xy}
\end{array}\right)=\left( \begin{array}{c}
-C_{13}l^\text{R} ik \\ -C_{23}l^\text{R} ik \\ -C_{33}l^\text{R} ik \\ -C_{44}n^\text{R} ik \\ -C_{55}m^\text{R} ik \\ 0
\end{array}\right).
\end{equation}
For the elastically supported cubic half-space, the boundary conditions are
\begin{equation}
\sigma^0_{zx}=K_xu^0_x,\ \ \sigma^0_{zy}=K_yu^0_y,\ \ \sigma^0_{zz}=K_zu^0_z,
\end{equation}
where $(K_x,K_y,K_z)$ are components of \underline{distributed stiffness per unit area}~\cite{zhang2017reflection} representing a general elastic foundation supporting the solid surface. The stress and displacement summations are
\begin{equation}
\sigma^0_{zx}=\sigma^\text{I}_{zx}+\sigma^\text{R}_{zx},\ \ \sigma^0_{zy}=\sigma^\text{I}_{zy}+\sigma^\text{R}_{zy},\ \ \sigma^0_{zz}=\sigma^\text{I}_{zz}+\sigma^\text{R}_{zz}.
\end{equation}
\begin{equation}
u^0_x=u^\text{I}_x+u^\text{R}_x,\ \ u^0_y=u^\text{I}_y+u^\text{R}_y,\ \ u^0_z=u^\text{I}_z+u^\text{R}_z.
\end{equation}
Substituting the displacement and stress components into the elastic boundary conditions gives,
\begin{equation}
C_{55}m^\text{I} ik-C_{55}m^\text{R} ik=K_x(m^\text{I}+m^\text{R}),
\end{equation}
\begin{equation}
C_{44}n^\text{I} ik-C_{44}n^\text{R} ik=K_y(n^\text{I}+n^\text{R}),
\end{equation}
\begin{equation}
C_{33}l^\text{I} ik-C_{33}l^\text{R} ik=K_z(l^\text{I}+l^\text{R}).
\end{equation}
Solving this set of equations gives,
\begin{equation}
m^\text{R}=\frac{C_{55} ik-K_x}{C_{55} ik+K_x}m^\text{I},
\end{equation}
\begin{equation}
n^\text{R}=\frac{C_{44} ik-K_y}{C_{44} ik+K_y}n^\text{I},
\end{equation}
\begin{equation}
l^\text{R}=\frac{C_{33} ik-K_z}{C_{33} ik+K_z}l^\text{I}.
\end{equation}
Because of the rolling-wave requirement ($C_{33}=C_{44}=C_{55}$), the amplitudes of the reflected wave become,
\begin{equation}
m^\text{R}=R_x m^\text{I},\ \ n^\text{R}=R_y n^\text{I},\ \ l^\text{R}=R_z l^\text{I},
\end{equation}
with
\begin{equation}
\label{Rxyz}
R_x=\frac{C_{33} ik-K_x}{C_{33} ik+K_x},\ \
R_y=\frac{C_{33} ik-K_y}{C_{33} ik+K_y},\ \
R_z=\frac{C_{33} ik-K_z}{C_{33} ik+K_z}.
\end{equation}
Clearly, the reflected wave amplitudes depend on the spring stiffness. Moreover, the elastic boundary condition degenerates into the traction-free boundary condition when the stiffness $K_j=0$. Then the amplitudes of the reflected wave become
\begin{equation}
m^\text{R}=m^\text{I},\ \ n^\text{R}=n^\text{I},\ \ l^\text{R}=l^\text{I}.
\end{equation}
Similarly, the elastic boundary condition degenerates into the fixed boundary condition when the stiffness $K_j=\infty$. Then the amplitudes of the reflected wave become
\begin{equation}
m^\text{R}=-m^\text{I},\ \ n^\text{R}=-n^\text{I},\ \ l^\text{R}=-l^\text{I}.
\end{equation}
In both cases above, by the definition of spin given in Eq. (2) of the main text, we can conclude $\boldsymbol{s}^\text{R}=\boldsymbol{s}^\text{I}$, so the spin is unaffected by reflection.\\
Next, we consider in-$xz$-plane waves with $n^{\text{I}}=n^{\text{R}}=0$. For the free-rigid hybrid boundary ($K_x=0$, $K_z=\infty$), we have
\begin{equation}
m^\text{R}=m^\text{I},\ \ l^\text{R}=-l^\text{I} \quad \Rightarrow \quad \boldsymbol{s}^\text{R}=-\boldsymbol{s}^\text{I}.
\end{equation}
Similarly, for the rigid-free hybrid boundary ($K_x=\infty$, $K_z=0$), we have
\begin{equation}
m^\text{R}=-m^\text{I},\ \ l^\text{R}=l^\text{I} \quad \Rightarrow \quad \boldsymbol{s}^\text{R}=-\boldsymbol{s}^\text{I}.
\end{equation}
Thus, both hybrid boundaries will flip the spin for any incident rolling wave. \\
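The limiting behaviors above can be confirmed numerically. The sketch below (the function name \texttt{R} and the sample values are ours) evaluates the amplitude ratio of Eq. (\ref{Rxyz}) for free, near-rigid, and finite-stiffness boundaries:

```python
import numpy as np

def R(K, C33, k):
    """Normal-reflection amplitude ratio (i*C33*k - K) / (i*C33*k + K)."""
    return (1j * C33 * k - K) / (1j * C33 * k + K)

C33, k = 2.0, 3.0
print(R(0.0, C33, k))       # free surface (K = 0): +1, amplitudes unchanged
print(R(1e12, C33, k))      # near-rigid limit (K -> inf): approaches -1
print(abs(R(5.0, C33, k)))  # |R| = 1: all wave energy is reflected
```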
\subsection{Complex-valued amplitude ratio $R_j$}
The effects of normal reflection can be described by the generalized amplitude ratio in Eq. (\ref{Rxyz}):
\begin{equation}
R_j=\frac{C_{33} ik-K_j}{C_{33} ik+K_j},\ \ j=x,y,z.
\end{equation}
This complex non-dimensional parameter $R_j$ relates the incident and reflected waves and deserves further analysis. We note that $|R_j|=1$ is consistent with the fact that all wave energy is reflected, and the phase angle $\phi$ represents the phase change during the reflection. Hence, we have \\
\begin{equation}
R_j = e^{i\phi} \quad \text{with} \quad \tan{\phi}=\frac{2{K_j}/{C_{33}k}}{1-({K_j}/{C_{33}k})^2}
\end{equation}
which depends on the boundary-bulk stiffness ratio $K_j/(C_{33}k)$.
By varying this ratio, we plot the real and imaginary parts of $R_j$ as well as the phase angle $\phi$ in Figure \ref{fig:Rj}. Plotted against the logarithmic magnitude of the boundary-bulk stiffness ratio, these values show typical symmetric and anti-symmetric properties.
Therefore, by adjusting the elastic stiffness at the boundary, one can manipulate the spin of the reflection waves.
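As a numerical cross-check of the phase formula, writing $x=K_j/(C_{33}k)$ lets us compare $\arg R_j$ against the stated $\tan\phi$ expression (a Python sketch with an illustrative value of $x$):

```python
import numpy as np

x = 0.7                       # boundary-bulk stiffness ratio K_j / (C33 * k)
Rj = (1j - x) / (1j + x)      # R_j rewritten in terms of x
phi = np.angle(Rj)
print(abs(Rj))                                       # 1.0: energy conserved
print(np.isclose(np.tan(phi), 2 * x / (1 - x**2)))   # True: matches tan(phi)
```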
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{Amp_Ratio.pdf}
\caption{\label{fig:Rj} The amplitude ratio $R_j$ between the reflected and incident waves. }
\end{figure}\\
Although all properties of both the bulk and the reflection surface are assumed to be independent of the incident wave frequency, here the reflection phase change can be frequency-dependent, as the angular wave number $k$ appears in the ratio. The emergence of frequency dependency can be intuitively explained by the role of wavelength during reflection:\\
We note from Eq. (\ref{strain_IR}) that $k$ first appears in the strain calculations since, for a fixed wave displacement amplitude, the strains in the propagation direction, $\epsilon_{zj}$, are actually inversely proportional to the wavelength: Longer wavelength gives rise to a smaller strain and vice versa. Consequently, the stresses, $\sigma_{zj}$, and the force acting on the boundary springs are wavelength-dependent as well. If the boundary is supported by an elastic foundation with finite stiffness per area, $K_j$, we have the following:\\
At the low-frequency and long-wavelength limit, the force per area acting on the boundary, $|\sigma_{zj}| \propto C_{33}k \rightarrow 0$. So, the boundary hardly moves, and the incident wave effectively ``sees" a rigid surface;\\
At the high-frequency and short-wavelength limit, the force per area acting on the boundary, $|\sigma_{zj}| \propto C_{33}k \rightarrow \infty$. So, the boundary moves a lot, and the incident wave effectively ``sees" a free surface.\\
\subsection{Time domain simulations}
Fig.\,\ref{fig:SIF3} shows the time-domain finite element simulations of the rolling wave inside a 2D anisotropic plane with the required elastic constants. This illustrates the satisfaction of the equal-speed criterion and the feasibility for an anisotropic material to host the propagating rolling wave.\\
Fig.\,\ref{fig:SIF4} shows the time-domain finite element simulations of the rolling wave inside the designed structured plane. This illustrates the capability of the structure to host the propagating rolling wave. By monitoring a specific point, the time evolution of the displacement data was extracted and analyzed. It is found that the spin property is preserved under the fully-fixed and fully-free boundary conditions, while being flipped under the hybrid fixed-free and hybrid free-fixed boundary conditions.\\
The results show that, in both models, the longitudinal and transverse waves propagate at the same speed. The numerical observations verify that both the elastic constants and the structure can host rolling waves. Moreover, the spin of the rolling wave can be altered by different boundary conditions. This offers the potential of using simple edges and surfaces in future applications of rolling waves.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.8]{TD.png}
\caption{\label{fig:SIF3} (a) The schematic of the setup of \textsc{comsol} anisotropic plane model. (b) The displacement fields $u_x$ and $u_y$ at time $t=2$s. (c) The displacement fields $u_x$ and $u_y$ at time $t=4$s. (d) The displacement fields $u_x$ and $u_y$ at time $t=7$s. }
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.8]{TD-SS.png}
\caption{\label{fig:SIF4} (a) The schematic of the setup of \textsc{comsol} micro-structured plane model. (b) The displacement fields $u_x$ and $u_y$ at time $t=20$s. (c) The displacement fields $u_x$ and $u_y$ at time $t=40$s. (d) The displacement fields $u_x$ and $u_y$ at time $t=70$s. }
\end{figure}
\clearpage
\section{Numerical Procedures}
The 2D and 3D unit cell geometries are designed by finite element calculations in \textsc{abaqus} to satisfy the requirements of elastic constants for different anisotropic cases. \\
For 2D plane strain cases, square unit cells are used. We first build one quarter of the unit cell and its mesh. Then, by symmetry operations, the other parts with mesh are generated. This is the easiest way to guarantee a one-to-one correspondence between boundary node-pairs, making the application of proper periodic boundary conditions possible. By prescribing the unit cell deformation, the effective elastic constants can be obtained by averaging the element stresses.\\
Similarly, for 3D cases, we first build the one-eighth structure and then make the symmetry operations. With periodic boundary conditions, we prescribe the unit cell deformation and calculate the average stress components to obtain the effective elastic constants.\\
In addition, 2D time-domain simulations by \textsc{comsol} are conducted to illustrate the reflections of rolling waves from different boundaries. We use two different time-domain models: a) the anisotropic medium with elastic constants satisfying the requirements listed in Eq.\,(11a) of the main text; and b) the periodic micro-structured lattice with the unit geometry shown in Fig.\,2(a) of the main text. The parameters used in the simulations are set to be at the long-wavelength limit with $\lambda_0/a=4\pi$ ($\lambda_0$ is the wavelength, $a$ is the unit cell size). For both models, periodic boundary conditions are applied to the top and bottom boundaries. The displacement boundary conditions with rolling excitation $(u_x,u_y)=({\rm sin}(\omega t), {\rm cos}(\omega t))$ are prescribed at the left edge. The right side, as the reflection surface, is prescribed with different boundary conditions, i.e., fixed, stress-free, hybrid fixed-free and hybrid free-fixed. \\
\noindent Since the numerical procedures employed in this study might be useful in a variety of applications, we make the codes available for free download, advocating for an open-source initiative in the research community:\\
\begin{enumerate}
\item ``ABAQUS-2D.zip" - An Abaqus Python script for the 2D geometry in Fig. 2(a).
\item ``ABAQUS-3D.zip" - An Abaqus Python script for the 3D geometry in Fig. 3(c).
\item ``COMSOL.zip" - A time-domain simulation for the results in Fig.\,\ref{fig:SIF4}.
\end{enumerate}
\clearpage
\printbibliography
\end{document}
\section{Other non-paraxial spins}
There is no limit to the directions of spin vectors of elastic waves, or equivalently, phonon spins in general. In addition to the examples shown in the main text, Fig.\,S1 shows more categories of traveling waves with various types of propagating non-paraxial spins. This full range of spin degrees of freedom brings great potential in future applications.\\
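To make the spin categories above concrete, the (unnormalized) spin vector of a polarization state $(m,n,l)$ can be evaluated from $\boldsymbol{S}\propto\operatorname{Im}(\boldsymbol{\tilde u}^*\times\boldsymbol{\tilde u})$, the standard form of the elastic spin density up to a prefactor. The short sketch below is our own illustration, not part of the released code:

```python
import numpy as np

def spin_direction(m, n, l):
    """Unnormalized spin vector Im(u* x u) for the polarization state (m, n, l)."""
    u = np.array([m, n, l], dtype=complex)
    return np.imag(np.cross(np.conj(u), u))

# (m, n, l) = (i, 0, 1): spin perpendicular to the propagation (z) axis
print(spin_direction(1j, 0, 1))   # -> [0. 2. 0.]
# (m, n, l) = (1, i, 0): the usual paraxial spin, parallel to z
print(spin_direction(1, 1j, 0))   # -> [0. 0. 2.]
# A purely real (linear) polarization carries no spin
print(spin_direction(1, 0, 1))    # -> [0. 0. 0.]
```

The first case reproduces the transverse spin of panels (a,b) in the figure; generic complex triplets such as $(1+i,2,1-i)$ give tilted, non-paraxial spin directions.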
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig-1-SI.png}
\caption{\label{fig:F1} The extended spin categories: (a,b) Spin vector perpendicular to wavenumber vector with $(m,n,l) = (i,0,1)$ and $(-i,0,1)$, respectively. (c,d) Other non-paraxial spin directions with $(m,n,l) = (1+i,2,1-i)$ and $(1-i,1+i,2)$, respectively. }
\end{figure}
\clearpage
\section{Wave speeds in isotropic materials}
We consider the frequency-independent wave speeds at the long-wavelength limit, i.e., the quasi-static limit, in an elastic material with isotropy, which is often desirable in applications~\cite{Chen2020Isotropic}. The elasticity can be described by two independent constants~\cite{Landau1970}. For simplicity of discussion, here we use shear modulus $\mu$ and Poisson's ratio $\nu$.\\
The velocities of the transverse wave, $c_\text{T}$, and the longitudinal wave, $c_\text{L}$, are:\\
\begin{equation}
\begin{split}
c_\text{T} &=\sqrt{\frac{\mu}{\rho}}\\
c_\text{L} &=\sqrt{\frac{2\mu(1-\nu)}{\rho(1-2\nu)}}\\
\end{split}
\end{equation}
where $\rho$ is the mass density of the material. The wave speed ratio is:\\
\begin{equation}
c_\text{T}/c_\text{L} = \sqrt{\frac{1-2\nu}{2(1-\nu)}} = \sqrt{1-\frac{1}{2(1-\nu)}}.
\label{ratio}\end{equation} \\
The classical upper bound given by Landau \& Lifshitz~\cite{Landau1970} is
\begin{equation}
c_\text{T}/c_\text{L} < \sqrt{3}/2 < 1.
\end{equation}
This bound is based on the limits of $\nu$:
\begin{equation}
-1 < \nu < 1/2
\end{equation}
which is obtained by requiring both bulk and shear moduli to be positive.\\
With modern metamaterial concepts in mind, we now know that it is possible to achieve negative bulk modulus (e.g., post-buckling structures, active materials with energy source / sink, etc.). To explore the possibilities in the most general case, we assume that the ratio $c_\text{T}/c_\text{L}$ defined in Eq. (\ref{ratio}) has no other constraints whatsoever. In Fig. \ref{fig:F2}, we plot the possible speed-ratio values by varying $\nu$. It is clear from the graph that we can actually achieve $c_\text{T}/c_\text{L}>1$ for any $\nu>1$, which indeed implies a negative bulk modulus for any positive shear modulus.\\
However, the speed ratio can only asymptotically approach $1$ at the limits of $\nu \rightarrow \pm \infty$. We note that, while the concept of ``infinite Poisson's ratio'' may be demonstrated as a dynamic-equivalency effective property in locally resonant metamaterials~\cite{ding2007metamaterial}, it is inherently frequency-dependent and narrow-band. Also, the resonance will make both group velocities vanish (i.e., a flat band). In such a case, we still cannot have a propagating non-paraxial phononic spin. Therefore, for frequency-independent properties in the long-wavelength limit, while normal materials always have $c_\text{T}<c_\text{L}$ and exotic metamaterials may accomplish $c_\text{T}>c_\text{L}$, it is not possible to satisfy the \textit{equal-speed criterion}, $c_\text{T}=c_\text{L}$, with isotropy.\\
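Eq. (\ref{ratio}) is easy to probe numerically; the following sketch (the function name is ours) reproduces the behavior plotted in Fig. \ref{fig:F2}:

```python
import math

def speed_ratio(nu):
    """c_T / c_L from the isotropic ratio formula, as a function of Poisson's ratio."""
    return math.sqrt(1.0 - 1.0 / (2.0 * (1.0 - nu)))

# Inside the classical range -1 < nu < 1/2 the ratio stays below sqrt(3)/2
assert speed_ratio(-1 + 1e-9) < math.sqrt(3) / 2
# For nu > 1 (negative bulk modulus regime) the ratio exceeds 1 ...
assert speed_ratio(2.0) > 1.0
# ... but it only approaches 1 asymptotically as |nu| grows
assert abs(speed_ratio(1e6) - 1.0) < 1e-3
```

The equality $c_\text{T}/c_\text{L}=1$ is never reached at any finite $\nu$, consistent with the conclusion above.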
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{Ratio_speed.jpg}
\caption{\label{fig:F2} The ratio between transverse wave speed and longitudinal wave speed in terms Poisson's ratio under the assumption of isotropy. The red dashed line represents the equal-speed criterion of $c_\text{T}/c_\text{L} = 1$.}
\end{figure}
\clearpage
\section{Wave speeds in anisotropic materials}
We follow the notations and conventions used in \cite{carcione2007wave} for discussions below. The equation of motion without body force is,
\begin{equation}
\boldsymbol{\nabla\cdot\sigma}=\rho\Ddot{\boldsymbol{u}},
\label{P1}\end{equation}
where $\boldsymbol{\sigma}$ is the stress vector in Voigt notation, $\rho$ is the density, $\boldsymbol{u}$ is the displacement, $\Ddot{\boldsymbol{u}}$ denotes the second-order time derivative of the displacement, and $\boldsymbol{\nabla}$ is the spatial gradient operator in Voigt notation,\\
\begin{equation}
\bm{\nabla}=\left( \begin{array}{cccccc}
\partial_1 & 0 & 0 & 0 & \partial_3 & \partial_2\\
0 & \partial_2 & 0 & \partial_3 & 0 & \partial_1\\
0 & 0 & \partial_3 & \partial_2 & \partial_1 & 0
\end{array}
\right ).
\end{equation}
The stress vector can be obtained through constitutive relation in terms of displacement by,
\begin{equation}
\boldsymbol{\sigma}=\boldsymbol{C \cdot \nabla}^\text{T}\boldsymbol{\cdot u},
\label{P2}\end{equation}
where $\bm{C}$ is the rank-2 stiffness tensor in Voigt notation.\\
The general plane-wave solution is
\begin{equation}
\boldsymbol{u}=\boldsymbol{\tilde{u}} \exp[\text{i} (\boldsymbol{k\cdot r}-\omega t)],
\label{eq:71}\end{equation}
where $\boldsymbol{\tilde{u}}$ is the complex-valued displacement vector, $\boldsymbol{k}$ is the wave vector and $\boldsymbol{r}$ is the position vector. The wave propagates along the $\boldsymbol{\tilde{k}}$-direction,
\begin{equation}
\boldsymbol{\tilde{k}}=\frac{\bm{k}}{k}=l_1\boldsymbol{e}_1+l_2\boldsymbol{e}_2+l_3\boldsymbol{e}_3,
\label{eq:72}\end{equation}
where $l_1,l_2,l_3$ are the direction cosines of the wave vector and $k=|\bm{k}|$ is the wavenumber.\\
With the definition of $\textbf{L}$,
\begin{equation}
\textbf{L}=\left( \begin{array}{cccccc}
l_1 & 0 & 0 & 0 & l_3 & l_2\\
0 & l_2 & 0 & l_3 & 0 & l_1\\
0 & 0 & l_3 & l_2 & l_1 & 0
\end{array}
\right ),
\end{equation}
the spatial derivative operator is equivalent to,
\begin{equation}
\boldsymbol{\nabla} \leftrightarrow \text{i} k \textbf{L}.
\label{eq:73}\end{equation}
Therefore, substituting Eqs. \eqref{P2}-\eqref{eq:71} into \eqref{P1} with \eqref{eq:72}-\eqref{eq:73} yields
\begin{equation}
k^2 \boldsymbol{\Gamma \cdot u}=\rho \omega^2 \boldsymbol{u} \quad \text{or} \quad k^2 \Gamma_{ij}u_j=\rho \omega^2 u_i,
\label{eq:74}\end{equation}
where
\begin{equation}
\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^T \quad \text{or} \quad \Gamma_{ij}=L_{iI}C_{IJ}L_{Jj},
\end{equation}
is the Kelvin-Christoffel matrix with elastic stiffness $C_{IJ}$ in Voigt notation $(I,J=1,2,3...6)$. \comment{Here, the relation between the rank-4 elastic tensor $\mathbb{C}_{ijkl}$ and the rank-2 tensor $C_{IJ}$ is the mapping from $(i,j)$ to $I$ and from $(k,l)$ to $J$. For example, the subscript mapping $ij\rightarrow I$ is $(11,22,33,23,13,12)\rightarrow(1,2,3,4,5,6)$.}\\
More explicitly, the matrix \textbf{C} relates the stress and strain components in the following way:
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right).
\end{equation}
Denoting the phase velocity by $\boldsymbol{c}$:
\begin{equation}
\boldsymbol{c}=c \tilde{\boldsymbol{k}} \quad \text{with} \quad c=\frac{\omega}{k},
\end{equation}
Eq. \eqref{eq:74} can be rewritten as
\begin{equation}
\boldsymbol{\Gamma \cdot u}-\rho c^2 \boldsymbol{u}=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}=\boldsymbol{0}.
\label{eq:75}\end{equation}
Hence, the 3 eigenvalues of the 3-by-3 Kelvin-Christoffel matrix $\boldsymbol{\Gamma}$ have a one-to-one correspondence with the 3 wave speeds in the anisotropic solid.\\
We note that all the derivations are based on the Cartesian coordinate system, but it can be generalized to other coordinates in 3D, such as the cylindrical coordinates.\\
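The eigenvalue relation in Eq. \eqref{eq:75} is straightforward to evaluate numerically. The sketch below (our own helper, not part of the paper's code release) builds $\boldsymbol{\Gamma}=\textbf{L} \boldsymbol{\cdot} \textbf{C} \boldsymbol{\cdot} \textbf{L}^\text{T}$ and returns the three wave speeds; it is verified against the isotropic case, where $c_\text{L}=\sqrt{(\lambda+2\mu)/\rho}$ and $c_\text{T}=\sqrt{\mu/\rho}$:

```python
import numpy as np

def christoffel_speeds(C, direction, rho):
    """Three plane-wave speeds along the given unit direction (C is the Voigt 6x6 matrix)."""
    l1, l2, l3 = direction
    # The direction matrix L, matching the definition in the text
    L = np.array([[l1, 0,  0,  0,  l3, l2],
                  [0,  l2, 0,  l3, 0,  l1],
                  [0,  0,  l3, l2, l1, 0 ]], dtype=float)
    Gamma = L @ C @ L.T                 # 3x3 Kelvin-Christoffel matrix
    lam = np.linalg.eigvalsh(Gamma)     # real symmetric -> real eigenvalues, ascending
    return np.sqrt(lam / rho)

# Isotropic sanity check with Lame constants lam_e = 2, mu = 1, and rho = 1
lam_e, mu = 2.0, 1.0
C = np.zeros((6, 6))
C[:3, :3] = lam_e
C[0, 0] = C[1, 1] = C[2, 2] = lam_e + 2 * mu
C[3, 3] = C[4, 4] = C[5, 5] = mu
print(christoffel_speeds(C, (0, 0, 1), 1.0))  # -> [1. 1. 2.]
```

For the isotropic material the result is direction-independent, as expected: the same speeds are returned for, e.g., `(1, 0, 0)`.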
\clearpage
\subsection{General (Triclinic)}
For a general anisotropic (a.k.a.\ triclinic) case, we have
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}
\end{array}
\right ).
\end{equation}
Then the expressions of the elements of the matrix $\boldsymbol{\Gamma}$ are,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{11}l_1^2+C_{66}l_2^2+C_{55}l_3^2+2C_{56}l_2l_3+2C_{15}l_1l_3+2C_{16}l_1l_2,\\
&\Gamma_{22}=C_{66}l_1^2+C_{22}l_2^2+C_{44}l_3^2+2C_{24}l_2l_3+2C_{46}l_1l_3+2C_{26}l_1l_2,\\
&\Gamma_{33}=C_{55}l_1^2+C_{44}l_2^2+C_{33}l_3^2+2C_{34}l_2l_3+2C_{35}l_1l_3+2C_{45}l_1l_2,\\
&\begin{split}\Gamma_{12}=&C_{16}l_1^2+C_{26}l_2^2+C_{45}l_3^2+(C_{46}+C_{25})l_2l_3+(C_{14}+C_{56})l_1l_3+(C_{12}+C_{66})l_1l_2,\end{split}\\
&\begin{split}\Gamma_{13}=&C_{15}l_1^2+C_{46}l_2^2+C_{35}l_3^2+(C_{45}+C_{36})l_2l_3+(C_{13}+C_{55})l_1l_3+(C_{14}+C_{56})l_1l_2,\end{split}\\
&\begin{split}\Gamma_{23}=&C_{56}l_1^2+C_{24}l_2^2+C_{34}l_3^2+(C_{44}+C_{23})l_2l_3+(C_{36}+C_{45})l_1l_3+(C_{25}+C_{46})l_1l_2.\end{split}
\end{split}
\label{P3}\end{equation}
Without loss of generality, we consider a wave propagating along a specific direction, e.g., the $z$-axis:
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\label{P4}\end{equation}
For other directions with different $(l_1,l_2,l_3)$, the analysis can be obtained similarly.\\
Plugging Eq. \eqref{P4} into \eqref{P3}, the Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=C_{45},\\
&\Gamma_{13}=C_{35},\\
&\Gamma_{23}=C_{34}.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{55}-\rho c^2 & C_{45} & C_{35}\\
C_{45} & C_{44}-\rho c^2 & C_{34}\\
C_{35} & C_{34} & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).\\
\end{split}
\label{eq:79}\end{equation}
Hence, the 3 wave velocities depend on the 3 eigenvalues of matrix $\boldsymbol{\Gamma}$:
\begin{equation}
c_j=\sqrt{\lambda_j/\rho}
\end{equation}
where $\lambda_j$ for $j=1,2,3$ are eigenvalues of $\boldsymbol{\Gamma}$.\\
Eq. (\ref{eq:79}) may seem to indicate that all three waves are coupled together, preventing any spin polarizations, as we can no longer specify arbitrary phase differences freely.\\
However, we note that $\boldsymbol{\Gamma}$ is a real symmetric matrix, so it is diagonalizable by an orthogonal matrix $\textbf{Q}$:
\begin{equation}
\textbf{D} = \textbf{Q} \boldsymbol{\cdot} \boldsymbol{\Gamma} \boldsymbol{\cdot} \textbf{Q}^\text{T} =\left( \begin{array}{ccc}
\lambda_1 & 0 & 0\\
0 & \lambda_2 & 0\\
0 & 0 & \lambda_3
\end{array} \right ).
\label{diagonalization}\end{equation}
Since $\textbf{Q}$ is orthogonal, we have $\det{(\textbf{Q})} = \pm 1$. Then we can define:
\begin{equation}
\textbf{R} = \begin{cases}
\textbf{Q} &\text{if} \quad \det{(\textbf{Q})} = 1\\
-\textbf{Q} &\text{if} \quad \det{(\textbf{Q})} = -1
\end{cases}
\end{equation}
It is apparent that $\textbf{R}$ also diagonalizes $\boldsymbol{\Gamma}$. Since $\textbf{R}$ is an orthogonal matrix with determinant $+1$, it must be a rotation matrix in the three-dimensional Euclidean space (i.e., a representation of the special orthogonal group SO$(3)$). Therefore, we know that, by rigid rotation of the material (or, equivalently, rotation of the coordinate system), we can always find the directions in which all three waves with orthogonal displacement fields are decoupled from each other. Thus, as long as $\boldsymbol{\Gamma}$ has degenerate (equal) eigenvalues, we still have the freedom to use phase differences between any two equal-speed modes to create a well-defined propagating spin angular momentum of the traveling wave.\\
In general, none of those decoupled modes needs to be parallel or perpendicular to the $z$-direction. Since the matrix $\boldsymbol{\Gamma}$ is built on the assumption of propagation along the $z$-direction: $\boldsymbol{\tilde{k}} = (l_1=0,l_2=0,l_3=1)$, we may not have pure longitudinal or pure transverse waves as independent and decoupled modes any more.\\
Next, we consider the special case of the ``ultimate" \textit{equal-speed criterion}, in which all three wave speeds are the same. This requires all three eigenvalues of $\boldsymbol{\Gamma}$ to be equal ($\lambda=\lambda_1=\lambda_2 = \lambda_3$); then we have:\\
\begin{equation}
\textbf{D} = \lambda\textbf{I} \quad \Rightarrow \quad \boldsymbol{\Gamma} = \textbf{Q}^\text{T} \textbf{D} \textbf{Q}
= \lambda\textbf{Q}^\text{T} \textbf{Q} = \lambda\textbf{I},
\end{equation}
since $\textbf{Q}^\text{T}\textbf{Q} = \textbf{I}$ always holds for any orthogonal matrix $\textbf{Q}$. Therefore, $\boldsymbol{\Gamma}$ must be a \textit{scalar matrix} of the form $\lambda\textbf{I}$. Consequently, the requirements become: \\
\begin{equation}\label{ultimate}
C_{33}=C_{44}=C_{55} \quad \text{and} \quad C_{35}=C_{45}=C_{34}=0.
\end{equation}
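As a quick consistency check (our own sketch), the requirements in Eq. \eqref{ultimate} can be verified numerically: for $z$-propagation, only the entries $C_{33},C_{44},C_{55},C_{34},C_{35},C_{45}$ enter $\boldsymbol{\Gamma}$, so once they satisfy Eq. \eqref{ultimate}, all three speeds coincide regardless of the remaining constants.

```python
import numpy as np

# Fill a symmetric 6x6 stiffness matrix with arbitrary entries ...
rng = np.random.default_rng(0)
A = rng.uniform(1.0, 3.0, (6, 6))
C = (A + A.T) / 2
# ... then impose the "ultimate" criterion
C[2, 2] = C[3, 3] = C[4, 4] = 2.0              # C33 = C44 = C55
C[2, 3] = C[3, 2] = C[2, 4] = C[4, 2] = 0.0    # C34 = C35 = 0
C[3, 4] = C[4, 3] = 0.0                        # C45 = 0

# Kelvin-Christoffel matrix for propagation along z, i.e. l = (0, 0, 1)
L = np.array([[0, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 0]], dtype=float)
Gamma = L @ C @ L.T
print(Gamma)                                    # a scalar matrix: 2 times the identity
speeds = np.sqrt(np.linalg.eigvalsh(Gamma) / 1.0)  # rho = 1
print(speeds)                                   # three equal speeds sqrt(2)
```

The other entries of $\textbf{C}$ are random here precisely to demonstrate that they do not affect $z$-propagation.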
\clearpage
\subsection{Orthotropic}
The stiffness matrix for the orthotropic case is,
\begin{equation}
\textbf{C}(\rm Orthotropic)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0\\
& C_{22} & C_{23} & 0 & 0 & 0\\
& & C_{33} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{55} & 0\\
& & & & & C_{66}
\end{array}
\right ), \quad (9 \ \rm constants).
\end{equation}
For the wave propagating along $z-$direction,
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=0,\\
&\Gamma_{13}=0,\\
&\Gamma_{23}=0.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{55}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\end{equation}
Note that all the waves are decoupled. Considering that the wave propagates along the $z$-direction, the requirements of equal-speed propagation are,
\begin{equation}
C_{33}=C_{55} \quad \text{in the $xz$-plane}
\end{equation}
\begin{equation}
C_{33}=C_{44} \quad \text{in the $yz$-plane}
\end{equation}
\comment{
For the waves propagating along $x$- and $y$-directions, we can set the direction cosines accordingly, \\
\begin{equation}
l_1=1, \ l_2=0, \ l_3=0 \quad \text{for the $x$-direction}
\end{equation}
and
\begin{equation}
l_1=0, \ l_2=1, \ l_3=0 \quad \text{for the $y$-direction}
\end{equation}
Similarly, we can obtain the $\boldsymbol{\Gamma}$ matrices and then the corresponding governing equations. Because of the property of orthotropic material, we will have very similar requirements of the rolling wave.\\
For waves propagating along the $x$-direction, the requirements are
\begin{equation}
C_{11}=C_{66}
\end{equation}
in the $xy$-plane and
\begin{equation}
C_{11}=C_{55}
\end{equation}
in the $xz$-plane.\\
For waves propagating along the $y$-direction, the requirements are
\begin{equation}
C_{22}=C_{66}
\end{equation}
in the $xy$-plane and
\begin{equation}
C_{22}=C_{44}
\end{equation}
in the $yz$-plane.
}
\clearpage
\subsection{Transversely Isotropic}
The stiffness matrix for a transversely isotropic material is defined by 5 independent elastic constants,
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0\\
& C_{11} & C_{13} & 0 & 0 & 0\\
& & C_{33} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{44} & 0\\
& & & & & C_{66}
\end{array}
\right )
\end{equation}
where $C_{66}$ is not independent,
\begin{equation}
C_{66}=\frac{C_{11}-C_{12}}{2}.
\end{equation}
The transversely isotropic material defined above is isotropic in the $xy$-plane and anisotropic along the $z$-direction. \\
For the wave propagating along $z$-direction,
\begin{equation}
l_1=0, \ l_2=0, \ l_3=1.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{44},\\
&\Gamma_{22}=C_{44},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{12}=0,\\
&\Gamma_{13}=0,\\
&\Gamma_{23}=0.
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{44}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\label{eq:76}\end{equation}
All waves are decoupled. The longitudinal wave relates to $C_{33}$ and both shear waves relate to $C_{44}$. Therefore, the requirement of equal-speed propagation along the $z$-direction is,
\begin{equation}
C_{33}=C_{44}.
\end{equation}
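As a side check (our own sketch, with arbitrary example constants), the relation $C_{66}=(C_{11}-C_{12})/2$ indeed makes the in-plane behavior isotropic: evaluating the Kelvin-Christoffel matrix for propagation directions within the $xy$-plane returns the same three wave speeds at every angle.

```python
import numpy as np

def speeds(C, l, rho=1.0):
    """Plane-wave speeds along direction cosines l via the Kelvin-Christoffel matrix."""
    l1, l2, l3 = l
    L = np.array([[l1, 0,  0,  0,  l3, l2],
                  [0,  l2, 0,  l3, 0,  l1],
                  [0,  0,  l3, l2, l1, 0 ]], dtype=float)
    return np.sqrt(np.linalg.eigvalsh(L @ C @ L.T) / rho)

# Example transversely isotropic constants (arbitrary, for illustration only)
C11, C12, C13, C33, C44 = 5.0, 2.0, 1.5, 4.0, 1.2
C66 = (C11 - C12) / 2
C = np.array([
    [C11, C12, C13, 0,   0,   0  ],
    [C12, C11, C13, 0,   0,   0  ],
    [C13, C13, C33, 0,   0,   0  ],
    [0,   0,   0,   C44, 0,   0  ],
    [0,   0,   0,   0,   C44, 0  ],
    [0,   0,   0,   0,   0,   C66]])

# The three speeds are identical at every in-plane propagation angle
for theta in (0.0, 0.3, 1.0, np.pi / 2):
    l = (np.cos(theta), np.sin(theta), 0.0)
    print(np.round(speeds(C, l), 6))
```

Along any in-plane direction the squared speeds (times $\rho$) are $C_{11}$, $C_{66}$ and $C_{44}$, independent of the angle.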
\clearpage
\subsection{Cubic}
The stiffness matrix for cubic material is,
\begin{equation}
\textbf{C}(\rm Cubic)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{12} & 0 & 0 & 0\\
& C_{11} & C_{12} & 0 & 0 & 0\\
& & C_{11} & 0 & 0 & 0\\
& & & C_{44} & 0 & 0\\
& \text{\large Sym} & & & C_{44} & 0\\
& & & & & C_{44}
\end{array}
\right ), \quad \left(3 \ \rm constants\right ).
\end{equation}
All 3 directions along the coordinate axes are equivalent due to the cubic symmetry. Therefore, it is sufficient to analyze the $x$-direction only.\\
For the wave propagating along the $x$-direction, the direction cosines are
\begin{equation}
l_1=1, \ l_2=0, \ l_3=0.
\end{equation}
Then, the corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
\Gamma_{11}&=C_{11},\\
\Gamma_{22}&=C_{44},\\
\Gamma_{33}&=C_{44},\\
\Gamma_{12}=\Gamma_{13}&=\Gamma_{23}=0.\\
\end{split}
\end{equation}
Therefore, the governing equation \eqref{eq:75} becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}\\
&=\left( \begin{array}{ccc}
C_{11}-\rho c^2 & 0 & 0\\
0 & C_{44}-\rho c^2 & 0\\
0 & 0 & C_{44}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_2 \\ u_3
\end{array}\right).
\end{split}
\label{eq:86}\end{equation}
Here, $u_1$ represents the longitudinal wave and $u_2,u_3$ represent the transverse waves. Therefore, from Eq. \eqref{eq:86}, we obtain
\begin{equation}
c_\text{L}=\sqrt{\frac{C_{11}}{\rho}},\ \ c_\text{T}=\sqrt{\frac{C_{44}}{\rho}}.
\end{equation}
Thus, to generate the rolling wave in the $x$-direction, we only need,
\begin{equation}
C_{11}=C_{44}.
\label{cubic_criterion}
\end{equation}
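A quick numerical confirmation of Eq. \eqref{cubic_criterion} (an illustrative sketch; the constants are arbitrary):

```python
import numpy as np

def cubic_speeds_x(C11, C12, C44, rho=1.0):
    """Speeds along the x-axis for a cubic material, from the diagonal Gamma matrix."""
    # For l = (1, 0, 0) the Kelvin-Christoffel matrix is diag(C11, C44, C44);
    # note that C12 does not enter for propagation along a cube axis.
    Gamma = np.diag([C11, C44, C44])
    return np.sqrt(np.linalg.eigvalsh(Gamma) / rho)

print(cubic_speeds_x(4.0, 1.0, 1.5))   # distinct c_T = sqrt(1.5) and c_L = 2
print(cubic_speeds_x(4.0, 1.0, 4.0))   # all three speeds equal to 2 when C11 = C44
```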
\clearpage
\subsection{Plane Strain}
The plane strain condition in the $xz$-plane for a general anisotropic material is,
\begin{equation}
\boldsymbol{\epsilon}=\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right)
=\left( \begin{array}{c}
\epsilon_{xx} \\ 0 \\ \epsilon_{zz} \\ 0 \\ 2\epsilon_{xz} \\ 0
\end{array}\right).
\end{equation}
The general elastic stiffness tensor in Voigt notation is,
\begin{equation}
\textbf{C}=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}
\end{array}
\right ).
\end{equation}
Then, by substituting into the constitutive relation $\boldsymbol{\sigma}=\boldsymbol{C\cdot\epsilon}$, we have
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16}\\
& C_{22} & C_{23} & C_{24} & C_{25} & C_{26}\\
& & C_{33} & C_{34} & C_{35} & C_{36}\\
& & & C_{44} & C_{45} & C_{46}\\
& \text{\large Sym} & & & C_{55} & C_{56}\\
& & & & & C_{66}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ 0 \\ \epsilon_{zz} \\ 0 \\ 2\epsilon_{xz} \\ 0
\end{array}\right).
\end{equation}
Therefore, for the 2D plane strain cases, we only need to consider the following stress components,
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{zz} \\ \sigma_{xz}
\end{array}\right)=\left( \begin{array}{ccc}
C_{11} & C_{13} & C_{15}\\
C_{31} & C_{33} & C_{35}\\
C_{51} & C_{53} & C_{55}\end{array} \right )\boldsymbol{\cdot}
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{zz} \\ 2\epsilon_{xz}
\end{array}\right).
\end{equation}
The governing equation of the plane strain situation becomes,
\begin{equation}
(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \boldsymbol{\cdot u}=\boldsymbol{0},
\end{equation}
where $\boldsymbol{\Gamma}$ is the Kelvin-Christoffel matrix in $xz$-plane,
\begin{equation}
\boldsymbol{\Gamma}=\left( \begin{array}{cc}
\Gamma_{11} & \Gamma_{13}\\
\Gamma_{13} & \Gamma_{33}
\end{array} \right ),
\end{equation}
which is from Eq. \eqref{P3}.\\
For a wave propagating along the $z$-direction,
\begin{equation}
l_1=0,\ \ l_2=0,\ \ l_3=1.
\end{equation}
The corresponding Kelvin-Christoffel matrix becomes,
\begin{equation}\begin{split}
&\Gamma_{11}=C_{55},\\
&\Gamma_{33}=C_{33},\\
&\Gamma_{13}=C_{35}.
\end{split}
\end{equation}
The governing equation of the plane strain situation becomes,
\begin{equation}\begin{split}
\boldsymbol{0}&=(\boldsymbol{\Gamma}-\rho c^2\textbf{I}) \cdot\boldsymbol{u}\\
&=\left( \begin{array}{cc}
C_{55}-\rho c^2 & C_{35}\\
C_{35} & C_{33}-\rho c^2
\end{array} \right )
\boldsymbol{\cdot} \left( \begin{array}{c}
u_1 \\ u_3
\end{array}\right).
\end{split}
\end{equation}
Hence, for $u_1$ and $u_3$ to be decoupled and to propagate at the same wave speed, the requirements are $C_{35} = 0$ and $C_{55} = C_{33}$.
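The two conditions can be illustrated numerically with the $2\times2$ Kelvin-Christoffel matrix (a sketch with made-up constants):

```python
import numpy as np

def plane_strain_speeds(C33, C55, C35, rho=1.0):
    """Eigen-speeds of the 2x2 Kelvin-Christoffel matrix for z-propagation."""
    Gamma = np.array([[C55, C35],
                      [C35, C33]])
    return np.sqrt(np.linalg.eigvalsh(Gamma) / rho)

# C35 = 0 and C33 = C55: decoupled u1, u3 traveling at one common speed
print(plane_strain_speeds(4.0, 4.0, 0.0))   # -> [2. 2.]
# A nonzero C35 couples the two displacement components and splits the speeds
print(plane_strain_speeds(4.0, 4.0, 1.0))   # sqrt(3) and sqrt(5)
```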
\clearpage
\section{The Square Lattice}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{fig-square.png}
\caption{\label{fig:sq} (a) The unit cell of a 2D square lattice. It is constructed by taking mirror images of the green quarter. All red straight line segments are of length $L$. The structure illustrated here is with $L/a=0.2$. The assumed propagation direction along the $z$-axis corresponds to the diagonal (i.e., 45 degree) direction in the square lattice. (b) The non-dimensional elastic constants with varying $L/a$.}
\end{figure}\\
\noindent Some previous studies~\cite{phani2006,wang2015locally} used the Bloch-wave formulation to calculate dispersion relations for the square lattice of beams. The band structures presented in those studies seem to indicate nearly equal-speed propagation at the low-frequency long-wavelength limit in the diagonal (45 degree) direction of the square.\\
Unfortunately, our calculations, as presented in Fig.\,\ref{fig:sq}, show that the square lattice can only asymptotically approach the limit of $c_\text{L} = c_\text{T}$ (equivalent to $C_{33}=C_{55}$ according to Eq. (11a) in the main text) when the beam width $\rightarrow 0$. This fact renders square lattices with finite bending stiffness impractical for hosting rolling waves.\\
For completeness, we also perform the band structure calculations to match the results in \cite{phani2006,wang2015locally} as well as the zoom-in calculations in the long wavelength limit. We note that the ``slenderness ratio" defined in both studies is equivalent to $(a/\sqrt{2})\sqrt{(E (\sqrt{2}L)^2)/(E\frac{(\sqrt{2}L)^4}{12})} = \sqrt{3}(a/L)$ for the parameters defined in Fig.\,\ref{fig:sq}(a), since we actually have the beam thickness = $\sqrt{2}L$ and the conventional square unit cell size = $a/\sqrt{2}$.
\comment{
The normalized frequency is defined by,
\begin{equation}
\Omega=\frac{\omega}{\omega_1},\ \ \text{with}\ \ \omega_1=\pi^2\sqrt{\frac{EI}{\rho (\sqrt{2}L)(a/\sqrt{2})^4}}.
\label{norm_freq}
\end{equation}
}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.42]{Band.png}
\caption{Dispersion relations of the square lattice with different beam thicknesses: (a) Same slenderness ratio of 20 as \cite{wang2015locally}. (b) Same slenderness ratio of 50 as \cite{phani2006}. (c) Schematic of the square lattice. (d) The Brillouin zone of the square lattice. Following the same convention as \cite{phani2006}, the normalized frequency is defined as $\Omega=\frac{\omega}{\omega_1}$
with
$\omega_1=\pi^2\sqrt{\frac{EI}{\rho_{A} (\sqrt{2}L)(a/\sqrt{2})^4}}$, where $\rho_{A}$ denotes mass per unit area, or equivalently the ``2D mass density".}
\label{fig:sq1}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.42]{Band-zoom-ab.png}
\caption{Zoomed-in dispersion relations along the direction $\text{M}\rightarrow\text{G}$, close to G: (a) Same slenderness ratio of 20 as \cite{wang2015locally}. (b) Same slenderness ratio of 50 as \cite{phani2006}.}
\label{fig:sq2}
\end{figure}
\clearpage
\section{An alternative 3D design}
We may also use the structure shown as Fig. 2(c) in the main text to build 3D designs. For example, we can directly use the 2D pattern as each of the six faces of a cube (Fig. \ref{fig:3D-alternative}(a)). This is very similar to the geometry used in \cite{buckmann2014three}, but here we have removed structures inside the unit cube for simplicity. As duly noted in \cite{buckmann2014three}, this 3D geometry itself does not have the symmetry of cubic crystallographic point groups, which are characterised by the four threefold rotation axes along the body diagonals of a cube. Hence, there is no \textit{a priori} guarantee that it will result in an elasticity tensor for cubic symmetry.\\
On the other hand, our numerical calculations confirm that it can still meet the \textit{equal-speed criterion}. Here, each face of the unit cube has a uniform out-of-plane thickness $h_1$ (due to spatial periodicity, the metamaterial has a wall thickness of $2h_1$). With the cube edge length being $a$, we fix the parameters as $2h_1=b_3=b_4=0.05a$ and $b_5=0.3221a$. Then we vary $b_1/a$ and $b_2/a$ to calculate the elastic constants in each case. The results are shown in Fig. \ref{fig:3D-alternative}(b). The equal-speed criterion is met at the intersection of the two (blue and yellow) surfaces in the parameter space.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{fig-3D-auxetic.png}
\caption{\label{fig:3D-alternative} (a) Auxetic unit cell design similar to the geometry used in \cite{buckmann2014three}. Colors are for visual distinction. All six faces are identical. (b) The non-dimensional elastic constants results with varying parameters $b_1/a$ and $b_2/a$. Geometry in (a) corresponds to the circled point in (b).}
\end{figure}
\clearpage
\section{Reflection of Rolling Wave at Elastic Boundary}
\subsection{Normal incidence and reflection}
Omitting the time-harmonic term $e^{-i\omega t}$ and assuming the principal wave displacement directions coincide with the coordinate axes, we consider a general plane wave propagating along the $z$-direction incident on a flat surface (the $xy$-plane at $z=0$),
\begin{equation}
\boldsymbol{u}^\text{I}=\left( \begin{array}{c}
m^\text{I} \\ n^\text{I} \\ l^\text{I}
\end{array}\right)e^{ikz}
\end{equation}
where I denotes the incident wave.\\
The backward reflection wave can be written as,
\begin{equation}
\boldsymbol{u}^\text{R}=\left( \begin{array}{c}
m^\text{R} \\ n^\text{R} \\ l^\text{R}
\end{array}\right)e^{-ikz}
\end{equation}
where R denotes the reflected wave.\\
The strains can be calculated from displacements by
\begin{equation}
\epsilon_{ij}=\frac{1}{2}(u_{i,j}+u_{j,i})
\end{equation}
where the comma ``,'' denotes the partial derivative.\\
The reflection occurs at $z=0$. So we have $e^{ikz} = e^{-ikz} = 1$, and the strain vectors in Voigt notation become,
\begin{equation}
\boldsymbol{\epsilon}^\text{I}=\left( \begin{array}{c}
\epsilon^\text{I}_{xx} \\ \epsilon^\text{I}_{yy} \\ \epsilon^\text{I}_{zz} \\ 2\epsilon^\text{I}_{yz} \\ 2\epsilon^\text{I}_{xz} \\ 2\epsilon^\text{I}_{xy}
\end{array}\right)=\left( \begin{array}{c}
0 \\ 0 \\ l^\text{I} ik \\ n^\text{I} ik \\ m^\text{I} ik \\ 0
\end{array}\right),\ \
\boldsymbol{\epsilon}^\text{R}=\left( \begin{array}{c}
\epsilon^\text{R}_{xx} \\ \epsilon^\text{R}_{yy} \\ \epsilon^\text{R}_{zz} \\ 2\epsilon^\text{R}_{yz} \\ 2\epsilon^\text{R}_{xz} \\ 2\epsilon^\text{R}_{xy}
\end{array}\right)=\left( \begin{array}{c}
0 \\ 0 \\ -l^\text{R} ik \\ -n^\text{R} ik \\ -m^\text{R} ik \\ 0
\end{array}\right).
\label{strain_IR}
\end{equation}
Here we assume the effective orthotropic constitutive relation in Voigt notation:
\begin{equation}
\left( \begin{array}{c}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{xz} \\ \sigma_{xy}
\end{array}\right)=\left( \begin{array}{cccccc}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{12} & C_{22} & C_{23} & 0 & 0 & 0 \\
C_{13} & C_{23} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{66}
\end{array}\right)\cdot
\left( \begin{array}{c}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{xz} \\ 2\epsilon_{xy}
\end{array}\right).
\end{equation}
\comment{with
\begin{equation}
\boldsymbol{\sigma}=\left( \begin{array}{ccc}
\sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{array}\right)
\end{equation}
denotes the stress tensor.}\\
It is easy to compute the corresponding stress components for the incident and reflected waves.
\begin{equation}
\left( \begin{array}{c}
\sigma^\text{I}_{xx} \\ \sigma^\text{I}_{yy} \\ \sigma^\text{I}_{zz} \\ \sigma^\text{I}_{yz} \\ \sigma^\text{I}_{xz} \\ \sigma^\text{I}_{xy}
\end{array}\right)=\left( \begin{array}{c}
C_{13}l^\text{I} ik \\ C_{23}l^\text{I} ik \\ C_{33}l^\text{I} ik \\ C_{44}n^\text{I} ik \\ C_{55}m^\text{I} ik \\ 0
\end{array}\right),\ \
\left( \begin{array}{c}
\sigma^\text{R}_{xx} \\ \sigma^\text{R}_{yy} \\ \sigma^\text{R}_{zz} \\ \sigma^\text{R}_{yz} \\ \sigma^\text{R}_{xz} \\ \sigma^\text{R}_{xy}
\end{array}\right)=\left( \begin{array}{c}
-C_{13}l^\text{R} ik \\ -C_{23}l^\text{R} ik \\ -C_{33}l^\text{R} ik \\ -C_{44}n^\text{R} ik \\ -C_{55}m^\text{R} ik \\ 0
\end{array}\right).
\end{equation}
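The stress components above are obtained by multiplying the orthotropic stiffness matrix with the Voigt strain vectors in Eq.~(\ref{strain_IR}). A brief numerical sanity check of this step (the stiffness entries, wavenumber and amplitudes below are arbitrary placeholder values, not material data from this work):

```python
import numpy as np

# Placeholder orthotropic stiffness matrix in Voigt notation (values are
# illustrative only; any symmetric positive choice works for this check).
C = np.diag([10.0, 10.0, 4.0, 4.0, 4.0, 1.0])   # C11, C22, C33, C44, C55, C66
C[0, 1] = C[1, 0] = 3.0                          # C12
C[0, 2] = C[2, 0] = 2.0                          # C13
C[1, 2] = C[2, 1] = 2.0                          # C23

k = 2.0                        # wavenumber (placeholder)
m, n, l = 0.3, 0.5, 0.7        # incident amplitudes (placeholders)
eps_I = np.array([0, 0, l*1j*k, n*1j*k, m*1j*k, 0])  # incident Voigt strain at z=0
sig_I = C @ eps_I

# The product reproduces sigma_zz = C33*l*ik, sigma_yz = C44*n*ik, etc.
assert np.isclose(sig_I[2], C[2, 2]*l*1j*k)
assert np.isclose(sig_I[3], C[3, 3]*n*1j*k)
assert np.isclose(sig_I[4], C[4, 4]*m*1j*k)
assert np.isclose(sig_I[5], 0.0)
```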
For the elastically supported cubic half-space, the boundary conditions are
\begin{equation}
\sigma^0_{zx}=K_xu^0_x,\ \ \sigma^0_{zy}=K_yu^0_y,\ \ \sigma^0_{zz}=K_zu^0_z.
\end{equation}
where $(K_x,K_y,K_z)$ are components of \underline{distributed stiffness per unit area}~\cite{zhang2017reflection} representing a general elastic foundation supporting the solid surface. The stress and displacement summations are
\begin{equation}
\sigma^0_{zx}=\sigma^\text{I}_{zx}+\sigma^\text{R}_{zx},\ \ \sigma^0_{zy}=\sigma^\text{I}_{zy}+\sigma^\text{R}_{zy},\ \ \sigma^0_{zz}=\sigma^\text{I}_{zz}+\sigma^\text{R}_{zz}.
\end{equation}
\begin{equation}
u^0_x=u^\text{I}_x+u^\text{R}_x,\ \ u^0_y=u^\text{I}_y+u^\text{R}_y,\ \ u^0_z=u^\text{I}_z+u^\text{R}_z.
\end{equation}
Substituting the displacement and stress components into the elastic boundary conditions gives
\begin{equation}
C_{55}m^\text{I} ik-C_{55}m^\text{R} ik=K_x(m^\text{I}+m^\text{R}),
\end{equation}
\begin{equation}
C_{44}n^\text{I} ik-C_{44}n^\text{R} ik=K_y(n^\text{I}+n^\text{R}),
\end{equation}
\begin{equation}
C_{33}l^\text{I} ik-C_{33}l^\text{R} ik=K_z(l^\text{I}+l^\text{R}).
\end{equation}
Solving the equation set gives us,
\begin{equation}
m^\text{R}=\frac{C_{55} ik-K_x}{C_{55} ik+K_x}m^\text{I},
\end{equation}
\begin{equation}
n^\text{R}=\frac{C_{44} ik-K_y}{C_{44} ik+K_y}n^\text{I},
\end{equation}
\begin{equation}
l^\text{R}=\frac{C_{33} ik-K_z}{C_{33} ik+K_z}l^\text{I}.
\end{equation}
Because the rolling wave requires $C_{33}=C_{44}=C_{55}$, the amplitudes of the reflected wave become
\begin{equation}
m^\text{R}=R_x m^\text{I},\ \ n^\text{R}=R_y n^\text{I},\ \ l^\text{R}=R_z l^\text{I},
\end{equation}
with
\begin{equation}
\label{Rxyz}
R_x=\frac{C_{33} ik-K_x}{C_{33} ik+K_x},\ \
R_y=\frac{C_{33} ik-K_y}{C_{33} ik+K_y},\ \
R_z=\frac{C_{33} ik-K_z}{C_{33} ik+K_z}.
\end{equation}
Clearly, the reflected wave amplitudes depend on the spring stiffness. Moreover, the elastic boundary condition degenerates into the traction-free boundary condition when the stiffness $K_j=0$. Then the amplitudes of the reflected wave become
\begin{equation}
m^\text{R}=m^\text{I},\ \ n^\text{R}=n^\text{I},\ \ l^\text{R}=l^\text{I}.
\end{equation}
Similarly, the elastic boundary condition degenerates into the fixed boundary condition when the stiffness $K_j=\infty$. Then the amplitudes of the reflected wave become
\begin{equation}
m^\text{R}=-m^\text{I},\ \ n^\text{R}=-n^\text{I},\ \ l^\text{R}=-l^\text{I}.
\end{equation}
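Both degenerate cases can be recovered numerically from Eq.~(\ref{Rxyz}); a short sketch, with placeholder values for $C_{33}$ and $k$ (the rigid limit is taken at a large but finite stiffness):

```python
import numpy as np

def R(K, C33=4.0, k=2.0):
    """Reflection coefficient R = (i*C33*k - K)/(i*C33*k + K)."""
    return (1j*C33*k - K) / (1j*C33*k + K)

assert np.isclose(R(0.0), 1.0)          # K = 0: traction-free surface, R = +1
assert np.isclose(R(1e12), -1.0)        # K -> infinity: fixed surface, R -> -1
assert np.isclose(abs(R(3.7)), 1.0)     # |R| = 1: total reflection for any K
```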
In both cases above, by the definition of spin given in Eq. (2) of the main text, we can conclude $\boldsymbol{s}^\text{R}=\boldsymbol{s}^\text{I}$, so the spin is unaffected by reflection.\\
Next, we consider in-$xz$-plane waves with $n^{\text{I}}=n^{\text{R}}=0$. For the free-rigid hybrid boundary ($K_x=0$, $K_z=\infty$), we have
\begin{equation}
m^\text{R}=m^\text{I},\ \ l^\text{R}=-l^\text{I} \quad \Rightarrow \quad \boldsymbol{s}^\text{R}=-\boldsymbol{s}^\text{I}.
\end{equation}
Similarly, for the rigid-free hybrid boundary ($K_x=\infty$, $K_z=0$), we have
\begin{equation}
m^\text{R}=-m^\text{I},\ \ l^\text{R}=l^\text{I} \quad \Rightarrow \quad \boldsymbol{s}^\text{R}=-\boldsymbol{s}^\text{I}.
\end{equation}
Thus, both hybrid boundaries will flip the spin for any incident rolling wave. \\
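As a numerical illustration of these sign changes, we assume here that the spin of Eq.~(2) of the main text reduces, for a plane wave, to the standard expression $\boldsymbol{s}\propto \mathrm{Im}(\boldsymbol{u}^*\times\boldsymbol{u})$ (this identification is an assumption of the sketch, not a statement from the main text):

```python
import numpy as np

def spin_y(m, l):
    """y-component of Im(u* x u) for the in-xz-plane displacement u = (m, 0, l)."""
    u = np.array([m, 0.0, l])
    return np.imag(np.cross(np.conj(u), u))[1]

m_i, l_i = 1.0, 1.0j            # a circular rolling wave (placeholder amplitudes)
s_in = spin_y(m_i, l_i)         # spin of the incident wave

assert np.isclose(spin_y(m_i, -l_i), -s_in)    # free-rigid hybrid: spin flipped
assert np.isclose(spin_y(-m_i, l_i), -s_in)    # rigid-free hybrid: spin flipped
assert np.isclose(spin_y(m_i, l_i), s_in)      # fully free: spin preserved
assert np.isclose(spin_y(-m_i, -l_i), s_in)    # fully fixed: spin preserved
```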
\subsection{Complex-valued amplitude ratio $R_j$}
The effects of normal reflection can be described by the generalized amplitude ratio in (\ref{Rxyz}):
\begin{equation}
R_j=\frac{C_{33} ik-K_j}{C_{33} ik+K_j},\ \ j=x,y,z.
\end{equation}
This complex non-dimensional parameter $R_j$ relates the reflected wave to the incident wave and deserves further analysis. We note that $|R_j|=1$ is consistent with the fact that all wave energy is reflected, and the phase angle $\phi$ represents the phase change during the reflection. Hence, we have \\
\begin{equation}
R_j = e^{i\phi} \quad \text{with} \quad \tan{\phi}=\frac{2{K_j}/{C_{33}k}}{1-({K_j}/{C_{33}k})^2}
\end{equation}
and it depends on the boundary-bulk stiffness ratio ${K_j}/{C_{33}k}$.
By varying this ratio, we plot the real and imaginary parts of $R_j$ as well as the phase angle $\phi$ in Figure \ref{fig:Rj}. These values, with respect to the logarithmic magnitude of the boundary-bulk stiffness ratio, show typical symmetric and anti-symmetric properties.
Therefore, by adjusting the elastic stiffness at the boundary, one can manipulate the spin of the reflection waves.
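The unit-modulus property and the phase formula can be checked numerically over a range of stiffness ratios $r=K_j/(C_{33}k)$ (the sampled values below are arbitrary; $r=1$, where $\tan\phi$ diverges, is excluded):

```python
import numpy as np

for r in [0.1, 0.5, 0.9, 1.5, 4.0]:        # boundary-bulk stiffness ratios
    Rj = (1j - r) / (1j + r)               # R_j written in terms of r
    phi = np.angle(Rj)
    assert np.isclose(abs(Rj), 1.0)                    # total reflection
    assert np.isclose(np.tan(phi), 2*r/(1 - r**2))     # tan(phi) = 2r/(1-r^2)
```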
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{Amp_Ratio.pdf}
\caption{\label{fig:Rj} The amplitude ratio $R_j$ between the reflected and incident waves. }
\end{figure}
Although all properties of both the bulk and the reflection surface are assumed to be independent of the incident wave frequency, here the reflection phase change can be frequency-dependent, as the angular wave number $k$ appears in the ratio. The emergence of frequency dependency can be intuitively explained by the role of wavelength during reflection:\\
We note from Eq. (\ref{strain_IR}) that $k$ first appears in the strain calculations since, for a fixed wave displacement amplitude, the strains in the propagation direction, $\epsilon_{zj}$, are actually inversely proportional to the wavelength: Longer wavelength gives rise to a smaller strain and vice versa. Consequently, the stresses, $\sigma_{zj}$, and the force acting on the boundary springs are wavelength-dependent as well. If the boundary is supported by an elastic foundation with finite stiffness per area, $K_j$, we have the following:\\
At the low-frequency and long-wavelength limit, the force per area acting on the boundary, $|\sigma_{zj}| \propto C_{33}k \rightarrow 0$. So, the boundary hardly moves, and the incident wave effectively ``sees" a rigid surface;\\
At the high-frequency and short-wavelength limit, the force per area acting on the boundary, $|\sigma_{zj}| \propto C_{33}k \rightarrow \infty$. So, the boundary moves almost freely, and the incident wave effectively ``sees" a free surface.\\
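The same two limits appear when the frequency (through $k$) is varied at fixed boundary stiffness; a minimal numerical sketch with placeholder values of $K$ and $C_{33}$:

```python
import numpy as np

K, C33 = 5.0, 4.0                                   # placeholder values
R = lambda k: (1j*C33*k - K) / (1j*C33*k + K)       # reflection coefficient

assert np.isclose(R(1e-9), -1.0, atol=1e-6)   # k -> 0: boundary looks rigid
assert np.isclose(R(1e9), 1.0, atol=1e-6)     # k -> infinity: boundary looks free
```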
\subsection{Time-domain simulations}
Fig.\,\ref{fig:SIF3} shows the time-domain finite element simulations of the rolling wave inside 2D anisotropic plane with required elastic constants. This illustrates the satisfaction of the equal-speed criterion and the feasibility for anisotropic material to host the propagating rolling wave.\\
Fig.\,\ref{fig:SIF4} shows the time-domain finite element simulations of the rolling wave inside the designed structured plane. This illustrates the capability of the structure to host the propagating rolling wave. By monitoring a specific point, the time evolution of the displacements was extracted and analyzed. It is found that the spin is preserved under the fully-fixed and fully-free boundary conditions, while it is flipped under the hybrid fixed-free and hybrid free-fixed boundary conditions.\\
The results elucidate that, in both models, the longitudinal and transverse waves propagate at the same speed. The numerical observations verify that the elastic constants and the structure can host rolling waves. Moreover, the spin of the rolling wave can be altered by different boundary conditions. This offers the potential to use simple edges and surfaces in future applications of rolling waves.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.8]{TD.png}
\caption{\label{fig:SIF3} (a) The schematic of the setup of \textsc{comsol} anisotropic plane model. (b) The displacement fields $u_x$ and $u_y$ at time $t=2$s. (c) The displacement fields $u_x$ and $u_y$ at time $t=4$s. (d) The displacement fields $u_x$ and $u_y$ at time $t=7$s. }
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.8]{TD-SS.png}
\caption{\label{fig:SIF4} (a) The schematic of the setup of \textsc{comsol} micro-structured plane model. (b) The displacement fields $u_x$ and $u_y$ at time $t=20$s. (c) The displacement fields $u_x$ and $u_y$ at time $t=40$s. (d) The displacement fields $u_x$ and $u_y$ at time $t=70$s. }
\end{figure}
\clearpage
\section{Numerical Procedures}
The 2D and 3D unit cell geometries are designed by finite element calculations in \textsc{abaqus} to satisfy the requirements of elastic constants for different anisotropic cases. \\
For 2D plane strain cases, square unit cells are used. We first build one quarter of the unit cell and its mesh. Then, by symmetry operations, the other parts with mesh are generated. This gives us the easiest way to guarantee the one-to-one correspondence between each boundary node-pair, making the application of proper periodic boundary conditions possible. By prescribing unit cell deformation, the effective elastic constants can be obtained by averaging element stresses.\\
Similarly, for 3D cases, we first build the one-eighth structure and then make the symmetry operations. With periodic boundary conditions, we prescribe the unit cell deformation and calculate the average stress components to obtain the effective elastic constants.\\
In addition, 2D time-domain simulations in \textsc{comsol} are conducted to illustrate the reflections of rolling waves from different boundaries. We use two different time-domain models: a) the anisotropic medium with elastic constants satisfying the requirements listed in Eq.\,(11a) of the main text; and b) the periodic micro-structured lattice with the unit cell geometry shown in Fig.\,2(a) of the main text. The parameters used in the simulations are set to the long-wavelength limit with $\lambda_0/a=4\pi$ ($\lambda_0$ is the wavelength, $a$ is the unit cell size). For both models, periodic boundary conditions are applied to the top and bottom boundaries. Displacement boundary conditions with the rolling excitation $(u_x,u_y)=(\sin(\omega t), \cos(\omega t))$ are prescribed at the left edge. The right side, serving as the reflection surface, is prescribed with different boundary conditions, i.e., fixed, stress-free, hybrid fixed-free and hybrid free-fixed. \\
\noindent Since numerical procedures employed in this study might be useful in a variety of applications, we make the codes available for free download, advocating for an open-source initiative in the research community:\\
\begin{enumerate}
\item ``ABAQUS-2D.zip" - An Abaqus Python script for the 2D geometry in Fig. 2(a).
\item ``ABAQUS-3D.zip" - An Abaqus Python script for the 3D geometry in Fig. 3(c).
\item ``COMSOL.zip" - A time-domain simulation for results in \ref{fig:SIF4}.
\end{enumerate}
\clearpage
\printbibliography
\end{document}
\section{Introduction}\label{s1}
All graphs considered in this paper are finite, undirected and simple. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. Let $\vec{E}(G)$ be the set $\{e_{uv},e_{vu}~|~uv\in E(G)\}$, where $e_{uv}$ denotes the ordered pair $(u,v)$. We use $\mathbb{T}$ to denote the multiplicative group consisting of all the complex units, i.e. $\mathbb{T}=\{z\in \mathbb{C}~|~|z|=1\}$. Suppose $\phi:\vec{E}(G)\to \mathbb{T}$ is an arbitrary mapping with the property $\phi(e_{uv})=\phi(e_{vu})^{-1}$. Then $\Phi=(G,\mathbb{T},\phi)$, with $G$ as its underlying graph, is called a complex unit gain graph (or $\mathbb{T}$-gain graph), which was introduced by Reff in \cite{R}. We refer to the elements in $\mathbb{T}$ as gains and $\phi$ as the gain function of $\Phi$. Note that a graph $G$ is just a complex unit gain graph with all the ordered pairs in $\vec{E}(G)$ having $1$ as their gains, and we denote it by $(G,\mathbb{T},1)$. If there is a cycle $C$ of $G$ consisting of edges $v_{1}v_{2},v_{2}v_3,\ldots, v_kv_1$, it can be given two directions: $C_1=v_{1}v_{2}\cdots v_{k}v_{1}$ and $C_2=v_{k}v_{k-1}\cdots v_{1}v_{k}$. The gain of $C_1$ in $\Phi$ is defined to be $\phi(C_1) = \phi(e_{v_{1}v_{2}})\phi(e_{v_{2}v_{3}}) \cdots \phi(e_{v_{k-1}v_{k}})\phi(e_{v_{k}v_{1}})$. Similarly, we can define $\phi(C_2)$ and have $\phi(C_2)=\phi(C_1)^{-1}$. If $\phi(C_1)=\phi(C_2)=1$, we say the cycle $C$ is neutral in $\Phi$ without a mention of the direction. If every cycle of $G$ is neutral in $\Phi$, we say $\Phi$ is balanced. Complex unit gain graphs have attracted attention in recent years. For more information, see \cite{HHD,MMA,YQT}.
The adjacency matrix $A(\Phi)$ of $\Phi=(G,\mathbb{T},\phi)$ is the $n\times n$ complex matrix $(a_{ij})$ with $a_{ij}=\phi(e_{v_iv_j})$ if $e_{v_iv_j}\in \vec{E}$ and $0$ otherwise, where $n$ is the order of $G$. Clearly, $A(\Phi)$ is Hermitian and all its eigenvalues are real. The energy of $\Phi$, denoted by $\mathcal{E}(\Phi)$, is defined to be the sum of the absolute values of its eigenvalues.
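As a small illustration of these definitions (the gains below are chosen arbitrarily), one can build the adjacency matrix of a $\mathbb{T}$-gain triangle and evaluate its energy numerically:

```python
import numpy as np

g = np.exp(1j*np.pi/3)            # gain of e_{v1 v2}; the other two edges have gain 1
A = np.array([[0,          g, 1],
              [np.conj(g), 0, 1],
              [1,          1, 0]])

assert np.allclose(A, A.conj().T)      # A(Phi) is Hermitian
eigs = np.linalg.eigvalsh(A)           # hence all eigenvalues are real
energy = np.sum(np.abs(eigs))          # E(Phi) = sum of absolute eigenvalues
assert np.isclose(np.sum(eigs), 0.0)   # the trace of A is zero
```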
There is a large body of literature investigating bounds on the energy of a graph in terms of other parameters. In \cite{AGZ}, Akbari et al. proved that the rank of a graph is a sharp lower bound of its energy. Wang and Ma \cite{WM} gave sharp bounds on graph energy in terms of the vertex cover number and characterized all the extremal graphs attaining these bounds. Wong et al. \cite{WWC} proved $\mathcal{E}(G)\ge 2\mu(G)$, where $\mu(G)$ is the matching number of $G$, and partially characterized the extremal graphs. These results have already been extended to oriented graphs and mixed graphs in \cite{TW,WL}.
In this paper, we establish a lower bound for the energy of a complex unit gain graph
in terms of the matching number of its underlying graph, and characterize
all the complex unit gain graphs whose energy reaches this lower bound. Our result generalizes the corresponding results on graphs \cite{WWC}, oriented graphs \cite{TW} and mixed graphs in \cite{WL}.
\begin{thm}\label{thm 1.1}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph and $\mu(G)$ the matching number of $G$. Then
\begin{equation}\label{eq 1}
\mathcal{E}(\Phi)\ge 2\mu(G).
\end{equation}
Equality holds if and only if $\Phi$ is balanced and $G$ is the disjoint union of some regular complete bipartite graphs, together with some isolated vertices.
\end{thm}
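A numerical spot check of the theorem on $K_{2,2}$ (where $\mu=2$): the balanced gain graph attains $\mathcal{E}=2\mu=4$, while making the $4$-cycle non-neutral pushes the energy strictly above the bound. (The particular gain below is an arbitrary choice.)

```python
import numpy as np

def energy(A):
    return np.sum(np.abs(np.linalg.eigvalsh(A)))

Z = np.zeros((2, 2))
B = np.ones((2, 2), dtype=complex)                 # biadjacency matrix of K_{2,2}
A_bal = np.block([[Z, B], [B.conj().T, Z]])        # balanced: all gains equal 1
assert np.isclose(energy(A_bal), 4.0)              # equality E = 2*mu

B_unb = B.copy()
B_unb[0, 0] = np.exp(1j*np.pi/4)                   # the 4-cycle is no longer neutral
A_unb = np.block([[Z, B_unb], [B_unb.conj().T, Z]])
assert energy(A_unb) > 4.0 + 1e-6                  # strict inequality
```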
\section{Preliminaries}\label{s2}
In this section, we shall introduce some notations and lemmas on complex unit gain graphs.
Recall that the degree of a vertex $u$ of $G$ is the number of its neighbors, i.e., the number of the vertices which are adjacent to $u$. If the degree of $u$ is $1$, we call it a pendant vertex of $G$. A graph $H$ is a subgraph of $G$ if $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. Further, if two vertices in $V(H)$ are adjacent in $H$ if and only if they are adjacent in $G$, $H$ is called an induced subgraph of $G$. For any subgraph $G_1$ of $G$, we define the subgraph $\Phi_1=(G_1,\mathbb{T},\phi)$ of $\Phi=(G,\mathbb{T},\phi)$ by restricting $\phi$ to $\overrightarrow{E}(G_1)=\{e_{uv},e_{vu}~|~uv\in E(G_1)\}$.
Let $G[V_1]$ (resp. $G[E_1]$) be the subgraph of $G$ induced by $V_1 \subseteq V(G)$ (resp. $E_1 \subseteq E(G)$). We use $G-V_1$ to denote $G[ \overline{V_1} ]$, where $\overline{V_1}=V(G)\setminus V_1$. For any induced subgraph $H$ of $G$, we simply denote $G-V(H)$ by $G-H$ and call it the complement of $H$ in $G$. We write $G = H \oplus (G-H)$ when no edges in $G$ join the induced subgraph $H$ and its complement $G-H$. For a nonempty set $S\subseteq E(G)$, let $G-S$ be the spanning subgraph obtained from $G$ by deleting the edges in $S$. If there exists an induced subgraph $K$ such that $G-S=K\oplus (G-K)$, $S$ is called an edge cut of $G$.
Similar to \cite[Theorems 3.4 and 3.6]{DS}, we have the following lemma.
\begin{lem}\label{lem 3.1}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph and $S$ an edge cut of $G$. Then $\mathcal{E}(G-S,\mathbb{T},\phi) \leq \mathcal{E}(\Phi)$. Further, if $G[S]$ is a star, $\mathcal{E}(G-S,\mathbb{T},\phi)< \mathcal{E}(\Phi)$.
\end{lem}
A matching $M$ of $G$ is an edge subset such that no two edges in $M$ share a common vertex. If $u$ is incident to some edge in $M$, $u$ is said to be saturated by $M$. Vertices which are not incident to any edge in $M$ are unsaturated by $M$. A maximum matching of $G$ is a matching which contains the largest possible number of edges. The size of a maximum matching is known as the matching number of $G$, denoted by $\mu(G)$. $M$ is called a perfect matching of $G$ if every vertex of $G$ is saturated by $M$.
Similar to \cite[Theorem 1.1 {\rm (i)}]{WWC}, Lemma \ref{lem 3.1} implies the inequality (\ref{eq 1}) in Theorem \ref{thm 1.1}. Using this inequality, the following Lemmas \ref{lem 3.3}-\ref{lem 3.5} can be proved by similar methods in \cite{TW} and \cite{WL}.
\begin{lem}\label{lem 3.3}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph with at least $3$ vertices. If $G$ is connected and has a pendant vertex, then $\mathcal{E}(\Phi)> 2\mu(G)$.
\end{lem}
\begin{lem}\label{lem 3.4}
Let $\Phi=(\tilde{C_6},\mathbb{T},\phi)$ be a complex unit gain graph, where $\tilde{C_6}$ is obtained from a $6$-cycle $C_6 = v_1v_2v_3v_4v_5v_6v_1$ by adding an edge $v_2v_5$. Then $\mathcal{E}(\Phi)> 6$.
\end{lem}
\begin{lem}\label{lem 3.2}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph. Suppose $G_1$ is an induced subgraph of $G$ with $\mu(G)= \mu(G_1) + \mu(G-G_1)$. If $\mathcal{E}(\Phi)=2\mu(G)$, then $\mathcal{E}(G_1,\mathbb{T},\phi)=2\mu(G_1)$ and $G_1$ is not $P_4$ or $\tilde{C_6}$.
\end{lem}
\begin{lem}\label{lem 3.5}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph without isolated vertices. If $\mathcal{E}(\Phi) = 2\mu(G)$, then $G$ has a perfect matching.
\end{lem}
\section{Proof of Theorem \ref{thm 1.1}}\label{s3}
To prove Theorem \ref{thm 1.1}, we need the following lemmas.
\begin{lem}\label{lem 3.6}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph, where $G$ is a connected bipartite graph with at least two vertices. If $\mathcal{E}(\Phi) = 2\mu(G)$, $G$ is a regular complete bipartite graph.
\end{lem}
\begin{proof}
Assume that $G$ has $n$ vertices and its two partite sets are $X,Y$. Since $\mathcal{E}(\Phi) = 2\mu(G)$, by Lemma \ref{lem 3.5} we know that $G$ has a perfect matching, say $M$. Thus, $n$ is even and $|X|=|Y|= \mu(G) = n/2$.
Next we prove $G$ is complete by induction on $n$. If $n=2$, $G$ is the complete graph of order $2$. Thus the result holds clearly. We assume that the result holds for complex unit gain graphs of order at most $n-2$ $(n\ge 4)$. In what follows we suppose that $\Phi=(G,\mathbb{T},\phi)$ is an $n$-vertex complex unit gain graph with $\mathcal{E}(\Phi) = 2\mu(G)$ and $G$ is a connected bipartite graph.
Let $X =\{x_1, x_2, \ldots , x_{n/2}\}$ and $Y =\{y_1, y_2, \ldots , y_{n/2}\}$. Suppose on the contrary that $G$ is incomplete. Then there exist vertices $x_1 \in X$, $y_1 \in Y$ such that $x_1$ is not adjacent to $y_1$ in $G$. Suppose that $x_1$ is $M$-saturated by edge $x_1y_2$ and $y_1$ is $M$-saturated by edge $x_2y_1$. If $n=4$, $x_2$ must be adjacent to $y_2$ as $G$ is connected and thus $G \cong P_4$ with two pendant vertices $x_1, y_1$. By Lemma \ref{lem 3.3}, we have $\mathcal{E}(\Phi)> 2\mu(G)$, which is a contradiction. Thus, $x_1$ must be adjacent to $y_1$ and hence $G$ is complete.
If $n\ge 6$, let $G_1= G[\{x_1,x_2,y_1,y_2\}]$, $G_2 = G-\{x_1,x_2,y_1,y_2\}$ and $S$ the edge cut of $G$ such that $G-S=G_1 \oplus G_2$. Note that $\mu(G) = \mu(G_1) + \mu(G_2)$. Denote $(G_1,\mathbb{T},\phi)$ and $(G_2,\mathbb{T},\phi)$ by $\Phi_1$ and $\Phi_2$, respectively. By Lemma \ref{lem 3.2}, we have $\mathcal{E}(\Phi_1) = 2\mu(G_1)$, $\mathcal{E}(\Phi_2)= 2\mu(G_2)$ and $G_1$ is not $P_4$. Thus, $x_2$ is not adjacent to $y_2$ in $G$ and $G_1\cong 2K_2$. Let $G_1^1=G[\{x_1,y_2\}]$ and $G_1^2=G[\{x_2,y_1\}]$. Thus $G_1=G_1^1\oplus G_1^2$.
Assume that $G_2$ has $\omega$ connected components, denoted by $G_2^1,G_2^2, \ldots , G_2^\omega$. As $G_2$ has a perfect matching, each of these connected components is non-trivial and has a perfect matching. By the inequality (\ref{eq 1}), $$2\mu(G_2)=\mathcal{E}(\Phi_2)=\sum_{j=1}^\omega \mathcal{E}(G_2^j,\mathbb{T},\phi)\ge\sum_{j=1}^\omega 2\mu(G_2^j) = 2\mu(G_2).$$Hence, $\mathcal{E}(G_2^j,\mathbb{T},\phi) = 2\mu(G_2^j)$ for $j=1,2,\ldots,\omega$. By induction hypothesis, we obtain that each connected component $G_2^j$ is a regular complete bipartite graph.
If $x,y\in V(G)$ are adjacent in $G$, we write $x\sim y$. According to Lemma \ref{lem 3.2}, if $x_1\sim y_0$ (resp. $x_2\sim y_0$) for any $y_0\in V(G_2^j)\cap Y$, $y_2\sim x$ (resp. $y_1\sim x$) for every $x\in V(G_2^j)\cap X$ and $x_1\sim y$ (resp. $x_2\sim y$) for every $y\in V(G_2^j)\cap Y$. Similarly, if $y_1\sim x_0$ (resp. $y_2\sim x_0$) for any $x_0\in V(G_2^j)\cap X$, $x_2\sim y$ (resp. $x_1\sim y$) for every $y\in V(G_2^j)\cap Y$ and $y_1\sim x$ (resp. $y_2\sim x$) for every $x\in V(G_2^j)\cap X$.
We claim that $\omega=1$. Suppose on the contrary that $\omega\ge 2$. For two subsets $V_1,V_2$ of $V(G)$, let $E(V_1,V_2)=\{uv\in E(G)~|~u\in V_1,v\in V_2\}$. Since $G$ is connected, there exist two different connected components $G_2^{j_1}$ and $G_2^{j_2}$ of $G_2$ such that both $E(V(G_1^i),V(G_2^{j_1}))$ and $E(V(G_1^i),V(G_2^{j_2}))$ are non-empty sets, where $i=1$ or $2$. Without loss of generality, suppose both $E(V(G_1^1),V(G_2^1))$ and $E(V(G_1^1),V(G_2^2))$ are non-empty. For any $x_3\in V(G_2^1)\cap X$, $y_3 \in V(G_2^1)\cap Y$, $x_4\in V(G_2^2)\cap X$ and $y_4 \in V(G_2^2)\cap Y$, consider the subgraph $H$ induced by $\{x_1,x_3,x_4,y_2,y_3,y_4\}$. Clearly, $\mu(G)=\mu(H)+\mu(G-H)$ and $H\cong \tilde{C_6}$, which is a contradiction by Lemma \ref{lem 3.2}. Thus $\omega=1$.
Now we have $G-S=G_1^1\oplus G_1^2\oplus G_2$, where $G_1^1$, $G_1^2$ and $G_2$ are connected. As $G$ is connected, both $E(V(G_1^1),V(G_2))$ and $E(V(G_1^2),V(G_2))$ are non-empty. Then consider the subgraph $K$ induced by $\{x_1,x_2,x_3,y_1,y_2,y_3\}$, where $x_3\in V(G_2)\cap X$ and $y_3\in V(G_2)\cap Y$. We have $K\cong \tilde{C_6}$ and $\mu(G)=\mu(K)+\mu(G-K)$, which contradicts Lemma \ref{lem 3.2}. Thus, the assumption that $x_1\not \sim y_1$ is incorrect and we obtain that $G$ is a complete bipartite graph. As $|X|=|Y|$, $G$ is also regular.
\end{proof}
Any function $\zeta:V(G)\to \mathbb{T}$ is called a switching function of $G$. Switching $\Phi=(G,\mathbb{T},\phi)$ by $\zeta$ means replacing $\phi$ by $\phi^\zeta$, which is defined as $\phi^\zeta(e_{uv}) = \zeta(u)^{-1}\phi(e_{uv})\zeta(v)$, and the resulting graph is denoted by $\Phi^\zeta = (G,\mathbb{T},\phi^\zeta)$. In this case, we say $\Phi$ and $\Phi^\zeta$ are switching equivalent, written as $\Phi\sim \Phi^\zeta$. As $A(\Phi)$ and $A(\Phi^\zeta)$ are similar matrices, $\Phi$ and $\Phi^\zeta$ have the same energy. \cite[Lemma 5.3]{Z} shows that a complex unit gain graph $\Phi=(G,\mathbb{T},\phi)$ is balanced if and only if $\Phi$ is switching equivalent to $(G,\mathbb{T},1)$.
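Switching is a diagonal unitary similarity: with $D=\mathrm{diag}(\zeta)$, one has $A(\Phi^\zeta)=D^{-1}A(\Phi)D$, so the spectrum (and hence the energy) is invariant. A randomized numerical check of this fact (the gains and switching values below are random):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
upper = np.triu(np.exp(1j*rng.uniform(0, 2*np.pi, (n, n))), 1)
A = upper + upper.conj().T                     # a random T-gain graph on K_5

zeta = np.exp(1j*rng.uniform(0, 2*np.pi, n))   # switching function values
A_sw = np.diag(np.conj(zeta)) @ A @ np.diag(zeta)   # D^{-1} A D with D = diag(zeta)

e = np.sort(np.linalg.eigvalsh(A))
e_sw = np.sort(np.linalg.eigvalsh(A_sw))
assert np.allclose(e, e_sw)                    # identical spectra, identical energy
```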
\begin{lem}\label{lem 3.10}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph, where $G$ is a connected bipartite graph with at least two vertices. If $\mathcal{E}(\Phi) = 2\mu(G)$, then $\Phi$ is balanced.
\end{lem}
\begin{proof}
According to Lemma \ref{lem 3.6}, we know $G$ is a regular complete bipartite graph. We show that $\Phi$ is balanced by induction on $n$. If $n=2$, $G$ is a tree which has no cycles and thus $\Phi$ is balanced. We assume that the result holds for complex unit gain graphs of order $n-2$ $(n\ge 4)$. Let $G$ be an $n$-vertex regular complete bipartite graph with $X=\{x_1,x_2,\ldots,x_{n/2}\}$ and $Y=\{y_1,y_2,\ldots,y_{n/2}\}$ being its partite sets and $\Phi=(G,\mathbb{T},\phi)$ a complex unit gain graph with $\mathcal{E}(\Phi)=2\mu(G)$.
Let $S$ be the edge cut of $G$ such that $G-S=G_1\oplus G_2$, where $G_1=G[\{x_1,y_1\}]$ and $G_2=G-\{x_1,y_1\}$. Clearly, $G_1$ is balanced, and thus there exists a switching function of $G_1$, denoted by $\zeta_1':V(G_1)\to \mathbb{T}$, such that $(G_1,\mathbb{T},\phi^{\zeta_1'})=(G_1,\mathbb{T},1)$. Define a switching function $\zeta_1:V(G)\to \mathbb{T}$ of $G$, where $\zeta_1(x_1)=\zeta_1'(x_1)$, $\zeta_1(y_1)=\zeta_1'(y_1)$ and $\zeta_1(z)=1$ for any $z\in V(G)\setminus\{x_1,y_1\}$. Switch $\Phi$ by $\zeta_1$ and denote $\Phi^{\zeta_1}$ by $\Phi_1=(G,\mathbb{T},\phi_1)$, where $\phi_1=\phi^{\zeta_1}$. Then we have $(G_1,\mathbb{T},\phi_1)=(G_1,\mathbb{T},1)$ and $\mathcal{E}(\Phi_1)=\mathcal{E}(\Phi)=2\mu(G)$. Note the fact that $\mu(G)=\mu(G_1)+\mu(G_2)$. Then according to Lemma \ref{lem 3.2}, we have $\mathcal{E}(G_2,\mathbb{T},\phi_1)=2\mu(G_2)$.
Note that $G_2$ is an $(n-2)$-vertex regular complete bipartite graph. Then by induction hypothesis, $(G_2,\mathbb{T},\phi_1)$ is balanced and thus there exists a switching function $\zeta_2':V(G_2)\to \mathbb{T}$ of $G_2$ such that $(G_2,\mathbb{T},\phi_1^{\zeta_2'})=(G_2,\mathbb{T},1)$. Define a new switching function $\zeta_2:V(G)\to \mathbb{T}$ of $G$, where $\zeta_2(z)=\zeta_2'(z)$ for all $z\in V(G_2)$ and $\zeta_2(x_1)=\zeta_2(y_1)=1$. Switch $\Phi_1$ by $\zeta_2$ and denote $\Phi_1^{\zeta_2}$ by $\Phi_2=(G,\mathbb{T},\phi_2)$, where $\phi_2=\phi_1^{\zeta_2}$. Clearly, $(G_1,\mathbb{T},\phi_2)=(G_1,\mathbb{T},1)$, $(G_2,\mathbb{T},\phi_2)=(G_2,\mathbb{T},1)$ and $\mathcal{E}(\Phi_2)=\mathcal{E}(\Phi_1)=\mathcal{E}(\Phi)=2\mu(G)$.
Consider the subgraph $K$ induced by $\{x_1,y_1,x_0,y_0\}$ where $x_0$ is any vertex in $V(G_2)\cap X$ and $y_0$ is any vertex in $V(G_2)\cap Y$. Then $K$ is a $4$-cycle $x_1y_1x_0y_0x_1$ and $\phi_2(e_{x_1y_1})=\phi_2(e_{x_0y_0})=1$. Note that $\mu(G)=\mu(K)+\mu(G-K)$. By Lemma \ref{lem 3.2}, we have $\mathcal{E}(K,\mathbb{T},\phi_2)=2\mu(K)=4$. Suppose $\phi_2(e_{y_1x_0})=a$ and $\phi_2(e_{y_0x_1})=b$ where $a,b\in \mathbb{T}$. Then the characteristic polynomial of $A(K,\mathbb{T},\phi_2)$ is $f(\lambda)=\lambda^4-4\lambda^2+2-2{\rm Re}(ab)$. Let $x={\rm Re}(ab)\in [-1,1]$. The energy of $(K,\mathbb{T},\phi_2)$ is $$2\sqrt{2+\sqrt{2+2x}}+2\sqrt{2-\sqrt{2+2x}}\ge 4,$$ and the equality holds if and only if $x=1$ which implies $a=\bar{b}$. Thus we know that $\phi_2(e_{y_1x_0})=\phi_2(e_{x_1y_0})=a$. Because of the arbitrariness of $y_0\in V(G_2)\cap Y$, we have for any $y\in V(G_2)\cap Y$, $\phi_2(e_{x_1y})=a$. Similarly, due to the arbitrariness of $x_0\in V(G_2)\cap X$, we have for any $x\in V(G_2)\cap X$, $\phi_2(e_{y_1x})=a$.
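The characteristic polynomial and the minimization step above can be confirmed numerically (the gains $a,b$ below are arbitrary unit complex numbers):

```python
import numpy as np

def cycle_energy(a, b):
    """Energy of the gain 4-cycle x1 y1 x0 y0 x1 with edge gains 1, a, 1, b."""
    A = np.zeros((4, 4), dtype=complex)        # vertex order: x1, y1, x0, y0
    A[0, 1] = 1.0; A[1, 2] = a; A[2, 3] = 1.0; A[3, 0] = b
    A = A + A.conj().T
    return np.sum(np.abs(np.linalg.eigvalsh(A)))

def closed_form(x):
    """2*sqrt(2+sqrt(2+2x)) + 2*sqrt(2-sqrt(2+2x)) with x = Re(ab)."""
    s = np.sqrt(2 + 2*x)
    return 2*np.sqrt(2 + s) + 2*np.sqrt(2 - s)

for a, b in [(1+0j, 1+0j), (np.exp(0.7j), np.exp(-0.7j)), (1j, 1+0j)]:
    assert np.isclose(cycle_energy(a, b), closed_form((a*b).real))
assert np.isclose(closed_form(1.0), 4.0)       # minimum attained iff Re(ab) = 1
assert closed_form(0.0) > 4.0
```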
Here we have $(G_1,\mathbb{T},\phi_2)=(G_1,\mathbb{T},1)$, $(G_2,\mathbb{T},\phi_2)=(G_2,\mathbb{T},1)$ and $\phi_2(e_{x_1y})=\phi_2(e_{y_1x})=a$ for every $y\in V(G_2)\cap Y$ and $x\in V(G_2)\cap X$. Define the third switching function $\zeta_3:V(G)\to \mathbb{T}$ of $G$, where $\zeta_3(x_1)=\zeta_3(y_1)=1$ and $\zeta_3(z)=a^{-1}$ for all $z\in V(G_2)$. Switch $\Phi_2$ by $\zeta_3$ and denote $\Phi_2^{\zeta_3}$ by $\Phi_3=(G,\mathbb{T},\phi_3)$, where $\phi_3=\phi_2^{\zeta_3}$. One can verify that all edges in $\Phi_3$ have the gain $1$. Thus we switch $\Phi$ by $\zeta_1$, $\zeta_2$ and $\zeta_3$ successively and then get $(G,\mathbb{T},1)$. By Lemma 5.3 in \cite{Z}, we obtain the desired result that $\Phi$ is balanced.
\end{proof}
Suppose $G$ and $H$ are graphs with vertex set $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $V(H) =\{u_1, u_2, \ldots , u_m\}$, respectively. Then we define the Kronecker product of $\Phi=(G,\mathbb{T},\phi)$ and $H$, which is also a complex unit gain graph, denoted by
$\Phi\otimes H$. Its underlying graph is $G\otimes H$ with vertex set $\{(v_s,u_t)~|~s=1,2,\ldots,n;t=1,2,\ldots,m\}$ and edge set $\big\{(v_s,u_t)(v_{s'},u_{t'})~\big|~v_sv_{s'}\in E(G),~u_t u_{t'}\in E(H)\big\}$. The gain of $e_{(v_s,u_t)(v_{s'},u_{t'})}$ in $\Phi\otimes H$ is defined to be the gain of $e_{v_sv_{s'}}$ in $\Phi$. In particular, $\Phi\otimes K_2$ is called the complex unit gain bipartite double of $\Phi$.
Let $U=(u_{st})$ and $V$ be two matrices of order $p_1 \times p_2$ and $q_1 \times q_2$, respectively. The Kronecker product of $U$ and $V$ is defined to be $U\otimes V=(u_{st}V)$, which is a $p_1q_1 \times p_2q_2$ matrix. Note that the adjacency matrix of $\Phi\otimes H$ is $A(\Phi\otimes H) = A(\Phi) \otimes A(H)$. If the eigenvalues of $\Phi$ are $\eta_1, \eta_2, \ldots , \eta_n$ and the eigenvalues of $H$ are $\lambda_1, \lambda_2, \ldots , \lambda_m$, the eigenvalues of $\Phi\otimes H$ are $\eta_s\lambda_t$ where $s=1,2,\ldots,n$ and $t=1,2,\ldots,m$ (see \cite{B} for details).
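For the bipartite double this means the spectrum of $\Phi\otimes K_2$ is $\{\pm\eta_s\}$, so the energy doubles; a quick numerical check on a random $\mathbb{T}$-gain graph:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
upper = np.triu(np.exp(1j*rng.uniform(0, 2*np.pi, (n, n))), 1)
A = upper + upper.conj().T                   # random T-gain graph on K_4
A2 = np.array([[0, 1], [1, 0]])              # adjacency matrix of K_2

A_double = np.kron(A, A2)                    # A(Phi (x) K_2) = A(Phi) (x) A(K_2)
e = np.linalg.eigvalsh(A)
e_double = np.linalg.eigvalsh(A_double)

assert np.allclose(np.sort(e_double), np.sort(np.concatenate([e, -e])))
assert np.isclose(np.sum(np.abs(e_double)), 2*np.sum(np.abs(e)))
```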
\begin{lem}\label{lem 3.7}
Let $\Phi=(G,\mathbb{T},\phi)$ be a complex unit gain graph whose underlying graph $G$ is connected and non-bipartite. Then we have $\mathcal{E}(\Phi)> 2\mu(G)$.
\end{lem}
\begin{proof}
Suppose on the contrary that $\mathcal{E}(\Phi)=2\mu(G)=n$. Then by Lemma \ref{lem 3.5}, $G$ has a perfect matching. As $\mu(G)=n/2$, we know $G$ has $n$ vertices, denoted by $\{v_1,v_2,\ldots,v_{n}\}$. Suppose that $G$ has $m$ edges. Let $H$ be the complete graph with vertex set $\{u_1,u_2\}$. Then consider the Kronecker product $\Phi\otimes H$. Its underlying graph $G\otimes H$ has $2n$ vertices and $2m$ edges. Clearly, $G\otimes H$ is a bipartite graph with $X=\{(v_i,u_1)~|~i=1,2,\ldots,n\}$ and $Y=\{(v_i,u_2)~|~i=1,2,\ldots,n\}$ being its two partite sets and $\mathcal{E}(\Phi\otimes H)=2\mathcal{E}(\Phi)=2n$. Since $G$ has a perfect matching, so does $G\otimes H$ and $\mu(G\otimes H)=2\mu(G)=n$. Then $\mathcal{E}(\Phi\otimes H)=2\mu(G\otimes H)$.
Clearly, $G\otimes H$ is incomplete, as $(v_i,u_1)\in X$ is not adjacent to $(v_i,u_2)\in Y$ for any $i=1,2,\ldots,n$. According to Lemma \ref{lem 3.6}, we know $G\otimes H$ is not connected.
We claim $G\otimes H$ has only two isomorphic connected components. Suppose that $G\otimes H$ has $l$ connected components, denoted by $\Omega_1,\Omega_2,\ldots,\Omega_l$, each of which is a bipartite graph. Then by inequality (\ref{eq 1}), $2\mu(G\otimes H)=\mathcal{E}(\Phi\otimes H)=\sum_j\mathcal{E}(\Omega_j,\mathbb{T},\phi)\ge \sum_j 2\mu(\Omega_j)=2\mu(G\otimes H)$. Then $\mathcal{E}(\Omega_j,\mathbb{T},\phi)=2\mu(\Omega_j)$ for all $j=1,2,\ldots,l$. By Lemma \ref{lem 3.6}, we know that each $\Omega_j$ is a regular complete bipartite graph.
Suppose the two partite sets of $\Omega_1$ are $X_1=\{(x_1,u_1),(x_2,u_1),\ldots,(x_t,u_1)\}$ and $Y_1=\{(y_1,u_2),(y_2,u_2),\ldots,(y_t,u_2)\}$, where $x_i,y_i\in V(G)$ for $i=1,2,\ldots,t$. Let $X_1'=\{x_1,x_2,\ldots,x_t\}$ and $Y_1'=\{y_1,y_2,\ldots,y_t\}$. Since $\Omega_1$ is a complete bipartite graph, $x_i\sim y_j$ in $G$ for any $i,j=1,2,\ldots,t$. Thus $X_1'\cap Y_1'=\emptyset$ and $Y_1'\subseteq N(x_i)$ for each $i=1,2,\ldots,t$, where $N(x_i)$ is the set of the neighbors of $x_i$ in $G$. Suppose there exists $z\in V(G)\setminus Y_1'$ such that $x_i\sim z$ in $G$. Then $(x_i,u_1)$ must be adjacent to $(z,u_2)$ in $G\otimes H$. Hence $(z,u_2)\in Y_1$, which implies $z\in Y_1'$, a contradiction to the choice of $z$. Thus $N(x_i)=Y_1'$ for all $i=1,2,\ldots,t$. Similarly, $N(y_j)=X_1'$ for all $j=1,2,\ldots,t$. Let $X_2=\{(x_1,u_2),(x_2,u_2),\ldots,(x_t,u_2)\}$ and $Y_2=\{(y_1,u_1),(y_2,u_1),\ldots,(y_t,u_1)\}$. Then $X_2\cup Y_2$ induce another connected component of $G\otimes H$, say $\Omega_2$.
Consider the subgraph $G'$ of $G$ induced by $X_1'\cup Y_1'$. Clearly, $G'$ is a complete bipartite graph with $X_1'$ and $Y_1'$ being its two partite sets, and both $\Omega_1$ and $\Omega_2$ are isomorphic to $G'$. We also assert that $G'$ is a connected component of $G$. If $G\otimes H$ had a third connected component, then $V(G)\setminus V(G')$ would be non-empty and thus $G$ would not be connected, a contradiction. This proves the desired result that $G\otimes H$ has only two isomorphic connected components.
From the above discussion, we also know that $G$ is a complete bipartite graph, which contradicts the fact that $G$ is non-bipartite. Thus the assumption $\mathcal{E}(\Phi)=2\mu(G)$ is incorrect and $\mathcal{E}(\Phi)>2\mu(G)$.
\end{proof}
\noindent{\bf Proof of Theorem \ref{thm 1.1}}:
The inequality (\ref{eq 1}) can be proved by a similar method used in \cite[Theorem 1.1 {\rm (i)}]{WWC}. In the following, we prove the necessary and sufficient conditions for the energy of a complex unit gain graph to reach its lower bound.
(Sufficiency) Assume $$G=(\cup_{j=1}^\omega K_{n_j,n_j})\cup (n-2\sum_{j=1}^\omega n_j)K_1,$$ where $K_{s,t}$ denotes the complete bipartite graph with partite sets of sizes $s$ and $t$, and $K_1$ denotes an isolated vertex. Since $\Phi$ is balanced, we have $$\mathcal{E}(\Phi)=\sum_{j=1}^\omega\mathcal{E}(K_{n_j,n_j},\mathbb{T},\phi)=\sum_{j=1}^\omega\mathcal{E}(K_{n_j,n_j})=\sum_{j=1}^\omega 2n_j=\sum_{j=1}^\omega 2\mu(K_{n_j,n_j})=2\mu(G).$$
(Necessity) Assume that $\mathcal{E}(\Phi) = 2\mu(G)$. Suppose that $G$ has $\omega$ non-trivial connected components $G_1, G_2, \ldots , G_\omega$. By the inequality (\ref{eq 1}), we obtain $$2\mu(G)=\mathcal{E}(\Phi) = \sum_{j=1}^\omega\mathcal{E}(G_j,\mathbb{T},\phi)\ge \sum_{j=1}^\omega 2\mu(G_j)=2\mu(G).$$
Thus $\mathcal{E}(G_j,\mathbb{T},\phi)=2\mu(G_j)$ for $j=1,2,\ldots,\omega$. By Lemma \ref{lem 3.7}, each non-trivial connected component $G_j$ is bipartite. By Lemmas \ref{lem 3.6} and \ref{lem 3.10}, $(G_j,\mathbb{T},\phi)$ is balanced and $G_j$ is a regular complete bipartite graph, for $j=1,2,\ldots,\omega$. Therefore, $\Phi$ is balanced and $G$ is the disjoint union of some regular complete bipartite graphs, together with some isolated vertices. \hfill$\qed$
\begin{re}
{\rm
The results in \cite[Theorem 1.3]{WF} and \cite[Theorem 5.2]{WLM} can be extended to complex unit gain graphs, which gives an upper bound of $\mathcal{E}(\Phi)$ in terms of the rank of $\Phi$ and characterizes all the extremal complex unit gain graphs. Applying Theorem \ref{thm 1.1}, the bounds of graph energy in terms of the vertex cover number given in \cite[Theorems 3.1 and 4.2]{WM} can also be extended to the energy of complex unit gain graphs. However, the equality case in \cite[Theorem 3.1]{WM} follows from the Perron--Frobenius theorem, which only holds for real matrices. }
\end{re}
\section*{Acknowledgement}
The author would like to thank Professor Kaishun Wang and Doctor Benjian Lv for their valuable comments and suggestions regarding this work.
\section{Introduction}
\label{sec:intro}
The precise determination of the present-day expansion rate of the Universe, expressed by the Hubble constant $H_0$, is one of the most challenging tasks of modern cosmology.
Indeed, the value of $H_0$ inferred from different observations appears to be in persistent discrepancy which is conventionally treated as a tension between the {\it direct} (local) and {\it indirect} (global) measurements of the Hubble constant. Namely, the Planck measurement of $H_0$ coming from the cosmic microwave background (CMB) \cite{Aghanim:2018eyx} disagrees with the SH0ES result \cite{Riess:2019cxk}, based on traditional distance ladder approach utilizing Type Ia supernova, at $4.4\sigma$ level. The significance of this tension makes it unlikely to be a statistical fluctuation and hence requires an explanation.
Numerous local or late-time observations can provide an independent cross-check on the Cepheid-based $H_0$ measurements. In particular, the SN luminosity distances can be calibrated by Miras, $H_0=73.3\pm3.9\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Huang:2019yhh} and the Tip of the Red Giant Branch (TRGB) in the Hertzsprung-Russell diagram, $H_0=69.6\pm1.9\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Freedman:2020dne}. Alternatively, local measurements can be performed without relying on any distance ladder indicator through very-long-baseline interferometry observations of water megamasers $H_0=73.9\pm3.0\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Pesce:2020xfe}, using strongly-lensed quasar systems $H_0=73.3^{+1.7}_{-1.8}\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Wong:2019kwg}
\footnote{Recently, the modeling error in time delay cosmography under the assumptions on the form of the mass density profile has been questioned \cite{Kochanek:2019ruu,Blum:2020mgu}.
The thing is that there is a significant mass-sheet degeneracy that leaves the lensing observables unchanged while rescaling the absolute time delay, and thus alters the inferred $H_0$. The common strategy to deal with that is to make assumptions on the mass density profile motivated by local observations as done by H0LiCOW\,\cite{Wong:2019kwg}. An alternative approach is to partially constrain this inherent degeneracy exclusively by the kinematic information of the deflector galaxy that brings much looser constraints, $H_0=74.5^{+5.6}_{-7.1}\rm \,\,km\,s^{-1}Mpc^{-1}$ for a sample of 7 lenses (6 from H0LiCOW) and $H_0=67.4^{+4.1}_{-3.2}\rm \,\,km\,s^{-1}Mpc^{-1}$ when a set of 33 strong gravitational lenses from the SLACS sample is used \cite{Birrer:2020tax}. This hierarchical analysis fully accounts for the mass-sheet degeneracy in the error budget that statistically validates the mass profile assumptions made by H0LiCOW \cite{Birrer:2020tax}.
} and gravitational wave signal from merging binary neutron stars \cite{Abbott:2017xzu,Palmese:2020aof}. All these measurements affected by completely different possible systematics agree with each other and give persistently higher values of $H_0$ being in conflict with the Planck prediction \cite{Verde:2019ivm}.
The Hubble tension can be explained by the impact of possible systematics in the Planck data. Indeed, it has been found that the Planck data suffer from multiple internal inconsistencies that can potentially obscure the cosmological inference \cite{Ade:2015xua,Addison:2015wyg,Aghanim:2018eyx}. The most significant feature refers to an interesting oscillatory shape in the TT power spectrum that resembles an extra lensing smoothing of the CMB peaks compared to the $\Lambda$CDM expectation.
The significance of this "lensing anomaly" is rather high, $2.8\sigma$ \cite{Motloch:2019gux}, while no systematics in the Planck data has been identified so far \cite{Aghanim:2016sns,Aghanim:2018eyx}. Such inconsistencies force one to consider independent measurements of the CMB anisotropies, especially on small scales. Ground based observations provided by the South Pole Telescope (SPT) \cite{Story:2012wx,Henning:2017nuy} and the Atacama Cosmology Telescope (ACT) \cite{Aiola:2020azj,Choi:2020ccd} perfectly suit for this purpose since they probe exclusively small angular scales. These observations indicate internally consistent gravitational lensing of CMB, i.e. the lensing information deduced from the smoothing of acoustic peaks at high-$\ell$ agrees well with the predictions of 'unlensed' CMB temperature and polarization power spectra \cite{Henning:2017nuy,Han:2020qbp}.
A more beneficial approach is to combine ground based observations with the full sky surveys. Indeed, ground-based telescopes are sensitive to much smaller angular scales unattainable in full sky surveys that can bring a noticeable cosmological gain. Recently, combined data analysis based on the Planck temperature and SPTPol polarization and lensing measurements found a substantially higher value $H_0=69.68\pm1.00\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Chudaykin:2020acu} that alleviates the Hubble tension to $2.5\sigma$ statistical significance within the $\Lambda$CDM cosmology. It also completely mitigates the so-called $S_8$ tension between different probes of Large Scale Structure (LSS) statistics and the Planck measurements \cite{DiValentino:2020vvd}.
It implies that the mild tension between the LSS and CMB data is solely driven by an excess of the lensing-induced smoothing of acoustic peaks observed in the Planck temperature power spectrum at high-$\ell$ \cite{Chudaykin:2020acu,Addison:2015wyg,Aghanim:2016sns}.
Besides that, the information about the present-day expansion rate of the Universe can be extracted from different measurements at low redshifts calibrated by any early-universe data independently of any CMB data. This is done by combining LSS observations with primordial deuterium abundance measurements. First such measurement comes from the baryon acoustic oscillation (BAO) experiments. Utilizing the BAO data from the Baryon Oscillation Spectroscopic Survey (BOSS) \cite{Alam:2016hwk}, the prior on $\omega_b$ inferred from the Big Bang Nucleosynthesis (BBN) \cite{Cooke:2016rky} and late-time probe of the matter density from the Dark Energy Survey (DES) \cite{Abbott:2017wau} yields $H_0 = 67.4^{+1.1}_{-1.2}\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Abbott:2017smn}.
Measurements of BAO scales for galaxies and the Ly$\alpha$ forest \cite{Blomqvist:2019rah} augmented with the BBN prior bring similar estimate $H_0 =67.6\pm1.1\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Aubourg:2014yra,Cuceu:2019for,Schoneberg:2019wmt}. Second, the Hubble constant measurement can be accomplished with galaxy clustering alone using the full-shape (FS) information of the galaxy power spectrum \cite{Ivanov:2019pdj,DAmico:2019fhj,Philcox:2020vvt}. In particular, the joint FS+BAO analysis brings $H_0 = 68.6 \pm 1.1\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Philcox:2020vvt}.
Importantly, all these measurements assume the standard evolution of the Universe prior to recombination. It sticks the sound horizon $r_s$ to the $\Lambda$CDM function of cosmological parameters. However, any sizable shift in $H_0$ value that needed to solve the Hubble tension must be accompanied by corresponding modification of $r_s$ to preserve the fit to CMB data that measure the angular scale of the sound horizon $\theta_s$ with a very high accuracy. This modification can be accomplished by introducing a new component which increases $H(z)$ in the decade of scale factor evolution prior to recombination. Such early-universe scenarios have been advocated as the most likely solution of the Hubble tension in Ref. \cite{Knox:2019rjx}. The broad subclass of these models has been termed Early Dark Energy (EDE). Many EDE-like scenarios have been proposed from a phenomenological point of view \cite{Kamionkowski:2014zda,Poulin:2018a,Poulin:2018cxd,Smith:2019ihp,Agrawal:2019lmo,Lin:2019qug,Ye:2020btb,Niedermann:2020qbw}, whilst others present concrete realizations of particle-physics models \cite{Niedermann:2019olb,Alexander:2019rsc,Kaloper:2019lpl,Berghaus:2019cls}. It is pertinent to highlight two interesting realizations \cite{Sakstein:2019fmf,Braglia:2020bym} in which the EDE field becomes dynamical precisely around matter-radiation equality that ameliorates the coincidence problem inherent to most EDE implementations.
We examine one popular EDE implementation which postulates a dynamical scalar field that behaves like dark energy at early times and then rapidly decays in a relatively narrow time interval near matter-radiation equality. The increased energy density of the Universe prior to recombination shrinks the comoving sound horizon $r_s$, which lifts up $H_0$ while keeping the angular scale $\theta_s$ intact. This extension of the $\Lambda$CDM model can be parameterized by 3 parameters: the maximal injected EDE fraction $f_{\rm EDE}$, the critical redshift $z_c$ at which this maximum is reached and an initial scalar field value denoted by the dimensionless quantity $\theta_i$ (in analogy to the axion misalignment angle \cite{Dine:1982ah,Abbott:1982af,Preskill:1982cy}). It has been previously established that this prescription allows for values of $H_0$ consistent with SH0ES whilst preserving the fit to the CMB data \cite{Poulin:2018cxd,Smith:2019ihp}.
The situation becomes more intricate when LSS data are taken into account. The thing is that the EDE scenario matches the CMB data at the cost of shifting several cosmological parameters that is not compatible with LSS data. In particular, it substantially increases the physical density of cold dark matter $\omega_c$ and to a lesser extent the spectral index $n_s$ that raise up the late-time parameter $S_8=\sigma_8\sqrt{\Omega_m/0.3}$. This change exacerbates the $S_8$ tension between LSS observables and the Planck data and imposes tight constraints on the EDE scenario as a possible solution to the Hubble tension \cite{Hill:2020osr}. Namely, when considering all LSS data with the Planck, SNIa, BAO and RSD measurements one finds $f_{\rm EDE}<0.06\,(2\sigma)$ \cite{Hill:2020osr} which is well below the value needed to resolve the Hubble tension, $f_{\rm EDE}\sim0.1$. The main driver of this strong constraint is the overly enhanced lensing-induced smoothing effect that affects the Planck temperature power spectrum at high-$\ell$ and pulls the late-time amplitude to a higher value \cite{Addison:2015wyg,Aghanim:2016sns} being in conflict with the LSS data.
It has been shown that the tension between the Planck and various LSS probes can be reconciled if one combines the large-angular scale Planck temperature measurements \footnote{The idea of separating large and small angular scales in the Planck temperature power spectrum has been thoroughly investigated in literature, see e.g.\,\cite{Aghanim:2016sns,Burenin:2018nuf}.} with the ground-based observations of the SPTPol survey as argued in Ref. \cite{Chudaykin:2020acu}. Thus, one expects that the tight LSS constraints on EDE can be alleviated if one replaces the Planck CMB data at high multipoles $\ell$ with the SPTPol measurements. Revising the constraining power of LSS in the EDE model using the different CMB setup that predicts the consistent CMB lensing effect is one of the main goals of this paper.
Another important ingredient of our study is the full-shape analysis of galaxy power spectrum. This treatment is based on complete cosmological perturbation theory with a major input from the Effective Field Theory (EFT) of LSS. This approach includes all necessary ingredients (UV counterterms and IR resummation) needed to reliably describe galaxy clustering on mildly nonlinear scales. The full-shape template of the galaxy power spectrum contains a large amount of cosmological information that can effectively constrain various extensions of the $\Lambda$CDM model. In particular, it has been shown that the full-shape BOSS likelihood yields a $\approx20\%$ improvement on the EDE constraint from the CMB data alone \cite{Ivanov:2020ril}. Crucially, the standard BOSS likelihood does not appreciably shrink the Planck limits due to the lack of full-shape information therein \cite{Ivanov:2020ril}. In order to obtain more refined constraints on EDE, we employ the full-shape BOSS likelihood in our analysis.
In this paper, we examine the EDE scenario using the Planck and SPTPol measurements of the CMB anisotropy. Namely, we follow the combined data approach validated in Ref. \cite{Chudaykin:2020acu} and combine the Planck temperature power spectrum at large angular scales with polarization and lensing measurements from the SPTPol survey \cite{Henning:2017nuy}. This approach ensures the internally consistent CMB lensing effect and allows one to gain cosmological information from both large and small angular scales.
We improve our previous analysis \cite{Chudaykin:2020acu} in several directions. First, we solve the evolution of the scalar field perturbations directly using the Klein-Gordon equation which does not rely on the effective fluid description. Second, we consider a more realistic EDE setup which generalizes a pure power-law potential considered in Ref. \cite{Chudaykin:2020acu}. Third, we use the full BOSS galaxy power spectrum likelihood that yields much stronger constraints on EDE compared to the standard BOSS likelihood \cite{Ivanov:2020ril}.
Finally, we exploit the more recent LSS data coming from the DES-Y1 \cite{Abbott:2017wau}, Kilo-Degree Survey (KiDS) \cite{Asgari:2020wuj} and Subaru Hyper Suprime-Cam (HSC) \cite{Hikage:2018qbn} measurements that allow us to reduce by half the error bars on $S_8$ compared to that examined in Ref. \cite{Chudaykin:2020acu}.
The outline of this paper is as follows. In Sec. \ref{sec:theory} we review the physics of the EDE scenario. In Sec. \ref{sec:constr} we present the combined data approach, data sets and main results. Finishing in Sec. \ref{sec:conc} we highlight the differences between our approach and previous EDE analyses, interpret our outcomes and discuss the prospects.
\section{The early dark energy model}
\label{sec:theory}
The main goal of EDE proposal is to decrease the comoving sound horizon of the last scattering epoch,
\begin{equation}
\label{rs}
r_s(z_*) = \int _{z_*} ^\infty \frac{{\rm d} z}{H(z)} c_s(z) ,
\end{equation}
where $z_*$ denotes the redshift of the last scattering in such a way that the higher value of $H_0$ encoded in the comoving angular diameter distance
\begin{equation}
\label{DA}
D_A(z_*) = \int _0 ^{z_*} \frac{{\rm d} z}{H(z)} ,
\end{equation}
can be accommodated without changing the angular scale of the sound horizon,
\begin{equation}
\label{thetas}
\theta_* = \frac{r_s (z_*)}{D_A(z_*)}.
\end{equation}
Necessary adjustments of the early-universe dynamics can be readily understood. The angular diameter distance defined by \eqref{DA} is driven by the low-redshift cosmic evolution and, hence, directly relies on $H_0$. Eq. \eqref{thetas} implies that the upward shift in $H_0$ must be accompanied by the downward shift of $r_s$ since $\theta_*$ is measured to $0.03\%$ precision by Planck. However, the sound horizon given by \eqref{rs} is saturated near the lower bound of the integral that requires the increased expansion rate of the Universe at times shortly before recombination.
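As an illustration of this geometric degeneracy, the sketch below integrates Eq. \eqref{DA} for a toy flat $\Lambda$CDM background at fixed physical matter density $\omega_m=\Omega_m h^2$ (radiation is neglected, and all fiducial numbers are illustrative assumptions rather than fit results). Since $\theta_*$ in Eq. \eqref{thetas} is held fixed, $r_s$ must shrink by the same fraction as $D_A(z_*)$ when $H_0$ is raised.

```python
import numpy as np
from scipy.integrate import quad

Z_STAR = 1090.0     # approximate redshift of last scattering
OMEGA_M_H2 = 0.143  # physical matter density omega_m, assumed fixed by the CMB

def hubble(z, h):
    """H(z) in km/s/Mpc for flat LCDM at fixed omega_m (radiation neglected)."""
    return 100.0 * np.sqrt(OMEGA_M_H2 * (1.0 + z) ** 3 + h**2 - OMEGA_M_H2)

def comoving_distance(h):
    """D_A(z_*) of Eq. (2), up to the overall factor of c."""
    return quad(lambda z: 1.0 / hubble(z, h), 0.0, Z_STAR)[0]

d_planck = comoving_distance(0.674)  # Planck-like H0
d_shoes = comoving_distance(0.733)   # SH0ES-like H0

# theta_* = r_s / D_A is fixed, so r_s must change by the same fraction as D_A
ratio = d_shoes / d_planck
print(f"required fractional r_s reduction: {(1.0 - ratio) * 100:.1f}%")
```

At fixed $\omega_m$ the high-redshift part of the integral is insensitive to $h$, so this toy gives a percent-level reduction of $r_s$; in full EDE fits the required shift is larger because $\omega_m$ moves as well.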
In EDE scenarios such increase is provided by an additional contribution to the total energy density of the Universe which acts as dark energy at early times. The magnitude of the Hubble tension dictates the energy scale of the early-time contribution to be of order $\sim\,\text{eV}$. Crucially, this extra energy density initially stored in EDE must rapidly decay and practically disappear before the last scattering so as not to affect the CMB anisotropy on small scales.
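This eV estimate can be checked at the order-of-magnitude level: the total energy density at matter-radiation equality is roughly $2\Omega_m\rho_{\rm crit}(1+z_{\rm eq})^3$, and an injected fraction $f_{\rm EDE}\sim0.1$ then corresponds to a sub-eV energy scale. The fiducial numbers below ($h$, $\Omega_m$, $z_{\rm eq}$) are illustrative assumptions.

```python
# rough order-of-magnitude check, natural units (energies in eV)
H0_EV = 0.674 * 2.133e-33   # H0 in eV for h = 0.674 (100 km/s/Mpc ~ 2.13e-33 eV)
MPL_EV = 2.435e27           # reduced Planck mass in eV
OMEGA_M, Z_EQ = 0.315, 3400.0

rho_crit = 3.0 * H0_EV**2 * MPL_EV**2                # critical density today, eV^4
rho_eq = 2.0 * OMEGA_M * rho_crit * (1.0 + Z_EQ)**3  # matter ~ radiation at z_eq

f_ede = 0.1
rho_ede = f_ede * rho_eq

print(f"total energy scale at z_eq : {rho_eq**0.25:.2f} eV")
print(f"EDE energy scale (f = 0.1) : {rho_ede**0.25:.2f} eV")
```

Both scales indeed come out of order $\sim\text{eV}$, in line with the magnitude quoted above.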
The simplest model where the requisite dynamics can be realized is that of the scalar field. Indeed, in this scenario at high redshifts the scalar field is frozen and acts as EDE whereas afterwards it begins to oscillate and its energy density redshifts like matter density, $\rho \propto a^{-3}$. In the context of particle physics, the candidate for this scalar field can be the axion \cite{Peccei:1977hh,Wilczek:1977pj,Weinberg:1977ma} with a periodic potential $V \propto m^2 f^2 \cos(\phi/f)$ generated by non-perturbative effects. However, the EDE field must decay as radiation or faster to keep the late-time evolution intact, while in the simplest example of an axion-like model the EDE energy density redshifts as matter. This obstacle can be overcome in various extensions of the axion physics, see \cite{Kamionkowski:2014zda,Montero:2015ofa}.
We consider a general scalar field endowed with a discrete periodic symmetry, whose scalar potential has the generic form $V=\sum_n c_n\cos(n\phi/f)$.
Then, we tune the first $n$ coefficients and neglect higher order harmonics to end up with scalar potential of specific form
\begin{equation}
\label{PoulinEDE}
V = V_0 \left( 1 - \cos (\phi/f)\right)^n \;\; , \;\; V_0 \equiv m^2 f^2 \, .
\end{equation}
It has been shown that this type of potential can alleviate the Hubble tension \cite{Poulin:2018cxd}. One observes that, after the onset of scalar field oscillations, this potential with $n=2$ affords the dilution of the energy density initially stored in EDE as radiation ($\propto a^{-4}$), and for $n\rightarrow \infty$ it redshifts as kinetic energy ($\propto a^{-6}$), thereby reproducing the Acoustic Dark Energy scenario \cite{Lin:2019qug}. Recent investigations of the EDE dynamics with potential \eqref{PoulinEDE} in the context of the Hubble tension reveal that the case $n=3$ provides a somewhat better fit to the overall cosmological data \cite{Smith:2019ihp,Agrawal:2019lmo,Chudaykin:2020acu}.
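The dilution rates quoted here follow from the cycle-averaged equation of state $w_n=(n-1)/(n+1)$ for a field oscillating in a potential that behaves as $\phi^{2n}$ near its minimum, which gives $\rho\propto a^{-6n/(n+1)}$. A minimal check:

```python
from fractions import Fraction

def eos(n):
    """Cycle-averaged equation of state w_n = (n - 1)/(n + 1) for a field
    oscillating in a potential that behaves as phi^(2n) near its minimum."""
    return Fraction(n - 1, n + 1)

def dilution_exponent(n):
    """rho_EDE ~ a^(-p) with p = 3(1 + w_n) = 6n/(n + 1)."""
    return 3 * (1 + eos(n))

for n in (1, 2, 3, 1000):
    # n=1: 3 (matter), n=2: 4 (radiation), n=3: 4.5, n->infinity: -> 6 (kinetic)
    print(n, float(dilution_exponent(n)))
```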
Potential \eqref{PoulinEDE} can appear, e.g., as a low-energy limit of multi-axion models, where several QCD-like sectors fall into confinement and form a complicated periodic potential for the lightest axion; with specially tuned model parameters the effective potential can take the form \eqref{PoulinEDE}. This construction has been suggested to improve the model of natural inflation, see e.g.\,\cite{Kim:2004rp,Agrawal:2018mkd}. Anyway, it is worth noting that much later, after recombination, hence for a much smaller scalar field $\phi$, the effective axion potential may differ, e.g. approach the more standard case of $n=1$. The field then decays more slowly, like scalar dark matter, but has no recognizable impact on late-time cosmology, and so we stick to \eqref{PoulinEDE}. To highlight that our setup is different from the Peccei--Quinn axion, we refer to the scalar field $\phi$ as an axion-like particle (ALP) in what follows.
The cosmological dynamics of the EDE field with potential \eqref{PoulinEDE} can be succinctly described by the following effective parameters: the maximal fraction of the EDE field in the total energy density of the Universe, $f_{\rm EDE}$, and the redshift, $z_c$, at which this maximum is reached. These parameters absorb the particle physics parameters, the ALP mass $m$ and the decay constant $f$, and have a clear cosmological meaning. To solve the Klein-Gordon equation one must also specify the initial field displacement $\theta_i\equiv\phi_i/f$, which represents the re-normalized field variable at very early times. There the EDE field remains almost constant, so we start its evolution with zero velocity, and the system quickly approaches the slow-roll regime for the EDE field. Later this regime terminates and the field starts non-harmonic oscillations, whose frequency is determined by both $\theta_i$ and $m$\,\cite{Smith:2019ihp}. These parameters also govern the perturbed dynamics of the EDE field \cite{Smith:2019ihp}. The last parameter $n$ determines the rate at which the EDE field dilutes. All in all, the EDE dynamics is entirely described by the four parameters $f_{\rm EDE},\,z_c,\,\theta_i,\,n$.
\section{Constraints on the EDE scenario}
\label{sec:constr}
Parameter estimates presented in this paper are obtained with the combined Einstein-Boltzmann code comprised of \texttt{CLASS\_EDE} \cite{Hill:2020osr} and \texttt{CLASS-PT} \cite{Chudaykin:2020aoj} (both extensions of \texttt{CLASS}~\cite{Blas:2011rf}) \footnote{The code that combines both \texttt{CLASS\_EDE} and \texttt{CLASS-PT} extensions is available on the web-page \href{https://github.com/Michalychforever/EDE_class_pt}{https://github.com/Michalychforever/EDE\_class\_pt}.}, interfaced with the \texttt{Montepython} Monte Carlo sampler \cite{Audren:2012wb,Brinckmann:2018cvx}. \texttt{CLASS\_EDE} implements both the background and perturbed dynamics of the scalar field. Namely, it directly solves the homogeneous and perturbed Klein-Gordon equation along the lines of Ref. \cite{Agrawal:2019lmo,Smith:2019ihp}. We apply adiabatic initial conditions for the scalar field fluctuations in full accordance with \cite{Smith:2019ihp}. We perform the Markov Chain Monte Carlo approach to sample the posterior distribution adopting a Gelman-Rubin \cite{Gelman:1992zz} convergence criterion $R-1 < 0.15$. The plots with posterior densities and marginalized limits are generated with the latest version of the \texttt{getdist} package\footnote{\href{https://getdist.readthedocs.io/en/latest/}{
\textcolor{blue}{https://getdist.readthedocs.io/en/latest/}}
}~\cite{Lewis:2019xzd}.
Following previous EDE analysis \cite{Hill:2020osr,Ivanov:2020ril} we impose uniform priors on the EDE parameters: $f_{\rm EDE}=[0.001,0.5]$, $\mathrm{log}_{10}(z_c)=[3,4.3]$ and $\theta_i=[0.1,3.1]$. We translate the effective parameters $f_{\rm EDE}$ and $z_c$, to the particle physics parameters, $f$ and $m$, given some initial field displacement $\theta_i$ via a shooting algorithm realized in \texttt{CLASS\_EDE}. We fix $n=3$ as the cosmological data only weakly constrain this parameter \cite{Smith:2019ihp}. We also vary 6 standard $\Lambda$CDM parameters within broad uniform priors: $\omega_c=\Omega_ch^2$, $\omega_b=\Omega_bh^2$, $H_0$, $\ln(10^{10} A_\mathrm{s})$, $n_s$ and $\tau$. We assume the normal neutrino hierarchy with the total active neutrino mass $\sum m_\nu=0.06\,\text{eV}$. When the full-shape BOSS likelihood is included, all matter transfer functions are calculated along the lines of the standard cosmological perturbation theory that consistently predict galaxy/matter clustering on mildly nonlinear scales. Otherwise, we use the Halofit module to compute nonlinear matter power spectrum which allows us to reliably predict the lensed CMB power spectra and lensing potential power spectrum at high multipoles.
One comment on the choice of EDE priors is in order here. Uniform priors on the EDE parameters $f_{\rm EDE}$, $z_c$ and $\theta_i$ imply strongly non-uniform probability distributions for the particle physics parameters, namely the ALP mass $m$ and decay constant $f$. In particular, the posterior distribution for $f$ is peaked near the Planck mass scale \cite{Hill:2020osr}. This drastic departure from a uniform distribution raises the question of the dependence of the EDE posteriors on the choice of the flat priors. The analysis of Ref. \cite{Hill:2020osr} demonstrates that uniform priors imposed directly on the particle physics parameters, $f$ and $\log m$, strongly downweight large $f_{\rm EDE}$ values in comparison to uniform priors placed on the effective EDE parameters, $f_{\rm EDE}$ and $z_c$. Thereby, the analysis with the flat physical priors further tightens the upper limits on $f_{\rm EDE}$ and thus significantly weakens the possibility of resolving the $H_0$ tension. Although imposing flat physical priors is arguably more physically reasonable, we exploit the uniform priors on the effective parameters, $f_{\rm EDE}$ and $z_c$, for two reasons. First, the effective parameters have a clear cosmological meaning that provides direct comparison with other early-universe scenarios, e.g. \cite{Lin:2020jcb,Murgia:2020ryi}. Second, the effective description allows one to cover other physical realizations of the EDE model which have different physical priors, see Refs. \cite{Berghaus:2019cls,Sakstein:2019fmf,Braglia:2020bym,Niedermann:2019olb,Alexander:2019rsc}.
\subsection{Methodology}
\label{subsec:method}
The cornerstone of our analysis is the combined data approach that allows one to extract reliable cosmological information from multiple CMB experiments in a wide range of angular scales, see \cite{Chudaykin:2020acu}. We examine the $\Lambda$CDM and EDE predictions utilizing the Planck large-angular-scale measurements along with the ground-based observations of the 500 deg$^2$ SPTPol survey. Before going to the data sets we describe our CMB setup and reveal its importance in the light of the Planck lensing tension.
We combine the Planck TT power spectrum at $\ell<1000$ with the SPTPol measurements of TE, EE spectra following the CMB specification adopted in Ref. \cite{Chudaykin:2020acu}. We do not include the Planck polarization measurements at intermediate angular scales because the Planck TE and EE spectra have residuals at $\ell\sim30-1000$ relative to the $\Lambda$CDM prediction \cite{Smith:2019ihp}. Given that this range of multipoles roughly corresponds to the modes that enter the horizon while the EDE density is important, the Planck polarization measurements strongly disfavour the EDE solution, as shown in Ref. \cite{Lin:2020jcb}. Interestingly, the ACT observations do not detect any features in the TE and EE measurements in this multipole region \cite{Aiola:2020azj}. This data discrepancy motivates us to take the TE and EE power spectra entirely from the SPTPol survey, which do not manifest any significant residuals relative to $\Lambda$CDM \cite{Henning:2017nuy}.
We further include the SPTPol measurement of the lensing potential power spectrum $C_\ell^{\phi\phi}$. Despite a somewhat higher constraining power of the Planck lensing measurements, we do not include them for the following reasons. First, it allows us to investigate the lensing information entirely encoded in SPTPol maps. Second, despite the fact that the Planck lensing amplitude extracted from quadratic estimators of the T-, E- or B-fields is in excellent agreement with that from the Planck power spectra, the former is in mild tension with the large-angular scale Planck TT and SPTPol data which prefer substantially lower values of fluctuation amplitudes \cite{Aghanim:2016sns,Henning:2017nuy}. To provide a self-consistent cosmological inference we employ the direct measurement of $C_\ell^{\phi\phi}$ from the SPTPol survey that agrees well with SPTPol measurements of TE and EE spectra \cite{Bianchini:2019vxp}.
Statistical agreement between the large-scale Planck temperature and SPTPol polarization measurements has been validated on the level of posterior distributions in Ref. \cite{Chudaykin:2020acu}. Herein, we corroborate this analysis by a direct comparison of the spectra to show their consistency in the way that they are used. In Figure \ref{fig:residuals} we show the Planck TT ($30<\ell<1000$) and SPTPol TE and EE ($50<\ell<3000$) residuals relative to the $\Lambda$CDM prediction optimized to the $\rm Planck\text{-}low\ell\!+\!SPT$ likelihood \footnote{We do not show the SPTPol polarization measurements at high multipoles because the error bars at $3000<\ell<8000$ significantly surpass the cosmic variance.}.
\begin{figure}[h]
\begin{center}
\hspace{-0.3cm} \includegraphics[width=0.49\textwidth]{TT}
\includegraphics[width=0.49\textwidth]{EE}
\includegraphics[width=0.49\textwidth]{TE}
\end{center}
\caption{
CMB residuals of Planck TT (top panel), SPTPol EE (middle panel) and TE (bottom panel) data with respect to the $\Lambda$CDM model optimized to the $\rm Planck\text{-}low\ell\!+\!SPT$ likelihood.
\label{fig:residuals} }
\end{figure}
The residuals are shown in units of $\sigma_{\rm CV}$, the cosmic variance error per multipole moment which is given by
\begin{equation}
\sigma_{\rm CV} =
\begin{cases}
\sqrt{\frac{2}{2\ell+1}} C_\ell^{TT}, & {\rm TT} ,\\
\sqrt{\frac{1}{2\ell+1}} \sqrt{C_\ell^{TT} C_\ell^{EE} + (C_\ell^{TE})^2}, & {\rm TE} ,\\
\sqrt{\frac{2}{2\ell+1}} C_\ell^{EE}, & {\rm EE} . \\
\end{cases}
\end{equation}
One can see that the $\Lambda$CDM predictions match both Planck TT and SPTPol TE and EE measurements well in the range of interest. The corresponding difference between data and theory predictions is comparable to the statistical uncertainties of the Planck and SPTPol data. This analysis paves the way towards direct application of the combined data approach based on the Planck temperature and SPTPol polarization data.
One of the main advantages of the combined data approach presented here is that it predicts a consistent CMB lensing effect within the $\Lambda$CDM cosmology. To make this clear we introduce one phenomenological parameter $A_L$ that scales the lensing potential power spectrum at every point in the parameter space. Utilizing the Planck temperature power spectrum at $\ell<1000$ along with the SPTPol polarization and lensing measurements yields $A_L=0.990\pm0.035$ \cite{Chudaykin:2020acu}, in perfect agreement with the $\Lambda$CDM expectation. In turn, the full Planck data favours overly enhanced lensing-induced smoothing of acoustic peaks which imposes $A_L=1.180\pm0.065$ \cite{Aghanim:2018eyx}. More lensing in the Planck maps may be driven by unaccounted systematics in the Planck data on small scales or be a mere statistical fluctuation arising from instrumental noise and sample variance \cite{Aghanim:2016sns,Aghanim:2018eyx}. This effect may also be explained by a proper oscillatory feature in the primordial spectrum generated in specific inflation scenarios \cite{Domenech:2020qay}.
It was recently suggested that the source of the Hubble tension could originate in the modeling of late-time physics within the CMB analysis \cite{Haridasu:2020pms}. In any case, our combined data approach is free from the Planck lensing anomaly and provides reliable measurements of cosmological parameters.
Notably, the lensing-like anomaly in the Planck data can be partially alleviated by allowing a higher value of the optical depth $\tau$ \cite{Addison:2015wyg}. Indeed, higher $\tau$ implies a larger $A_s$, as preferred by the stronger lensing effect in the CMB maps at high multipoles. However, this would be at odds with the Planck large-scale polarization measurements. In particular, the baseline Planck 2018 measurement of the EE power spectrum at $\ell<30$ yields $\tau=0.0506\pm0.0086$ \cite{Aghanim:2018eyx}. Moreover, an improved analysis of the Planck data allows one to reduce the Planck 2018 legacy release uncertainty by $40\%$, giving $\tau=0.059\pm0.006$ \cite{Pagano:2019tci}. A significantly higher optical depth, $\tau=0.089\pm0.014$, was reported by WMAP \cite{Hinshaw:2012aka}, but it relies on the old maps of polarized Galactic dust emission \cite{Ade:2015xua}. In our analysis in Fig. \ref{fig:residuals} we follow the conservative Planck 2018 estimate of $\tau$ provided by the $\rm Planck\text{-}low\ell$ likelihood. We argue that even for such a low optical depth, the Planck large-scale temperature and SPTPol polarization measurements predict a consistent CMB lensing effect in the $\Lambda$CDM cosmology \cite{Chudaykin:2020acu} and thus can be used to obtain reasonable cosmological constraints in various extensions of the $\Lambda$CDM model.
\subsection{Data sets}
\label{subsec:data}
We employ the Planck temperature \texttt{Plik} likelihood for multipoles $30\leq\ell<1000$ in concert with the low-$\ell$ \texttt{Commander} TT likelihood and the low-$\ell$ \texttt{SimAll} EE likelihood \cite{Aghanim:2018eyx}. We vary all nuisance parameters required to account for observational and instrumental uncertainties. We refer to these measurements as $\rm Planck\text{-}low\ell$.
We utilize CMB polarization measurements from the 500 $\deg^2$ SPTPol survey which includes TE and EE spectra in the multipole range $50<\ell\leq8000$ \cite{Henning:2017nuy}. We vary all necessary nuisance parameters which account for foreground, calibration and beam uncertainties and impose reasonable priors on them. We transform theoretical spectra from unbinned to binned bandpower space using appropriate window functions. We supplement these polarization measurements with the observation of the lensing potential power spectrum $C_\ell^{\phi\phi}$ in the multipole range $100<\ell<2000$ from the same 500 $\deg^2$ SPTPol survey \cite{Wu:2019hek}. The lensing potential is reconstructed from a minimum-variance quadratic estimator that combines both the temperature and polarization CMB fields. To take into account the effects of the survey geometry we convolve the theoretical prediction for the lensing potential power spectrum with appropriate window functions at each point in the parameter space. We also perturbatively correct $C_\ell^{\phi\phi}$ to address the difference between the recovered lensing spectrum from simulation and the input spectrum along the lines of Ref. \cite{Bianchini:2019vxp}. We denote these SPTPol polarization and lensing measurements \footnote{The complete SPTPol likelihoods for the \texttt{Montepython} environment are publicly available at \href{https://github.com/ksardase/SPTPol-montepython}{https://github.com/ksardase/SPTPol-montepython}.} as SPT in what follows.
We use the data from the final BOSS release DR12 \cite{Alam:2016hwk}, implemented as a joint FS+BAO likelihood, as in Ref. \cite{Philcox:2020vvt}. We refer the reader to Refs. \cite{Ivanov:2019pdj,Chudaykin:2019ock} for details of the pre-reconstruction full-shape power spectrum analysis and to Ref. \cite{Philcox:2020vvt} for the strategy of the BAO measurements performed with the post-reconstruction power spectra. Our likelihood includes both pre- and post-reconstruction galaxy power spectrum multipoles $\ell=0,2$ across two different non-overlapping redshift bins with $z_{\rm eff}=0.38$ and $z_{\rm eff}=0.61$, observed in the North and South Galactic Caps (NGC and SGC, respectively). This results in four different data chunks with a total comoving volume of $\simeq6\,(h^{-1}\,\text{Gpc})^3$. We create separate sets of nuisance parameters for each data chunk and vary all of them independently. We impose conservative priors on the nuisance parameters following Ref. \cite{Ivanov:2019pdj} and present all nuisance parameters and their priors in Appendix \ref{nuisance}. We employ the wavenumber range [$0.01,0.25$] $\hMpc$ for the pre-reconstruction power spectra and [$0.01,0.3$] $\hMpc$ for the BAO measurements based on the post-reconstruction power spectra. We refer to this full-shape likelihood as BOSS.
We adopt the SH0ES measurement of the Hubble constant, $H_0=74.03\pm1.42\rm \,\,km\,s^{-1}Mpc^{-1}$ \cite{Riess:2019cxk}. We impose a Gaussian prior on $H_0$ and refer to this local measurement as SH0ES.
Finally, we utilize additional LSS information from multiple photometric surveys. In particular, we consider the DES-Y1 galaxy clustering, galaxy-galaxy lensing and cosmic shear observations \cite{Abbott:2017wau} along with the weak gravitational lensing measurements from KiDS-1000 \cite{Asgari:2020wuj} and HSC \cite{Hikage:2018qbn} \footnote{We do not include the cross-correlation between these measurements because the sky overlap between these surveys is small, see \cite{Hill:2020osr}.}. As demonstrated in \cite{Hill:2020osr}, the DES-Y1 ``3x2pt'' likelihood can be well approximated in the EDE analysis by a Gaussian prior on $S_8$. Driven by this observation, we include the DES-Y1, KiDS-1000 and HSC measurements via appropriate priors on $S_8$. Namely, for the DES-Y1 combined data analysis we use $S_8=0.773^{+0.026}_{-0.020}$ \cite{Abbott:2017wau}; for the KiDS-1000 cosmic shear measurements we adopt $S_8=0.759^{+0.024}_{-0.021}$ \cite{Asgari:2020wuj}; for the HSC observations we utilize $S_8=0.780^{+0.030}_{-0.033}$ \cite{Hikage:2018qbn}. Finally, we weight each mean with its inverse variance and obtain the combined constraint $S_8=0.769\pm0.015$. We include this combined measurement as a Gaussian prior and refer to it simply as $\rm S_8$ in our analysis.
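The inverse-variance combination above can be reproduced in a few lines of Python. This is a sketch under one stated assumption: the asymmetric error bars are symmetrized by averaging before weighting, so the result agrees with the quoted $S_8=0.769\pm0.015$ only up to rounding.

```python
import math

# (S8 mean, symmetrized 1-sigma error) from DES-Y1, KiDS-1000 and HSC
measurements = [
    (0.773, 0.5 * (0.026 + 0.020)),  # DES-Y1
    (0.759, 0.5 * (0.024 + 0.021)),  # KiDS-1000
    (0.780, 0.5 * (0.030 + 0.033)),  # HSC
]

weights = [1.0 / err ** 2 for _, err in measurements]
mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))
print(f"S8 = {mean:.3f} +/- {sigma:.3f}")
```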
\subsection{Results}
\label{subsec:spt}
We report our main results obtained from analyses of different cosmological data sets.
\subsubsection{$Planck\text{-}low\ell\!+\!SPT$}
\label{subsec:cmb}
We first examine the cosmological inference from the primary CMB data alone, following \cite{Chudaykin:2020acu}. The resulting parameter constraints in the $\Lambda$CDM and EDE models inferred from the $\rm Planck\text{-}low\ell\!+\!SPT$ data are given in Tab. \ref{table:base}. The limits for the $\Lambda$CDM scenario are taken from Ref. \cite{Chudaykin:2020acu} (data set $Base$ therein). The 2d posterior distributions for the EDE model are shown in Fig. \ref{fig:spt_1}.
\begin{table*}[htb!]
\renewcommand{\arraystretch}{1.1}
Constraints from $\rm Planck\text{-}low\ell\!+\!SPT$ \vspace{2pt} \\
\centering
\begin{tabular}{|l|c|c|}
\hline Parameter &$\Lambda$CDM~~&~~~EDE \\ \hline\hline
{$\ln(10^{10} A_\mathrm{s})$} &
$3.021 \pm 0.017$ &
$3.024 \pm 0.018$ \\
{$n_\mathrm{s}$} &
$0.9785 \pm 0.0074 $ &
$0.9816 \pm 0.0094$ \\
$H_0 \, [\mathrm{km/s/Mpc}]$ &
$69.68 \pm 1.00$ &
$70.79\pm1.41$ \\
{$\Omega_\mathrm{b} h^2$} &
$0.02269 \pm 0.00025 $ &
$0.02291 \pm 0.00036$ \\
{$\Omega_\mathrm{cdm} h^2$} &
$0.1143\pm0.0020$ &
$0.1178\pm0.0039$ \\
{$\tau_\mathrm{reio}$} &
$0.0510\pm 0.0086$ &
$0.0511\pm 0.0085$\\
{$\mathrm{log}_{10}(z_c)$} &
$-$ &
$3.75^{+0.55}_{-0.17}$ \\
{$f_\mathrm{EDE} $} &
$-$ &
$< 0.104$\\
{$\theta_i$} &
$-$ &
$1.60^{+1.13}_{-0.88} $\\
\hline
$\Omega_\mathrm{m}$ &
$0.2838 \pm0.0118$ &
$0.2822 \pm0.0120$ \\
$\sigma_8$ &
$0.7842 \pm 0.0087$ &
$0.7894\pm0.0131$ \\
$S_8$ &
$0.763 \pm 0.022$ &
$0.766 \pm 0.024$ \\
$r_s$ &
$145.76\pm 0.46$ &
$143.71\pm 1.84$ \\
\hline
\end{tabular}
\caption{Marginalized constraints (68\% CL) on the cosmological parameters in $\Lambda$CDM and in the EDE scenario with $n=3$, as inferred from the $\rm Planck\text{-}low\ell\!+\!SPT$ data. The upper limit on $f_{\rm EDE}$ is quoted at 95\% CL. }
\label{table:base}
\end{table*}
We find no evidence for non-zero $f_{\rm EDE}$ in the CMB-only analysis. We report an upper bound $f_{\rm EDE}<0.104\,(2\sigma)$, which is compatible with the amount of EDE required to alleviate the Hubble tension. The EDE model also yields substantially higher values of the Hubble parameter, $H_0=70.79\pm1.41\rm \,\,km\,s^{-1}Mpc^{-1}$, in $1.6\sigma$ agreement with the SH0ES measurement.
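The agreement with SH0ES quoted here (and in the analyses below) follows from the standard Gaussian tension metric; a minimal check, treating both posteriors as independent Gaussians:

```python
import math

def tension(mu1, sig1, mu2, sig2):
    """Number of sigma separating two independent Gaussian measurements."""
    return abs(mu1 - mu2) / math.sqrt(sig1 ** 2 + sig2 ** 2)

# EDE fit to Planck-lowl+SPT vs the SH0ES measurement of H0 [km/s/Mpc]
print(round(tension(70.79, 1.41, 74.03, 1.42), 1))  # ~1.6
```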
We emphasize that the $\rm Planck\text{-}low\ell\!+\!SPT$ data allow for somewhat larger values of $f_{\rm EDE}$ as compared to that from the full Planck likelihood, namely $f_{\rm EDE}<0.087\,(2\sigma)$ \cite{Hill:2020osr}.
At the same time, the EDE scenario yields substantially lower values of the late-time amplitude, $S_8=0.766\pm 0.024$, in perfect agreement with multiple probes of LSS. This is attributed to the fact that the large-angular-scale Planck temperature power spectrum and the SPTPol data both accommodate a low $S_8$ \cite{Chudaykin:2020acu}. On the contrary, the full Planck likelihood favours substantially higher values of $\sigma_8$ and $S_8$, primarily driven by the overly enhanced lensing smoothing of the CMB peaks in the Planck temperature spectrum. The upward shift of these parameters makes the EDE prediction incompatible with the current LSS data, as reported in Ref. \cite{Hill:2020osr}. The $\rm Planck\text{-}low\ell\!+\!SPT$ data alleviate this issue, making the region of appreciably high $f_{\rm EDE}\sim0.1$ compatible with cosmological data.
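The derived amplitudes quoted in Tab. \ref{table:base} follow the standard definition $S_8=\sigma_8\sqrt{\Omega_m/0.3}$, which can be checked directly against the $\Lambda$CDM column:

```python
import math

def S8(sigma8, omega_m):
    """Late-time clustering amplitude S8 = sigma8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# LCDM values from the Planck-lowl+SPT analysis (Tab. table:base)
print(round(S8(0.7842, 0.2838), 3))  # ~0.763
```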
The epoch of EDE is constrained to $\mathrm{log}_{10}(z_c)=3.75^{+0.55}_{-0.17}$. This reliably supports only a lower bound on $z_c$, whereas the upper tail of $\mathrm{log}_{10}(z_c)$ remains largely unconstrained. The posterior distribution for $\mathrm{log}_{10}(z_c)$ clearly indicates a single maximum, whereas previous EDE studies hint at a weakly bimodal distribution for this parameter \cite{Smith:2019ihp,Hill:2020osr,Ivanov:2020ril}. As discussed in Ref. \cite{Smith:2019ihp}, this ambiguous behaviour is driven by the Planck polarization measurements at high-$\ell$ and could simply be a noise fluctuation. We also find a much flatter distribution for the initial field displacement, namely $\theta_i=1.60^{+1.13}_{-0.88}$. The previous EDE analyses which adopted the full Planck likelihood \cite{Smith:2019ihp,Hill:2020osr,Ivanov:2020ril} reveal, on the contrary, a strong preference for a large initial field displacement. This preference for large $\theta_i$ comes from an oscillatory pattern in the residuals of the TE and EE Planck spectra in the multipole range $\ell\sim30-500$ \cite{Smith:2019ihp}, which is disfavored by the $\rm Planck\text{-}low\ell\!+\!SPT$ data \footnote{The ACT observations also do not detect any oscillatory feature in the TE and EE measurements at intermediate scales \cite{Aiola:2020azj}, thus supporting our inference. This implies that the residuals observed in the Planck polarization measurements are most likely caused by systematic effects \cite{Lin:2020jcb}.}. Thus, our result validates the monomial expansion of the field potential when one employs the optimally constructed $\rm Planck\text{-}low\ell\!+\!SPT$ likelihood.
\subsubsection{$Planck\text{-}low\ell\!+\!SPT$+BOSS}
\label{subsec:boss}
We perform a joint analysis of the $\rm Planck\text{-}low\ell\!+\!SPT$ data and the BOSS DR12 FS+BAO likelihood. The parameter constraints for the $\Lambda$CDM and EDE scenarios are presented in Tab. \ref{table:boss}. The corresponding 2d posterior distributions are shown in Fig. \ref{fig:spt_1}.
\begin{table*}[htb!]
\renewcommand{\arraystretch}{1.1}
Constraints from $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS$ \vspace{2pt} \\
\centering
\begin{tabular}{|l|c|c|}
\hline Parameter &$\Lambda$CDM~~&~~~EDE \\ \hline\hline
{$\ln(10^{10} A_\mathrm{s})$} &
$3.014 \pm 0.017$ &
$3.019 \pm 0.018$ \\
{$n_\mathrm{s}$} &
$0.9716 \pm 0.0056 $ &
$0.9766 \pm 0.0090$ \\
$H_0 \, [\mathrm{km/s/Mpc}]$ &
$68.50 \pm 0.57$ &
$69.89\pm1.28$ \\
{$\Omega_\mathrm{b} h^2$} &
$0.02250 \pm 0.00021 $ &
$0.02279 \pm 0.00039$ \\
{$\Omega_\mathrm{cdm} h^2$} &
$0.1166\pm0.0012$ &
$0.1213\pm0.0042$ \\
{$\tau_\mathrm{reio}$} &
$0.0456\pm 0.0082$ &
$0.0457\pm 0.0085$\\
{$\mathrm{log}_{10}(z_c)$} &
$-$ &
$3.69^{+0.61}_{-0.14}$ \\
{$f_\mathrm{EDE} $} &
$-$ &
$< 0.118$\\
{$\theta_i$} &
$-$ &
$1.61^{+1.13}_{-0.83} $\\
\hline
$\Omega_\mathrm{m}$ &
$0.2978\pm0.0071$ &
$0.2963 \pm0.0070$ \\
$\sigma_8$ &
$0.7880 \pm 0.0074$ &
$0.7966\pm0.0129$ \\
$S_8$ &
$0.785 \pm 0.014$ &
$0.792 \pm 0.016$ \\
$r_s$ &
$145.31\pm 0.31$ &
$142.67\pm 2.14$ \\
\hline
\end{tabular}
\caption{Marginalized constraints (68\% CL) on the cosmological parameters in $\Lambda$CDM and in the EDE scenario with $n=3$, as inferred from the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS$ data. The upper limit on $f_{\rm EDE}$ is quoted at 95\% CL. }
\label{table:boss}
\end{table*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{multiple_EDE_LCDM}
\end{center}
\caption{
Posterior distributions of the cosmological parameters of the $\Lambda$CDM and EDE model for different data sets. The gray and yellow bands represent the constraints ($1\sigma$ and $2\sigma$ confidence regions) on $H_0$ and $S_8$ coming from SH0ES and $S_8$ data, respectively.
\label{fig:spt_1} }
\end{figure*}
We find an appreciably weaker constraint on EDE, $f_{\rm EDE}<0.118\,(2\sigma)$, compared to that from the CMB-alone analysis of the previous subsection. The $14\%$ relaxation of the upper bound is primarily driven by the upward shift in the mean value of $\omega_c$ needed to maintain the fit to the BOSS data. We emphasize that our analysis allows substantially larger values of $f_{\rm EDE}$ than the full Planck and BOSS data, which give $f_{\rm EDE}<0.072\,(2\sigma)$ \cite{Ivanov:2020ril}. Despite the weaker constraint on $f_{\rm EDE}$, the mean value of $H_0$ decreases significantly and its error bar shrinks considerably, to $H_0=68.50\pm 0.57\rm \,\,km\,s^{-1}Mpc^{-1}$ and $H_0=69.89\pm1.28\rm \,\,km\,s^{-1}Mpc^{-1}$ in the $\Lambda$CDM and EDE scenarios, respectively. This becomes possible due to the more precise BAO measurements which, combined with the FS information, impose a tight constraint on $H_0$. Nevertheless, within the EDE scenario the Hubble tension with the SH0ES measurement is below the $2.2\sigma$ level of statistical significance. Besides that, we find appreciably larger values of $S_8$, namely $S_8=0.785\pm 0.014$ and $S_8=0.792\pm 0.016$ in the $\Lambda$CDM and EDE models. We emphasize that these constraints are fully compatible with the galaxy clustering and weak gravitational lensing measurements, which justifies the further inclusion of the DES-Y1, KiDS-1000 and HSC data. Overall, the BAO+FS BOSS likelihood yields an unprecedented gain of cosmological information: it provides a two-fold improvement in the $\omega_c$ and $H_0$ measurements over those from the CMB alone.
Regarding the other EDE parameters, we find a slightly more precise measurement of the EDE epoch, $\mathrm{log}_{10}(z_c)=3.69^{+0.61}_{-0.14}$. As a result, the maximum of the posterior distribution for this parameter is better localized, as can be appreciated from Fig. \ref{fig:spt_1}. We do not find any improvement for the initial field displacement, $\theta_i=1.61^{+1.13}_{-0.83}$, which still exhibits a substantially flat distribution. This corroborates our CMB-only analysis, which does not find any evidence for subdominant peaks in the $\mathrm{log}_{10}(z_c)$ distribution. It indicates that the bimodal behaviour previously claimed in EDE analyses \cite{Smith:2019ihp,Hill:2020osr,Ivanov:2020ril} indeed comes from the Planck measurements at high-$\ell$, in full accord with the claim made in Ref. \cite{Smith:2019ihp}.
\subsubsection{$Planck\text{-}low\ell\!+\!SPT$+BOSS+$S_8$}
\label{subsec:S8}
In the next step, we include the $S_8$ information from the DES-Y1, KiDS-1000 and HSC measurements by adopting the Gaussian prior $S_8=0.769\pm0.015$. The parameter constraints in the $\Lambda$CDM and EDE scenarios are reported in Tab. \ref{table:S8H0} (second and third columns). The corresponding 2d posterior distributions are shown in Fig. \ref{fig:spt_2}.
\begin{table*}[htb!]
\renewcommand{\arraystretch}{1.1}
Constraints from $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8(\!+\!SH0ES)$ \vspace{2pt} \\
\centering
\begin{tabular}{|l|c|c|c|}
\hline Parameter &$\Lambda$CDM~~&~~~EDE &~~~EDE $\rm (+SH0ES)$\\ \hline\hline
{$\ln(10^{10} A_\mathrm{s})$} &
$3.008 \pm 0.017$ &
$3.013 \pm 0.017$ &
$3.021 \pm 0.017$ \\
{$n_\mathrm{s}$} &
$0.9735 \pm 0.0054 $ &
$0.9753 \pm 0.0076$ &
$0.9870 \pm 0.0089$ \\
$H_0 \, [\mathrm{km/s/Mpc}]$ &
$68.82 \pm 0.50$ &
$69.79\pm0.99$ &
$71.81\pm1.19$ \\
{$\Omega_\mathrm{b} h^2$} &
$0.02255 \pm 0.00020 $ &
$0.02276 \pm 0.00036$ &
$0.02318 \pm 0.00042$ \\
{$\Omega_\mathrm{cdm} h^2$} &
$0.1159\pm0.0010$ &
$0.1193\pm0.0028$ &
$0.1241\pm0.0039$ \\
{$\tau_\mathrm{reio}$} &
$0.0437\pm 0.0087$ &
$0.0446\pm 0.0086$ &
$0.0448\pm 0.0089$ \\
{$\mathrm{log}_{10}(z_c)$} &
$-$ &
$3.74^{+0.56}_{-0.15}$ &
$3.64^{+0.13}_{-0.18}$ \\
{$f_\mathrm{EDE} $} &
$-$ &
$< 0.094$ &
$0.088\pm0.034$ \\
{$\theta_i$} &
$-$ &
$1.57^{+1.05}_{-0.86} $ &
$1.79^{+1.02}_{-0.42} $ \\
\hline
$\Omega_\mathrm{m}$ &
$0.2938\pm0.0059$ &
$0.2930 \pm0.0059$ &
$0.2870 \pm0.0055$ \\
$\sigma_8$ &
$0.7839 \pm 0.0069$ &
$0.7889\pm0.0089$ &
$0.8005\pm0.0111$ \\
$S_8$ &
$0.776 \pm 0.010$ &
$0.780 \pm 0.011$ &
$0.783 \pm 0.012$ \\
$r_s$ &
$145.45\pm 0.28$ &
$143.51\pm 1.50$ &
$140.61\pm 2.01$ \\
\hline
\end{tabular}
\caption{Marginalized constraints (68\% CL) on the cosmological parameters in $\Lambda$CDM and in the EDE scenario with $n=3$, as inferred from the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8$ (second and third columns) and $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8\!+\!SH0ES$ (fourth column) data sets. The upper limit on $f_{\rm EDE}$ is quoted at 95\% CL. }
\label{table:S8H0}
\end{table*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{multiple_EDE_769_BOSS_S8_H0}
\end{center}
\caption{
Posterior distributions of the cosmological parameters of the EDE model for different data sets. The gray and yellow bands represent the constraints ($1\sigma$ and $2\sigma$ confidence regions) on $H_0$ and $S_8$ coming from SH0ES and $S_8$ data, respectively.
\label{fig:spt_2} }
\end{figure*}
We find a more stringent limit on EDE, $f_{\rm EDE}<0.094\,(2\sigma)$, which represents a $20\%$ improvement over the analysis without the $S_8$ information in the previous subsection. This gain is explained by a $0.5\sigma$ downward shift in $\omega_c$, which strongly correlates with $f_{\rm EDE}$. A lower value of $\omega_c$, in turn, reduces the growth rate of perturbations at late times, which allows one to accommodate a substantially lower value of $S_8$ in accord with the $S_8$ data. In particular, the combined $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8$ data yield $S_8=0.776 \pm 0.010$ and $S_8=0.780 \pm 0.011$ in the $\Lambda$CDM and EDE scenarios, which represent $20\%$ and $30\%$ improvements over those of the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS$ analysis (without $S_8$). The more precise determination of $S_8$ improves the $H_0$ constraints to the same extent, namely $H_0=68.82 \pm 0.50\rm \,\,km\,s^{-1}Mpc^{-1}$ and $H_0=69.79\pm0.99\rm \,\,km\,s^{-1}Mpc^{-1}$ in the $\Lambda$CDM and EDE scenarios. We emphasize that the mean values of $H_0$ remain essentially unchanged upon including the $S_8$ information, which demonstrates a high level of compatibility between the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS$ and $S_8$ data. It is worth noting that the $H_0$ constraint inferred in the $\Lambda$CDM scenario is in a substantial $3.5\sigma$ tension with the Cepheid-calibrated local measurement of $H_0$. In the EDE model this tension is alleviated to an acceptable level of $2.5\sigma$, which makes the $\rm Planck\text{-}low\ell\!+\!SPT$+BOSS+$\rm S_8$ and SH0ES data statistically compatible within the EDE framework.
Upon including the $S_8$ information we do not find a substantial improvement in the $\mathrm{log}_{10}(z_c)$ and $\theta_i$ measurements. We find $\mathrm{log}_{10}(z_c)=3.74^{+0.56}_{-0.15}$ and $\theta_i=1.57^{+1.05}_{-0.86}$, consistent with the results of the previous analysis without the $S_8$ data. The virtually intact constraints on these parameters can be readily understood. Unlike $f_{\rm EDE}$, the parameters $\mathrm{log}_{10}(z_c)$ and $\theta_i$ correlate only weakly with the other $\Lambda$CDM parameters, as demonstrated in Fig. \ref{fig:spt_1}. This implies that a more precise measurement of $S_8$ without a significant shift in its mean value cannot substantially alter the posterior distributions of the $\mathrm{log}_{10}(z_c)$ and $\theta_i$ parameters.
\subsubsection{$Planck\text{-}low\ell\!+\!SPT$+BOSS+$S_8$+SH0ES}
\label{subsec:H0}
We finally address the Cepheid-based local measurement of $H_0$. Since the SH0ES measurement and the $\rm Planck\text{-}low\ell\!+\!SPT$+BOSS+$\rm S_8$ inference of $H_0$ are in significant tension in the framework of the $\Lambda$CDM cosmology, we do not combine them in one data set under the $\Lambda$CDM assumption. The parameter constraints for the EDE scenario are presented in Tab. \ref{table:S8H0} (fourth column). The corresponding 2d posterior distributions are shown in Fig. \ref{fig:spt_2}.
We find $f_{\rm EDE}=0.088\pm0.034$, which indicates a $2.6\sigma$ preference for nonzero EDE. This result is driven by the SH0ES measurement, which favours a substantially higher value of $H_0$. Namely, we find $H_0=71.81\pm1.19\rm \,\,km\,s^{-1}Mpc^{-1}$, now in $1.2\sigma$ agreement with the SH0ES data. We emphasize that the error bar on $H_0$ increases quite moderately, by $20\%$ over that from the analysis without SH0ES. This indicates that the better agreement with the SH0ES measurement comes from the released freedom in the EDE model rather than from a worse description of the other data sets. Indeed, the constraint on $S_8$ remains virtually unchanged compared to the analysis without SH0ES, namely $S_8=0.783 \pm 0.012$. This implies that a higher $H_0$ does not significantly degrade the fit to the LSS data. Thus, our approach based on the $\rm Planck\text{-}low\ell\!+\!SPT$ CMB data alleviates the conflict between the SH0ES-tension-resolving EDE cosmologies and the LSS data previously claimed in Refs. \cite{Hill:2020osr,Ivanov:2020ril}.
Addressing the local $H_0$ measurement significantly alters the posterior distributions of the $\mathrm{log}_{10}(z_c)$ and $\theta_i$ parameters. We find a more stringent constraint on the EDE epoch, $\mathrm{log}_{10}(z_c)=3.64^{+0.13}_{-0.18}$, as opposed to the flatter distribution observed in the previous analyses without SH0ES. This result distinctly indicates a narrow redshift interval prior to recombination at which EDE efficiently decays, in full accord with the EDE proposal. We also find a strong preference for a large initial field displacement, $\theta_i=1.79^{+1.02}_{-0.42}$, consistent with the findings of Refs. \cite{Smith:2019ihp,Hill:2020osr}. Our results reflect how the SH0ES measurement breaks parameter degeneracies in the EDE sector to restore cosmological concordance.
To scrutinize the impact of the SH0ES measurements, we examine the goodness-of-fit for each individual likelihood. For that, we provide the best-fit $\chi^2$ values for each data set optimized to the data with and without SH0ES measurements in Tab. \ref{tab:chi2}.
\begin{table}
\renewcommand{\arraystretch}{1.0}
\centering
\begin{tabular} {| c | c |c | c|}
\hline
Data set & w/o SH0ES & w SH0ES & $N_{\rm dof}$ \\
\hline
\hline
$\rm Planck\text{-}low\ell$ & $825.43$ & $825.45$ & $1005$ \\
$\rm SPT$ & $148.82$ & $145.83$ & $114$ \\
$\rm BOSS$ & $476.55$ & $478.80$ & $372$ \\
$\rm S_8$ & $<0.01$ & $1.03$ & $1$ \\
$\rm SH0ES$ & $10.23$ & $1.13$ & $1$ \\
\hline
$\sum\chi^2_{\rm -SH0ES}$ & $1450.79$ & $1451.10$ & $1492$ \\
$\sum\chi^2$ & $1461.02$ & $1452.23$ & $1493$ \\
\hline
\end{tabular}
\caption {$\chi^2$ values for the best-fit EDE model optimized to the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8$ (second column) and $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS\!+\!S_8\!+\!SH0ES$ (third column) data.
We also provide the number of degrees of freedom, $N_{\rm dof}=N_{\rm data}-N_{\rm fit}$, where $N_{\rm data}$ is the number of data points and $N_{\rm fit}$ is the number of fitting parameters (fourth column).}
\label{tab:chi2}
\end{table}
We find that the total $\chi^2$-statistic optimized to the data without SH0ES changes insignificantly, $\Delta\chi^2_{\rm -SH0ES}=0.31$. This implies that adding SH0ES does not considerably spoil the fit to the other data.
In particular, including SH0ES does not degrade the fit to the $\rm Planck\text{-}low\ell$ likelihood. In turn, it moderately worsens the fit to the BOSS data, by $\Delta\chi^2_{\rm BOSS}=2.25$ \footnote{We caution that the reduced $\chi^2$-statistic, $\chi^2/N_{\rm dof}$, is a very inaccurate metric for the goodness of fit. First, it does not account for the covariance between different data bins. Second, it assumes that the cosmological information is uniformly distributed between different data bins, which is not the case for the BOSS data. Indeed, the reduced $\chi^2$ for the BOSS likelihood reads $\chi^2/N_{\rm dof}=1.29$, see Tab. \ref{tab:chi2}, which could naively be interpreted as a bad fit. However, the reduced $\chi^2$ decreases substantially if one adopts a larger momentum cutoff $k_{\min}=0.05~\hMpc$ instead of the $k_{\min}=0.01~\hMpc$ used in our analysis, as demonstrated in Ref. \cite{Ivanov:2019pdj}.}. This effect can be readily understood. The higher value of $H_0$ driven by SH0ES is accommodated by a higher $f_{\rm EDE}$ which, in turn, strongly correlates with $\omega_c$ and $\sigma_8$, as shown in Fig. \ref{fig:spt_2}. However, higher values of these parameters are at odds with the BOSS likelihood, which favours moderately lower values of $\omega_c$ and $\sigma_8$. The worsening of the BOSS fit is entirely compensated by the improved fit to the SPT data. Namely, we find $\Delta\chi^2_{\rm Pol}=-2.24$ and $\Delta\chi^2_{\rm Lens}=-0.75$ for the polarization and gravitational lensing measurements, respectively. These improvements can be attributed to the considerably higher values of $H_0$ inferred from the SPTPol survey \cite{Henning:2017nuy,Bianchini:2019vxp}. Finally, the fit to the $S_8$ data degrades only marginally, $\Delta\chi^2_{\rm S_8}=1.03$. This demonstrates that the EDE scenario can accommodate a higher value of $H_0$ without significantly deteriorating the fit to the galaxy clustering and weak lensing measurements.
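The bookkeeping in Tab. \ref{tab:chi2} can be reproduced directly; the short script below sums the per-likelihood values (taking the $\rm S_8$ entry $<0.01$ as $0.00$, so the totals agree with the table up to rounding):

```python
# Best-fit chi^2 per likelihood: (without SH0ES, with SH0ES)
chi2 = {
    "Planck-lowl": (825.43, 825.45),
    "SPT":         (148.82, 145.83),
    "BOSS":        (476.55, 478.80),
    "S8":          (0.00,   1.03),
    "SH0ES":       (10.23,  1.13),
}

# Totals excluding the SH0ES likelihood itself
total_minus_sh0es = [sum(v[i] for k, v in chi2.items() if k != "SH0ES")
                     for i in (0, 1)]
delta = total_minus_sh0es[1] - total_minus_sh0es[0]
print(f"Delta chi2 (-SH0ES) = {delta:.2f}")  # ~0.31
```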
To quantify the preference of the EDE scenario over $\Lambda$CDM we resort to several statistical tools. First, we employ the essentially frequentist Akaike Information Criterion (AIC) \cite{1100705}, which sets a penalty for extra free parameters in more complex models. To this end, we compute the difference in log-likelihoods $\log L$ between the EDE and $\Lambda$CDM models at their respective best-fit points optimized to the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS+\!S_8\!+\!SH0ES$ likelihood. The AIC states that the quantity $2\Delta\log L$ defined in this way is distributed as $\chi^2_n$, with the number of degrees of freedom $n$ equal to the difference in the number of fitting parameters between the $\Lambda$CDM and EDE models. As the EDE model has three extra parameters, we set $n=3$. We find $2\Delta\log L=9.3$, which indicates a quite moderate $2.2\sigma$ preference for the EDE scenario over $\Lambda$CDM. Second, we apply a more sophisticated Bayesian evidence analysis, which ought to be preferred in model comparison since it addresses the prior volume effects, allowing one to directly penalize the lack of predictivity of more complicated models \cite{zbMATH03189754}. The AIC does not account for the prior information, which is highly relevant for model comparison; omitting it can result in seriously wrong inferences \cite{Trotta:2008qt}. To avoid this, we employ the \texttt{MCEvidence} code \cite{Heavens:2017afc} to estimate the Bayes factor, the ratio of the EDE and $\Lambda$CDM evidences,
\begin{equation}
B=\frac{p({\rm EDE}|d)}{p({\rm \Lambda CDM}|d)}.
\end{equation}
Using the $\rm Planck\text{-}low\ell\!+\!SPT\!+\!BOSS+\!S_8\!+\!SH0ES$ data set we find $B=1.8$, which supports the EDE preference over $\Lambda$CDM. This preference is rather weak according to Jeffreys' scale \cite{zbMATH03189754} due to the significantly larger parameter-space volume of the EDE model compared to that of $\Lambda$CDM. Our result reveals the importance of the Bayesian evidence, which penalizes models with a large volume of unconstrained parameter space more harshly than the AIC \cite{Trotta:2008qt}.
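The AIC-based significance quoted above can be reproduced with the Python standard library alone: for three degrees of freedom the $\chi^2$ survival function has a closed form, and the resulting $p$-value maps onto a two-tailed Gaussian significance. This is a sketch of the conversion, not part of the published analysis pipeline:

```python
import math
from statistics import NormalDist

def chi2_sf_3dof(x):
    """Survival function of chi^2 with 3 degrees of freedom (closed form)."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

p = chi2_sf_3dof(9.3)                    # 2*Delta(log L) = 9.3, n = 3
sigma = NormalDist().inv_cdf(1 - p / 2)  # two-tailed Gaussian equivalent
print(f"p = {p:.4f}, significance ~ {sigma:.1f} sigma")  # ~2.2 sigma
```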
\section{Conclusions}
\label{sec:conc}
The EDE scenario is a compelling early-time resolution of the persistent and increasingly significant tension between local and global measurements of the Hubble constant. The EDE model successfully decreases the sound horizon, enabling a higher value of $H_0$ in concordance with the SH0ES measurement. Accompanying shifts in the $\omega_c$ and $n_s$ parameters produce CMB power spectra nearly indistinguishable from those in the $\Lambda$CDM scenario, hence providing a good fit to both the primary CMB and the distance-ladder $H_0$ data. However, as demonstrated in Ref. \cite{Hill:2020osr}, the shifts in the standard $\Lambda$CDM parameters are in tension with various LSS data, in particular measurements of galaxy clustering and weak gravitational lensing. The region of parameter space capable of addressing the Hubble tension is further constrained when the full BOSS galaxy power spectrum likelihood is included \cite{Ivanov:2020ril}. In this paper, we revisit these stringent limits on EDE using a different CMB setup.
In fact, past claims of tight constraints on the EDE scenario \cite{Hill:2020osr,Ivanov:2020ril} rely essentially on the full Planck data. However, the residuals of the Planck CMB temperature power spectrum exhibit a curious oscillatory feature, conventionally attributed to the extra smoothing effect of gravitational lensing, which pulls the late-time amplitude to a higher value \cite{Addison:2015wyg,Aghanim:2018eyx}. Although the lensing-like anomaly does not significantly alter the $\Lambda$CDM predictions \cite{Aghanim:2016sns,Aghanim:2018eyx}, its effect may be crucial for various extensions of the $\Lambda$CDM model which open up a new degeneracy direction with $\sigma_8$. This is indeed the case for the EDE scenario, which dictates a higher value of $\sigma_8$ for the SH0ES-tension-resolving cosmology due to the tight correlation between the $\sigma_8$, $f_{\rm EDE}$ and $H_0$ parameters.
The overly enhanced smoothing of the CMB peaks in the Planck temperature power spectrum increases $\sigma_8$ even further, which exacerbates the discrepancy between the Planck and LSS data in the EDE framework.
As demonstrated in Ref. \cite{Murgia:2020ryi}, the full Planck likelihood indeed provides more stringent constraints on EDE compared to the 'unlensed' CMB power spectra. Since we do not know what is behind the Planck lensing anomaly, the more conservative approach is to analyse the CMB data without this feature. One way is to marginalize over the lensing information in the Planck CMB power spectra, as done in Ref. \cite{Murgia:2020ryi}. A second approach is to use alternative CMB measurements. In our analysis we adopt the second approach and combine the Planck and SPTPol measurements following the strategy of Ref. \cite{Chudaykin:2020acu}.
In this work, we reanalyse the EDE scenario using the Planck and SPTPol measurements of the CMB anisotropy, the full BOSS likelihood, and photometric galaxy clustering and weak lensing data. As the primary CMB data we consider the Planck TT power spectrum at $\ell<1000$ and the SPTPol measurements of the TE, EE and lensing potential power spectra. It has been shown in Ref. \cite{Chudaykin:2020acu} that this CMB setup predicts a consistent CMB lensing effect in the $\Lambda$CDM cosmology, the modelling of which is important for the resulting EDE constraints \cite{Murgia:2020ryi}. In this paper, we extend the previous EDE analysis \cite{Chudaykin:2020acu} by assuming a more motivated power-law cosine potential \eqref{PoulinEDE} and implementing the full perturbative dynamics of the EDE field.
We find no evidence for EDE in the primary CMB data alone: the fit to $\rm Planck\text{-}low\ell\!+\!SPT$ yields $f_{\rm EDE}<0.104\,(2\sigma)$. Our CMB analysis yields a considerably higher value of the Hubble parameter, $H_0=70.79\pm1.41\rm \,\,km\,s^{-1}Mpc^{-1}$, albeit with a $40\%$ larger error bar compared to that from the full Planck data \cite{Hill:2020osr}. Upon including the full BOSS galaxy power spectrum likelihood, we find a somewhat looser constraint on EDE, $f_{\rm EDE}<0.118\,(2\sigma)$. The mean value of the Hubble constant is shifted considerably downwards, $H_0=69.89\pm1.28\rm \,\,km\,s^{-1}Mpc^{-1}$. Supplemented with additional LSS data in the form of a Gaussian prior on $S_8$ from the DES-Y1 \cite{Abbott:2017wau}, KiDS \cite{Asgari:2020wuj} and HSC \cite{Hikage:2018qbn} photometric measurements (a procedure validated for the EDE model in \cite{Hill:2020osr}), we obtain a considerably tighter constraint on EDE, $f_{\rm EDE}<0.094\,(2\sigma)$ and $H_0=69.79\pm0.99\rm \,\,km\,s^{-1}Mpc^{-1}$. We emphasize that even after taking into account the data from photometric surveys, the allowed $f_{\rm EDE}$ values are still capable of addressing the Hubble tension, in contrast to past EDE analyses which fail to simultaneously resolve the Hubble tension and maintain a good fit to both CMB and LSS data \cite{Hill:2020osr,Ivanov:2020ril}. The main culprit behind the past strong constraints on EDE is the overly enhanced smoothing of acoustic peaks, which mainly affects the Planck temperature spectrum at high $\ell$ and pulls $\sigma_8$ to a higher value, thereby conflicting with the LSS constraints. We also find that the $H_0$ tension with the SH0ES measurement is alleviated to an acceptable $2.5\sigma$ level, which enables one to include the SH0ES data in the fit.
Finally, we fit the EDE model to the combined data set with SH0ES. We find $2.6\sigma$ evidence for non-zero EDE, $f_{\rm EDE}=0.088\pm0.034$. Our measurements reconcile the tension with the SH0ES-only constraint, leading to $H_0=71.81\pm1.19\rm \,\,km\,s^{-1}Mpc^{-1}$. We emphasize that our inference of the Hubble constant yields a considerably higher mean value with only a modestly larger error bar compared to the results of past EDE studies that utilize a similar combination of data sets (with the high-$\ell$ Planck data) \cite{Poulin:2018cxd,Smith:2019ihp,Hill:2020osr}. We scrutinize the impact of the SH0ES data on the goodness-of-fit of each individual measurement. We find that the inclusion of SH0ES moderately degrades the fit to the BOSS likelihood, but this effect is entirely compensated by the improved fit to the SPTPol data. At the same time, the goodness-of-fit of the photometric LSS data is only marginally deteriorated upon including SH0ES in the analysis. This reconciles the conflict between SH0ES-tension-resolving EDE cosmologies and LSS data previously claimed in Refs. \cite{Hill:2020osr,Ivanov:2020ril}. The AIC criterion indicates a mild preference for the EDE scenario over $\Lambda$CDM at the $2.2\sigma$ level. A statistical analysis based on the Bayesian evidence reveals even weaker support for the EDE scenario, caused by the significantly larger volume of parameter space available in the EDE model.
Overall, our results indicate that the combined analysis of Planck and SPTPol data can accommodate a higher $H_0$ value in full concordance with the SH0ES measurement whilst not substantially worsening the fit to the galaxy clustering and weak lensing measurements. This inference is driven by two main causes. First, our primary CMB data agree well with the various LSS data within the $\Lambda$CDM cosmology \cite{Chudaykin:2020acu}, which opens up a new region of higher $\sigma_8$ values that can accommodate a higher $H_0$ in the EDE scenario. Second, the combined data approach provides considerably larger error bars compared to those in the baseline Planck analysis \cite{Aghanim:2018eyx}, which facilitates a resolution of the Hubble tension. In particular, the fit to $\rm Planck\text{-}low\ell\!+\!SPT$ yields an $H_0$ measurement with twice the error bar and a $40\%$ looser constraint on $S_8$ \cite{Chudaykin:2020acu} compared to the full Planck analysis assuming the $\Lambda$CDM cosmology. All this makes the LSS constraints on EDE rather harmless. The constraints obtained in this work are similar to those of Ref. \cite{Murgia:2020ryi}, which examines the lensing-marginalized Planck power spectra.
More broadly, this paper underlines the extreme importance of the CMB lensing effect for obtaining solid constraints on EDE. The future high-resolution Simons Observatory Large Aperture Telescope \cite{Ade:2018sbj} will probe the CMB anisotropy with pinpoint accuracy, thus providing a robust measurement of the CMB lensing effect down to very small scales. Upgrades of the ongoing ground-based experiments such as SPT-3G \cite{Benson:2014qhw,Anderson:2018mry} and AdvACTPol \cite{Calabrese:2014gwa,Li:2018uwb} will continue to make progress towards more precise measurements of the CMB anisotropies on very small scales. The complementary information delivered by the Cosmology Large Angular Scale Survey (CLASS) \cite{Essinger-Hileman:2014pja,Xu:2019rne} and the Simons Array \cite{POLAR,Faundez:2019lmz} can also shed light on the gravitational lensing effect. Future progress from upcoming and ongoing CMB measurements will most probably make clear whether the lensing-like anomaly in the Planck data is real or merely driven by systematic effects.
\vspace{1cm}
\section*{Acknowledgments}
We are indebted to Mikhail M. Ivanov for his collaboration on the initial stages of this project. The work is supported by the RSF grant 17-12-01547. All numerical calculations have been performed with the HybriLIT heterogeneous computing platform (LIT, JINR) (\href{http://hlit.jinr.ru}{http://hlit.jinr.ru}).
\section{Introduction}\label{sec1}
Here, we report on presentations and discussions that took place
at the OC1 parallel session entitled ``Primordial Gravitational Waves
and the CMB''. The programme was designed to include both theoretical
and observational results. The electronic versions of some of the
talks delivered at this session are available at the MG12 website \cite{mg12website}.
We shall start from the overall framework of this session and motivations
for studying the primordial (relic) gravitational waves.
Primordial gravitational waves (PGWs) are necessarily generated by a strong
variable gravitational field of the very early Universe \cite{gr74}.
The existence of relic gravitational waves relies only on the validity of
basic laws of general relativity and quantum mechanics. Specifically, the
generating mechanism is the superadiabatic (parametric) amplification of the waves'
zero-point quantum oscillations. In contrast to other known massless particles,
the coupling of gravitational waves to the external (``pump") gravitational
field is such that they could be classically amplified or quantum-mechanically
generated by the gravitational field of a homogeneous isotropic FLRW
(Friedmann-Lemaitre-Robertson-Walker) universe. Under certain extra conditions
the same applies to the primordial density perturbations. The PGWs are the
cleanest probe of the physical conditions in the early Universe right down
to the limits of applicability of currently available theories, i.e. the Planck
density $\rho_{\rm Pl} = c^5/(G^2 \hbar) \approx 10^{94}{\rm g}/{\rm cm}^{3}$
and the Planck size $l_{\rm Pl} = (G \hbar/c^3)^{1/2} \approx 10^{-33}{\rm cm}$.
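As a quick numerical aside (not part of the original text), the quoted Planck-scale values follow directly from the fundamental constants; the sketch below, with standard CGS values assumed for $c$, $G$ and $\hbar$, reproduces both orders of magnitude:

```python
# Check of the quoted Planck density and length in CGS units.
# The constants are standard assumed values, not taken from the text.
c = 2.998e10      # speed of light, cm/s
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
hbar = 1.055e-27  # reduced Planck constant, erg s

rho_Pl = c**5 / (G**2 * hbar)    # Planck density, g/cm^3
l_Pl = (G * hbar / c**3) ** 0.5  # Planck length, cm

print(f"rho_Pl ~ {rho_Pl:.2e} g/cm^3")  # close to 1e94
print(f"l_Pl   ~ {l_Pl:.2e} cm")        # close to 1e-33
```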
The amount and spectral content of the PGWs field depend on the evolution of the
cosmological scale factor $a(\eta)$ representing the gravitational pump field.
The theory was applied to a variety of $a(\eta)$, including
those that are now called inflationary models \cite{gr74,a12a,starobinski}. If the
exact $a(\eta)$ were known in advance from some fundamental ``theory-of-everything'',
we would have derived the properties of today's signal with no ambiguity. In
the absence of such a theory, we have to use the available partial information
in order to reduce the number of options. The prize is very high: the actual detection
of a particular background of PGWs will provide us with a unique clue to the
birth of the Universe and its very early dynamical behaviour.
To be more specific, let us put PGWs in the context of a complete cosmological
theory hypothesizing that the observed Universe has come to the existence with
near-Planckian energy density and size (see papers \cite{birthpapers,Grishchuk2009} and
references therein). It seems reasonable to conjecture that the embryo Universe was
created by a quantum-gravity or by a ``theory-of-everything'' process in a
near-Planckian state and then started to expand. (If you think that the development
of a big universe from a tiny embryo is arrant nonsense, you should recollect that
you have also developed from a single cell of microscopic size, an analogy proposed by
the biophysicist E. Grishchuk.) The total energy, including gravity, of the
emerging classical configuration was likely to be zero then and remains zero now.
In order for the natural hypothesis of spontaneous birth of the observed Universe
to bring us anywhere near our present state
characterized by $\rho_p =3 H_0^2/(8 \pi G) \approx 10^{-29} {\rm g}/{\rm cm}^3$
and $l_p = 500 l_H$ (which is the minimum size of the present-day patch of homogeneity
and isotropy, as follows from observations \cite{GrZeld}) the newly-born Universe
needs a significant `primordial kick'. During the kick, the size of the
Universe (or, better to say, the size of our patch of homogeneity and isotropy)
should increase by about 33 orders of magnitude without losing too much of the
energy density of whatever substance that was there, or maybe even slightly increasing
this energy density at the expense of the energy density of the gravitational field.
This process is graphically depicted in Fig.~\ref{birth} (adopted from the paper
\cite{Grishchuk2009}). The present state of the accessible Universe is marked by the
point P, the birth of the Universe is marked by the point B. If the
configuration starts at the point B and then expands according to the usual laws
of radiation-dominated and matter-dominated evolution (blue curve), it completely
misses the desired point P. By the time the Universe has reached the size $l_p$,
the energy density of its matter content would have dropped to a level many orders of
magnitude lower than the required $\rho_p$. The only way to reach P from B is to assume
that the newly-born Universe has experienced a primordial kick allowing the point of
evolution to jump over from the blue curve to the black curve.
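The value of $\rho_p$ quoted above is simply the critical density for the present Hubble rate; a minimal check (assuming a fiducial $H_0=70\rm \,km\,s^{-1}Mpc^{-1}$, a value not fixed by the text) gives:

```python
from math import pi

# Present-day critical density rho_p = 3 H_0^2 / (8 pi G) in CGS.
# H0 = 70 km/s/Mpc is an assumed fiducial value; the text quotes only
# the order of magnitude, rho_p ~ 1e-29 g/cm^3.
G = 6.674e-8            # cm^3 g^-1 s^-2
Mpc = 3.086e24          # cm
H0 = 70.0 * 1e5 / Mpc   # s^-1

rho_p = 3.0 * H0**2 / (8.0 * pi * G)
print(f"rho_p ~ {rho_p:.2e} g/cm^3")
```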
If we were interested only in the zero-order approximation of homogeneity and isotropy,
there would be many evolutionary paths equally good for connecting the points B
and P. However, in the next-order approximations, which take into account the inevitable
quantum-mechanical generation of cosmological perturbations, the positioning and form of
the transition curve in Fig.~\ref{birth} become crucial. The numerical value of
the Hubble parameter $H$ (related to the energy density of matter driving the kick, as shown
on the vertical axis of the figure) determines the numerical level of amplitudes of the
generated cosmological perturbations, while the shape of the transition curve determines
the shape of the primordial power spectrum.
\begin{figure}
\includegraphics[width=12cm,height=10cm]{g3.eps}
\caption{A primordial kick is required in order to reach the
present state of the Universe P from the birth event B. Red lines describe
possible transitions that would be accompanied by the generated cosmological
perturbations of observationally necessary level and spectral shape
\cite{Grishchuk2009}. The ``legitimate transition in inflationary theory'' is an
evolution allowed by the incorrect (inflationary) formula for density perturbations.
See explanations in Sec.{\ref{section3}} below.}
\label{birth}
\end{figure}
The simplest assumption about the initial kick is that its entire duration was
characterized by a single power-law scale factor \cite{gr74}
\begin{eqnarray}
\label{scfactor}
a(\eta) = l_o|\eta|^{1+\beta},
\end{eqnarray}
where $l_o$ and $\beta$ are constants, and $\beta < -1$. In this power-law case, the
gravitational pump field is such that the generated primordial metric power spectra
(primordial means considered for wavelengths longer than the Hubble radius at the
given moment of time), for both gravitational waves and density perturbations, have the
universal power-law dependence on the wavenumber $n$:
\begin{equation}
\label{primsp}
h^{2}(n) \propto n^{2(\beta+2)}.
\end{equation}
It is common to write these metric power spectra separately for gravitational waves (gw)
and density perturbations (dp):
\begin{equation}
\label{primsp1}
h^2(n)~({\rm gw}) = B_t^2 n^{n_t}, ~~~~~h^2(n)~({\rm dp})=B_s^2 n^{n_s -1}.
\end{equation}
According to the theory of quantum-mechanical generation of cosmological
perturbations (for a review, see \cite{a12a}), the spectral indices are approximately
equal, $n_s-1 = n_t = 2(\beta+2)$, and the amplitudes $B_t$ and $B_s$ are of the
order of magnitude of the ratio $H_i/H_{\rm Pl}$, where $H_i\sim c/l_o$ is the
characteristic value of the Hubble parameter $H$ during the kick.
The straight lines in Fig.~\ref{birth} symbolize the power-law kicks
(\ref{scfactor}). They generate primordial spectra with constant spectral indices
throughout all wavelengths. In particular, any horizontal line describes an
interval of de Sitter evolution, $\beta = -2$, ${\dot H} =0$, $H = {\rm const}$. (An initial
kick driven by a scalar field is appropriately called inflation:
dramatic increase in size with no real change in purchasing power, i.e. in matter
energy density.) The gravitational pump field of a de Sitter kick transition
generates perturbations with flat (scale-independent) spectra $n_s-1 = n_t = 0$.
The red horizontal line shown in Fig.~\ref{birth} corresponds to
$H_i/H_{\rm Pl} \approx 10^{-5}$ and the generated primordial amplitudes
$B_t \approx B_s \approx 10^{-5}$. In numerical calculations,
the primordial metric power spectra are usually parameterized by
\begin{eqnarray}
\label{PsPt}
P_{t}(k)=A_t(k/k_0)^{n_t},~~P_{s}(k)=A_s(k/k_0)^{n_s-1},
\end{eqnarray}
where $k_0=0.002$Mpc$^{-1}$.
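To make the bookkeeping explicit, the sketch below ties the kick exponent $\beta$ of (\ref{scfactor}) to the spectral indices via $n_s-1=n_t=2(\beta+2)$ and evaluates the power laws (\ref{PsPt}) at the pivot scale; the amplitudes are illustrative placeholders, not fitted values:

```python
# Relate the power-law kick exponent beta to the spectral indices,
# n_s - 1 = n_t = 2*(beta + 2), and evaluate P(k) = A*(k/k0)**index.
# The amplitudes A_t, A_s below are placeholders for illustration.
k0 = 0.002  # pivot scale, Mpc^-1

def indices(beta):
    n_t = 2.0 * (beta + 2.0)
    return n_t, n_t + 1.0  # (n_t, n_s)

def P(k, A, index):
    return A * (k / k0) ** index

n_t, n_s = indices(-2.0)   # de Sitter kick -> flat spectra
A_t, A_s = 4.0e-10, 2.0e-9

print(n_t, n_s)                 # 0.0 1.0
print(P(k0, A_s, n_s - 1.0))    # equals A_s at the pivot scale
```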
Although the assumption of a single piece of power-law evolution (\ref{scfactor})
is simple and easy to analyze, the reality could be more complicated and probably
was more complicated (see the discussion of WMAP data in Sec.\ref{section3}). A less
simplistic kick can be approximated by a sequence of power-law evolutions, and
then the primordial power spectra will consist of a sequence of power-law intervals.
The amplitudes of generated cosmological perturbations are large at
long wavelengths. According to the widely accepted assumption, the observed
anisotropies in the cosmic microwave background radiation (CMB) are caused
by perturbations of quantum-mechanical origin. This assumption is
partially supported by the observed ``peak and dip'' structure of the CMB angular
power spectra. Presumably, this structure reflects the phenomenon of
quantum-mechanical phase squeezing and standing-wave pattern of the generated metric
fields \cite{a12a}. The search for the relic gravitational waves is a goal of
a number of current and future space-borne, sub-orbital and ground-based CMB
experiments \cite{Planck,WMAP5,BICEP,quad,Clover,QUITE,EBEX,SPIDER,cmbpol}.
The CMB anisotropies are usually characterized by the four angular
power spectra $C_{\ell}^{TT}$, $C_{\ell}^{EE}$, $C_{\ell}^{BB}$ and
$C_{\ell}^{TE}$ as functions of the multipole number $\ell$. The contribution
of gravitational waves to these power spectra has been studied, both
analytically and numerically, in a number of papers
\cite{Polnarev1,a8,a11,a12,a13}. The derivation of today's CMB
power spectra brings us to approximate formulas of the following structure \cite{a12}:
\begin{eqnarray}
\label{exact-clxx'}
\begin{array}{l}
C_{\ell}^{TT}= \int \frac{dn}{n} [h(n, \eta_{rec})]^2 \left[F^T_{\ell}(n)\right]^2, \\
C_{\ell}^{TE}=\int \frac{dn}{n} h(n, \eta_{rec}) h^{\prime}(n, \eta_{rec})
\left[F^T_{\ell}(n) F^E_{\ell}(n)\right], \\
C_{\ell}^{YY}=\int \frac{dn}{n} [h^{\prime} (n, \eta_{rec})]^2
\left[F^Y_{\ell}(n)\right]^2, ~~~~{\rm where}~Y=E,B.
\end{array}
\end{eqnarray}
In the above expressions, $[h(n, \eta_{rec})]^2$ and $[h^{\prime} (n, \eta_{rec})]^2$
are power spectra of the gravitational wave field and its first
time-derivative. The spectra are taken at the recombination (decoupling) time
$\eta_{rec}$. The functions $F^X_{\ell}(n)$ ($X=T, E, B$) take care of the
radiative transfer of CMB photons in the presence of metric perturbations. To a good
approximation, the power residing in the metric fluctuations at wavenumber $n$
translates into the CMB $TT$ power at the angular scales characterized by the
multipoles $\ell \approx n$. Similar results hold for the CMB power spectra
induced by density perturbations. Actual numerical calculations use equations more accurate
than Eq.~(\ref{exact-clxx'}).
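To illustrate the structure of these integrals, the toy sketch below evaluates the $C_{\ell}^{TT}$ expression with a schematic flat gw spectrum and an invented Gaussian stand-in for $F^T_{\ell}(n)$; both choices are purely illustrative and carry no radiative-transfer physics:

```python
from math import exp, log

def h2(n, B_t=1e-5, n_t=0.0):
    # schematic flat primordial gw power spectrum, B_t^2 * n^{n_t}
    return B_t**2 * n**n_t

def F_T(ell, n, width=0.3):
    # invented stand-in for the transfer function, peaked near n ~ ell
    return exp(-0.5 * (log(n / ell) / width) ** 2)

def C_TT(ell, n_min=1.0, n_max=1000.0, steps=4000):
    # trapezoid rule in ln(n) for  C = \int (dn/n) h^2(n) [F^T(n)]^2
    dlnn = (log(n_max) - log(n_min)) / steps
    total = 0.0
    for i in range(steps + 1):
        n = exp(log(n_min) + i * dlnn)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * h2(n) * F_T(ell, n) ** 2 * dlnn
    return total

print(C_TT(10))  # ~5e-11 with these toy choices
```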
There are several differences between the CMB signals arising from density
perturbations and gravitational waves. For example, gravitational waves
produce B-component of polarization, while density perturbations do not \cite{a8};
gravitational waves produce negative TE-correlation at lower multipoles, while
density perturbations produce positive correlation
\cite{a12a,a12,polnarev,zbg,zbg2}, and so on. However, it is important to realize that
it is not simply the difference between zero and non-zero or between positive and
negative that matters. (In any case, various noises and systematics will make
this division much less clear-cut.) What really matters is that the gw and dp sources
produce different CMB outcomes, and they are in principle distinguishable, even if
they both are non-zero and of the same sign. For example, if the parameters of density
perturbations were precisely known from other observations, any observed deviation
from the expected $TT$, $TE$ and $EE$ correlation functions could be attributed
(in conditions of negligible noises) to gravitational waves. From this perspective,
the identification of the PGWs signal goes well beyond the often stated goal of
detecting the B-mode of polarization. In fact, as was argued in the talk by
D.~Baskaran, delivered on behalf of the group including also L.~P.~Grishchuk
and W.~Zhao, the $TT$, $TE$, and $EE$ observational channels could be much more
informative than the $BB$ channel alone. Specifically for the Planck mission, the
inclusion of other correlation functions, in addition to $BB$, will significantly
increase the expected signal-to-noise ratio in the PGWs detection.
It is convenient to compare the gravitational wave signal in the CMB with that
induced by density perturbations. A useful measure, directly related to
observations, is the quadrupole ratio $R$ defined by
\begin{eqnarray}
\label{defineR}
R \equiv \frac{C_{\ell=2}^{TT}({\rm gw})}{C_{\ell=2}^{TT}({\rm dp})},
\end{eqnarray}
i.e. the ratio of contributions of gw and dp to the CMB temperature quadrupole.
Another measure is the so-called tensor-to-scalar ratio $r$. This quantity is
built from primordial power spectra (\ref{PsPt}):
\begin{eqnarray}
\label{definer}
r \equiv \frac{A_t}{A_s}.
\end{eqnarray}
Usually, one finds this parameter linked to incorrect (inflationary) statements.
Concretely, inflationary theory substitutes (for ``consistency'') its prediction of
arbitrarily large amplitudes of density perturbations in the limit of models where
the spectral index $n_s$ approaches $n_s=1$ and $n_t$ approaches $n_t=0$ (horizontal
transition lines in Fig.~\ref{birth}) with the claim that it is the amount of relic
gravitational waves, expressed in terms of $r$, that should be arbitrarily small.
(For more details, see Sec.~\ref{section3}.) However, if $r$ is defined by
Eq.~(\ref{definer}) without implying inflationary claims, one can use
this parameter. Moreover, one can derive a relation between $R$ and $r$ which depends
on the background cosmological model and spectral indices. For a rough comparison
of results one can use $r\approx2R$.
\section{Overview of oral presentations}
The OC1 session opened with the talk of Brian Keating, appropriately entitled
``The Birth Pangs of the Big Bang in the Light of BICEP''. The speaker reported on the
initial results from the Background Imaging of Cosmic Extragalactic Polarization (BICEP)
experiment. The conclusions were based on data from two years of observation. For
the first time, some meaningful limits on $r$ were set exclusively from the fact of
(non)observation of the B-mode of polarization (see figure \ref{Keatingfig1} adopted
from Chiang et al. paper\cite{keating2009}). The author paid attention to various
systematic effects and showed that they were smaller than the statistical errors.
B.~Keating explained how the BICEP's design and observational strategies will serve
as a guide for future experiments aimed at detecting primordial gravitational waves.
In conclusion, it was stressed that a further $90\%$ increase in the amount of
analyzed BICEP data is expected in the near future.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{Keating1.eps}
\end{center}
\caption{BICEP's TE, EE, and BB power spectra together with data from other CMB
polarization experiments. Theoretical spectra from a $\Lambda$CDM model with $r=0.1$ are shown
for comparison. The BB curve is the sum of the gravitational wave and lensing components.
At degree angular scales BICEP's constraints on $BB$ are the most powerful to
date \cite{keating2009}.}
\label{Keatingfig1}
\end{figure}
The next talk, delivered by Deepak Baskaran (co-authors:
L.~P.~Grishchuk and W.~Zhao), was entitled ``Stable indications of relic gravitational
waves in WMAP data and forecasts for the Planck mission''. D. Baskaran reported on
the results of the likelihood analysis, performed by this group of authors, of the WMAP
5-year $TT$ and $TE$ data at lower multipoles. Obviously, in the center of their effort
was the search for the presence of a gravitational wave signal \cite{zbg,zbg2}. For the
parameter $R$, the authors found the maximum likelihood value $R=0.23$, indicating a
significant amount of gravitational waves. Unfortunately, this determination is
surrounded by large uncertainties due to remaining noises. This means that the
hypothesis of no gravitational waves, $R=0$, cannot be excluded yet with any significant
confidence. The speaker compared these findings with the result of WMAP team, which found
no evidence for gravitational waves. The reasons why the gw signal can be overlooked
in a data analysis were identified and discussed. Finally, D.~Baskaran presented the
forecasts for the Planck mission. It was shown that the stable indications of relic
gravitational waves in the WMAP data are likely to become a certainty in the Planck
experiment. Specifically, if PGWs are characterized by the maximum likelihood parameters
found \cite{zbg,zbg2} from WMAP5 data, they will be detected by {\it Planck} at the
signal-to-noise level $S/N=3.65$, even under unfavorable conditions in terms of
instrumental noises and foregrounds. (For more details along these lines, see
sections below.)
The theoretical aspects of generation of CMB polarization and temperature anisotropies by
relic gravitational waves were reviewed by Alexander Polnarev in the contribution ``CMB
polarization generated by primordial gravitational waves. Analytical solutions''.
The author described the analytical methods of solving the radiative
transfer equations in the presence of gravitational waves. This problem is usually tackled
by numerical codes, but a purely numerical approach can obscure the physical
origin of the results. The analytical methods are a useful complement
to numerical techniques. They allow one not only to explain in terms of the
underlying physics the existing features of the final product, but also to anticipate the
appearance of new features when the physical conditions change. A. Polnarev showed how the
problem of CMB anisotropies induced by gravitational waves can be reduced to a single integral
equation. This equation can be further analyzed in terms of some integral and differential
operators. Building on this technique, the author formulated analytical solutions as
expansion series over gravitational wave frequency. He discussed the resonance generation
of polarization and possible observational consequences of this effect.
An overview of the current experimental situation was delivered by Carrie MacTavish in
the talk ``CMB from space and from a balloon''. C. MacTavish focused on the interplay
between experimental results from two CMB missions: the Planck satellite and the Spider
balloon experiment. Spider is scheduled for flight over Australia in spring 2010, making
measurements with 1600 detectors. The speaker emphasised the important combined impact
of these two experiments on determination of cosmological parameters in general.
The ever-increasing precision of CMB experiments warrants the analysis of subtle observational
effects inspired by the ideas from particle physics. The talk by Stephon Alexander
``Can we probe leptogenesis from inflationary primordial birefringent gravitational
waves'' discussed a special mechanism of production of the lepton asymmetry with the
help of cosmological birefringent gravitational waves. This mechanism was proposed in
the recent paper \cite{Alexander}. The mechanism assumes that gravitational waves are generated
in the presence of a CP violating component of the inflaton field that couples to
a gravitational Chern-Simons term (Chern-Simons gravity). The lepton number
arises via the gravitational anomaly in the lepton number current. As pointed out
by the speaker, the participating gravitational waves should lead to a unique parity
violating cross correlation in the CMB. S.~Alexander discussed the viability of
detecting such a signal, and concluded by analyzing the corresponding observational
constraints on the proposed mechanism of leptogenesis.
Apart from being an arena for detecting relic gravitational waves, the CMB is of course
a powerful tool for cosmology in general. The final talk of the session, delivered
by Grant Mathews (co-authors D.~Yamazaki, K.~Ichiki and T.~Kajino), discussed the
``Evidence for a primordial magnetic field from the CMB temperature and polarization
power spectra''. This is an interesting subject, as magnetic fields are abundant
in many astrophysical and cosmological phenomena. In particular,
primordial magnetic fields could manifest themselves in the temperature and
polarization anisotropies of the CMB. The speaker reported on
a new theoretical framework for calculating CMB anisotropies
along with the matter power spectrum in the presence of magnetic fields with
power-law spectra. The preliminary evidence from the data on matter and CMB power
spectra on small angular scales suggests upper and lower limits on the
strength of the magnetic field and its spectral index. It was pointed out that this
determination might be the first direct evidence of the presence of a primordial
magnetic field in the era of recombination. Finally, the author showed that the
existence of such a magnetic field can lead to an independent constraint on the
neutrino mass.
\section{Analysis of WMAP data and outlook for {\it Planck}}
Along the lines of the presentation by Baskaran et al., we review the results of the
recent analysis of WMAP 5-year data. The results suggest evidence,
although still preliminary, that relic gravitational waves are present, in an
amount characterized by $R\approx0.23$. This conclusion follows from the likelihood
analysis of WMAP5 $TT$ and $TE$ data at lower multipoles $\ell\leq100$. It is
only within this range of multipoles that the power in relic gravitational waves
is comparable with that in density perturbations, and gravitational waves compete
with density perturbations in generating CMB temperature and polarization
anisotropies. At larger multipoles, the dominant signal in CMB comes primarily from
density perturbations.
\subsection{Likelihood analysis of WMAP data
\label{section2.1}}
The analysis in papers \cite{zbg,zbg2} was based on proper specification of the
likelihood function. Since $TT$ and $TE$ data are the most informative in the WMAP5,
the likelihood function was marginalized over the remaining data variables $EE$
and $BB$. In what follows $D_{\ell}^{TT}$ and $D_{\ell}^{TE}$
denote the estimators (and actual data) of the $TT$ and $TE$ power spectra.
The joint pdf for $D_{\ell}^{TT}$ and $D_{\ell}^{TE}$ has the form
\begin{eqnarray}
f(D_{\ell}^{TT},D_{\ell}^{TE})= n^2{x}^{\frac{n-3}{2}}
\left\{2^{1+n}\pi\Gamma^2(\frac{n}{2})(1-\rho_{\ell}^2)(\sigma_{\ell}^T)^{2n}
(\sigma_{\ell}^E)^2\right\}^{-\frac{1}{2}}
\nonumber\\
\times\exp\left\{\frac{1}{1-\rho^2_{\ell}}\left(\frac{{\rho_{\ell}}
{z}}{{\sigma_\ell^T}{\sigma_\ell^E}}-\frac{{z}^2}{2x{(\sigma_\ell^E)^2}
}-\frac{{x}}{2{(\sigma_\ell^T)^2}}\right)\right\}.
\label{pdf_CT}
\end{eqnarray}
This pdf contains the data variables $D_{\ell}^{XX'}$
($XX'=TT,TE$) through the quantities ${x}\equiv n(D_\ell^{TT}+N_{\ell}^{TT})$
and ${z}\equiv nD_\ell^{TE}$, where $N_{\ell}^{TT}$ is the total temperature noise
power spectrum. The quantity $n= (2\ell+1)f_{\rm sky}$ is the number of effective
degrees of freedom at multipole $\ell$, where $f_{\rm sky}$ is the sky-cut factor.
$f_{\rm sky}=0.85$ for WMAP and $f_{\rm sky}=0.65$ for {\it Planck}. $\Gamma$
is the Gamma function. The quantities $\sigma_\ell^T$, $\sigma_\ell^E$ and $\rho_\ell$,
include theoretical power spectra $C_{\ell}^{XX'}$ and contain the
perturbation parameters $R$, $A_s$, $n_s$ and $n_t$, whose numerical values the
likelihood analysis seeks to find.
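For concreteness, the pdf (\ref{pdf_CT}) can be transcribed directly into code. In the sketch below the theoretical inputs $\sigma_\ell^T$, $\sigma_\ell^E$, $\rho_\ell$ and the data values are replaced by made-up numbers for illustration; in the real analysis they are built from the model spectra $C_{\ell}^{XX'}$ and the noise:

```python
from math import exp, gamma, pi, sqrt

# Direct transcription of the joint pdf f(D^TT, D^TE) quoted above.
# All numerical inputs below are made up for illustration only.
def joint_pdf(x, z, n, sigma_T, sigma_E, rho):
    norm = n**2 * x**((n - 3.0) / 2.0)
    norm /= sqrt(2.0**(1.0 + n) * pi * gamma(n / 2.0)**2
                 * (1.0 - rho**2) * sigma_T**(2.0 * n) * sigma_E**2)
    arg = (rho * z / (sigma_T * sigma_E)
           - z**2 / (2.0 * x * sigma_E**2)
           - x / (2.0 * sigma_T**2)) / (1.0 - rho**2)
    return norm * exp(arg)

# effective dof at ell = 10 with the WMAP sky cut f_sky = 0.85
n_eff = (2 * 10 + 1) * 0.85
p = joint_pdf(x=20.0, z=2.0, n=n_eff, sigma_T=1.0, sigma_E=0.5, rho=0.3)
print(p)  # positive and finite
```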
The three free perturbation parameters $R$, $A_s$, $n_s$ ($n_t=n_s-1$) were
determined by maximizing the likelihood function
\begin{eqnarray}
\nonumber\label{ctlikelihood1}
\mathcal{L}\propto \prod_{\ell}f(D_{\ell}^{TT}, D_{\ell}^{TE})
\end{eqnarray}
for $\ell=2,\dots,100$. The background cosmological model was fixed at the
best fit $\Lambda$CDM cosmology \cite{wmap5}.
The maximum likelihood (ML) values of the perturbation parameters
(i.e.~the best-fit values) were found to be
\begin{eqnarray}
R=0.229,~~~n_s=1.086,~~~A_s=1.920\times10^{-9}
\label{best-fit}
\end{eqnarray}
and $n_t=0.086$. The region of maximum likelihood was probed by 10,000 sample points
using the MCMC method. The projections of all 10,000 points onto the 2-dimensional
planes $R-n_s$ and $R-A_s$ are shown in Fig.~\ref{figurea1.1}.
The samples with relatively large values of the likelihood (red, yellow and green
colors) are concentrated along the curve, which projects into approximately straight
lines (at least, up to $R \approx 0.5$):
\begin{eqnarray}
\label{1Dmodel}
n_s=0.98+0.46R, ~~~~~A_s=(2.27-1.53R)\times10^{-9}.
\end{eqnarray}
These combinations of parameters $R, n_s, A_s$ produce roughly equal
responses in the CMB power spectra. The best fit model (\ref{best-fit}) is a
particular point on these lines, $R=0.229$. The family of models (\ref{1Dmodel}) is
used in Sec.\ref{section4} for setting the expectations for the Planck experiment.
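The degenerate family (\ref{1Dmodel}) is easy to check numerically; evaluating it at the ML value $R=0.229$ recovers the best-fit parameters (\ref{best-fit}) to rounding accuracy:

```python
def family(R):
    """One-parameter family of models along the WMAP5 likelihood ridge
    (valid up to R ~ 0.5): returns (n_s, A_s)."""
    n_s = 0.98 + 0.46 * R
    A_s = (2.27 - 1.53 * R) * 1e-9
    return n_s, A_s

n_s, A_s = family(0.229)            # the ML model
print(n_s, A_s)                     # ~1.085 and ~1.92e-9
print(n_s - 1 > 0)                  # n_t = n_s - 1 > 0: a 'blue' spectrum
```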
The marginalized 2-dimensional and 1-dimensional distributions are plotted in
Fig.~\ref{figurea1} and Fig.~\ref{figureb12}, respectively. The ML values
for the 1-dimensional marginalized distributions and their $68.3\%$
confidence intervals are given by
\begin{eqnarray}\label{best-fit-1d}
R=0.266\pm0.171,~~n_s=1.107^{+0.087}_{-0.070} ,
~~A_s=(1.768^{+0.307}_{-0.245})\times10^{-9}.
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=6cm,height=7cm]{a1.eps}\includegraphics[width=6cm,height=7cm]{a2.eps}
\end{center}
\caption{The projection of 10,000 samples of the 3-dimensional likelihood
function onto the $R-n_s$ (left panel) and $R-A_s$ (right panel) planes.
The color of an individual point in Fig.~\ref{figurea1.1}
signifies the value of the 3-dimensional likelihood of
the corresponding sample. The black $+$ indicates the parameters listed in (\ref{best-fit}).
Figure adopted from Zhao et al \cite{zbg2}.}
\label{figurea1.1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=6cm,height=6cm]{b3.eps}\includegraphics[width=6cm,height=6cm]{b4.eps}
\end{center}
\caption{The ML points (red $\times$) and the $68.3\%$ and $95.4\%$ confidence
contours (red solid lines) for 2-dimensional likelihoods: $R-n_s$ (left panel)
and $R-A_s$ (right panel). In the left panel, the WMAP confidence contours
are also shown for comparison. Figure adopted from Zhao et al \cite{zbg2}.}
\label{figurea1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{c5.eps}\includegraphics[width=5cm]{c6.eps}\\
\includegraphics[width=5cm]{c7.eps}
\end{center}\caption{1-dimensional likelihoods for $R$ (left), $n_s$ (middle)
and $A_s$ (right). Figure adopted from Zhao et al \cite{zbg2}.}\label{figureb12}
\end{figure}
The derived results allow one to conclude that
the maximum likelihood value of $R$ persistently indicates a
significant amount of relic gravitational waves, albeit
with considerable uncertainty. The $R=0$ hypothesis (no gravitational
waves) lies about $1\sigma$, or a little more, away from the $R=0.229$ model,
but not yet at a significantly
larger distance. The spectral indices $n_s, n_t$ persistently point
to the `blue' shape of the primordial spectra, i.e. $n_s >1, n_t >0$, in
the interval of wavelengths responsible for the analyzed multipoles
$\ell\le\ell_{\rm max}=100$. This casts doubt on the (conventional)
scalar fields as a possible driver for the initial kick, because the scalar
fields cannot support $\beta > -2$ and, consequently, cannot support
$n_s >1, n_t >0$.
\subsection{How relic gravitational waves can be overlooked in the likelihood
analysis of data\label{section3}}
The results of this analysis differ from the conclusions of the WMAP
team \cite{wmap5}. The WMAP team found no evidence for gravitational waves
and arrived at a `red' spectral index $n_s=0.96$. The
WMAP findings are symbolized by black dashed and blue dash-dotted contours
in Fig.~\ref{figurea1}. It is important to discuss the likely reasons for
these disagreements.
Two main differences in the data analysis are as follows. First, the present
analysis is restricted only to multipoles $\ell\leq100$ (i.e. to the interval
in which there is any chance of finding gravitational waves), whereas the WMAP
team uses the data from all multipoles up to $\ell\sim1000$, keeping
spectral indices constant in the entire interval of participating
wavelengths (thus making the uncertainties smaller by increasing
the number of included data points which have nothing to do with
gravitational waves). Second, in the current work, the relation $n_t=n_s-1$
implied by the theory of quantum-mechanical generation of cosmological
perturbations is used, whereas
the WMAP team uses the inflationary `consistency relation' $r=-8n_t$,
which automatically sends $r$ to zero when $n_t$ approaches zero.
It is important to realize that the inflationary `consistency relation'
\[
r= 16 \epsilon = - 8 n_t
\]
is a direct consequence of the single contribution of inflationary theory
to the subject of cosmological perturbations, which is the incorrect
formula for the power spectrum of density perturbations,
containing the ``zero in the denominator'' factor:
\[
P_{s} \approx \frac{1}{\epsilon}\left(\frac{H}{H_{Pl}}\right)^2.
\]
The ``zero in the denominator'' factor $\epsilon$ is
$\epsilon \equiv - {\dot H}/{H^2}$. This factor tends to zero in the limit
of standard de Sitter inflation ${\dot H}=0$ (any horizontal line in
Fig.~\ref{birth}) independently of the curvature of space-time and strength
of the generating gravitational field characterized by $H$. To make the wrong
theory look ``consistent'', inflationary model builders push $H/H_{Pl}$ down,
whenever $\epsilon$ goes to zero (for example, down to the level marked by the
minutely inclined line ``legitimate transition in inflationary theory"
in Fig.~\ref{birth}), thus keeping $P_s$ at the observationally required level
and making the amount of relic gravitational waves arbitrarily small. In fact,
the most advanced inflationary theories based on strings, branes,
loops, tori, monodromies, etc. predict the ridiculously small amounts of
gravitational waves, something at the level of $r \approx 10^{-24}$, or so.
[There is no doubt, there will be new
inflationary theories which will predict something drastically
different. Inflationists are good at devising theories that only a mother could
love, but not at answering simple physical questions such as quantization
of a cosmological oscillator with variable frequency. For a more detailed
criticism of inflationary theory, see Grishchuk\cite{a12a}.] Obviously,
the analysis in papers \cite{zbg,zbg2} does not use the inflationary theory.
Baskaran et al concluded that, under the conditions of the relatively noisy
WMAP data, it was the assumed constancy of spectral indices in a broad
spectrum \cite{wmap5} that was mostly responsible for the strong
dissimilarity of data analysis results with regard to gravitational waves.
Having repeated the same analysis of data in the adjacent interval of multipoles
$101 \leq \ell \leq 220$, the authors of paper \cite{zbg2} found a
somewhat different (smaller) spectral index $n_s$ in this interval. They
came to the conclusion that the hypothesis of strictly constant spectral indices
(perfectly straight lines in Fig.~\ref{birth}) should be abandoned. It is
necessary to mention that the constancy of $n_s$ over the vast region of
wavenumbers, or possibly a simple running of $n_s$, is a usual assumption in a
number of works \cite{wmap5,othergroups,wishart3}.
It is now clear why it is dangerous, in the search for relic gravitational
waves, to include data from higher multipoles, and especially assuming a strictly
constant spectral index $n_s$. Although the higher multipole data
have nothing to do with gravitational waves, they bring $n_s$ down. If one
postulates that $n_s$ is one and the same constant at all $\ell$'s, this
additional `external' information about $n_s$ propagates into the estimate of $R$
and brings $R$ down. This is clearly seen, for example, in the left panel of
Fig.~\ref{figurea1}. The localization of $n_s$ near, say, the line $n_s =0.96$
would cross the solid red contours along that line and would enforce lower, or
zero, values of $R$. However, as was shown \cite{zbg2}, $n_s$ is
appreciably different even between the two neighbouring intervals
of $\ell$'s, namely $2\leq \ell\leq 100$ and $101\leq \ell\leq 220$. These
considerations, as for how relic gravitational waves can be overlooked in a data
analysis, have general significance and will be applicable to any CMB data.
\subsection{Forecasts for the Planck mission \label{section4}}
The final part of the presentation by Baskaran et al dealt with
forecasts for the recently launched Planck satellite \cite{Planck}.
The ability of a CMB experiment to detect gravitational waves
is characterized by the signal-to-noise ratio defined by
\cite{zbg,zbg2}
\begin{eqnarray}
\label{snr}
S/N\equiv R/\Delta R,
\end{eqnarray}
where the numerator is the ``true'' value of the parameter $R$ (its ML value
or the input value in a numerical simulation) while $\Delta R$ in the
denominator is the expected uncertainty in determination of $R$ from the
data.
In formulating the observational expectations, all of the available
information channels (i.e. $TT$, $TE$, $EE$ and $BB$ correlation functions)
were taken into account. The calculations were performed using the Fisher matrix
formalism. The total uncertainty entering the Fisher matrix calculations is
comprised of instrumental and foreground noises \cite{Planck,cmbpol-fore,zbg2}
as well as the statistical uncertainty of the inherently random signal under
discussion. The possibility of partial removal of contamination arising from foreground sources
was quantified by the residual noise factor $\sigma^{\rm fg}$. The three considered
cases included $\sigma^{\rm fg}=1$ (no foreground removal), $\sigma^{\rm fg}=0.1$
($10\%$ residual foreground noise) and $\sigma^{\rm fg}=0.01$ ($1\%$ residual
foreground noise). In order to gauge the worst case scenario, the `pessimistic'
case was added. It assumes $\sigma^{\rm fg}=1$ and the instrumental noise at
each frequency $\nu_i$ four times larger than the nominal value reported by the
Planck team. This increased noise is meant to mimic the situation where it is not
possible to get rid of various systematic effects \cite{systematics}, the $E$-$B$
mixing \cite{ebmixture}, cosmic lensing \cite{lensing}, etc. which all affect the
$BB$ channel.
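The Fisher-matrix machinery itself is compact: $F_{ij}=\sum_\ell (\partial C_\ell/\partial\theta_i)(\partial C_\ell/\partial\theta_j)/{\rm Var}(C_\ell)$, with $\Delta R=\sqrt{(F^{-1})_{RR}}$ the marginalized uncertainty. In the sketch below the derivative and variance arrays are synthetic placeholders, not the actual power-spectrum computations of \cite{zbg2}:

```python
import numpy as np

# Schematic Fisher forecast for theta = (R, n_s, A_s).  dC[l, i] holds
# the derivative of the band power at multipole l with respect to
# parameter i, and var[l] the total variance (signal + instrumental +
# foreground noise).  Both arrays are synthetic stand-ins here.
rng = np.random.default_rng(1)
n_ell, n_par = 99, 3
dC = rng.standard_normal((n_ell, n_par))
var = 1.0 + rng.random(n_ell)

# F_ij = sum_l (dC_l/dtheta_i)(dC_l/dtheta_j) / var_l
F = (dC[:, :, None] * dC[:, None, :] / var[:, None, None]).sum(axis=0)

# Marginalized uncertainty on R (parameter 0) and the signal-to-noise.
delta_R = np.sqrt(np.linalg.inv(F)[0, 0])
R_true = 0.229
print(f"S/N = {R_true / delta_R:.2f}")
```

Note that marginalizing over $n_s$ and $A_s$ always gives $\Delta R \geq 1/\sqrt{F_{RR}}$, which is why evaluating all three parameters from the same dataset is the conservative assumption.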
The total $S/N$ for the one-parameter family of models defined by the
large WMAP5 likelihoods (\ref{1Dmodel}) is plotted in Fig.~\ref{figurev2}. This
graph is based on the conservative assumption that all unknown parameters
$R$, $n_s$ and $A_s$ are evaluated from one and the same dataset.
Four options are depicted: $\sigma^{\rm fg}=0.01,0.1,1$ and the pessimistic
case. The results for the ML model (\ref{best-fit}) are given by the
intersection points along the vertical line $R=0.229$. Specifically,
$S/N = 6.69,~6.20,~5.15$ for $\sigma^{\rm fg}=0.01,~0.1,~1$, respectively. The
good news is that even in the pessimistic case one gets $S/N>2$ for $R>0.11$, and
the Planck satellite will be capable of seeing the ML signal $R=0.229$ at
the level $S/N=3.65$.
\begin{figure}
\centerline{\includegraphics[height=8cm]{h14.eps}}
\caption{The $S/N$ as a function of $R$. Figure adopted from Zhao et al \cite{zbg2}.}
\label{figurev2}
\end{figure}
It is important to treat separately the contributions to the total signal-to-noise
ratio supplied by different information channels and different individual
multipoles. The $(S/N)^2$ calculated for three combinations
of channels, $TT+TE+EE+BB$, $TT+TE+EE$ and $BB$ alone, is shown in
Fig.~\ref{figurev11}. Four panels describe four noise models:
$\sigma^{\rm fg} = 0.01,~0.1,~1$ and the pessimistic case. Surely, the
combination $TT+TE+EE+BB$ is more sensitive than either of the other two,
i.e.~$TT+TE+EE$ and $BB$ alone. For example, in the case $\sigma^{\rm fg}=0.1$
the use of all correlation functions ensures $S/N$ which is
$\sim50\%$ greater than $BB$ alone and $\sim30\%$ greater than $TT+TE+EE$.
The situation is even more dramatic in the pessimistic case. The ML model
(\ref{best-fit}) can be barely seen through the $B$-modes alone,
because the $BB$ channel gives only $S/N=1.75$, whereas the use of all
of the correlation functions can provide a confident detection with $S/N=6.48$.
Comparing $TT+TE+EE$ with $BB$ one can see that the first method is better
than the second, except in the case when $\sigma^{\rm fg}=0.01$ and $R$ is
small ($R<0.16$). In the pessimistic case, the role of the $BB$ channel is so
small that the $TT+TE+EE$ method provides essentially the same sensitivity as
all channels $TT+TE+EE+BB$ together.
\begin{figure}
\centerline{\includegraphics[width=18cm,height=15cm]{i15.eps}}
\caption{The $S/N$ for different combinations of the information channels,
$TT+TE+EE+BB$, $TT+TE+EE$ and $BB$.
Figure adopted from Zhao et al \cite{zbg2}.}\label{figurev11}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=18cm,height=15cm]{j16.eps}}
\caption{The individual $(S/N)_{\ell}^{2}$ as functions of $\ell$ for
different combinations of information channels and different
levels of foreground contamination. Calculations are done for the ML
model (\ref{best-fit}) with $R=0.229$.
Figure adopted from Zhao et al \cite{zbg2}.}\label{figurep41}
\end{figure}
Finally, Fig.~\ref{figurep41} shows the contributions to the
total signal-to-noise ratio from individual multipoles.
It can be seen that a very deep foreground cleaning, $\sigma^{\rm fg}=0.01$,
makes the very low (reionization) multipoles
$\ell\lesssim 10$ the major contributors to the total $(S/N)^2$, and mostly in
the $BB$ channel. However, for large $\sigma^{\rm fg}=0.1,~1$, and especially in the
pessimistic case (see the lower right panels in Fig. \ref{figurep41}), the sensitivity of the
$BB$ channel is severely degraded at all $\ell$'s. At the same time, as Fig. \ref{figurep41}
illustrates, the $\ell$-decomposition of $(S/N)^2$ for $TT+TE+EE$ combination depends only
weakly on the level of $\sigma^{\rm fg}$. Furthermore, in this method,
the signal to noise curves generally peak at $\ell\sim(20-60)$. Therefore, it
will be particularly important for Planck mission to avoid excessive
noises in this region of multipoles.
\section{Conclusions}
In general, the OC1 parallel session was balanced and covered
both theoretical and experimental aspects of primordial gravitational waves
and the CMB. Thanks to excellent contributions of all participants, this subject
received a new momentum. Hopefully, new observations and theoretical work will bring
conclusive results in the near future.
\bibliographystyle{ws-procs975x65}
\section{Outline: Patterns in the Cosmic Web}
\noindent The spatial cosmic matter distribution on scales of a few up to more than a hundred
Megaparsec displays a salient and pervasive foamlike pattern. Revealed through the painstaking efforts
of redshift survey campaigns, it has completely revised our view of the matter distribution on these
cosmological scales. The weblike spatial arrangement of galaxies and mass into elongated filaments,
sheetlike walls and dense compact clusters, the existence of large near-empty void regions and the hierarchical nature
of this mass distribution -- marked by substructure over a wide range of scales and densities -- are three
major characteristics of what we have come to know as the {\it Cosmic Web}.
\begin{figure*}
\begin{center}
\vskip -0.5truecm
\mbox{\hskip -0.5truecm\includegraphics[width=20.0cm,height=13.0cm,angle=90.0]{weyval.fig1.ps}}
\caption{The galaxy distribution uncovered by the 2dF GALAXY REDSHIFT SURVEY. Depicted are the
positions of 221414 galaxies in the final 2dFGRS catalogue. Clearly visible is the foamlike
geometry of walls, filaments and massive compact clusters surrounding near-empty void regions.
Image courtesy of M. Colless; see also Colless et al. (2003) and http://www2.aao.gov.au/2dFGRS/.}
\label{fig:2dfgaldist}
\end{center}
\end{figure*}
The vast Megaparsec cosmic web is undoubtedly one of the most striking examples of complex geometric patterns
found in nature, and the largest in terms of sheer size. In a great many physical systems, the spatial organization
of matter is one of the most readily observable manifestations of the forces and processes forming and moulding them.
Richly structured morphologies are usually the consequence of the complex and nonlinear collective action of basic
physical processes. Their morphology is therefore a rich source of information on the combination of physical forces at work and
the conditions from which the systems evolved. In many branches of science the study of geometric patterns
has therefore developed into a major industry for exploring and uncovering the underlying physics
\citep[see e.g.][]{balhaw1998}.
Computer simulations suggest that the observed cellular patterns are a prominent and natural
aspect of cosmic structure formation through gravitational instability \citep{peebles80},
the standard paradigm for the emergence of structure in our Universe. Structure in the Universe is the result
of the gravitational growth of tiny density perturbations and the accompanying tiny velocity perturbations in
the primordial Universe. Supported by an impressive body of evidence, primarily those of temperature fluctuations
in the cosmic microwave background \citep{smoot1992,bennett2003,spergel2006}, the character of the primordial random density
and velocity perturbation field is that of a {\it homogeneous and isotropic spatial Gaussian process}. Such
fields of primordial Gaussian perturbations in the gravitational potential are a natural product of an early
inflationary phase of our Universe.
The early linear phase of pure Gaussian density and velocity perturbations has been understood in great depth.
This knowledge has been exploited extensively in extracting a truly impressive score of global cosmological
parameters. Notwithstanding these successes, the more advanced phases of cosmic structure formation are still
in need of substantially better understanding. Mildly nonlinear structures do contain a wealth of information
on the emergence of cosmic structure at a stage where features start to emerge as individually recognizable objects.
The anisotropic filamentary and planar structures, the characteristic large underdense void regions and the
hierarchical clustering of matter marking the weblike spatial geometry of the Megaparsec matter distribution
are typical manifestations of mildly advanced gravitational structure formation. The existence of the intriguing
foamlike patterns representative of this early nonlinear phase of evolution was revealed by major
campaigns to map the galaxy distribution on Megaparsec scales, while ever larger computer N-body
simulations demonstrated that such matter distributions are indeed typical manifestations of the gravitational
clustering process. Nonetheless, despite the enormous progress, true insight and physical understanding have
remained limited. The lack of readily accessible symmetries and the strong nonlocal
influences are a major impediment towards the development of relevant analytical descriptions. The
hierarchical nature of the gravitational clustering process forms an additional complication.
While small structures materialize before they merge into large entities, each cosmic structure
consists of various levels of substructure so that instead of readily recognizing one characteristic
spatial scale we need to take into account a range of scales. Insight into the complex interplay of
emerging structures throughout the Universe and at a range of spatial scales has been provided
through a variety of analytical approximations. Computer simulations have provided us with a
good impression of the complexities of the emerging matter distribution, but for the analysis
of the resulting patterns and hierarchical substructure the toolbox of descriptive measures
is still largely heuristic, ad-hoc and often biased in character.
\begin{figure*}
\begin{center}
\vskip 0.5truecm
\mbox{\hskip -1.0truecm\includegraphics[height=18.0cm]{weyval.fig2.ps}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\vskip 0.0truecm
\caption{SDSS is the largest and most systematic sky survey in the history of astronomy. It is a
combination of a sky survey in 5 optical bands of 25\% of the celestial (northern) sphere. Each
image is recorded on CCDs in these 5 bands. On the basis of the images/colours and their brightness
a million galaxies are subsequently selected for spectroscopic follow-up. The total sky area covered
by SDSS is 8452 square degrees. Objects will be recorded to $m_{lim}=23.1$. In total the resulting
atlas will contain 10$^8$ stars, 10$^8$ galaxies and 10$^5$ quasars. Spectra are taken of around 10$^6$
galaxies, 10$^5$ quasars and 10$^5$ unusual stars (in our Galaxy). Of the 5 public data releases 4 have been
accomplished, i.e. 6670 square degrees of images are publicly available, along with 806,400 spectra.
In total, the sky survey is now completely done (107\%), the spectroscopic survey for 68\%. This image
is taken from a movie made by Subbarao, Surendran \& Landsberg (see website:
http://astro.uchicago.edu/cosmus/projects/sloangalaxies/). It depicts the resulting redshift
distribution after the 3rd public data release. It concerns 5282 square degrees and contained
528,640 spectra, of which 374,767 galaxies.}
\end{center}
\label{fig:sdssgaldist}
\end{figure*}
While cosmological theories describe the development of structure in terms of
continuous density and velocity fields, our knowledge stems mainly from discrete
samplings of these fields. In the observational reality galaxies are the main tracers of the
cosmic web and it is through measuring the redshift distribution of galaxies that we have
been able to map its structure. Likewise, simulations of the evolving cosmic matter
distribution are almost exclusively based upon N-body particle computer calculation,
involving a discrete representation of the features we seek to study.
Both the galaxy distribution as well as the particles in an N-body simulation are
examples of spatial point processes in that they are (1) {\it discretely sampled} and
(2) {\it irregularly distributed in space}. The translation of such {\it discretely sampled} and
{\it spatially irregularly distributed} objects into the related continuous fields is not
necessarily a trivial procedure. The standard approach is to use a filter to process the discrete
samples into a representative reconstruction of the underlying continuous field. It is the design
of the filter which determines the character of the reconstruction.
Astronomical applications are usually based upon a set of user-defined filter functions.
Nearly without exception the definition of these includes preconceived knowledge about the features
one is looking for. A telling example is the use of a Gaussian filter. This filter will
suppress the presence of any structures on a scale smaller than the characteristic filter
scale. Moreover, nearly always it is a spherically defined filter which tends to smooth out
any existing anisotropies. Such procedures may be justified in situations in which
we are particularly interested in objects of that size or in which physical
understanding suggests the smoothing scale to be of particular significance.
On the other hand, they may be crucially inept in situations of which we do not know
in advance the properties of the matter distribution. The gravitational clustering process in
the case of hierarchical cosmic structure formation scenarios is a particularly notorious
case. As it includes structures over a vast range of scales and displays a rich palette
of geometries and patterns, any filter design tends to involve a discrimination against
one or more -- and possibly interesting -- characteristics of the cosmic matter
distribution. It would therefore be preferable to define filter and reconstruction procedures
that are defined by the discrete point process itself.
A variety of procedures that seek to define and employ more {\bf ``natural''} filters
have been put forward in recent years. The scale of smoothing kernels can be adapted
to the particle number density, yielding a density field that retains to a large extent the
spatial information of the sampled density field. While such procedures may still have the
disadvantage of a rigid user-defined filter function and filter geometry, a more sophisticated,
versatile and particularly promising class of functions is that of wavelet-defined filters (see
contribution B.J.T.~Jones). These can be used to locate contributions on a particular scale, and
even to trace features of a given geometry. While its successes have been quite remarkable, the
success of the application is still dependent on the particular class of employed wavelets.
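The first of these ideas, smoothing kernels whose scale adapts to the local particle density, can be sketched in a few lines. The choice of a Gaussian kernel and of the distance to the $k$-th nearest neighbour as the per-particle bandwidth is ours, for illustration only:

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_density(points, grid, k=10):
    """Density at `grid` positions from `points`, using per-particle
    Gaussian kernels whose width h_i is the distance of particle i to
    its k-th nearest neighbour."""
    tree = cKDTree(points)
    # query returns the point itself at index 0, hence k + 1 neighbours
    h = tree.query(points, k=k + 1)[0][:, -1]
    d = points.shape[1]
    norm = (2.0 * np.pi) ** (d / 2.0) * h ** d     # Gaussian kernel norm
    r2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return (np.exp(-0.5 * r2 / h ** 2) / norm).sum(axis=1)

rng = np.random.default_rng(2)
pts = rng.random((500, 2))          # a uniform toy "galaxy" sample
grid = rng.random((100, 2))
rho = adaptive_density(pts, grid)
print(rho.mean())                   # of order 500; edge leakage reduces it
```

Kernels shrink in clusters and expand in voids, which is exactly the adaptivity the fixed Gaussian filter lacks; the filter function and geometry, however, are still imposed by the user.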
In this contribution we will describe in extenso the technique of the Delaunay Tessellation Field
Estimator. DTFE uses the Voronoi and Delaunay tessellations of a given spatial
point distribution as the basis of a natural, fully self-adaptive filter for
discretely sampled fields in which the Delaunay tessellations are used as multidimensional
interpolation intervals. The method has been defined, introduced and developed by \cite{schaapwey2000}
and forms an elaboration of the velocity interpolation scheme introduced by \cite{bernwey96}.
Our focus on DTFE will go along with a concentration on the potential of spatial {\bf tessellations} as
a means of estimating and interpolating discrete point samples into continuous field
reconstructions. In particular we will concentrate on the potential of {\it Voronoi}
and {\it Delaunay tessellations}. Both tessellations -- each other's {\it dual} -- are fundamental
concepts in the field of stochastic geometry. Exploiting the characteristics of these tessellations,
we will show that the DTFE technique is capable of
delineating the hierarchical and anisotropic nature of spatial point distributions and in outlining the presence
and shape of voidlike regions. The spatial structure of the
cosmic matter distribution is marked by precisely these features, and precisely this potential has been
the incentive for applying DTFE to the analysis of the cosmic large scale structure. DTFE exploits three particular properties of
Voronoi and Delaunay tessellations. The tessellations are very sensitive to the local point density, in
that the volume of the tessellation cells is a strong function of the local (physical)
density. The DTFE method uses this fact to define a local estimate of the density. Equally important is their
sensitivity to the local geometry of the point distribution. This allows them to trace anisotropic
features such as encountered in the cosmic web. Finally it exploits the
adaptive and minimum triangulation properties of Delaunay tessellations by using them as adaptive spatial
interpolation intervals for irregular point distributions. In this it is the first order version of the
{\it Natural Neighbour method} (NN method). The theoretical basis for the NN method, a generic smooth and
local higher order spatial interpolation technique developed by experts in the field of computational
geometry, has been worked out in great detail by \cite{sibson1980,sibson1981} and \cite{watson1992}.
As has been demonstrated by telling examples in geophysics \citep{braunsambridge1995} and solid mechanics
\citep{sukumarphd1998} NN methods hold tremendous potential for grid-independent analysis and computations.
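A minimal two-dimensional version of the estimator can be written directly on top of a standard Delaunay routine: following \cite{schaapwey2000}, the density at each sample point is $(d+1)$ divided by the total volume of its contiguous Delaunay simplices. The sketch below (using scipy's Delaunay triangulation) omits boundary corrections and the subsequent linear interpolation step:

```python
import math
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density(points):
    """DTFE point densities: (d + 1) divided by the total volume of the
    Delaunay simplices sharing each point.  No boundary corrections."""
    d = points.shape[1]
    tri = Delaunay(points)
    # simplex volumes from the determinant of the edge-vector matrix
    verts = points[tri.simplices]                # (n_simplices, d+1, d)
    edges = verts[:, 1:, :] - verts[:, :1, :]
    vol = np.abs(np.linalg.det(edges)) / math.factorial(d)
    # accumulate, for every point, the volume of its incident simplices
    w = np.zeros(len(points))
    np.add.at(w, tri.simplices, vol[:, None])
    return (d + 1) / w

rng = np.random.default_rng(3)
pts = rng.random((1000, 2))
rho = dtfe_density(pts)
print(np.median(rho))   # of order 1000 for a uniform unit-square sample
```

Because the cell volumes shrink automatically where points crowd together, no smoothing scale is ever imposed: the point process itself sets the resolution, which is the defining property of the method.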
Following the definition of the DTFE technique, we will present a systematic treatment of various virtues
of relevance to the cosmic matter distribution. The related performance of DTFE will be illustrated by means
of its success in analyzing computer simulations of cosmic structure formation as well as that of the galaxy
distribution in large-scale redshift surveys such as the 2dFGRS and SDSS surveys. Following the
definition and properties of DTFE, we will pay attention to extensions of the project. Higher order
Natural Neighbour renditions of density and velocity fields involve improvements in terms of
smoothness and discarding artefacts of the underlying tessellations.
Following the determination of the ``raw'' DTFE-produced density and velocity fields -- and/or other sampled
fields, e.g. temperature values in SPH simulations -- the true potential of DTFE is realized in the subsequent
stage in which the resulting DTFE fields get processed and analyzed. We will discuss an array of
techniques which have been developed to extract information on aspects of the Megaparsec matter distribution.
Straightforward processing involves simple filtering and the production of images from the reconstructed
field. Also, we will briefly discuss the use of the tessellation fields towards defining new measures for
the topology and geometry of the underlying matter distribution. The determination of the Minkowski
functionals of isodensity surfaces, following the SURFGEN formalism \citep{sahni1998,jsheth2003,shandarin2004}, can be
greatly facilitated on the basis of the Delaunay triangulation itself and the estimated DTFE density field.
Extending the topological information contained in the Minkowski functionals leads us to the concept
of {\it Betti} numbers and $\alpha${\it -shapes}, a filtration of the Delaunay complex of a dataset
\citep{edelsbrunner1983,edelsbrunner1994,edelsbrunner2002}. With the associated persistence diagrams the $\alpha$-shapes encode
the evolution of the Betti numbers. We have started to add to the arsenal of tools to quantify the patterns in the
Megaparsec matter distribution \citep{vegter2007,eldering2006}. Particularly interesting are the recently developed elaborate and
advanced techniques of {\it watershed void identification} \citep{platen2007} and
{\it Multiscale Morphology Filter} \citep{aragonphd2007,aragonmmf2007}. These methods enable the unbiased identification
and measurement of voids, walls, filaments and clusters in the galaxy distribution.
Preceding the discussion on the DTFE and related tessellation techniques, we will first have
to describe the structure, dynamics and formation of the {\it cosmic web}. The complex
structure of the cosmic web, and the potential information it does contain, has been
the ultimate reason behind the development of the DTFE.
\section{Introduction: The Cosmic Web}
\label{sec:1}
\vskip -0.25truecm
Macroscopic patterns in nature are often due to the collective action of basic, often even simple,
physical processes. These may yield a surprising array of complex and genuinely unique physical
manifestations. The macroscopic organization into complex spatial patterns is one of the most striking.
The rich morphology of such systems and patterns represents a major source of information on the underlying
physics. This has made them the subject of a major and promising area of inquiry.\\
\begin{figure*}
\begin{center}
\mbox{\hskip -0.5truecm\includegraphics[height=13.cm,angle=270.0]{weyval.fig3.ps}}
\vskip 0.25truecm
\caption{Equatorial view of the 2MASS galaxy catalog (6h RA at centre). The grey-scale represents the total
integrated flux along the line of sight -- the nearest (and therefore brightest) galaxies produce a vivid contrast
between the Local Supercluster (centre-left) and the more distant cosmic web. The dark band of the Milky Way clearly
demonstrates where the galaxy catalog becomes incomplete due to source confusion. Some well known large-scale structures
are indicated: P-P=Perseus-Pisces supercluster; H-R=Horologium-Reticulum supercluster; P-I=Pavo-Indus supercluster;
GA=`Great Attractor'; GC=Galactic Centre; S-C=Shapley Concentration; O-C=Ophiuchus Cluster; Virgo, Coma, and
Hercules=Virgo,Coma and Hercules superclusters. The Galactic `anti-centre' is front and centre, with the Orion and
Taurus Giant Molecular Clouds forming the dark circular band near the centre. Image courtesy of T.H. Jarrett.
Reproduced with permission from the Publications of the Astronomical Society of Australia 21(4): 396-403 (T.H. Jarrett).
Copyright Astronomical Society of Australia 2004. Published by CSIRO PUBLISHING, Melbourne Australia.}
\label{fig:2massgal}
\end{center}
\end{figure*}
\subsection{Galaxies and the Cosmic Web}
\noindent One of the most striking examples of a physical system displaying a salient geometrical morphology,
and the largest in terms of sheer size, is the Universe as a whole. The past few decades
have revealed that on scales of a few up to more than a hundred Megaparsec, the galaxies conglomerate
into intriguing cellular or foamlike patterns that pervade throughout the observable cosmos.
An initial hint of this cosmic web was seen in the view of the local Universe offered by the first CfA redshift slice
\citep{lapparent1986}. In recent years this view has been expanded dramatically to the present grand
vistas offered by the 100,000s of galaxies in the 2dF -- two-degree field -- Galaxy Redshift Survey, the 2dFGRS
\citep{colless2003} and SDSS \citep[e.g.][]{tegmark2004} galaxy redshift surveys.
\footnote{See {\tt http://www.mso.anu.edu.au/2dFGRS/} and {\tt http://www.sdss.org/}}
\begin{figure*}
\begin{center}
\vskip -0.25truecm
\mbox{\hskip -0.5truecm\includegraphics[width=13.0cm]{weyval.fig4.ps}}
\caption{The CfA Great Wall (bottom slice, Geller \& Huchra 1989) compared with the Sloan Great Wall
(top slice). Each represents the largest coherent structure in the galaxy redshift survey
in which it was detected, the CfA redshift survey and the SDSS redshift survey respectively. The (CfA)
Great Wall is a huge planar concentration of galaxies with dimensions that are estimated to be of
the order of $60h^{-1} \times 170h^{-1} \times 5h^{-1}$ Mpc. Truly mind-boggling is the Sloan Great Wall,
a huge conglomerate of clusters and galaxies. With a size in the order of $400h^{-1} {\rm Mpc}$ it is at least three times
larger than the CfA Great Wall. It remains to be seen whether it is a genuine physical structure or mainly a
stochastic arrangement and enhancement, at a distance coinciding with the survey's maximum in the radial selection
function. Image courtesy of M. Juri\'c, see also Gott et al. 2005. Reproduced by permission of the AAS.}
\vskip -0.5truecm
\label{fig:greatwall}
\end{center}
\end{figure*}
Galaxies are found in dense, compact clusters, in less dense filaments, and in sheetlike walls
which surround vast, almost empty regions called {\it voids}. This is most dramatically illustrated by the
2dFGRS and SDSS maps. The published maps of the distribution of nearly 250,000 galaxies in two narrow ``slice''
regions on the sky yielded by the 2dFGRS surveys reveal a far from homogeneous distribution (fig.~\ref{fig:2dfgaldist}).
Instead, we recognize a sponge-like arrangement, with galaxies aggregating in
striking geometric patterns such as prominent filaments, vaguely detectable walls and dense compact clusters
on the periphery of giant voids\footnote{It is important to realize that the interpretation of the
Megaparsec galaxy distribution is based upon the tacit yet common assumption that it forms a fair
reflection of the underlying matter distribution. While there are various indications that
this is indeed a reasonable approximation, as long as the intricate and complex
process of the formation of galaxies has not been properly understood this should be
considered as a plausible yet heuristic working hypothesis.}. The three-dimensional view emerging from the SDSS
redshift survey provides an even more convincing image of the intricate patterns defined by the cosmic
web (fig.~\ref{fig:sdssgaldist}). A careful assessment of the galaxy distribution in our immediate vicinity reveals how
we ourselves are embedded and surrounded by beautifully delineated and surprisingly sharply defined weblike structures. In
particular the all-sky nearby infrared 2MASS survey (see fig.~\ref{fig:2massgal}) provides us with a meticulously
clear view of the web surrounding us \citep{jarr2004}.
\begin{figure*}
\begin{center}
\vskip -0.5truecm
\mbox{\hskip -1.7truecm\includegraphics[width=12.5cm,angle=270.0]{weyval.fig5.ps}}
\caption{The cosmic web at high redshifts: prominent weblike features at a redshift $z\sim 3.1$ found
in a deep view obtained by the Subaru telescope. Large scale sky distribution of 283 strong Ly$\alpha$ emitters
(black filled circles), the Ly$\alpha$ absorbers (red filled circles) and
the extended Ly$\alpha$ emitters (blue open squares). The dashed lines indicate the high-density region of the strong
Ly$\alpha$ emitters. From Hayashino et al. 2004. Reproduced by permission of the AAS.}
\vskip -0.5truecm
\label{fig:filsubaru}
\end{center}
\end{figure*}
The cosmic web is outlined by galaxies populating huge {\it filamentary} and {\it wall-like}
structures, the sizes of the most conspicuous ones frequently exceeding 100h$^{-1}$ Mpc. The
closest and best studied of these massive anisotropic matter concentrations can be identified
with known supercluster complexes, enormous structures comprising one or more rich clusters of
galaxies and a plethora of more modestly sized clumps of galaxies. A prominent and representative
nearby specimen is the Perseus-Pisces supercluster, a 5$h^{-1}$ Mpc wide ridge of at least 50$h^{-1}$ Mpc
length, possibly extending out to a total length of 140$h^{-1}$ Mpc. While such giant elongated
structures are amongst the most conspicuous features of the Megaparsec matter distribution,
filamentary features are encountered over a range of scales and seem to represent a ubiquitous and
universal state of concentration of matter. In addition to the presence of such filaments the galaxy
distribution also contains vast planar assemblies. A striking local example is the {\it Great Wall}, a
huge planar concentration of galaxies with dimensions that are estimated to be of the order of
$60h^{-1} \times 170h^{-1} \times 5h^{-1}$ Mpc \citep{gellhuch1989}. In both the SDSS and 2dF surveys even
more impressive planar complexes were recognized, with dimensions substantially in excess of those of
the local Great Wall. At the moment, the so-called {\it SDSS Great Wall} appears to be the largest known
structure in the Universe (see fig.~\ref{fig:greatwall}). Gradually galaxy surveys are opening the
view onto the large scale distribution of galaxies at high redshifts. The Subaru survey has even managed
to map out a huge filamentary feature at a redshift of $z\sim 3.1$, perhaps the strongest evidence for the
existence of pronounced cosmic structure at early cosmic epochs (fig.~\ref{fig:filsubaru}).
\begin{figure*}
\begin{center}
\mbox{\hskip -0.45truecm\includegraphics[width=12.35cm]{weyval.fig6.ps}}
\caption{Comparison of optical and X-ray images of Coma cluster. Top: optical
image. Image courtesy of O. L\'opez-Cruz. Lower: X-ray image (ROSAT).}
\label{fig:coma}
\end{center}
\end{figure*}
Of utmost significance for our inquiry into the issue of cosmic structure formation is the fact that the prominent
structural components of the galaxy distribution -- clusters, filaments, walls and voids -- are not merely randomly
and independently scattered features. On the contrary, they have arranged themselves in a seemingly
highly organized and structured fashion, the {\it cosmic foam} or {\it cosmic web}. They are woven into an intriguing
{\it foamlike} tapestry that permeates the whole of the explored Universe. The vast under-populated {\it void} regions in
the galaxy distribution represent both contrasting as well as complementary spatial components to the surrounding {\it planar}
and {\it filamentary} density enhancements. At the intersections of the latter we often find the most prominent density
enhancements in our universe, the {\it clusters} of galaxies. \\
\subsection{Cosmic Nodes: Clusters}
\label{sec:clusters}
Within and around these anisotropic features we find a variety of density condensations, ranging from
modest groups of a few galaxies up to massive compact {\it galaxy clusters}. The latter stand out
as the most massive and most recently fully collapsed and virialized objects in the Universe.
Approximately $4\%$ of the mass in the Universe is assembled in rich clusters. They may be regarded
as a particular population of cosmic structure beacons as they typically concentrate near the interstices
of the cosmic web, {\it nodes} forming a recognizable tracer of the cosmic matter distribution
\citep{borgguzz2001}. Clusters not only function as wonderful tracers of structure
over scales of tens up to hundreds of Megaparsecs (fig.~\ref{fig:reflex}) but also as useful probes for
precision cosmology on the basis of their unique physical properties.
\begin{figure*}
\begin{center}
\vskip -0.25truecm
\mbox{\hskip -0.0truecm\includegraphics[width=11.8cm]{weyval.fig7.ps}}
\caption{The spatial cluster distribution. The full volume of the X-ray REFLEX
cluster survey within a distance of 600h$^{-1}$\hbox{Mpc}. The
REFLEX galaxy cluster catalogue (B\"ohringer et al. 2001),
contains all clusters brighter than an X-ray flux of $3\times 10^{-12} \hbox{erg}
\hbox{s}^{-1} \hbox{cm}^{-2}$ over a large part of the southern sky. The missing part of the
hemisphere delineates the region highly obscured by the Galaxy.
Image courtesy of S. Borgani \& L. Guzzo, see also Borgani \& Guzzo (2001). Reproduced by permission
of Nature.}
\vskip -0.25truecm
\label{fig:reflex}
\end{center}
\end{figure*}
The richest clusters contain many thousands of galaxies within a relatively small volume of only a few
Megaparsec size. For instance, in the nearby Virgo and Coma clusters more than a thousand galaxies have
been identified within a radius of a mere 1.5$h^{-1}$ Mpc around their core (see fig.~\ref{fig:coma}).
Clusters are first and foremost dense concentrations of dark matter, representing overdensities
$\Delta \sim 1000$. In a sense galaxies and stars only form a minor constituent of clusters.
The cluster galaxies are trapped and embedded in the deep gravitational wells of the dark matter.
These are identified as a major source of X-ray emission, emerging from the diffuse extremely hot
gas trapped in them. As it fell into the potential well, the gas was shock-heated to temperatures in
excess of $T>10^7$ K, which results in intense X-ray emission due to the bremsstrahlung radiated by the
electrons in the highly ionized intracluster gas. In a sense clusters may be seen as hot balls of X-ray
radiating gas. The amount of intracluster gas in the cluster is comparable to that locked into stars,
and amounts to $\Omega_{\rm ICM} \sim 0.0018$ \citep{fukupeeb2004}. The X-ray emission represents a particularly useful signature,
an objective and clean measure of the potential well depth, directly related to the total mass of the cluster
\citep[see e.g.][]{reiprich1999}. Through their X-ray brightness they can be seen out to large cosmic depths.
The deep gravitational dark matter wells also strongly affect the path of passing photons. While the
resulting strong lensing arcs form a spectacular manifestation, it has been the more moderate distortion of
background galaxy images in the weak lensing regime \citep{kaiser1992,kaisersq1993} which has opened up
a new window onto the Universe. The latter has provided a direct probe of the dark matter content of clusters
and the large scale universe \citep[for a review see e.g.][]{mellier1999,refregier2003}.
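The statement that the X-ray emission provides a clean measure of the cluster potential well may be illustrated by the standard textbook relation for a spherically symmetric intracluster gas in hydrostatic equilibrium: the total mass within radius $r$ follows directly from the observed gas density and temperature profiles,
\begin{equation}
M(<r)\,=\,-\,\frac{k_B\,T(r)\,r}{G\,\mu m_p}\,
\left(\frac{{\rm d}\ln\rho_{\rm gas}}{{\rm d}\ln r}\,+\,\frac{{\rm d}\ln T}{{\rm d}\ln r}\right)\,,
\end{equation}
with $\mu m_p$ the mean molecular weight of the ionized intracluster gas.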
\begin{figure*}
\begin{center}
\vskip 0.25truecm
\mbox{\hskip -0.25truecm\includegraphics[height=12.5cm,angle=90.0]{weyval.fig8.ps}}
\vskip 0.25truecm
\caption{A region of the 6dF redshift survey marked by the presence of various major voids. The image concerns a
3D rendering of the galaxy distribution in a 1000 km/s thick slice along the supergalactic SGX direction,
at SGX=-2500 km/s. Image courtesy of A. Fairall.}
\label{fig:6dfvoid}
\end{center}
\end{figure*}
\subsection{Cosmic Depressions: the Voids}
\label{sec:webvoid}
Completing this cosmic inventory are the large {\it voids}, enormous regions with sizes in the range of
$20-50h^{-1}$ Mpc that are practically devoid of any galaxy, usually roundish in shape and occupying the major
share of space in the Universe. Forming an essential ingredient of the {\it Cosmic Web}, they are surrounded by elongated
filaments, sheetlike walls and dense compact clusters.
Voids have been known as a feature of galaxy surveys since the first surveys were compiled \citep{chincar75,gregthomp78,einasto1980}. Following the
discovery by \cite{kirshner1981,kirshner1987} of the most outstanding specimen, the Bo\"otes void, a hint of their central position within a weblike
arrangement came with the first CfA redshift slice \citep{lapparent1986}. This view has been expanded dramatically as maps of the spatial
distribution of hundreds of thousands of galaxies in the 2dFGRS \citep{colless2003} and SDSS redshift survey \citep{abaz2003} became available,
recently supplemented with a high-resolution study of voids in the nearby Universe based upon the 6dF survey \citep{heathjones2004,fairall2007}.
The 2dFGRS and SDSS maps of figs.~\ref{fig:2dfgaldist} and ~\ref{fig:sdssgaldist}, and the void map of the 6dF survey in
fig.~\ref{fig:6dfvoid} form telling illustrations.
Voids in the galaxy distribution account for about 95\% of the total volume \citep[see][]{kauffair1991,elad1996,elad1997,hoyvog2002,plionbas2002,
rojas2005}. The typical sizes of voids in the galaxy distribution depend on the galaxy population used to define the voids. Voids defined by
galaxies brighter than a typical $L_*$ galaxy tend to have diameters of order $10-20h^{-1}$Mpc, but voids associated with rare luminous
galaxies can be considerably larger; diameters in the range of $20h^{-1}-50h^{-1}$Mpc are not uncommon \citep[e.g.][]{hoyvog2002,plionbas2002}.
These large sizes mean that only now are we beginning to probe a sufficiently large cosmological volume to allow meaningful void statistics.
As a result, the observations are presently ahead of the theory.
\section{Cosmic Structure Formation:\\
\ \ \ \ from Primordial Noise to Cosmic Web}
The fundamental cosmological importance of the {\it Cosmic Web} is that it comprises
features on a typical scale of tens of Megaparsec, scales at which the Universe still
resides in a state of moderate dynamical evolution. Structures have only freshly emerged
from the almost homogeneous pristine Universe and have not yet evolved beyond
recognition. Therefore they still retain a direct link to the matter distribution in
the primordial Universe and thus still contain a wealth of direct information on
the cosmic structure formation process.
In our exploration of the cosmic web and the development of appropriate tools towards the
analysis of its structure, morphology and dynamics we start from the assumption that
the cosmic web is traced by a population of discrete objects, either galaxies in the real
observational world or particles in that of computer simulations. The key issue will be to
reconstruct the underlying continuous density and velocity field, retaining the geometry and
morphology of the weblike structures in all its detail.
In this we will pursue the view that filaments are the basic elements of the cosmic web,
the key features around which most matter will gradually assemble and the channels along which
matter is transported towards the highest density knots within the network, the clusters of galaxies.
Likewise we will emphasize the crucial role of the voids -- the large underdense and expanding
regions occupying most of space -- in the spatial organization of the various structural elements
in the cosmic web. One might even argue that it is the voids that should be seen as the key
ingredients of the cosmic matter distribution. This view forms the basis for geometrical
models of the Megaparsec scale matter distribution, with the Voronoi model as its main
representative \citep[see e.g][]{weygaertphd1991,weygaert2002,weygaert2007a,weygaert2007b}\footnote{These Voronoi
models are spatial models for cellular/weblike galaxy distributions, not to be confused with the
application of Voronoi tessellations in DTFE and the tessellation based methods towards spatial
interpolation and reconstruction}.
\begin{figure*}
\vskip 0.25truecm
\center
\mbox{\hskip -0.25truecm\includegraphics[height=13.5cm,angle=90.0]{weyval.fig9.ps}}
\end{figure*}
\begin{figure*}
\caption{The Cosmic Web in a box: a set of four time slices from the Millennium simulation
of the $\Lambda$CDM model. The frames show the projected (dark) matter distribution in slices of
thickness $15h^{-1} {\rm Mpc}$, extracted at $z=8.55, z=5.72, z=1.39$ and $z=0$. These redshifts correspond
to cosmic times of 600 Myr, 1 Gyr, 4.7 Gyr and 13.6 Gyr after the Big Bang. The four frames have a size
of $125h^{-1} {\rm Mpc}$. The evolving mass distribution reveals the major
characteristics of gravitational clustering: the formation of an intricate filamentary web, the
hierarchical buildup of ever more massive mass concentrations and the evacuation of large underdense
voids. Image courtesy of V. Springel \& Virgo consortium, also see Springel et al. 2005.}
\label{fig:millennium}
\end{figure*}
\subsection{Gravitational Instability}
The standard paradigm of cosmic structure formation is that of gravitational instability
scenarios \citep{peebles80,zeld1970}. Structure in the Universe is the result of the gravitational growth of
tiny density perturbations and the accompanying tiny velocity perturbations in the primordial Universe.
Supported by an impressive body of evidence, primarily that of temperature fluctuations in the cosmic
microwave background \citep{smoot1992,bennett2003,spergel2006}, the character of the primordial random density and
velocity perturbation field is that of a {\it homogeneous and isotropic spatial Gaussian process}.
Such fields of primordial Gaussian perturbations in the gravitational potential are a natural product
of an early inflationary phase of our Universe.
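The statistical character of such a field is fully specified by its power spectrum, which makes generating a realization straightforward: draw white Gaussian noise in Fourier space and scale it by $\sqrt{P(k)}$. A minimal 2D sketch (Python; the power-law spectrum and all parameter values are illustrative assumptions, not taken from the text):

```python
import numpy as np

def gaussian_random_field(n=128, boxsize=100.0, ns=1.0, seed=0):
    """Generate a 2D Gaussian random field realization with a power-law
    power spectrum P(k) ~ k^ns (illustrative; spectral index and
    normalization are arbitrary choices)."""
    rng = np.random.default_rng(seed)
    # Fourier-space grid of wavenumbers
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)
    # power spectrum amplitude; leave out the k=0 (mean) mode
    power = np.zeros_like(kmag)
    nonzero = kmag > 0
    power[nonzero] = kmag[nonzero] ** ns
    # white Gaussian noise in Fourier space, scaled by sqrt(P(k))
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    delta_k = noise * np.sqrt(power)
    # inverse transform; take the real part to obtain a real field
    delta = np.fft.ifft2(delta_k).real
    return delta - delta.mean()
```

Larger values of the spectral index concentrate fluctuation power on small scales, directly shaping the texture of the resulting field.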
The formation and moulding of structure is a result of the gravitational growth of
the primordial density and velocity perturbations. Gravity in slightly overdense regions
will be somewhat stronger than the global average gravitational deceleration, as will be
the influence they exert over their immediate surroundings. In these regions the
slow-down of the initial cosmic expansion is correspondingly stronger and, when
the region is sufficiently overdense it may even come to a halt, turn around
and start to contract. As long as pressure forces are not sufficient to counteract
the infall, the overdensity will grow without bound, assemble more and more matter
by accretion from its surroundings, and ultimately fully collapse to form
a gravitationally bound and virialized object. In this way the primordial overdensity
finally emerges as an individual recognizable denizen of our Universe, its precise
nature (galaxy, cluster, etc.) and physical conditions determined by the scale, mass
and surroundings of the initial fluctuation.
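In the linear regime this growth is governed by the standard perturbation equation for the density contrast $\delta = (\rho-\bar{\rho})/\bar{\rho}$,
\begin{equation}
\frac{\partial^2 \delta}{\partial t^2}\,+\,2\,H(t)\,\frac{\partial \delta}{\partial t}\,=\,4\pi G\,\bar{\rho}\,\delta\,,
\end{equation}
whose growing-mode solution in an Einstein-de Sitter universe is $\delta \propto a(t) \propto t^{2/3}$: overdensities grow in proportion to the expansion of the Universe until they reach the nonlinear regime.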
\subsection{Nonlinear Clustering}
Once the gravitational clustering process has progressed beyond the initial linear growth
phase we see the emergence of complex patterns and structures in the density field.
Highly illustrative of the intricacies of the structure formation process is the
state-of-the-art N-body computer simulation, the Millennium simulation
by \citet{springmillen2005}. Figure~\ref{fig:millennium} shows four
time frames out of this massive $10^{10}$ particle simulation of a $\Lambda$CDM matter
distribution in a $500h^{-1} {\rm Mpc}$ box. The time frames correspond to redshifts $z=8.55$,
$z=5.72$, $z=1.39$ and $z=0$ (i.e. at epochs 600 Myr, 1 Gyr, 4.7 Gyr and 13.6 Gyr after
the Big Bang). The earliest time frame is close to that of the condensation of the
first stars and galaxies at the end of the {\it Dark Ages} and the reionization of the
gaseous IGM by their radiation. The frames contain the Dark Matter
particle distribution in a $15h^{-1} {\rm Mpc}$ thick slice of a $125h^{-1} {\rm Mpc}$ region centered on
the central massive cluster of the simulation.
The four frames provide a beautiful picture of the unfolding Cosmic Web, starting from a
field of mildly undulating density fluctuations towards that of a pronounced and intricate
filigree of filamentary features, dented by dense compact clumps at the nodes of the network.
Clearly visible is the hierarchical nature in which the filamentary network builds up. At first the network consists of
a multitude of small-scale edges, which quickly merge into a few massive elongated
channels.
Large N-body simulations like the Millennium simulation and the many others currently
available all reveal a few ``universal'' characteristics of the (mildly) nonlinear
cosmic matter distribution. Three key characteristics of the Megaparsec universe
stand out:\\
\begin{center}
{\vbox{\Large{
\begin{itemize}
\item[$\bullet$] Hierarchical clustering
\item[$\bullet$] Weblike spatial geometry
\item[$\bullet$] Voids
\end{itemize}}}}
\end{center}
\ \\
{\it Hierarchical} clustering implies that the first objects to condense are small and that
ever larger structures form through the gradual merging of smaller structures. Usually an object
forms through the accretion of all matter and the fusion of all substructure within its realm, including
that of the small-scale objects which had condensed out at an earlier stage. The {\it second} fundamental
aspect is that of {\it anisotropic gravitational collapse}. Aspherical overdensities, on any scale and in
any scenario, will contract such that they become increasingly anisotropic. At first they turn into a flattened
{\it pancake}, rapidly followed by contraction into an elongated filament; possibly, depending on scale,
total collapse into a galaxy or a cluster may follow. This tendency to collapse anisotropically
finds its origin in the intrinsic primordial flattening of the overdensity, augmented by the anisotropy of
the gravitational force field induced by the external matter distribution (i.e. by tidal forces). It is
evidently the major agent in shaping the weblike cosmic geometry. The {\it third} manifest feature of
the Megaparsec Universe is the marked and dominant presence of large roundish underdense regions, the
{\it voids}. They form in and around density troughs in the primordial density field. Because of their lower interior gravity
they will expand faster than the rest of the Universe, while their internal matter density rapidly decreases as
matter evacuates their interior. They evolve into nearly
empty regions with sharply defined boundaries marked by filaments and walls. Their essential role in the
organization of the cosmic matter distribution was recognized soon after their discovery.
Recently, their emergence and evolution has been explained within the context of hierarchical gravitational scenarios
\citep{shethwey2004}.
The challenge for any viable analysis tool is to trace, highlight and measure each of these aspects
of the cosmic web. Ideally it should be able to do so without resorting to user-defined parameters or functions,
and without affecting any of the other essential characteristics. We will argue in this contribution that
the {\it DTFE} method, a linear version of {\it natural neighbour} interpolation, is indeed able to deal
with all three aspects (see fig.~\ref{fig:hierarchydtfe}).
\section{Spatial Structure and Pattern Analysis}
\label{sec:statspatial}
Many attempts to describe, let alone identify, the features and components of the Cosmic Web have been of a mainly heuristic nature.
There is a variety of statistical measures characterizing specific aspects of the large scale matter distribution \citep[for an
extensive review see][]{martinez2002}. For completeness and comparison, we list briefly a selection of methods for structure characterisation
and finding. It is perhaps interesting to note two things about this list:
\begin{enumerate}
\item[a)] each of the methods tends to be specific to one particular structural entity
\item[b)] there are no explicit wall-finders.
\end{enumerate}
This emphasises an important aspect of our Scale Space approach: it provides a uniform approach to finding Blobs, Filaments and Walls
as individual objects that can be catalogued and studied.
\subsection{Structure from higher moments}
The clustering of galaxies and matter is most commonly described in terms of a hierarchy of correlation functions \citep{peebles80}.
The two-point correlation function -- and the equivalent power spectrum, its Fourier transform \citep{peacockdodss1994,tegmark2004}
-- remains the mainstay of cosmological clustering analysis and has a solid physical basis. However, the nontrivial and nonlinear
patterns of the cosmic web are mostly a result of the phase correlations in the cosmic matter distribution \citep{rydengram1991,
chiang2000,pcoles2000}. While this information is contained in the moments of cell counts \citep{peebles80,lapparent1991,gaztanaga1992} and,
more formally so, in the full hierarchy of M-point correlation functions $\xi_M$, except for the lowest orders their measurement has proven
to be practically infeasible \citep{peebles80,szapudi1998,jones2005}. The problem remains that these higher order correlation functions
do not readily translate into a characterization of identifiable features in the cosmic web.
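To make the pair-counting idea concrete, a minimal sketch (Python; the sample sizes, binning and the simple ``natural'' estimator are illustrative choices, not prescriptions from the text) estimates $\xi(r)$ for a 3D point sample by comparing data pair counts with those of a random comparison catalogue:

```python
import numpy as np

def xi_natural(data, nrandom=2000, bins=None, boxsize=1.0, seed=1):
    """Estimate the two-point correlation function of a 3D point set
    with the simple 'natural' estimator xi = DD/RR - 1 (illustrative;
    real analyses use e.g. the Landy-Szalay estimator and account
    for the survey geometry and selection function)."""
    rng = np.random.default_rng(seed)
    if bins is None:
        bins = np.linspace(0.05, 0.3, 8) * boxsize
    rand = rng.uniform(0, boxsize, size=(nrandom, 3))

    def paircounts(p):
        # all unique pair separations (brute force; fine for small samples)
        d = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
        iu = np.triu_indices(len(p), k=1)
        return np.histogram(d[iu], bins=bins)[0].astype(float)

    dd = paircounts(data)
    rr = paircounts(rand)
    # normalize by the total number of pairs in each catalogue
    dd /= len(data) * (len(data) - 1) / 2
    rr /= nrandom * (nrandom - 1) / 2
    xi = np.zeros_like(dd)
    mask = rr > 0
    xi[mask] = dd[mask] / rr[mask] - 1.0
    return xi
```

Applied to a uniform Poisson sample the estimator scatters around zero, as expected for an unclustered distribution.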
The Void Probability Function \citep{white1979,lachieze1992} provides a characterisation of the ``voidness'' of the Universe in terms of a function
that combines information from many higher moments of the point distribution. But, again, it does not provide any identification of
individual voids.
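The definition of the Void Probability Function translates directly into a Monte-Carlo estimate: place spheres at random and record the fraction that are empty. The sketch below (Python; periodic boundary conditions and all parameter values are illustrative assumptions) does exactly that:

```python
import numpy as np

def void_probability(points, radii, ntrial=2000, boxsize=1.0, seed=3):
    """Monte-Carlo estimate of the Void Probability Function P_0(r):
    the chance that a randomly placed sphere of radius r contains no
    points (illustrative sketch, assuming a periodic box)."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(0, boxsize, size=(ntrial, 3))
    # periodic minimum-image separations of every trial centre to every point
    diff = centres[:, None, :] - points[None, :, :]
    diff -= boxsize * np.round(diff / boxsize)
    nearest = np.sqrt((diff ** 2).sum(axis=-1).min(axis=1))
    # a sphere of radius r is empty iff the nearest point lies beyond r
    return np.array([(nearest > r).mean() for r in radii])
```

For a Poisson sample of number density $n$ the result should recover the analytic prediction $P_0(r) = \exp[-n\,(4\pi/3)\,r^3]$; clustering shows up as a deviation from this baseline.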
\subsection{Topological methods}
The shape of the local matter distribution may be traced on the basis of an analysis of the statistical properties of its inertial moments
\citep{babul1992,vishniac1995,basilakos2001}. These concepts are closely related to the full characterization of the topology of the matter
distribution in terms of four Minkowski functionals \citep{mecke1994,schmalzing1999}. They are solidly based on the theory of spatial statistics
and also have the great advantage of being known analytically in the case of Gaussian random fields. In particular, the \textit{genus} of the
density field has received substantial attention as a strongly discriminating factor between intrinsically different spatial patterns
\citep{gott1986,hoyle2002}.
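The discriminating power of the genus rests on the fact that it has a known analytic form in the Gaussian case: for a Gaussian random field the genus per unit volume of the excursion set at threshold $\nu$ (the density threshold in units of the field's dispersion $\sigma$) follows the standard relation
\begin{equation}
g(\nu)\,=\,A\,\left(1-\nu^2\right)\,e^{-\nu^2/2}\,,
\end{equation}
with the amplitude $A$ set by the second moment of the power spectrum. Deviations of the measured genus curve from this characteristic shape therefore signal non-Gaussianity of the density field.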
The Minkowski functionals provide global characterisations of structure. An attempt to extend their scope towards providing locally defined
topological measures of the density field has been developed in the SURFGEN project defined by Sahni and Shandarin and their coworkers
\citep{sahni1998,jsheth2003,shandarin2004}. The main problem remains the user-defined, and thus potentially biased, nature of the continuous
density field inferred from the sample of discrete objects. The usual filtering techniques suppress substructure on a scale smaller than the
filter radius, introduce artificial topological features in sparsely sampled regions and diminish the flattened or elongated morphology of the
spatial patterns. Quite possibly the introduction of more advanced geometry based methods to trace the density field may prove a major advance
towards solving this problem. \citet{martinez2005} have generalized the use of Minkowski Functionals by calculating their values in variously
smoothed volume limited subsamples of the 2dF catalogue.
\subsection{Cluster and Filament finding}
In the context of analyzing distributions of galaxies we can think of cluster finding algorithms. There we might define a cluster as an
aggregate of neighbouring galaxies sharing some localised part of velocity space. Algorithms like HOP attempt to do this. However, there
are always issues arising such as how to deal with substructure: that perhaps comes down to the definition of what a cluster is. Nearly
always coherent structures are identified on the basis of particle positions alone. Velocity space data is often not used since there is
no prior prejudice as to what the velocity space should look like.
The connectedness of elongated supercluster structures in the cosmic matter distribution was first probed by means of percolation analysis,
introduced and emphasized by Zel'dovich and coworkers \citep{zeldovich1982}, while a related graph-theoretical construct, the minimum
spanning tree
of the galaxy distribution, was extensively probed and analysed by Bhavsar and collaborators \citep{barrow1985,colberg2007} in an attempt
to develop an objective measure of filamentarity.
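The minimum spanning tree underlying such filamentarity measures is straightforward to construct. A brute-force sketch of Prim's algorithm on the complete Euclidean graph (illustrative only; analyses of large catalogues use far more efficient implementations) reads:

```python
import numpy as np

def minimum_spanning_tree(points):
    """Prim's algorithm on the complete Euclidean graph of a point set,
    returning the MST edges as (i, j) index pairs (brute-force sketch
    suitable only for small samples)."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    # best[j] = cheapest distance from the growing tree to vertex j
    best = d[0].copy()
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        # attach the cheapest vertex not yet in the tree
        j = np.argmin(np.where(in_tree, np.inf, best))
        edges.append((parent[j], j))
        in_tree[j] = True
        closer = d[j] < best
        best[closer] = d[j][closer]
        parent[closer] = j
    return edges
```

Statistics of the resulting edge lengths and branchings provide simple quantitative measures of the filamentarity of a point distribution.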
Finding filaments joining neighbouring clusters has been tackled, using quite different techniques, by \citet{colberg2005} and by
\citet{pimbblet2005}. More general filament finders have been put forward by a number of authors. Skeleton analysis of the density field
\citep{novikov2006} describes continuous density fields by relating density field gradients to density maxima and saddle points. This is
computationally intensive but quite effective, though it is sensitive to artefacts in the reconstruction of the continuous density field.
\citet{stoica2005} use a generalization of the classical Candy model to locate and catalogue filaments in galaxy surveys. This
approach has the advantage that it works directly with the original point process and does not require the creation of a continuous density
field. However, it is very computationally intensive.
A recently developed method, the Multiscale Morphology Filter \citep{aragonphd2007,aragonmmf2007} (see sect.~\ref{sec:mmf}), seeks to identify different
morphological features over a range of scales. Scale space analysis looks for structures of a mathematically specified type in a hierarchical,
scale independent, manner. It is presumed that the specific structural characteristic is quantified by some appropriate parameter (e.g.: density,
eccentricity, direction, curvature components). The data is filtered to produce a hierarchy of maps having different resolutions, and at each
point, the dominant parameter value is selected from the hierarchy to construct the scale independent map. While this sounds relatively
straightforward, in practice a number
of things are required to execute the process. There must be an unambiguous definition of the structure-defining characteristic. The
implementation of \cite{aragonmmf2007} uses the principal components of the local curvature of the density field at each point as a morphology
type indicator. This requires that the density be defined at all points of a grid, and so there must be a method for going from a discrete
point set to a grid sampled continuous density field. This is done using the DTFE methodology since that does minimal damage to the structural
morphology of the density field (see sect.~\ref{sec:dtfe}).
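The morphology indicator at the heart of this approach can be sketched in two dimensions (Python; the actual MMF operates on 3D DTFE-reconstructed fields over a whole hierarchy of smoothing scales, so this is merely an illustration of the Hessian-eigenvalue criterion with illustrative parameter values):

```python
import numpy as np

def morphology_eigenvalues(density, sigma=2.0):
    """Eigenvalues of the Hessian of a Gaussian-smoothed 2D density
    field, the local curvature indicator used by scale-space methods:
    two strongly negative eigenvalues flag a blob, one negative and
    one near zero flag a filament (2D sketch; sigma in grid cells)."""
    n = density.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Gaussian smoothing and differentiation combined in Fourier space
    f = np.fft.fft2(density) * np.exp(-0.5 * sigma**2 * (kx**2 + ky**2))
    hxx = np.fft.ifft2(-kx * kx * f).real
    hyy = np.fft.ifft2(-ky * ky * f).real
    hxy = np.fft.ifft2(-kx * ky * f).real
    # closed-form eigenvalues of the symmetric 2x2 Hessian
    tr = hxx + hyy
    disc = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy**2)
    return 0.5 * tr - disc, 0.5 * tr + disc  # lam1 <= lam2
```

Evaluated on a synthetic straight ridge, the most negative eigenvalue points across the ridge while the other stays near zero, the signature a filament filter keys on.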
\subsection{Void Finding}
Voids are distinctive and striking features of the cosmic web, yet finding them systematically in surveys and simulations has proved rather
difficult. There have been extensive searches for voids in galaxy catalogues and in numerical simulations (see sect.~\ref{sec:webvoid}).
Identifying voids and tracing their outline within the complex spatial geometry of the Cosmic Web appears to be a nontrivial
issue. The fact that voids are almost empty of galaxies means that the sampling density plays a key role in determining what is or is
not a void \citep{schmidt2001}. There is no unequivocal definition of what a void is and as a result there is considerable disagreement
on the precise outline of such a region \citep[see e.g.][]{shandfeld2006}.
Moreover, void finders are often predicated on building void structures out of cubic cells \citep{kauffair1991} or out of spheres
\citep[e.g.][]{patiri2006a}. Such methods attempt to synthesize voids from the intersection of cubic or spherical
elements and do so with varying degrees of success. Because of the vague and different definitions, and the range of different
interests in voids, there is a plethora of void identification procedures \citep{kauffair1991,elad1997,aikmah1998,hoyvog2002,arbmul2002,
plionbas2002,patiri2006a,colberg2005b,shandfeld2006}. The ``voidfinder'' algorithm of \cite{elad1997} has been at the basis of most
voidfinding methods. However, this successful approach will not be able to analyze complex spatial configurations in which voids may
have arbitrary shapes and contain a range and variety of substructures. The \textit{Void Finder Comparison Project} of \citet{colpear2007}
will clarify many of these issues.
The watershed-based WVF algorithm of \citet[][see sect.~\ref{sec:watershed}]{platen2007} aims to avoid issues of both sampling density
and shape. This new and objective voidfinding formalism has been specifically designed to dissect in a self-consistent
manner the multiscale character of the void network and the weblike features marking its boundaries. The {\it Watershed Void Finder}
(WVF) is based on the watershed algorithm \citep{beulan1979,meyerbeucher1990,beumey1993}. This is a concept from the field of
{\it mathematical morphology} and {\it image analysis}. The WVF is defined with respect to the DTFE density field of a discrete point
distribution \citep{schaapwey2000},
assuring optimal sensitivity to the morphology of spatial structures and an unbiased probe over the full range of substructure in the mass
distribution. Because the WVF void finder does not impose a priori constraints on the size, morphology and shape of voids, it has the
potential to analyze the intricacies of an evolving void hierarchy.
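The basin-assignment idea at the heart of watershed methods can be made concrete with a toy sketch. The fragment below (a hypothetical 3$\times$7 density grid, invented for illustration) assigns each grid cell to the basin of the local minimum reached by steepest descent; the actual WVF operates on DTFE density fields and involves additional steps (smoothing, boundary detection) not shown here.

```python
# Minimal sketch of watershed basin assignment on a regular 2-D density
# grid: each cell is assigned to the basin of the local minimum reached
# by steepest descent.  Toy illustration of the watershed idea only;
# plateaus and ties are resolved naively by scan order.

def watershed_basins(density):
    ny, nx = len(density), len(density[0])

    def lowest_neighbour(i, j):
        # Return the neighbouring cell (including itself) with lowest density.
        best = (i, j)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    if density[ni][nj] < density[best[0]][best[1]]:
                        best = (ni, nj)
        return best

    labels = {}
    for i in range(ny):
        for j in range(nx):
            # Follow the steepest-descent path until a local minimum is reached.
            cell = (i, j)
            path = [cell]
            while True:
                nxt = lowest_neighbour(*cell)
                if nxt == cell:          # local minimum: basin "seed"
                    break
                cell = nxt
                path.append(cell)
            for p in path:
                labels[p] = cell         # all cells on the path share the basin
    return labels

# Hypothetical field: two density depressions separated by a ridge.
field = [[2.0, 1.0, 2.0, 3.0, 2.0, 1.0, 2.0],
         [2.0, 0.5, 2.0, 3.0, 2.0, 0.4, 2.0],
         [2.0, 1.0, 2.0, 3.0, 2.0, 1.0, 2.0]]
basins = watershed_basins(field)
print(len(set(basins.values())))  # → 2 distinct basins ("voids")
```

The ridge of high density separates the two depressions, so each cell drains to one of the two local minima.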
\section{Structural Reconstruction}
In the real world it is impossible to obtain data values at every desired point of space. Indeed,
astronomical observations, physical experiments and computer simulations often produce discretely sampled data
sets in two, three or more dimensions. This may involve the value of some physical quantity measured at an irregularly
distributed set of reference points. Also cosmological theories describe the development of structure in terms
of continuous (dark matter) density and velocity fields while to a large extent our knowledge stems from a
discrete sampling of these fields.
In the observational reality galaxies are the main tracers of the cosmic web and it is mainly through
measuring of the redshift distribution of galaxies that we have been able to map its structure. Another
example is that of the related study of cosmic flows in the nearby Universe, based upon the measured peculiar
velocities of a sample of galaxies located within this cosmic volume. Likewise, simulations of the evolving
cosmic matter distribution are almost exclusively based upon N-body particle computer calculations, involving
a discrete representation of the features we seek to study. Both the galaxy distribution as well as the particles
in an N-body simulation are examples of {\it spatial point processes} in that they are
\begin{itemize}
\item[-] {\it discretely sampled}
\item[-] have an {\it irregular spatial distribution}.
\end{itemize}
\noindent
The principal task for any formalism seeking to process the discretely sampled field is to optimally retain or
extract the required information. Dependent on the purpose of a study, various different strategies may be
followed. One strategy is to distill various statistical measures, or other sufficiently descriptive cosmological
measures, characterizing specific aspects of the large scale matter distribution
\citep[see][also see sect.~\ref{sec:statspatial}]{martinez2002}.
In essence this involves the compression of the available information into a restricted set of parameters or functions, with the
intention to compare or relate these to theoretical predictions. The alternative is to translate the {\it discretely sampled} and
{\it spatially irregularly distributed} objects into related continuous fields. While demanding in itself, it is complicated by the
highly inhomogeneous nature of the sample point distribution. The translation is a far from trivial procedure. It
forms the subject of an extensive literature in computer science, visualization and applied sciences. An interesting
comparison and application of a few different techniques is shown in fig.~\ref{fig:landscapint}. It shows how
some methods to be discussed in the following sections fare when applied to the reconstruction of Martian
landscapes from measurements by the MOLA instrument on the Mars Global Surveyor \citep{abramov2004}.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.15truecm\includegraphics[height=19.5cm]{weyval.fig10.ps}}
\end{center}
\end{figure*}
\begin{figure*}[t]
\caption{DTFE processed image of the Cosmic Web: GIF N-body simulation of structure formation in a $\Lambda$CDM
cosmology. Part of the central X-slice. Simulation courtesy: J. Colberg.}
\label{fig:gifdtfeweb}
\end{figure*}
\subsection{Spatial Data: Filtering and Interpolation}
Instead of direct statistical inference from datasets one can seek to reconstruct the underlying continuous field(s).
For a meaningful analysis and interpretation of spatial data it is indeed often imperative and/or preferable
to use methods of parameterization and interpolation to obtain estimates of the related field values throughout
the sample volume. The {\it reconstructed} continuous field may subsequently be processed in order to yield
a variety of interesting parameters.
It involves issues of {\it smoothing} and {\it spatial interpolation} of the measured data over the
sample volume, of considerable importance and interest in many different branches of science.
Interpolation is fundamental to graphing, analysing and understanding of spatial data.
Key references on the involved problems and solutions include those by \cite{ripley1981,watson1992,cressie1993}. While
of considerable importance for astronomical purposes, many of the available methods have escaped attention. A systematic
treatment and discussion within the astronomical context is the study by \cite{rybickipress1992},
who focussed on linear systems as they developed various statistical procedures related to linear prediction and
optimal filtering, commonly known as Wiener filtering. An extensive, systematic and more general survey of
available mathematical methods can be found in a set of publications by \cite{lombardi2001,lombardi2002,lombardi2003}.
A particular class of spatial point distributions is the one in which the point process forms a representative
reflection of an underlying smooth and continuous density/intensity field. The spatial distribution of the
points itself may then be used to infer the density field. This forms the basis for the interpretation and analysis
of the large scale distribution of galaxies in galaxy redshift surveys. The number density of galaxies in redshift survey
maps and N-body particles in computer simulations is supposed to be proportional to the underlying matter density.
\begin{figure}
\center
\mbox{\hskip -0.5truecm\includegraphics[width=12.5cm]{weyval.fig11.ps}}
\vskip -0.2truecm
\caption{Cosmic density field illustrating the large dynamic range
which is present in the large scale matter distribution. In the left-hand
frame the density field in a 10$h^{-1}$Mpc wide slice through a
cosmological simulation is depicted. In the subsequent frames
zoom-ins focusing on a particular structure are shown. On all
depicted scales structures are present.}
\label{fig:hierarchydtfe}
\end{figure}
\subsection{Local Interpolation: Natural Neighbour Methods}
The complex spatial geometry and large density variations marking the cosmic web ideally should be analyzed by a
technique which would (1) not lose information against the backdrop of a highly inhomogeneous spatial resolution and
(2) which is capable of tracing hierarchically structured and anisotropic spatial patterns in an entirely objective
fashion. Nearly all existing techniques for analyzing galaxy redshift surveys or numerical simulations of cosmic structure
formation have important shortcomings with respect to how they treat the weblike geometry of the large scale matter
distribution and trace the cosmic matter distribution over a wide range of densities. The limited available mathematical
machinery has often been a major obstacle in exploiting the potentially large information content of the cosmic web.
The various aspects characterizing the complex and nontrivial spatial structure of the cosmic web have proven to be notoriously difficult
to quantify and describe. For the analysis of weblike patterns the toolbox of descriptive measures is still largely ad-hoc
and is usually biased towards preconceived notions of their morphology and scale. None of the conventional, nor even
specifically designed, measures of the spatial matter distribution have succeeded in describing all relevant features
of the cosmic web. Even while they may succeed in quantifying one particular key aspect it usually excludes the ability
to do so with other characteristics.
For many applications a `local' interpolation and reconstruction method appears to provide the preferred path. In this case
the values of the variable at any point depend only on the data in its neighbourhood \citep[see e.g.][]{sbm1995}. Local schemes
usually involve a discretization of the region into an adaptive mesh. In data interpolation this usually represents a more realistic
approach, and generically it also tends to be independent of specific model assumptions while they are very suited for
numerical modeling applications. When the points have a regular distribution many local methods are available for smooth
interpolation in multidimensional space. Smooth, local methods also exist for some specific irregular point distributions.
A telling example are the `locally constrained' point distributions employed in applications of the finite element method.
In this review we specifically concentrate on a wide class of tessellation-based {\it multidimensional} and entirely {\it local}
interpolation procedures, commonly known as {\it Natural Neighbour Interpolation} \citep[ch. 6]{watson1992,braunsambridge1995,
sukumarphd1998,okabe2000}. The local {\it natural neighbour} methods are based upon the {\it Voronoi and Delaunay tessellations} of
the point sample, basic concepts from the field of stochastic and computational geometry \citep[see][and references
therein]{okabe2000}. These spatial volume-covering divisions of space into mutually disjunct triangular (2-D) or tetrahedral (3-D) cells
adapt to the local {\it density} and the local {\it geometry} of the point distribution (see fig.~\ref{fig:gifdtfeweb} and
fig.~\ref{fig:hierarchydtfe}). The natural neighbour interpolation schemes exploit these virtues and thus adapt automatically and in
an entirely natural fashion to changes in the density or the
geometry of the distribution of sampling points. For the particular requirements posed by astronomical and cosmological datasets, for which it
is not uncommon to involve millions of points, we have developed a linear first-order version of {\it Natural Neighbour Interpolation}, the
{\it Delaunay Tessellation Field Estimator} \citep[DTFE,][]{bernwey96,schaapwey2000,schaapphd2007}. Instead of involving user-defined filters which
are based upon artificial smoothing kernels the main virtue of natural neighbour methods is that they are intrinsically {\it self-adaptive}
and involve filtering kernels which are defined by the {\it local density and geometry} of the point process or object distribution.
\begin{figure}
\centering
\mbox{\hskip -0.1truecm\includegraphics[height=20.0cm]{weyval.fig12.ps}}
\vskip -0.5truecm
\end{figure}
\begin{figure}
\caption{Interpolation of simulated Mars Orbiter Laser Altimeter (MOLA) data acquired from a
digital elevation model of the Jokulsa and Fjollum region of Iceland, in anticipation of
landscape reconstructions of terrain on planet Mars. The ``original'' highly resolved image of the terrain
is shown in the top left frame. The comparison concerns data that were measured at the track points indicated
in the top centre image. The medium resolution ($209 \times 492$) interpolations are: natural neighbour (top right),
linear (DTFE) interpolation (bottom left), nearest neighbour interpolation (bottom centre) and
spline interpolation (bottom right). Evidently, the natural neighbour map is the most natural looking
reconstruction. Image courtesy of O. Abramov and A. McEwen, figure modified from Abramov \& McEwen 2004}
\label{fig:landscapint}
\end{figure}
\subsection{Meshless methods}
\noindent The Natural Neighbour schemes, including DTFE, are mesh-based. With current technology this
is computationally feasible as long as the domain of the matter density field
has at most three dimensions. However, it would also be interesting to extend attention to
six-dimensional phase-space, incorporating not only the location of
galaxies/particles but also their velocities. This doubles the number of
dimensions and makes mesh-based methods a real challenge. While the
application of DTFE to the analysis of the phase-space of dark haloes has been
shown to lead to very good results \citep{arad2004},
studies by \cite{ascalbin2005} and \cite{sharma2006} argued it would be far more
efficient and reasonably accurate to resort to the simpler construct of a {\it
k-d tree} \citep[][for an early astronomical implementation]{bentley1975,frbent1977,bentfr1978,weygaert1987}.
While \cite{ascalbin2005,sharma2006} do not take into
account that phase-space is not a simple metric space but a symplectic one, it may indeed be a real
challenge to develop mesh-based methods for the analysis of structure in
phase-space. Although algorithms for the computation of higher dimensional
Voronoi and Delaunay tessellations have been implemented (e.g. in \texttt{CGAL}), the
high running time and memory use make further processing computationally
infeasible.
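The k-d tree construct mentioned above is simple enough to sketch directly. The toy Python fragment below (hypothetical code and sample points, not the implementations of the cited works) builds a k-d tree by cycling the splitting axis with depth and answers a nearest-neighbour query, pruning subtrees whose splitting plane lies farther away than the current best match; it works unchanged in any number of dimensions.

```python
# Minimal k-d tree sketch: recursive construction by cycling the
# splitting axis, plus a nearest-neighbour query with plane pruning.
# Toy illustration only.

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])          # cycle through the coordinates
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median point becomes the node
    return {"point": points[mid],
            "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], target))
    if best is None or d2 < best[1]:
        best = (node["point"], d2)
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 \
                else (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < best[1]:                # search sphere crosses the split plane
        best = nearest(far, target, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(nearest(tree, (9, 2))[0])  # → (8, 1)
```

The pruning step is what makes the tree efficient: a subtree is visited only if the splitting plane intersects the sphere around the query point with radius equal to the current best distance.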
Meshless spatial interpolation and approximation methods for datasets in spaces
of dimensions greater than three may therefore provide the alternative of choice.
There are a variety of meshless multivariate data interpolation schemes.
Examples are Shepard's interpolant \citep{shepard1968}, moving least squares
approximants \citep{lancaster1981}, or Hardy's multiquadrics \citep{hardy1971}.
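Of these meshless schemes, Shepard's interpolant is the simplest: the estimate at a point is an inverse-distance-weighted mean of the sample values. A minimal Python sketch (hypothetical toy data, purely illustrative):

```python
# Minimal sketch of Shepard's inverse-distance-weighted interpolant:
# the estimate at x is a weighted mean of the sample values, with
# weights w_i = 1 / d(x, x_i)^p.  Toy illustration only.

def shepard(x, samples, p=2):
    # samples: list of (position, value) pairs; position and x are tuples.
    num, den = 0.0, 0.0
    for pos, val in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(x, pos))
        if d2 == 0.0:
            return val                    # exactly at a sample point
        w = d2 ** (-p / 2.0)              # 1 / d^p
        num += w * val
        den += w
    return num / den

data = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0)]
print(shepard((0.5, 0.0), data))  # midpoint: equal weights → 2.0
print(shepard((0.0, 0.0), data))  # exact sample point → 1.0
```

Being a global weighted mean, the interpolant is bounded by the data values, which also explains its characteristic flat spots around the sample points.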
\subsubsection{Spline interpolation}
\noindent Spline interpolation \citep{schoenberg46a,schoenberg46b} is based on
interpolating between sampling points by means of higher-order polynomials. The
coefficients of the polynomial are determined `slightly' non-locally,
such that a global smoothness in the interpolated function is
guaranteed up to some order of derivative. The order of the
interpolating polynomials is arbitrary, but in practice cubic splines
are most widely used. Cubic splines produce an interpolated function
that is continuous through the second derivative. To obtain a cubic
spline interpolation for a dataset of $N+1$ points $N$ separate cubics
are needed. Each of these cubics must match the data values exactly at
its two end points. At the location of these
points the two adjacent cubics must also have equal first and second
derivatives. A full mathematical derivation can be found in e.g.
\citet{geraldwheat1999,numrecipes}.
Spline interpolation is a widely used procedure. Equalising the
derivatives has the effect of making the resulting interpolation
appear smooth and visually pleasing. For this reason splines are for
example frequently used in graphical applications. Splines can
provide extremely accurate results when the original sample rate is
notably greater than the frequency of fluctuation in the data. Splines
however cannot deal very well with large gaps in the dataset. Because
the gap between two points is represented by a single cubic, large gaps may result in
spurious peaks or troughs in the interpolation. Also, splines are rather
artificially defined constructs.
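The construction just described can be made concrete with a toy sketch. The Python fragment below (hypothetical data, natural end conditions with vanishing second derivatives at both boundary knots, not taken from the cited derivations) solves the tridiagonal system for the knot second derivatives $M_i$ with the Thomas algorithm and evaluates the resulting piecewise cubic:

```python
# Minimal sketch of natural cubic spline interpolation (M_0 = M_N = 0).
# The tridiagonal system for the knot second derivatives M_i is solved
# with the Thomas algorithm; toy code for illustration only.

def natural_cubic_spline(xs, ys):
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Right-hand side of the tridiagonal system for M_1 .. M_{n-1}.
    rhs = [6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
           for i in range(1, n)]
    a = [h[i - 1] for i in range(2, n)]            # sub-diagonal
    b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]  # main diagonal
    c = [h[i] for i in range(1, n - 1)]            # super-diagonal
    for i in range(1, len(b)):                     # forward elimination
        m = a[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    M = [0.0] * (n + 1)                            # natural end conditions
    for i in range(len(b) - 1, -1, -1):            # back substitution
        M[i + 1] = (rhs[i] - (c[i] * M[i + 2] if i < len(c) else 0.0)) / b[i]

    def evaluate(x):
        # Locate the interval [x_i, x_{i+1}] containing x (clamped).
        i = max(0, min(n - 1, next(k for k in range(n)
                                   if x <= xs[k + 1] or k == n - 1)))
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return (M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6.0 * h[i]) \
             + (ys[i] / h[i] - M[i] * h[i] / 6.0) * t0 \
             + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * t1
    return evaluate

# For data on a straight line all M_i vanish and the spline is exact.
S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(S(1.5))  # → 4.0
```

The interpolation property is visible directly: at each knot only one of $t_0$, $t_1$ is nonzero, so the spline returns the data value exactly.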
\subsubsection{Radial Basis Functions}
\noindent One of the most promising schemes may be that of Radial Basis Functions,
\citep[RBFs, see e.g.][]{powell1992,arnold1991,wendland2005}. RBFs may be used to determine a
smooth density field interpolating three-dimensional spatial and four-dimensional spatio-temporal data sets, or
even data sets in six-dimensional phase space. In the first step of this
approach the implicit function is computed as a linear combination of
translates of a single radial basis function. This function is determined by
the geometric constraint that the input sample points belong to its zero set.
If the input is a density map, the geometric constraint boils down to the
implicit function interpolating the densities at the input points (and some
additional constraints preventing the construction of the zero function).
The construction of Radial Basis Functions with suitable interpolation
properties is discussed in \cite{sw-ccrbf-01}, while an early review of the mathematical
problems related to RBF-interpolation may be found in \cite{powell1992}.
A nice overview of the state of the art in scattered data modeling using Radial
Basis Functions may be obtained from the surveys \cite{b-airf-01,i-sdmur-02,lf-sdts-99}.
In practice variational implicit surfaces, based on Radial Basis Functions which minimize
some global energy or curvature functional, turn out to be very flexible~\citep{dts-rsvru-02,
to-stuvi-99}: they are adaptive to curvature
variations, can be used for enhancement of fine detail and sharp features
that are missed or smoothed out by other implicit techniques, and can overcome
noise in the input data since they are approximating rather than interpolating.
Especially the use of parameter dependent or anisotropic radial basis
functions allows for graceful treatment of sharp features and provide multiple
orders of smoothness~\citep{to-rsuab-01}.
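The first step of the RBF approach, writing the interpolant as a linear combination of translates of a single radial basis function, amounts to solving a small linear system. A toy Python sketch (hypothetical one-dimensional data, a Gaussian basis, and a naive elimination solver, purely illustrative; practical implementations use dedicated linear algebra and often polynomial augmentation):

```python
import math

# Minimal sketch of RBF interpolation: f(x) = sum_i w_i * phi(|x - x_i|)
# with a Gaussian basis, the weights solving phi(|x_j - x_i|) w_i = y_j.
# Toy code with a naive Gaussian-elimination solver; illustration only.

def rbf_interpolant(xs, ys, eps=1.0):
    phi = lambda r: math.exp(-(eps * r) ** 2)
    n = len(xs)
    # Symmetric interpolation matrix, augmented with the data column ys.
    A = [[phi(abs(xs[j] - xs[i])) for i in range(n)] + [ys[j]]
         for j in range(n)]
    for col in range(n):                   # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= fac * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):         # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return lambda x: sum(w[i] * phi(abs(x - xs[i])) for i in range(n))

f = rbf_interpolant([0.0, 1.0, 2.0], [0.0, 1.0, 0.0])
print(round(f(1.0), 6))  # interpolates the data: → 1.0
```

By construction the interpolant reproduces the data values exactly at the sample points; smoothness away from them is inherited from the chosen basis function.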
\section{Spatial Tessellations}
\begin{figure*}
\vskip -0.0truecm
\mbox{\hskip -0.0truecm\includegraphics[width=11.8cm]{weyval.fig13.ps}}
\vskip -0.0truecm
\caption{A full 3-D tessellation comprising 1000 Voronoi cells/polyhedra
generated by 1000 Poissonian distributed nuclei. Courtesy: Jacco Dankers}
\label{fig:vor3dtess}
\end{figure*}
\subsection{Stochastic and Computational Geometry}
{\it Random spatial tessellations} are a fundamental concept in the fields of {\it Stochastic Geometry} and {\it Computational
Geometry}.
{\it Stochastic Geometry}, or geometric probability theory, is the subject in mathematics concerned with the problems that arise
when we ascribe probability distributions to geometric objects such as points, lines, and planes (usually in Euclidian spaces), or
to geometric operations such as rotations or projections \citep[see e.g.][]{stoyan1987,stoyan1995}. A formal and restricted definition
of stochastic geometry was given by \cite{stoyan1987}, who defined the field as the branch of mathematics {\it devoted
to the study of geometrical structures which can be described by random sets, in particular by point processes, in suitable
spaces}. Since many problems in stochastic geometry are either not solvable analytically, or only in a few
non-generic cases, we have to resort to the help of the computer to find a (partial) answer to the problems under consideration. This makes
it necessary to find efficient algorithms for constructing the geometrical objects involved.
{\it Computational Geometry} is the branch of computer science that is concerned with finding the computational procedures for solving
geometric problems, not just the geometric problems arising from stochastic geometry but geometric problems in general
\citep{boissonnat1998,deberg2000,goodman2004}. It is concerned with the design and analysis of algorithms and software for processing geometric objects and data. Typical
problems are the construction of spatial tessellations, like Voronoi diagrams and Delaunay meshes, the reconstruction of objects from
finite point samples, or finding nearest neighbours in point sets. Methods from this field have many applications in applied areas like
Computer Graphics, Computer Vision, Robotics, Computer Aided Design and Manufacturing.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=12.2cm]{weyval.fig14.ps}}
\end{center}
\vskip -0.2truecm
\caption{An illustration of a distribution of nuclei (stars) in a square (top) and its corresponding
Delaunay triangulation (bottom left) and Voronoi tessellation (bottom right), assuming periodic
boundary conditions.}
\label{fig:vordeltess}
\end{figure*}
\subsection{Random Tessellations}
\noindent Random tessellations or mosaics occur as primary objects of various
processes of division of space $\Re^d$ into convex cells or as secondary or
auxiliary objects in e.g. various statistical problems.
Simply stated, a {\it tessellation} is an arrangement of polytopes (2-D: polygons,
3-D: polyhedra) fitting together without overlapping so as to cover $d$-dimensional
space $\Re^d$ $(d=1,2,\ldots)$, or a subset $X \subset \Re^d$. Usually one requires
that the cells are convex and compact with disjoint interiors, although tessellations
may also involve non-convex cells (an example is the class of Johnson-Mehl tessellations). Posing
the additional requirement of convexity implies that all interfaces separating
pairs of cells are planar (for 3-D), and all edges common to the boundaries of
three or more cells are linear, implying each cell to be a (convex) polyhedron.
Formally, a {\it tessellation} in $\Re^d$ is a set ${\cal T}=\{X_i\}$ of
$d$-dimensional sets $X_i \subset \Re^d$ called {\it cells} such that
\begin{eqnarray}
&&{\tilde X}_i \cap {\tilde X}_j = \emptyset \qquad\hbox{for\ } i \not= j,\nonumber\\
&&\bigcup_i X_i = \Re^d,\ \\
&&\#\{X_i \in {\cal T}: X_i \cap B \not= \emptyset\} < \infty \qquad \forall \hbox{\
bounded\ } B \subset \Re^d,\nonumber
\end{eqnarray}
\noindent with ${\tilde X}_i$ the interior of cell $X_i$ \citep{moeller1989,moeller1994}. The first
property implies the interiors of the cells to be disjoint, the second one that the
cell aggregate $\{X_i\}$ is space-filling, and the third one that ${\cal T}$ is a
countable set of cells.
\subsection{Voronoi Tessellation}
\label{sec:vortess}
\noindent The Voronoi tessellation ${\cal V}$ of a point set ${\cal P}$ is the division of space
into mutually disjunct polyhedra, each {\it Voronoi polyhedron consisting of the part of
space closer to the defining point than any of the other points} \citep{voronoi1908,okabe2000}.
Assume that we have a distribution of a countable set ${\cal P}$ of nuclei $\{{\bf x}_i\}$ in
$\Re^d$. Let ${\bf x}_1,{\bf x}_2,{\bf x}_3,\ldots$ be the coordinates of the nuclei. Then
the {\it Voronoi region} ${\cal V}_i$ of nucleus $i$ is defined by the points $\bf x$ (fig.~\ref{fig:vordeltess}),
\bigskip
\begin{center}
\framebox[11.7truecm]{\vbox{\Large
\begin{eqnarray}
\hskip -0.5truecm
{\cal V}_i = \{{\bf x} \vert d({\bf x},{\bf x}_i) < d({\bf x},{\bf x}_j)
\quad \forall j \not= i\} \nonumber
\label{eq:voronoidef}
\end{eqnarray}}}
\begin{equation}
\end{equation}
\vskip 0.25truecm
\end{center}
\noindent where $d({\bf x},{\bf y})$ is the Euclidian distance between ${\bf x}$
and ${\bf y}$. In other words, ${\cal V}_i$ is the set of points which are nearer
to ${\bf x}_i$ than to ${\bf x}_j,\quad j \not=i$. From this basic definition, we can
directly infer that each Voronoi region ${\cal V}_i$ is the intersection of the open
half-spaces bounded by the perpendicular bisectors (bisecting planes in 3-D) of
the line segments joining the nucleus $i$ and any of the other nuclei. This
implies a Voronoi region ${\cal V}_i$ to be a convex polyhedron (a polygon when
in 2-D), a {\it Voronoi polyhedron}. Evidently, the concept can be extended
to any arbitrary distance measure (Icke, priv. comm.). The relation between the
point distribution ${\cal P}$ and its Voronoi tessellation can be clearly
appreciated from the two-dimensional illustration in fig.~\ref{fig:vordeltess}.
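The definition translates directly into a membership test: a point ${\bf x}$ belongs to the Voronoi region of its nearest nucleus. A brute-force toy sketch in Python (hypothetical nuclei, purely illustrative; real codes construct the tessellation geometrically rather than testing points):

```python
# Directly from the definition: a point x belongs to the Voronoi region
# V_i of the nucleus x_i that is nearest to it.  Brute-force membership
# test, toy illustration only.

def voronoi_region(x, nuclei):
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    # Index of the nearest nucleus (Euclidean distance).
    return min(range(len(nuclei)), key=lambda i: d2(x, nuclei[i]))

nuclei = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(voronoi_region((0.9, 0.2), nuclei))  # → 1 (nearest nucleus)
```

Note that any distance measure could be substituted for `d2`, in line with the remark above that the Voronoi concept extends to arbitrary distance measures.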
\begin{table*}
\begin{center}
{\large 3-D Voronoi Tessellation Elements\\}
{(see also fig.~\ref{fig:vorelm})\ \\
\ \\}
{{
\begin{tabular}{|lll|llll|lllll|}
\hline
&&&&&&&&&&&\ \\
&&{\Large ${\cal V}_i$}&&&{\it Voronoi cell}&&&-& Polyhedron&&\\
&&&&&&&&-&defined by nucleus $i\in{\cal P}$&&\\
&&&&&&&&-&volume of space closer to $i$&&\\
&&&&&&&&&than any other nucleus $m\in{\cal P}$&&\\
&&&&&&&&&&&\ \\
&&{\Large $\Sigma_{ij}$}&&&{\it Voronoi wall (facet)}&&&-& Polygon&&\\
&&&&&&&&-&defined by nuclei $(i,j)\in{\cal P}$ &&\\
&&&&&&&&-&all points ${\bf x}$ with equal distance to $(i,j)$&&\\
&&&&&&&&&\& larger distance to any other nucleus $m \in {\cal P}$&&\\
&&&&&&&&-&constitutes part surface cells: ${\cal V}_i$, ${\cal V}_j$&&\\
&&&&&&&&&&&\ \\
&&{\Large $\Lambda_{ijk}$}&&&{\it Voronoi edge}&&&-& Line segment&&\\
&&&&&&&&-&defined by nuclei $(i,j,k)\in{\cal P}$&&\ \\
&&&&&&&&-&all points ${\bf x}$ with equal distance to $(i,j,k)$&&\ \\
&&&&&&&&&\& larger distance to any other nucleus $m \in {\cal P}$&&\\
&&&&&&&&-&constitutes part rim Voronoi cells: ${\cal V}_i$, ${\cal V}_j$, ${\cal V}_k$&&\\
&&&&&&&&&constitutes part rim Voronoi walls: $\Sigma_{ij}$, $\Sigma_{ik}$ and $\Sigma_{jk}$&&\\
&&&&&&&&&&&\ \\
&&{\Large ${\cal D}_{ijkl}$}&&&{\it Voronoi vertex}&&&-&Point&&\\
&&&&&&&&-&defined by nuclei $(i,j,k,l)\in{\cal P}$&&\ \\
&&&&&&&&-&equidistant to nuclei $(i,j,k,l)$&&\ \\
&&&&&&&&-&closer to $(i,j,k,l)$ than to any other nucleus $m\in{\cal P}$&&\ \\
&&&&&&&&-&circumcentre of (Delaunay) tetrahedron $(i,j,k,l)$&&\ \\
&&&&&&&&&&&\\
\hline
\end{tabular}}}\\
\end{center}
\label{table:vorelm}
\end{table*}
\begin{figure*}
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=11.9cm]{weyval.fig15.ps}}
\end{center}
\caption{The four Voronoi elements of a Voronoi tessellation generated by a nucleus set ${\cal P}$. See
table~\ref{table:vorelm}}
\label{fig:vorelm}
\end{figure*}
\noindent The complete set of Voronoi polyhedra constitute a space-filling tessellation of
mutually disjunct cells, the {\it Voronoi tessellation} ${\cal V}({\cal P})$ relative to
${\cal P}$. A good impression of the morphology of a complete Voronoi tessellation can
be seen in fig.~\ref{fig:vor3dtess}, a tessellation of 1000 cells generated by a Poisson
distribution of 1000 nuclei in a cubic box. The Voronoi foam forms a packing of Voronoi
cells, each cell being a convex polyhedron enclosed by the bisecting planes between the
nuclei and their {\it natural neighbours}.
\subsubsection{Voronoi elements}
Taking the three-dimensional tessellation as the archetypical representation
of structures in the physical world, the Voronoi tessellation ${\cal V}({\cal P})$
consists of four constituent {\it elements}: {\it Voronoi cells}, {\it Voronoi
walls}, {\it Voronoi edges} and {\it Voronoi vertices}. Table~\ref{table:vorelm}
provides a listing of these elements together with a description of their
relation with respect to the nuclei of the generating point set ${\cal P}$,
augmented by the accompanying illustration in fig.~\ref{fig:vorelm}.
\subsubsection{Generalized Voronoi Tessellations}
The Voronoi tessellation can be generalized. \cite{miles1970} defined the
{\it generalized Voronoi tessellation ${\cal V}_n$}. The original Voronoi
tessellation is ${\cal V}_1={\cal V}$.
This involves the extension of the definition of the Voronoi cell ${\cal V}_i$ generated
by one nucleus $i$ to that of a {\it higher-order} Voronoi cell ${\cal V}^{k}(i_{1},\ldots,i_{k})$
generated by a set of $k$ nuclei $\{i_{1},\ldots,i_{k}\}\in{\cal P}$. Each $k$-order Voronoi cell
${\cal V}^{k}(i_{1},\ldots,i_{k})$ consists of that part of space in which the points
${\bf x}$ have the $k$ nuclei $\{i_{1},\ldots,i_{k}\}\in{\cal P}$ as their $k$ nearest
neighbours. In addition to \cite{miles1970,miles1972,miles1982} see \citet[][chapter 3]{okabe2000}
for more details and references.
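The definition of the $k$-order cell again admits a direct brute-force membership test: a point belongs to the cell generated by its $k$ nearest nuclei. A toy Python sketch (hypothetical nuclei, purely illustrative):

```python
# Sketch of k-order (generalized) Voronoi cell membership: a point x
# lies in the cell V^k(i_1,...,i_k) generated by its k nearest nuclei.
# Brute-force toy version of the definition above.

def korder_cell(x, nuclei, k):
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    order = sorted(range(len(nuclei)), key=lambda i: d2(x, nuclei[i]))
    return frozenset(order[:k])          # the generating set of k nuclei

nuclei = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(sorted(korder_cell((0.4, 0.1), nuclei, 2)))  # → [0, 1]
```

For $k=1$ this reduces to the ordinary Voronoi cell; all points returning the same generating set trace out one $k$-order cell.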
\subsubsection{Voronoi Uniqueness}
An inverse look at tessellations reveals the special and degenerate nature of Voronoi
tessellations. Given a particular tessellation one might wonder whether there is a point
process which would have the tessellation as its Voronoi tessellation. One may demonstrate
that in general this is not true. By defining a geometric procedure to reconstruct
the generating nucleus distribution one may straightforwardly infer that there is not
a unique solution for an arbitrary tessellation.
This may be inferred from the study by \cite{chiuweystoy1996}, who defined and developed a nucleus
reconstruction procedure to demonstrate that a two-dimensional section through a three- or higher-dimensional
Voronoi tessellation will itself not be a Voronoi tessellation. By doing so their work clarified the
special and degenerate nature of these tessellations.
\subsection{The Delaunay Tessellation}
\label{sec:delaunay}
Pursuing our census of Voronoi tessellation elements (table~\ref{table:vorelm}), we found that each set of
nuclei $\{i,j,k,l\}$ corresponding to a Voronoi vertex ${\cal D}(i,j,k,l)$ defines a unique
tetrahedron. This is known as a {\it Delaunay tetrahedron} \citep{delaunay1934}.
Each {\it Delaunay tetrahedron} is defined by the {\it set of four points whose circumscribing
sphere does not contain any of the other points} in the generating set
\citep[][triangles in 2D: see fig.~\ref{fig:vordeltess}]{delaunay1934}. For the countable set
${\cal P}$ of points $\{{\bf x}_i\}$ in $\Re^d$, a
Delaunay tetrahedron ${\cal D}_m$ is the simplex $T$ defined by $(1+d)$ points
$\{{\bf x}_{i1},\ldots,{\bf x}_{i(d+1)}\}\,\in\,{\cal P}$ (the vertices of this $d$-dimensional
tetrahedron) such that the corresponding circumscribing sphere ${\cal S}_m({\bf y}_m)$ with
circumcenter ${\bf C}_m$ and radius $R_m$ does not contain any other point of ${\cal P}$,
\bigskip
\begin{center}
\framebox[11.7truecm]{\vbox{\Large
\begin{eqnarray}
\hskip 0.5truecm
{\cal D}_m\,=\,T({\bf x}_{i1},\ldots,{\bf x}_{i(d+1)})
\qquad&\hbox{with}& \ d({\bf C}_m,{\bf x}_j)\,>\,R_m \nonumber \\
&\forall& j \not= i1,\ldots,i(d+1)\,\nonumber
\label{eq:delaunaydef}
\end{eqnarray}}}
\begin{equation}
\end{equation}
\end{center}
\bigskip
\noindent Following this definition, the {\it Delaunay tessellation} of a point set ${\cal P}$ is
the uniquely defined and volume-covering tessellation of mutually disjunct {\it Delaunay tetrahedra}.
Figure~\ref{fig:vordeltess} depicts the Delaunay tessellation resulting from a given 2-D distribution
of nuclei. On the basis of the figure we can immediately observe the intimate relation between a
{\it Voronoi tessellation} and its {\it Delaunay tessellation}.
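The empty-circumsphere criterion is easily verified in two dimensions, where it becomes an empty-circumcircle test. The toy Python sketch below (hypothetical points, purely illustrative; robust geometric predicates would be used in practice) computes the circumcentre from the standard determinant formulae and checks that no other point falls inside:

```python
# 2-D analogue of the defining property: a triangle is Delaunay iff its
# circumcircle contains no other point of the set.  Toy sketch only.

def circumcircle(a, b, c):
    # Circumcentre from the standard determinant formulae (a,b,c not collinear).
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    r2 = (a[0] - ux) ** 2 + (a[1] - uy) ** 2
    return (ux, uy), r2

def is_delaunay(tri, points):
    (cx, cy), r2 = circumcircle(*tri)
    return all((p[0] - cx) ** 2 + (p[1] - cy) ** 2 >= r2
               for p in points if p not in tri)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
print(is_delaunay((pts[0], pts[1], pts[2]), pts))  # → True (empty circumcircle)
print(is_delaunay((pts[0], pts[1], pts[3]), pts))  # → False (contains (0.0, 1.0))
```

The circumcentre computed here is, by the duality discussed next, precisely a vertex of the Voronoi tessellation.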
The Delaunay and Voronoi tessellations are like the opposite sides of the same coin, they are
each others {\it dual}: one may directly infer one from the other and vice versa. The combinatorial
structure of either tessellation is completely determined from its dual.
\begin{figure*}[b]
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=12.2cm]{weyval.fig16.ps}}
\end{center}
\caption{The {\it dual} relationship between Voronoi (solid) and Delaunay (dashed) tessellations of
a set of nuclei (circles). Left: Zoom-in on the three Delaunay triangles corresponding to a set
of five nuceli (black dots) and the corresponding Voronoi edges. The circle is the circumcircle
of the lower Delaunay triangle, its centre (arrow) is a vertex of the Voronoi cell. Note that the
circle does not contain any other nucleus in its interior ! Right: a zoom in on the Voronoi cell
${\cal V}_i$ of nucleus $i$ (black dot). The Voronoi cell is surrounded by its related
Delaunay triangles, and clearly delineates its {\it Natural Neighbours} (open circles).}
\label{fig:vordelcont}
\end{figure*}
The duality between Delaunay and Voronoi tessellations may be best appreciated on the basis of the
following properties:
\begin{itemize}
\item[$\bullet$] {\it Circumcentre \& Voronoi vertex}:\\
The center of the circumsphere of a Delaunay tetrahedron is a vertex of the Voronoi tessellation.
This follows from the definition of the Voronoi tessellation, wherein the four nuclei which form
the Delaunay tetrahedron are equidistant from the vertex.
\item[$\bullet$] {\it Contiguity condition}:\\
The circumsphere of a Delaunay tetrahedron is empty and cannot contain any nucleus in the
set ${\cal P}$. If there were such a nucleus it would be closer to the centre than
the four nuclei defining the tetrahedron. This would render it impossible for the centre to be the
vertex of all corresponding Voronoi cells.
\end{itemize}
\subsubsection{Natural Neighbours}
A pair of nuclei $i$ and $j$ whose Voronoi polyhedra ${\cal V}_i$ and ${\cal V}_j$
have a face in common is called a {\it contiguous pair} and a member of the
pair is said to be {\it contiguous} to the other member. Contiguous pairs of nuclei
are each other's {\it natural neighbour}.
{\it Natural neighbours} of a point $i$ are the points $j$ whose Voronoi cells ${\cal V}_j$
share a face with its Voronoi cell ${\cal V}_i$ or, in other words, the points with which
it is connected via a Delaunay tetrahedron. This unique set of neighbouring points
defines the neighbourhood of the point and represents the cleanest definition of the
surroundings of a point (see fig.~\ref{fig:vordelcont}), an aspect which turns out to be
of {\it seminal} importance for the local interpolation method(s) discussed in this
contribution..
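In practice the natural neighbours of a sample point can be read off directly from the Delaunay tessellation: two points are natural neighbours precisely when they share a Delaunay simplex. A minimal Python sketch, using SciPy's `Delaunay` wrapper around Qhull:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = rng.random((30, 2))
tri = Delaunay(pts)

def natural_neighbours(tri, i):
    # points sharing a Delaunay simplex (hence a Voronoi face) with point i
    nbrs = set()
    for simplex in tri.simplices:
        if i in simplex:
            nbrs.update(int(j) for j in simplex)
    nbrs.discard(i)
    return sorted(nbrs)

# the natural-neighbour relation is symmetric: j in N(i) iff i in N(j)
N0 = natural_neighbours(tri, 0)
print(all(0 in natural_neighbours(tri, j) for j in N0))
```

SciPy also exposes the same adjacency directly via the `vertex_neighbor_vertices` attribute, which is more efficient for large samples.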
\subsection{Voronoi and Delaunay Statistics}
\label{sec:vordelstat}
\noindent In particular for practical applications the knowledge of statistical
properties of Voronoi and Delaunay tessellations as a function of the generating
stochastic point processes is of considerable interest. However, despite their seemingly
simple definition it has proven remarkably difficult to obtain solid
analytical expressions for statistical properties of Voronoi tessellations.
Moreover, with the exception of some general tessellation properties nearly all
analytical work on Voronoi and Delaunay tessellations has concentrated on those
generated by homogeneous Poissonian point distributions.
Statistical knowledge of Delaunay tessellations generated by Poissonian
nuclei in $d$-dimensional space is relatively complete. Some
important distribution functions are known, mainly due to the work of
\cite{miles1970,miles1972,miles1974,kendall1989,moeller1989,moeller1994}. For
Voronoi tessellations, even for Poissonian nuclei analytical results are quite
rare. Only very few distribution functions are known, most results are
limited to a few statistical moments: expectation values, variances and
correlation coefficients. Most of these results stem from the pioneering works
of \cite{meyering1953,gilbert1962,miles1970,moeller1989}. \cite{moeller1989} provides
analytical formulae for a large number of first-order moments of $d$-dimensional
Poisson-Voronoi tessellations, as well as of $s$-dimensional section through them,
following the work of \cite{miles1972,miles1974,miles1982} \citep[also see][]{weygaertphd1991}.
Numerical Monte Carlo evaluations have proven to be the main
source for a large range of additional statistical results. For an extensive listing
and discussion we refer the reader to \cite{okabe2000}.
Of prime importance for the tessellation interpolation techniques discussed in this review
are two fundamental characteristics: (1) the number of natural neighbours of
the generating nuclei and (2) the volume of Voronoi and Delaunay cells.
In two dimensions each point has on average exactly $6$ natural neighbours \citep[see e.g.][]{ickewey1987},
irrespective of the character of the spatial point distribution. This also implies that
each point belongs on average to $6$ Delaunay triangles. Going from two to three dimensions,
the character of the tessellation changes fundamentally:
\begin{itemize}
\item[$\bullet$] {\it Dependence on point process}:\\ the average number of natural neighbours is
no longer independent of the underlying point distribution.
\item[$\bullet$] {\it Non-integer}:\\ The average number of natural neighbours is no longer an integer:\\
for a Poisson distribution it is $\sim 13.4$!
\item[$\bullet$] {\it Delaunay tetrahedra/Voronoi vertices}:\\
For a Poisson distribution, the average number of vertices per Voronoi cell is $\sim 27.07$.\\
While in the two-dimensional case the number of vertices per Voronoi cell has the same value as
the number of {\it natural neighbours}, in three dimensions the two are entirely different!\\
\end{itemize}
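The two-dimensional invariant quoted above, on average exactly six natural neighbours per point, is easily verified by a Monte Carlo experiment. A sketch, assuming SciPy; interior points are selected to suppress boundary effects:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(7)
pts = rng.random((4000, 2))
tri = Delaunay(pts)

# adjacency in compressed-sparse form: neighbours of i are
# indices[indptr[i]:indptr[i+1]]
indptr, indices = tri.vertex_neighbor_vertices
counts = np.diff(indptr)                     # natural neighbours per point

# restrict to interior points, away from the sample boundary
interior = np.all((pts > 0.1) & (pts < 0.9), axis=1)
mean_nn = float(counts[interior].mean())
print(round(mean_nn, 2))
```

For a uniform Poisson sample the interior average converges to $6$; the same experiment in three dimensions (with `pts` of shape `(N, 3)`) approaches the quoted $\sim 13.4$.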
As yet it has not been possible to derive, from first principle, a closed
analytical expression for the distribution functions of the volumes of Voronoi
polyhedra and Delaunay cells in a Poisson Voronoi tessellation. However, the fitting formula
suggested by \cite{kiang1966} has proven to represent a reasonably good approximation. Accordingly,
the probability distribution of the volume of a Voronoi polyhedron in $d$-dimensional space
$\Re^d$ follows a gamma distribution
\begin{equation}
f_{\cal V}(V_{\cal V})\,{\rm d} V_{\cal V}\,=\,\frac{q}{\Gamma(q)}\,\,\left(q \frac{V_{\cal V}}{\langle V_{\cal V}\rangle}\right)^{(q-1)}\,
\exp{\left(-q\,\frac{V_{\cal V}}{\langle V_{\cal V}\rangle}\right)}\,\,{\rm d}{\left(\frac{V_{\cal V}}{\langle V_{\cal V}\rangle}\right)}\,,
\label{eq:vorvolpdf}
\end{equation}
\noindent with $V_{\cal V}$ the size of the Voronoi cell and $\langle V_{\cal V}\rangle$ the average cell size. The conjecture
of \cite{kiang1966} is that the index has a value $q=2d$ for a tessellation in $d$-dimensional space (i.e. $q=4$ for
2-D space and $q=6$ for 3-D space). Even though other studies
indicated slightly different values, we have found the Kiang suggestion to be quite accurate (see
sect.~\ref{sec:samplingnoise}, also see \cite{schaapphd2007}).
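As a consistency check, the gamma form of eqn.~(\ref{eq:vorvolpdf}) should be normalized and have unit mean in the scaled variable $V_{\cal V}/\langle V_{\cal V}\rangle$. A short numerical verification (Python sketch, using SciPy quadrature):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def kiang_pdf(x, q):
    # pdf of the scaled volume x = V / <V>, eqn. (vorvolpdf)
    return q / gamma(q) * (q * x) ** (q - 1) * np.exp(-q * x)

for q in (4, 6):                # Kiang's conjecture: q = 2d for d = 2, 3
    norm, _ = quad(kiang_pdf, 0, np.inf, args=(q,))
    mean, _ = quad(lambda x: x * kiang_pdf(x, q), 0, np.inf)
    print(q, round(norm, 6), round(mean, 6))   # both 1.0 for each q
```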
While the distribution for Voronoi cell volumes involves a conjecture, for Delaunay cells ${\cal D}$ it is
possible to derive the ergodic distribution from first principle. Its $d+1$
vertices completely specify the Delaunay tetrahedron ${\cal D}$. If ${\bf c}$ is its circumcentre and
$R$ its circumradius, its vertices are the points $\{{\bf c} + R {\bf u}_i\}$.
In this the unit vectors $\{{\bf u}_i\}$ $(i=0,\ldots,d)$, directed towards
the vertices, determine the shape of the Delaunay tetrahedron. With
$\Delta_d$ the volume of the unity simplex $\{{\vec u}_0,\ldots,{\vec u}_d\}$,
the volume $V_{\cal D}$ of the Delaunay tetrahedron is given by
\begin{equation}
V_{\cal D}\,=\,\Delta_d \,R^d\,.
\end{equation}
\noindent \cite{miles1974,moeller1989,moeller1994} found that for a Poisson point process of
intensity $n$ in $d$-dimensional space $\Re^d$ the distribution is specified by
\begin{eqnarray}
f_{\cal D}({\cal D})&\,=\,&f_{\cal D}(\{{\vec u}_0,\ldots,{\vec u}_d\},R)\nonumber\\
\ \\
&\,=\,&a(n,d)\,\Delta_d\,\,R^{d^2-1}\,\exp{\left(-n \omega_d\,R^d\right)}\nonumber\,,
\label{eq:delvolpdf}
\end{eqnarray}
\noindent with $\omega_d$ the volume of the unity sphere in $d$-dimensional
space ($\omega_2=\pi$ and $\omega_3=4\pi/3$) and $a(n,d)$ a constant dependent on the number density $n$ and the
dimension $d$.
In other words, the circumradius $R$ of the Delaunay tetrahedron is ergodically independent
of its shape, encapsulated in $\Delta_d$. From this one finds that the
distribution law of $n \omega_d R^d$ is the $\chi^2_{2d}/2$
distribution \citep{kendall1989}:
\begin{equation}
f(R)\,{\rm d}R\,=\,\frac{(n \omega_d R^d)^{d-1}}{(d-1)!}\,\,\exp{(-n \omega_d R^d)}\,\,
{\rm d}(n \omega_d R^d)\,.
\end{equation}
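Equivalently, the scaled variable $y = n\omega_d R^d$ follows a $\Gamma(d,1)$ distribution (the $\chi^2_{2d}/2$ law above), with mean and variance both equal to $d$. A quick numerical check for $d=3$ (sketch):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

d = 3
pdf = lambda y: y ** (d - 1) * np.exp(-y) / factorial(d - 1)  # Gamma(d, 1)

norm = quad(pdf, 0, np.inf)[0]
mean = quad(lambda y: y * pdf(y), 0, np.inf)[0]
var = quad(lambda y: (y - d) ** 2 * pdf(y), 0, np.inf)[0]
print(round(norm, 6), round(mean, 6), round(var, 6))  # 1.0 3.0 3.0
```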
\medskip
It is of considerable importance to realize that even for a uniform density field,
represented by a discrete point distribution of intensity $n$, both the Voronoi
and the Delaunay tessellation will involve a volume distribution marked by a
considerable spread and skewness.
\begin{figure*}
\begin{minipage}{\textwidth}
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=12.0 cm]{weyval.fig17.ps}}
\end{center}
\end{minipage}
\caption{The Delaunay tessellation of a point distribution in and around a filamentary feature. The
generated tessellation is shown at three successive zoom-ins. The frames form a testimony of the
strong adaptivity of the Delaunay tessellation to the local density and geometry of the spatial point
distribution. }
\label{fig:delpntadapt}
\end{figure*}
\subsection{Spatial Adaptivity}
The distribution of the Delaunay and Voronoi cells adjusts itself to the characteristics of the point
distribution: in sparsely sampled regions the distance between the {\it natural neighbours} is large,
and also if the spatial distribution is anisotropic this will be reflected in their distribution. This
may be readily appreciated from fig.~\ref{fig:delpntadapt}. At three successive spatial scales the
Delaunay tessellation has traced the density and geometry of the local point distribution to a
remarkable degree. On the basis of these observations it is straightforward to appreciate that the
corresponding Delaunay tessellation forms an ideal adaptive multi-dimensional interpolation grid.
Note that not only the size, but also the shape of the Delaunay simplices is fully determined
by the spatial point distribution. The density and geometry of the local point distribution
will therefore dictate the resolution of spatial interpolation and reconstruction
procedures exploiting the Delaunay tessellation. The prime representatives of such
methods are the Natural Neighbour techniques (see sect.~\ref{sec:nn}) and the Delaunay Tessellation
Field Estimator.
These techniques exploit the fact that a Delaunay tetrahedron may be regarded as the optimal multidimensional
interpolation interval. The corresponding minimal coverage characteristics of the Delaunay tessellation thus
imply it to be optimal for defining a network of multidimensional interpolation intervals. The resulting interpolation
kernels of Natural Neighbour and DTFE interpolation not only embody an optimal spatial resolution, but also involve a
high level of adaptivity to the local geometry of the point distribution (see sect.~\ref{sec:nn} and sect.~\ref{sec:dtfe}).
Within this context it is no coincidence that in many computer visualisation applications Delaunay tessellations have
acquired the status of optimal triangulation. Moreover, the superb spatial adaptivity of the volumes of Delaunay and Voronoi
polyhedra to the local point density may be readily translated into measures for the
value of the local density. This forms a crucial ingredient of the DTFE formalism (see sect.~\ref{sec:dtfe}).
\subsection{Voronoi and Delaunay Tessellations: context}
The earliest significant use of Voronoi regions seems to have occurred in the work of \cite{dirichlet1850} and
\cite{voronoi1908} in their investigations on the reducibility of positive definite quadratic forms. However, Dirichlet
and Voronoi tessellations
as applied to random point patterns appear to have arisen independently in various fields of science and technology
\citep{okabe2000}. For example, in crystallography, one simple model of crystal growth starts with a fixed
collection of sites in two- and three-dimensional space, and allows crystals to begin growing from each site,
spreading out at a uniform rate in all directions, until all space is filled. The ``crystals'' then consist of all
points nearest to a particular site, and consequently are just the Voronoi regions for the original set of points.
Similarly, the statistical analysis of meteorological data led to the formulation of Voronoi
regions under the name {\it Thiessen polygons} \citep{thiessen1911}.
Applications of Voronoi tessellations can therefore be found in fields as diverse as agriculture and forestry
\citep{fischermiles1973}, astrophysics \citep[e.g.][]{kiang1966,ickewey1987,ebeling1993,bernwey96,molchanov1997,schaapwey2000,
cappellari2003,ritzericke2006}, ecology, zoology and botany, cell biology \citep{shapiro2006}, protein research \citep{liang1998a,liang1998b,
liang1998c}, cancer research \citep{torquato2000,schaller2005}, chemistry, crystal growth and structure, materials science
\citep[see e.g.][incl. many references]{torquato2002}, geophysics \citep{sbm1995}, geography and geographic information
systems \citep{boots1984,gold1997}, communication theory \citep{baccelli1999} and art and archaeology \citep{kimia2001,leymarie2003}.
Due to the diversity of these applications the concept has acquired a set of alternative names, such as Dirichlet regions, Wigner-Seitz cells, and
Thiessen figures.
\section{Natural Neighbour Interpolation}
\label{sec:nn}
The {\it Natural Neighbour Interpolation} formalism is a generic higher-order multidimensional
interpolation, smoothing and modelling procedure utilizing the concept of natural neighbours to obtain
locally optimized measures of system characteristics. Its theoretical basis was developed and
introduced by \cite{sibson1981}, while extensive treatments and elaborations of nn-interpolation may be
found in \cite{watson1992,sukumarphd1998}.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=12.2cm]{weyval.fig18.ps}}
\end{center}
\caption{Natural Neighbour Interpolants. Example of NN interpolation in
two dimensions. Left: the Voronoi cell ${\cal V}({\bf x})$ generated by a
point ${\bf x}$. Right: the 2nd order Voronoi cell ${\cal V}_2({\bf x},{\bf x}_1)$, the region
of space for which the points ${\bf x}$ and ${\bf x}_1$ are the closest points.
Image courtesy M. Sambridge and N. Sukumar. Also see Sambridge, Braun \& McQueen 1995 and Sukumar 1998.}
\label{fig:sibson1}
\end{figure*}
Natural neighbour interpolation produces a conservative, artifice-free result
by finding weighted averages, at each interpolation point, of the functional values associated with
that subset of data which are natural neighbours of the interpolation point. Unlike other schemes,
like Shepard's interpolant \citep{shepard1968}, where distance-based weights are used, Sibson natural neighbour
interpolation uses area-based weights. According to the nn-interpolation scheme the
interpolated value ${\widehat f}({\bf x})$ at a position ${\bf x}$ is given by
\begin{equation}
{\widehat f}({\bf x})\,=\,\sum_i\,\phi_{nn,i}({\bf x})\,f_i\,,
\label{eq:nnint}
\end{equation}
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=12.2cm]{weyval.fig19.ps}}
\end{center}
\caption{Natural Neighbour Interpolants. Example of NN interpolation in
two dimensions. The basis of the nn-interpolation kernel $\phi_{nn,1}$,
the contribution of nucleus ${\bf x}_1$ to the interpolated field values.
Image courtesy M. Sambridge and N. Sukumar. Also see Sambridge, Braun \& McQueen 1995 and Sukumar 1998.}
\label{fig:sibson2}
\end{figure*}
\begin{figure*}[b]
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=12.0cm]{weyval.fig20.ps}}
\end{center}
\caption{Natural Neighbour Interpolants. Example of NN interpolation in
two dimensions. 3-D illustration of the nn-interpolation kernel $\phi_{nn,1}$, the contribution
of nucleus ${\bf x}_1$ to the interpolated field values. Image courtesy M. Sambridge, also see Braun \& Sambridge 1995.
Reproduced by permission of Nature.}
\label{fig:sibson3}
\end{figure*}
in which the summation is over the natural neighbours of the point ${\bf x}$, i.e. the
sample points $j$ for which the order-2 Voronoi cells ${\cal V}_2({\bf x},{\bf x}_j)$ are
not empty (fig.~\ref{fig:sibson1},~\ref{fig:sibson2}). Sibson interpolation takes
the interpolation kernel $\phi({\bf x},{\bf x}_j)$ equal to the normalized order-2 Voronoi cell,
\begin{equation}
\phi_{nn,i}({\bf x})\,=\,{\displaystyle {{\cal A}_{2}({\bf x},{\bf x}_i)} \over
{\displaystyle {\cal A}({\bf x})}}\,,
\label{eq:nnintint}
\end{equation}
in which ${\cal A}({\bf x})=\sum_j {\cal A}_2({\bf x},{\bf x}_j)$ is the volume of the
potential Voronoi cell of point ${\bf x}$ if it had been added to the point sample ${\cal P}$ and
the volume ${\cal A}_{2}({\bf x},{\bf x}_i)$ concerns the order-2 Voronoi cell
${\cal V}_2({\bf x},{\bf x}_i)$, the region of space for which the points ${\bf x}$ and ${\bf x_i}$
are the closest points. Notice that the interpolation kernels $\phi$ are always positive and sum to one.
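The Sibson weights can be computed numerically by inserting the point ${\bf x}$ into the point set and measuring the Voronoi area it ``steals'' from each neighbour. A two-dimensional Python sketch (assuming SciPy; valid only when all affected Voronoi cells are bounded, i.e. for ${\bf x}$ deep inside the point cloud):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def cell_areas(vor):
    # area of each bounded Voronoi cell; unbounded boundary cells get inf
    areas = np.full(len(vor.point_region), np.inf)
    for i, reg in enumerate(vor.point_region):
        verts = vor.regions[reg]
        if verts and -1 not in verts:
            areas[i] = ConvexHull(vor.vertices[verts]).volume
    return areas

def sibson_weights(pts, x):
    before = cell_areas(Voronoi(pts))
    after = cell_areas(Voronoi(np.vstack([pts, x])))
    stolen = before - after[:-1]         # area each old cell loses to x
    stolen[~np.isfinite(stolen)] = 0.0   # untouched boundary cells
    stolen[stolen < 1e-12] = 0.0         # clip floating-point noise
    return stolen / stolen.sum()         # normalized order-2 volumes

rng = np.random.default_rng(3)
pts = rng.random((120, 2))
x = np.array([0.5, 0.5])
w = sibson_weights(pts, x)

# Sibson interpolation has linear precision: it reproduces linear fields
f = 2.0 * pts[:, 0] + 3.0 * pts[:, 1] + 1.0
print(abs(w @ f - (2.0 * x[0] + 3.0 * x[1] + 1.0)) < 1e-4)
```

The final check illustrates the linear-precision property of the Sibson interpolant; the weights are non-negative and sum to unity by construction.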
\begin{figure*}[t]
\begin{center}
\mbox{\hskip -0.2truecm\includegraphics[width=12.0cm]{weyval.fig21.ps}}
\end{center}
\caption{Natural Neighbour Kernels: illustration of the locally defined natural neighbour
kernels $\phi_{nn,j}$ for four different nuclei $j$. Image courtesy M. Sambridge,
also see Braun \& Sambridge 1995. Reproduced by permission of Nature.}
\label{fig:sibson4}
\end{figure*}
The resulting function is continuous everywhere within the convex hull of the data, and has a continuous
slope everywhere except at the data themselves (fig.~\ref{fig:sibson3}, fig.~\ref{fig:sibson4}). Beautiful two-dimensional examples
of nn-interpolation applications testify to its virtues (see fig.~\ref{fig:landscapint}).
An interesting study of its performance, in comparison with other interpolation/approximation methods, concerned
a test involving data acquired by the Mars Orbiter Laser Altimeter (MOLA), one of the instruments on the
Mars Global Surveyor (deceased November 2006). Applying various schemes towards
the reconstruction of a terrain near the Korolev crater, a large crater in the north polar region of Mars,
demonstrated that natural neighbour interpolation produced the most impressive reproduction of the original terrain
\citep[]{abramov2004}. In comparison to the other methods -- including nearest neighbour interpolation, spline interpolation, and
linear DTFE type interpolation -- the nn-neighbour map not only looks most natural but also proves to contain fewer
artefacts, both in number and severity. The test reconstructions of the Jokulsa and Fjollum region of Iceland in
fig.~\ref{fig:landscapint} provide ample proof.
Natural neighbour interpolation may rightfully be regarded as the most general and robust method
for multidimensional interpolation available to date. This smooth and local spatial interpolation technique
has indeed gradually acquired recognition as a dependable, optimal, local method. For the two-dimensional
applications it has seen highly interesting applications in geophysical fluid dynamics calculations
\citep{braunsambridge1995,sbm1995}, and in equivalent schemes for solid mechanics problems \citep{sukumarphd1998}.
Both applications used n-n interpolation to solve partial differential equations, showing its great potential in the field
of computational fluid mechanics.
While almost optimal in the quality of its reconstructions, it still involves a heavy computational effort. This
is certainly true for three or higher dimensional spaces. This has prodded us to define a related local, adaptive
method which can be applied to large astronomical datasets.
\section{DTFE: the Delaunay Tessellation Field Estimator}
\label{sec:dtfe}
For three-dimensional samples with large numbers of points, akin to those found in large cosmological computer simulations,
the more complex geometric operations involved in pure nn-neighbour interpolation still represent a computationally
challenging task. To deal with large point samples consisting of hundreds of thousands to several millions of points
we chose to follow a related nn-neighbour based technique that restricts itself to pure linear interpolation.
DTFE uses the Voronoi tessellation of the point sample to obtain optimal {\it local} estimates of the spatial density \citep[see][sect 8.5]{okabe2000}, while
the tetrahedra of its {\it dual} Delaunay tessellation are used as multidimensional intervals for {\it linear} interpolation of
the field values sampled or estimated at the location of the sample points \citep[ch. 6]{okabe2000}. The DTFE technique allows
us to follow the same geometrical and structural adaptive properties of the higher order nn-neighbour methods while allowing
the analysis of truly large data sets. The tests presented in this review will demonstrate that DTFE indeed is able to highlight
and analyze essential elements of the cosmic matter distribution.
\subsection{DTFE: the Point Sample}
\noindent The DTFE procedure, outlined in the flow diagram in fig.~\ref{fig:dtfescheme}, involves a sequence of steps.
It starts with outlining the discrete point sample ${\cal P}$ in $d$-dimensional space $\Re^d$,
\begin{equation}
{\cal P}=\{{\bf x}_1,\ldots,{\bf x}_N\}\,.
\end{equation}
In the applications described in this study we restrict ourselves to Euclidean spaces, in
particular $2$- or $3$-dimensional Euclidean spaces. At the locations of the points in the countable
set ${\cal P}$ the field values $\{f({\bf x}_i),i=1,\ldots,N\}$ are sampled, or can be estimated on
the basis of the spatial point distribution itself. The prime example of the latter is when the field
$f$ involves the density field itself.
On the basis of the character of the sampled field $f({\bf x})$ we need to distinguish two options.
The first option is the one defined in \cite{bernwey96}, the straightforward multidimensional linear interpolation of
measured field values on the basis of the Delaunay tessellation. \cite{schaapwey2000} and \cite{schaapphd2007}
extended this to the recovery of the density or intensity field. DTFE is therefore additionally characterized by a second option,
the ability to reconstruct the underlying density field from the discrete point process itself.
The essential extension of DTFE with respect to \cite{bernwey96} is that it allows the option of using the {\it point sample
process} ${\cal P}$ itself as a measure for the value of the {\it density} at its position. The latter
poses some stringent requirements on the sampling process. It is crucial for the discrete spatial point
distribution to constitute a fair sample of the underlying continuous density field. In other words,
the discrete point sample ${\cal P}$ needs to constitute a general Poisson point process of the density
field.
Such stringent requirements on the spatial point process ${\cal P}$ are not necessary when the sampled field
has a more generic character. As long as the sample points are spread throughout most of the sample volume
the interpolation procedure will yield a properly representative field reconstruction. It was this situation
with respect to cosmic velocity fields which lead to the original definition of the Delaunay spatial
interpolation procedure forming the basis of DTFE \citep{bernwey96,bernwey97}.
\subsection{DTFE: Linear Interpolation}
At the core of the DTFE procedure is the use of the Delaunay tessellation of the discrete point
sample process (see sect.~\ref{sec:delaunay}) as an adaptive spatial linear interpolation grid.
Once the Delaunay tessellation of ${\cal P}$ is
determined, it is used as a {\it multidimensional linear interpolation grid} for a
field $f({\bf r})$. That is, each Delaunay tetrahedron is supposed to be a region with a constant
field gradient, $\nabla f$.
The linear interpolation scheme of DTFE exploits the same spatially adaptive characteristics of the Delaunay tessellation
generated by the point sample ${\cal P}$ as that of regular natural neighbour
schemes (see sect.~\ref{sec:nn}, eqn.~\ref{eq:nnint}). For DTFE the interpolation kernel $\phi_{dt,i}({\bf x})$
is that of regular linear interpolation within the Delaunay tetrahedron in which ${\bf x}$ is located,
\begin{equation}
{\widehat f}_{dt}({\bf x})\,=\sum_i\,\phi_{dt,i}({\bf x})\,f_i,
\label{eq:dtfeint}
\end{equation}
in which the sum is over the $D+1$ sample points defining the Delaunay simplex (four in three dimensions). Note that not only the size, but also
the shape of the Delaunay simplices is fully determined by the spatial point distribution. As a result the resolution of
the DTFE procedure depends on both the density and geometry of the local point distribution. Not only does the
DTFE kernel embody an optimal spatial resolution, it also involves a high level of adaptivity to the local geometry
of the point distribution (see sect.~\ref{sec:dtfekernel}).
Also note that for both the nn-interpolation as well as for the linear DTFE interpolation,
the interpolation kernels $\phi_i$ are unity at sample point location ${\bf x}_i$ and
equal to zero at the location of the other sample points $j$ (e.g. see fig.~\ref{fig:sibson3}),
\begin{eqnarray}
\phi_i({\bf x}_j)\,=\,
{\begin{cases}
1\hskip 2.truecm {\rm if}\,i=j\,,\\
0\hskip 2.truecm {\rm if}\,i\ne j\,,
\end{cases}}\,
\end{eqnarray}
where ${\bf x}_j$ is the location of sample point $j$.
In practice, it is convenient to replace eqn.~\ref{eq:dtfeint} with its equivalent expression
in terms of the (linear) gradient ${\widehat {\nabla f}}\bigr|_m$ inside the Delaunay simplex $m$,
\begin{equation}
{\widehat f}({\bf x})\,=\,{\widehat f}({\bf x}_{i})\,+\,{\widehat {\nabla f}} \bigl|_m \,\cdot\,({\bf x}-{\bf x}_{i}) \,.
\label{eq:fieldval}
\end{equation}
\noindent The value of ${\widehat {\nabla f}}\bigr|_m$ can be easily and uniquely determined from the $(1+D)$ field values $f_j$ at
the sample points constituting the vertices of a Delaunay simplex. Given the location ${\bf r}=(x,y,z)$ of the four
points forming the Delaunay tetrahedron's vertices, ${\bf r}_0$, ${\bf r}_1$, ${\bf r}_2$ and ${\bf r}_3$,
and the values of the sampled field at each of these locations, $f_0$, $f_1$, $f_2$ and $f_3$, and defining the
quantities
\begin{eqnarray}
\Delta x_n &\,=\,& x_n-x_0\,;\nonumber\\
\Delta y_n &\,=\,& y_n -y_0\,;\qquad \hbox{for}\ n=1,2,3\\
\Delta z_n &\,=\,&z_n -z_0\nonumber
\end{eqnarray}
\noindent as well as $\Delta f_n\,\equiv\,f_n-f_0\,(n=1,2,3)$ the gradient $\nabla f$ follows from the inversion
\begin{eqnarray}
\nabla f\,=\,
\begin{pmatrix}
{\displaystyle \partial f \over \displaystyle \partial x}\\
\ \\
{\displaystyle \partial f \over \displaystyle \partial y}\\
\ \\
{\displaystyle \partial f \over \displaystyle \partial z}
\end{pmatrix}
\,=\,{\bf A}^{-1}\,
\begin{pmatrix}
\Delta f_{1} \\ \ \\ \Delta f_{2} \\ \ \\ \Delta f_{3} \\
\end{pmatrix}\,;\qquad
{\bf A}\,=\,
\begin{pmatrix}
\Delta x_1&\Delta y_1&\Delta z_1\\
\ \\
\Delta x_2&\Delta y_2&\Delta z_2\\
\ \\
\Delta x_3&\Delta y_3&\Delta z_3
\end{pmatrix}
\label{eq:fieldgrad}
\end{eqnarray}
\noindent Once the value of $\nabla f$ has been determined for each Delaunay tetrahedron in the tessellation, it is
straightforward to determine the DTFE field value ${\widehat f}({\bf x})$ for any location ${\bf x}$ by means
of straightforward linear interpolation within the Delaunay tetrahedron in which ${\bf x}$ is located
(eqn.~\ref{eq:fieldval}).
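The gradient inversion of eqn.~(\ref{eq:fieldgrad}) and the subsequent linear interpolation of eqn.~(\ref{eq:fieldval}) amount to a few lines of linear algebra. A minimal Python sketch for a single tetrahedron; any exactly linear field is reproduced to machine precision:

```python
import numpy as np

def simplex_gradient(r, f):
    # eqn. (fieldgrad): A grad = (f_n - f_0), with rows A_n = r_n - r_0
    return np.linalg.solve(r[1:] - r[0], f[1:] - f[0])

def dtfe_value(r, f, x):
    # eqn. (fieldval): linear interpolation inside the tetrahedron
    return f[0] + simplex_gradient(r, f) @ (x - r[0])

# a non-degenerate reference tetrahedron and an exactly linear field
r = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
a, b = np.array([1.0, -2.0, 0.5]), 3.0
f = r @ a + b                      # field values at the four vertices
x = r.mean(axis=0)                 # the tetrahedron's barycentre
print(np.allclose(simplex_gradient(r, f), a),
      np.isclose(dtfe_value(r, f, x), x @ a + b))
```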
The one remaining complication is to locate the Delaunay tetrahedron ${\cal D}_m$ in which a particular
point ${\bf x}$ is located. This is not as trivial as one might naively think: it does not necessarily concern
a tetrahedron of which the nearest nucleus is a vertex. Fortunately, a very efficient method, the {\it
walking triangle algorithm}~\citep{lawson1977,sloan1987} has been developed. Details of the method may be
found in \cite{sbm1995,schaapphd2007}.
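In practice, library routines provide such point location. With SciPy, for instance, `Delaunay.find_simplex` performs (by default) a directed search through the triangulation, in the spirit of the walking-triangle idea. A sketch:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)
pts = rng.random((200, 3))
tri = Delaunay(pts)

x = np.array([0.5, 0.5, 0.5])
m = int(tri.find_simplex(x))     # index of the tetrahedron containing x
print(m >= 0)                    # True: x lies inside the convex hull

# verify: barycentric coordinates of x in simplex m are all non-negative
T = tri.transform[m]
bary = T[:3] @ (x - T[3])
bary = np.append(bary, 1.0 - bary.sum())
print(np.all(bary > -1e-9))
```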
\subsection{DTFE: Extra Features}
While DTFE in essence is a first order version of {\it Natural Neighbour Interpolation} procedure, following the same adaptive
multidimensional interpolation characteristics of the Delaunay grid as the higher-order nn-neighbour techniques, it also
incorporates significant extensions and additional aspects. In particular, DTFE involves two extra and unique features
which are of crucial importance for the intended cosmological context and application:
\begin{itemize}
\item[$\bullet$] {\it Volume-weighted}:\\
The interpolated quantities are {\it volume-weighted}, instead of the implicit {\it mass-weighted} averages
yielded by conventional grid interpolations.
\item[$\bullet$] {\it Density estimates}:\\
The spatial adaptivity of the Delaunay/Voronoi tessellation to the underlying point distribution is used
to estimate the local density.
\end{itemize}
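The second feature can be sketched in a few lines: the contiguous Voronoi cell ${\cal W}_i$ of a point is the union of the Delaunay simplices of which it is a vertex, and for unit weights the density estimate is ${\widehat \rho}_i = (1+D)/V({\cal W}_i)$. A two-dimensional Python sketch, assuming SciPy:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(11)
pts = rng.random((500, 2))
D = 2
tri = Delaunay(pts)

# area of every Delaunay triangle (shoelace formula)
p = pts[tri.simplices]
v1, v2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])

# contiguous Voronoi cell volume: sum of the triangles sharing vertex i
W = np.zeros(len(pts))
for simplex, A in zip(tri.simplices, areas):
    W[simplex] += A

rho = (1 + D) / W          # density estimate with unit weights w_i = 1
interior = np.all((pts > 0.1) & (pts < 0.9), axis=1)
print(np.all(rho > 0), round(float(np.median(rho[interior]))))
```

For a uniform sample the interior estimates scatter, with the expected gamma-like skew, around the true number density (here $500$ points per unit area).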
\begin{figure*}
\vskip 0.5truecm
\mbox{\hskip -1.5truecm\includegraphics[width=22.0cm,angle=90.0]{weyval.fig22.ps}}
\caption{Flow diagram illustrating the essential ingredients of the DTFE procedure.}
\label{fig:dtfescheme}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=11.5cm]{weyval.fig23.ps}
\caption{Summary: overview of the essential steps of the DTFE reconstruction
procedure. Given a point distribution (top left), one has to
construct its corresponding Delaunay tessellation (top right),
estimate the density at the position of the sampling points by
taking the inverse of the area of their corresponding contiguous
Voronoi cells (bottom right) and finally to assume that the
density varies linearly within each Delaunay triangle, resulting
in a volume-covering continuous density field (bottom left).}
\label{fig:dtfepanel}
\end{figure*}
\section{DTFE: the field reconstruction procedure}
\label{sec:dtfe_proc}
The complete DTFE reconstruction procedure, essential steps of which are illustrated
in Fig.~\ref{fig:dtfepanel}, can be summarized in terms of the flow diagram
in Fig.~\ref{fig:dtfescheme} and consists of the following sequence of steps.
\begin{enumerate}
\item[$\bullet$] {\bf Point sample}\\
Defining the spatial distribution of the point sample:
\begin{enumerate}
\item[+] {\it Density field}:\\ point sample needs
to be a general Poisson process of the (supposed) underlying density field, i.e.
it needs to be an unbiased sample of the underlying density field.
\item[+] {\it General (non-density) field}:\\
no stringent requirements upon the stochastic representativeness of the sampling
process are necessary, except that the sample volume is adequately covered.
\end{enumerate}
\medskip
\item[$\bullet$] {\bf Boundary Conditions}\\
An important issue, with respect to the subsequent Delaunay tessellation computation and the self-consistency
of the DTFE density and velocity field reconstructions, is that of the assumed
boundary conditions. These will determine the Delaunay and Voronoi cells that overlap
the boundary of the sample volume. Dependent upon the sample at hand, a variety of
options exists:\\
\begin{enumerate}
\item[+] {\it Vacuum boundary conditions:}\\ outside the sample volume there are no points. This will lead
to infinitely extended (contiguous) Voronoi cells surrounding sample points near the boundary. Evidently,
these cells cannot be used for DTFE density field estimates and field interpolations: the volume of the
DTFE reconstruction is smaller than that of the sample volume. \\
\item[+] {\it Periodic boundary conditions:}\\ the point sample is supposed to be repeated periodically
in boundary boxes, defining a toroidal topology for the sample volume. The resulting Delaunay and
Voronoi tessellations are also periodic, their total volume exactly equal to the sample volume. While specific
periodic tessellation algorithms do exist~\citep{weygaert1994}, this is not yet true for most available
routines in standard libraries. For the analysis of N-body
simulations this is the most straightforward and preferable choice.\\
\item[+] {\it Buffer conditions:}\\ the sample volume box is surrounded by a bufferzone filled with
a synthetic point sample. The density of the synthetic buffer point sample should be related to the
density in the nearby sample volume. The depth of the bufferzone depends on the density of the synthetic point
sample; it should be sufficiently wide for any Delaunay or Voronoi cell related to a sample point not
to exceed the bufferzone. A clear condition for a sufficiently deep bufferzone has been specified by
\cite{neyrinck2005}.\\
\ \\
When involving velocity field analysis, the velocities of the buffer points should also follow the velocity
distribution of the sample and be in accordance with the continuity equation. Relevant examples of possible
choices are:\\
\par\indent \hangindent2\parindent \textindent{-} {\it internal:} the analyzed sample is a subsample embedded within a large sample volume,
a sufficient number of these sample points outside the analysis volume is taken along in the DTFE
reconstruction.
\par\indent \hangindent2\parindent \textindent{-} {\it random cloning technique:} akin to the technique described by \cite{yahil1991}.
\par\indent \hangindent2\parindent \textindent{-} {\it constrained random field:} realizations employing the existing correlations in the
field \citep[][]{edbert1987,hofmrib1991,weyedb1996,zaroubi1995}.
\end{enumerate}
\medskip
\item[$\bullet$] {\bf Delaunay Tessellation}\\
Construction of the Delaunay tessellation from the point sample (see fig.~\ref{fig:delpntadapt}).
While we still use our own Voronoi-Delaunay code~\citep{weygaert1994}, at present there is a score
of efficient library routines available. Particularly noteworthy is the \texttt{CGAL}\ initiative,
a large library of computational geometry routines\footnote{\texttt{CGAL}\ is a \texttt{C++} library of
algorithms and data structures for Computational Geometry, see \url{www.cgal.org}.}\\
\medskip
\item[$\bullet$] {\bf Field values point sample}\\
Dependent on whether it concerns the densities at the sample points or a
measured field value there are two options:
\begin{enumerate}
\item[+] {\it General (non-density) field}:\\ (Sampled) value of field at sample point. \\
\item[+] {\it Density field}:
The density values at the sampled points are determined from the corresponding Voronoi tessellations.
The estimate of the density at each sample point is the normalized inverse of the volume of its {\it contiguous}
Voronoi cell ${\cal W}_i$ of each point $i$. The {\it contiguous Voronoi cell} of a point $i$ is the union of
all Delaunay tetrahedra of which point $i$ forms one of the four vertices (see fig.~\ref{fig:voronoicont} for an
illustration). We recognize two applicable situations:\\
\par\indent \hangindent2\parindent \textindent{-} {\it uniform sampling process}: the point sample is an unbiased sample of the underlying density field. A typical
example is that of $N$-body simulation particles. For $D$-dimensional space the density estimate is,
\begin{equation}
{\widehat \rho}({\bf x}_i)\,=\,(1+D)\,\frac{w_i}{V({\cal W}_i)} \,.
\label{eq:densvor}
\end{equation}
\noindent with $w_i$ the weight of sample point $i$, usually we assume the same
``mass'' for each point. \\
\par\indent \hangindent2\parindent \textindent{-} {\it systematic non-uniform sampling process}: sampling density according to specified
selection process quantified by an a priori known selection function $\psi({\bf x})$, varying as
function of sky position $(\alpha,\delta)$ as well as depth/redshift. For $D$-dimensional space the
density estimate is,
\begin{equation}
{\widehat \rho}({\bf x}_i)\,=\,(1+D)\,\frac{w_i}{\psi({\bf x}_i)\,V({\cal W}_i)} \,.
\label{eq:densvornu}
\end{equation}
\end{enumerate}
\medskip
\item[$\bullet$] {\bf Field Gradient}\\
Calculation of the field gradient estimate $\widehat{\nabla f}|_m$
in each $D$-dimensional Delaunay simplex $m$ ($D=3$: tetrahedron; $D=2$: triangle)
by solving the set of linear equations for the field values at the positions
of the $(D+1)$ tetrahedron vertices,\\
\begin{eqnarray}
\widehat{\nabla f}|_m \ \ \Longleftarrow\ \
{\begin{cases}
f_0 \ \ \ \ f_1 \ \ \ \ f_2 \ \ \ \ f_3 \\
\ \\
{\bf r}_0 \ \ \ \ {\bf r}_1 \ \ \ \ {\bf r}_2 \ \ \ \ {\bf r}_3 \\
\end{cases}}\,
\label{eq:dtfegrad}
\end{eqnarray}
Evidently, linear interpolation for a field $f$ is only meaningful when the field
does not fluctuate strongly. Particularly relevant for velocity field reconstructions
is that there should be no orbit crossing flows within the volume of the Delaunay
cell which would involve multiple velocity values at any one location. In other words,
DTFE velocity field analysis is only significant for {\it laminar} flows.
Note that in the case of the sampled field being the velocity field ${\bf v}$ we may
not only infer the velocity gradient in a Delaunay tetrahedron, but also the directly
related quantities such as the {\it velocity divergence}, {\it shear} and {\it vorticity}.
\medskip
\begin{figure}
\centering
\mbox{\hskip -0.1truecm\includegraphics[width=11.8cm]{weyval.fig24.ps}}
\caption{Two-dimensional
display-grids in the VTFE, DTFE and grid-based reconstruction
methods. The grid is overlaid on top of the basic underlying
structure used to reconstruct the density field. SPH-like methods
are not shown, because of the inherent difficulty in visualizing
their underlying structure, which does not consist of a
subdivision of space in distinct non-overlapping structural
elements, but of circles of different radius at each position in
space.}
\label{fig:dtfevtfegrid}
\end{figure}
\item[$\bullet$] {\bf Interpolation}.\\
The final basic step of the DTFE procedure is the field interpolation. The processing
and postprocessing steps involve numerous interpolation calculations, for each of the
involved locations ${\bf x}$.
\medskip
Given a location ${\bf x}$, the Delaunay tetrahedron $m$ in which it is embedded is
determined. On the basis of the field gradient $\widehat{\nabla f}|_m$ the field value
is computed by (linear) interpolation (see eq.~\ref{eq:fieldval}),
\begin{equation}
{\widehat f}({\bf x})\,=\,{\widehat f}({\bf x}_{i})\,+\,{\widehat {\nabla f}} \bigl|_m \,\cdot\,({\bf x}-{\bf x}_{i}) \,.
\end{equation}
In principle, higher-order interpolation procedures are also possible. Two relevant high-order procedures are:
\par\indent \hangindent2\parindent \textindent{-} Spline Interpolation
\par\indent \hangindent2\parindent \textindent{-} Natural Neighbour Interpolation (see eqn.~\ref{eq:nnint} and eqn.~\ref{eq:nnintint}).\\
Implementation of natural neighbour interpolation, on the basis of \texttt{CGAL}\ routines, is presently in progress.
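In practice the point-location and barycentric-coordinate machinery required for this step is available in standard libraries. The sketch below is a minimal illustration (our own, not part of any DTFE code; it assumes \texttt{scipy.spatial} is available): it locates the Delaunay simplex containing a query point and evaluates the linear interpolant of eq.~\ref{eq:fieldval}. Since the interpolant is exact for linear fields, a linear test field is reproduced to machine precision.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_interpolate(tri, fvals, x):
    """Linear DTFE interpolation of fvals (sampled at tri.points) at x."""
    s = int(tri.find_simplex(x[None, :])[0])
    if s < 0:
        raise ValueError("point outside the tessellation")
    D = tri.points.shape[1]
    T = tri.transform[s]
    b = T[:D].dot(x - T[D])             # first D barycentric coordinates
    bary = np.append(b, 1.0 - b.sum())  # coordinates sum to unity
    return bary.dot(fvals[tri.simplices[s]])

# sample points (plus box corners, so the query point lies inside the hull)
rng = np.random.default_rng(42)
pts = np.vstack([rng.random((30, 2)),
                 [[0, 0], [1, 0], [1, 1], [0, 1]]])
# linear test field f(x, y) = 2x + 3y + 1: reproduced exactly by DTFE
fvals = 2.0 * pts[:, 0] + 3.0 * pts[:, 1] + 1.0

tri = Delaunay(pts)
fint = dtfe_interpolate(tri, fvals, np.array([0.5, 0.5]))
```

The same routine, applied point by point, underlies all the processing and postprocessing steps discussed below.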
\bigskip
\begin{figure}
\vskip -0.25truecm
\centering
\mbox{\hskip -0.5truecm\includegraphics[width=12.5cm]{weyval.fig25.ps}}
\caption{Image of a characteristic filamentary region in the
outcome of a cosmological $N$-body simulation. Left-hand
frame: particle distribution in a thin slice through the simulation
box. Right-hand frame: two-dimensional slice through the three-dimensional DTFE density
field reconstruction. From \citep{schaapphd2007}.}
\label{fig:filpartdtfe}
\end{figure}
\item[$\bullet$] {\bf Processing}.\\
Though basically of the same character, for practical purposes we make a distinction between
straightforward processing steps concerning the production of images and simple smoothing
filtering operations on the one hand, and more complex postprocessing on the other hand.
The latter are treated in the next item. Basic to the processing steps is the determination
of field values following the interpolation procedure(s) outlined above.\\
Straightforward ``first line'' field operations are {\it ``Image reconstruction''} and,
subsequently, {\it ``Smoothing/Filtering''}.\\
\begin{enumerate}
\item[+] {\it Image reconstruction}.\\ For a set of {\it image points}, usually grid points,
determine the {\it image value}: formally the average field value within the corresponding gridcell.
In practice a few different strategies may be followed, dictated by accuracy requirements. These
are:\\
\par\indent \hangindent2\parindent \textindent{-} {\it Formal geometric approach}: integrate over the field values within
each gridcell. This implies the calculation of the intersection of the relevant
Delaunay tetrahedra with the gridcell and integration of the (linearly) running field values within each
intersection. Subsequently, the integrals over the Delaunay intersections are added and
averaged over the gridcell volume.
\par\indent \hangindent2\parindent \textindent{-} {\it Monte Carlo approach}: approximate the integral by averaging over the
(interpolated) field values probed at a number of randomly distributed
locations within the gridcell around an {\it image point}.
\par\indent \hangindent2\parindent \textindent{-} {\it Singular interpolation approach}: a reasonable and usually satisfactory alternative to the
formal geometric or Monte Carlo approach is the shortcut to limit the field value calculation to that at
the (grid) location of the {\it image point}. This offers a reasonable approximation for gridcells which
are smaller than or comparable to the intersecting Delaunay cells, on the condition that the field gradient within
the cell(s) is not too large.
\item[+] {\it Smoothing} and {\it Filtering}:
\par\indent \hangindent2\parindent \textindent{-} Linear filtering of the field ${\widehat f}$: convolution of the field
${\widehat f}$ with a filter function $W_s({\bf x},{\bf y})$, usually user-specified,
\begin{equation}
f_s({\bf x})\,=\,\int\,{\widehat f}({\bf x'})\, W_s({\bf x},{\bf x'})\,d{\bf x'}
\end{equation}
\par\indent \hangindent2\parindent \textindent{-} Median (natural neighbour) filtering: the DTFE density field is adaptively
smoothed on the basis of the median value of densities within the {\it
contiguous Voronoi cell}, the region defined by a point and its
{\it natural neighbours} \citep[see][]{platen2007}.
\par\indent \hangindent2\parindent \textindent{-} (Nonlinear) diffusion filtering: filtering of (sharply defined) features in
images by solving appropriately defined diffusion equations \citep[see e.g.][]{mitra2000}.
\end{enumerate}
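The Monte Carlo strategy for image reconstruction can be sketched in a few lines. In the illustration below (our own sketch, with hypothetical names; a simple analytic field stands in for the point-wise DTFE evaluator) the image value of a gridcell is approximated by averaging the field over randomly placed probe locations. For a linear test field the exact cell average equals the field value at the cell centre, which the Monte Carlo estimate recovers to within its sampling noise.

```python
import numpy as np

def field(p):
    # stand-in for a point-wise DTFE field evaluator; any callable works
    return 2.0 * p[..., 0] + 3.0 * p[..., 1] + 1.0

def mc_image_value(field, cell_min, cell_max, nprobe=2000, seed=1):
    """Monte Carlo estimate of the average field value within a gridcell."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(cell_min, float), np.asarray(cell_max, float)
    probes = lo + (hi - lo) * rng.random((nprobe, lo.size))
    return field(probes).mean()

# gridcell [0.4, 0.6] x [0.4, 0.6]; for this linear field the exact cell
# average is the value at the cell centre, 2*0.5 + 3*0.5 + 1 = 3.5
val = mc_image_value(field, [0.4, 0.4], [0.6, 0.6])
```

The number of probe points trades accuracy against cost; the singular interpolation approach corresponds to the degenerate choice of a single probe at the cell centre.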
\medskip
\begin{figure}
\vskip -0.25truecm
\centering
\mbox{\hskip -0.15truecm\includegraphics[width=12.0cm]{weyval.fig26.ps}}
\caption{Filtering of density fields. The left-hand frame
depicts the original DTFE density field. The subsequent frames show
filtered DTFE density fields. The FWHM of the Gaussian filter is indicated by the shaded circle
in the lower left-hand corner of these frames. From \citep{schaapphd2007}.}
\vskip -0.5truecm
\label{fig:filfilt}
\end{figure}
\medskip
\item[$\bullet$] {\bf Post-processing}.\\
The real potential of DTFE fields may be found in sophisticated applications,
tuned towards uncovering characteristics of the reconstructed fields.
An important aspect of this involves the analysis of structures in the
density field. Some notable examples are:\\
\begin{enumerate}
\item[+] Advanced filtering operations. Potentially interesting
applications are those based on the use of wavelets \citep{martinez2005}.
\item[+] Cluster, Filament and Wall detection by means of the
{\it Multiscale Morphology Filter} \citep{aragonphd2007,aragonmmf2007}.
\item[+] Void identification on the basis of the {\it cosmic watershed}
algorithm~\citep{platen2007}.
\item[+] Halo detection in $N$-body simulations \citep{neyrinck2005}.
\item[+] The computation of 2-D surface densities for the study of
gravitational lensing \citep{bradac2004}.
\end{enumerate}
\medskip
In addition, DTFE enables the simultaneous and combined analysis of density
fields and other relevant physical fields. As it allows the simultaneous
determination of {\it density} and {\it velocity} fields, it can serve as
the basis for studies of the dynamics of structure formation in the cosmos.
Its ability to detect substructure as well as to reproduce the morphology
of cosmic features and objects makes DTFE suited for assessing their
dynamics without having to invoke artificial filters.\\
\begin{enumerate}
\item[+] DTFE as basis for the study of the full {\it phase-space} structure of structures
and objects. The phase-space structure of dark haloes in cosmological
structure formation scenarios has been studied by \cite{arad2004}.
\end{enumerate}
\end{enumerate}
\section{DTFE: densities and velocities}
\begin{figure}
\centering
\vskip -0.5truecm
\mbox{\hskip -0.25truecm\includegraphics[height=12.4cm,angle=270.0]{weyval.fig27.ps}}
\vskip 0.0truecm
\caption{Illustration of a contiguous Voronoi cell. A contiguous Voronoi cell
is the union of all Delaunay tetrahedra of which a point $i$ is one of the vertices.}
\vskip -0.5truecm
\label{fig:voronoicont}
\end{figure}
\subsection{DTFE density estimates}
The DTFE procedure extends the concept of interpolation of field values sampled
at the point sample ${\cal P}$ to the estimate of the density ${\widehat \rho}({\bf x})$
from the spatial point distribution itself. This is only feasible if the
spatial distribution of the discrete point sample forms a fair and
unbiased reflection of the underlying density field.
It is commonly known that an optimal estimate for the spatial density at the location of a point ${\bf x}_i$ in a
discrete point sample ${\cal P}$ is given by the inverse of the volume of the corresponding
Voronoi cell \citep[see][section 8.5, for references]{okabe2000}. Tessellation-based methods for estimating the density have
been introduced by \cite{brown1965} and \cite{ord1978}. In astronomy, \cite{ebeling1993} were the first to use tessellation-based density
estimators for the specific purpose of devising source detection algorithms. This work has recently been applied to cluster detection
algorithms by \cite{ramella2001,kim2002,marinoni2002,lopes2004}. Along the same lines, \cite{ascalbin2005} suggested that the use of
a multidimensional binary tree might offer a computationally more efficient alternative. However, these studies have been restricted to
raw estimates of the local sampling density at the position of the sampling points and have not yet included the more elaborate interpolation
machinery of the DTFE and Natural Neighbour Interpolation methods.
\begin{figure}
\centering
\mbox{\hskip -0.25truecm\includegraphics[width=12.5cm]{weyval.fig28.ps}}
\caption{Relation between density and volume contiguous Voronoi cells. Two example
points embedded within a filamentary structure (see fig.~\ref{fig:delpntadapt}).}
\vskip -0.5truecm
\label{fig:voronoidens}
\end{figure}
\subsubsection{Density definition}
\noindent The density field reconstruction of the DTFE procedure consists of two steps,
the zeroth-order estimate ${\widehat \rho}_0$ of the density values at
the location of the points in ${\cal P}$ and the subsequent linear
interpolation of those zeroth-order density estimates over the corresponding Delaunay
grid throughout the sample volume. This yields the DTFE density field estimate
${\widehat \rho}({\bf x})$.
It is straightforward to infer (see next sect.~\ref{sec:masscons}) that if the zeroth-order
estimate of the density values were the inverse of the regular Voronoi volume, the condition
of mass conservation would not be met. Instead, the DTFE procedure employs a slightly modified
yet related zeroth-order density estimate, the normalized inverse of the volume $V({\cal W}_i)$
of the {\it contiguous Voronoi cell} ${\cal W}_i$ of each point $i$. For $D$-dimensional
space this is
\begin{equation}
{\widehat \rho}({\bf x}_i)\,=\,(1+D)\,\frac{m_i}{V({\cal W}_i)} \,.
\label{eq:densdtfe}
\end{equation}
The {\it contiguous Voronoi cell} of a point $i$ is the union of all Delaunay tetrahedra
of which point $i$ forms one of the four vertices (see fig.~\ref{fig:voronoicont}).
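The zeroth-order estimate of eq.~\ref{eq:densdtfe} translates directly into code. The sketch below is a minimal illustration (assuming \texttt{scipy.spatial.Delaunay}; the helper name \texttt{dtfe\_densities} is ours, not that of any published implementation): the contiguous Voronoi cell volume of each point is accumulated as the sum of the volumes of its adjacent Delaunay simplices.

```python
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

def dtfe_densities(points, masses):
    """Zeroth-order DTFE density estimate rho_i = (1 + D) m_i / V(W_i),
    with V(W_i) the volume of the contiguous Voronoi cell of point i."""
    points = np.asarray(points, dtype=float)
    npts, D = points.shape
    tri = Delaunay(points)
    vcont = np.zeros(npts)
    for simplex in tri.simplices:
        verts = points[simplex]
        # volume of a D-dimensional simplex: |det(edge matrix)| / D!
        vol = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(D)
        vcont[simplex] += vol   # each vertex accumulates this simplex
    return (1 + D) * np.asarray(masses) / vcont

rng = np.random.default_rng(7)
pts = rng.random((50, 2))
rho = dtfe_densities(pts, np.ones(50))   # densities at the sample points
```

Note that points on the convex hull obtain truncated contiguous cells and hence overestimated densities: precisely the boundary effect the bufferzone construction is designed to suppress.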
\subsubsection{Mass Conservation}
\label{sec:masscons}
An {\it essential} requirement for the cosmological purposes of our
interpolation scheme is that the estimated DTFE density field ${\widehat \rho}({\bf x})$
should guarantee {\it mass conservation}: the total mass corresponding to the density
field should be equal to the mass represented by the sample points. Indeed,
this is an absolutely crucial condition for many applications of a
physical nature. Since the mass $M$
is given by the integral of the density field $\rho({\bf x})$ over space, this
translates into the integral requirement
\begin{eqnarray}
{\widehat M}&\,=\,&\int {\widehat \rho}({\bf x})\,d{\bf x}\nonumber\\
&\,=\,&\sum_{i=1}^{N} m_i\,=\,M\,=\,{\rm cst.},
\label{eq:integratemass}
\end{eqnarray}
with $m_i=m$ the mass per sample point: the interpolation procedure
should conserve the mass $M$.
The integral eq.~\ref{eq:integratemass} is equal to the volume below the linearly varying
${\widehat \rho}$-surface in $({\bf x},{\widehat \rho})$-space. In this space each Delaunay
tetrahedron $m$ is the base ``hyper-plane'' of a polyhedron ${\cal D}^*_m$ (here we
use the term ``tetrahedron'' for any multidimensional Delaunay simplex). The
total mass corresponding to the density field may therefore be written as the sum of the
volumes of these polyhedra,
\begin{equation}
M~=~\sum_{m=1}^{N_T} V({\cal D}^*_m) \,,
\end{equation}
\noindent with $N_T$ being the total number of Delaunay tetrahedra
in the tessellation and $V({\cal D}^*_m)$ the volume of polyhedron
${\cal D}^*_m$. This volume may be written as the average density at the
vertices of Delaunay tetrahedron ${\cal D}_m$ times its
volume $V({\cal D}_m)$
\begin{equation}
V({\cal D}^*_m)~=~\frac{1}{D+1}\,
\left({\widehat \rho}_{m1}+{\widehat \rho}_{m2}+\ldots+
{\widehat \rho}_{m(D+1)}\right)\, V({\cal D}_m) \, .
\end{equation}
The points $\{m1,m2,\ldots,m(D+1)\}$ are the nuclei which are the vertices of the
Delaunay tetrahedron ${\cal D}_m$. The total mass $M$ contained in the
density field is the sum over all Delaunay tetrahedra within the
sample volume:
\begin{equation}
M~=~\frac{1}{D+1}\,\sum_{m=1}^{N_T}\,
\left({\widehat \rho}_{m1}+{\widehat \rho}_{m2}+\ldots+
{\widehat \rho}_{m(D+1)}\right)\, V({\cal D}_m) \, .
\label{eq:massordered}
\end{equation}
A simple reordering of this sum yields
\begin{equation}
M~=~\frac{1}{D+1}\,\,\sum_{i=1}^N{\widehat \rho}_i\,\sum_{m=1}^{N_{D,i}}
V({\cal D}_{m,i}) \,,
\label{eq:reorderedsum}
\end{equation}
in which ${\cal D}_{m,i}$ is one of the $N_{D,i}$ Delaunay tetrahedra of which
nucleus $i$ is a vertex. The complete set of tetrahedra ${\cal D}_{m,i}$ constitutes the
{\it contiguous Voronoi cell} ${\cal W}_i$ of nucleus $i$. The mass $M$ may therefore
be written as
\begin{equation}
M~=~\frac{1}{D+1}\,\,\sum_{i=1}^N{\widehat \rho}_i\,V(\mathcal{W}_i) \,.
\label{eq: massform}
\end{equation}
This equation immediately implies that $M$ is equal to the
total mass of the sampling points (eq.~\ref{eq:integratemass}) on
the condition that the density at the location of the sampling points
is
\begin{equation}
\label{eq:contigvordens}
{\widehat \rho}({\bf x}_i)~=~(D+1)\,\frac{m_i}{V(\mathcal{W}_i)} \, .
\end{equation}
This shows that the DTFE density estimate (eq.~\ref{eq:densdtfe}),
proportional to the inverse of contiguous Voronoi cell volume,
indeed guarantees mass conservation. The corresponding normalization
constant $(1+D)$ stands for the number of times each Delaunay
tetrahedron is used in defining the DTFE density field, equal to
the $(1+D)$ sample points constituting its vertices.
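The mass conservation property may be verified numerically for an arbitrary point set: integrating the piecewise linear density field simplex by simplex, i.e. summing the average of the vertex densities times the simplex volume as in eq.~\ref{eq:massordered}, recovers the total mass of the sampling points. A minimal sketch of this check (our illustration, assuming \texttt{scipy.spatial}):

```python
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
D, npts = 2, 40
pts = rng.random((npts, D))
m = np.ones(npts)                        # unit mass per sample point

tri = Delaunay(pts)
simp = pts[tri.simplices]                # (n_simplices, D + 1, D)
vols = np.abs(np.linalg.det(simp[:, 1:] - simp[:, :1])) / factorial(D)

# contiguous Voronoi cell volumes and zeroth-order densities
vcont = np.zeros(npts)
for s, vol in zip(tri.simplices, vols):
    vcont[s] += vol
rho = (1 + D) * m / vcont

# integral of the piecewise linear density field: per simplex the
# average of the vertex densities times the simplex volume
M = np.sum(rho[tri.simplices].mean(axis=1) * vols)
# M equals sum(m) = 40 for any point configuration
```

The reordering argument of eqs.~\ref{eq:massordered}--\ref{eq:reorderedsum} guarantees this identity holds exactly, independent of the point distribution.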
\subsubsection{Non-uniform sampling}
Astronomical situations often involve a non-uniform sampling process. Frequently the non-uniform
selection may be quantified by an a priori known selection function $\psi({\bf x})$. This situation
is typical for galaxy surveys: $\psi({\bf x})$ may encapsulate differences in sampling density as
function of
\begin{itemize}
\item[$\bullet$] Sky position $(\alpha,\delta)$\\
In practice, galaxy (redshift) surveys are hardly ever perfectly uniform. Various
kinds of factors -- survey definition, observing conditions, instrumental
aspects, incidental facts -- will result in a non-uniform coverage of the
sky. These may be encapsulated in a {\it survey mask} $\psi(\alpha,\delta)$.
\item[$\bullet$] Distance/redshift: $\psi_z(r)$\\
Magnitude- or flux-limited surveys induce a systematic redshift selection.
At higher redshifts, magnitude-limited surveys can only probe galaxies whose luminosity
exceeds an ever increasing value $L(z)$. The corresponding radial selection function
$\psi_z$ is given by
\begin{equation}
\psi_z(z)~=~\frac{\int_{L(z)}^\infty \Phi(L) {\rm d}L}{\int_{L_0}^\infty \Phi(L) {\rm d}L} \, ,
\label{eq: selfnc}
\end{equation}
where $\Phi(L)$ is the galaxy luminosity function.
\end{itemize}
\noindent Both selection effects will lead to an underestimate of the density values. To correct
for these variations in sky completeness and/or redshift/depth completeness the
estimated density at a sample point/galaxy $i$ is weighted by the inverse
of the selection function,
\begin{equation}
\psi({\bf x}_i)\,=\,\psi_s(\alpha_i,\delta_i)\,\psi_z(r_i)\,,
\end{equation}
\noindent so that we obtain,
\begin{equation}
{\widehat \rho}({\bf x}_i)\,=\,(1+D)\,\frac{m_i}{\psi({\bf x}_i)\,V({\cal W}_i)} \,.
\label{eq:densdtfenonuni}
\end{equation}
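The radial selection function of eq.~\ref{eq: selfnc} is readily evaluated numerically. The sketch below is purely illustrative: it assumes a Schechter form for the luminosity function and a simple Euclidean low-redshift flux limit $L_{\rm min}(z)\propto z^2$; all parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import quad

alpha, Lstar = -1.2, 1.0            # assumed Schechter parameters
L0 = 0.05 * Lstar                   # faint luminosity limit of the sample

def schechter(L):
    # Schechter luminosity function Phi(L) (normalization cancels below)
    return (L / Lstar) ** alpha * np.exp(-L / Lstar) / Lstar

def Lmin(z, z0=0.02):
    # illustrative flux limit: L_min grows as distance squared at low z;
    # below z0 the survey is complete down to L0
    return max(L0, L0 * (z / z0) ** 2)

norm = quad(schechter, L0, np.inf)[0]

def psi_z(z):
    """Radial selection function: fraction of the luminosity function
    brighter than the survey flux limit at redshift z."""
    return quad(schechter, Lmin(z), np.inf)[0] / norm

psi_low, psi_high = psi_z(0.01), psi_z(0.1)   # completeness drops with depth
```

The inverse of $\psi_z$ then enters the density estimate as the weight of eq.~\ref{eq:densdtfenonuni} above.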
While it is straightforward to correct the density estimates accordingly,
a complication occurs for locations where the completeness is very low or
even equal to zero. In general, regions with redshift completeness zero should
be excluded from the correction procedure, even though for specific cosmological
contexts it is feasible to incorporate field reconstruction procedures utilizing
field correlations. A {\it constrained random field} approach \citep{edbert1987,
hofmrib1991,zaroubi1995} uses the autocorrelation function of the presumed
density field to infer a realization of the corresponding Gaussian field. Recently,
\cite{aragonphd2007} developed a DTFE based technique which manages to
reconstruct a structured density field pattern within selection gaps
whose size does not exceed the density field's coherence scale.
\subsection{DTFE Velocity Fields}
\subsubsection{DTFE density and velocity field gradients}
The value of the density and velocity field gradient in each Delaunay tetrahedron is directly and
uniquely determined from the locations ${\bf r}=(x,y,z)$ of the four points forming the Delaunay
tetrahedron's vertices, ${\bf r}_0$, ${\bf r}_1$, ${\bf r}_2$ and ${\bf r}_3$,
and the value of the estimated density and sampled velocities at each of these locations,
$({\widehat \rho}_0,{\bf v}_0)$, $({\widehat \rho}_1,{\bf v}_1)$, $({\widehat \rho}_2,{\bf v}_2)$ and
$({\widehat \rho}_3,{\bf v}_3)$,
\begin{eqnarray}
\begin{matrix}
\widehat{\nabla \rho}|_m\\
\ \\
\widehat{\nabla {\bf v}}|_m
\end{matrix}
\ \ \Longleftarrow\ \
{\begin{cases}
{\widehat \rho}_0 \ \ \ \ {\widehat \rho}_1 \ \ \ \ {\widehat \rho}_2 \ \ \ \ {\widehat \rho}_3 \\
{\bf v}_0 \ \ \ \ {\bf v}_1 \ \ \ \ {\bf v}_2 \ \ \ \ {\bf v}_3 \\
\ \\
{\bf r}_0 \ \ \ \ {\bf r}_1 \ \ \ \ {\bf r}_2 \ \ \ \ {\bf r}_3 \\
\end{cases}}\,
\label{eq:dtfegrad}
\end{eqnarray}
\noindent The four vertices of the Delaunay tetrahedron are both necessary and sufficient for computing the
entire $3 \times 3$ velocity gradient tensor $\partial v_i/\partial x_j$. Evidently, the same
holds for the density gradient $\partial \rho/\partial x_j$. The matrix ${\bf A}$
is defined on the basis of the vertex distances $(\Delta x_n,\Delta y_n, \Delta z_n)$ $(n=1,2,3)$,
\begin{eqnarray}
\begin{matrix}
\Delta x_n \,=\,x_n-x_0\\
\Delta y_n \,=\,y_n -y_0\\
\Delta z_n \,=\,z_n -z_0\\
\end{matrix}
\quad \Longrightarrow\quad
{\bf A}\,=\,
\begin{pmatrix}
\Delta x_1&\Delta y_1&\Delta z_1\\
\ \\
\Delta x_2&\Delta y_2&\Delta z_2\\
\ \\
\Delta x_3&\Delta y_3&\Delta z_3
\end{pmatrix}
\end{eqnarray}
\noindent Similarly defining
$\Delta {\bf v}_n\,\equiv\,{\bf v}_n-{\bf v}_0\,(n=1,2,3)$ and
$\Delta {\rho}_n\,\equiv\,{\rho}_n-{\rho}_0\,(n=1,2,3)$ it is
straightforward to compute directly and simultaneously the
density field gradient $\nabla \rho|_m$ and the velocity field gradient
$\nabla {\bf v}|_m\,=\,\partial v_i/\partial x_j\,$ in Delaunay tetrahedron
$m$ via the inversion,
\begin{eqnarray}
\begin{pmatrix}
{\displaystyle \partial \rho \over \displaystyle \partial x}\\
\ \\
{\displaystyle \partial \rho \over \displaystyle \partial y}\\
\ \\
{\displaystyle \partial \rho \over \displaystyle \partial z}
\end{pmatrix}
\,=\,{\bf A}^{-1}\,
\begin{pmatrix}
\Delta \rho_{1}\\
\ \\
\Delta \rho_{2}\\
\ \\
\Delta \rho_{3}\\
\end{pmatrix}\,;\qquad\qquad
\begin{pmatrix}
{\displaystyle \partial v_x \over \displaystyle \partial x} & \ \ {\displaystyle \partial v_y \over \displaystyle \partial x} & \ \
{\displaystyle \partial v_z \over \displaystyle \partial x}\\
\ \\
{\displaystyle \partial v_x \over \displaystyle \partial y} & \ \ {\displaystyle \partial v_y \over \displaystyle \partial y} & \ \
{\displaystyle \partial v_z \over \displaystyle \partial y} \\
\ \\
{\displaystyle \partial v_x \over \displaystyle \partial z} & \ \ {\displaystyle \partial v_y \over \displaystyle \partial z} & \ \
{\displaystyle \partial v_z \over \displaystyle \partial z}
\end{pmatrix}
&\,=\,&{\bf A}^{-1}\,
\begin{pmatrix}
\Delta v_{1x} & \ \ \Delta v_{1y} & \ \ \Delta v_{1z} \\ \ \\
\Delta v_{2x} & \ \ \Delta v_{2y} & \ \ \Delta v_{2z} \\ \ \\
\Delta v_{3x} & \ \ \Delta v_{3y} & \ \ \Delta v_{3z} \\
\end{pmatrix}\,.
\label{eq:velgrad}
\end{eqnarray}
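The inversion of eq.~\ref{eq:velgrad} amounts to a single $3\times 3$ linear solve per Delaunay tetrahedron. The sketch below (our illustration; the helper name is hypothetical) uses linear test fields, for which the known gradients are recovered to machine precision. Note that in the convention above the solved velocity-gradient matrix carries $\partial v_i/\partial x_j$ in row $j$, column $i$.

```python
import numpy as np

def simplex_gradients(r, rho, v):
    """Density and velocity gradients within one Delaunay tetrahedron.
    r: (4, 3) vertex positions, rho: (4,) densities, v: (4, 3) velocities.
    Returns (grad_rho, gradv) with gradv[j, i] = dv_i/dx_j."""
    A = r[1:] - r[0]                           # vertex distance matrix
    grad_rho = np.linalg.solve(A, rho[1:] - rho[0])
    gradv = np.linalg.solve(A, v[1:] - v[0])
    return grad_rho, gradv

# linear test fields rho = a.r + c and v = M r + v0 (gradients known)
rng = np.random.default_rng(0)
r = rng.random((4, 3))
a = np.array([1.0, -2.0, 0.5])
M = rng.random((3, 3))
rho = r @ a + 0.3
v = r @ M.T + np.array([0.1, 0.2, 0.3])

grad_rho, gradv = simplex_gradients(r, rho, v)
# grad_rho recovers a; gradv recovers M.T (i.e. gradv[j, i] = M[i, j])
```

Looping this solve over all tetrahedra yields the piecewise constant gradient fields used throughout the DTFE reconstruction.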
\subsubsection{Velocity divergence, shear and vorticity}
From the nine velocity gradient components $\partial v_i / \partial x_j$ we can directly
determine the three velocity deformation modes, the velocity divergence $\nabla \cdot {\bf v}$, the
shear $\sigma_{ij}$ and the vorticity ${\bf \omega}$,
\begin{eqnarray}
\nabla \cdot {\bf v}&\,=\,&
\left({\displaystyle \partial v_x \over \displaystyle \partial x} +
{\displaystyle \partial v_y \over \displaystyle \partial y} +
{\displaystyle \partial v_z \over \displaystyle \partial z}\right)\,,\nonumber\\
\ \nonumber\\
\sigma_{ij}&\,=\,&
{1 \over 2}\left\{
{\displaystyle \partial v_i \over \displaystyle \partial x_j} +
{\displaystyle \partial v_j \over \displaystyle \partial x_i}
\right\}\,-\, {1 \over 3} \,(\nabla\cdot{\bf v})\,\delta_{ij} \,,\\
\ \nonumber\\
\omega_{ij}&\,=\,&
{1 \over 2}\left\{
{\displaystyle \partial v_i \over \displaystyle \partial x_j} -
{\displaystyle \partial v_j \over \displaystyle \partial x_i}
\right\}\,.\nonumber
\label{eq:vgradcomp}
\end{eqnarray}
\noindent where the vorticity vector ${\bf \omega}=\nabla \times {\bf v}$ has components $\omega^k=\epsilon^{kij} \omega_{ij}$ ($\epsilon^{kij}$ being the completely
antisymmetric Levi-Civita tensor). In the theory of gravitational instability, there will be no vorticity contribution as
long as there has been no shell crossing (i.e.\ in the linear and mildly nonlinear regime).
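Given the velocity gradient of a tetrahedron, the decomposition of eq.~\ref{eq:vgradcomp} is a few lines of linear algebra. In the sketch below (our illustration) a pure rotation field ${\bf v}={\bf \Omega}\times{\bf r}$ is used, for which the divergence and shear vanish and the curl equals $2{\bf \Omega}$.

```python
import numpy as np

def decompose(dv):
    """Decompose dv[i, j] = dv_i/dx_j into divergence, shear and vorticity."""
    div = np.trace(dv)
    shear = 0.5 * (dv + dv.T) - (div / 3.0) * np.eye(3)
    omega = 0.5 * (dv - dv.T)                  # vorticity tensor
    curl = np.array([dv[2, 1] - dv[1, 2],      # vorticity vector (curl v)
                     dv[0, 2] - dv[2, 0],
                     dv[1, 0] - dv[0, 1]])
    return div, shear, omega, curl

# pure rotation v = Omega x r, for which dv_i/dx_j = eps_{ikj} Omega_k
Omega = np.array([0.0, 0.0, 1.5])
dv = np.array([[0.0, -Omega[2], Omega[1]],
               [Omega[2], 0.0, -Omega[0]],
               [-Omega[1], Omega[0], 0.0]])
div, shear, omega, curl = decompose(dv)
# div = 0, shear = 0, curl = 2 * Omega
```

The same decomposition applied to DTFE velocity gradients yields the deformation mode fields discussed above.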
\subsection{DTFE: velocity field limitations}
\label{sec:vellimit}
DTFE interpolation of velocity field values is only feasible in regions devoid of
multistream flows. As soon as there are multiple flows -- notably in high-density cluster
concentrations or in the highest density realms of the filamentary and planar caustics
in the cosmic web -- the method breaks down and cannot be applied.
In the study presented here this is particularly so in high-density clusters. The
complication can be circumvented by filtering the velocities over a sufficiently
large region, imposing an additional resolution constraint on the DTFE velocity field.
Implicitly this has actually already been accomplished in the linearization procedure
of the velocity fields preceding the DTFE processing.
The linearization of the input velocities involves a kernel size of
$\sqrt{5} h^{-1} {\rm Mpc}$, so that the resolution of the velocity field is set
to a lower limit of $\sqrt{5} h^{-1} {\rm Mpc}$. This is sufficient to assure the viability
of the DTFE velocity field reconstructions.
\subsection{DTFE: mass-weighted vs. volume-weighted}
\label{sec:massvolweight}
A major and essential aspect of DTFE field estimates is that they concern {\it volume-weighted}
field averages,
\begin{equation}
{\widehat f}_{vol}({\bf x})\,\equiv\,{\displaystyle \int\,d{\bf y}\, f({\bf y})\,W({\bf x}-{\bf y})
\over \displaystyle \int\,d{\bf y}\,\,W({\bf x}-{\bf y})}
\label{eq:fvol}
\end{equation}
instead of the more common {\it mass-weighted} field averages,
\begin{equation}
{\widehat f}_{mass}({\bf x})\,\equiv\,{\displaystyle \int\,d{\bf y}\, f({\bf y})\,\rho({\bf y})\,W({\bf x}-{\bf y})
\over \displaystyle \int\,d{\bf y}\,\rho({\bf y})\,W({\bf x}-{\bf y})}
\label{eq:fmass}
\end{equation}
\noindent where $W({\bf x}-{\bf y})$ is the adopted filter function defining
the weight of a mass element in a way that depends on its position ${\bf y}$ with respect to
the position ${\bf x}$.
Analytical calculations of physical systems in an advanced stage of evolution do quite often
concern a perturbation analysis. In a cosmological context we may think of the nonlinear evolution
of density and velocity field perturbations. In order to avoid unnecessary mathematical
complications, most results concern {\it volume-weighted} quantities. However, when seeking
to relate these to observational results or numerical calculations
involving a discrete sample of measurement points, nearly all conventional techniques
implicitly involve {\it mass-weighted} averages. This may lead to considerable confusion, even
to wrongly motivated conclusions.
Conventional schemes for translating a discrete sample of field values $f_i$ into a continuous
field ${\hat f}({\bf x})$ are usually based upon a suitably weighted sum over the discretely sampled field
values, involving the kernel weight functions $W({\bf x}-{\bf y})$. It is straightforward to
convert this discrete sum into an integral over Dirac delta functions,
\begin{eqnarray}
{\hat f}({\bf x})&\,=\,&{\displaystyle \sum\nolimits_{i=1}^N\,{\tilde f}_i\ W({\bf x}-{\bf x}_i) \over
\displaystyle \sum\nolimits_{i=1}^N\,W({\bf x}-{\bf x}_i)}\nonumber\\
&\,=\,& {\displaystyle \int\,d{\bf y}\,f({\bf y})\,W({\bf x}-{\bf y})\
\sum\nolimits_{i=1}^N\,\delta_D({\bf y}-{\bf x}_i) \over \displaystyle
\int\,d{\bf y}\,W({\bf x}-{\bf y})\ \sum\nolimits_{i=1}^N\,\delta_D({\bf y}-{\bf x}_i)}\\
&\,=\,&{\displaystyle \int\,d{\bf y}\, f({\bf y})\,\rho({\bf y})\,W({\bf x}-{\bf y})
\over \displaystyle \int\,d{\bf y}\,\rho({\bf y})\,W({\bf x}-{\bf y})}\,.\nonumber
\label{eq:fintmass}
\end{eqnarray}
\noindent Evidently, a weighted sum implicitly yields a mass-weighted field average. Notice that
this holds not only for rigid grid-based averages, but also for spatially adaptive SPH-like
evaluations.
A reasonable approximation of volume-averaged quantities may be obtained by volume averaging
over quantities that were mass-filtered with, in comparison, a very small scale for the
mass-weighting filter function. This prodded \cite{juszk1995} to suggest a (partial)
remedy in the form of a two-step scheme for velocity field analysis. First, the field values
are interpolated onto a grid according to eqn.~\ref{eq:fintmass}. Subsequently, the resulting
grid of field values is used to determine volume-averages according to eqn.~\ref{eq:fvol}.
We can then make the interesting observation that the asymptotic limit of this, using
a filter with an infinitely small filter length, yields
\begin{eqnarray}
f_{mass}({\bf x}_0)\,=\,{\displaystyle \sum_i w_i f({\bf x}_i)
\over \displaystyle \sum_i w_i}\,=\,{\displaystyle f({\bf x}_1)\,+\,
\sum_{i=2}^N {w_i \over w_1} f({\bf x}_i) \over \displaystyle 1 + \sum_{i=2}^N {w_i \over w_1}}
\quad \longrightarrow \quad f({\bf x}_1)\,
\end{eqnarray}
\noindent where we have ordered the locations $i$ by increasing distance to ${\bf x}_0$ and thus by decreasing
value of $w_i$.
The interesting conclusion is that the resulting estimate of the volume-weighted average
is in fact the field value at the location of the closest sample point ${\bf x}_1$, $f({\bf x}_1)$.
This means we should divide up space into regions consisting of that part of space closer to a
particular sample point than to any of the other sample points and take the field value
of that sample point as the value of the field in that region. This is nothing but
dividing up space according to the Voronoi tessellation of the set of sample points ${\cal P}$.
This observation formed the rationale behind the introduction and definition of
Voronoi and Delaunay tessellation interpolation methods for velocity fields by \cite{bernwey96}.
While the resulting {\it Voronoi Tessellation Field Estimator} would involve a discontinuous
field, the step towards a multidimensional linear interpolation procedure would guarantee
the continuity of field values. The subsequent logical step, invoking the dual Delaunay tessellation
as equally adaptive spatial linear interpolation grid, leads to the definition of the
DTFE interpolation scheme.
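The asymptotic limit underlying this argument is easily demonstrated numerically: as the filter length of a kernel-weighted (hence mass-weighted) average shrinks, the estimate converges to the field value at the sample point nearest to the evaluation position. A small sketch (our illustration, using a Gaussian kernel):

```python
import numpy as np

def kernel_average(x0, pts, fvals, sigma):
    """Kernel-weighted (mass-weighted) field average at position x0."""
    d2 = np.sum((pts - x0) ** 2, axis=1)
    a = -d2 / (2.0 * sigma ** 2)
    w = np.exp(a - a.max())            # rescaled to avoid underflow
    return np.sum(w * fvals) / np.sum(w)

rng = np.random.default_rng(5)
pts = rng.random((100, 2))
fvals = np.sin(5.0 * pts[:, 0]) + pts[:, 1] ** 2
x0 = np.array([0.5, 0.5])

nearest = np.argmin(np.sum((pts - x0) ** 2, axis=1))
broad = kernel_average(x0, pts, fvals, sigma=0.3)     # smooth average
narrow = kernel_average(x0, pts, fvals, sigma=1e-4)   # -> f(nearest point)
```

In the limit of vanishing filter length the nearest sample point dominates the weights completely, so \texttt{narrow} reduces to the field value at that point: precisely the Voronoi cell assignment underlying the VTFE.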
\subsection{DTFE Density and Velocity Maps: non-uniform resolution}
For a proper quantitative analysis and assessment of DTFE density and velocity field
reconstructions it is of the utmost importance to take into account the inhomogeneous
spatial resolution of raw DTFE maps.
Cosmic density and velocity fields, as well as possible other fields of relevance, are composed of contributions
from a range of scales. By implicitly filtering out small-scale contributions
in regions with a lower sampling density DTFE will include a smaller range of spatial scales contributing
to a field reconstruction. Even when selection function corrections are taken into account,
the DTFE density and velocity fields will therefore involve lower amplitudes.
DTFE velocity fields are less affected than DTFE density fields \citep{weyrom2007}, a manifestation of
the fact that the cosmic velocity field is dominated by larger scale modes than the density field.
In addition, it will also lead to a diminished ``morphological resolution''. The sampling density
may decrease to a level below that required to resolve the geometry or morphology of
weblike features. Once this level has been reached, DTFE is no longer suited for an analysis
of the Cosmic Web.
\begin{figure}
\centering
\mbox{\hskip 0.0truecm\includegraphics[height=19.0cm]{weyval.fig29.ps}}
\caption{Typical DTFE interpolation kernels for points embedded within three different morphological
environments: a point within a Gaussian high-density peak (bottom), a point within an elongated
filamentary concentration of points (centre) and a point in a low-density field environment (top).}
\vskip -0.5truecm
\label{fig:dtfekernel}
\end{figure}
\section{DTFE: Technical Issues}
\subsection{DTFE Kernel and Effective Resolution}
\label{sec:dtfekernel}
DTFE distributes the mass $m_i$ corresponding to each sampling point $i$ over its
corresponding contiguous Voronoi cell. A particular sampling point $i$ will therefore
contribute solely to the mass of those Delaunay simplices of which it is a vertex.
This restricted spatial smoothing is the essence of the strict locality of the DTFE
reconstruction scheme.
The expression for the interpolation kernel of DTFE provides substantial
insight into its local spatial resolution. Here we concentrate on the
density field kernel, the shape and scale of the interpolation kernels
of other fields is comparable.
In essence, a density reconstruction scheme distributes the mass
$m_i$ of a sampling particle $i$ over a region of space according
to a distributing function $\mathcal{F}_i({\bf x})$,
\begin{equation}
\rho({\bf x})~=~\sum_{i=1}^N m_i \mathcal{F}_i({\bf x}) \, ,
\label{eq:kerndef}
\end{equation}
with $\int {\rm d}{\bf x}\,\mathcal{F}_i({\bf x})\,=\,1\ \ \forall\, i$.
Generically, the shape and scale of the {\it interpolation
kernel} $\mathcal{F}_i({\bf x})$ may be different for each sample point.
The four examples of nn-neighbour kernels in fig.~\ref{fig:sibson4} are a
clear illustration of this.
For the linear interpolation of DTFE we find that (see eqn.~\ref{eq:dtfeint})
for a position ${\bf x}$ within a Delaunay tetrahedron $m$ defined by
$(1+D)$ sample points $\{{\bf x}_{m0}, {\bf x}_{m1}, \dots, {\bf x}_{mD}\}$ the
interpolation indices $\phi_{dt,i}$ are given by
\begin{eqnarray}
\phi_{dt,i}({\bf x})\,=\,
{\begin{cases}
1\,-\,t_1\,-\,t_2\,-\,\dots\,-\,t_D\,\hfill\hfill\qquad i\in\{m0,m1,\ldots,mD\}\\
\ \\
0\hfill\hfill\qquad i \notin \{m0,m1,\ldots,mD\}\\
\end{cases}}\,
\end{eqnarray}
\noindent In this, for $i\in\{m0,m1,\ldots,mD\}$, the $D$ parameters $\{t_1,\ldots,t_D\}$
are to be solved from
\begin{equation}
{\bf x}\,=\,{\bf x}_{i}\,+\,t_1({\bf x}_{j_1}-{\bf x}_i)\,+\,
t_2({\bf x}_{j_2}-{\bf x}_i)\,+\,\ldots\,+\,t_D({\bf x}_{j_D}-{\bf x}_i)\,,
\end{equation}
\noindent with $\{{\bf x}_{j_1},\ldots,{\bf x}_{j_D}\}$ the positions of the $D$ simplex vertices
other than ${\bf x}_i$. In other words, $\phi_{dt,i}({\bf x})$ is the barycentric coordinate of
${\bf x}$ with respect to vertex $i$, so that the weights of the $(1+D)$ vertices add up to unity.
\noindent On the basis of eqns. (\ref{eq:kerndef}) and (\ref{eq:dtfeint}) the
expression for the DTFE kernel is easily derived:
\begin{eqnarray}
\mathcal{F}_{{\rm DTFE,}i}({\bf x})\,=\,
\begin{cases}
{\displaystyle (D+1)\over \displaystyle V(\mathcal{W}_i)}\,\phi_{dt,i}({\bf x})\hfill\hfill\qquad {\rm if}\ \ \ {\bf x}
\in \mathcal{W}_i\\
\ \\
0 \hfill\hfill\qquad {\rm if} \ \ \ {\bf x} \notin \mathcal{W}_i
\end{cases}
\end{eqnarray}
\noindent in which $\mathcal{W}_i$ is the contiguous Voronoi cell of sampling
point $i$. In this respect we should realize that in two dimensions the contiguous
Voronoi cell is defined by on average 7 sample points: the point itself and its
{\it natural neighbours}. In three dimensions this number is, on average, 13.04.
Even with respect to spatially adaptive smoothing such as that embodied by the kernels
used in Smoothed Particle Hydrodynamics, defined by a certain number of nearest neighbours
(usually of the order of 40), the DTFE kernel is indeed highly localized.
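The construction above translates almost directly into code. The sketch below, an illustration assuming numpy and scipy rather than a reference implementation, estimates 2-D DTFE densities from the contiguous Voronoi cell areas and interpolates them linearly through the barycentric coordinates $\phi_{dt,i}$:

```python
# Minimal 2-D DTFE sketch (illustrative, assuming numpy and scipy):
# vertex densities follow from the contiguous Voronoi cell areas,
# values elsewhere from linear (barycentric) interpolation.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
pts = rng.random((500, 2))          # uniform Poisson-like sample, unit mass each
tri = Delaunay(pts)
D = 2                               # dimension

# area of every Delaunay triangle
simp = pts[tri.simplices]           # shape (nsimplex, 3, 2)
v1 = simp[:, 1] - simp[:, 0]
v2 = simp[:, 2] - simp[:, 0]
area = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])

# contiguous Voronoi cell area of point i = sum of areas of adjacent triangles
w_vol = np.zeros(len(pts))
for s, a in zip(tri.simplices, area):
    w_vol[s] += a

dens = (D + 1) / w_vol              # DTFE density estimate with m_i = 1

def dtfe_interp(x):
    """Linearly interpolate the density inside the containing triangle."""
    s = int(tri.find_simplex(x))
    if s < 0:
        return 0.0                  # outside the convex hull
    b = tri.transform[s, :D] @ (x - tri.transform[s, D])
    bary = np.append(b, 1.0 - b.sum())   # barycentric weights phi_dt,i
    return float(bary @ dens[tri.simplices[s]])

rho = dtfe_interp(np.array([0.5, 0.5]))
```

By construction the scheme conserves mass: integrating the piecewise linear field over the triangulation returns the total number of (unit-mass) particles.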
A set of DTFE smoothing kernels is depicted in fig.~\ref{fig:dtfekernel}. Their local and
adaptive nature may be best appreciated from the
comparison with the corresponding kernels for regular (rigid) gridbased TSC interpolation,
a scale adaptive SPH smoothing kernel (based on 40 nearest neighbours) and a zeroth-order
Voronoi (VTFE) kernel (where the density at a point is set equal to the inverse of the volume
of the corresponding Voronoi cell). The three kernels depicted concern three different
environments: a point embedded within a Gaussian density peak (lefthand), a point within an
elongated filamentary concentration of points (centre) and a point in a low-density environment
(righthand). The figure clearly illustrates the adaptivity in scale and geometry of the
DTFE kernel.
\begin{figure}
\centering
\mbox{\hskip -0.1truecm\includegraphics[height=11.8cm,angle=90.0]{weyval.fig30.ps}}
\vskip -0.5truecm
\caption{Poisson sampling noise in uniform fields. The rows illustrate three
Poisson point samplings of a uniform field with increasing sampling
density (from top to bottom consisting of 100, 250 and 1000
points). From left to right the point distribution, the
corresponding Voronoi tessellation, the zeroth-order VTFE
reconstructed density field, the first-order DTFE reconstructed
density field and the corresponding Delaunay tessellation are
shown. From Schaap 2007.}
\label{fig:vordelsample}
\end{figure}
\subsection{Noise and Sampling Characteristics}
\label{sec:samplingnoise}
In order to appreciate the workings of DTFE one needs to take into
account the noise characteristics of the method. Tessellation based
reconstructions do start from a discrete random sampling of a field.
This will induce noise and errors. Discrete sampling noise will
propagate even more strongly into DTFE density reconstructions as
the discrete point sample itself will be the source for the
density value estimates.
In order to understand how sampling noise affects the reconstructed
DTFE fields it is imperative to see how intrinsically uniform
fields are affected by the discrete sampling. Even though a uniform
Poisson process represents a discrete reflection of a uniform density
field, the stochastic nature of the Poisson sample will induce
a non-uniform distribution of Voronoi and Delaunay volumes (see
sect.~\ref{sec:vordelstat}). Because the volumes of Voronoi and/or
Delaunay cells are at the basis of the DTFE density estimates their
non-uniform distribution propagates immediately into fluctuations
in the reconstructed density field.
This is illustrated in fig.~\ref{fig:vordelsample}, in which three
uniform Poisson point samples, each of a different sampling density $n$,
are shown together with the corresponding Voronoi and Delaunay tessellations.
In addition, we have shown the first-order DTFE reconstructed density
fields, along with the zeroth-order VTFE density field (where the density
at a point is set equal to the inverse of the volume of the corresponding
Voronoi cell). The variation in both the Delaunay and Voronoi cells, as well as
in the implied VTFE and DTFE density field reconstructions, provides a reasonable
impression of the fluctuations going along with these interpolation schemes.
Following the Kiang suggestion \citep{kiang1966} for the Gamma type
volume distribution for the volumes of Voronoi cells (eqn.~\ref{eq:vorvolpdf}), we may find an
analytical expression for the density distribution for the zeroth-order VTFE field:
\begin{eqnarray}
f({\tilde \rho})\,=\,
\begin{cases}
{\displaystyle 128 \over \displaystyle 3}\,\tilde{\rho}^{-6} \,\exp{\left(-\frac{\displaystyle 4}{\displaystyle \tilde{\rho}}\right)}\ \ \qquad 2D\\
\ \\
{\displaystyle 1944 \over \displaystyle 5}\,\tilde{\rho}^{-8} \,\exp{\left(-\frac{\displaystyle 6}{\displaystyle \tilde{\rho}}\right)}\qquad 3D
\end{cases}
\label{eq:dtfedenpdf}
\end{eqnarray}
\noindent The normalized density value ${\tilde \rho}$ is the density estimate $\rho$ in units of the
average density $\langle \rho \rangle$. While in principle this only concerns the zeroth-order density
estimate, it turns out that these expressions also provide an excellent description of the one-point
distribution function of the first-order DTFE density field, both in two and three dimensions.
This may be appreciated from Fig.~\ref{fig:vordelpdf}, showing the pdfs for DTFE density field
realizations for a Poisson random field of $10,000$ points (2-D) and $100,000$ points (3-D).
The superimposed analytical expressions (eqn.~\ref{eq:dtfedenpdf}) represent excellent fits.
\begin{figure}
\centering
\mbox{\hskip -0.6truecm\includegraphics[width=12.25cm]{weyval.fig31.ps}}
\vskip -0.0truecm
\caption{One-point distribution functions of the DTFE density field for a
Poisson point process of $10\,000$ points (for 2-d space, lefthand) and
$100\,000$ points (for 3-d space, righthand). Superimposed are the analytical
approximations given by eqn.~\ref{eq:dtfedenpdf}.}
\label{fig:vordelpdf}
\end{figure}
The 2-D and 3-D distributions are clearly non-Gaussian, involving a tail
extending towards high density values. The positive value of the skewness
($2\sqrt{3}$ for 2D and $\sqrt{5}$ for 3D) confirms the presence of this
tail. Interestingly, the distribution falls off considerably more rapidly for $d=3$
than for $d=2$. Also we see that the distributions are narrower than in the
case of a regular Gaussian: the variance is $1/3$ for $d=2$ and $1/5$ for $d=3$,
confirmed by the strongly positive value of the kurtosis. The larger value for $d=2$
indicates that it is more strongly peaked than the distribution for $d=3$.
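Both expressions in eqn.~(\ref{eq:dtfedenpdf}) are inverse-gamma distributions, with shape and scale parameters $(5,4)$ in 2D and $(7,6)$ in 3D in the scipy parametrization, so the quoted moments can be verified directly (a short check assuming scipy is available):

```python
# The pdfs of eqn. (dtfedenpdf) are inverse-gamma distributions:
#   2D: invgamma(a=5, scale=4),  3D: invgamma(a=7, scale=6).
# Their mean, variance and skewness should match the values in the text.
import numpy as np
from scipy.stats import invgamma

mean2, var2, skew2 = invgamma.stats(a=5, scale=4, moments='mvs')
mean3, var3, skew3 = invgamma.stats(a=7, scale=6, moments='mvs')
```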
On the basis of the above one may also evaluate the significance of DTFE density
field reconstructions, even including that for intrinsically inhomogeneous fields.
For details we refer to \cite{schaapphd2007}.
\subsection{Computational Cost}
The computational cost of DTFE is not overriding. Most computational
effort is directed towards the computation of the Delaunay tessellation
of a point set of $N$ particles. The required CPU time is in the
order of $O(N\,{\rm log}\,N)$, comparable to the cost of generating
the neighbour list in SPH. The different computational steps of the
DTFE procedure, along with their scaling as a function of
number of sample points $N$, are:
\begin{enumerate}
\item Construction of the Delaunay tessellation: ${\mathcal O}(N \,
{\rm log}\,N)$;
\item Construction of the adjacency matrix: ${\mathcal O}(N)$;
\item Calculation of the density values at each location: ${\mathcal O}(N)$;
\item Calculation of the density gradient inside each Delaunay
triangle: ${\mathcal O}(N)$;
\item Interpolation of the density to an image grid: ${\mathcal O}(N_{\rm grid}^2 \cdot N^{1/2})$.
\end{enumerate}
Step 2, involving the calculation of the adjacency matrix necessary for
the {\it walking triangle} algorithm used for Delaunay tetrahedron
identification, may be incorporated in the construction of the Delaunay
tessellation itself and therefore omitted. The last step, the interpolation of
the density onto an image grid, is part of the post-processing operation and
could be replaced by any other desired operation. Its cost mainly depends on
the number of gridcells per dimension.
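Step 4 above reduces to one small linear solve per Delaunay simplex, which is why it scales as ${\mathcal O}(N)$: the linear field $\rho({\bf x}) = \rho_0 + \nabla\rho \cdot ({\bf x} - {\bf x}_0)$ is fixed by the vertex values. A minimal 2-D sketch, assuming numpy and scipy, with stand-in vertex densities:

```python
# Step 4 of the DTFE pipeline: the constant density gradient inside each
# Delaunay triangle follows from a 2x2 linear solve per simplex (O(N) overall,
# since the number of simplices scales linearly with N).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = rng.random((200, 2))
tri = Delaunay(pts)                  # step 1: O(N log N)
dens = rng.random(len(pts)) + 0.5    # stand-in vertex densities (step 3)

grads = np.empty((len(tri.simplices), 2))
for k, s in enumerate(tri.simplices):
    # rho(x) = rho_0 + grad . (x - x_0) inside the simplex:
    A = pts[s[1:]] - pts[s[0]]       # edge vectors, shape (2, 2)
    b = dens[s[1:]] - dens[s[0]]     # density differences along the edges
    grads[k] = np.linalg.solve(A, b)
```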
\section{DTFE: Hierarchical and Anisotropic Patterns}
\noindent To demonstrate the ability of the Delaunay Tessellation Field
Estimator to quantitatively trace key characteristics of the cosmic web we
investigate in some detail two aspects, its ability to trace the hierarchical
matter distribution and its ability to reproduce the shape of anisotropic --
filamentary and planar -- weblike patterns.
\subsection{Hierarchy and Dynamic Range}
\label{sec:dtfescaling}
\noindent The fractal Soneira-Peebles model \citep{soneirapeebles1977} has been used to assess the
level to which DTFE is able to trace a density field over the full range
of scales represented in a point sample. The Soneira-Peebles model is an
analytic self-similar spatial point distribution which was defined for the purpose of
modelling the galaxy distribution, such that its statistical properties would be tuned
to reality. An important property of the Soneira-Peebles model is that it is one
of the few nonlinear models of the galaxy distribution whose statistical properties
can be fully and analytically evaluated. This concerns its power-law two-point
correlation function, correlation dimension and its Hausdorff dimension.
\begin{figure}
\centering
\mbox{\hskip -0.6truecm\includegraphics[height=19.0cm]{weyval.fig32.ps}}
\vskip -0.0truecm
\caption{Singular Soneira-Peebles model with $\eta=4$, $\lambda=1.9$ and $L=8$.
Top row: full point distribution (left-hand frame) and zoom-ins focusing on a
particular structure (central and right-hand frames). Rows 2~to~4:
corresponding density field reconstructions produced using the
TSC, SPH and DTFE methods.}
\label{fig:eta4panel}
\end{figure}
The Soneira-Peebles model is specified by three parameters. The starting
point of the model is a level-$0$ sphere of radius $R$. At each level-$m$
a number of $\eta$ subspheres are placed randomly within their parent level-$m$
sphere: the level-$(m+1)$ spheres have a radius $R/\lambda$ where $\lambda>1$,
the size ratio between parent sphere and subsphere. This process is repeated
for $L$ successive levels, yielding $\eta^L$ level-$L$ spheres of radius
$R/\lambda^L$. At the center of each of these spheres a point is placed,
producing a point sample of $\eta^L$ points. While this produces a pure
{\it singular} Soneira-Peebles model, usually a set of these is
superimposed to produce a somewhat more realistically looking model of the
galaxy distribution, an {\it extended} Soneira-Peebles model.
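A singular Soneira-Peebles realization is straightforward to generate recursively. In the sketch below (assuming numpy; placing the subsphere centres uniformly within the parent sphere is a simplifying assumption, as one may also require each subsphere to lie entirely within its parent) every one of the $\eta^L$ level-$L$ sphere centres becomes a sample point:

```python
# Recursive generator for a singular Soneira-Peebles point process:
# eta subsphere centres per parent, radii shrinking by 1/lambda per level,
# one point per level-L sphere.
import numpy as np

def soneira_peebles(eta, lam, L, R=1.0, dim=2, rng=None):
    rng = np.random.default_rng(rng)
    centres = np.zeros((1, dim))          # level-0 sphere at the origin
    radius = R
    for _ in range(L):
        new = []
        for c in centres:
            k = 0
            while k < eta:                # rejection-sample inside parent sphere
                p = rng.uniform(-radius, radius, dim)
                if p @ p <= radius * radius:
                    new.append(c + p)
                    k += 1
        centres = np.asarray(new)
        radius /= lam                     # next level's spheres are smaller
    return centres                        # eta**L points

pts = soneira_peebles(eta=4, lam=1.9, L=4, rng=0)
```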
An impression of the self-similar point sets produced by the Soneira-Peebles
process may be obtained from fig.~\ref{fig:eta4panel}. The top row contains
a series of three point distributions, zoomed in at three consecutive levels on
a singular Soneira-Peebles model realization with $(\eta=4,\lambda=1.90,L=8)$.
The next three columns depict the density field renderings produced
by three different interpolation schemes, a regular rigid grid-based TSC
scheme, a spatially adaptive SPH scheme and finally the DTFE reconstruction.
The figure clearly shows the supreme resolution of the latter. By comparison,
even the SPH reconstructions appear to fade for the highest resolution frame.
\begin{figure}
\centering
\mbox{\hskip -0.0truecm\includegraphics[width=19.00cm,angle=90.0]{weyval.fig33.ps}}
\vskip -0.0truecm
\caption{Top row: Average PDFs of the density field in circles of different level (see text for a description) for
the TSC, SPH and DTFE density field reconstructions. Model with $\eta=2$, $\lambda=1.75$ and $L=14$.
Bottom row: scaled PDFs of Soneira-Peebles density field reconstructions. Each frame corresponds to a Soneira-Peebles
realization of a different fractal dimension, denoted in the upper right-hand corner. In each frame the TSC, SPH and DTFE
reconstructed PDFs are shown.}
\label{fig:dtfescaling}
\label{fig:sonpeebscale}
\end{figure}
\subsubsection{Self-similarity}
\noindent One important manifestation of the self-similarity of the
Soneira-Peebles distribution is its power-law two-point correlation
function. In $M$ dimensions
it is given by
\begin{eqnarray}
\xi(r)&\,\sim\,&r^{-\gamma}\,,\nonumber\\
\ \\
\gamma&\,=\,&M-\left(\frac{{\rm log} \, \eta}{{\rm log} \,
\lambda}\right)\,\,\,{\rm for}\,\,\,\frac{R}{\lambda^{L-1}}<r<R \nonumber\,.
\label{eq:corr2ptsonpeebles}
\end{eqnarray}
The parameters $\eta$ and $\lambda$ may be adjusted such that they
yield the desired value for the correlation slope $\gamma$.
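In numbers: the fractal dimension $D = \log\eta/\log\lambda$ and the correlation slope $\gamma = M - D$ follow immediately from the model parameters, here evaluated for the $(\eta=2,\lambda=1.75)$ model used in the scaling analysis (a trivial sketch):

```python
# Soneira-Peebles scaling parameters: fractal dimension D = log(eta)/log(lambda)
# and the two-point correlation slope gamma = M - D (eqn. corr2ptsonpeebles).
import math

def fractal_dimension(eta, lam):
    return math.log(eta) / math.log(lam)

def corr_slope(eta, lam, M):
    return M - fractal_dimension(eta, lam)

D = fractal_dimension(2, 1.75)      # the (eta=2, lambda=1.75) model
gamma = corr_slope(2, 1.75, M=2)
```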
To probe the self-similarity we look at the one-point distribution
of the density field, both for the point distribution as well as the
TSC, SPH and DTFE density field reconstructions. Mathematically, the
condition of self-similarity implies that the PDF corresponding to a
density field $\rho({\bf x})$ inside an $n$-level circle of radius
$R/\lambda^n$ should be equal to the PDF inside the reference circle
of radius $R$, after the density field in the $n$-level circle has been scaled according to
\begin{equation}
\rho({\bf x})~\rightarrow~\rho_n({\bf x})~=~\rho({\bf x})/\lambda^{Mn} \, ,
\label{eq:scaling}
\end{equation}
\noindent in which $M$ is the dimension of space. Yet another
multiplication factor of $\lambda^{M n}$ has to be included to
properly normalize the PDF (per density unit). In total this results
in an increase by a factor $\lambda^{2M n}$. Self-similar distributions
would look exactly the same at different levels of resolution once
scaled accordingly. This self-similarity finds its expression in a
{\it universal} power-law behaviour of the PDFs at different levels.
We have tested the self-similar scaling of the pdfs for a range of
Soneira-Peebles models, each with a different fractal dimension \citep{schaapwey2007b}. For
a Soneira-Peebles model with parameters $(\eta=2,\lambda=1.75,L=14)$
we show the resulting scaled PDFs for the TSC, SPH and DTFE density
field reconstructions in the top row of Fig.~\ref{fig:sonpeebscale}.
The self-similar scaling of the TSC density field reconstructions is hardly convincing. On the
other hand, while the SPH reconstruction performs considerably better, it
is only the DTFE rendering which, within the density range probed by
each level, displays an almost perfect power-law scaling! Also notice that
the DTFE field manages to probe the density field over a considerably larger
density range than e.g. the SPH density field.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\hline
&&&&\\
$\hskip 0.9cm D\hskip 0.9cm $ &\hskip 0.55cm $\alpha$(theory) {\hskip 0.55cm} &\hskip 0.55cm $\alpha({\rm TSC})$ {\hskip 0.55cm} & \hskip 0.55cm $\alpha({\rm SPH})${\hskip 0.55cm} &\hskip 0.55cm $\alpha({\rm DTFE})$ {\hskip 0.55cm} \\
&&&&\\
\hline
&&&&\\
$0.63$ & $-1.69$ & $-0.81$ & $-1.32$ & $-1.70$ \\
$0.86$ & $-1.57$ & $-0.82$ & $-1.24$ & $-1.60$ \\
$1.23$ & $-1.39$ & $-0.79$ & $-1.13$ & $-1.38$ \\
&&&&\\
\hline
\hline
\end{tabular}
\end{center}
\caption{\small Slopes of the power-law region of the PDF of a Soneira-Peebles density field as reconstructed by the TSC, SPH and DTFE procedures.
The theoretical value (Eqn.~\ref{eq:slope}) is also listed. Values are listed for three different Soneira-Peebles realizations, each
with a different fractal dimension $D$.}
\label{table:slopes}
\end{table*}
Subsequently we determined the slope $\alpha$ of the {\it ``universal''} power-law
PDF and compared it with the theoretical predictions. The set of three frames in the bottom row of
Fig.~\ref{fig:sonpeebscale} show the resulting distributions with
the fitted power-laws. Going from left to right, the frames in this figure correspond
to Soneira-Peebles realizations with fractal dimensions of $D=0.63$, $D=0.86$ and
$D=1.23$. The slope $\alpha$ of the PDF can be found
by comparing the PDF at two different levels,
\begin{eqnarray}
\alpha &\,=\,& \frac{\log {\rm PDF} (\rho_1) ~-~ \log {\rm PDF} (\rho_0)}{\log \rho_1 ~-~ \log \rho_0}\nonumber\\
\,\,\\
&\,=\,&\frac{\log (\lambda^{2M n}/\eta^n)}{\log (1/\lambda^{M n})}\,=\,\frac{D}{M}~-~2 \nonumber\,,
\label{eq:slope}
\end{eqnarray}
\noindent in which $D$ is the fractal dimension of the singular Soneira-Peebles model.
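Evaluating $\alpha = D/M - 2$ for $M=2$ and the three fractal dimensions listed in table~\ref{table:slopes} reproduces the theory column of that table (a two-line check):

```python
# The power-law slope of the scaled PDF, alpha = D/M - 2 (eqn. slope),
# for the three fractal dimensions of table (slopes), with M = 2.
def pdf_slope(D, M=2):
    return D / M - 2.0

alphas = {D: pdf_slope(D) for D in (0.63, 0.86, 1.23)}
```

Rounded to two decimals these are $-1.69$, $-1.57$ and $-1.39$, the theoretical values listed in the table.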
When turning to table~\ref{table:slopes} we not only find that the values of $\alpha$
derived from the TSC, SPH and DTFE fields do differ significantly, a fact which has
been clearly borne out by fig.~\ref{fig:sonpeebscale}, but also that the DTFE density field
PDFs do reproduce to an impressively accurate level the analytically expected power-law
slope of the model itself \citep{schaapwey2007b}. It is hardly possible to find a more convincing argument for
the ability of DTFE to deal with hierarchical density distributions!
\subsection{Shapes and Patterns}
DTFE's ability to trace anisotropic weblike patterns is tested on the basis of a class of heuristic models of cellular
matter distributions, {\it Voronoi Clustering Models} \citep{schaapwey2007c,weygaert2007a}. Voronoi models use the frame of
a Voronoi tessellation as the skeleton of the galaxy distribution, identifying the structural
frame around which matter will gradually assemble during the emergence of cosmic structure. The interior of Voronoi
{\it cells} corresponds to voids and the Voronoi {\it planes} to sheets of galaxies. The {\it edges} delineating the rim of each wall
are identified with the filaments in the galaxy distribution. What is usually denoted as a flattened ``supercluster''
will comprise an assembly of various connecting walls in the Voronoi foam, while an elongated ``supercluster'' or
``filament'' will usually consist of a few coupled edges. The most outstanding structural elements are the
{\it vertices}, corresponding to the very dense compact nodes within the cosmic web, the rich
clusters of galaxies.
The Voronoi clustering models offer flexible templates for cellular patterns, and they are easy to tune towards a particular
spatial cellular morphology. To investigate the shape performance of DTFE we use pure {\it Voronoi Element Models}, tailor-made
heuristic ``galaxy'' distributions located exclusively in and around 1) the Voronoi walls, 2) the Voronoi edges, 3) the
Voronoi vertices. Starting from a random initial distribution, all model galaxies are projected onto the relevant wall, edge
or vertex of the Voronoi cell in which they are located.
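For the wall models the projection step has a compact closed form: a point on a Voronoi wall is equidistant from its two nearest nuclei, so each galaxy can be moved onto the perpendicular bisector plane of those two nuclei. The sketch below illustrates this (assuming numpy and scipy; a full implementation would additionally check that the projected point still lies on the actual wall facet of its cell):

```python
# Sketch of a Voronoi wall element model: move each "galaxy" onto the
# perpendicular bisector plane of its two nearest nuclei, i.e. the plane
# carrying the Voronoi wall shared by the two corresponding cells.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
nuclei = rng.random((32, 3))          # Voronoi cell nuclei
gals = rng.random((2000, 3))          # random initial galaxy positions

tree = cKDTree(nuclei)
_, idx = tree.query(gals, k=2)        # two nearest nuclei of every galaxy
n1, n2 = nuclei[idx[:, 0]], nuclei[idx[:, 1]]

normal = n2 - n1                      # wall normal = nucleus separation
normal /= np.linalg.norm(normal, axis=1, keepdims=True)
mid = 0.5 * (n1 + n2)                 # a point on the bisector plane
# remove the position component perpendicular to the wall
walls = gals - np.sum((gals - mid) * normal, axis=1, keepdims=True) * normal
```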
\begin{figure*}
\centering
\vskip 0.0truecm
\subfigure
{\mbox{\hskip -1.1truecm{\includegraphics[width=14.0cm]{weyval.fig34.ps}}}}
\vskip -0.5cm
\subfigure
{\mbox{\hskip -0.3truecm{\includegraphics[width=12.5cm]{weyval.fig35.ps}}}}
\caption{Three Voronoi element clustering models. Top row: box with periodic boundary conditions, boxsize 100h$^{-1}$Mpc.
Left: Wall Model; Centre: Filament Model; Right: Cluster Model. Second to fourth row: Corresponding density Reconstructions of
the three Voronoi element clustering models. Second: TSC, Third: SPH, Fourth: DTFE.}
\label{fig:vorgaldist}
\end{figure*}
The three boxes in the top row of fig.~\ref{fig:vorgaldist} depict a realization for a 3-D {\it Voronoi wall model}, a
{\it Voronoi filament model} and a {\it Voronoi cluster model}. Planar sections through the TSC, SPH and DTFE density field
reconstructions of each of these models are shown in three consecutive rows, by means of greyscale maps. The distinctly
planar geometry of the {\it Voronoi wall model} and the one-dimensional geometry of the {\it Voronoi filament model} is
clearly recognizable from the sharply defined features in the DTFE density field reconstruction. While the
SPH reconstructions outline the same distinct patterns, in general the structural features have a more puffy appearance.
The gridbased TSC method is highly insensitive to the intricate weblike Voronoi features, often the effective rigid gridscale
TSC filter is not able to render and detect them.
The DTFE reconstruction of the {\it Voronoi cluster models} (fig.~\ref{fig:vorgaldist}, lower righthand) does depict
some of the artefacts induced by the DTFE technique. DTFE is successful in identifying nearly every cluster, even the
poor clusters which SPH cannot find. The compact dense clusters do, however, present a challenge in that they reveal
low-density triangular wings in the regions between the clusters. Even though these wings include only a minor fraction
of the matter distribution, they are an artefact which should be accounted for. Evidently, the SPH reconstruction of
individual clusters as spherical blobs is visually more appealing.
\subsubsection{Voronoi Filament Model}
The best testbench for the ability of the different methods to recover the patterns and morphology of the fully
three-dimensional density field is that of the contrast rich {\it Voronoi filament models}. In Fig.~\ref{fig:vorfil3d} the central
part of a sample box of the {\it Voronoi filament model} realization is shown. Isodensity contour levels are chosen such
that $65\%$ of the mass is enclosed within regions of density equal to or higher than the corresponding density value.
The resulting TSC, SPH and DTFE density fields are depicted in the lower lefthand (TSC), lower righthand (SPH) and top frame (DTFE).
The galaxy distribution in the upper lefthand frame does contain all galaxies within the region. Evidently, the galaxies have
distributed themselves over a large range of densities and thus occupy a larger fraction of space than that outlined by the density
contours.
\begin{figure}[t]
\centering
\mbox{\hskip -0.7truecm\includegraphics[width=13.25cm]{weyval.fig36.ps}}
\vskip -0.0truecm
\caption{Three-dimensional visualization of the Voronoi filament model
and the corresponding TSC, SPH and DTFE density field
reconstructions. The density contours have been chosen such that
$65\%$ of the mass is enclosed. The arrows indicate two
structures which are visible in both the galaxy distribution and
the DTFE reconstruction, but not in the TSC and SPH
reconstructions.}
\vskip -0.25truecm
\label{fig:vorfil3d}
\end{figure}
\begin{figure}[t]
\centering
\mbox{\hskip -0.8truecm\includegraphics[width=12.60cm]{weyval.fig37.ps}}
\vskip -0.0truecm
\caption{Anisotropy measurements for the Voronoi models. Plotted is the longest-to-shortest axis ratio of the
inertia tensor inside a series of concentric spheres centered on a characteristic structure as a function of
the radius of the sphere. The radius is given in units of the standard
deviation ($\sigma$) of the corresponding Gaussian density
profiles. The lefthand frame corresponds to the Voronoi wall
model, the central frame to the Voronoi filament model and the
righthand frame to the Voronoi cluster model. In each frame
the results are shown for the TSC, SPH and DTFE
reconstructions, as well as for the galaxy distribution. The
meaning of the symbols is depicted in the right-hand frame.}
\vskip -0.25truecm
\label{fig:vorshape}
\end{figure}
The appearances of the TSC, SPH and DTFE patterns do differ substantially. Part of this is due to a different effective scale of
the filter kernel. The $65\%$ mass contour corresponds to a density contour $\rho=0.55$ in the TSC field, $\rho=1.4$ in the SPH
reconstruction and $\rho=2.0$ in the DTFE reconstruction ($\rho$ in units of the average density). The fine filamentary maze seen
in the galaxy distribution is hardly reflected in the TSC gridbased reconstruction even though the global structure, the almost
ringlike arrangement of filaments, is clearly recognizable. The SPH density field fares considerably better, as it outlines
the basic configuration of the filamentary web. Nonetheless, the SPH rendered filaments have a width determined by the
SPH kernel scale, resulting in a pattern of tubes. Bridging substantial density gradients is problematic for SPH reconstructions,
partly due to the lack of directional information.
It is the DTFE reconstruction (top frame fig.~\ref{fig:vorfil3d}) which yields the most outstanding
reproduction of the filamentary weblike character of the galaxy distribution. A detailed comparison between
the galaxy distribution and the density surfaces shows that it manages to trace the most minute details
in the cosmic web. Note that the density contours do enclose only $65\%$ of the mass, and thus relate to a smaller volume
than suggested by the features in the galaxy distribution itself. The success of the DTFE method is underlined by identifying
a few features in the galaxy distribution which were identified by DTFE but not by SPH and TSC. The arrows in
Fig.~\ref{fig:vorfil3d} point at two tenuous filamentary features visible in the galaxy distribution as well as in
the DTFE field, yet entirely absent from the TSC and SPH fields. In comparison to the inflated contours of the
SPH and TSC reconstructions, the structure outlined by the DTFE density field has a more intricate,
even somewhat tenuous, appearance marked by a substantial richness in structural detail and contrast.
Some artefacts of the DTFE method are also visible: in particular near intersections of filaments we tend
to find triangular features which can not be identified with similar structures in the galaxy distribution.
Nearby filaments are connected by relatively small tetrahedra, translating into high density features of
such shape.
\subsubsection{Shape and Morphology Analysis}
An important measure of the local density distribution concerns the shape of the density contour levels.
Various representative features in the three Voronoi element models were identified, followed by a
study of their shape over a range of spatial scales. The axis ratios of the local density distribution,
within a radius $R$, were computed from the eigenvalues of the mass inertia tensor. The results of the shape
analysis are shown in Fig.~\ref{fig:vorshape}. From left to right, the three frames present the axis ratio
of the longest over the smallest axis, $a_1/a_3$, for walls, filaments and clusters, as a function of the
scale $R$ over which the shape was measured. The open circles represent the shape of the particle distribution,
the triangles the shape found in the equivalent DTFE density field, while crosses and squares stand for the
findings of SPH and TSC.
In the case of the Voronoi cluster models all three density field reconstructions agree with the
sphericity of the particle distributions. In the central and righthand frame of Fig.~\ref{fig:vorshape} we
see that the intrinsic shapes of the walls and filaments become more pronounced as the radius $R$ increases.
The uniform increase of the axis ratio $a_1/a_3$ with $R$ is a reflection of the influence of the
intrinsic width of the walls and filaments on the measured shape. For small radii the mass
distribution around the center of one of these features is largely confined to the interior of
the wall or filament and thus near-isotropic. As the radius $R$ increases in value, the intrinsic
shape of these features comes to the fore, resulting in a revealing function of shape
as function of $R$.
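The shape measurement itself is elementary: select the points within radius $R$ of the centre, form the second-moment (inertia) tensor and take the square root of the ratio of its extreme eigenvalues. A sketch, assuming numpy; the wall-like and isotropic test distributions are hypothetical stand-ins for the Voronoi element models:

```python
# Axis ratio a_1/a_3 from the eigenvalues of the mass inertia tensor of the
# points within radius R, as used for the shape analysis of the Voronoi models.
import numpy as np

def axis_ratio(points, centre, R):
    d = points - centre
    sel = d[np.einsum('ij,ij->i', d, d) <= R * R]   # points inside the sphere
    inertia = sel.T @ sel / len(sel)                # second-moment tensor
    eig = np.sort(np.linalg.eigvalsh(inertia))      # ascending eigenvalues
    return float(np.sqrt(eig[-1] / eig[0]))         # longest / shortest axis

rng = np.random.default_rng(3)
# hypothetical test distributions: a flattened wall-like slab and an
# isotropic cluster-like blob
wall = rng.normal(size=(5000, 3)) * np.array([1.0, 1.0, 0.05])
blob = rng.normal(size=(5000, 3))
```

As expected, the wall-like distribution yields a large axis ratio while the isotropic blob stays close to unity, mirroring the trends in fig.~\ref{fig:vorshape}.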
The findings of our analysis are remarkably strong and unequivocal: over the complete range
of radii we find a striking agreement between DTFE and the corresponding particle distribution.
SPH reveals systematic and substantial differences in that its shapes tend to be more
spherical than those of the particle distribution, in particular for the strongly anisotropic
distributions of the walls and filaments. In turn, the SPH shapes are substantially
better than those obtained from the TSC reconstructions. The rigidity
of the gridbased TSC density field reconstructions renders them the worst descriptions of the
anisotropy of the local matter distribution. These results show that DTFE is indeed capable of
an impressively accurate description of the shape of walls, filaments and clusters. It
provides a striking confirmation of the earlier discussed visual impressions.
\section{DTFE: Velocity Field Analysis}
\noindent The DTFE method was in fact first defined in the context of a description and analysis of
cosmic flow fields which have been sampled by a set of discretely and sparsely sampled galaxy peculiar
velocities. \cite{bernwey96} demonstrated the method's superior performance with respect to conventional interpolation
procedures. In particular, they also proved that the obtained field estimates involved those of the proper
{\it volume-weighted} quantities, instead of the usually implicit {\it mass-weighted} quantities (see
sect.~\ref{sec:massvolweight}). This corrected a few fundamental biases in estimates of higher order
velocity field moments.
\begin{figure*}
\begin{center}
\vskip -0.0truecm
\mbox{\hskip -0.6truecm\includegraphics[width=12.1cm]{weyval.fig38.ps}}
\vskip 0.0truecm
\caption{The density and velocity field of the LCDM GIF N-body simulation, computed by DTFE. The
bottom two frames show the density (bottom left) and velocity field (bottom right) in a central
slice through the complete simulation box. The density map is a grayscale map. The velocity
field is shown by means of velocity vectors: the vectors represent the velocity components within the
slice, their size proportional to the velocity's amplitude. By means of DTFE the images in both top frames zoom in
on the density structure (left) and flow field (right) around a filamentary structure (whose
location in the box is marked by the solid square in the bottom righthand panel). The shear flow along
the filaments is meticulously resolved. Note that DTFE does not need extra operations to zoom in,
one DTFE calculation suffices to extract all necessary information. From Schaap 2007.}
\label{fig:giffil}
\end{center}
\end{figure*}
The potential of the DTFE formalism may be fully exploited within the context of the analysis
of $N$-body simulations and the galaxy distribution in galaxy redshift surveys. Not only does it
allow a study of the patterns in the nonlinear matter distribution, but also of the
related velocity flows. Because DTFE manages to follow both the density distribution and the
corresponding velocity distribution into nonlinear features it opens up the window towards
a study of the dynamics of the formation of the cosmic web and its corresponding elements. Evidently,
such an analysis of the dynamics is limited to regions and scales without multistream flows
(see sect.~\ref{sec:vellimit}).
A study of a set of GIF $\Lambda$CDM simulations has opened up a detailed view of the
dynamics in and around filaments and other components of the cosmic web by allowing
an assessment of the detailed density and velocity field in and around them
(see fig.~\ref{fig:giffil}). DTFE density and velocity fields may be depicted at any
arbitrary resolution without involving any extra calculation: zoom-ins are
themselves a real magnification of the reconstructed fields. This is in stark contrast to
conventional reconstructions in which the resolution is arbitrarily set by the users and
whose properties are dependent on the adopted resolution.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.6truecm\includegraphics[width=13.1cm]{weyval.fig39.ps}}
\vskip 0.0truecm
\caption{The density and velocity field around a void in the GIF $\Lambda$CDM simulation. The
top lefthand panel shows the N-body simulation particle distribution within a slice
through the simulation box, centered on the void. The top righthand panel shows the
grayscale map of the DTFE density field reconstruction in and around the void, the
corresponding velocity vector plot is shown in the bottom lefthand panel. Notice the
detailed view of the velocity field: within the almost spherical global outflow
of the void features can be recognized that can be identified with the diluted
substructure within the void. Along the solid line in these panels we determined
the linear DTFE density and velocity profile (bottom righthand frame). We can recognize
the global ``bucket'' shaped density profile of the void, albeit marked by substantial
density enhancements. The velocity field reflects the density profile in detail,
dominated by a global super-Hubble outflow. From Schaap 2007.}
\label{fig:gifvoid}
\end{center}
\end{figure*}
\subsection{A case study: Void Outflow}
\label{sec:voidflow}
In Fig.~\ref{fig:gifvoid} a typical void-like region is shown, together with the resulting DTFE
density and velocity field reconstructions. The region was selected from a $\Lambda$CDM
GIF simulation \citep{kauffmann1999}. Figure~\ref{fig:gifdtfeweb} shows a major part of the
(DTFE) density field of the entire GIF simulation which contains the void.
It concerns a $256^3$ particles GIF $N$-body simulation, encompassing
a $\Lambda$CDM ($\Omega_m=0.3,\Omega_{\Lambda}=0.7,H_0=70\, {\rm km/s/Mpc}$)
density field within a (periodic) cubic box with length $141h^{-1}
{\rm Mpc}$ and produced by means of an adaptive ${\rm P^3M}$ $N$-body code.
The top left frame shows the particle distribution in and around the void
within this $42.5h^{-1} {\rm Mpc}$ wide and $1h^{-1} {\rm Mpc}$ thick slice through the
simulation box. The corresponding density field (top righthand frame)
and velocity vector field (bottom lefthand frame) are a nice illustration of
the ability of DTFE to translate the inhomogeneous particle distribution into
highly resolved continuous and volume-filling fields.
The DTFE procedure clearly manages to render the void as a slowly varying
region of low density. Notice the clear distinction between the empty
(dark) interior regions of the void and its edges. In the interior of
the void several smaller {\it subvoids} can be distinguished, with
boundaries consisting of low density filamentary or planar structures.
Such a hierarchy of voids, with large voids containing the traces of
the smaller ones from which they formed earlier through merging,
has been described by theories of void evolution \citep{regoes1991,dubinski1993,
weykamp1993,shethwey2004}.
\bigskip
The velocity vector field in and around the void represents a nice
illustration of the qualities of DTFE to recover the general velocity
flow. It also demonstrates its ability to identify detailed features within
the velocity field. The flow in and around the void is dominated by the
outflow of matter from the void, culminating
in the void's own expansion near the outer edge. The comparison
with the top two frames demonstrates the strong relation with
features in the particle distribution and the DTFE density field. Not only
is it slightly elongated along the direction of the void's shape, but it is also
sensitive to some prominent internal features of the void. Towards the ``SE''
direction the flow appears to slow down near a ridge, while near the
centre the DTFE reconstruction identifies two expansion centres.
The general characteristics of the void expansion are most evident
by following the density and velocity profile along a one-dimensional
section through the void. The bottom-right frame of fig.~\ref{fig:gifvoid} shows
these profiles for the linear section along the solid line indicated in the other three
frames. The first impression is that of the {\it bucket-like} shape of the void, albeit
interspersed by a rather pronounced density enhancement near its centre.
This profile shape conforms to the general trend of low-density regions
to develop a near uniform interior density surrounded by sharply defined
boundaries. Because initially emptier inner regions expand faster than the
denser outer layers the matter distribution gets evened out while the
inner layers catch up with the outer ones. The figure forms a telling
confirmation of DTFE being able to recover this theoretically expected
profile by interpolating its density estimates across the particle diluted
void (see sect.~\ref{sec:voidflow}).
\bigskip
The velocity profile strongly follows the density structure of the void. The linear
velocity increase is a manifestation of its general expansion. The near constant velocity
divergence within the void conforms to the {\it super-Hubble flow} expected for the near
uniform interior density distribution (see sect.~\ref{sec:voidflow}). Because voids are emptier
than the rest of the universe they will expand faster, with a net
velocity divergence equal to
\begin{eqnarray}
\theta&\,=\,&{\displaystyle \nabla\cdot{\bf v} \over \displaystyle H}\,=\,3 (\alpha-1)\,,\qquad\alpha=H_{\rm void}/H\,,
\end{eqnarray}
\noindent where $\alpha$ is defined to be the ratio of the super-Hubble expansion rate of the
void and the Hubble expansion of the universe.
Evidently, the highest expansion ratio is that for voids which are completely empty,
i.e. $\Delta_{\rm void}=-1$. The expansion ratio $\alpha$ for such voids may be
inferred from Birkhoff's theorem, treating these voids as empty FRW universes
whose expansion time is equal to the cosmic time. For a matter-dominated Universe
with zero cosmological constant, the maximum expansion rate that a void may
achieve is given by
\begin{equation}
\theta_{\rm max}\,=\,1.5\Omega_m^{0.6}\,,
\label{eq:voidmax}
\end{equation}
\noindent with $\Omega_m$ the cosmological mass density parameter. For empty voids in a Universe with a
cosmological constant a similar expression holds, except that the value of $\alpha$ has to be numerically
calculated from the corresponding equation. In general the dependence on $\Lambda$ is only weak. Generic voids
will not be entirely empty, their density deficit $|\Delta_{\rm void}|\approx 0.8-0.9$ (cf. eg. the linear density
profile in fig.~\ref{fig:gifvoid}). The expansion rate $\theta_{\rm void}$ for
such a void follows from numerical evaluation of the expression
\begin{eqnarray}
\theta_{\rm void}&\,=\,&\frac{3}{2}\,\frac{\Omega_{m}^{0.6}-\Omega_{m,{\rm void}}^{0.6}}{1+\frac{1}{2}\Omega_{m,{\rm void}}^{0.6}}\,;
\qquad \Omega_{m, {\rm void}}\,=\,\frac{\Omega_m ({\Delta}_{\rm void} +1)}{(1+\frac{1}{3}\theta)^{2}} \,
\end{eqnarray}
\noindent in which $\Omega_{m,{\rm void}}$ is the effective cosmic density parameter inside the
void.
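The implicit pair of relations above is easily solved by fixed-point iteration. The following Python sketch is an illustration added here (the function names and the iteration scheme are ours, not part of the original analysis):

```python
def theta_max(omega_m):
    """Maximum void expansion rate, theta_max = 1.5 * Omega_m^0.6,
    valid for a completely empty void (Delta_void = -1) in a
    matter-dominated universe (cf. eq. for theta_max above)."""
    return 1.5 * omega_m**0.6

def theta_void(omega_m, delta_void, n_iter=200):
    """Solve the implicit pair of relations
        theta   = (3/2) (Om^0.6 - Om_void^0.6) / (1 + Om_void^0.6 / 2)
        Om_void = Om (1 + Delta_void) / (1 + theta/3)^2
    by fixed-point iteration, starting from theta = 0."""
    theta = 0.0
    for _ in range(n_iter):
        om_void = omega_m * (1.0 + delta_void) / (1.0 + theta / 3.0)**2
        theta = 1.5 * (omega_m**0.6 - om_void**0.6) / (1.0 + 0.5 * om_void**0.6)
    return theta

# A completely empty void attains the maximum expansion rate, while a
# generic void with |Delta_void| ~ 0.85 expands somewhat more slowly:
print(theta_max(0.3))           # ~0.73
print(theta_void(0.3, -1.0))    # identical to theta_max(0.3)
print(theta_void(0.3, -0.85))   # ~0.50
```

For $\Delta_{\rm void}=-1$ the effective interior density parameter vanishes and the iteration reduces to the closed-form maximum; for partially empty voids a few dozen iterations suffice for convergence.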
\begin{figure}[t]
\centering
\mbox{\hskip -1.1truecm\includegraphics[width=13.00cm]{weyval.fig40.ps}}
\vskip -0.0truecm
\caption{The DTFE reconstructed velocity and velocity divergence field of a
(low-resolution) SCDM N-body simulation, compared with the corresponding DTFE density field
(top righthand frame). The velocity divergence field is split in two parts. The negative
velocity divergence regions, those marked by inflow, are shown in the bottom lefthand
frame. They mark both the high-density cluster regions as well as the more moderate
filamentary regions. The positive divergence outflow regions in the bottom righthand
panel not only assume a larger region of space but also have a more roundish morphology.
They center on the large (expanding) void regions in the matter distribution. The figure
represents a nice illustration of DTFE's ability to successfully render the non-Gaussian
nonlinear velocity field. From Romano-D\'{\i}az 2004.}
\label{fig:dtfeveldensdivv}
\end{figure}
\subsection{Velocity Field Evolution}
\label{sec:velstat}
On a mildly nonlinear scale the velocity field contains important information on
the cosmic structure formation process. Density and velocity perturbations are supposed
to grow gravitationally out of an initial field of Gaussian density and velocity perturbations.
Once the fluctuations start to acquire values in the order of unity or higher the growth rapidly
becomes more complex. The larger gravitational acceleration exerted by the more massive structures
in the density field induces the infall of proportionally large amounts of matter and hence an
increase of the growth rate, while the opposite occurs in and around the underdense regions.
The overdensities collapse into compact massive objects whose density excess may achieve values
exceeding unity by many orders of magnitude. Meanwhile the underdensities expand and occupy
a growing fraction of space while evolving into empty troughs with a density deficit tending towards
the limiting value of $\delta=-1$. The migration flows which are induced by the evolving matter distribution
are evidently expected to strongly reflect the structure formation process.
In practice a sufficiently accurate analysis of the nonlinear cosmic velocities is far from trivial.
The existing estimates of the peculiar velocities of galaxies are still ridden by major uncertainties
and errors. This is aggravated by the fact that the sampling of the velocity field is discrete, rather
poor and diluted and highly inhomogeneous. While the conventional approach involves a smoothing
of the velocity field over scales larger than $10h^{-1} {\rm Mpc}$ in order to attain a map of the linear
velocity field, we have argued in section~\ref{sec:massvolweight} that implicitly this usually
yields a {\it mass-weighted} velocity field. DTFE represents a major improvement on this. Not only
does it offer an interpolation which guarantees an optimal resolution, thereby opening the
window onto the nonlinear regime, it also guarantees a {\it volume-weighted} flow field (see
eqn.~\ref{eq:fvol}).
\subsubsection{Density and Velocity Divergence}
The implied link between the matter distribution and the induced migration
flows is most strongly expressed through the relation between the density field $\delta({\bf x})$ and
the velocity divergence field. The bottom frames of fig.~\ref{fig:dtfeveldensdivv} contain greyscale
maps of the DTFE normalized velocity divergence estimate $\widehat \theta$ (with $H_0$ the Hubble
constant),
\begin{equation}
{\widehat \theta}\,\equiv\,{\displaystyle \widehat{\nabla \cdot {\bf v}} \over \displaystyle H_0}\,=
{\displaystyle 1 \over \displaystyle H_0}\,\left({\widehat{\frac{\partial v_x}{\partial x}}} +
{\widehat{\frac{\partial v_y}{\partial y}}} + {\widehat{\frac{\partial v_z}{\partial z}}}\right) \,,
\label{eq:dtfe_div_dtfe}
\end{equation}
\noindent for an N-body simulation. For illustrative purposes we have depicted the regions of
negative and positive velocity divergence separately. The comparison between these maps and the
corresponding density field (upper righthand frame) and velocity field (upper lefthand) provides
a direct impression of their intimate relationship. The two bottom frames clearly delineate the
expanding and contracting modes of the velocity field. The regions with a negative divergence are
contracting: matter is falling in along one or more directions. The inflow regions delineate
elongated patches coinciding with filaments and peaks in the density field. The highest inflow
rates are seen around the most massive matter concentrations. Meanwhile the expanding regions
may be seen to coincide with the surrounding large and deep underdense voids, clearly occupying
a larger fraction of the cosmic volume.
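Because the DTFE velocity field is linear within each Delaunay tetrahedron, the velocity gradient, and hence the divergence of eqn.~(\ref{eq:dtfe_div_dtfe}), follows from a small linear system per tetrahedron. The sketch below is our own schematic illustration of this step, not the original DTFE implementation:

```python
import numpy as np

def tetra_velocity_gradient(pos, vel):
    """Constant velocity gradient tensor G inside one Delaunay
    tetrahedron, from the four vertex positions pos (4,3) and
    velocities vel (4,3).  Solves  v_i - v_0 = G (x_i - x_0);
    the velocity divergence is the trace of G."""
    dx = pos[1:] - pos[0]        # (3,3): edge vectors from vertex 0
    dv = vel[1:] - vel[0]        # (3,3): velocity differences
    # Row-wise, dv = dx @ G.T, so G.T solves the linear system dx X = dv.
    return np.linalg.solve(dx, dv).T

# A pure (super-)Hubble-like flow v = H x has divergence 3H and no shear.
# H is a hypothetical expansion rate, chosen only for illustration.
H = 0.07
pos = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
vel = H * pos
G = tetra_velocity_gradient(pos, vel)
print(np.trace(G))   # ~ 3H = 0.21
```

Evaluating the trace tetrahedron by tetrahedron yields the grid-free, volume-weighted divergence field discussed above.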
The strong relation between the density contrast and velocity divergence is a manifestation of the
continuity equation. In the linear regime this is a strict linear one-to-one relation,
\begin{equation}
{\displaystyle \nabla \cdot {\bf v} ({\bf x},t) \over \displaystyle H}\,=\,- a(t) f(\Omega_m,\Omega_{\Lambda})
\,\,\delta({\bf x},t)\,,
\label{eq:divvdenlin}
\end{equation}
\noindent linking the density perturbation field $\delta$ to the peculiar velocity field ${\bf v}$ via the
factor $f(\Omega_m,\Omega_{\Lambda})$ \citep[see][]{peebles80}. A one-to-one relation between velocity
divergence and density persists into the mildly nonlinear regime (see eqn.~\ref{eq:divvdennlin}). This explains
why the map of the velocity divergence in fig.~\ref{fig:dtfeveldensdivv} is a near perfect negative image
of the density map.
Even in the quasi-linear and mildly nonlinear regime the one-to-one correspondence between velocity divergence and density
remains intact, although it involves higher order terms \citep[see][for an extensive review]{bernardeau2002}. Within the
context of Eulerian perturbation theory \citet[][(B)]{bernardeau1992} derived an accurate second-order approximation for
the relation between the divergence and the density perturbation $\delta({\bf x})$. \citet[][(N)]{nusser1991}
derived a similar quasi-linear approximation within the context of the Lagrangian Zel'dovich approximation. According
to these approximate nonlinear relations,
\begin{eqnarray}
{\displaystyle 1 \over \displaystyle H}\,\nabla \cdot {\bf v} ({\bf x})\,=\,
\begin{cases}
{\displaystyle 3 \over \displaystyle 2} f(\Omega_m) \left[1-(1+\delta({\bf x}))^{2/3}\right]\,\qquad\hfill\hfill \hbox{\rm (B)}\\
\ \\
-f(\Omega_m)\,{\displaystyle \delta({\bf x}) \over \displaystyle 1 + 0.18\,\delta({\bf x})}\,\qquad\hfill\hfill \hbox{\rm (N)}\\
\end{cases}
\label{eq:divvdennlin}
\end{eqnarray}
\noindent for a Universe with Hubble parameter $H(t)$ and matter density parameter $\Omega_m$. The studies by
\cite{bernwey96,weyrom2007,schaapphd2007} have shown that the DTFE velocity field reconstructions are indeed able
to reproduce these quasi-linear relations.
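The quasi-linear approximations (B) and (N) are straightforward to evaluate against the linear relation of eqn.~(\ref{eq:divvdenlin}). The short Python sketch below is our own illustration, using the common approximation $f(\Omega_m)\simeq\Omega_m^{0.6}$:

```python
def divv_linear(delta, f):
    """Linear continuity relation: div v / H = -f * delta."""
    return -f * delta

def divv_bernardeau(delta, f):
    """Second-order Eulerian approximation (B):
    div v / H = (3/2) f [1 - (1 + delta)^(2/3)]."""
    return 1.5 * f * (1.0 - (1.0 + delta)**(2.0 / 3.0))

def divv_nusser(delta, f):
    """Zel'dovich-based approximation (N):
    div v / H = -f * delta / (1 + 0.18 delta)."""
    return -f * delta / (1.0 + 0.18 * delta)

f = 0.3**0.6   # f(Omega_m) for Omega_m = 0.3
for delta in (-0.9, -0.5, 0.0, 0.5, 2.0):
    print(f"{delta:5.2f}  {divv_linear(delta, f):7.3f}"
          f"  {divv_bernardeau(delta, f):7.3f}  {divv_nusser(delta, f):7.3f}")
```

All three relations agree to first order in $\delta$; note that for $\delta\rightarrow -1$ the (B) form saturates at $(3/2)f\simeq 1.5\,\Omega_m^{0.6}$, consistent with the maximum void outflow of eqn.~(\ref{eq:voidmax}).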
\subsubsection{Velocity Field Statistics}
\noindent The asymmetric nonlinear evolution of the cosmic velocity and density field manifests itself in an
increasing shift of the statistical distribution of density and velocity perturbations away from the initial Gaussian
probability distribution function. The analytical framework of Eulerian perturbation theory provides a reasonably accurate
description for the early nonlinear evolution \citep[see][for a review]{bernardeau2002}.
As for the velocity field, robust results on the evolution and distribution of the velocity divergence,
$\nabla \cdot {\bf v}$, were derived in a series of papers by Bernardeau \citep[e.g.][]{bernardeau1995}.
The complete probability distribution function (pdf) of the velocity divergence may be evaluated via the summation
of the series of cumulants of the non-Gaussian distribution function. The velocity divergence pdf is strongly
sensitive to the underlying cosmological parameters, in particular the cosmological density parameter $\Omega_m$.
It represents a considerable advantage over the more conventional analysis of cosmic velocity flows on large linear scales.
While the latter yields constraints on a combined function of $\Omega_m$ and the bias $b$ between the galaxy and
matter distribution, access to nonlinear scales would break this degeneracy.
\begin{figure}
\vskip 0.0truecm
\centering
\mbox{\hskip -1.0truecm\includegraphics[width=14.00cm]{weyval.fig41.ps}}
\vskip -0.0truecm
\end{figure}
\begin{figure}
\caption{DTFE determination of the probability distribution function (pdf) of the
velocity divergence $\theta$. Top frame illustrates the sensitivity of the $\theta$ pdf
to the underlying cosmology. The superposition of the theoretical pdf curves of
three cosmologies immediately shows that the location of the maximum of the pdf and
its sharp cutoff at the positive end are highly sensitive to $\Omega$. The other
four frames demonstrate DTFE's ability to successfully trace these marks.
Lefthand frames: confrontation of the DTFE velocity divergence pdf with that
determined by a more conventional two-step fixed-grid interpolation method. Both
curves concern the same $128^3$ SCDM N-body particle simulation (i.e. $\Omega=1$).
Top lefthand frame: tophat filter radius $R_{\rm TH}=10h^{-1} {\rm Mpc}$. Bottom lefthand
panel: $R_{\rm TH}=15h^{-1} {\rm Mpc}$. The solid lines represent theoretical predictions of the PDF for
the measured values of $\sigma_{\theta}$ (Bernardeau 1994). Black triangles are the pdf values
measured by the DTFE method, the black squares by the equivalent VTFE Voronoi method.
The open squares show the results obtained by a more conventional two-step fixed grid
interpolation method. Especially for low and high $\theta$ values the tessellation methods
prove to produce a much more genuine velocity divergence pdf. From Bernardeau \& van de
Weygaert 1996. Righthand frames: on the basis of DTFE's ability to trace the velocity divergence pdf
we have tested its operation in a $\Omega=1$ and a $\Omega=0.4$ CDM N-body simulation. For both
configurations DTFE manages to trace the pdf both at its high-value edge and near its peak. From
Bernardeau et al. 1997}
\label{fig:divvpdf}
\end{figure}
An impression of the generic behaviour and changing global shape of the resulting non-Gaussian pdf as a function
of $\Omega_m$ and the amplitude $\sigma_{\theta}$ of the velocity divergence perturbations may be obtained from the
top frame of fig.~\ref{fig:divvpdf}. The curves correspond to the theoretically predicted velocity divergence pdf
for three different (matter-dominated) FRW cosmologies. Not only does $\Omega_m$ influence the overall
shape of the pdf, it also changes the location of the peak -- indicated by $\theta_{\rm max}$ -- as well
as that of the cutoff at the high positive values of $\theta$. By means of the Edgeworth expansion one may show that
the velocity divergence pdf reaches its maximum for a peak value $\theta_{\rm peak}$ \citep{juszk1995,bernkofm1995},
\begin{equation}
\theta_{\rm peak}\,=\,-{\displaystyle T_3(\theta) \over \displaystyle 2}\,\sigma_{\theta}\,;\qquad
\langle \theta^3\rangle\,=\,T_3\,\langle \theta^2 \rangle^2\,.
\end{equation}
\noindent where the coefficient $T_3$ relates the 3rd order moment of the pdf to the second moment \citep[see e.g.][]{bernardeau1994}.
While the exact location of the peak is set by the bias-independent coefficient $T_3$, this coefficient
depends not only on $\Omega_m$, but also on the shape of the power spectrum, the geometry of the
window function that has been used to filter the velocity field and the value of the cosmological
constant $\Lambda$. To infer the value of $\Omega_m$ extra assumptions therefore need to be invoked. The most direct route
is the determination of $\Omega_m$ via the sharp cutoff of the pdf, related to the expansion velocity of the deepest
voids in a particular cosmology (see eqn.~\ref{eq:voidmax}).
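Inverting eqn.~(\ref{eq:voidmax}) turns a measured cutoff directly into an estimate of $\Omega_m$. The one-line sketch below is our own illustration, assuming a matter-dominated universe so that the cutoff is set by $\Omega_m$ alone:

```python
def omega_from_cutoff(theta_cut):
    """Invert theta_max = 1.5 * Omega_m^0.6 to estimate Omega_m from
    the sharp positive cutoff of the velocity divergence pdf
    (valid for a matter-dominated universe)."""
    return (theta_cut / 1.5)**(1.0 / 0.6)

# Round trips: Omega_m = 1 gives theta_max = 1.5; Omega_m = 0.4 is recovered.
print(omega_from_cutoff(1.5))             # -> 1.0
print(omega_from_cutoff(1.5 * 0.4**0.6))  # ~ 0.4
```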
While conventional techniques may reproduce the pdf for moderate values of the velocity divergence $\theta$, they
tend to have severe difficulties in tracing the distribution for the more extreme positive values and the
highest inflow velocities. An illustration of the tendency to deviate from the analytically predicted distribution
can be seen in the two lefthand frames of fig.~\ref{fig:divvpdf}, showing the velocity divergence pdf determined
from a SCDM N-body simulation for two scales of tophat filtered velocity fields ($R=10h^{-1} {\rm Mpc}$ and $R=15h^{-1} {\rm Mpc}$). The
open squares depict the velocity divergence pdf determined from an N-body simulation through a two-step grid procedure
(see sect.~\ref{sec:massvolweight}). Conventional grid interpolation clearly fails by huge margins to recover
inflow onto the high-density filamentary structures as well as the outflow from voids.
What came as the first evidence for the success of tessellation based interpolation is the rather
tight correspondence of the Delaunay tessellation interpolation results with the analytically
predicted pdf. This finding by \cite{bernwey96} suggested that it would indeed be feasible to
probe the nonlinear velocity field and directly infer accurate estimates of $\Omega_m$. In a follow-up
study \cite{bernwey97} successfully tested this on a range of N-body simulations of structure formation,
showing that Delaunay interpolation indeed recovered the right values of $\Omega_m$. The centre and bottom
righthand frames depict two examples: the pdf's for a $\Omega=1$ and a $\Omega=0.4$ Universe accurately
traced by the Delaunay interpolated velocity field.
\subsection{PSCz velocity field}
In a recent study \cite{weyrom2007} applied DTFE towards reconstructing the velocity flow map throughout
the nearby Universe volume sampled by the PSCz galaxy redshift survey \citep[also see][]{emiliophd2004}.
\subsubsection{The PSCz sample of the Local Universe}
The IRAS-\pscz ~catalog \citep{saunders2000} is an extension of
the $1.2$-Jy catalog \citep{fisher1995}. It contains $\sim 15\,500$
galaxies with a flux at $60 \mu$m larger than $0.6$-Jy. For a
full description of the catalog, selection criteria and the procedures
used to avoid stellar contamination and galactic cirrus, we refer the
reader to \cite{saunders2000}. For our purposes the most
important characteristics of the catalog are the large area sampled
($\sim 84 \%$ of the sky), its depth with a median redshift of $8~500
\ {\rm km~s^{-1} }$, and the dense sampling (the mean galaxy separation at $10~000
\ {\rm km~s^{-1} }$ is $\langle l \rangle = 1~000 \ {\rm km~s^{-1} }$). It implies that PSCz contains
most of the gravitationally relevant mass concentrations in our local
cosmic neighbourhood, certainly sufficient to explain and study the
cosmic flows within a local volume of radius $\sim 120 h^{-1} {\rm Mpc}$.
To translate the redshift space distribution of galaxies in the PSCz catalog into galaxy positions and
velocities in real space the study based itself on an iterative processing of the galaxy sample by
\cite{branchini1999} based on the linear theory of gravitational instability~\citep{peebles80}. The
method involved a specific realization of an iterative technique to minimize
redshift-space distortions \citep{yahil1991}. While the resulting galaxy velocities relate by
construction to the linear clustering regime, the reconstruction of the velocity field throughout
the sample volume does appeal to the capability of DTFE to process a discretely sampled field into
a continuous volume-filling field and its ability to resolve flows in both high-density
regions as well as the outflows from underdense voids.
\subsubsection{The PSCz velocity and density field}
\begin{figure}
\vskip 1.0truecm
\centering
\mbox{\hskip -0.0truecm\includegraphics[height=13.0cm,angle=90.0]{weyval.fig42.ps}}
\vskip -0.0truecm
\end{figure}
\begin{figure}
\caption{Density and velocity field in the local Universe determined by DTFE on the basis
of the PSCz galaxy redshift survey. Our Local Group is at the centre of the map. To the left
we see the Great Attractor region extending out towards the Shapley supercluster. To the righthand
side we can find the Pisces-Perseus supercluster. The peculiar velocities of the galaxies in the PSCz galaxy
redshift catalogue were determined by means of a linearization procedure (Branchini et al. 1999).
The resulting galaxy positions and velocities (vectors) of the input sample for the DTFE
reconstruction are shown in the bottom lefthand frame. The density values range from
$\sim 4.9$ (red) down to $\sim -0.75$ (darkblue), with cyan coloured regions having
a density near the global cosmic average ($\delta \sim 0$). The velocity
vectors are scaled such that a vector with a length of $\approx 1/33rd$ of the
region's diameter corresponds to $650$ km/s. The density and velocity field have an
effective Gaussian smoothing radius of $R_G\sim\sqrt{5} h^{-1} {\rm Mpc}$. The top righthand insert
zooms in on the Local Supercluster and Great Attractor complex. From: Romano-D\'{\i}az
2004.}
\label{fig:psczvel}
\end{figure}
The central circular field of fig.~\ref{fig:psczvel} presents the DTFE velocity field in
the Supergalactic Plane. For comparison, the bottom lefthand insert shows the discrete sample
of galaxy velocities which formed the input for the reconstruction. The velocity field is shown
by means of the projected velocity vectors within the Z-supergalactic plane,
superposed upon the corresponding DTFE density contourmaps inferred from the same PSCz galaxy sample.
The length of the velocity arrows can be inferred from the arrow in the lower lefthand corner, which corresponds
to a velocity of $650$ km/s. The Local Group is located at the origin of the map.
The map of fig.~\ref{fig:psczvel} shows the success of DTFE in converting
a sample of discretely sampled galaxy velocities and locations into a sensible
volume-covering flow and density field. The processed DTFE velocity field reveals intricate details throughout the whole
volume. The first impression is that of the meticulously detailed DTFE flow field, marked by sharply defined flow
regions over the whole supergalactic plane. Large scale bulk flows, distorted flow patterns such as shear, expansion
and contraction modes of the velocity field are clear features revealed by the DTFE technique. DTFE recovers
clearly outlined patches marked by strong bulk flows, regions with characteristic shear flow patterns around
anisotropically shaped supercluster complexes, radial inflow towards a few massive clusters and, perhaps most
outstanding, strong radial outflows from the underdense void regions.
Overall, there is a tight correspondence with the large scale structures in the
underlying density distribution. While the density field shows features down to a scale of
$\sqrt{5} h^{-1} {\rm Mpc}$, the patterns in the flow field clearly have a substantially larger
coherence scale, nearly all in excess of $10 h^{-1} {\rm Mpc}$. The DTFE velocity flow sharply
follows the elongated ridge of the Pisces-Perseus supercluster. In addition we
find the DTFE velocity field to contain markedly sharp transition regions between
void expansion and the flows along the body of a supercluster.
The local nature of the DTFE interpolation guarantees a highly
resolved flow field in high density regions. Massive bulk motions are concentrated
near and around the massive structure extending from the Local Supercluster (center map)
towards the Great Attractor region and the Shapley concentration. The DTFE map nicely
renders this pronounced bulk flow towards the Hydra-Centaurus region and shows
that it dominates the general motions at our Local Group and Local Supercluster.
The top righthand insert zooms in on the flow in and around this region. The most massive
and coherent bulk flows in the supergalactic plane appear to be connected
to the Sculptor void and the connected void regions (towards the lefthand side of the
figure). They are the manifestation of the combination of gravitational attraction by the
heavy matter concentration of the Pavo-Indus-Telescopium complex (at [SGX,SGY]$\approx[-40,-10] h^{-1} {\rm Mpc}$),
the more distant ``Hydra-Centaurus-Shapley ridge'', and the effective push by the Sculptor void region.
Conspicuous shear flows can be recognized along the ridge defined by the Cetus wall towards
the Pisces-Perseus supercluster ([SGX,SGY]$\approx[20,-40] h^{-1} {\rm Mpc}$). A similar strong shear
flow is seen along the extension of the Hydra-Centaurus supercluster towards the Shapley
concentration.
The influence of the Coma cluster is beautifully outlined by the strong and
near perfect radial infall of the surrounding matter, visible at the top-centre of
figure~\ref{fig:psczvel}. Also the velocity field near the Perseus cluster,
in the Pisces-Perseus supercluster region, does contain a strong radial inflow
component.
Perhaps most outstanding are the radial outflow patterns in and around voids.
In particular its ability to interpolate over the low-density and thus sparsely sampled
regions is striking: the voids show up as regions marked by a near-spherical outflow. The
intrinsic suppression of shot noise effects through the adaptive spatial interpolation procedure
of DTFE highlights these important components of the Megaparsec flow field and emphasizes the dynamical
role of voids in organizing the matter distribution of the large scale
Cosmic Web. By contrast, more conventional schemes, such as TSC or SPH \citep[see][]{schaapwey2007a},
meet substantial problems in defining a sensible field reconstruction in low density
regions without excessive smoothing and thus loss of resolution.
\section{DTFE meets reality: 2dFGRS and the Cosmic Web}
To finish the expos\'e on the potential of the Delaunay Tessellation Field Estimator,
we present a reconstruction of the foamy morphology of the galaxy distribution in the
2dF Galaxy Redshift Survey (2dFGRS). DTFE was used to reconstruct the projected galaxy surface
density field as well as the full three-dimensional galaxy density field.
\begin{figure*}
\begin{center}
\vskip -0.0truecm
\mbox{\hskip -0.8truecm\includegraphics[width=13.0cm,height=19.0cm]{weyval.fig43.ps}}
\vskip 0.0truecm
\caption{DTFE galaxy surface density reconstructions of the projected galaxy distribution
in the two 2dF slices (north and south). For comparison see the galaxy distribution in fig.~\ref{fig:2dfgaldist}.
A description may be found in the text (sect.~\ref{sec:2dfsurface}). From Schaap 2007 and
Schaap \& van de Weygaert 2007d.}
\label{fig:2dfsurfdens}
\end{center}
\end{figure*}
\subsection{the 2dF Galaxy Redshift Survey}
The 2dFGRS is one of the major spectroscopic surveys in which the
spectra of $245\,591$ objects have been obtained, with the aim of
providing a representative picture of the large scale distribution of
galaxies in the nearby universe \citep{colless2003}. It is a
magnitude-limited survey, with galaxies selected down to a limiting
magnitude of \mbox{$b_J\sim19.45$} from the extended APM Galaxy Survey
\citep{maddox1990a,maddox1990b,maddox1990c}. The galaxy redshifts were measured
with the 2dF multifibre spectrograph on the Anglo-Australian Telescope,
capable of simultaneously observing 400 objects in a $2^{\circ}$ diameter field.
The galaxies were confined to three regions, together covering an area of approximately
$1500$ square degrees. These regions include two declination strips, each consisting of
overlapping $2^{\circ}$ fields, as well as a number of `randomly'
distributed $2^{\circ}$ control fields. One strip (the SGP strip) is
located in the southern Galactic hemisphere and covers about
\mbox{$80^{\circ}\times15^{\circ}$} close to the South Galactic Pole
(\mbox{$21^h40^m<\alpha<03^h40^m$}, \mbox{$-37.5^{\circ}<\delta<-22.5^{\circ}$}).
The other strip (the NGP strip) is located in the northern Galactic hemisphere and
covers about \mbox{$75^{\circ}\times10^{\circ}$}(\mbox{$09^h50^m<\alpha<14^h50^m$},
\mbox{$-7.5^{\circ}<\delta<+2.5^{\circ}$)}. Reliable redshifts were obtained for
$221\,414$ galaxies.
These data have been made publicly available in the form of the 2dFGRS final data release
(available at {\tt http://msowww.anu.edu.au/2dFGRS/}).
\begin{figure*}
\begin{center}
\vskip 0.75truecm
\mbox{\hskip -0.3truecm\includegraphics[width=13.0cm]{weyval.fig44.ps}}
\vskip 0.0truecm
\end{center}
\end{figure*}
\begin{figure*}
\caption{DTFE galaxy surface density in selected regions in the 2dFGRS galaxy surface
density field. For the density field in the total 2dFGRS region see fig.~\ref{fig:2dfsurfdens}.
For a discussion see sect.~\ref{sec:2dfindividual}. Frame 1 zooms in on the
Great Wall in the southern (SGP) 2dF slice, frame 5 on the one in the northern (NGP) slice.
Note the sharply rendered ``fingers of God'' marking the sites of clusters of
galaxies. Perhaps the most salient feature is the one seen in frame 3, the cross-like
configuration in the lower half of the NGP region. From Schaap 2007 and
Schaap \& van de Weygaert 2007d.}
\label{fig:2dfdetails}
\end{figure*}
\subsection{Galaxy surface density field reconstructions}
\label{sec:2dfsurface}
The galaxy distribution in the 2dFGRS is mainly confined to the two
large strips, NGP and SGP. Since these strips are reasonably thin, a good impression
of the spatial patterns in the galaxy distribution may be obtained
from the 2-D projection shown in fig.~\ref{fig:2dfgaldist}.
We have reconstructed the galaxy surface density field in redshift
space in the 2dFGRS NGP and SGP regions.
All density field reconstructions are DTFE reconstructions on the basis
of the measured galaxy redshift space positions. As no corrections
were attempted to translate these into genuine positions in
physical space, the density reconstructions in this section
concern redshift space. In order to warrant a direct comparison
with the galaxy distribution in fig.~\ref{fig:2dfgaldist} the results
shown in this section were not corrected for any observational selection
effect, not even for the survey radial selection function. For
selection-corrected density field reconstructions we refer to the
analysis in \cite{schaapphd2007,schaapwey2007c}.
Fig.~\ref{fig:2dfsurfdens} shows the resulting DTFE reconstructed density field.
DTFE manages to reveal the strong density contrasts in the large scale density
distribution. The resolution is optimal in that the smallest interpolation
units are also the smallest units set by the data. At the same time the
DTFE manages to bring out the fine structural detail of the intricate
and often tenuous filamentary structures. Particularly noteworthy are the
thin sharp edges surrounding voidlike regions.
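The field estimate underlying such maps is straightforward to sketch. The following is a minimal illustration of the standard zeroth-order DTFE prescription in two dimensions (purely a sketch, not the code used for the reconstructions shown here): the density at each sample point is $(d+1)$ divided by the area of its contiguous Voronoi cell, i.e. the union of all Delaunay triangles of which the point is a vertex.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density_2d(points):
    """Zeroth-order DTFE density estimate at each sample point (2D).

    The density at a point is (d+1)/W_i, with W_i the area of its
    'contiguous Voronoi cell': the union of all Delaunay triangles
    of which the point is a vertex (here d=2, so d+1=3).
    """
    tri = Delaunay(points)
    a = points[tri.simplices[:, 0]]
    b = points[tri.simplices[:, 1]]
    c = points[tri.simplices[:, 2]]
    ab, ac = b - a, c - a
    areas = 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])
    # accumulate, per point, the total area of its adjacent triangles
    cell_area = np.zeros(len(points))
    for k in range(3):
        np.add.at(cell_area, tri.simplices[:, k], areas)
    return 3.0 / cell_area

# example: a compact 'cluster' embedded in a diffuse background
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.05, (200, 2)),
                 rng.uniform(-1.0, 1.0, (200, 2))])
rho = dtfe_density_2d(pts)
```

The adaptivity of the estimate is automatic: the triangles, and hence the effective resolution, shrink wherever the points cluster, which is precisely what produces the sharply rendered contrasts in the maps.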
\begin{figure*}
\begin{center}
\vskip -0.0truecm
\mbox{\hskip -0.5truecm\includegraphics[width=13.0cm]{weyval.fig45.ps}}
\vskip 0.0truecm
\caption{Isodensity surface of the galaxy distribution in the north (top) and
south region (bottom) of the 2dFGRs. The densities are determined by means of the
DTFE technology, subsequently Gaussian smoothed on a scale of $2h^{-1} {\rm Mpc}$. Several well-known
structures are indicated. From Schaap 2007 and Schaap \& van de Weygaert 2007d.}
\label{fig:2dfdtfe3d}
\end{center}
\end{figure*}
\subsubsection{Individual Structures}
\label{sec:2dfindividual}
The impressive resolution and shape sensitivity of the DTFE reconstruction
becomes particularly visible when zooming in on structural details in
the Cosmic Web. Figure~\ref{fig:2dfdetails} zooms in on a few interesting
regions in the map of fig.~\ref{fig:2dfgaldist}. Region~1 focuses on the major
mass concentration in the NGP region, the Sloan Great Wall.
Various filamentary regions emanate from the high density core. In region~2
a void-like region is depicted. The DTFE renders the void as a low density area
surrounded by various filamentary and wall-like features. Two fingers of God are
visible in the upper right-hand corner of region~2, which show up as very sharply
defined linear features. Many such features can be recognized in high density environments.
Note that the void is not a totally empty or featureless region. The void is marked
by substructure and contains a number of smaller subvoids, reminiscent of the
hierarchical evolution of voids \citep{dubinski1993,shethwey2004}. Region~3
is amongst the most conspicuous structures encountered in the 2dF survey. The
cross-shaped feature consists of four tenuous filamentary structures emanating from
a high density core located at the center of the region. Region~4 zooms in on some of
the larger structures in the SGP region. Part of the Pisces-Cetus supercluster is
visible near the bottom of the frame, while the concentration at the top of this region
is the upper part of the Horologium-Reticulum supercluster. Finally, region~5
zooms in on the largest structure in the SGP region, the Sculptor
supercluster.
\subsubsection{DTFE artefacts}
Even though the DTFE offers a sharp image of the cosmic web, the reconstructions
also highlight some artefacts. At the highest resolution we can directly discern
the triangular imprint of the DTFE kernel. Also a considerable amount of noise is
visible in the reconstructions. This is a direct consequence of the high resolution
of the DTFE reconstruction. Part of this noise is natural, a result of the statistical
nature of the galaxy formation process. An additional source of noise is due to the fact
that the galaxy positions have been projected onto a two-dimensional plane. Because DTFE
connects galaxies which lie closely together in the projection, it may involve objects which
in reality are quite distant. A full three-dimensional reconstruction followed
by projection or smoothing on a sufficiently large scale would alleviate the problem.
\subsection{Three-dimensional structure of the 2dFGRS}
We have also reconstructed the full three-dimensional galaxy density
field in the NGP and SGP regions of the 2dFGRS. The result is shown
in Fig.~\ref{fig:2dfdtfe3d}. It shows the three-dimensional rendering
of the NGP (top) and SGP slices (bottom) out to a redshift $z=0.1$.
The maps demonstrate that DTFE is capable of recovering the three-dimensional
structure of the cosmic web as well as its individual elements. Although less
obvious than for the two-dimensional reconstructions, the
effective resolution of the three-dimensional reconstructions is also
varying across the map.
The NGP region is dominated by the large supercluster at a redshift of about 0.08,
the Sloan Great Wall. The structure near the upper edge at a redshift of 0.05 to 0.06
is part of the upper edge of the Shapley supercluster. In the SGP region several
known superclusters can be recognized as well. The supercluster in the center
of this region is part of the Pisces-Cetus supercluster. The huge
concentration at a redshift of about 0.07 is the upper part of the enormous
Horologium-Reticulum supercluster.
\section{Extensions, Applications and Prospects}
In this study we have demonstrated that DTFE density and velocity fields are optimal in the sense of defining a continuous and unbiased
representation of the data while retaining all information available in the point sample.
In the present review we have emphasized the prospects for the analysis of weblike structures or, more generally, any
discretely sampled complex spatial pattern. In the meantime, DTFE has been applied in a number of studies of cosmic structure formation.
These studies do indeed suggest a major improvement over the more conventional analysis tools. Evidently, even though density/intensity field
analysis is one of the primary objectives of the DTFE formalism, one of its important features is its ability to extend its
Delaunay tessellation based spatial interpolation to any corresponding spatially sampled physical quantity. The true potential of DTFE
and related adaptive random tessellation based techniques will become more apparent as further applications and extensions come to
the fore. The prospects for major developments based on the use of tessellations are tremendous. As yet we may identify a diversity of
astrophysical and cosmological applications which will benefit substantially from the availability of adaptive random tessellation
related methods. A variety of recent techniques have recognized the high dynamic range and adaptivity of tessellations to the spatial
and morphological resolution of the systems they seek to analyze.
The first major application of DTFE concerns its potential towards uncovering morphological and statistical characteristics
of spatial patterns. A second major class of applications is that of issues concerning the dynamics of many particle systems.
Tessellation-based methods may prove highly useful for the analysis of the phase space structure and evolution of gravitational
systems, given their ability to efficiently trace density concentrations and singularities in higher dimensional space.
Along similar lines a highly promising avenue is that of the application of similar formalisms within the context
of computational hydrodynamics. The application of Delaunay tessellations as a fully self-adaptive grid may finally
open a Lagrangian grid treatment of the hydrodynamical equations describing complex multiscale systems such as
encountered in cosmology and galaxy formation. Various attempts along these lines have already been made and
with the arrival of efficient adaptive Delaunay tessellation calculations they may finally enable a practically
feasible implementation. A third and highly innovative application, also using both the {\it adaptive} and {\it random}
nature of Delaunay tessellations, is their use as Monte Carlo solvers for systems of complex equations describing complex
physical systems. Particularly interesting has been the recent work by \cite{ritzericke2006}. They managed to exploit the
random and adaptive nature of Delaunay tessellations as a stochastic grid for Monte Carlo lattice treatment of
radiative transfer in the case of multiple sources. In the following we will describe a few of these applications
in some detail.
\subsection{Gravitational Dynamics}
Extrapolating the observation that DTFE is able to simultaneously handle spatial density and velocity
fields \citep[e.g.][]{bernwey96,emiliophd2004,weyrom2007}, and encouraged by the success of Voronoi-based methods
in identifying dark halos in N-body simulations \citep{neyrinck2005}, \cite{arad2004} used DTFE to assess the six-dimensional
phase space density distribution of dark halos in cosmological $N$-body simulations. While a fully six-dimensional analysis may be
computationally cumbersome \citep{ascalbin2005}, and not warranted because of the symplectic character of phase-space,
the splitting of the problem into a separate spatial and velocity-space three-dimensional tessellation may indeed hold
promise for an innovative analysis of the dynamical evolution of dark halos.
\subsubsection{Gravitational Lensing and Singularity Detection}
A related promising avenue seeks to use the ability of DTFE to trace sharp density contrasts. This impelled \cite{bradac2004}
to apply DTFE to gravitational lensing. They computed the surface density map for a galaxy from the projection of the DTFE
volume density field and used it to compute the gravitational lensing pattern around the object.
Recently, \cite{li2006} evaluated the method and demonstrated that it is indeed a promising tool for tracing
higher-order singularities.
\subsection{Computational Hydrodynamics}
Ultimately, the ideal fluid dynamics code would combine the advantages of the Eulerian as well as of the Lagrangian approach.
In their simplest formulation, Eulerian algorithms cover the volume of study with a fixed grid and compute the fluid transfer
through the faces of the (fixed) grid cell volumes to follow the evolution of the system. Lagrangian formulations, on the other hand,
compute the system by following the ever changing volume and shape of a particular individual element of gas\footnote{Interestingly,
the {\it Lagrangian} formulation is also due to Euler \citep{eul1862}, who employed this formalism in a letter to Lagrange; Lagrange later
proposed these ideas in a publication of his own \citep{lag1762}}. Their emphasis on mass resolution makes Lagrangian codes usually
better equipped to follow the system into the highest density regions, at the price of a decreased resolution in regions of a lower
density.
As we may appreciate from the adaptive resolution and morphological properties of Delaunay tessellations in DTFE and
the more general class of {\it Natural Neighbour} interpolations, the use of Delaunay tessellations may define
an appropriate combination of the Eulerian and Lagrangian descriptions of fluid dynamical systems.
\subsubsection{Particle Hydrodynamics}
A well-known strategy in computational hydrodynamics is to follow the Lagrangian description by discretizing the
fluid system in terms of a (large) number of fluid particles which encapsulate, trace and sample the relevant
dynamical and thermodynamical characteristics of the entire fluid \citep[see][for a recent review]{koumoutsakos2005}. Smooth
Particle Hydrodynamics (SPH) codes \citep{lucy1977,gingold1977,monaghan1992} have found widespread use in many areas of science
and have arguably become the most prominent computational tool in cosmology \citep[e.g.][]{katz1996,springel2001,springel2005}. SPH particles
should be seen as discrete balls of fluid, whose shape, extent and properties are specified according to user-defined
criteria deemed appropriate for the physical system at hand. Notwithstanding substantial virtues of SPH -- amongst which
one should include its ability to deal with systems of high dynamic range, its adaptive spatial resolution, and its
flexibility and conceptual simplicity -- it also involves various disadvantages.
One straightforward downside concerns its comparatively poor performance in low-density regions.
In a highly topical cosmological issue such as that of the {\it reionization of the Universe} upon the formation
of the first galaxies and stars it may therefore not be able to appropriately resolve the void patches
which may be relatively important for the transport of radiation. Even in high density regions it may
encounter problems in tracing singularities. As a result of its user-defined finite size it may not
succeed in a sufficiently accurate outlining of shocks. The role of the user-dependent and artificial
particle kernel has also become apparent when comparing the performance of SPH versus the more natural
DTFE kernels (see fig.~\ref{fig:eta4panel}, fig.~\ref{fig:dtfescaling} and sect.~\ref{sec:dtfescaling}).
In summary: SPH involves at least four user-defined aspects which affect its performance,
\begin{enumerate}
\item{} SPH needs an arbitrary user-specified kernel function $W$.
\item{} SPH needs a smoothing length $h$;\\
\ \ \ even though the standard choice is a length adapting to the particle density it does imply a finite extent.
\item{} The SPH kernel needs a shape; a spherical shape is usually presumed.
\item{} SPH needs an (unphysical) artificial viscosity to stabilize solutions
and to be able to capture shocks.
\end{enumerate}
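To make the first three items concrete, here is a minimal illustrative sketch of one common choice, the spherically symmetric cubic-spline (M4) kernel. The normalisation $8/(\pi h^3)$ assumed below is the 3-D convention with compact support $r \le h$; other codes adopt a support radius of $2h$ with a different prefactor.

```python
import numpy as np

def cubic_spline_w(r, h):
    """3D cubic-spline (M4) SPH kernel with compact support r <= h.

    Normalisation 8/(pi h^3) is one common convention; other codes
    use a support radius of 2h with a different prefactor.
    """
    q = np.asarray(r, dtype=float) / h
    sigma = 8.0 / (np.pi * h**3)
    w = np.zeros_like(q)
    inner = q <= 0.5
    outer = (q > 0.5) & (q <= 1.0)
    w[inner] = 1.0 - 6.0 * q[inner]**2 + 6.0 * q[inner]**3
    w[outer] = 2.0 * (1.0 - q[outer])**3
    return sigma * w

# sanity check: the kernel must integrate to unity over its support
h = 1.0
r = np.linspace(0.0, h, 100001)
f = cubic_spline_w(r, h) * 4.0 * np.pi * r**2
norm = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

Every choice in this sketch (functional form, support radius, spherical symmetry) is imposed by the user, which is precisely the contrast with the DTFE kernel, whose shape and extent follow automatically from the local point configuration.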
Given the evidently superior density and morphology tracing abilities of DTFE and related methods based upon
adaptive grids, \cite{pelu2003} investigated what the effect would be of replacing a regular
SPH kernel by an equivalent DTFE based kernel. In an application to a TreeSPH simulation of the (neutral) ISM
they managed to identify various density and morphological environments where the natural adaptivity of
DTFE proves to yield substantially better results. They concluded with various suggestions for the formulation of
an alternative particle based adaptive hydrodynamics code in which the kernel would be defined by DTFE.
A closely related and considerably advanced development concerns the formulation of a complete and self-consistent
particle hydrodynamics code by Espa{\~nol}, Coveney and collaborators \citep{espanol1998,flekkoy2000,serrano2001,
serrano2002,fabritiis2003}. Their {\it (Multiscale) Dissipative Particle Hydrodynamics} code is entirely formulated on the basis of
{\it Voronoi fluid particles}. Working out the Lagrangian equations for a fluid system they demonstrate
that the subsequent compartmentalization of the system into discrete thermodynamic systems - {\it fluid
particles} - leads to a set of equations that are best solved when the fluid particles are identified with
the Voronoi cells of the corresponding Voronoi tessellation. In other words, the geometrical
features of the Voronoi model are directly connected to the physics of the system by interpreting the
Voronoi volumes as coarse-grained ``fluid clusters''. Not only does their formalism
capture the basic physical symmetries and conservation laws and reproduce the continuum for (sampled)
smooth fields, but it also suggests a striking similarity with turbulence. Their
formalism has been invoked in the modeling of {\it mesoscale} systems, simulating the molecular
dynamics of complex fluids and soft condensed matter systems, which are usually marked by fluctuations
and Brownian motion. While the absence of long-range forces such as gravity simplifies the description
to some extent, the {\it Voronoi particle} description does provide enough incentive for looking into
possible implementations within an astrophysical context.
\subsubsection{Adaptive Grid Hydrodynamics}
For a substantial part the success of the DTFE may be ascribed to the use of Delaunay tessellations as
optimally covering grid. This implies that they are also ideal for use in {\it moving \& adaptive grid} implementations
of computational hydrodynamics.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=11.7cm]{weyval.fig46.ps}}
\vskip 0.0truecm
\caption{Application of the Natural Neighbour scheme solution of partial differential equations on highly irregular
evolving Delaunay grids, described by Braun \& Sambridge 1995. It involves the natural-element method (NEM) solution
of the Stokes flow problem in which motion is driven by the fall of an elasto-plastic plate denser than the viscous fluid.
The problem is solved by means of a natural neighbour scheme, the Delaunay grid is used as unstructured computation mesh. The Stokes
equation is solved at the integration points in the linear fluid, the equations of force balance at the integration points
located within the plate. The solution is shown at two timesteps (1 and 10000 of the integration). Image courtesy
of M. Sambridge also see Braun \& Sambridge 1995.}
\label{fig:nnsbmhydro}
\end{center}
\end{figure*}
Unstructured grid methods originally emerged as a viable alternative to structured or block-structured grid techniques
for discretization of the Euler and Navier-Stokes equations for structures with complex geometries
\citep[see][for an extensive review]{mavripilis1997}. The use of unstructured grids provides greater flexibility for
discretizing systems with a complex shape and allows the adaptive update of mesh points and connectivities in order
to increase the resolution. A notable and early example of the use of unstructured grids may be found in General Relativity,
in the formulation of Regge calculus \citep{regge1961}. Its applications include quantum gravity formulations on the
basis of Voronoi and Delaunay grids \citep[e.g.][]{wmiller1997}.
One class of unstructured grids is based upon the use of specific geometrical elements (triangles in 2-D, tetrahedra in 3-D) to
cover in a non-regular fashion a structure whose evolution one seeks to compute. It has become a prominent technique in a large
variety of technological applications, of which those used in the design of aerodynamically optimally shaped cars
and aeroplanes are perhaps the most visible. The definition and design of optimally covering and suitable meshes is a
major industry in computational science, including issues involved with the definition of optimal Delaunay meshes
\citep[see e.g.][]{amenta1998,amenta1999,shewchuk2002,chengdey2004,dey2004,alliez2005a,alliez2005b,boissonnat2006}. A second
class of unstructured grids is based upon the use of structural elements of a mixed type with irregular connectivity. The
latter class allows the definition of a self-adaptive grid which follows the evolution of a physical system. It is in this
context that one may propose the use of Voronoi and Delaunay tessellations as adaptive hydrodynamics lattices.
Hydrocodes with Delaunay tessellations at their core warrant a close connection to the underlying matter
distribution and would represent an ideal compromise between an Eulerian and Lagrangian description,
guaranteeing an optimal resolution and dynamic range while taking care of an appropriate coverage of
low-density regions. Within astrophysical context there have been a few notable attempts to develop {\it moving grid}
codes. Although these have shown their potential \citep{gnedin1995,pen1998}, their complexity and the implicit
complications raised by the dominant presence of the long-range force of gravity have as yet prevented their
wide-range use.
It is here that Delaunay tessellations may prove to offer a highly promising alternative. The advantages of a moving
grid fluid dynamics code based on Delaunay tessellations have been explicitly addressed in a detailed and
enticing study by \cite{whitehurst1995}. His two-dimensional FLAME Lagrangian hydrocode used a first-order
solver and proved to be far superior to all tested first-order and many second-order Eulerian codes. The adaptive
nature of the Lagrangian method and the absence of preferred directions in the grid proved to be key
factors for the performance of the method. \cite{whitehurst1995} illustrated this with impressive examples
such as the capturing of shocks and the collision of perfectly spherical shells. The related higher-order
Natural Neighbour hydrocodes used by \cite{braunsambridge1995}, for a range of geophysical problems,
and \cite{sukumarphd1998} are perhaps the most convincing examples and applications of Delaunay grid
based hydrocodes. The advantages of Delaunay grids in principle apply to any algorithm invoking them, in particular
also for three-dimensional implementations (of which we are currently unaware).
\begin{figure*}
\begin{center}
\mbox{\hskip -0.5truecm\includegraphics[width=12.3cm]{weyval.fig47.ps}}
\vskip 0.0truecm
\caption{Nine non-linear gray-scale images of the density evolution of the 2D interacting blast waves test of the
FLAME Delaunay grid hydrocode of Whitehurst (1995). The circular shape of the shockfronts is well represented; the
contact fronts are clearly visible. Of interest is the symmetry of the results, which is not enforced and so is
genuine, and the instability of the decelerated contact fronts. From Whitehurst 1995.}
\label{fig:blastwhitehurst}
\end{center}
\vskip -0.5truecm
\end{figure*}
\subsubsection{Kinetic and Dynamic Delaunay Grids}
If anything, the major impediment towards the use of Delaunay grids in evolving (hydro)dynamical systems is the
high computational expense of computing a Delaunay grid. In the conventional situation one needs to
recompute the Delaunay lattice completely at each timestep, which renders any Delaunay-based code prohibitively
expensive.
In recent years considerable attention has been devoted towards developing kinetic and dynamic Delaunay
codes which manage to limit the update of a grid to the few sample points that have moved so far that
the identity of their Natural Neighbours has changed. The work by Meyer-Hermann and collaborators \citep{schaller2004,beyer2005,
beyerhermann2006} may prove to represent a watershed in having managed to define a parallel code for the kinetic and dynamic
calculation of Delaunay grids. Once the Delaunay grid of the initial point configuration has been computed,
subsequent timesteps involve a continuous dynamical upgrade via local Delaunay simplex upgrades as points move
around and switch the identity of their natural neighbours.
The code is an elaboration on the more basic step of inserting one point into a Delaunay grid and
evaluating which Delaunay simplices need to be upgraded. Construction of Delaunay triangulations
via incremental insertion was introduced by \cite{fortune1992} and \cite{edelsbrunner1996}. Only sample
points which experience a change in identity of their natural neighbours need an update of their Delaunay
simplices, based upon {\it vertex flips}. The update of sample points that have retained the
same natural neighbours is restricted to a mere recalculation of the position and shape of
their Delaunay cells.
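The locality of such updates can be illustrated with a brute-force experiment: when a single point is displaced slightly, only the natural-neighbour sets of a few nearby points change, and only there would simplex flips be required. A minimal sketch (this naively recomputes the full tessellation with \texttt{scipy}, which is exactly the expense the kinetic codes avoid):

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_neighbours(points):
    """For each point, the set of points it shares a Delaunay edge with."""
    tri = Delaunay(points)
    nbrs = [set() for _ in range(len(points))]
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    nbrs[i].add(int(j))
    return nbrs

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (50, 2))
before = natural_neighbours(pts)

# displace a single point slightly; only the neighbour sets of points
# in its immediate vicinity can change, so a kinetic code needs only
# local simplex flips rather than a full recomputation
pts_moved = pts.copy()
pts_moved[0] += 0.02
after = natural_neighbours(pts_moved)
changed = [i for i in range(len(pts)) if before[i] != after[i]]
```

In a kinetic code the set `changed` is identified directly from the motion of the points, so the cost per timestep scales with the number of affected simplices rather than with the size of the full tessellation.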
\subsection{Random Lattice Solvers and \texttt{SimpleX}}
The use of Delaunay tessellations as adaptive and dynamic meshes for evaluating the
fluid dynamical equations of an evolving physical system emphasizes their adaptive nature.
Perhaps even more enticing is the use of the random nature of these meshes. Taking
into account their spatial adaptivity as well as their intrinsic stochastic nature,
they provide an excellent template for following physical systems along the lines
of Monte Carlo methods.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=11.9cm]{weyval.fig48.ps}}
\vskip 0.0truecm
\caption{The result of two simple \texttt{SimpleX} radiative transfer tests on a 2D Poisson-Delaunay random
lattice with $N=5\times10^4$ points. Both are logarithmic plots of the number of particles at
each site. Left: illustration of the conservation of momentum by means of the transport of particles
with constant momentum vectors. Right: Illustration of a scattering transport process. Image
courtesy of J. Ritzerveld, also see Ritzerveld 2007.}
\label{fig:simplex1}
\end{center}
\end{figure*}
Generally speaking, Monte Carlo methods determine the characteristics of many body systems as
statistical averages over a large number of particle histories, which are computed with the
use of random sampling techniques. They are particularly useful for transport problems
since the transport of particles through a medium can be described stochastically as a random
walk from one interaction event to the next. The first to develop a computer-oriented Monte Carlo
method for transport problems were \cite{metropolis1949}. Such transport problems may in general
be formulated in terms of a stochastic Master equation which may be evaluated by means
of Monte Carlo methods by simulating large numbers of different particle trajectories or
histories \citep[see][for a nice discussion]{ritzerveldphd2007}.
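As a purely illustrative sketch of this idea (not the \texttt{SimpleX} scheme itself, which replaces the continuous walk by hops along Delaunay edges), such a transport process can be simulated as an isotropic random walk whose free paths are exponentially distributed:

```python
import numpy as np

def random_walk_2d(n_photons, n_steps, mfp, rng=None):
    """Isotropic 2D random walk with exponentially distributed step
    lengths of mean free path `mfp`: a toy Monte Carlo model of
    particle transport through a uniform scattering medium."""
    rng = rng or np.random.default_rng()
    pos = np.zeros((n_photons, 2))
    for _ in range(n_steps):
        step = rng.exponential(mfp, n_photons)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_photons)
        pos += step[:, None] * np.column_stack((np.cos(phi), np.sin(phi)))
    return pos

# after N scatterings the mean squared displacement is N*<l^2> = 2*N*mfp^2
pos = random_walk_2d(20000, 50, 1.0, np.random.default_rng(42))
msd = float(np.mean(np.sum(pos**2, axis=1)))
```

The statistical average over many such particle histories (here the mean squared displacement) recovers the diffusive behaviour of the underlying Master equation.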
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=11.9cm]{weyval.fig49.ps}}
\vskip 0.0truecm
\end{center}
\caption{A volume rendering of the result of using the \texttt{SimpleX} method to transport ionizing photons
through the intergalactic medium at and around the epoch of reionization. The \texttt{SimpleX} method was
applied to a PMFAST simulation of the large scale structure in a $\Lambda$CDM universe. The results of 6 different
timesteps are plotted, in which white corresponds to hydrogen that is still neutral
(opaque) and blue to hydrogen that has already been ionized (transparent). Image courtesy of J. Ritzerveld, also see Ritzerveld 2007.}
\label{fig:simplexreion}
\end{figure*}
While the preferred directions of regular meshes and lattices introduce various undesirable
anisotropy effects in Monte Carlo calculations, random lattices may alleviate or solve this
problem. Lattice Boltzmann studies were amongst the first to recognize this
\citep[see e.g.][]{ubertini2005}. In a set of innovative publications \cite{christ1982a,
christ1982b,christ1982c} applied random lattices, including Voronoi and Delaunay tessellations,
to solve (QCD) field theoretical equations.
Recently, a highly innovative and interesting application to (astrophysical) radiative
transfer problems followed the same philosophy as the Random Lattice Gauge theories.
\cite{ritzericke2006} and \cite{ritzerveldphd2007} translated the problem of radiative transfer
into one which can be solved on an appropriately defined (Poisson)-Delaunay grid. This leads to
the formulation of the \texttt{SimpleX} radiation transfer technique which translates the transport
of radiation through a medium into the {\it deterministic} displacement of photons from one
vertex to another vertex of the {\it stochastic} Delaunay mesh.
The perfectly random and isotropic nature of the Delaunay grid assures an unbiased Monte Carlo sampling of the
photon paths. The specification of appropriate absorption and scattering coefficients at the nodes
completes the proper Monte Carlo solution of the Master equation for radiative transfer.
One of the crucial aspects of \texttt{SimpleX} is the sampling of the underlying density field
$\rho({\bf x})$ such that the Delaunay grid edge lengths $L_{\rm D}$ represent a proper random
sample of the (free) paths $\lambda_{\gamma}$ of photons through a scattering and absorbing medium.
In other words, \texttt{SimpleX} is built on the assumption that the (local) mean Delaunay edge
length $\langle L_{\rm D}\rangle$ should be proportional to the (local) mean free path of the photon.
For a medium of density $\rho({\bf x})$ in d-dimensional space (d=2 or d=3), with absorption/scattering
coefficient $\sigma$, sampled by a Delaunay grid generated by a point sample with local number
density $n_{\rm X}({\bf x})$,
\begin{eqnarray}
\langle L_{\rm D} \rangle \,\propto\, \lambda_{\gamma}\ \ \ \Longleftarrow\ \ \
\begin{cases}
\langle L_{\rm D} \rangle\,=\,\zeta(d)\, n_{\rm X}^{-1/d}\\
\ \\
\lambda_{\gamma}\,=\,{\displaystyle 1 \over \displaystyle \rho({\bf x})\,\sigma}
\end{cases}
\end{eqnarray}
where $\zeta(d)$ is a dimension dependent constant. By sampling of the underlying density
field $\rho({\bf x})$ by a point density
\begin{eqnarray}
n_{\rm X}({\bf x})\,\propto \rho({\bf x})^d
\end{eqnarray}
\texttt{SimpleX} produces a Delaunay grid whose edges are guaranteed to form a representative stochastic
sample of the photon paths through the medium.
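This sampling condition is easy to illustrate numerically (the sketch below uses assumed names and a pixelised toy density field, not the actual \texttt{SimpleX} implementation): draw a point process with $n_{\rm X} \propto \rho^d$ and check that the local inter-point distance, a proxy for the Delaunay edge length, scales as $n_{\rm X}^{-1/d} \propto \rho^{-1} \propto \lambda_\gamma$.

```python
import numpy as np
from scipy.spatial import cKDTree

def simplex_point_sample(rho, n_points, d=2, rng=None):
    """Sample cells of a pixelised density field with probability
    proportional to rho**d (the SimpleX condition n_X ~ rho^d) and
    jitter each sampled point uniformly inside its cell."""
    rng = rng or np.random.default_rng()
    p = rho.ravel().astype(float)**d
    p /= p.sum()
    idx = rng.choice(p.size, size=n_points, p=p)
    cells = np.column_stack(np.unravel_index(idx, rho.shape))
    return (cells + rng.random((n_points, d))) / np.array(rho.shape)

# two-cell toy field: the right half is 4x denser than the left half
rho = np.array([[1.0, 4.0]])
pts = simplex_point_sample(rho, 4000, d=2, rng=np.random.default_rng(0))

# nearest-neighbour distances as a proxy for local Delaunay edge lengths:
# with n_X ~ rho^2 they scale as n_X^(-1/2) ~ 1/rho, i.e. about 4x
# shorter in the dense half, mirroring the shorter mean free path there
dist, _ = cKDTree(pts).query(pts, k=2)
nn = dist[:, 1]
left, right = pts[:, 1] < 0.5, pts[:, 1] >= 0.5
ratio = nn[left].mean() / nn[right].mean()
```

The measured ratio of mean separations between the two halves is close to the density contrast, as the SimpleX condition requires.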
To illustrate the operation of \texttt{SimpleX}, Fig.~\ref{fig:simplex1} shows the outcome of two
two-dimensional \texttt{SimpleX} test calculations. One tests the ability of
\texttt{SimpleX} to follow a beam of radiation, the second its ability to follow the
spherical spread of radiation emitted by an isotropically emitting source. The figure nicely
illustrates the success of \texttt{SimpleX}, meanwhile providing an impression of the effect
of its erratic action at the scale of Delaunay grid cells.
The \texttt{SimpleX} method has been applied to the challenging problem of the reionization of the
intergalactic medium by the first galaxies and stars. A major problem for most radiative transfer
techniques is to deal with multiple sources of radiation. While astrophysical problems may often be
approximated by a description in terms of a single source, a proper evaluation of reionization
should take into account the role of a multitude of sources. \texttt{SimpleX} proved its ability to
yield sensible answers in a series of test comparisons between different radiative transfer
codes applied to aspects typical for reionization \citep{illiev2006}. The first results of the
application of \texttt{SimpleX} to genuine cosmological circumstances, by coupling it to a cosmological
SPH code, do yield highly encouraging results \citep{ritzerveld2007b}. Fig.~\ref{fig:simplexreion}
is a beautiful illustration of the outcome of one of the reionization simulations.
If anything, \texttt{SimpleX} proves that the use of random Delaunay grids has the potential of
representing a genuine breakthrough for addressing a number of highly complex
astrophysical problems.
\subsection{Spatial Statistics and Pattern Analysis}
Within its cosmological context DTFE will meet its real potential in more sophisticated applications tuned towards uncovering
morphological characteristics of the reconstructed spatial patterns.
A potentially interesting application would be its implementation in the SURFGEN machinery. SURFGEN seeks to provide locally defined
topological measures, cq. local Minkowski functionals, of the density field \citep{sahni1998,jsheth2003,shandarin2004}. A recent
new sophisticated technique for tracing the cosmic web is the {\it skeleton} formalism developed by \cite{novikov2006}, based upon
the framework of Morse theory \citep{colombi2000}. The {\it skeleton } formalism seeks to identify filaments in the web by identifying
ridges along which the gradient of the density field is extremal along its isocontours \citep[also see][] {sousbie2006}. The use
of unbiased weblike density fields traced by DTFE would undoubtedly sharpen the ability of the skeleton formalism to
trace intricate weblike features over a wider dynamical range.
An alternative yet related track concerns the direct use of tessellations themselves in outlining topological properties of
a complex spatial distribution of points. {\it Alpha-shapes} of a discrete point distribution are subsets of Delaunay triangulations which
are sensitive tracers of its topology and may be exploited towards inferring Minkowski functionals and Betti numbers \citep{edelsbrunner1983,edelsbrunner1994,
edelsbrunner2002}. A recent and ongoing study seeks their application to the description of the
Cosmic Web \citep{vegter2007} (also see sec~\ref{sec:alphashape}).
Two major extensions of DTFE already set out to the identification of key aspects of the cosmic web. Specifically focussed on the
hierarchical nature of the Megaparsec matter distribution is the detection of weblike anisotropic features over a range of spatial
scales by the Multiscale Morphology Filter (MMF), introduced and defined by \cite{aragonmmf2007}. The {\it Watershed Void Finding} (WVF)
algorithm of \cite{platen2007} is a void detection algorithm meant to outline the hierarchical nature of the cosmic void population.
We will shortly touch upon these extensions in order to illustrate the potential of DTFE.
\subsection{Alpha-shapes}
\label{sec:alphashape}
{\it Alpha shapes} are a description of the (intuitive) notion of the shape of a discrete point set. {\it Alpha-shapes} of a discrete
point distribution are subsets of a Delaunay triangulation and were introduced by Edelsbrunner and collaborators \citep{edelsbrunner1983,
mueckephd1993,edelsbrunner1994,edelsbrunner2002}. Alpha shapes are generalizations of the convex hull of a point set and are concrete geometric objects which are uniquely defined for a particular point set. Reflecting the topological structure of a point distribution, it
is one of the most essential concepts in the field of Computational Topology \citep{dey1998,vegter2004,zomorodian2005}.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=12.2cm]{weyval.fig50.ps}}
\vskip 0.0truecm
\caption{Examples of {\it alpha shapes} of the LCDM GIF simulation. Shown are central slices through
two alpha shapes (top: low alpha; bottom: high alpha). The image shows the sensitivity of alpha shapes
to the topology of the matter distribution. From: Vegter et al. 2007. Courtesy: Bob Eldering.}
\label{fig:gifalpha}
\end{center}
\end{figure*}
If we have a point set $S$ and its corresponding Delaunay triangulation, we may identify all {\it Delaunay simplices}
-- tetrahedra, triangles, edges, vertices -- of the triangulation. For a given non-negative value of $\alpha$, the
{\it alpha complex} of a point set consists of all simplices in the Delaunay triangulation which have
an empty circumsphere with squared radius less than or equal to $\alpha$, $R^2\leq \alpha$. Here ``empty'' means that the
open sphere does not include any points of $S$. For an extreme value $\alpha=0$ the alpha complex merely consists of
the vertices of the point set. The set also defines a maximum value $\alpha_{\rm max}$, such that for $\alpha \geq
\alpha_{\rm max}$ the alpha shape is the convex hull of the point set.
The {\it alpha shape} is the union of all simplices of the alpha complex. Note that it implies that although the alpha shape
is defined for all $0\leq \alpha < \infty$ there are only a finite number of different alpha shapes for any one point set.
The alpha shape is a polytope in a fairly general sense, it can be concave and even disconnected. Its components can be
three-dimensional patches of tetrahedra, two-dimensional ones of triangles, one-dimensional strings of edges and
even single points. The set of all real numbers $\alpha$ leads to a family of shapes capturing the intuitive notion of
the overall versus fine shape of a point set. Starting from the convex hull of a point set and gradually decreasing $\alpha$ the shape
of the point set gradually shrinks and starts to develop cavities. These cavities may join to form tunnels and
voids. For sufficiently small $\alpha$ the alpha shape is empty.
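The $R^2 \leq \alpha$ criterion is straightforward to implement. The following fragment is an illustrative two-dimensional sketch (not part of any published alpha-shape code; it uses \texttt{scipy} and only tests the top-dimensional triangles, ignoring the subtleties of attached lower-dimensional simplices): since the circumcircle of every Delaunay triangle is empty by construction, the squared-circumradius test alone selects the triangles of the alpha complex.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex_triangles(points, alpha):
    """Return the Delaunay triangles of a 2-D point set whose squared
    circumradius is <= alpha (simplified alpha-complex selection)."""
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths and (signed) area of the triangle
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(c - a)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0.0:
            continue  # degenerate (collinear) triangle
        # circumradius R = (|ab| |bc| |ca|) / (4 * area)
        R2 = (la * lb * lc / (4.0 * area)) ** 2
        if R2 <= alpha:
            keep.append(tuple(simplex))
    return keep
```

Decreasing $\alpha$ removes the large, tenuous triangles first, which is precisely the shrinking behaviour described above.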
Following this description, one may find that alpha shapes are intimately related to the topology of a point set.
As a result they form a direct and unique way of characterizing the topology of a point distribution. A complete
quantitative description of the topology is that in terms of Betti numbers $\beta_{\rm k}$ and these may indeed
be directly inferred from the alpha shape. The zeroth Betti number $\beta_0$ specifies the number of
independent components of an object. In ${\mathbb R}^3$, $\beta_1$ may be interpreted as the number of independent
tunnels, and $\beta_2$ as the number of independent enclosed voids. The $k^{th}$ Betti number effectively counts
the number of independent $k$-dimensional holes in the simplicial complex.
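As a concrete illustration of the simplest of these numbers: $\beta_0$ of the $1$-skeleton of a simplicial complex can be counted with an elementary union-find pass over its edges. The sketch below is generic and not tied to any particular alpha-shape code.

```python
def betti0(n_vertices, edges):
    """Number of connected components (beta_0) of a simplicial
    1-skeleton, computed by union-find with path halving."""
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj  # merge the two components
    return len({find(i) for i in range(n_vertices)})
```

Running this over the edge set of each alpha shape in the family yields $\beta_0$ as a function of $\alpha$.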
Applications of alpha shapes have as yet focussed on biological systems, in particular on characterizing the topology
and structure of macromolecules. The work by Liang and collaborators \citep{edelsbrunner1998,liang1998a,liang1998b,liang1998c}
uses alpha shapes and Betti numbers to assess the voids and pockets in an effort to classify complex protein structures, a highly challenging
task given the 10,000-30,000 protein families involving 1,000-4,000 complicated folds. Given the interest in the topology
of the cosmic mass distribution \citep[e.g.][]{gott1986,mecke1994,schmalzing1999}, it is evident that {\it alpha shapes}
also provide a highly interesting tool for studying the topology of the galaxy distribution and N-body simulations of cosmic
structure formation. Directly connected to the topology of the point distribution itself, it would discard the need for
user-defined filter kernels.
In a recent study \cite{vegter2007} computed the alpha shapes for a set of GIF simulations of cosmic structure
formation (see fig.~\ref{fig:gifalpha}). On the basis of a calibration of the inferred Minkowski functionals and
Betti numbers from a range of Voronoi clustering models their study intends to refine the knowledge of
the topological structure of the Cosmic Web.
\subsection{the Multiscale Morphology Filter}
\label{sec:mmf}
The multiscale detection technique -- MMF -- is used for characterizing the different morphological
elements of the large scale matter distribution in the Universe \cite{aragonmmf2007}. The method is ideally suited
for extracting catalogues of clusters, walls, filaments and voids from samples of galaxies in redshift surveys or particles
in cosmological N-body simulations.
The multiscale filter technology recognizes and identifies features in a density or intensity field on the basis of an
assessment of their coherence along a range of spatial scales. It has the virtue of providing a generic framework for
characterizing the local morphology of the density field and for enabling the selection of those morphological features
which the analysis at hand seeks to study. The Multiscale Morphology Filter (MMF)
method has been developed on the basis of visualization and feature extraction techniques in computer vision and medical research
\citep{florack1992}. The technology, finding its origin in computer vision research, has been optimized within the context of feature
detection in medical imaging. \cite{frangi1998} and \cite{sato1998} presented its operation for the specific situation
of detecting the web of blood vessels in a medical image. This defines a notoriously complex pattern of elongated tenuous features
whose branching make it closely resemble a fractal network.
The density or intensity field of the sample is smoothed over a range of multiple scales. Subsequently, this signal is processed through
a morphology response filter. Its form is dictated by the particular morphological feature it seeks to extract, and depends on the local
shape and spatial coherence of the intensity field. The morphology signal at each location is then defined to be the one with the maximum
response across the full range of smoothing scales. The MMF translates, extends and optimizes this technology towards the recognition
of the major characteristic structural elements in the Megaparsec matter distribution. It yields a unique framework for the combined
identification of dense, compact bloblike clusters, of the salient and moderately dense elongated filaments and of tenuous planar walls.
Figure~\ref{fig:mmf} includes a few of the stages involved in the MMF procedure.
Crucial for the ability of the method to identify anisotropic features such as filaments and walls is the use of a morphologically
unbiased and optimized continuous density field retaining all features visible in a discrete galaxy or particle distribution.
Accordingly, DTFE is used to process the particle distribution into a continuous density field (top centre frame, fig.~\ref{fig:mmf}).
The morphological intentions of the MMF method render DTFE a key element for translating the particle or galaxy distribution into a
representative continuous density field $f_{\tiny{\textrm{DTFE}}}$.
\begin{table}
\centering
\begin{large}
\begin{tabular} {|p{2.65cm}|p{3.75cm}|p{5.0cm}|}
\hline
\hline
\ &&\\
\hskip 0.5truecm Structure & \hskip 1.0truecm $\lambda$ ratios & \hskip 1.2truecm $\lambda$ constraints \\
\ &&\\
\hline
\ &&\\
\hskip 0.5truecm Blob \hskip 0.5truecm & \hskip 0.6truecm $\lambda_1 \simeq \lambda_2 \simeq \lambda_3$ \hskip 0.5truecm & \hskip 0.4truecm $\lambda_3 <0\,\,;\,\, \lambda_2 <0 \,\,;\,\, \lambda_1 <0 $ \hskip 0.5truecm \\
\hskip 0.5truecm Line \hskip 0.5truecm & \hskip 0.6truecm $\lambda_1 \gg \lambda_2 \simeq \lambda_3$ \hskip 0.5truecm & \hskip 0.4truecm $\lambda_3 <0 \,\,;\,\, \lambda_2 <0 $ \\
\hskip 0.5truecm Sheet \hskip 0.5truecm & \hskip 0.6truecm $\lambda_1 \simeq\lambda_2 \gg \lambda_3$ \hskip 0.5truecm & \hskip 0.4truecm $\lambda_3 <0 $ \\
\ &&\\
\hline
\hline
\end{tabular}
\end{large}
\vskip 0.25truecm
\caption{Behaviour of the eigenvalues for the characteristic morphologies. The
lambda conditions describe objects with intensity higher than their
background (such as clusters, filaments and walls). For voids we must reverse the
sign of the eigenvalues. From the constraints imposed by the $\lambda$
conditions we can describe the blob morphology as a subset of the line
which is itself a subset of the wall.}
\label{tab:morphmask}
\end{table}
In the implementation of \cite{aragonmmf2007} the scaled representations of the data are obtained by repeatedly smoothing the
DTFE reconstructed density field $f_{\tiny{\textrm{DTFE}}}$ with a hierarchy of spherically symmetric Gaussian filters
$W_{\rm G}$ having different widths $R$:
\begin{equation}
f_{\rm S}({\vec x}) =\, \int\,{\rm d}{\vec y}\,f_{\tiny{\textrm{DTFE}}}({\vec y})\,W_{\rm G}({\vec y},{\vec x})\nonumber
\end{equation}
where $W_{\rm G}$ denotes a Gaussian filter of width $R$. A pass of the Gaussian smoothing filter attenuates structure on
scales smaller than the filter width. The Scale Space itself is constructed by stacking these variously smoothed data sets,
yielding the family $\Phi$ of smoothed density maps $f_n$:
\begin{equation}
\label{eq:scalespace}
\Phi\,=\,\bigcup_{{\rm levels}\;n} f_n
\end{equation}
In essence the {\it Scale Space} structure of the field is the $(D+1)$ dimensional space defined by the $D$-dimensional density
or intensity field smoothed over a continuum of filter scales $R_G$. As a result a data point can be viewed at any of the scales
where scaled data has been generated.
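In discretized form the construction of $\Phi$ amounts to no more than stacking Gaussian-smoothed copies of the field. A minimal sketch (the function name and the use of \texttt{scipy.ndimage} are ours, not part of the MMF code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(field, radii):
    """Discrete Scale Space: a stack of copies of a density field,
    each smoothed with a Gaussian of the given width (grid units)."""
    return np.stack([gaussian_filter(field, sigma=r) for r in radii])
```

Each Gaussian pass attenuates structure below the filter width, so the fluctuation amplitude of the stacked levels decreases monotonically with scale.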
\begin{figure*}
\begin{center}
\vskip 0.5truecm
\mbox{\hskip -0.7truecm\includegraphics[height=14.0cm,angle=90.0]{weyval.fig51.ps}}
\vskip 0.0truecm
\end{center}
\end{figure*}
\begin{figure*}
\caption{Schematic overview of the Multiscale Morphology Filter (MMF) to isolate and
extract elongated filaments (dark grey), sheetlike walls (light grey) and clusters (black
dots) in the weblike pattern of a cosmological N-body simulation. The first stage is the
translation of a discrete particle distribution (top lefthand frame) into a DTFE density
field (top centre). The DTFE field is filtered over a range of scales (top righthand stack
of filtered fields). By means of morphology filter operations defined on the basis of the
Hessian of the filtered density fields the MMF successively selects the regions which
have a bloblike (cluster) morphology, a filamentary morphology and a planar morphology,
at the scale at which the morphological signal is optimal. This produces a feature map
(bottom lefthand). By means of a percolation criterion the physically significant
filaments are selected (bottom centre). Following a sequence of blob, filament and wall
filtering finally produces a map of the different morphological features in the particle
distribution (bottom lefthand). The 3-D isodensity contours in the bottom lefthand
frame depict the most pronounced features. From Arag\'on-Calvo et al. 2007.}
\label{fig:mmf}
\end{figure*}
The crux of the concept is that the neighbourhood of a given point will look different at each scale. While there are potentially
many ways of making a comparison of the scale dependence of local environment, \cite{aragonmmf2007} chose to calculate the
Hessian matrix in each of the smoothed replicas of the original data to describe the local ``shape'' of the density field
in the neighbourhood of that point. In terms of the Hessian, the local variations around a point $\vec{x}_0$ of the density
field $f(\vec{x})$ may be written as the Taylor expansion
\begin{equation}\label{eq:taylor_exp_1}
f(\vec{x}_0 + \vec{s})\,=\,f(\vec{x}_0)\,+\,\vec{s}^T \nabla f(\vec{x}_0)\,+\,
\frac{1}{2}\vec{s}^T \mathcal{H} (\vec{x}_0) \vec{s} + ...
\end{equation}
where
\begin{equation}\label{eq:hessian_1}
\mathcal{H}\,=\,\left ( \begin{array}{ccc}
f_{xx} & f_{yx} & f_{zx} \\
f_{xy} & f_{yy} & f_{zy} \\
f_{xz} & f_{yz} & f_{zz}
\end{array} \right )
\end{equation}
is the Hessian matrix. Subscripts denote the partial derivatives of $f$ with respect to the named variable. There are many
possible algorithms for evaluating these derivatives. In practice, the Scale Space procedure evaluates the Hessian directly
for a discrete set of filter scales by smoothing the DTFE density field by means of a Mexican Hat filter,
\begin{eqnarray}
\frac{\partial^2}{\partial x_i \partial x_j} f_S({\vec x})&\,=\,&f_{\tiny{\textrm{DTFE}}}\,\otimes\,\frac{\partial^2}{\partial x_i \partial x_j} W_{\rm G}(R_{\rm S})\nonumber \\
&\,=\,&\int\,{\rm d}{\vec y}\,f({\vec y})\,\,\frac{(x_i-y_i)(x_j-y_j)-\delta_{ij}R_{\rm S}^2}{R_{\rm S}^4}\,W_{\rm G}({\vec y},{\vec x})
\end{eqnarray}
with ${x_1,x_2,x_3}={x,y,z}$ and $\delta_{ij}$ the Kronecker delta. In other words, the scale space representation of the Hessian matrix for each level $n$ is evaluated by means of a convolution with the second derivatives of the Gaussian filter, also known as the {\it Marr or ``Mexican Hat''} Wavelet.
The eigenvalues of the Hessian matrix at a point encapsulate the information on the local shape of the field. Eigenvalues are denoted
as $\lambda_{a}(\vec{x})$ and arranged so that
$ \lambda_1 \ge \lambda_2 \ge \lambda_3 $:
\begin{eqnarray}
\qquad \bigg\vert \; \frac{\partial^2 f_n({\vec x})}{\partial x_i \partial x_j} - \lambda_a({\vec x})\; \delta_{ij} \; \bigg\vert
&=& 0, \quad a = 1,2,3 \\
\mathrm{with} \quad \lambda_1 &\ge& \lambda_2 \ge \lambda_3 \nonumber
\end{eqnarray}
The $\lambda_{i}(\vec{x})$ are coordinate independent descriptors of the behaviour of the density field in the locality of the point
$\vec{x}$ and can be combined to create a variety of morphological indicators. They quantify the rate of change of the field gradient in
various directions about each point. A small eigenvalue indicates a low rate of change of the field values in the corresponding
eigen-direction, and vice versa. The corresponding eigenvectors show the local orientation of the morphology characteristics.
Evaluating the eigenvalues and eigenvectors for the renormalised Hessian $\tilde {\mathcal{H}}$ of each dataset in a Scale Space shows
how the local morphology changes with scale. First regions in scale space are selected according to the appropriate morphology
filter, identifying bloblike, filamentary and wall-like features at a range of scales. The selections are made according
to the eigenvalue criteria listed in table~\ref{tab:morphmask}. Subsequently, a sophisticated machinery of filters
and masks is applied to assure the suppression of noise and the identification of the locally relevant scales for the
various morphologies.
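The eigenvalue selection step may be sketched as follows. This is an illustrative reimplementation, not the actual MMF code: the noise masks and the per-voxel scale selection are omitted, and only the sign criteria of table~\ref{tab:morphmask} are applied at a single scale. The second derivatives are obtained by differentiating the Gaussian filter itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def morphology_mask(field, sigma):
    """Per-voxel morphology from the Hessian of the Gaussian-smoothed
    field, with eigenvalues ordered lambda_1 >= lambda_2 >= lambda_3:
    blob: all three negative; line: lambda_2 and lambda_3 negative;
    sheet: lambda_3 negative."""
    d = field.ndim
    H = np.empty(field.shape + (d, d))
    for i in range(d):
        for j in range(d):
            order = [0] * d
            order[i] += 1
            order[j] += 1      # differentiate the Gaussian along axes i, j
            H[..., i, j] = gaussian_filter(field, sigma, order=order)
    lam = np.linalg.eigvalsh(H)        # ascending: lam[..., 0] = lambda_3
    sheet = lam[..., 0] < 0
    line = sheet & (lam[..., 1] < 0)
    blob = line & (lam[..., -1] < 0)
    return blob, line, sheet
```

The nesting blob $\subset$ line $\subset$ sheet of the table is reflected directly in the successive logical \texttt{\&} conditions.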
Finally, for the cosmological or astrophysical purpose at hand the identified
spatial patches are tested by means of an {\it erosion} threshold criterion. Identified blobs should have a
critical overdensity corresponding to virialization, while identified filaments should fulfil a percolation requirement
(bottom central frame). By successive repetition for the identification of blobs, filaments and sheets -- each with their
own morphology filter -- the MMF has dissected the cosmological density field into the corresponding features. The box
in the bottom lefthand frame shows a segment from a large cosmological N-body simulation: filaments are coloured
dark grey, the walls light grey and the clusters are indicated by the black blobs.
Once these features have been marked and identified by MMF, a large variety of issues may be addressed. An important
issue is that of environmental influences on the formation of galaxies. The MMF identification of filaments
allowed \cite{aragonmmf2007} to show that galaxies in filaments and walls do indeed have a mass-dependent alignment.
\subsection{the Watershed Void Finder}
The Watershed Void Finder (WVF) is an implementation of the {\it Watershed
Transform} for segmentation of images of the galaxy and matter distribution
into distinct regions and objects and the subsequent identification of voids
\citep{platen2005,platen2007}. The watershed transform is a
concept defined within the context of mathematical morphology. The basic
idea behind the watershed transform finds its origin in geophysics. It
delineates the boundaries of the separate domains, the {\it basins}, into which
rainfall will collect. The analogy with the cosmological context is
straightforward: {\it voids} are to be identified with the {\it basins},
while the {\it filaments} and {\it walls} of the cosmic web are the ridges
separating the voids from each other.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=11.9cm]{weyval.fig52.ps}}
\vskip 0.0truecm
\caption{Three frames illustrating the principle of the watershed
transform. The lefthand frame shows the surface to be segmented.
Starting from the local minima the surrounding basins of the surface
start to flood as the water level continues to rise (dotted plane
initially below the surface). Where two basins meet up near a ridge
of the density surface, a ``dam'' is erected (central frame). Ultimately,
the entire surface is flooded, leaving a network of dams that
defines a segmented volume and delineates the corresponding
cosmic web (righthand frame). From: Platen, van de Weygaert \& Jones 2007.}
\label{fig:wvfcart}
\end{center}
\end{figure*}
The identification of voids in the cosmic matter distribution is hampered by
the absence of a clearly defined criterion of what a void is. Unlike overdense
and virialized clumps of matter, voids are not genuinely defined physical
objects. The boundary and identity of voids is therefore mostly a matter of
definition. As a consequence there is a variety of voidfinding algorithms
\citep{kauffair1991,elad1996,plionbas2002,hoyvog2002,shandfeld2006,colberg2005b,
patiri2006a}. A recent study \citep{colpear2007} contains a balanced comparison of
the performance of the various void finders with respect to a small region taken
from the Millennium simulation \citep{springmillen2005}.
\begin{figure*}
\begin{center}
\mbox{\hskip -0.1truecm\includegraphics[width=11.9cm]{weyval.fig53.ps}}
\vskip 0.0truecm
\caption{A visualization of several intermediate steps of the
Watershed Void Finding (WVF) method. The top lefthand frame shows the
particles of a slice in the LCDM GIF simulation. The corresponding
DTFE density field is shown in the top righthand frame. The
next, bottom lefthand, frame shows the resulting $n$-th order
median-filtered image. Finally, the bottom righthand frame shows
the resulting WVF segmentation computed on the basis of the median
filtered image. From: Platen, van de Weygaert \& Jones 2007.}
\label{fig:wvf}
\end{center}
\end{figure*}
\subsubsection{Watersheds}
\label{sec:watershed}
With respect to the other void finders the watershed algorithm seeks to define
a natural formalism for probing the hierarchical nature of the void distribution in
maps of the galaxy distribution and in N-body simulations of cosmic
structure formation. The WVF has several advantages \citep[see e.g.][]{meyerbeucher1990}.
Because it identifies a void segment on the basis of the crests in the density field
surrounding a density minimum, it is able to trace the void boundary even when
it has a distorted and twisted shape. Also, because the contours around well-chosen
minima are by definition closed the transform is not sensitive to local protrusions between
two adjacent voids. The main advantage of the WVF is that for an ideally smoothed
density field it is able to find voids in an entirely parameter free fashion.
The word {\it watershed} finds its origin in the analogy of the procedure
with that of a landscape being flooded by a rising level of water.
Figure~\ref{fig:wvfcart} illustrates the concept. Suppose
we have a surface in the shape of a landscape. The surface is pierced at the location of
each of the minima. As the waterlevel rises a growing fraction of the landscape will be flooded
by the water in the expanding basins. Ultimately basins will meet at
the ridges corresponding to saddlepoints in the density field. These
define the boundaries of the basins, enforced by means of a
sufficiently high {\it dam}. The final result of the completely immersed landscape
is a division of the landscape into individual cells, separated
by the {\it ridge dams}.
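The flooding idea translates into a very small algorithm. In one dimension it reduces to steepest-descent assignment of every cell to a local minimum; adjacent cells sharing a minimum form one basin, and basin boundaries are the ridges. A toy sketch (ours, with ties broken towards the left neighbour):

```python
def watershed_basins_1d(f):
    """Assign each cell of a 1-D field to the local minimum reached
    by steepest descent; the resulting labels mark the basins."""
    n = len(f)
    basin = [None] * n
    for i in range(n):
        j = i
        while True:
            nxt = j
            if j > 0 and f[j - 1] < f[nxt]:      # descend left?
                nxt = j - 1
            if j < n - 1 and f[j + 1] < f[nxt]:  # or further down right?
                nxt = j + 1
            if nxt == j:
                break                            # reached a minimum
            j = nxt
        basin[i] = j
    return basin
```

In two or three dimensions the same descent is performed over the grid neighbours, and the cells where two basins meet constitute the ridge ``dams''.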
\subsubsection{Formalism}
The WVF consists of eight crucial steps. The two essential first steps
relate directly to DTFE. The use of DTFE is essential to infer a continuous
density field from a given N-body particle distribution or galaxy redshift
survey. For the success of the WVF it is of utmost importance that the density
field retains its morphological character, i.e. the hierarchical nature, the weblike
morphology dominated by filaments and walls, and the presence of voids. In particular
in and around low-density void regions the raw density field is characterized by
a considerable level of noise. In an essential second step noise gets suppressed
by an adaptive smoothing algorithm which in a consecutive sequence of repetitive
steps determines the median of densities within the {\it contiguous Voronoi cell}
surrounding a point. The determination of the median density of the natural
neighbours turns out to define a stable and asymptotically converging smooth
density field fit for a proper watershed segmentation.
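The natural-neighbour median smoothing may be sketched as follows. This is an illustration built on \texttt{scipy}'s Delaunay neighbour lists, not the actual WVF implementation; for simplicity the point itself is included in the median and the triangulation is not recomputed between iterations.

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_neighbour_median(points, density, iterations=1):
    """Iteratively replace each point's density estimate by the median
    over its Delaunay natural neighbours (plus itself)."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    rho = np.asarray(density, float).copy()
    for _ in range(iterations):
        new = np.empty_like(rho)
        for i in range(len(points)):
            nbrs = indices[indptr[i]:indptr[i + 1]]
            new[i] = np.median(np.append(rho[nbrs], rho[i]))
        rho = new
    return rho
```

Because the median is insensitive to outliers, isolated shot-noise spikes in the raw DTFE field are removed while genuine coherent structures survive.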
Figure~\ref{fig:wvf} is an illustration of four essential stages in the WVF procedure.
Starting from a discrete point distribution (top left), the continuous density field is
determined by the DTFE procedure (top right). Following the natural smoothing by {\it Natural Neighbour Median}
filtering (bottom left), the watershed formalism is able to identify the void segments in the density field
(bottom right).
\section{Concluding Remarks}
This extensive presentation of tessellation-based machinery for the analysis of weblike patterns in the
spatial matter and galaxy distribution intends to provide the interested reader with a framework to
exploit and extend the large potential of Voronoi and Delaunay Tessellations. Even though conceptually
not too complex, they do represent an intriguing world by themselves. Their cellular geometry paves the
way towards a natural analysis of the intricate fabric known as Cosmic Web.
\section{Acknowledgments}
RvdW wishes to thank Vicent Mart\'{\i}nez, Enn Saar and Maria Pons for their invitation and hospitality
during this fine and inspiring September week in Valencia, and their almost infinite
patience regarding my ever shifting deadlines. We owe thanks to O. Abramov, S. Borgani, M. Colless,
A. Fairall, T. Jarrett, M. Juri\'c, R. Landsberg, O. L\'opez-Cruz, A. McEwen, M. Ouchi, J. Ritzerveld, M. Sambridge,
V. Springel and N. Sukumar for their figures used in this manuscript. The work described in this lecture is the
result of many years of work, involving numerous collaborations. In particular we wish
to thank Vincent Icke, Francis Bernardeau, Dietrich Stoyan, Sungnok Chiu, Jacco Dankers, Inti Pelupessy,
Emilio Romano-D\'{\i}az, Jelle Ritzerveld, Miguel Arag\'on-Calvo, Erwin Platen, Sergei Shandarin,
Gert Vegter, Niko Kruithof and Bob Eldering for invaluable contributions and discussions, covering nearly
all issues touched upon in this study. We are particularly grateful to Bernard Jones, for his enthusiastic and crucial
support and inspiration, and the many original ideas, already over two decades, for all that involved
tessellations, DTFE, the Universe, and much more .... RvdW is especially grateful to Manolis and Menia for
their overwhelming hospitality during the 2005 Greek Easter weeks in which a substantial fraction of this
manuscript was conceived ... What better setting to wish for than the view from the Pentelic mountain overlooking
the cradle of western civilization, the city of Pallas Athena. Finally, this work could not have
been finished without the patience and support of Marlies ...
\printindex
\section{Introduction.}
The Euclidean algorithm is the process of comparison of commensurable magnitudes and the modular group $\mathrm{PSL}_2 (\Z)$ is an encoding of this algorithm. Since the intellect is ultimately about comparison of magnitudes, it should come as no surprise that the modular group manifests itself in diverse contexts through its action on mathematical objects, no matter what our level of abstraction is. Among all manifestations of $\mathrm{PSL}_2 (\Z)$ the following four classical actions are of fundamental nature:
\begin{itemize}
\item [1.] its left-action on the infinite trivalent plane tree,
\item [2.] its left action on the upper half plane $\H$ by M\"obius transformations,
\item [3.] its right-action on the binary quadratic forms, and
\item [4.] its left-conjugation action on itself.
\end{itemize}
Our aim in this paper is to clarify the connections between these four actions. See \cite{uludag/actions/of/the/modular/group} for an overview of the related subjects from a wider perspective. In particular, the actions in consideration will play a crucial role in observing non-trivial relations between Teichm\"{u}ller theory and arithmetic. Such a point of view will be taken in a forthcoming paper where we construct a global groupoid whose objects are (roughly speaking) all ideal classes in real quadratic number fields and morphisms correspond to basic graph transformations known as flips. The present work can also be considered as an introduction to that upcoming paper.
Let us turn back to our list of actions. The first one is transitive but free on neither the set of edges nor the set of vertices of the tree in question. In order to make it free on the set of edges, we add the midpoints as extra vertices, thereby doubling the set of edges, and call the resulting infinite tree the {\it bipartite Farey tree $\mathcal F$}. The modular group action is still transitive on the edge set of $\mathcal F$. Now since $\mathrm{PSL}_2 (\Z)$ acts on $\mathcal F$ by automorphisms, freely on the set of edges of $\mathcal F$, so does any subgroup $\Gamma$ of $\mathrm{PSL}_2 (\Z)$, and by our definition a {\it modular graph}\footnote{Contributing to the long list of names and equivalent/dual notions with various nuances: trivalent diagrams, cyclic trivalent graphs, cuboid tree diagrams, Jacobi diagrams, trivalent ribbon graphs, triangulations; more generally, maps, ribbon graphs, fat graphs, dessins, polygonal decompositions, lozenge tilings, coset diagrams, etc.} is simply a quotient graph $\Gamma\backslash\mathcal F$. This is almost the same thing as a trivalent ribbon graph, except that we consider the midpoints as extra 2-valent vertices and pending edges are allowed. Modular graphs parametrize subgroups up to conjugacy, and modular graphs with a base edge classify subgroups of the modular group.
\begin{figure}[h!]
\centering
\begin{subfigure}[]{5.5cm}
\centering
\includegraphics[scale=0.35]{kleindessins3.jpg}
\caption{\scriptsize A dessin (linienzug of Klein) from 1879 \protect\cite{klein/linienzuge}}
\label{fig:linienzuge/of/klein}
\end{subfigure}
\qquad
\begin{subfigure}[]{4cm}
\centering
\vspace{1mm}
\includegraphics[scale=0.2]{annulus.jpg}
\vspace{4mm}
\caption{\scriptsize A \c{c}ark in its ambient annulus}
\label{fig:an/example/of/a/cark}
\end{subfigure}
\caption{}
\label{fig:cark/and/its/short/form}
\end{figure}
The second action is compatible with the first one in the following sense: The tree $\mathcal F_{top}\subset \H$ which is built as the $\mathrm{PSL}_2 (\Z)$-orbit of the arc connecting two elliptic points on the boundary of the standard fundamental domain, is a topological realization of the Farey tree $\mathcal F$. Consequently, $\Gamma\backslash\mathcal F_{top} \subset \Gamma\backslash\H$ is a topological realization of the graph $\Gamma\backslash\mathcal F$, as a graph embedded in the orbifold $\Gamma\backslash\H$. This latter has no orbifold points if $\Gamma$ is torsion-free but always has punctures due to the parabolic elements of $\Gamma$, or it has some boundary components. These punctures are in one-to-one correspondence with the left-turn circuits in $\Gamma\backslash\mathcal F$. Widening these punctures gives a deformation retract of the ambient orbifold to the graph, in particular the upper half plane $\H$ retracts to the Farey tree $\mathcal F_{top}$. To recover the orbifold from the modular graph one glues punctured discs along the left-turn paths of the graph.
If $\Gamma$ is torsion-free of finite index, then $\Gamma\backslash\H$ is an algebraic curve which can be defined over a number field since it is a finite covering of the modular curve ${\mathcal M} = \mathrm{PSL}_2 (\Z) \backslash \H$. According to Bely\u{\i}'s theorem, \cite{belyi}, any arithmetic surface can be defined this way, implying in particular that the action of the absolute Galois group defined on the set of finite coverings $\{ \Gamma\backslash\H\rightarrow {\mathcal M}\}$ is faithful. But these coverings are equivalently described by the graphs $\Gamma\backslash\mathcal F$. This striking correspondence between combinatorics and arithmetic led Grothendieck to study dessins from the point of view of the action of the absolute Galois group, see \cite{lando-zvonkine-lowdimtop}. However, explicit computations of covering maps $\Gamma\backslash \H\rightarrow {\mathcal M}$ required by this approach turned out to be forbiddingly hard if one wants to go beyond some basic cases and only a few uniform theorems could be obtained. In fact, dessins are more general graphs that correspond to finite coverings of the thrice punctured sphere, which is equivalent to a subsystem of coverings of ${\mathcal M}$ since ${\mathbf P}^1\backslash\{0,1,\infty\}$ is a degree-6 covering of ${\mathcal M}$.
The third action in our list is due to Gauss. Here $\mathrm{PSL}_2 (\Z)$ acts on the set of binary quadratic forms via change of variables in the well-known manner. Orbits of this action are called {\it classes} and forms in the same class are said to be {\it equivalent}. Here we are interested in the action on \emph{indefinite} forms. This action always has a cyclic stabilizer group, which is called the proper automorphism group of the form $f$ and denoted $\langle M_f \rangle$. Indefinite binary quadratic forms represent ideal classes in the quadratic number field having the same discriminant as the form and hence are tightly related to real quadratic number fields \cite{computational/nt/cohen}. We provide a succinct introduction to binary quadratic forms later in the paper.
The correspondence between forms and dessins can be described briefly as follows: to an indefinite binary quadratic form $f$ we associate its proper automorphism group $\langle M_f \rangle$ and to $\langle M_f \rangle$ we associate the infinite graph $\langle M_f \rangle \backslash\mathcal F$, which is called a {\it \c{c}ark}\footnote{Turkish {\it \c cark} (pronounced as ``chark'') is borrowed from Persian, and it has a common etymology with Indian {\it chakra}, Greek {\it kyklos} and English {\it wheel}.}. Via the topological realization of $\mathcal F$, this is a graph embedded in the annulus $\langle M_f \rangle \backslash\H$. The form $f_M$ corresponding to the matrix $M\in \mathrm{PSL}_2 (\Z)$ is found by homogenizing the fixed-point equation of $M$.
\c{C}arks are infinite ``transcendental" graphs, whereas the dessins literature considers only finite graphs (``transcendental" since they correspond to non-algebraic extensions of the function field of the modular curve). This transcendence implies that \c carks go undetected in the algebraic fundamental group approach; nevertheless, we shall see that this does not prevent them from being arithmetic objects.
Equivalent forms have conjugate stabilizers (automorphism groups) and conjugate subgroups have isomorphic quotient graphs. It turns out that the set of classes is exactly the set of orbits of hyperbolic elements of $\mathrm{PSL}_2 (\Z)$ under the fourth (conjugation) action in our list. This set of orbits can be identified with the set of bracelet diagrams with beads of two colors.
In fact, \c carks can be thought of as $\mathbf{Z}$-quotients of periodic rivers of Conway \cite{conway/sensual/quadratic/form} or graphs dual to the coset diagrams of Mushtaq, \cite{mushtaq/modular/group}. As we shall see later in the paper, \c{c}arks provide a very nice reformulation of various concepts pertaining to indefinite binary quadratic forms, such as reduced forms and the reduction algorithm, ambiguous forms, reciprocal forms, the Markoff value of a form, etc. For example, \c{c}arks of reciprocal classes admit an involutive automorphism, and the quotient graph gives an infinite graph with two pending edges. These graphs parametrize conjugacy classes of dihedral subgroups of the modular group. \c{C}arks also provide a more conceptual way to understand the relation between coset diagrams and quadratic irrationalities and their properties as studied in \cite{mushtaq/modular/group} or in \cite{malik/zafar/real/quadratic/irrational/numbers}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.50]{rreeciprocal.jpg}
\caption{The graph of a reciprocal class. Edges of this graph parametrize the reciprocal forms in this class.}
\label{fig:recip}
\end{figure}
For us the importance of this correspondence between \c carks and forms lies in the fact that it suggests a concrete and clear way to consider modular graphs as arithmetic objects, viz. Gauss' binary quadratic forms, as was much solicited by Grothendieck's dessins school. We also would like to remark that the clarity of the graph language provides us with new points of view on the classical and deep questions concerning the behavior of class numbers, although the structure of class groups as seen through such graphs remains largely unexplored. Moreover, the second named author, in \cite{reduction}, has presented an improvement of the age-old reduction algorithm of Gauss and gave an algorithmic solution to the representation problem of binary quadratic forms. The language of \c{c}arks might also provide a new insight into the real multiplication project of Manin and Marcolli, see \cite{manin/real/multiplication}.
Our computations concerning forms and their reduction are done in PARI/GP, \cite{PARI2} with certain subroutines of our own (source code is available upon request).
\section{Farey tree and modular graphs}
\label{sec:farey/tree/and/modular/graphs}
It is well known that the two elliptic transformations $S (z)=-1/z$ and $R (z)= (z-1)/z$, respectively of orders 2 and 3, generate a group of M\"obius transformations which is isomorphic to the projective group of two by two integral matrices having determinant $1$, the modular group
\cite{bqf/vollmer}. It is also well-known that $\mathrm{PSL}_2 (\Z) \cong \langle S\rangle \ast \langle R\rangle =\mathbf{Z}/2\mathbf{Z} \ast \mathbf{Z} / 3\mathbf{Z}$. Let us now consider the graph $\mathcal F$ (the bipartite Farey tree), given by the following data:
\begin{eqnarray*}
E (\mathcal F) & = & \{\{W\} \colon W \in \mathrm{PSL}_2 (\Z) \} \\
V (\mathcal F) & = & V_{{{\otimes}}} (\mathcal F) \sqcup V_{\bull} (\mathcal F);
\end{eqnarray*}
\noindent where
\begin{eqnarray*}
V_{{\otimes}} (\mathcal F) & = & \{ \{W,WS\} \colon W \in \mathrm{PSL}_2 (\Z) \}, \\
V_{\bull} (\mathcal F) & = & \{ \{W,WR,WR^{2}\} \colon W \in \mathrm{PSL}_2 (\Z) \}.
\end{eqnarray*}
\noindent There is an edge between a vertex $v=\{W,WS\} \in V_{{\otimes}} (\mathcal F)$ and another vertex $v'=\{W',W'R,W'R^{2}\} \in V_{\bull} (\mathcal F)$ if and only if $\{W,WS\} \cap \{W',W'R,W'R^{2}\} \neq \emptyset$, and there are no other edges. Thus the edge connecting $v$ and $v'$ {\it is} $v\cap v'$, whenever this intersection is non-empty. Observe that by construction the graph is bipartite. The edges incident to the vertex $\{W,WR,WR^{2}\} \in V_{\bull}$ are $ \{W\}, \{WR\}, \{WR^{2}\} $, and these edges inherit a natural cyclic ordering from the vertex. Thus the Farey tree $\mathcal F$ is an infinite bipartite ribbon graph\footnote{A ribbon graph is a graph together with a cyclic ordering of the edges that are incident to each vertex in the graph.}. It is a tree since $\mathrm{PSL}_2 (\Z)$ is the free product $\langle S\rangle \ast \langle R\rangle$.
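The construction above is easy to experiment with on a computer. The following Python snippet is an illustrative sketch of our own (independent of the PARI/GP routines mentioned in the introduction, with hypothetical function names): it models elements of $\mathrm{PSL}_2 (\Z)$ as integer matrices normalized up to sign, checks the relations $S^2=R^3=I$, and checks that the two endpoints of the edge $\{W\}$ intersect exactly in $\{W\}$.

```python
# Illustrative sketch: PSL_2(Z) elements as integer 2x2 matrices (a, b, c, d),
# normalized up to sign so that M and -M share one representative.
I = (1, 0, 0, 1)
S = (0, -1, 1, 0)    # z -> -1/z
R = (1, -1, 1, 0)    # z -> (z - 1)/z

def pnorm(m):
    # choose the representative whose first nonzero entry is positive
    for x in m:
        if x:
            return m if x > 0 else tuple(-y for y in m)
    raise ValueError("zero matrix")

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return pnorm((a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h))

# the defining relations of PSL_2(Z) = <S> * <R>
assert mul(S, S) == I and mul(R, mul(R, R)) == I

# the two vertices incident to the edge {W}
def v_otimes(w):
    return frozenset([w, mul(w, S)])

def v_bullet(w):
    return frozenset([w, mul(w, R), mul(w, mul(R, R))])

W = mul(S, R)  # a sample element of PSL_2(Z)
# the edge connecting the two vertices is exactly their intersection {W}
assert v_otimes(W) & v_bullet(W) == frozenset([W])
```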
The group $\mathrm{PSL}_2 (\Z)$ acts on $\mathcal F$ from the left, by ribbon graph automorphisms, where $M\in \mathrm{PSL}_2 (\Z)$ acts by
\begin{eqnarray*}
\{W\}\in E (\mathcal F) & \mapsto & \{MW\}\in E (\mathcal F)\\
\{W,WS\}\in V_{{{\otimes}}} (\mathcal F) & \mapsto & \{MW,MWS\}\in V_{{{\otimes}}} (\mathcal F)\\
\{W,WR,WR^2\} \in V_{\bull} (\mathcal F) & \mapsto & \{MW,MWR,MWR^2\}\in V_{\bull} (\mathcal F)
\end{eqnarray*}
Notice that the action on the set of edges is nothing but the left-regular action of $\mathrm{PSL}_2 (\Z)$ on itself and therefore is free.
On the other hand the action is not free on the set of vertices: The vertex $\{W,WS\}$ is fixed by the order-2 subgroup generated by $M=WSW^{-1}$,
and the vertex $\{W,WR,WR^2\}$ is fixed by the order-3 subgroup generated by $M=WRW^{-1}$.
Let $\Gamma$ be any subgroup of $\mathrm{PSL}_2 (\Z)$. Then $\Gamma$ acts on $\mathcal F$ from the left and to $\Gamma$ we associate a quotient graph $\Gamma\backslash\mathcal F$ as follows:
\medskip\noindent
\hspace{1.5cm} $E (\Gamma\backslash\mathcal F) = \{\Gamma \!\cdot\! \{W\} \colon W \in \mathrm{PSL}_2 (\Z) \}$
\noindent
\hspace{1.5cm} $V (\Gamma\backslash\mathcal F) = V_{{\otimes}} (\Gamma\backslash\mathcal F) \sqcup V_{\bull} (\Gamma\backslash\mathcal F)$;
\noindent where
\noindent
\hspace{1.5cm} $V_{{\otimes}} (\Gamma\backslash\mathcal F) = \{ \Gamma \!\cdot\!\{W , WS\} \colon W \in \mathrm{PSL}_2 (\Z) \}$, and
\noindent
\hspace{1.5cm} $V_{\bull} (\Gamma\backslash\mathcal F) = \{ \Gamma \!\cdot\!\{W, WR, WR^{2}\} \colon W \in \mathrm{PSL}_2 (\Z) \}$.
\medskip\noindent It is easy to see that the incidence relation induced from the Farey tree is well defined and gives us a graph, which we call a {\it modular graph}. Thus the edge connecting the vertices $v=\Gamma\!\cdot\! \{W,WS\}$ and $v'=\Gamma\!\cdot\!\{W',W'R,W'R^{2}\}$ is the intersection $v \cap v'$, which is of the form $\Gamma\!\cdot\! \{M\}$ if non-empty. There are no other edges. Observe that by construction the graph is bipartite. The edges incident to the vertex $\Gamma\!\cdot\! \{W,WR,WR^{2}\}$ are $ \Gamma\!\cdot\! \{W\}, \Gamma\!\cdot\! \{WR\}, \Gamma\!\cdot\! \{WR^{2}\} $, and these edges inherit a natural cyclic ordering from the vertex\footnote{The ribbon graph structure around vertices of degree 2 is trivial.}. In general $\Gamma\backslash\mathcal F$ is a bipartite ribbon graph, possibly with pending vertices that correspond to the conjugacy classes of elliptic elements that $\Gamma$ contains. Conversely, any connected bipartite ribbon graph $G$, with $V (G)=V_{{\otimes}} (G) \sqcup V_{\bull} (G)$, such that every ${\otimes}$-vertex is of degree 1 or 2 and every $\bull$-vertex is of degree 1 or 3, is modular since the universal covering of $G$ is isomorphic to $\mathcal F$. It takes a little effort to define the fundamental group of $\Gamma\backslash\mathcal F$, see \cite{nedela/graphs/and/their/coverings}, so that there is a canonical isomorphism $\pi_1 (\Gamma\backslash\mathcal F, \Gamma\!\cdot\! \{I\})\simeq \Gamma<\mathrm{PSL}_2 (\Z)$, with the canonical choice of $\Gamma\!\cdot\! \{I\}$ as a base edge. In general, subgroups $\Gamma$ of the modular group (or equivalently the fundamental groups $\pi_1 (\Gamma\backslash\mathcal F)$) are free products of copies of $\mathbf{Z}$, $\mathbf{Z}/2\mathbf{Z}$ and $\mathbf{Z}/3\mathbf{Z}$, see \cite{kulkarni/subgroups/of/the/modular/group}.
Note that two distinct isomorphic subgroups $\Gamma_1$, $\Gamma_2$ of the modular group may give rise to non-isomorphic ribbon graphs $\Gamma_1\backslash\mathcal F$ and $\Gamma_2\backslash\mathcal F$. We shall see shortly that \c{c}arks constitute good examples of this phenomenon. In other words, the fundamental group does not characterize the graph. Another basic invariant of $\Gamma\backslash\mathcal F$ is its genus, which is defined to be the genus of the surface constructed by gluing discs along left-turn paths. This genus is the same as the genus of the Riemann surface $\Gamma \backslash {\mathcal H}$.
\begin{figure}[t!]
\centering
\begin{subfigure}[]{6cm}
\centering
\begin{tikzpicture} [scale=1.1]
\draw [line width=0.75mm] (0,0)-- (0,2)-- (4,0)-- (0,0);
\fill[gray!40] (0,0)-- (0,2)-- (4,0)-- (0,0);
\draw (0,0) circle (0.8mm);
\fill[white] (0,0) circle (0.8mm);
\node at (0,0) {$\otimes$};
\draw (4,0) circle (0.8mm);
\fill[white] (4,0) circle (0.8mm);
\draw (0,2) circle (0.8mm);
\fill[black] (0,2) circle (0.8mm);
\draw (-0.05,1)-- (-0.25,1);
\node at (-0.25,0.99) {\scriptsize $<$};
\node at (0.25,1)[rotate=90] {\scriptsize{modular arc}};
\node at (-0.75,1)[rotate=0] {\scriptsize{modular}};
\node at (-0.75,0.75)[rotate=0] {\scriptsize{graph}};
\draw (2,-0.05)-- (2,-0.25);
\node at (2,-0.25)[rotate=90] {\scriptsize $<$};
\node at (2,-0.5)[rotate=0] {\scriptsize{triangulations}};
\draw [rotate around={0: (0,0)}] (2.05,1)-- (2.2,1.3);
\node at (2.2,1.3) [rotate=70]{\scriptsize $>$};
\node at (2.25,1.5)[rotate=-30] {\scriptsize{lozenges}};
\end{tikzpicture}
\caption{The fundamental region for the modular curve in the upper half plane model.}
\end{subfigure}
\begin{subfigure}[]{6cm}
\centering
\begin{tikzpicture} [scale=1.1,dash pattern=on 2pt off 1pt]
\fill [gray!40] (-0.5,0) rectangle (0.5,2);
{ \pgfsetfillopacity{1}
\fill[white] (0:0cm) circle (1cm);}
\draw [solid,line width=1] (-2,0)-- (2,0);
\draw [dash phase=1pt] (-0.5,0)-- (-0.5,2);
\draw [solid,line width=1] (0.5,cos{30})-- (0.5,2);
\draw [solid,line width=1] (-0.5,cos{30})-- (-0.5,2);
\draw [dash phase=1pt] (0.5,0)-- (0.5,2);
\draw [dash phase=1pt] (0,1)-- (0,2.5);
\node at (0,2.5)[rotate=-90] {\scriptsize $<$};
\draw [solid,line width=1.25] (1,0) arc (0:180:1);
\node at (-0.5,-0.2) {\scriptsize $-1/2$};
\node at (0.5,-0.2) {\scriptsize $1/2$};
\draw (0,1) circle (0.75mm);
\draw (cos{60},sin{60}) circle (0.75mm);
\fill[white] (cos{60},sin{60}) circle (0.75mm);
\draw (-cos{60},sin{60}) circle (0.75mm);
\fill[white] (-cos{60},sin{60}) circle (0.75mm);
\fill[white] (0,1) circle (0.75mm);
\node at (0,1) {\small $\otimes$};
\fill[black] (cos{60},sin{60}) circle (0.75mm);
\fill[black] (-cos{60},sin{60}) circle (0.75mm);
\end{tikzpicture}
\caption{The modular curve. Note that there are two triangles, the second is on the back of the page, glued to this one.}
\end{subfigure}
\caption{}
\label{fig:fundamental/domain}
\end{figure}
The set of edges of $\Gamma\backslash\mathcal F$ is identified with the set of right-cosets of $\Gamma$, so that the graph $\Gamma\backslash\mathcal F$ has $[\mathrm{PSL}_2 (\Z) : \Gamma]$ many edges. In case $\Gamma$ is a finite index subgroup, the graph $\Gamma\backslash\mathcal F$ is finite. In case $\Gamma = \mathrm{PSL}_2 (\Z)$, the quotient graph $\mathrm{PSL}_2 (\Z) \backslash \mathcal F$ is a graph with one edge that looks as follows:
\begin{figure}[h!]
$$ \stackrel{\mathrm{PSL}_2 (\Z)\!\cdot\!\{I,S\}}{{\otimes}}\!\!\!\!\!\!\!\!\!\!\!\!\!\hspace{-.1mm}\stackrel{\mathrm{PSL}_2 (\Z)\!\cdot\! \{I\}}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\stackrel{\mathrm{PSL}_2 (\Z)\!\cdot\!\{I,R,R^2\}}{{
\mbox{\raisebox{-.2mm}{\Large $\bullet$}} }}$$
\caption{ The modular arc.}
\label{fig:modular/arc}
\end{figure}
We call this graph the {\it modular arc}. It is a graph whose fundamental group is $\mathrm{PSL}_2 (\Z)$ and whose universal cover is the Farey tree $\mathcal F$. In other words modular graphs are coverings of the modular arc. If we consider the action of the modular group on the topological realization $\mathcal F_{top}$ of $\mathcal F$ mentioned in the introduction, the topological realization of $\mathrm{PSL}_2 (\Z)\backslash\mathcal F$ is the arc $\mathrm{PSL}_2 (\Z)\backslash\mathcal F_{top}$ in the modular curve connecting two elliptic points.
Every modular graph $\Gamma\backslash \mathcal F$ has a canonical ``analytical" realization $\Gamma\backslash \mathcal F_{top}$ on the Riemann surface $\Gamma \backslash {\mathcal H}$ with edges being geodesic segments. Equivalently, these edges are lifts of the modular arc by $\Gamma \backslash {\mathcal H} \longrightarrow \mathrm{PSL}_2 (\Z)\backslash {\mathcal H}$. If instead we lift the geodesic arc connecting the ${\otimes}$-elliptic point to the cusp to the surface $\Gamma \backslash {\mathcal H}$, then we obtain another graph on the surface, which is called an {\it ideal triangulation}. Lifting the remaining geodesic arc gives rise to yet another type of graph, called a {\it lozenge tiling}. So there is a triality, not just duality, of these graphs, see Figure~\ref{fig:triality} in which the bold figures represent the members of the triality.
\begin{figure}[h!]
\centering
\begin{subfigure}{3cm}
\includegraphics[scale=0.18]{triangulation.jpg}
\caption{A triangulation}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[scale=0.18]{ribbon.jpg}
\caption{its dual graph}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[scale=0.18]{lozenge.jpg}
\caption{and its lozenge}
\end{subfigure}
\caption{ Triality of graphs}
\label{fig:triality}
\end{figure}
In topology, there is a well-known correspondence between subgroups of the fundamental group of a space and the coverings of that space. The following two results are orbifold (or ``stacky") analogues of this correspondence for coverings of the modular curve, stated in terms of graphs. For more details on fundamental groups and covering theory of graphs see \cite{nedela/graphs/and/their/coverings}.
\begin{proposition}
If $\Gamma_1$ and $\Gamma_2$ are conjugate subgroups of $\mathrm{PSL}_2 (\Z)$, then the graphs $\Gamma_1\backslash\mathcal F$ and $\Gamma_2\backslash\mathcal F$ are isomorphic as ribbon graphs. Hence there is a 1-1 correspondence between modular graphs and conjugacy classes of subgroups of the modular group.
\label{conjugatesubgroups}
\end{proposition}
\begin{proof}
Let $\Gamma_2=M\Gamma_1M^{-1}$. The desired isomorphism is then the map
\begin{eqnarray*}
\varphi:E (\Gamma_1\backslash\mathcal F) & \rightarrow E (\Gamma_2\backslash\mathcal F) \\
\Gamma_1\!\cdot\! \{W\} & \,\,\, \mapsto \Gamma_2\!\cdot\! \{MW\}.
\end{eqnarray*}
\noindent Note that one has $\varphi (\Gamma_1\!\cdot\! \{I\})=\Gamma_2\!\cdot\! \{M\}$. Suppose now that $\varphi: E (\Gamma_1\backslash\mathcal F) \rightarrow E (\Gamma_2\backslash\mathcal F)$ is a ribbon graph isomorphism and let $\varphi (\Gamma_1\!\cdot\! \{I\})=\Gamma_2\!\cdot\! \{M\}$. This induces an isomorphism of fundamental groups
$$\varphi_*:\pi_1 (\Gamma_1\backslash\mathcal F, \Gamma_1\!\cdot\! \{I\}) \simeq \pi_1 (\Gamma_2\backslash\mathcal F, \Gamma_2\!\cdot\! \{M\})$$
\noindent Since $\varphi$ is a ribbon graph isomorphism, these two groups are also isomorphic as subgroups of the modular group. The former group is canonically isomorphic to $\Gamma_1$, whereas the latter group is canonically isomorphic to
$$M^{-1}\pi_1 (\Gamma_2\backslash\mathcal F, \Gamma_2\!\cdot\! \{I\})M\simeq M^{-1}\Gamma_2 M.$$
\end{proof}
Therefore modular graphs parametrize conjugacy classes of subgroups of the modular group, whereas the edges of a modular graph parametrize subgroups in the conjugacy class represented by the modular graph. In conclusion we get:
\begin{theorem}
There is a 1-1 correspondence between modular graphs with a base edge $(G,e)$ (modulo ribbon graph isomorphisms of pairs $ (G,e)$) and subgroups of the modular group (modulo the automorphisms induced by conjugation in $\mathrm{PSL}_2 (\Z)$).
\label{thm:modular/graph/vs/subgroups}
\end{theorem}
\begin{theorem}
{There is a 1-1 correspondence between modular graphs with two base edges $ (G,e,e')$ (modulo ribbon graph isomorphisms of pairs $ (G,e,e')$) and cosets of subgroups of the modular group (modulo the automorphisms induced by conjugation in $\mathrm{PSL}_2 (\Z)$).
\label{thm:modular/graph/vs/cosets}}
\end{theorem}
\section{\c{C}arks}
\label{sec:carks}
A {\it \c{c}ark} is a modular graph of the form $\mbox{\it\c C}\,_M:=\langle M \rangle \backslash \mathcal F$ where $M$ is a hyperbolic element of the modular group. One has
$$
\pi_1 (\langle M \rangle\backslash \mathcal F)=\langle M \rangle\simeq \mathbf{Z},
$$
so the \c{c}ark $\langle M \rangle\backslash \mathcal F$ is a graph with only one circuit, which we call the {\it spine} of the \c{c}ark. Every \c{c}ark has a canonical realization as a graph $\langle M \rangle \backslash \mathcal F_{top}$ embedded in the surface $\langle M \rangle \backslash \H$, which is an annulus since $M$ is hyperbolic. In fact $\langle M \rangle \backslash \H$ is the annular uniformization of the modular curve ${\mathcal M}$ corresponding to $M\in\pi_1 ({\mathcal M})$. Again by hyperbolicity of $M$, this graph will have infinite ``Farey branches" attached to the spine in the direction of both of the boundary components of the annulus\footnote{If $M$ is parabolic, then $\langle M \rangle\backslash \mathcal F$ has Farey branches attached to the spine in only one direction, and its topological realization $\langle M \rangle \backslash \mathcal F_{top}$ sits on a punctured disc. If $M$ is elliptic, $\langle M \rangle\backslash \mathcal F$ is a tree with a pending edge which abuts at a vertex of type ${\otimes}$ when $M$ is of order 2 and of type $\bullet$ when $M$ is of order 3. Its topological realization $\langle M \rangle \backslash \mathcal F_{top}$ sits on a disc with an orbifold point.}. By Proposition~\ref{conjugatesubgroups} the graphs $\mbox{\it\c C}\,_M$ and $\mbox{\it\c C}\,_{XMX^{-1}}$ are isomorphic for every element $X$ of the modular group and by Theorem~\ref{thm:modular/graph/vs/subgroups} we deduce the following result, see \cite{merve/tez}:
\begin{theorem}
There are one-to-one correspondences between
\begin{itemize}
\item[i.] \c{c}arks and conjugacy classes of subgroups of the modular group generated by a single hyperbolic element, and
\item[ii.] \c{c}arks with a base edge and subgroups of the modular group generated by a single hyperbolic element.
\end{itemize}
\label{cor:1/to/1/corr/between/undirected/carks}
\end{theorem}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{aannulus.jpg}
\caption{ The \c cark $\langle S R^2 S R \rangle\backslash\mathcal F$.}
\label{fig:quotient/by/hyperbolic}
\end{figure}
A \c{c}ark is said to be directed if we choose an orientation for the spine.
\begin{corollary}
There are one-to-one correspondences between
\begin{itemize}
\item[i.] hyperbolic elements of the modular group and directed \c carks with a base edge, and
\item[ii.] conjugacy classes of hyperbolic elements of the modular group and directed \c carks.
\end{itemize}
\label{cor:1/to/1/corr/between/directed/carks}
\end{corollary}
\vspace{-5mm}
\subsection{Counting \c{C}arks}
\label{sec:necklace/bracelet}
\c{C}arks are infinite graphs, and each edge of a \c{c}ark carries a name which is an infinite coset. In fact, all the combinatorial information of a \c cark can be encoded in a finite storage as follows: First remove all ${\otimes}$-vertices of the \c{c}ark. Next, turn once around the spine. Upon meeting a $\bullet$-vertex to which a branch is attached by $R$, cut that branch and tag that $\bullet$-vertex with a ``0". In a similar fashion, upon meeting a $\bullet$-vertex to which a branch is attached by $R^{2}$, cut that branch and tag that $\bullet$-vertex with a ``1". We obtain a finite graph called a {\it binary bracelet}, which is by definition an equivalence class of binary strings under cyclic permutations (i.e. rotations) and reversals. Conversely, by using the convention $0\leftrightarrow R$ and $1\leftrightarrow R^2$ we can reconstruct the \c{c}ark from its bracelet.
Rotations and reversals generate a finite dihedral group, and a binary bracelet may equivalently be described as an orbit of this action.
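As a sanity check of this description, one may enumerate bracelets by brute force as orbits of binary strings under the dihedral action. The following Python sketch is our own illustration (not part of the paper's PARI/GP computations) and reproduces the initial terms of sequence A000029 quoted below.

```python
# Brute-force bracelet count: orbits of binary strings of length n under
# rotations and reversals (the dihedral action described above).
def bracelet_count(n):
    seen = set()
    for k in range(2 ** n):
        w = format(k, "0%db" % n)
        rotations = [w[i:] + w[:i] for i in range(n)]
        canonical = min(rotations + [r[::-1] for r in rotations])
        seen.add(canonical)
    return len(seen)

# first terms of OEIS sequence A000029
assert [bracelet_count(n) for n in range(1, 9)] == [2, 3, 4, 6, 8, 13, 18, 30]
```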
\begin{figure}[h!]
\centering
\begin{subfigure}{5cm}
\includegraphics[scale=0.35]{anycark.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{5cm}
\includegraphics[scale=0.43]{bracelet.jpg}
\caption{}
\end{subfigure}
\caption{From \c{c}arks to bracelets}
\label{fig:cark/and/its/short/form}
\end{figure}
For $n = 1,2,...,15$ the number of binary bracelets with $n$ vertices is
\[
2, 3, 4, 6, 8, 13, 18, 30, 46, 78, 126, 224, 380, 687, 1224.
\]
This is sequence A000029 (M0563) in OEIS~\cite{oeis/integer/sequences}.
The number of binary bracelets (\c{c}arks) of length $n$ is
\begin{eqnarray*}
B (n) = {\frac{1}{2}}N (n) + {\frac{3}{4}}2^{n/2}
\end{eqnarray*}
if $n$ is even and
\begin{eqnarray*}
B(n) = {\frac{1}{2}}N (n) + {\frac{1}{2}}2^{ (n+1)/2}
\end{eqnarray*}
if $n$ is odd where $N (n)$ is the number of binary necklaces of length $n$.
An equivalence class of binary strings under rotations (thus excluding reversals) is called a {\it binary necklace}, or a {\it cyclic binary word}.
They are thus orbits of words under the action of a cyclic group and they correspond to directed \c{c}arks.
For $n=1,2,...,15$ the number of binary necklaces of length $n$ is
\[
N (n)=2, 3, 4, 6, 8, 14, 20, 36, 60, 108, 188, 352, 632, 1182, 2192,
\]
which is sequence A000031 (M0564) in OEIS.
The number of necklaces (directed \c carks) of length $n$ is given by MacMahon's formula from 1892 (also called Witt's formula), see \cite{bouallegue/on/primitive/words}, \cite{macmahon}:
\[
N (n)={1\over n}\sum_{d\mid n}\varphi (d)2^{n/d}={1\over n}\sum_{j=1}^n2^{\gcd (j,n)}
\]
where $\varphi$ is Euler's totient function.
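Both counting formulas are straightforward to implement. The following Python sketch (our own illustration, with hypothetical function names, independent of the PARI/GP routines) evaluates $N (n)$ and $B (n)$ and checks them against the sequences listed above; the second sum in MacMahon's formula avoids computing the totient explicitly.

```python
# N(n) and B(n) as in the text, for the binary alphabet.
from math import gcd

def necklaces(n):
    """N(n): number of binary necklaces of length n (MacMahon's formula)."""
    return sum(2 ** gcd(j, n) for j in range(1, n + 1)) // n

def bracelets(n):
    """B(n): number of binary bracelets of length n."""
    if n % 2 == 0:
        # B(n) = N(n)/2 + (3/4) 2^{n/2}, cleared of denominators
        return (2 * necklaces(n) + 3 * 2 ** (n // 2)) // 4
    # B(n) = N(n)/2 + (1/2) 2^{(n+1)/2}
    return (necklaces(n) + 2 ** ((n + 1) // 2)) // 2

# the sequences A000031 and A000029 quoted in the text
assert [necklaces(n) for n in range(1, 16)] == [
    2, 3, 4, 6, 8, 14, 20, 36, 60, 108, 188, 352, 632, 1182, 2192]
assert [bracelets(n) for n in range(1, 16)] == [
    2, 3, 4, 6, 8, 13, 18, 30, 46, 78, 126, 224, 380, 687, 1224]
```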
A \c{c}ark is called {\it primitive} if its spine is not periodic. Aperiodic binary necklaces correspond to primitive directed \c{c}arks. For $n=1,2,...,15$ the number of aperiodic necklaces of length $n$ is
$$
L (n)=2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, 630, 1161, 2182,
$$
which is sequence A001037 in the database. There is a formula for the number of aperiodic necklaces of length $n$ in terms of M\"obius' function $\mu$:
$$
L (n)={1\over n}\sum_{d\mid n}\mu (d)2^{n/d}={1\over n}\sum_{d\mid n}\mu (n/d)2^{d}
$$
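This formula, too, is easy to check numerically. The Python sketch below (our own illustration) computes $\mu$ by trial factorization and verifies $L (n)$ against the sequence above.

```python
# L(n) via the Mobius-function formula; mu computed by trial factorization.
def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n has a square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def aperiodic_necklaces(n):
    """L(n): number of binary aperiodic necklaces of length n."""
    return sum(mobius(d) * 2 ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

assert [aperiodic_necklaces(n) for n in range(1, 16)] == [
    2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, 630, 1161, 2182]
```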
As mentioned, binary necklaces (or cyclic binary words or directed \c{c}arks) may be viewed as orbits of words under the action of the cyclic group. Choosing an ordering of our letters $\{0,1\}$ (i.e. $0<1$) and imposing the lexicographic ordering on the words, one may choose a minimal representative in each orbit. The minimal representative of a primitive (aperiodic) word is called a {\it Lyndon word}. Lyndon words were first studied in connection with the construction of bases for free Lie algebras and they appear in numerous contexts. In our case they are
$$
0,1,01,001,011, 0001, 0011, 0111, 00001,00011,00101, 00111,01011,01111 \dots
$$
One can similarly find representatives for aperiodic binary bracelets (=primitive indefinite binary quadratic forms; see below). There are effective algorithms to list all primitive necklaces and bracelets up to a given length
(e.g. Duval's algorithm \cite{duval/classes/de/conjugation}, the algorithm due to Fredricksen, Kessler and Maiorana \cite{FKM}, Sawada's algorithm \cite{sawada}, etc). Translated into the language of binary quadratic forms, this means that it is possible to single out a unique reduced representative in each class of a primitive indefinite binary quadratic form and that it is possible to effectively enumerate all classes of primitive indefinite binary quadratic forms by specifying those reduced representatives.
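As an illustration, Duval's generation scheme admits a very short implementation. The following Python sketch (our own, specialized to the binary alphabet) generates all binary Lyndon words of length at most $n$ in lexicographic order and recovers the fourteen words of length at most $5$ listed above.

```python
# Duval-style generation of all binary Lyndon words of length <= n,
# produced in lexicographic order.
def lyndon_words(n):
    out, w = [], [-1]
    while w:
        w[-1] += 1
        out.append("".join(map(str, w)))
        m = len(w)
        # extend w periodically to length n ...
        while len(w) < n:
            w.append(w[len(w) - m])
        # ... then strip trailing maximal letters
        while w and w[-1] == 1:
            w.pop()
    return out

words = lyndon_words(5)
# the 14 words of length <= 5 listed in the text
assert sorted(words, key=lambda s: (len(s), s)) == [
    "0", "1", "01", "001", "011", "0001", "0011", "0111",
    "00001", "00011", "00101", "00111", "01011", "01111"]
```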
To sum up, we may represent primitive \c carks by primitive bracelets. In order to shorten this representation further,
we may count the number of consecutive 0's and 1's and represent \c carks as sequences of natural numbers
$(n_0, n_1, \dots, n_{2k})^{0,1}$, if we agree that\footnote{Note that a Lyndon word always starts with a $0$ and ends with a $1$.} this sequence encodes a bracelet that starts with a $0$ when the exponent is $0$ and with a $1$ when the exponent is $1$. This representation is directly connected to the ``minus" continued fractions (see Zagier \cite{zagier/zetafunktionen/quadratische/zahlkorper}).
A primitive word may have two types of symmetries: invariance under the swap of symbols $0 \leftrightarrow 1$ and invariance under reversal, i.e. palindromic symmetry. The first symmetry corresponds to ambiguous binary quadratic forms and the second symmetry corresponds to reciprocal binary quadratic forms, as we shall see. The swap of symbols $0 \leftrightarrow 1$ corresponds to inversion in the class group.
\subsection{\c Cark Invariants}
There are several natural invariants associated to a \c cark {\it \c C}. The combinatorial length $l_c (\mbox{\it\c C}\,)$ of its spine is an invariant. A hyperbolic invariant of a \c cark is the metric length $l_h (\mbox{\it\c C}\,)$ of the closed geodesic on the associated annular surface, with respect to its hyperbolic metric. A conformal invariant of a \c cark is the modulus $m (\mbox{\it\c C}\,)$ of the associated annulus. Finally, the discriminant $\Delta (\mbox{\it\c C}\,)$ of the associated form and the absolute value of the trace $\tau (\mbox{\it\c C}\,)$ of the associated matrix are two arithmetic invariants, related by $\Delta=\tau^2-4$. One has
$$
l_h (\mbox{\it\c C}\,)=2 \mbox{ arccosh } (\tau/2),
\quad
m (\mbox{\it\c C}\,)= \exp\left (
\frac{\pi^2}
{
\log |\frac{\tau\pm \sqrt{\Delta}}{2}|
}
\right)
$$
The modulus is found as follows: Any hyperbolic element $M\in \mathrm{PSL}_2 (\mathbf R)$ is conjugate to an element of the form
\[
N:=XMX^{-1}=\mat{\alpha & 0 }{0 & \frac{1}{\alpha}}\]
where $\alpha$ is the multiplier of $M$. Since the trace is invariant under conjugation, one has
$\tau:=\mathrm{tr} (M)=\alpha+1/\alpha \Rightarrow \alpha^2-\tau\alpha+1=0 \Rightarrow \alpha=\frac{\tau\pm \sqrt{\tau^2-4}}{2}$.
Now $N$ acts by M\"obius transformation $z\mapsto \alpha^2 z$, and the quotient map is
$f (z)=z^{2\pi i/\log \alpha^2}$ with the annulus $f ({\mathcal H})=\{z\, :\, e^{-2\pi^2/\log \alpha^2} < |z| <1\}$ as its image.
Hence the modulus of the ambient annulus of the \c cark is $e^{2\pi^2/\log \alpha^2}=e^{\pi^2/\log |\alpha|}$.
It is possible to write down the uniformization $U_M:\H\rightarrow \mbox{\it\c C}\,_M$ explicitly, though the resulting expression is quite involved.
The annular uniformization $\mbox{\it\c C}\,_M\rightarrow \mathrm{PSL}_2 (\Z)\backslash \H$ can be written as $j\circ U_M^{-1}$.
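As a numerical illustration (our own, not part of the PARI/GP computations), take the hyperbolic element $M=SR^2SR$ of Figure~\ref{fig:quotient/by/hyperbolic}; with the conventions $S (z)=-1/z$ and $R (z)= (z-1)/z$ one computes $\mathrm{tr} (SR^2SR)=3$. The Python sketch below checks that $l_h=2\log\alpha$ and that the two expressions for the modulus agree.

```python
# Numerical check of the invariant formulas for a cark with trace tau = 3.
import math

tau = 3                                        # trace of M = S R^2 S R
alpha = (tau + math.sqrt(tau ** 2 - 4)) / 2    # multiplier, (3 + sqrt 5)/2
l_h = 2 * math.acosh(tau / 2)                  # hyperbolic length of the spine geodesic
modulus = math.exp(math.pi ** 2 / math.log(alpha))

# arccosh(tau/2) = log(alpha), hence l_h = 2 log(alpha)
assert math.isclose(l_h, 2 * math.log(alpha))
# e^{2 pi^2 / log(alpha^2)} = e^{pi^2 / log|alpha|}
assert math.isclose(modulus, math.exp(2 * math.pi ** 2 / math.log(alpha ** 2)))
```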
\section{Binary Quadratic Forms and \c{C}arks}
\label{sec:bqfs/and/carks}
A \emph{binary quadratic form} is a homogeneous function of degree two in two variables $f (x,y)=Ax^2+Bxy+Cy^2$ (denoted $f= (A,B,C)$ or in the matrix form:
\begin{equation}
W_{f} = \mat{A & B / 2}{B / 2 & C}
\end{equation}
so that $f (x,y)= (x,y)W_{f} (x,y)^{t}$). If the coefficients $A,B,C$ are integers, the form is called \emph{integral}; its discriminant is $\Delta (f) = B^{2} - 4AC$. If $f$ is integral and $\gcd (A,B,C)=1$ then $f$ is called \emph{primitive}. Following Gauss, we will call a form $f= (A,B,C)$ \emph{ambiguous} if $B=kA$ for some $k \in \mathbf{Z}$. Finally, a form $f= (A,B,C)$ will be referred to as \emph{reciprocal} whenever $C=-A$, \cite{sarnak/reciprocal/geodesics}.
Note that $\Delta (f) = -4 \det (W_{f})$. Given a symmetric two by two matrix we write $f_{W}$ to denote the binary quadratic form associated to $W$. Recall that a form $f$ is called
\begin{itemize}
\item positive definite if and only if $\Delta (f)< 0$ and $A>0$,
\item negative definite if and only if $\Delta (f)<0$ and $A<0$,
\item indefinite if and only if $\Delta (f)>0$.
\end{itemize}
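In code, this classification reads as follows (an illustrative Python sketch with our own names; forms are coefficient triples $(A,B,C)$):

```python
# Discriminant and definiteness of a binary quadratic form (A, B, C).
def discriminant(f):
    A, B, C = f
    return B * B - 4 * A * C

def classify(f):
    d = discriminant(f)
    if d > 0:
        return "indefinite"
    if d < 0:
        return "positive definite" if f[0] > 0 else "negative definite"
    return "degenerate"   # d = 0, excluded from the trichotomy above

assert discriminant((1, 7, -1)) == 53 and classify((1, 7, -1)) == "indefinite"
assert classify((1, 0, 1)) == "positive definite"
```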
The group $\mathrm{PSL}_2 (\Z)$ acts on the set of all integral binary quadratic forms by
\begin{eqnarray*}
\mathit{Forms} \times \mathrm{PSL}_2 (\Z) \to & \mathit{Forms} & \\
(f,U) \mapsto & {U \!\cdot\! f} := f (U (x,y)^{t}) \\
& = (x,y)U^{t}W_{f}U (x,y)^{t}
\end{eqnarray*}
We call two binary quadratic forms \emph{equivalent} if they belong to the same $\mathrm{PSL}_2 (\Z)$-orbit under the above action; note that the discriminant is invariant under this action. Let us denote the $\mathrm{PSL}_2 (\Z)$-orbit (or the equivalence class) of $f$ by $[f]$. The stabilizer of $f$ is called its {\it automorphism group}, denoted by $\aut{f}$, and elements of $\aut{f}$ are called automorphisms of $f$. For a positive definite binary quadratic form $f$, the group $\aut{f}$ is trivial unless $\Delta (f) = -3 $ or $-4$; $\aut{f} \simeq \mathbf{Z} / 4\mathbf{Z}$ if $\Delta (f) = -4$ and $\aut{f} \simeq \mathbf{Z} / 6\mathbf{Z}$ in case $\Delta (f) = -3$, \cite[p.29]{bqf/vollmer}. On the other hand, for an indefinite binary quadratic form one has $\aut{f} \simeq \mathbf{Z}$.
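Writing the action out on coefficients, $U\!\cdot\! f$ has coefficients $(f (p,r),\, 2Apq+B (ps+qr)+2Crs,\, f (q,s))$ for $U=\mat{p&q}{r&s}$. The Python sketch below (our own illustration, with hypothetical names) checks this together with the invariance of the discriminant.

```python
# The action f -> U.f = f(U (x,y)^t), written out on coefficient triples.
def evaluate(f, x, y):
    A, B, C = f
    return A * x * x + B * x * y + C * y * y

def act(U, f):
    A, B, C = f
    (p, q), (r, s) = U
    return (evaluate(f, p, r),
            2 * A * p * q + B * (p * s + q * r) + 2 * C * r * s,
            evaluate(f, q, s))

f = (1, 7, -1)                 # discriminant 53
T = ((1, 1), (0, 1))           # the translation z -> z + 1
g = act(T, f)
assert g == (1, 9, 7)
# the discriminant is invariant under the action
assert g[1] ** 2 - 4 * g[0] * g[2] == 7 ** 2 - 4 * 1 * (-1) == 53
```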
Given an indefinite binary quadratic form $f= (A,B,C)$ a generator of its automorphism group will be called its \emph{fundamental automorphism}. Note that there are two fundamental automorphisms, one being $M_{f}$, the other being its inverse, $M_{f}^{-1}$. Every integral solution $ (\alpha, \beta)$ of Pell's equation:
\begin{equation}
X^{2} - \Delta(f) Y^{2} = + 4
\label{eqn:Pell}
\end{equation}
corresponds to an automorphism of $f$ given by the matrix:
\begin{eqnarray*}
\mat{\frac{\alpha - B \beta }{2} & -C \beta}{A \beta & \frac{\alpha + \beta B}{2}}.
\end{eqnarray*}
It turns out that the fundamental automorphism is the one having minimal $\beta$ \cite[Proposition 6.12.7]{bqf/vollmer}.
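For instance (an illustrative Python check of our own), the solution $(51,7)$ of $X^2-53Y^2=4$ for $f= (1,7,-1)$ yields the automorphism $\mat{1&7}{7&50}$ via the matrix displayed above:

```python
# Automorphism of f = (A, B, C) attached to a Pell solution (alpha, beta).
def automorph(f, sol):
    A, B, C = f
    a, b = sol
    return (((a - B * b) // 2, -C * b),
            (A * b, (a + B * b) // 2))

f = (1, 7, -1)                   # discriminant 53
U = automorph(f, (51, 7))        # since 51^2 - 53 * 7^2 = 4
assert U == ((1, 7), (7, 50))
(p, q), (r, s) = U
# U has determinant 1 ...
assert p * s - q * r == 1
# ... and stabilizes f under the action f -> f(U (x,y)^t)
A, B, C = f
assert (A * p * p + B * p * r + C * r * r,
        2 * A * p * q + B * (p * s + q * r) + 2 * C * r * s,
        A * q * q + B * q * s + C * s * s) == f
```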
Conversely, to any given hyperbolic element, say $M = \mat{p&q}{r&s}\in \mathrm{PSL}_2 (\Z)$, let us associate the following binary quadratic form:
\begin{equation}
f_{M} = \frac{\mathrm{sgn} (p+s) }{\textrm{gcd} (q,s-p,r)}\bigl(r, s-p,-q\bigr)
\label{eq:from/matrices/to/bqfs}
\end{equation}
Observe first that $M\to f_{M}$ is well-defined and that its image is always primitive and indefinite. At this point let us state a direct consequence of Theorem~\ref{thm:modular/graph/vs/subgroups}:
\begin{corollary}
The maps $\langle M \rangle\backslash \mathcal F \longleftrightarrow M\longrightarrow f_{M}$ define a surjection from the set of oriented \c{c}arks with a base edge onto primitive indefinite binary quadratic forms.
\label{cor:oriented/carks/with/a/base/edge/vs/bqf}
\end{corollary}
\begin{proof}
We saw that an oriented \c{c}ark with a base edge determines a hyperbolic element of $\mathrm{PSL}_2 (\Z)$, and this element in turn determines an indefinite binary quadratic form via $M\to f_{M}$. Conversely, given a primitive indefinite binary quadratic form $f = (A,B,C)$, in order to find $\beta \in \mathbf{Z}$ such that the matrix
\begin{eqnarray*}
\mat{\beta & A}{-C & B + \beta} \in \mathrm{PSL}_2 (\Z)
\end{eqnarray*}
we look at solutions $(x,y)$ of Pell's equation $X^{2} - \Delta (f)Y^{2} = 4$. Using any such $y$ we construct the hyperbolic element:
\begin{eqnarray*}
M_{f} = \mat{\beta & y C}{y A & yB + \beta},
\end{eqnarray*}
where $\beta = \frac{-y B \pm x}{2}$. Both choices of the sign produce a matrix which maps onto $f$. In fact, the two matrices are inverses of each other in $\mathrm{PSL}_2 (\Z)$.
\end{proof}
\begin{example}
Consider the form $ (1,7,-1)$. It has discriminant $53$. The pair $ (51,7)$ is a solution to the Pell equation $X^{2} - 53Y^{2} = 4$. The two $\beta$ values corresponding to this solution are $-50$ and $1$. Plugging these two values into the matrix above we get:
\begin{eqnarray*}
M_{o} = \mat{1 & 7}{7 & 50} \mbox{ and } M_{o}^{-1}= \mat{-50 & 7}{7 & -1}.
\end{eqnarray*}
The pair $ (2599,357)$ is also a solution to the above Pell equation, and the corresponding matrices are:
\begin{eqnarray*}
\mat{50 & 357}{357 & 2549} \mbox{ and } \mat{-2549 & 357}{357 & -50}.
\end{eqnarray*}
We would like to remark also that
\begin{eqnarray*}
M_{o}^{2} = \mat{50 & 357}{357 & 2549}.
\end{eqnarray*}
In fact, $M_{o}$ is one of the two fundamental automorphisms of $f$.
\end{example}
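The arithmetic in this example is easily machine-checked. The following snippet (plain Python written for this exposition, not part of any cited software) verifies the two solutions of Pell's equation (\ref{eqn:Pell}), the two values of $\beta$, and the relation for $M_{o}^{2}$:

```python
def mat_mul(M, N):
    """2x2 integer matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_pell_solution(x, y, d):
    """Check Pell's equation: x^2 - d*y^2 = 4."""
    return x * x - d * y * y == 4

# Form f = (1, 7, -1): discriminant D = B^2 - 4AC = 49 + 4 = 53.
D = 53
assert is_pell_solution(51, 7, D)
assert is_pell_solution(2599, 357, D)

# beta = (-yB ± x)/2 for (x, y) = (51, 7) gives 1 and -50.
assert sorted((-7 * 7 + s * 51) // 2 for s in (+1, -1)) == [-50, 1]

M_o = [[1, 7], [7, 50]]
M_o_inv = [[-50, 7], [7, -1]]

# Inverses of each other in PSL_2(Z): the product is minus the identity.
assert mat_mul(M_o, M_o_inv) == [[-1, 0], [0, -1]]

# M_o squared reproduces the matrix attached to the solution (2599, 357).
assert mat_mul(M_o, M_o) == [[50, 357], [357, 2549]]
```

All assertions pass, confirming the computations quoted in the example.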
Note that the map $W \mapsto f_{W}$ is infinite to one, because any indefinite binary quadratic form has an infinite automorphism group. Any matrix in the automorphism group of $f$ maps onto $f$.
Let $\mathcal{D}:=\{ d\in \mathbf{Z}_{>0} \colon d \equiv 0,1 \,\, (\mbox{mod }4), \, d \mbox{ is not a square}\}$. Recall the following:
\begin{proposition}[{\cite{sarnak/reciprocal/geodesics}}]
There is a bijection between the set of conjugacy classes of primitive hyperbolic elements in $\mathrm{PSL}_2 (\Z)$ and the set of classes of primitive binary quadratic forms of discriminant $\Delta \in \mathcal{D}$; where a hyperbolic element is called primitive if it is not a power of another hyperbolic element.
\end{proposition}
\subsection{Reduction Theory of Binary Quadratic Forms}
\label{sec:reducetion/theory}
We say that an indefinite binary quadratic form $f= (A,B,C)$ is {\it reduced} if the geodesic in $\mathcal{H}$ connecting the two real fixed points of $W_{f}$, called the {\it axis} of $W_{f}$ and denoted by $\mathfrak{a}_{W_{f}}$, intersects with the standard fundamental domain of the modular group. Remark that this definition is equivalent to the one given by Gauss in \cite{disquisitiones}\footnote{Recall that Gauss defined a form to be reduced if $|\sqrt{\Delta} - 2 |A|| < B<\sqrt{\Delta}$.}. The equivalence of the two definitions is folklore.
The $\mathrm{PSL}_2 (\Z)$ class of an indefinite binary quadratic form contains more than one reduced form, in contrast to definite binary quadratic forms, where the reduced representative is unique; see \cite[Section 6.8]{bqf/vollmer} or \cite[Section 5.6]{computational/nt/cohen} for further discussion. The classical reduction is the process of acting on a non-reduced form $f= (A,B,C)$ by the matrix
\begin{eqnarray*}
\rho (f) = \mat{0 & 1}{1 & t (f)} = S (RS)^{t (f)};
\end{eqnarray*}
where $$ t (f) = \left\{
\begin{array}{lcl}
\mathop{\mathrm{sgn}} (C) \left\lfloor \frac{B}{2|C|}\right\rfloor & \hbox{if} & |C| \geq \sqrt{\Delta}\\
\mathop{\mathrm{sgn}} (C) \left\lfloor \frac{\sqrt{\Delta} + B}{2|C|}\right\rfloor & \hbox{if} & |C| < \sqrt{\Delta}
\end{array}\right.$$
\noindent and checking whether the resulting form is reduced or not. It is known that after finitely many steps one arrives at a reduced form, call it $f_{o}$. Applying $\rho(f_{o})$ to $f_{o}$ produces again a reduced form, and after finitely many iterations one gets back $f_{o}$. This set of reduced indefinite binary quadratic forms is called the {\it cycle} of the class.
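The reduction loop can be run mechanically. The sketch below is our own illustrative code: the substitution $(x,y)\mapsto(-y,\,x+t\,y)$ is one common sign convention for the action induced by the reduction matrix, and may differ from the matrix convention above by a sign, but it preserves the discriminant and terminates at a Gauss-reduced form.

```python
import math

def disc(f):
    A, B, C = f
    return B * B - 4 * A * C

def is_reduced(f):
    # Gauss' criterion from the footnote: |sqrt(D) - 2|A|| < B < sqrt(D).
    A, B, C = f
    r = math.sqrt(disc(f))
    return abs(r - 2 * abs(A)) < B < r

def t_of(f):
    # The integer t(f) entering the reduction matrix rho(f).
    A, B, C = f
    d = disc(f)
    s = math.isqrt(d)                  # floor of sqrt(D); D is not a square
    sgn = 1 if C > 0 else -1
    if C * C >= d:                     # case |C| >= sqrt(D)
        return sgn * (B // (2 * abs(C)))
    return sgn * ((s + B) // (2 * abs(C)))

def rho_step(f):
    # Induced action on (A, B, C) via the substitution (x, y) -> (-y, x + t*y).
    A, B, C = f
    t = t_of(f)
    return (C, -B + 2 * C * t, A - B * t + C * t * t)

def reduce_form(f):
    while not is_reduced(f):
        f = rho_step(f)
    return f

f0 = (3, 11, 5)                        # D = 61; not reduced since 11 > sqrt(61)
g = reduce_form(f0)
assert disc(g) == disc(f0) == 61       # the discriminant is invariant
assert is_reduced(g) and g == (-3, 7, 1)
```

Here $(3,11,5)$ reduces in two steps, through $(5,-1,-3)$, to the reduced form $(-3,7,1)$ of the same discriminant $61$.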
Our aim is now to describe the reduction method due to Gauss in terms of \c{c}arks. Recall that every edge of a \c{c}ark may be labeled with a unique coset of the corresponding subgroup. That is to say, binary quadratic forms may be used to label the edges of the \c{c}ark by Corollary~\ref{cor:1/to/1/corr/between/directed/carks}.
Given a hyperbolic element $W$ as a word in $R$, $R^{2}$ and $S$ we define the length of $W$, $\ell (W)$, to be the total number of appearances of $R$, $R^{2}$ and $S$. For instance for $W = RS R^{2}S (RS)^{2}$, $\ell (W) = 8$.
\begin{lemma}
Given an indefinite binary quadratic form (reduced or non-reduced), $f$, let $W_{f}$ be a primitive hyperbolic element corresponding to $f$. Then
\begin{eqnarray}
\ell (W_{\rho (f)\!\cdot\! f}) \leq \ell (W_{f}).
\end{eqnarray}
\label{lemma:length/of/reduced/form/is/minimal}
\end{lemma}
Let us assume from now on that our \c{c}arks are embedded into an annulus, with an orientation which we will assume to be the usual one.\footnote{Although theoretically unnecessary, the choice of an orientation will simplify certain issues. For instance, we shall see that inversion in the class group is reflection with respect to the spine.} In addition, we introduce the following shorter notation for our \c{c}arks: in traversing the spine (in either direction), if there are $n$ consecutive Farey branches in the direction of the same boundary component, then we denote these by a single Farey component and write $n$ on top of the corresponding branch, see Figure~\ref{fig:cark/and/its/short/form}. We will call such \c{c}arks \emph{weighted}.
\begin{definition}
Let $\mbox{\it\c C}\,$ be a weighted \c{c}ark. Edges of the spine are called \emph{semi-reduced}. In particular, an edge on the spine of $\mbox{\it\c C}\,$ is called \emph{reduced} if and only if it is on either side of a Farey component which is in the direction of the inner boundary component.
\label{defn:reduced/edge}
\end{definition}
\begin{figure}[h!]
\begin{subfigure}{5cm}
\centering
\includegraphics[scale=0.35]{cark.jpg}
\end{subfigure}
\begin{subfigure}{5cm}
\centering
\includegraphics[scale=0.39]{bcark.jpg}
\end{subfigure}
\caption{A \c{c}ark and its short form. }
\end{figure}
\noindent Remark that as we have fixed our orientation to be the usual one, there is no ambiguity in this definition. In addition, note that semi-reduced edges are in one to one correspondence with the forms $f = (A,B,C)$ in a given class for which $AC<0$. We are now ready to describe the reduction theory of binary quadratic forms in terms of \c{c}arks. We have seen that multiplication by the matrix $\rho (f)$ is, in general, the process of moving the base edge of the \c{c}ark to the spine, as a result of Lemma~\ref{lemma:length/of/reduced/form/is/minimal}. However, this is not enough. That is, not every edge on the spine corresponds to a reduced form. Reduced forms correspond to edges where the Farey branches switch from one boundary component to the other. More precisely, we have:
\begin{theorem}
Reduced forms in an arbitrary indefinite binary quadratic form class $[f]$ are in one to one correspondence with the reduced edges of the \c{c}ark corresponding to the given class.
\label{thm:reduced/edges/reduced/bqfs}
\end{theorem}
\noindent As we have remarked, the action of $\mathrm{PSL}_2 (\Z)$ on binary quadratic forms is equivalent to the change of base edge on the set of \c{c}arks. Hence the above theorem is an immediate consequence of the following:
\begin{lemma}
Let $\c{C}_{f}$ denote the \c{c}ark associated to an arbitrary indefinite binary quadratic form $f$. The reduction operator $\rho (f)$ is transitive on the set of reduced edges of $\c{C}_{f}$.
\end{lemma}
Let us give some examples:
\begin{example}
Let us consider the form $f = (7,33,-15)$. It is easy to check that $f$ is reduced. We have $W_{f} = (R^{2}S)^2 \, (RS)^2 \, R^2 S \, RS \, (R^{2}S)^{7} \, (RS)^{5} = \mat{-38 & -195}{ -91 & -467}$. The trace of the class is $-505$. By Gauss' theory the class $[f]$ is an element of the class group of discriminant $1509$.
\end{example}
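This example can be checked numerically with the map $M \to f_{M}$ of Eq.~(\ref{eq:from/matrices/to/bqfs}). The snippet below (illustrative code written for this exposition) recovers $(7,33,-15)$ from the matrix of the word:

```python
from math import gcd

def form_of_matrix(M):
    """The map M -> f_M: sgn(p+s)/gcd(q, s-p, r) * (r, s-p, -q)."""
    (p, q), (r, s) = M
    g = gcd(gcd(abs(q), abs(s - p)), abs(r))
    sgn = 1 if p + s > 0 else -1
    return (sgn * r // g, sgn * (s - p) // g, sgn * (-q) // g)

W = [[-38, -195], [-91, -467]]
assert W[0][0] * W[1][1] - W[0][1] * W[1][0] == 1     # W lies in SL_2(Z)
assert W[0][0] + W[1][1] == -505                      # trace of the class

f = form_of_matrix(W)
assert f == (7, 33, -15)

A, B, C = f
assert B * B - 4 * A * C == 1509                      # discriminant of f
# Consistency check: tr(W)^2 - 4 = 1509 * 13^2, with 13 the gcd above.
assert 505 ** 2 - 4 == 1509 * 13 ** 2
```

The content $\gcd(q,s-p,r)=13$ explains the relation $\mathrm{tr}(W)^{2}-4 = 13^{2}\,\Delta(f)$ observed here.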
\begin{figure}
\centering
\includegraphics[scale=0.23]{11.jpg}
\caption{ \c{C}ark corresponding to the class represented by the form $ (7,33,-15)$. Bold edges are reduced.}
\label{fig:cark/general/example}
\end{figure}
\begin{example}
Let $\Delta = n^{2}+4n$ for some positive integer $n$. Then the identity in the class group is given by the \c{c}ark in Figure~\ref{fig:identity/n2} and the corresponding form is $ (-n,n,1)$. If $\Delta = n^{2}+4$, then the identity is represented by the form $\frac{1}{n} (-n,n^{2},n) = (1,n,-1)$. The corresponding \c{c}ark has two Farey branches, see Figure~\ref{fig:identity/5n2}.
\label{ex:identities/having/two/Farey/components}
\end{example}
\begin{figure}[h!]
\begin{subfigure}[]{4.5cm}
\centering
\includegraphics[scale=0.23]{iddentity-n2+4n.jpg}
\caption{\scriptsize (a) Identity for $\Delta = n^{2} + 4n$.}
\label{fig:identity/n2}
\end{subfigure}
\qquad
\begin{subfigure}[]{4cm}
\centering
\includegraphics[scale=0.23]{iddentity-5n2.jpg}
\caption{\scriptsize (b) Identity for $\Delta = n^{2}+4$.}
\label{fig:identity/5n2}
\end{subfigure}
\caption{ \ }
\label{fig:identities}
\end{figure}
\noindent However, one has to admit that there are very complicated \c{c}arks representing the identity of the class group. For instance, the \c{c}ark corresponding to the form $ (-7,23,16)$ has 42 Farey branches.
\subsection{Ambiguous and Reciprocal forms}
Let us now discuss certain symmetries of a \c{c}ark. For a given \c{c}ark $\mbox{\it\c C}\,$ let $\mbox{\it\c C}\,^{r}$ be the \c{c}ark which is the mirror image of $\mbox{\it\c C}\,$ about any line passing through the ``center'' of the spine (assuming that the Farey components coming out of the spine, in the shorter notation that we have introduced, are evenly spaced). It is easy to see that the ideal classes represented by the two \c{c}arks $\mbox{\it\c C}\,$ and $\mbox{\it\c C}\,^{r}$ have the same discriminant. A straightforward computation leads to the following:
\begin{proposition}
Given a \c{c}ark $\mbox{\it\c C}\,$, the binary quadratic form class represented by $\mbox{\it\c C}\,^{r}$ is the inverse of the class represented by $\c{C}$.
\label{propn:inverse/carks}
\end{proposition}
\begin{example}
Let us consider the form $f = (-2377, 10173, 1349)$ having discriminant $116316221$. The form $g = (-4027, 8915, 2287)$ has the same discriminant. The corresponding \c{c}arks are shown in Figure~\ref{fig:inverse/cark}; the forms are inverses of each other.
\label{ex:inverse}
\end{example}
\begin{figure}[h!]
\centering
\begin{subfigure}{4cm}
\centering
\includegraphics[scale=0.23]{inverse-1.jpg}
\caption{\scriptsize \c{C}ark corresponding to $f = (-2377, 10173, 1349)$.}
\end{subfigure}
\begin{subfigure}{4cm}
\centering
\includegraphics[scale=0.23]{inverse-2.jpg}
\caption{\scriptsize \c{C}ark corresponding to $f^{-1} = (-4027, 8915, 2287)$.}
\end{subfigure}
\begin{subfigure}{4cm}
\centering
\includegraphics[scale=0.23]{inverse-product.jpg}
\caption{\scriptsize \c{C}ark of the product of $f \times f^{-1}$.}
\end{subfigure}
\caption{ Two \c{c}arks inverses of one another and their product.}
\label{fig:inverse/cark}
\end{figure}
Recall that Gauss defined a binary quadratic form to be ambiguous if it is equivalent to its inverse, or equivalently if the corresponding equivalence class contains $ (a,ka,c)$ for some $a$, $c$ and $k$. Following Gauss, we call a \c{c}ark $\mbox{\it\c C}\,$ \emph{ambiguous} if $\mbox{\it\c C}\,$ and $\mbox{\it\c C}\,^{r}$ are isomorphic as \c{c}arks, or equivalently correspond to the same subgroup of $\mathrm{PSL}_2 (\Z)$. From Proposition~\ref{propn:inverse/carks} we thus deduce:
\begin{corollary}
Ambiguous \c{c}arks correspond to ambiguous forms.
\label{cor:ambiguous/carks}
\end{corollary}
\noindent All the examples considered in Example~\ref{ex:identities/having/two/Farey/components} represent ambiguous classes, as they are of the form $ (a, ka, c)$. Let us give one more example:
\begin{example}
Consider the form $f = (3,18,-11)$. The form is reduced and ambiguous, as one immediately checks. The corresponding \c{c}ark is given in Figure~\ref{fig:ambiguous}.
\label{ex:ambiguous/carks}
\end{example}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{ambiguous.jpg}
\caption{\c{C}ark corresponding to the ambiguous form $f = (3,18,-11)$.}
\label{fig:ambiguous}
\end{figure}
Let us now discuss ``rotational'' symmetries. In Section~\ref{sec:necklace/bracelet} we defined a directed \c{c}ark with a base edge to be \emph{primitive} if and only if its spine is not periodic. Let $\mathfrak{c}_{prim}$ denote the set of primitive \c{c}arks. It is easy to see that primitive hyperbolic elements\footnote{Recall that an element $M\in \mathrm{PSL}_2 (\Z)$ is said to be {\it primitive} if it is not a positive power of another element of the modular group.} in $\mathrm{PSL}_2 (\Z)$ correspond to primitive \c{c}arks, or equivalently to prime geodesics in $\mathbb{H}$.
\begin{corollary}
There is a one to one correspondence between the following two sets:
\begin{center}
\begin{tabular}{c c c}
$\mathfrak{c}_{prim}$ & $\longleftrightarrow$ &$ \bigg \{$\begin{tabular}{c} $\mathrm{PSL}_2 (\Z)$ classes of primitive \\ indefinite binary quadratic forms \\ having discriminant $\Delta \in \mathcal{D}$ \end{tabular} $\bigg\}$
\end{tabular}
\end{center}
\end{corollary}
Finally, let $\mbox{\it\c C}\,^{m}$ denote the mirror of a given \c{c}ark, that is, the \c{c}ark obtained by reflecting $\mbox{\it\c C}\,$ with respect to the spine. Once again, both $\mbox{\it\c C}\,$ and $\mbox{\it\c C}\,^{m}$ have the same discriminant. In fact, if an indefinite binary quadratic form $f = (A,B,C)$ is represented by the \c{c}ark $\mbox{\it\c C}\,$, then the \c{c}ark $\mbox{\it\c C}\,^{m}$ represents the form $f' = (-A,B,-C)$, and the same holds for every element in $[f]$. We conclude that the two \c{c}arks represent ideal classes that have the same order in the class group.
Let $W$ be a hyperbolic element in $\mathrm{PSL}_2 (\Z)$. In \cite{sarnak/reciprocal/geodesics}, Sarnak defined $W$ to be reciprocal if $W$ is conjugate to its inverse. The conjugation turns out to be done by a unique element (up to multiplication by an element in $\langle W \rangle$) of order $2$, and thus reciprocal elements correspond to dihedral subgroups of the modular group\footnote{Remember that primitive \c{c}arks correspond to maximal $\mathbf{Z}$-subgroups of $\mathrm{PSL}_2 (\Z)$.}. A form $f = (A,B,C)$ is called reciprocal if $C = -A$. It is known that reciprocal hyperbolic elements correspond to reciprocal indefinite binary quadratic forms, \cite{sarnak/reciprocal/geodesics}. In a similar fashion we call a \c{c}ark \emph{reciprocal} if $\mbox{\it\c C}\,$ and $ (\mbox{\it\c C}\,^{m})^{r}$ are isomorphic as \c{c}arks. In fact, since the two operators $\cdot^{m}$ and $\cdot^{r}$ commute, if $\mbox{\it\c C}\,$ is a reciprocal \c{c}ark then so is $\mbox{\it\c C}\,^{m}$.
\begin{proposition}
Reciprocal forms correspond to reciprocal \c{c}arks.
\label{propn:reciprocal/carks}
\end{proposition}
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{rreciprocal.jpg}
\caption{The graph $\mathcal{F}/\langle S,R^2SR \rangle $}
\label{fig:quotient/by/hyperbolic/S-R2SR}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{rreeciprocal.jpg}
\caption{ The graph $\mathcal{F}/\langle RSR^2,S (RSR^2)S \rangle $}
\label{fig:quotient/by/hyperbolic/RSR2}
\end{figure}
\begin{example}
Consider the form $f = (-8,11,8)$. The corresponding hyperbolic element in $\mathrm{PSL}_2 (\Z)$ is $\mat{101 & -192}{-192 & 365}$. The corresponding \c{c}ark is shown in Figure~\ref{fig:reciprocal}, where it is easy to see that $\mbox{\it\c C}\,$ and $ (\mbox{\it\c C}\,^{m})^{r}$ are the same.
\label{ex:reciprocal/cark}
\end{example}
\begin{example}[Reciprocal Identities]
The forms $f = (1,n^{2},-1)$, which already appeared in Example~\ref{ex:identities/having/two/Farey/components}, are reciprocal and represent the identity in the class group. Note also that such forms come from the word $ (R^{2}S)^{n} (RS)^{n}$. The \c{c}arks of these reciprocal identities are shown in Figure~\ref{fig:identity/5n2}.
\label{ex:reciprocal/identities}
\end{example}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{reciprocal.jpg}
\caption{\c{C}ark corresponding to the reciprocal form $f = (-8,11,8)$.}
\label{fig:reciprocal}
\end{figure}
\subsection{Miscellany}
Binary quadratic forms are a central and classical topic and have connections to diverse fields. Here we touch upon some of these.
\subsubsection{Computational Problems}
There are several important computational problems related to \c carks, in connection with the class number problem in the indefinite case. The most basic invariant of a \c cark is the length of its spine. The (absolute) trace of the associated matrix is another, much subtler invariant. The problem of listing \c{c}arks of the same trace is equivalent to the problem of computing class numbers. Also, the Gauss product on classes of forms defines an abelian group structure on the set of \c{c}arks of the same trace, namely the {\it class group}. It is work in progress to reach a new understanding of class groups in terms of the graphical representation of their elements by \c carks.
\subsubsection{Closed geodesics on the modular surface.}
Let us note in passing that primitive \c{c}arks parametrize closed geodesics on the modular curve, and so \c{c}arks are closely connected to symbolic dynamics on the modular curve, see \cite{katok/ugarcovici}, to the encoding of geodesics, and to Selberg's trace formula, see \cite{zagier/new/points/of/view/on/the/selberg/zeta}.
\subsubsection{The Markoff number of an indefinite binary quadratic form.}
There is an arithmetic invariant of indefinite binary quadratic forms called the Markoff value $\mu (F)$, which is defined as
$$
\mu (F):= \frac{\sqrt{\Delta (F)}}{m(F)} , \mbox{ where }
m(F):={\min_{ (x,y)\in \mathbf{Z}^2 \setminus \{ (0,0)\}} |F (x,y)|}.
$$
Alternatively one can run over the class of $F$ and compute the minima of equivalent forms at a fixed point $p_0$, for example $ (x,y)= (0,1)$.
Hence the choice of this fixed point $p_0$ defines a function on the set of edges of the associated \c{c}ark, and the Markoff value of the form is the maximal value attained by this function defined on the \c{c}ark. There are also \c{c}arks associated to Markoff irrationalities, which we call {\it Markoff \c{c}arks}. A solution to the representation problem of indefinite binary quadratic forms is given in \cite{reduction}, and as a by-product the Markoff value of a given form can be computed. The algorithms will be available within the software developed by the first two authors and their collaborators, \cite{sunburst}.
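As a toy illustration of $m(F)$ for the form $(1,0,-2)$ appearing in the figure below, one may search a finite box of lattice points. The following brute force is our own sketch and only a heuristic, since the true minimum ranges over all of $\mathbf{Z}^2\setminus\{(0,0)\}$:

```python
import math

def markoff_data(f, box=50):
    """Heuristic minimum m(f) over |x|, |y| <= box, and sqrt(Delta)/m."""
    A, B, C = f
    disc = B * B - 4 * A * C
    m = min(abs(A * x * x + B * x * y + C * y * y)
            for x in range(-box, box + 1)
            for y in range(-box, box + 1)
            if (x, y) != (0, 0))
    return m, math.sqrt(disc) / m

m, mu = markoff_data((1, 0, -2))
assert m == 1                        # attained e.g. at (x, y) = (1, 1)
assert abs(mu - math.sqrt(8)) < 1e-12
```

For $(1,0,-2)$ one finds $m=1$, so the Markoff value is $\sqrt{8}=2\sqrt{2}$.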
\begin{figure}[H]
\centering
\begin{tikzpicture} [scale=0.5]
\draw[line width=0.75mm] (0,0) circle (4cm);
\draw[line width=0.75mm] (0,-4)-- (0,-5.5);
\draw[line width=0.75mm] (0,4)-- (0,2.5);
\draw[line width=0.75mm] (-4,0)-- (-2.5,0);
\draw[line width=0.75mm] (4,0)-- (5.5,0);
\draw[line width=0.5mm,rotate=90] (-0.5,2.5)-- (0.5,2.5);
\draw[line width=0.5mm,rotate=90] (-0.5,-5.5)-- (0.5,-5.5);
\draw[line width=0.5mm] (-0.5,2.5)-- (0.5,2.5);
\draw[line width=0.5mm] (-0.5,-5.5)-- (0.5,-5.5);
\node at (0,2) [ultra thick]{\scriptsize $ (-1,4,-2)$};
\node at (0,-6) [ultra thick]{\scriptsize $\star (2,-4,{\mathbf 1})$};
\node at (6,0) [ultra thick,rotate=90]{\scriptsize $ (1,-4,2)$};
\node at (-2,0) [ultra thick,rotate=90]{\scriptsize $\star (-2,4,-{\mathbf 1})$};
\node at (5.5*cos{22.5},5.5*sin{22.5}) [ultra thick,rotate=22.5]{\scriptsize $\star (-1,2,{\mathbf 1})$};
\node at (5.5*cos{67.5},5.5*sin{67.5}) [ultra thick,rotate=67.5]{\scriptsize $\quad \star (1,-2,-{\mathbf 1})$};
\node at (-5.5*cos{22.5},5.5*sin{22.5}) [ultra thick,rotate=-22.5]{\scriptsize $ (-1,0,2)$};
\node at (-5.5*cos{67.5},5.5*sin{67.5}) [ultra thick,rotate=-67.5]{\scriptsize $\star (2,0,-{\mathbf 1})$};
\node at (5.5*cos{22.5},-5.5*sin{22.5}) [ultra thick,rotate=-22.5]{\scriptsize $\star (-2,0,{\mathbf 1})$};
\node at (5.5*cos{67.5},-5.5*sin{67.5}) [ultra thick,rotate=-67.5]{\scriptsize $ (1,0,-2)$};
\node at (-5.5*cos{22.5},-5.5*sin{22.5}) [ultra thick,rotate=22.5]{\scriptsize $ (-1,0,2)$};
\node at (-5.5*cos{67.5},-5.5*sin{67.5}) [ultra thick,rotate=67.5]{\scriptsize $\star (2,0,-{\mathbf 1})$};
\draw (4*cos{45},4*sin{45}) circle (2.25mm);
\fill[white] (4*cos{45},4*sin{45}) circle (2.25mm);
\node at (4*cos{45},4*sin{45}) [ultra thick]{ $\oplus$};
\draw (4*cos{-45},4*sin{-45}) circle (2.25mm);
\fill[white] (4*cos{-45},4*sin{-45}) circle (2.25mm);
\node at (4*cos{-45},4*sin{-45}) [ultra thick]{ $\oplus$};
\draw (4*cos{135},4*sin{135}) circle (2.25mm);
\fill[white] (4*cos{135},4*sin{135}) circle (2.25mm);
\node at (4*cos{135},4*sin{135}) [ultra thick]{ $\oplus$};
\draw (4*cos{-135},4*sin{-135}) circle (2.25mm);
\fill[white] (4*cos{-135},4*sin{-135}) circle (2.25mm);
\node at (4*cos{-135},4*sin{-135}) [ultra thick]{ $\oplus$};
\end{tikzpicture}
\caption{Minimum edges of $\mathcal F / \mathrm{Aut} ( (1,0,-2))$ ($\star$ stands for the forms which attain the minimum).}
\label{fig:quotient/by/hyperbolic}
\end{figure}
To conclude the paper, let us rephrase our main result: we show that the class of every primitive indefinite binary quadratic form is not simply a set but carries the extra structure of an infinite graph, namely a \c{c}ark, such that the forms in the class are identified with the edges of the graph. This graph admits a topological realization as a subset of an annulus and explains very well some known phenomena around Gauss' reduction theory of forms and Zagier's reduction of elements of $\mathrm{PSL}_2 (\Z)$, as explained in \cite{katok/ugarcovici}. In our point of view both Gauss-reduced forms and Zagier-reduced forms correspond to edges on what we call the spine of the \c{c}ark. Various properties of forms and their classes are manifested in a natural way on the \c{c}ark. The first instance of such a question concerning binary quadratic forms has been addressed by the second named author in \cite{reduction}, where he has given an improvement of Gauss' reduction of binary quadratic forms, and has given solutions to the minimum problem and the representation problem of binary quadratic forms.
\paragraph{Acknowledgements.}
\label{ackref}
The first named author is thankful to Max Planck Institute at Bonn for their hospitality during the preparation of the current paper. This research has been funded by the T\"UB\.ITAK grant 110T690. The first named author was also funded by the Galatasaray University Research Grant 12.504.001. The second named author is funded by Galatasaray University Research Grant 13.504.001.
\section{Introduction}
Electromagnetism is the theory of electric charge, light, and spin. Maxwell electromagnetism is the standard model [1]. However, successful as it is, it has limitations: it is linear, its potential fields are subsidiary, the polarization and magnetization vectors are defined by hand, light is passive, and spin is heuristic. Different subjects such as condensed matter, plasma, and astrophysics require an extension of the Maxwell equations. New impacts on light should be considered. Light signals at different frequencies can no longer be interpreted by Maxwell's linear EM, especially with gamma ray bursts [2] and super-power lasers [3].
The $21^{st}$ century challenge is an EM beyond Faraday-Maxwell [4]. Nonlinearity, strong magnetic fields, and photonics are showing new phenomenologies. A new EM is required, nevertheless preserving Maxwell principles such as light invariance, electric charge conservation, EM fields in pairs, and connected EM equations. The literature provides 57 models beyond Maxwell, where 14 are in the Standard Model context, 14 are motivated by extensions beyond the SM, 19 are nonlinear extensions, and 10 are due to dimensionality extensions [5]. Most of them are Euler-Heisenberg and Born-Infeld effective theories [6] and LSV [7] types.
A fundamental EM is expected: a model reinterpreting electric charge, light, and spin. Our research considers charge transfer. There is a microscopic EM, not registered by the Coulomb balance and the Maxwell equations, to be explored. The electric charge mutation of elementary particles contains an EM to be studied. One analyses the transformations of the charge set $\{+,0,-\}$: a physics where charges are created and destroyed, and their energies converted between themselves.
A new completeness for EM processes is introduced. Charge transfer extends the meaning of EM. A physical context to transmit the electric charge triad is introduced. The presence of four boson intermediations is required. However, they do not act isolated, but as an EM set: a four bosons EM to be explored as a whole, a relationship interlaced by electric charge symmetry.
Thus, a first consideration in looking for a fundamental EM comes from electric charge symmetry. An electric charge homothety is proposed. While homothety introduces an angle $\theta$ as the mathematical parameter for trigonometry, electric charge provides a gauge parameter $\alpha$ for gauge theory. A performance is constituted by associating four bosons. New EM Lagrangian and Noether identities are expressed. And, corresponding to the functions $\sin\theta$, $\cos\theta$, new EM observables are derived. A meaning of electricity and magnetism beyond Maxwell is discovered.
The second aspect of improving EM is primordial light. The relationship between light and EM is still open. Since Al-Haytham, light has interrogated physics [8]. Maxwell's chief discussion of his electromagnetic theory of light is in his dynamical theory of the electromagnetic field [9]. However, light as an electromagnetic wave contains a contradiction: while light is invariant, it is produced from electric charge oscillations. This raises the question: is light a cause or a consequence?
This conundrum is the essence of electromagnetism. However, neither the Standard Model nor QED answers it. Following the Big Bang, the fiat lux occurred at $10^{-10}\, s$, just after the Early Universe second phase transition $SU_{C}(3)\times SU_{L}(2)\times U_{R}(1) \stackrel{SSB}{\rightarrow} SU_{C}(3) \times U_{em}(1)$. For QED, virtual photons depend on the fine structure constant. A perspective beyond the electroweak phase transition (EWPT) is necessary [10].
The light metric should work as a framework for searching for such primordial light. It defines space-time symmetries. Historically, light symmetry opened three rooms for physics to be delivered: Maxwell and electric charge [1]; relativity with space-time and matter-energy correlations [11]; the Lorentz Group and spin [12]. Nevertheless, light remained a consequence.
Physics contains the mission of discovering a fundamental EM containing the primordial light, perceiving primordial light properties such as invariance, ubiquity, and self-interacting photons. There are various light manifestations beyond Maxwell to be investigated by a Lagrangian, transcribing at tree level a light physics beyond Maxwell. Firstly, develop the $\gamma \gamma$ phenomenology as a revival of Breit-Wheeler scattering [13]. Then include other EM manifestations, such as magnetogenesis [14], light and the Late Universe [15], light at the LHC [16], $e^{+}-e^{-}$ pairs in lasers [17], astrophysics [18], the $\gamma - Z^{0}$ interaction [19], and new light states [20].
A fourth light metric consideration will support this investigation. It covers the existence of field families in the Lorentz Group. The $\{\frac{1}{2}, \frac{1}{2} \}$ Lorentz Group representation includes the field set $A_{\mu I} \equiv \{A_{\mu}, U_{\mu}, V_{\mu}^{\pm} \}$ [21]. It supports the four messengers required for conducting the microscopic electric charge flux processes with $\Delta Q = 0, \pm1$. Four photon intermediations will correspond to the most generic microscopic electrodynamics, where $A_{\mu}$ means the usual photon, $U_{\mu}$ a massive photon, and $V_{\mu}^{\pm}$ two charged photons.
A four bosons electromagnetism is generated [22]. It provides a fundamental microscopic electromagnetism. Based on electric charge symmetry, an extended abelian electrodynamics is associated with $U(1)_{q} \equiv U(1)\times SO(2)_{global}$. A Lagrangian with a quadruplet of interconnected fields is obtained and three Noether identities are derived [23]. The third aspect for improving EM is to incorporate spin on the fields as $A'_{\mu I} = \left(e^{i\omega_{\alpha \beta}\Sigma^{\alpha \beta}}\right)^{\nu}_{\mu}A_{\nu I}$ [24].
A new EM completeness is introduced, based on a primitive quadruplet of potential fields. The literature at most considers the presence of two photons together; it works with the usual photon, the massive Proca field, the longitudinal photon, the paraphoton, and the photino [25]. Only the Standard Model provides four intermediate bosons together [26]. The four bosons EM includes four photons associated through electric charge symmetry.
The objective of this work is to study the vectorial equations corresponding to the four bosons fundamental EM. We start by writing down the quadri-potential transformations associated with the $U(1)_{q}$ extended abelian symmetry. They read
\begin{eqnarray}
A'_\mu &=& A_\mu + k_1 \partial_\mu \alpha \label{Usual photon}\\
U'_\mu &=& U_\mu + k_2 \partial_\mu \alpha \\
V_\mu^{+'} &=& e^{iq\alpha} \left( V_\mu^{+} +k_{+} \partial_\mu \alpha\right) \\
V_\mu^{-'} &=& e^{-iq\alpha} \left( V_\mu^- + k_{-} \partial_\mu \alpha\right). \label{Charge photon}
\end{eqnarray}
Eqs.~(\ref{Usual photon}-\ref{Charge photon}) introduce a potential field system whose interconnectivity is ruled by the electric charge symmetry. A wholeness is expressed by the parameter $\alpha$. It yields a performance beyond the Maxwell equations: the so-called four-four Maxwell equations.
The corresponding antisymmetric vectorial field strengths are written as
\begin{align}
F^I_{\mu \nu} &= \partial_{\mu} A^{I}_{\nu}-\partial_{\nu}A^{I}_{\mu};& \vec{E}_{i}^{I}&=F^{I}_{0i}; & \vec{B}_{i}^{I}&=\frac{1}{2}\epsilon_{ijk}F^{I}_{jk}
\end{align}
where $A^{\mu}_{I} \equiv \left(\phi_{I}, \vec{A}_{I}\right)$ and $I$ is a flavour index, $I = 1,\dots,4$, corresponding to $A_{\mu}, U_{\mu}, V^{\pm}_{\mu}$.
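Observe that, in the neutral sector, these antisymmetric strengths are insensitive to the inhomogeneous term of Eq.~(\ref{Usual photon}), since partial derivatives commute,
\begin{eqnarray*}
F'_{\mu \nu} = \partial_{\mu}\left(A_{\nu} + k_{1}\partial_{\nu}\alpha \right) - \partial_{\nu}\left(A_{\mu} + k_{1}\partial_{\mu}\alpha \right) = F_{\mu \nu},
\end{eqnarray*}
and similarly for $U_{\mu}$; for the charged pair $V^{\pm}_{\mu}$ the phases $e^{\pm iq\alpha}$ survive under differentiation, so the corresponding strengths transform homogeneously only under the global part of $U(1)_{q}$.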
For the granular symmetric field strengths, one gets
\begin{align}
S^{I}_{\mu \nu} &=\partial_{\mu}A^{I}_{\nu} + \partial_{\nu}A^{I}_{\mu};& S^{\alpha I}_{\alpha}& =2\partial_{\alpha}A^{\alpha I}
\end{align}
\begin{align}
S^{I}_{0i} &= \partial_{0}A^{I}_{i} + \partial_{i}A^{I}_0, & S^{I}_{ij} &= \partial_{i}A^{I}_{j} + \partial_{j}A^{I}_{i}
\end{align}
The antisymmetric collective field strengths are written as
\begin{align}
e_{[\mu \nu]} &= \mathbf{e}_{[IJ]}A^{I}_{ \mu}A^{J}_\nu,& \vec{\mathbf{e}}_{i}&=\mathbf{e}_{[0i]},& \vec{\mathbf{b}}_{i} &= \frac{1}{2}\epsilon_{ijk}\mathbf{e}_{[jk]}
\end{align}
It yields the following antisymmetric group of collective vectors fields:
\begin{align}
\vec{\mathbf{e}}_{AU} &= \mathbf{e}^{[0i]}_{AU} = \mathbf{e}_{[12]}\left(\phi_{A}\vec{U}-\phi_{U}\vec{A}\right) & \vec{b}_{AU} &= \mathbf{e}^{[ij]}_{AU} = \mathbf{e}_{[12]}\left(\vec{A}\times \vec{U}\right)
\\
\vec{\mathbf{e}}_{+-} &= \mathbf{e}^{[0i]}_{+-} = \mathbf{e}_{[34]}\left(\phi_{-}\vec{V}^+-\phi_{+}\vec{V}^{-}\right)& \vec{b}_{+-} &= \mathbf{e}^{[ij]}_{+-}= \mathbf{e}_{[34]}\left(\vec{V}^+ \times \vec{V}^{-}\right)
\\
\vec{\mathbf{e}}_{+A} &= \mathbf{e}^{[0i]}_{+A} = \left(\mathbf{e}_{[13]}+i\mathbf{e}_{[14]}\right)\left(\phi_{A}\vec{V}^{+}-\phi_{+}\vec{A}\right)& \vec{b}_{+A}&= \mathbf{e}^{[ij]}_{+A}=\left(\mathbf{e}_{[13]}+i\mathbf{e}_{[14]}\right)\left(\vec{A}\times\vec{V}^+\right)
\\
\vec{\mathbf{e}}_{-A} &= \mathbf{e}^{[0i]}_{-A} = \left(\mathbf{e}_{[13]}-i\mathbf{e}_{[14]}\right)\left(\phi_{A}\vec{V}^{-}-\phi_{-}\vec{A}\right)& \vec{b}_{-A}&= \mathbf{e}^{[ij]}_{-A}=\left(\mathbf{e}_{[13]}-i\mathbf{e}_{[14]}\right)\left(\vec{A}\times\vec{V}^-\right)
\\
\vec{\mathbf{e}}_{+U} &= \mathbf{e}^{[0i]}_{+U} = \left(\mathbf{e}_{[23]}+i\mathbf{e}_{[24]}\right)\left(\phi_{U}\vec{V}^{+}-\phi_{+}\vec{U}\right)& \vec{b}_{+U}&= \mathbf{e}^{[ij]}_{+U}=\left(\mathbf{e}_{[23]}+i\mathbf{e}_{[24]}\right)\left(\vec{U}\times\vec{V}^+\right)
\\
\vec{\mathbf{e}}_{-U} &= \mathbf{e}^{[0i]}_{-U} = \left(\mathbf{e}_{[23]}-i\mathbf{e}_{[24]}\right)\left(\phi_{U}\vec{V}^{-}-\phi_{-}\vec{U}\right)& \vec{b}_{-U}&= \mathbf{e}^{[ij]}_{-U}=\left(\mathbf{e}_{[23]}-i\mathbf{e}_{[24]}\right)\left(\vec{U}\times\vec{V}^-\right)
\end{align}
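As a consistency check on eq. (1.3), the electric-magnetic split of an antisymmetric collective tensor can be verified symbolically. The sketch below is illustrative only (not part of the formalism): it sets $\mathbf{e}_{[12]}=1$ and checks with sympy that the $[0i]$ components reproduce $\phi_{A}\vec{U}-\phi_{U}\vec{A}$ and the spatial components reproduce $\vec{A}\times\vec{U}$, as in the $\vec{\mathbf{e}}_{AU}$, $\vec{b}_{AU}$ entries above.

```python
import sympy as sp

# Potentials A_mu = (phi_A, vec A) and U_mu = (phi_U, vec U)
phiA, phiU = sp.symbols('phi_A phi_U')
A = sp.Matrix(sp.symbols('A1 A2 A3'))
U = sp.Matrix(sp.symbols('U1 U2 U3'))
A4 = [phiA, A[0], A[1], A[2]]
U4 = [phiU, U[0], U[1], U[2]]

# Antisymmetric collective tensor e_[mu nu] with e_[12] set to 1
e = sp.Matrix(4, 4, lambda m, n: A4[m]*U4[n] - U4[m]*A4[n])

e_vec = sp.Matrix([e[0, 1], e[0, 2], e[0, 3]])   # e_i = e_[0i]
b_vec = sp.Matrix([e[2, 3], e[3, 1], e[1, 2]])   # b_i = (1/2) eps_ijk e_[jk]

assert sp.simplify(e_vec - (phiA*U - phiU*A)) == sp.zeros(3, 1)
assert sp.simplify(b_vec - A.cross(U)) == sp.zeros(3, 1)
```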
For the symmetric collective vector fields:
\begin{align}
s^{\alpha}_{\alpha AA} &= \mathbf{e}_{(11)}A^{\alpha}A_{\alpha}, & s^{\alpha}_{\alpha UU} &= \mathbf{e}_{(22)}U^{\alpha}U_{\alpha} & s^{\alpha}_{\alpha AU}&=\mathbf{e}_{(12)}A^{\alpha}U_{\alpha} \\
\vec{s}_{AA}& = \mathbf{e}^{(0i)}_{(AA)} = \mathbf{e}_{(11)}\phi_{A}\vec{A} & \vec{s}_{UU}& = \mathbf{e}^{(0i)}_{UU} = \mathbf{e}_{(22)}\phi_{U}\vec{U} & \vec{s}_{AU} = \mathbf{e}^{(0i)}_{AU} &= \mathbf{e}_{(12)}\left(\phi_{U}\vec{A} + \phi_{A}\vec{U}\right)
\end{align}
and
\begin{align}
s^{\alpha}_{\alpha +-}& =\left(\mathbf{e}_{(33)}+\mathbf{e}_{(44)}\right)V^{ + \alpha }V^{-}_{\alpha} & \vec{\mathbf{s}}_{+-}&= \left(\mathbf{e}_{(33)}+\mathbf{e}_{(44)}\right)\left(\phi_{-}\vec{V}^{+}+\phi_{+}\vec{V}^{-}\right)\\
s^{\alpha}_{\alpha ++}& = \left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)V^{+\alpha}V_{+\alpha}& \vec{s}_{++}&=\mathbf{e}^{(0i)}_{++} = \left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)\phi_{+}\vec{V}^{+}\\
s^{\alpha}_{\alpha --}& = \left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)V^{-\alpha}V_{-\alpha}& \vec{s}_{--}&=\mathbf{e}^{(0i)}_{--} = \left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)\phi_{-}\vec{V}^{-}\\
s^{\alpha}_{\alpha A+}&= \left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)A_{\alpha}V^{+ \alpha}& \vec{s}_{+A}&=\mathbf{e}^{0i}_{A+} =\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left(\phi_{+}\vec{A} + \phi_{A}\vec{V}^{+}\right) \\
s^{\alpha}_{\alpha A-}&= \left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)A_{\alpha}V^{- \alpha}& \vec{s}_{-A}&=\mathbf{e}^{0i}_{A-} =\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)\left(\phi_{-}\vec{A} + \phi_{A}\vec{V}^{-}\right) \\
s^{\alpha}_{\alpha U+}&= \left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)U_{\alpha}V^{+ \alpha}& \vec{s}_{+U}&=\mathbf{e}^{0i}_{U+} =\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)\left(\phi_{+}\vec{U} + \phi_{U}\vec{V}^{+}\right) \\
s^{\alpha}_{\alpha U-}&= \left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)U_{\alpha}V^{- \alpha}& \vec{s}_{U-}&=\mathbf{e}^{0i}_{U-} =\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)\left(\phi_{-}\vec{U} + \phi_{U}\vec{V}^{-}\right)
\end{align}
and
\begin{eqnarray}
\stackrel{\leftrightarrow}{s}_{+-} & =&s^{ij}_{+-} = \left(\mathbf{e}_{(33)}+ \mathbf{e}_{(44)}\right)\left(V^{+i}V^{-j}+V^{-i}V^{+j}\right)
\\
\stackrel{\leftrightarrow}{s}_{AU} & =&s^{ij}_{AU} = \mathbf{e}_{(12)}\left(A^{i}U^{j}+U^{i}A^{j}\right)
\\
\stackrel{\leftrightarrow}{s}_{++} & =&s^{ij}_{++} = \left(\mathbf{e}_{(33)}- \mathbf{e}_{(44)}\right)V^{+i}V^{+j}
\\
\stackrel{\leftrightarrow}{s}_{--} & =&s^{ij}_{--} = \left(\mathbf{e}_{(33)}- \mathbf{e}_{(44)}\right)V^{-i}V^{-j}
\\
\stackrel{\leftrightarrow}{s}_{A+} & =&s^{ij}_{A+} = \left(\mathbf{e}_{(13)}+ \mathbf{e}_{(14)}\right)\left(A^{i}V^{+j}+V^{+i}A^{j}\right)
\\
\stackrel{\leftrightarrow}{s}_{-A} & =&s^{ij}_{-A} = \left(\mathbf{e}_{(13)}- \mathbf{e}_{(14)}\right)\left(A^{i}V^{-j}+V^{-i}A^{j}\right)
\\
\stackrel{\leftrightarrow}{s}_{U+} & =&s^{ij}_{U+} = \left(\mathbf{e}_{(23)}+ \mathbf{e}_{(24)}\right)\left(U^{i}V^{+j}+V^{+i}U^{j}\right)
\\
\stackrel{\leftrightarrow}{s}_{U-} & =&s^{ij}_{U-} = \left(\mathbf{e}_{(23)}- \mathbf{e}_{(24)}\right)\left(U^{i}V^{-j}+V^{-i}U^{j}\right)
\end{eqnarray}
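The same kind of symbolic check applies to the symmetric sector. The sketch below is illustrative only (again setting $\mathbf{e}_{(12)}=1$): it verifies that the collective tensor built from $A_{\mu}$ and $U_{\mu}$ is manifestly symmetric and that its $(0i)$ part reproduces $\vec{s}_{AU}=\phi_{U}\vec{A}+\phi_{A}\vec{U}$.

```python
import sympy as sp

phiA, phiU = sp.symbols('phi_A phi_U')
A = sp.Matrix(sp.symbols('A1 A2 A3'))
U = sp.Matrix(sp.symbols('U1 U2 U3'))
A4 = [phiA, A[0], A[1], A[2]]
U4 = [phiU, U[0], U[1], U[2]]

# Symmetric collective tensor s_(mu nu) with e_(12) set to 1
s = sp.Matrix(4, 4, lambda m, n: A4[m]*U4[n] + U4[m]*A4[n])

assert s == s.T                                  # manifestly symmetric
s_vec = sp.Matrix([s[0, 1], s[0, 2], s[0, 3]])   # s_i = s_(0i)
assert sp.simplify(s_vec - (phiU*A + phiA*U)) == sp.zeros(3, 1)
```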
Eqs. (1.5)-(1.27) express a new perspective on EM energy. New electromagnetic observables are considered: electricity and magnetism become a more complete set when granular and collective fields are introduced. Notice that $\mathbf{e}_{[12]}, \dots, \mathbf{e}_{(24)}$ are free coefficients. They can take any value without violating the gauge symmetry and, depending on their relationship, gauge-invariant properties are derived. Gauge invariance is proved either through the Lagrangian [27] or through the individual field strengths [23]. Given these physical entities, we should study their corresponding dynamical theory.
\section{Euler-Lagrange and Bianchi Equations}
For the last 150 years, Maxwell's equations have been the dominant electromagnetic theory, with QED supplementing them with deviations of order $1\%$ due to quantum fluctuations. However, they are not enough to produce features such as nonlinear EM, physical potential fields, an origin for polarization and magnetization, light carrying its own EM fields, and spin incorporated in the fields. An enlargement of EM becomes necessary.
A four bosons electromagnetic Lagrangian is constituted through charge transfer [22-24]. The exchanges of the associated electric charge triad $\{+,0,-\}$ introduce an EM under the fields set $\{A_{\mu}, U_{\mu}, V^{\pm}_{\mu}\}$. The corresponding Gauss, Amp\`{e}re, and Faraday laws are extended. It yields,
\\
\\
For $A^{T}_{\mu}$ (spin-1):
\begin{equation}\label{Gauss equations for A}
\vec{\nabla} \cdot \left[4a_1 \vec{E_A} + 2 b_1\left(\vec{\mathbf{e}}_{AU}+ \vec{\mathbf{e}}_{+-} \right)\right] = \rho^T_A
\end{equation}
with
\begin{multline}\label{Densitity Current A}
\rho^T_A \equiv -2\mathbf{e}_{[12]}\left(\vec{E}_A + \vec{E}_U\right)\cdot\vec{U} - \sqrt{2}\left(\mathbf{e}_{[13]}-i\mathbf{e}_{[14]}\right)\left(\vec{E}_+ + \vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U} \right)\cdot \vec{V}^- +\\
-\sqrt{2}\left(\mathbf{e}_{[13]}+i\mathbf{e}_{[14]}\right)\left(\vec{E}_- + \vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U}\right) \cdot \vec{V}^+
- 4\left(\mathbf{e}_{[12]}\vec{\mathbf{e}}_{AU} + \mathbf{e}_{[34]}\vec{\mathbf{e}}_{+-}\right)\cdot \vec{U}\\
\end{multline}
and
\begin{equation}\label{Ampere-maxwell for A}
\vec{\nabla} \times \left[\vec{B}_A + 2b_1\left(\vec{b}_{AU} + \vec{b}_{+-}\right)\right] - \frac{\partial}{\partial t}\left[\vec{E}_A + 2 b_1\left(\vec{\mathbf{e}}_{AU}+ \vec{\mathbf{e}}_{+-} \right) \right] = \vec{J}^{T}_A
\end{equation}
with
\begin{eqnarray}
&&\vec{J}^T_A \equiv -2 \mathbf{e}_{[12]}\left[\left(\vec{E}_A + \vec{E}_U + 2\vec{\mathbf{e}}_{AU}\right)\phi_U + \left(\vec{B}_A + \vec{B}_U + \vec{b}_{AU}\right) \times \vec{U} \right] +\nonumber
\\
&&- \sqrt{2}\left(\mathbf{e}_{[13]}-i\mathbf{e}_{[14]}\right)\big[\left(\vec{E}_+ + \vec{ \mathbf{e}}_{+A} +\vec{\mathbf{e}}_{+U}\right) \phi^- + \big(\vec{B}_+ + \vec{b}_{+A} +
\\
&&+\vec{b}_{+U}\big) \times \vec{V}^-\big] - \sqrt{2}\left(\mathbf{e}_{[13]}+i\mathbf{e}_{[14]}\right)\big[\left(\vec{E}_- + \vec{\mathbf{e}}_{-A} +\vec{\mathbf{e}}_{-U} \right) \phi^+
\\
&&+ \left(\vec{B}_- + \vec{b}_{-A} + \vec{b}_{-U} \right) \times \vec{V}^+\big]\nonumber
\end{eqnarray}
with the following conservation law.
\begin{equation}
\frac{\partial \rho^{T}_{A}}{\partial t} + \vec{\nabla}\cdot\vec{J}^{T}_{A} = 0
\end{equation}
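Eq. (2.5) can be checked in the familiar way. Writing schematically $\vec{D}_A$ and $\vec{H}_A$ for the bracketed field combinations of the extended Gauss and Amp\`{e}re laws (a shorthand used only in this remark), taking $\partial/\partial t$ of eq. (2.1) and the divergence of eq. (2.3) gives
\begin{equation*}
\frac{\partial \rho^{T}_{A}}{\partial t} + \vec{\nabla}\cdot\vec{J}^{T}_{A}
= \frac{\partial}{\partial t}\,\vec{\nabla}\cdot\vec{D}_A
+ \vec{\nabla}\cdot\left(\vec{\nabla}\times\vec{H}_A - \frac{\partial \vec{D}_A}{\partial t}\right) = 0,
\end{equation*}
since $\vec{\nabla}\cdot(\vec{\nabla}\times\vec{H}_A)=0$ and the remaining mixed derivatives cancel.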
The corresponding granular Bianchi identity is
\begin{equation}
\vec{\nabla} \cdot \vec{B}_A = 0, \qquad \vec{\nabla} \times \vec{E}_A = -\frac{\partial\vec{B}_A}{\partial t}
\end{equation}
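The granular Bianchi identity is automatic whenever $\vec{E}_A$ and $\vec{B}_A$ descend from the potentials $(\phi_A,\vec{A})$. A short symbolic sketch, illustrative only and written with the standard sign convention $\vec{E}=-\vec{\nabla}\phi-\partial_{t}\vec{A}$, $\vec{B}=\vec{\nabla}\times\vec{A}$, verifies both equations:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.Function('phi')(t, x, y, z)
Avec = sp.Matrix([sp.Function(n)(t, x, y, z) for n in ('A_x', 'A_y', 'A_z')])
X = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, xi) for xi in X])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

def div(V):
    return sum(sp.diff(V[k], X[k]) for k in range(3))

E = -grad(phi) - sp.diff(Avec, t)   # E = -grad(phi) - dA/dt
B = curl(Avec)                      # B = curl(A)

assert sp.simplify(div(B)) == 0                                # div B = 0
assert sp.simplify(curl(E) + sp.diff(B, t)) == sp.zeros(3, 1)  # curl E = -dB/dt
```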
Eqs. (2.1-2.6) introduce a new photon physics. Different from Maxwell, the photon is no more an electric charge consequence; it produces its own EM field. It yields dynamics with photon fields producing granular and collective EM fields, interconnected with the other three intermediate bosons. As anticipated by the Schwinger critical fields, $E_{c} = \frac{m_{e}^2c^3}{e\hbar}$, $B_{c} = \frac{m_{e}^2c^2}{e\hbar}$, nonlinearity is incorporated. A photon coupling appears without requiring the presence of an electric charge.
Other associated equations are:
\\
\\
For $U^{T}_{\mu}$: the associated equations are
\begin{equation}\label{Gauss equation for U}
\vec{\nabla}\cdot \left[4a_2 \vec{E}_U + 2b_2\left(\vec{\mathbf{e}}_{AU}+ \vec{\mathbf{e}}_{+-} \right) \right] = \rho^T_{U}
\end{equation}
and
\begin{equation}\label{Ampere-Maxwell For U}
\vec{\nabla} \times \left[\vec{B}_U + 2b_2\left(\vec{b}_{AU} + \vec{b}_{+-}\right)\right] - \frac{\partial}{\partial t}\left[\vec{E}_U + 2 b_2\left(\vec{\mathbf{e}}_{AU}+ \vec{\mathbf{e}}_{+-} \right) \right] = \vec{J}^{T}_U
\end{equation}
where the charges and current densities are expressed in Appendix A.
The corresponding granular Bianchi identity is
\begin{equation}
\vec{\nabla} \cdot \vec{B}_U = 0, \qquad \vec{\nabla} \times \vec{E}_U = -\frac{\partial\vec{B}_U}{\partial t}
\end{equation}
For $V^{T+}_{\mu}$: it yields,
\begin{equation}\label{Gauss equation V+}
\vec{\nabla} \cdot \left[2a_3 \vec{E_+} + 2 b_3\left(\vec{\mathbf{e}}_{A+}+ \vec{\mathbf{e}}_{U+} \right)\right] = \rho^T_+
\end{equation}
and
\begin{equation}\label{Ampere-Maxwell for V+}
\vec{\nabla} \times \left[\vec{B}_+ + 2b_3\left(\vec{b}_{+A} + \vec{b}_{+U}\right)\right] - \frac{\partial}{\partial t}\left[\vec{E}_+ + 2 b_3\left(\vec{\mathbf{e}}_{+A}+ \vec{\mathbf{e}}_{+U} \right) \right] = \vec{J}^{T}_+
\end{equation}
The associated granular Bianchi identity is
\begin{equation}
\vec{\nabla} \cdot \vec{B}_+ = 0, \qquad \vec{\nabla} \times \vec{E}_+ = -\frac{\partial\vec{B}_+}{\partial t}
\end{equation}
For $V^{T-}_{\mu}$: one gets,
\begin{equation}\label{Gauss equation for V-}
\vec{\nabla} \cdot \left[2a_3 \vec{E}_- + 2 b_3\left(\vec{\mathbf{e}}_{A-}+ \vec{\mathbf{e}}_{U-} \right)\right] = \rho^T_-
\end{equation}
and
\begin{equation}\label{Ampere-Maxwell for V-}
\vec{\nabla} \times \left[\vec{B}_- + 2b_3\left(\vec{b}_{-A} + \vec{b}_{-U}\right)\right] - \frac{\partial}{\partial t}\left[\vec{E}_- + 2 b_3\left(\vec{\mathbf{e}}_{-A}+ \vec{\mathbf{e}}_{-U} \right) \right] = \vec{J}^{T}_-
\end{equation}
The corresponding Bianchi identity is
\begin{equation}
\vec{\nabla} \cdot \vec{B}_- = 0, \qquad \vec{\nabla} \times \vec{E}_- = -\frac{\partial\vec{B}_-}{\partial t}
\end{equation}
Eqs. (2.7-2.20) follow the same conservation law as eq. (2.5). The corresponding charges and currents expressions are given in Appendix A.
Thus, eqs. (2.1)-(2.20) express new EM vectorial equations. Four interconnected photons work as their own sources, proposing granular and collective EM fields (collective fields identified with polarization and magnetization vectors), nonlinearity, and potential fields interacting with EM fields. The equations preserve rotational and translational symmetries [28] and are covariant under $A'_{\mu I} = \Lambda_{\mu}^{\nu}A_{\nu I}$, where $\Lambda^{\nu}_{\mu}$ is the Lorentz transformation matrix [29].
\\
\\
Considering the spin-0 sector, one gets the corresponding scalar Gauss and Amp\`{e}re laws. It gives,
\\
\\
For $A^{L}_{\mu}$ (spin-0):
\begin{eqnarray}\label{spin-0 spatial equation A}
&&\frac{\partial}{\partial t}[4\left( \beta_1 + \beta_1 \rho_1 + 11\rho_1 \right)S^{\alpha}_{\alpha A} + 2\left(\rho_1\beta_2 + \rho_2 \beta_1 + 22 \rho_1 \rho_2\right) S^{\alpha}_{\alpha U} +\nonumber
\\
&&+2\left(17 \rho_1 - \beta_1 \right)(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})] = \rho^{s}_A
\end{eqnarray}
with
\begin{eqnarray}
&&\rho^{s}_{A} \equiv 2 \left(\mathbf{e}_{(11)} + \mathbf{e}_{(12)}\right) \big\{ \big(\beta_1\vec{S}_A + \beta_2 \vec{S}_U \big) \cdot \left(\vec{A} + \vec{U}\right) \nonumber
\\
&&+ \big[\left( \beta_1 + 17\rho_1 \right)S^{\alpha}_{\alpha A} + \left(\beta_2 +17\rho_2\right) S^{\alpha}_{\alpha U} \big] \left(\phi_A + \phi_U\right) \big\} +\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)\big[\beta_+ \vec{S}_+\cdot\vec{V}^- +\left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}\phi^-\big] \nonumber
\\
&&- \beta_1\mathbf{e}_{(11)}\left(S^{\alpha}_{\alpha A} \phi_A + 2 \vec{S}_A \cdot \vec{A}\right)+\beta_1\mathbf{e}_{(12)}\big(S^{\alpha}_{\alpha A}\phi_U + S^{\alpha}_{\alpha U}\phi_A + \vec{S}_A \cdot \vec{U} \nonumber
\\
&&+ \vec{S}_U \cdot \vec{A}\big) - \beta_1\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right) \left(S^{\alpha}_{\alpha +}\phi^- + S^{\alpha}_{\alpha -}\phi^+ + \vec{S}_+\cdot\vec{V}^- + \vec{S}_-\cdot\vec{V}^+\right)\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left[\beta_- \vec{S}_-\cdot\vec{V}^+ + \left(\beta_- + 17\rho_-\right)S^{\alpha}_{\alpha -}\phi^+\right] \nonumber
\\
&&+ 4\mathbf{e}_{(11)}\left(\vec{s}_{AA} + \vec{s}_{UU} + \vec{s}_{AU} +\vec{s}_{+-} \right) \cdot \vec{A} +72\mathbf{e}_{(11)}\big(s^{\alpha}_{\alpha UU}\nonumber
\\
&& + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha+-}\big)\phi_A + 4 \mathbf{e}_{(12)} \left( \vec{s}_{AA} + \vec{s}_{UU} \right) \cdot \vec{U}
\\
&&+ 72 \mathbf{e}_{(12)}\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha +-}\right)\phi_U-\beta_1\mathbf{e}_{(22)}\left(\vec{S}_U\cdot \vec{U} + S^{\alpha}_{\alpha U}\phi_U \right) \nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right) \left[\vec{s}_{-A}\cdot\vec{V}^+ + 18 s^{\alpha}_{\alpha A-} \phi^+\right] + \nonumber
\\
&&\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right) \left[\vec{s}_{+A}\cdot\vec{V}^- + 18 s^{\alpha}_{\alpha A+} \phi^-\right]\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right) \left[\vec{s}_{-U}\cdot\vec{V}^+ + 18 s^{\alpha}_{\alpha U-} \phi^+\right] +\nonumber
\\
&&+ \sqrt{2}\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right) \left[\vec{s}_{+U}\cdot\vec{V}^- + 18 s^{\alpha}_{\alpha U+} \phi^-\right]\nonumber
\end{eqnarray}
and the corresponding scalar-Amp\`{e}re law
\begin{eqnarray}
&&\vec{\nabla}[4\left( \beta_1 + \beta_1 \rho_1 + 11\rho_1 \right)S^{\alpha}_{\alpha A} + 2\left(\rho_1\beta_2 + \rho_2 \beta_1 + 22 \rho_1 \rho_2\right) S^{\alpha}_{\alpha U} + \nonumber
\\
&&+2\left(17 \rho_1 - \beta_1 \right) ( s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})] =- \vec{j}^{s}_A
\end{eqnarray}
with
\begin{eqnarray}
&&\vec{j}^{s}_A \equiv 2\beta_1\big\{S^{i0}_A\left(\mathbf{e}_{(11)}A_0 + \mathbf{e}_{(12)}U_0\right) + S^{ij}_A\left( \mathbf{e}_{(11)}A_j + \mathbf{e}_{(12)}U_j \right) \nonumber
\\
&&+ S^{\alpha}_{\alpha A}\left( \mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i \right)\big\} +2\beta_2\big\{S^{i0}_U\left(\mathbf{e}_{(11)}A_0 + \mathbf{e}_{(12)}U_0\right) \nonumber
\\
&& + S^{ij}_U\left( \mathbf{e}_{(11)}A_j + \mathbf{e}_{(12)}U_j \right) + S^{\alpha}_{\alpha U}\left( \mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i \right)\big\}\nonumber
\\
&&+34\rho_1S^{\alpha}_{\alpha A}\left(\mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i\right) + 34\rho_2S^{\alpha}_{\alpha U}\left(\mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i\right) \nonumber
\\
&&+ \sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)[\beta_+S^{i0}_+ V_0^- +\beta_+S^{ij}_+V_j^- + \left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}V^{i-}] + \nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)[\beta_-S^{i0}_- V_0^+ +\beta_-S^{ij}_-V_j^+ + \left(\beta_- + 17\rho_-\right)S^{\alpha}_{\alpha -}V^{i+}]+\nonumber
\\
&&+4\mathbf{e}_{(11)}[\left(s^{i0}_{AA} + s^{i0}_{UU} + s^{i0}_{AU} + s^{i0}_{+-} \right)A_0 + \left(s^{ij}_{AA} + s^{ij}_{UU} + s^{ij}_{AU} + s^{ij}_{+-} \right)A_j \nonumber
\\
&&+ (2s^{\alpha}_{\alpha AA} + 17s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})A^i] + 4\mathbf{e}_{(12)}[\big(s^{i0}_{AA} + s^{i0}_{UU} \nonumber
\\
&& + s^{i0}_{AU} \big)U_0 + \big(s^{ij}_{AA} + s^{ij}_{UU} + s^{ij}_{AU} \big)U_j + 18(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})U^i] \nonumber
\\
&&+ \sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left(s^{i0}_{-A}V^+_0 + s^{ij}_{-A}V^+_j + 18s^{\alpha}_{\alpha -A}V^{i+}\right) +
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)(s^{i0}_{+A}V^-_0 + s^{ij}_{+A}V^-_j + 18s^{\alpha}_{\alpha +A}V^{i-}) \nonumber
\\
&& + \sqrt{2}\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)\left(s^{i0}_{-U}V^+_0 + s^{ij}_{-U}V^+_j + 18s^{\alpha}_{\alpha -U}V^{i+}\right) + \nonumber
\\
&&\sqrt{2}\left(\mathbf{e}_{(23)} -i\mathbf{e}_{(24)}\right)(s^{i0}_{+U}V^-_0+s^{ij}_{+U}V^-_j + 18s^{\alpha}_{\alpha +U}V^{i-}) \nonumber
\\
&&-\beta_1\left[\mathbf{e}_{(11)}\left(S^{\alpha}_{\alpha A}A^i + 2S^{i0}_A A_0 + S^{ij}_A A_j\right) + \mathbf{e}_{(22)}\left(S^{\alpha}_{\alpha U}U^i + 2S^{i0}_U U_0 + S^{ij}_U U_j\right)\right]\nonumber
\\
&&-\beta_1\mathbf{e}_{(12)}\left[S^{\alpha}_{\alpha U}A^i +S^{\alpha}_{\alpha A}U^i + 2S^{i0}_U A_0 + 2S^{i0}_A U_0 + 2S^{ij}_U A_j + 2 S^{ij}_A U_j \right]\nonumber
\\
&&-\beta_1\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\big(S^{\alpha}_{\alpha + }V^{i -} + S^{\alpha}_{\alpha -}V^{i+} + 2S^{i0}_+ V^-_0 \nonumber
\\
&&+ 2S^{i0}_- V^+_0 + 2S^{ij}_+ V^-_j + 2S^{ij}_- V^+_j\big)\nonumber
\end{eqnarray}
Notice that, since they operate on scalar field strengths, the above longitudinal equations do not involve divergence and curl. Their field variations depend just on time and space. Eq. (2.21) expresses a kind of scalar field strengths velocity and eq. (2.23) a spatial evolution. The corresponding conservation law is
\begin{equation}
\vec{\nabla} \rho^{s}_{A} + \frac{\partial \vec{j}^{s}_{A}}{\partial t} = 0
\end{equation}
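This longitudinal conservation law is immediate. Denoting by $X_A$ the common scalar combination inside the brackets of eqs. (2.21) and (2.23) (a shorthand used only in this remark), those equations read $\partial_{t}X_A = \rho^{s}_{A}$ and $\vec{\nabla}X_A = -\vec{j}^{s}_{A}$, hence
\begin{equation*}
\vec{\nabla}\rho^{s}_{A} + \frac{\partial \vec{j}^{s}_{A}}{\partial t}
= \vec{\nabla}\,\partial_{t}X_A - \partial_{t}\vec{\nabla}X_A = 0 .
\end{equation*}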
\\
\\
For $U^{L}_{\mu}$:
\begin{eqnarray}
&&\frac{\partial}{\partial t}[4\left( \beta_2 + \beta_2 \rho_2 + 11\rho_2 \right)S^{\alpha}_{\alpha U} + 2\left(\rho_1\beta_2 + \rho_2 \beta_1 + 22 \rho_1 \rho_2\right) S^{\alpha}_{\alpha A} \nonumber
\\
&&+ 2\left(17 \rho_2 - \beta_2 \right) ( s^{\alpha}_{\alpha AA} +s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})] - 2\mathbf{m}^2_U\phi_U = \rho^{s}_U
\end{eqnarray}
and
\begin{eqnarray}
&&\vec{\nabla}[4\left( \beta_2 + \beta_2 \rho_2 + 11\rho_2 \right)S^{\alpha}_{\alpha U} + 2\left(\rho_1\beta_2 + \rho_2 \beta_1 + 22 \rho_1 \rho_2\right) S^{\alpha}_{\alpha A} \nonumber
\\
&&+ 2\left(17 \rho_2 - \beta_2 \right) ( s^{\alpha}_{\alpha AA} +s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-})] - 2\mathbf{m}^2_U\vec{U} = -\vec{j}^{s}_U
\end{eqnarray}
For $V^{L-}_{\mu}$:
\begin{eqnarray}
&&\frac{\partial}{\partial t}\left\{2\left(\beta_+\beta_- + 16\rho_+\rho_- + \rho_+\beta_- + \rho_-\beta_+\right)S^{\alpha}_{\alpha -} + \left(34\rho_+ - 2\beta_-\right)\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)\right\}\nonumber
\\
&&- \mathbf{m}^2_{V}\phi^- = \rho^{-s}_{V}
\end{eqnarray}
and
\begin{eqnarray}
&&\vec{\nabla}\left\{2\left(\beta_+\beta_- + 16\rho_+\rho_- + \rho_+\beta_- + \rho_-\beta_+\right)S^{\alpha}_{\alpha -} + \left(34\rho_+ - 2\beta_-\right)\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)\right\}\nonumber
\\
&&- \mathbf{m}^2_{V}\vec{V}^- = -\vec{j}^{-s}_{V}
\end{eqnarray}
For $V_{\mu}^{L+}$:
\begin{eqnarray}
&&\frac{\partial}{\partial t}\left\{2\left(\beta_+\beta_- + 16\rho_+\rho_- + \rho_+\beta_- + \rho_-\beta_+\right)S^{\alpha}_{\alpha +} + \left(34\rho_+ - 2\beta_+\right)\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)\right\}\nonumber
\\
&&- \mathbf{m}^2_{V}\phi^+ = \rho^{+s}_{V}
\end{eqnarray}
and
\begin{eqnarray}
&&\vec{\nabla}\left\{2\left(\beta_+\beta_- + 16\rho_+\rho_- + \rho_+\beta_- + \rho_-\beta_+\right)S^{\alpha}_{\alpha +} + \left(34\rho_+ - 2\beta_+\right)\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)\right\}\nonumber
\\
&&- \mathbf{m}^2_{V}\vec{V}^+ = -\vec{j}^{+s}_{V}
\end{eqnarray}
where the above charges and current densities are written in Appendix A.
Concluding, these Euler-Lagrange equations, as Maxwell equations, make an interconnected system, enlarging the EM properties with new electric and magnetic field relationships. Lorentz symmetry also relates these spatial and temporal equations through covariance.
\section{Electric Charge Homothety}
Charge transfer $\{+,0,-\}$ introduces a generic electric charge conservation. While Maxwell constructed charge conservation by introducing the displacement current, from which the connection between the electric and magnetic fields emerges, here a new physicality appears. The Maxwell abelian gauge symmetry is extended to $U_{q}(1) \equiv U(1)\times SO(2)_{global}$. It yields the four bosons EM homothety: an electric charge symmetry associated with a kind of homothety whose corresponding angle is given by the gauge parameter $\alpha$. This symmetry acts on eqs. (1.1-1.4) through gauge transformations and the Noether theorem [30].
The Noether theorem produces three independent equations.
\begin{equation}
\alpha\partial_{\mu}J^{\mu} + \partial_{\nu}\alpha\left\{\partial_{\mu}K^{\mu \nu} + J^{\nu}\right\} + \partial_{\mu}\partial_{\nu}\alpha K^{\mu \nu}=0
\end{equation}
The three pieces of eq. (3.1) are called the electric charge conservation, the symmetry equation, and the constraint. While at QED the first two are equal, a new aspect appears through the EM quadruplet: the electric charge equation is identified with the symmetry equation. And so, as inertial mass is characterized by Newton's second law, electric charge defines its own equation in terms of EM fields. It gives,
For the antisymmetric sector:
\begin{equation}
\vec{\nabla}\cdot\left[4k_1(a_1 + \beta_1)\vec{E}_A + 4k_2 \left(a_2 +\beta_2\right)\vec{E}_U + 2k_+a_3 \vec{E}_- + 2k_-a_3\vec{E}_+ + 2\left(a_1k_1 + a_2k_2\right)\vec{\mathbf{e}}_{+-}\right] = \rho^{T}_q
\end{equation}
with
\begin{equation}
\rho^{T}_q \equiv -q\left\{4a_3\left(\vec{V}^+\cdot\vec{E}_-\right) + b_3\, \vec{V}^+\cdot\left[\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U} \right]\right\}
\end{equation}
and
\begin{multline}
\vec{\nabla}\times \left[4k_1\left(a_1 + \beta_1\right)\vec{B}_A + 4k_2 \left(a_2 + \beta_2\right)\vec{B}_U + 2k_+a_3 \vec{B}_- + 2k_-a_3 \vec{B}_+ + 2\left(a_1k_1 + a_2k_2\right)\vec{\mathbf{b}}_{+-}\right]+\\
-\frac{\partial}{\partial t }\left[4k_1\left(a_1 + \beta_1\right)\vec{E}_A + 4k_2 \left(a_2 + \beta_2\right)\vec{E}_U + 2k_+a_3 \vec{E}_- + 2k_-a_3 \vec{E}_+\right] = -\vec{j}^{T}_q
\end{multline}
with
\begin{equation}
\vec{j}^{T}_{q} \equiv -q\left\{4a_3\, Im\left\{\vec{E}_- \phi^{+} + \vec{B}_- \times \vec{V}^+\right\} + 4b_3\, Im\left\{\vec{\mathbf{e}}_{-A} \phi^+ + \vec{\mathbf{b}}_{-A} \times \vec{V}^+\right\}\right\}
\end{equation}
The above equations express the electric charge fundamental law, showing through the so-called gauge parameter homothety a more factual electric charge behaviour in terms of electric and magnetic fields. Diversely from Maxwell, it extends the meaning of $q$. It rewrites the continuity equation, $\frac{\partial \rho^{T}_{q}}{\partial t} + \vec{\nabla} \cdot \vec{j}^{T}_q = 0$, where the charge density and current, $\rho^{T}_{q}$ and $\vec{j}^{T}_{q}$, are no longer expressed in terms of electric charge but in terms of an EM fields flux. In addition, eqs. (3.2-3.5) register the presence of potential fields connected with granular and collective EM fields through the coupling constant $q$. Also, the chargeless fields $A_{\mu}$ and $U_{\mu}$ participate in this electric charge dynamics.
For the longitudinal sector:
\begin{multline}
\frac{\partial}{\partial t}\{4(11k_1\rho_1 + \frac{1}{2}k_2 \rho_2\beta_1 + 11\rho_1\rho_2 + \frac{1}{4}k_2 \xi_{(12)})S^{\alpha}_{\alpha A} +4(11k_2\rho_2 + \frac{1}{2}k_1 \rho_1\beta_2 + \\
+11\rho_1\rho_2 + \frac{1}{4}k_1 \xi_{(12)})S^{\alpha}_{\alpha U}\} = \rho^{L}_q
\end{multline}
with
\begin{eqnarray}
&&\rho^{L}_{q} \equiv -q\{4Im\{\beta_+\beta_-\vec{V}^+\cdot\vec{S}_- +\left(16\rho_+\rho_- + \rho_+\beta_- +\rho_-\beta_+\right)\phi^+S^{\alpha}_{\alpha -}\} + \nonumber
\\
&&Im\{\beta_+\left(\vec{s}_{-A} + \vec{s}_{-U}\right)\cdot \vec{V}^+ +\left(\beta_+ + 17\rho_+\right)\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)\phi^+\}\}
\end{eqnarray}
and
\begin{multline}
\partial_i \{4(11k_1\rho_1 + \frac{1}{2}k_2 \rho_2\beta_1 + 11\rho_1\rho_2 + \frac{1}{4}k_2 \xi_{(12)})S^{\alpha}_{\alpha A} +4(11k_2\rho_2 + \frac{1}{2}k_1 \rho_1\beta_2 + \\
+11\rho_1\rho_2 + \frac{1}{4}k_1 \xi_{(12)})S^{\alpha}_{\alpha U}\} = -j^{L}_{iq}
\end{multline}
with
\begin{eqnarray}
&&j^{L}_{iq} \equiv -q \{4Im\{\beta_+\beta_-V_j^+ S_i^{j-} + \left(16\rho_+\rho_- + \rho_+ \beta_- + \rho_- \beta_+\right)V_i^{+}S^{\alpha}_{\alpha -} \} +\nonumber
\\
&&+4Im\{ \beta_+ \left(s^{j}_{i -A} + s^{j}_{i -U}\right)V^{+}_{j}+(17\rho_+ + \beta_+)(s^{\alpha}_{\alpha -A} \nonumber
\\
&&+ s^{\alpha}_{\alpha -U})V_{i}^{+}\}\}
\end{eqnarray}
introducing another continuity equation, $\frac{\partial \vec{j}^{L}_{q}}{\partial t} + \vec{\nabla} \rho^{L}_{q} =0$.
A new perspective on the electric charge physicality is discovered. The above expressions show a fields flow conducting $\{+,0,-\}$ charges. Different electric charge conservation laws at the transverse and longitudinal sectors are detected. Although under the same $q$, they differ dynamically, showing that, more than a fine structure constant, electric charge is related to a fields flux.
EM is powered by the electric charge. However, Maxwell and QED do not say what electric charge is, no more than classifying charges as positive and negative. On the other hand, physics understood from Heisenberg isospin and Gell-Mann-Nishijima strangeness that quantum numbers are deeper than electric charge [31]. Quantum numbers are associated not only with conservation laws but also with group symmetry generators. Then, given $U_{q}(1)$, two quantum numbers are expected, associated with U(1) and SO(2) respectively, corresponding to two associated conservation laws. As a justification, two electric charge continuity equations appear, associated with spin-1 and spin-0.
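The role of the charge on the charged pair can be illustrated symbolically. The sketch below is a hypothetical illustration (it assumes a global phase rotation $V^{\pm}\rightarrow e^{\pm iq\alpha}V^{\pm}$ on single components, not the full $U_{q}(1)$ action): it checks with sympy that bilinears of the type $V^{+}V^{-}$, as entering $\vec{\mathbf{e}}_{+-}$ and $s_{+-}$, are insensitive to the rotation, while $V^{+}V^{+}$ picks up a phase.

```python
import sympy as sp

q, alpha = sp.symbols('q alpha', real=True)
Vp, Vm = sp.symbols('V_p V_m')   # stand-ins for components of V^+_mu, V^-_mu

# Hypothetical global phase rotation with angle alpha and charge q
Vp_rot = sp.exp(sp.I*q*alpha) * Vp
Vm_rot = sp.exp(-sp.I*q*alpha) * Vm

# V^+ V^- bilinears (as in e_{+-} and s_{+-}) are invariant,
# while V^+ V^+ acquires the phase exp(2 i q alpha)
assert sp.simplify(Vp_rot*Vm_rot - Vp*Vm) == 0
assert sp.simplify(Vp_rot*Vp_rot - sp.exp(2*sp.I*q*alpha)*Vp*Vp) == 0
```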
\section{Four-Four Maxwell equations}
\indent Electromagnetism is being enlarged to a physics considering four bosons interconnected by the electric charge symmetry. However, the corresponding equations are not only derived from Euler-Lagrange equations and granular Bianchi identities. Differently from usual electrodynamics [32], a constitutive physics is manifested [23]. It includes algebraic identities, Noether identities, and collective Bianchi identities. Together they make a system identified as the Four-Four Maxwell equations.
A constitutive EM is developed beyond the minimal action principle. A systemic behaviour is conducted from the quadruplet $\{A_{\mu}, U_{\mu}, V_{\mu}^{\pm}\}$ electric charge symmetry, and whole equations appear. They rewrite the Gauss, Amp\`{e}re, and Faraday laws with modifications.
\\
\\
For $A^{T}_{\mu}$ (spin-1):
\\
\\
The corresponding whole Gauss law is
\begin{equation}\label{Whole Gauss equation}
\vec{\nabla} \cdot \left\{4\left(a_1 + \beta_1 + a_1k_1\right)\vec{E}_A + 2 b_1 \left[\vec{\mathbf{e}}_{AU} + \left( 1 + k_1\right)\vec{\mathbf{e}}_{+-}\right]\right\} + l^{T}_{A} + c^{T}_{A} = M^{T}_{IA} + \rho^{T}_{AW} - \rho^{T}_{ q} - k_2 \rho^{T}_{U}
\end{equation}
with
\begin{equation}
l^{T}_{A} \equiv -4 \mathbf{e}_{[12]}\vec{U} \cdot \vec{\mathbf{e}}_{AU},
\end{equation}
\begin{multline}
c^{T}_{A} \equiv -4 \mathbf{e}_{[12]} \vec{U} \cdot \vec{\mathbf{e}}_{+-} - \vec{V}^{-} \cdot \left[\sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{+A} + \sqrt{2}\left(\mathbf{e}_{[13]} - i \mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{+U}\right]+\\
- \vec{V}^{+} \cdot \left[\sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{-A} + \sqrt{2}\left(\mathbf{e}_{[13]} - i \mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{-U}\right],
\end{multline}
and
\begin{equation}
\rho^{T}_{AW} \equiv -2 \mathbf{e}_{[12]}\left(\vec{E}_A + \vec{E}_U\right)\cdot \vec{U} - \sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\vec{E}_- \cdot \vec{V}^+ - \sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\vec{E}_+\cdot\vec{V}^-,
\end{equation}
\begin{equation}
M^{T}_{IA} \equiv 2k_2\mathbf{m}^2_{U}\phi_U + k_{-}\mathbf{m}^2_{V}\phi^+ + k_+\mathbf{m}^2_{V}\phi^-.
\end{equation}
The LHS of eq. (4.1) expresses the EM field strengths dynamics plus massive field terms, where $l^{T}_{A}$ means the London term associated with the $U_{\mu}$ field and $c^{T}_{A}$ a conglomerate term grouping different fields. On the RHS, $\rho^{T}_{AW}$ corresponds to field densities, and $M^{T}_{IA}$ is the mass source associated with the fields. Notice that $\mathbf{m}^2_{U}$, $\mathbf{m}^2_{V}$ are just mass parameters introduced without requiring the Higgs mechanism. The electric charge density $\rho_{q}^{T}$ is expressed at eq. (3.3); $\rho^{T}_{U}$ is in Appendix A.
The whole Amp$\grave{e}$re law is
\begin{multline}\label{Whole Ampere-Maxwell for A}
\vec{\nabla} \times \left[4\left(a_1 + \beta_1 + a_1 k_1\right)\vec{B}_A + 2b_1 \left(\vec{b}_{AU} + \left(1+k_1\right)\vec{b}_{+-}\right)\right]-\frac{\partial}{\partial t}[4(a_1 + \beta_1 + a_1 k_1)\vec{E}_A + \\
+2 b_1 (\vec{\mathbf{e}}_{AU} + (1 + k_1) \vec{\mathbf{e}}_{+-}) ]+ \vec{l}^{T}_{A} + \vec{c}^{T}_{A} = \vec{M}^{T}_{IA} + \vec{j}^{T}_{AW} - \vec{j}^{T}_{q} - k_2 \vec{j}^{T}_{U}
\end{multline}
with
\begin{equation}
\vec{l}^{T}_{A} \equiv -4 \mathbf{e}_{[12]}\left\{\vec{\mathbf{e}}_{AU} \phi_U + \vec{b}_{AU} \times \vec{U} \right\},
\end{equation}
and
\begin{eqnarray}
&&\vec{c}^{T}_{A} \equiv - \sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\left[\left(\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U}\right) \phi^- + \left(\vec{b}_{+A} + \vec{b}_{+U}\right)\times \vec{V}^- \right] + \nonumber
\\
&& - \sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\left[\left(\vec{\mathbf{e}}_{-A}+ \vec{\mathbf{e}}_{-U}\right) \phi^+ + \left(\vec{b}_{-A} + \vec{b}_{-U}\right)\times \vec{V}^+\right]+
\\
&&-4 \mathbf{e}_{[12]}\left\{\vec{\mathbf{e}}_{+-} \phi_U + \vec{b}_{+-} \times \vec{U}\right\},\nonumber
\end{eqnarray}
\begin{eqnarray}
&&\vec{j}^{T}_{AW} \equiv -2\mathbf{e}_{[12]}\left[\left(\vec{E}_{A} + \vec{E}_{U}\right) \phi_U + \left(\vec{B}_A + \vec{B}_{U}\right)\times \vec{U}\right]\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\left[\vec{E}_{+} \phi^- + \left(\vec{B}_{+} \times \vec{V}^{-}\right)\right]+
\\
&&-\sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\left[\vec{E}_{-} \phi^+ + \vec{B}_{-}\times \vec{V}^{+}\right]\nonumber,
\end{eqnarray}
\begin{equation}
\vec{M}^{T}_{IA} \equiv 2k_2\mathbf{m}^2_{U} \vec{U} + k_{-}\mathbf{m}^2_{V}\vec{V}^{+} + k_+\mathbf{m}^2_{V}\vec{V}^{-}.
\end{equation}
The electric charge current $\vec{j}^{T}_{q}$ is written at eq. (3.5).
It yields the following constitutive conservation law.
\begin{equation}\label{Lei de conservação W}
\frac{\partial\rho^{T}_{CA}}{\partial t} + \nabla \cdot \vec{j}^{T}_{CA} =0
\end{equation}
where
\begin{equation}
\rho^{T}_{CA} = -l^{T}_{A} - c^{T}_{A} + M^{T}_{IA} + \rho^{T}_{AW} - k_2 \rho^{T}_{U}
\end{equation}
and
\begin{equation}
\vec{j}^{T}_{CA} = -\vec{l}^{T}_{A} - \vec{c}^{T}_{A} + \vec{M}^{T}_{IA} + \vec{j}^{T}_{AW} - k_2 \vec{j}^{T}_{U}
\end{equation}
A mass continuity equation is rewritten from eq. (4.11). It gives,
\begin{equation}
\frac{\partial M^{T}_{IA}}{\partial t} + \nabla\cdot\vec{M}^{T}_{IA} = l^{T}_{A} + c^{T}_{A} - \rho^{T}_{AW} + k_{2}\rho^{T}_{U}
\end{equation}
A photon constitutive fundamental equation is derived. As radiation, without charge and mass, the photon behaviour is extended to a field working as its own source. It incorporates photon dynamics with nonlinear granular and collective field strengths. New features appear: a photon generates its own EM field, London and conglomerate fields as mass terms, fields currents, and massive sources; also, physical entities such as electric charge and mass acquire a physicality given by continuity equations depending on fields.
Eqs. (4.1-4.14) introduce a shift to the Gauss law and the Maxwell-Amp\`{e}re law. Usually, three reasons are offered to modify the Maxwell-Amp\`{e}re law: introducing mass according to de Broglie-Proca, Lorentz symmetry violation, and effective nonlinear EM [33]. An analysis is being studied under the MMS satellite data [34]. Differently, the above equation modifications are due to a fundamental EM.
For $U^{T}_{\mu}$: the massive photon constitutive Gauss law is
\begin{eqnarray}\label{Whole Gauss equation for U}
&&\vec{\nabla} \cdot \left\{4\left(a_2 + \beta_2 + a_2 k_2\right)\vec{E}_U + 2b_2\left[\vec{\mathbf{e}}_{AU} + \left(1 + k_1\right)\vec{\mathbf{e}}_{+-}\right]\right\}\nonumber
\\
&&-2\mathbf{m}^2_{U}\phi_U + l_{UT} + c_{UT} = M^{U}_{IT} + \rho^{W}_{UT} - k_1\rho_{AT} - \rho_{NT}
\end{eqnarray}
with
\begin{equation}
l^{T}_{U } \equiv -4 \mathbf{e}_{[12]} \vec{A}\cdot \vec{\mathbf{e}}_{AU},
\end{equation}
\begin{multline}
c^{T}_{U} \equiv -4\mathbf{e}_{[12]}\vec{A}\cdot \vec{\mathbf{e}}_{+-}-\vec{V}^{-}\cdot\left[\sqrt{2}\left(\mathbf{e}_{[23]} - i\mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{+A} + \sqrt{2}\left(\mathbf{e}_{[23]}-i\mathbf{e}_{[24]}\right)\vec{\mathbf{e}}_{+U}\right]+\\
-\vec{V}^{+}\cdot\left[\sqrt{2}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[14]}\right)\vec{\mathbf{e}}_{-A} + \sqrt{2}\left(\mathbf{e}_{[23]}-i\mathbf{e}_{[24]}\right)\vec{\mathbf{e}}_{-U}\right],
\end{multline}
and
\begin{multline}
\rho^{T}_{U W} \equiv +2i \mathbf{e}_{[12]} \left(\vec{E}_{A} + \vec{E}_{U}\right)\cdot \vec{A} - \sqrt{2}\left(\mathbf{e}_{[23]} - i\mathbf{e}_{[24]}\right)\vec{E}_{+} \cdot \vec{V}^{-} - \sqrt{2}\left(\mathbf{e}_{[23]} +i \mathbf{e}_{[24]}\right)\vec{E}_- \cdot \vec{V}^{+}
\end{multline}
\begin{equation}
M^{T}_{IU} \equiv k_- \mathbf{m}^{2}_{V} \phi^{+} + k_+\mathbf{m}^{2}_{V} \phi^{-}.
\end{equation}
The corresponding constitutive Amp$\grave{e}$re law is
\begin{eqnarray}\label{Whole Ampere Maxwell for U}
&&\vec{\nabla}\times \left[4\left(a_2 + \beta_2 + a_2 k_2 \right) \vec{B}_{U} + 2b_2\left(\vec{b}_{AU} + \left(1 + k_2\right)\vec{b}_{+-}\right)\right] - \frac{\partial}{\partial t}[4(a_2 + \beta_2 + k_2 a_2)\vec{E}_U +\nonumber
\\
&&+2b_2 (\vec{\mathbf{e}}_{AU} + (1+k_2)\vec{\mathbf{e}}_{+-})] + \vec{l}_{UT} + \vec{c}_{UT} = \vec{M}^{U}_{IT} + \vec{j}_{UT} - \vec{j}^{W}_{NT} - k_1 \vec{j}_{AT}
\end{eqnarray}
with
\begin{equation}
\vec{l}^{T}_{U} \equiv -4\mathbf{e}_{[12]}\left[\mathbf{e}_{AU} \cdot \phi^{-} + \vec{b}_{AU} \times \vec{A}\right],
\end{equation}
\begin{eqnarray}
&&\vec{c}^{T}_{U} \equiv - \sqrt{2}\left(\mathbf{e}_{[23]} - i \mathbf{e}_{[24]}\right)\left[\left(\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U}\right)\cdot \phi^{-} + \left(\vec{b}_{+A} + \vec{b}_{+U}\right)\times \vec{V}^{-}\right]+\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} + i \mathbf{e}_{[24]}\right)\left[\left(\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U}\right)\cdot \phi^{+} + \left(\vec{b}_{-A} + \vec{b}_{-U}\right)\times \vec{V}^{+}\right]+
\\
&&-4 \mathbf{e}_{[12]}\left[\vec{\mathbf{e}}_{+-} \cdot \phi_A + \vec{b}_{+-} \times \vec{A}\right]\nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&\vec{j}^{T}_{U W} \equiv -2 \mathbf{e}_{[12]}\left[\left(\vec{E}_{A} + \vec{E}_{U}\right)\cdot \phi_{A} + \left(\vec{B}_A + \vec{B}_U\right) \times \vec{A}\right] +\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{[23]} - i \mathbf{e}_{[24]}\right)\left[\vec{E}_+ \cdot \phi_{-} + \left(\vec{B}_{+} \times \vec{V}^{-}\right)\right] +
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} + i \mathbf{e}_{[24]}\right)\left[\vec{E}_- \cdot \phi_+ + \left(\vec{B}_{-} \times \vec{V}^{+}\right)\right],\nonumber
\end{eqnarray}
\begin{equation}
\vec{M}^{U}_{IT} \equiv k_{-} \mathbf{m}^{2}_{V} \vec{V}^{+} + k_+ \mathbf{m}^{2}_{V} \vec{V}^{-}.
\end{equation}
For $V_{\mu}^{T-}$:
The corresponding constitutive Gauss and Amp$\grave{e}$re laws for the negatively charged massive photon are
\begin{equation}\label{Whole Gauss equation V+}
\vec{\nabla}\cdot \left[2\left(a_3 + \beta_+\beta_-\right)\vec{E}^{+} + 2b_3\left(\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U}\right)\right]-\mathbf{m}_{V}^{2}\phi^{+} +l^{+}_{VT} + c^{+}_{VT} = \rho^{W+}_{VT}.
\end{equation}
with
\begin{equation}
l^{T+}_{V} = 0,
\end{equation}
\begin{eqnarray}
&&c^{T+}_{V} \equiv -\sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\left[\vec{A}\cdot \left(\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U}\right)\right]
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} - i\mathbf{e}_{[24]}\right)\left[\vec{U}\cdot \left(\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U}\right)\right]+
\\
&&-4i\mathbf{e}_{[34]}\left[\vec{V}^{+}\cdot \left(\vec{\mathbf{e}}_{+-} + \vec{\mathbf{e}}_{AU}\right)\right],
\end{eqnarray}
\begin{equation}
\rho^{T+}_{V W} \equiv 2i\mathbf{e}_{[34]}\left(\vec{E}_{A} + \vec{E}_{U}\right)\cdot \vec{V}^{+} - \sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\vec{E}_{A} \cdot \vec{V}^{+} - \sqrt{2}\left(\mathbf{e}_{[23]} -i\mathbf{e}_{[24]}\right)\vec{E}_{+} \cdot \vec{U},
\end{equation}
and
\begin{multline}\label{Whole Ampere-Maxwell V+}
\vec{\nabla} \times \left[2\left(a_3 + \beta_+\beta_-\right)\vec{B}^+ + 2b_3\left(\vec{b}_{+A} + \vec{b}_{+U}\right)\right] - \frac{\partial}{\partial t}[2(a_3 + \beta_+\beta_-)\vec{E}_+ +2b_3 (\vec{\mathbf{e}}_{+A} + \vec{\mathbf{e}}_{+U})] +\\
\vec{l}^{+}_{TV} + \vec{c}^{+}_{TV} - \mathbf{m}^{2}_{V} \vec{V}^{+} = \vec{j}^{W+}_{VT}
\end{multline}
with
\begin{equation}
\vec{l}^{T +}_{V} = 0
\end{equation}
\begin{multline}
\vec{c}^{T+}_{V} \equiv -\sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\left[\vec{\mathbf{e}}_{+A}\cdot \phi_{A} + \vec{b}_{+A}\times \vec{A} + \vec{\mathbf{e}}_{+U} \cdot \phi_{A} + \vec{b}_{+U} \times \vec{A}\right]+\\
-\sqrt{2}\left(\mathbf{e}_{[23]} - i\mathbf{e}_{[24]}\right)\left[\vec{\mathbf{e}}_{+A}\cdot \phi_U + \vec{b}_{+A} \times \vec{U} + \vec{\mathbf{e}}_{+U} \cdot \phi_U + \vec{b}_{+U} \times \vec{U}\right] + \\
-4i\mathbf{e}_{[34]}\left[\vec{\mathbf{e}}_{+-} \cdot \phi^+ + \vec{b}_{+-} \times \vec{V}^{+} + \vec{\mathbf{e}}_{AU} \cdot \phi^+ + \vec{b}_{AU} \times \vec{V}^+ \right]
\end{multline}
\begin{eqnarray}
&&\vec{j}^{T+}_{VW} \equiv -2i\mathbf{e}_{[34]}\left[\left(\vec{E}_{A} + \vec{E}_{U}\right) \cdot \phi^+ + \left(\vec{B}_{A} + \vec{B}_{U}\right)\times \vec{V}^{+}\right]\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[13]} - i\mathbf{e}_{[14]}\right)\left[\vec{E}_+ \cdot \phi_{A} + \vec{B}_+ \times \vec{A}\right]
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} - i\mathbf{e}_{[24]}\right)\left[\vec{E}_+\cdot \phi_{U} + \vec{B}_+ \times \vec{U}\right]\nonumber
\end{eqnarray}
For $V^{T+}_{\mu }$: similarly, one gets
\begin{equation}\label{Whole Gauss equation V-}
\vec{\nabla}\cdot \left[2\left(a_3 + \beta_+\beta_-\right)\vec{E}^{-} + 2b_3\left(\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U}\right)\right]-\mathbf{m}_{V}^{2}\phi^{-} +l^{-}_{VT} + c^{-}_{VT} = \rho^{W-}_{VT}
\end{equation}
with
\begin{equation}
l^{T-}_{V} = 0
\end{equation}
\begin{eqnarray}
&&c^{T-}_{V} \equiv -\sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\left[\vec{A}\cdot \left(\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U}\right)\right]\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[24]}\right)\left[\vec{U}\cdot \left(\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U}\right)\right]
\\
&&-4i\mathbf{e}_{[34]}\left[\vec{V}^{-}\cdot \left(\vec{\mathbf{e}}_{+-} + \vec{\mathbf{e}}_{AU}\right)\right]\nonumber
\end{eqnarray}
\begin{equation}
\rho^{T-}_{V W} \equiv 2i\mathbf{e}_{[34]}\left(\vec{E}_{A} + \vec{E}_{U}\right)\cdot \vec{V}^{-} - \sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\vec{E}_{-} \cdot \vec{A} - \sqrt{2}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[24]}\right)\vec{E}_- \cdot \vec{U}
\end{equation}
and
\begin{eqnarray}\label{Whole Ampere-Maxwell for V-}
&&\vec{\nabla} \times \left[2\left(a_3 + \beta_+\beta_-\right)\vec{B}^- + 2b_3\left(\vec{b}_{-A} + \vec{b}_{-U}\right)\right] - \frac{\partial}{\partial t}[2(a_3 + \beta_+\beta_-)\vec{E}_- \nonumber
\\
&&+2b_3 (\vec{\mathbf{e}}_{-A} + \vec{\mathbf{e}}_{-U})] +\vec{l}^{-}_{TV} + \vec{c}^{-}_{TV} - \mathbf{m}^{2}_{V} \vec{V}^{-} = \vec{j}^{W-}_{VT}
\end{eqnarray}
with
\begin{eqnarray}
&&\vec{c}^{T-}_{V} \equiv -\sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\left[\vec{\mathbf{e}}_{-A}\cdot \phi_{A} + \vec{b}_{-A}\times \vec{A} + \vec{\mathbf{e}}_{-U} \cdot \phi_{A} + \vec{b}_{-U} \times \vec{A}\right]+\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[24]}\right)\left[\vec{\mathbf{e}}_{-U}\cdot \phi_U + \vec{b}_{-A} \times \vec{U} + \vec{\mathbf{e}}_{-U} \cdot \phi_U + \vec{b}_{-U} \times \vec{U}\right] +
\\
&&-4i\mathbf{e}_{[34]}\left[\vec{\mathbf{e}}_{+-} \cdot \phi^- + \vec{b}_{+-} \times \vec{V}^{-} + \vec{\mathbf{e}}_{AU} \cdot \phi^- + \vec{b}_{AU} \times \vec{V}^-\nonumber \right]
\end{eqnarray}
\begin{eqnarray}
&&\vec{j}^{T-}_{VW} \equiv -2i\mathbf{e}_{[34]}\left[\left(\vec{E}_{A} + \vec{E}_{U}\right) \cdot \phi^- + \left(\vec{B}_{A} + \vec{B}_{U}\right)\times \vec{V}^{-}\right]\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\left[\vec{E}_- \cdot \phi_{A} + \vec{B}_- \times \vec{A}\right]
\\
&&-\sqrt{2}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[24]}\right)\left[\vec{E}_-\cdot \phi_{U} + \vec{B}_- \times \vec{U}\right]\nonumber
\end{eqnarray}
Eqs. (4.14-4.17) follow the same conservation law as eq. (4.11).
Lorentz symmetry relates to the presence of the physical spin-0 sector. It also yields two kinds of equations and corresponding conservation laws.
For $A^{L}_{\mu}$ (spin-0):
The time-dependent equation is
\begin{eqnarray}
&&\partial^{0}\left[s_{11}S^{\alpha}_{\alpha A} + c_{11}\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU}+ s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-}\right)\right] + l^{0}_{AL} + c^{0}_{AL} = j^{W 0}_{AL} - J^{0}_{qL} \nonumber
\\
&&+ t_{11}\mathbf{m}^{2}_{U}U^{0}_{L} + \frac{1}{2}t_{11}J^{0}_{UL}
\end{eqnarray}
with
\begin{multline}
l^{0}_{AL} = -\mathbf{e}_{(11)}\left(84 A^0 s^{\alpha}_{\alpha AA} + 4A^{0}s^{\alpha}_{\alpha AA} + 4s^{0i}_{+-}A_{i}\right) + 4\mathbf{e}_{(12)}\left(s^{00}_{AA}A_0 + s^{0i}_{AA}A_{i}\right)\\
-\mathbf{e}_{(12)}\left(68s^{\alpha}_{\alpha AA} U_{0} + 80 s^{\alpha}_{\alpha UU}U_0 + s^{\alpha}_{\alpha AU}U_0 \right),
\end{multline}
\begin{eqnarray}
&&c^{0}_{AL} = -\mathbf{e}_{(11)}\left(4s^{00}_{+-}A_{0} + 4 s^{0i}_{+-}A_{i} + 68s^{\alpha}_{\alpha +-} A_{0}\right)-72\mathbf{e}_{(12)}s^{\alpha}_{\alpha +-}U_0
\\
&&-\sqrt{2}\left(\mathbf{e}_{(13)}-i\mathbf{e}_{(14)}\right)\left(s^{00}_{+A}V^{-}_{0} + s^{0i}_{+A}V_{i}^{-} + 18s^{\alpha}_{\alpha+A}V_{0}^{-} + s^{0i}_{+U}V_{i}^{-} + 18s^{\alpha}_{\alpha +U} V_{0}^{-}\right)
\\
&&-\sqrt{2}\left(\mathbf{e}_{(13)}+i\mathbf{e}_{(14)}\right)\left(s^{00}_{-A}V^{+}_{0} + s^{0i}_{-A}V_{i}^{+} + 18s^{\alpha}_{\alpha-A}V_{0}^{+} + s^{0i}_{-U}V_{i}^{+} + 18s^{\alpha}_{\alpha -U} V_{0}^{+}\right)
\end{eqnarray}
and
\begin{eqnarray}
&&j^{0L}_{A} = 2 \left(\mathbf{e}_{(11)} + \mathbf{e}_{(12)}\right) \big\{ \left(\beta_1\vec{S}_A + \beta_2 \vec{S}_U \right) \cdot \left(\vec{A} + \vec{U}\right) \nonumber
\\
&&+ \left[\left( \beta_1 + 17\rho_1 \right)S^{\alpha}_{\alpha A} + \left(\beta_2 +17\rho_2\right) S^{\alpha}_{\alpha U} \right] \left(\phi_A + \phi_U\right) \big\}\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)\left[\beta_+ \vec{S}^+\vec{V}^- + \left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}\phi^-\right] \nonumber
\\
&&- \beta_1\mathbf{e}_{(11)}\left(S^{\alpha}_{\alpha A} \phi_A + 2 \vec{S}_A \cdot \vec{A}\right)+
\\
&&-\beta_1\mathbf{e}_{(12)}\left(S^{\alpha}_{\alpha A}\phi_U + S^{\alpha}_{\alpha U}\phi_A + \vec{S}_A \cdot \vec{U} + \vec{S}_U \cdot \vec{A}\right) \nonumber
\\
&&- \beta_1\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right) \left(S^{\alpha}_{\alpha +}\phi^- + S^{\alpha}_{\alpha -}\phi^+ + \vec{S}_+\vec{V}^- + \vec{S}_-\vec{V}^+\right)\nonumber
\end{eqnarray}
The corresponding vectorial equation is
\begin{eqnarray}
&&\partial^{i}\left[s_{11}S^{\alpha}_{\alpha A} + c_{11}\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha+-}\right)\right]+l^{i}_{AL} +\nonumber
\\
&&+ c^{i}_{AL} = - j^{i}_{AL} +J^{i}_{qL} - t_{11}\mathbf{m}_{U}^2U^{i}-\frac{1}{2}t_{11}j^{i}_{UL}
\end{eqnarray}
with
\begin{multline}
l^{i}_{AL} = -\mathbf{e}_{(11)}\left[4s^{i0}_{+-}A_0 + 4s^{ij}_{+-}A_{j} + 84s^{\alpha}_{\alpha UU}A^{i}\right]-\mathbf{e}_{(12)}\left(s^{0i}_{AA}A_{0} + s^{ij}_{AA}A_{j}\right)\\
-\mathbf{e}_{(12)}\left(68s^{\alpha}_{\alpha AA}U^{i} + 8s^{0i}_{UU}U_{0}+8s^{ij}_{UU}U_{j}+76s^{\alpha}_{\alpha AU}U^{i}\right),
\end{multline}
\begin{multline}
c^{i}_{AL} = -\mathbf{e}_{(11)}\left(s_{+-}^{i0}A_{0} + s^{ij}_{+-}A_{j} + 68s^{\alpha}_{\alpha +-}A^{i}\right) - 72\mathbf{e}_{(12)}s^{\alpha}_{\alpha +-}U^{i}\\
-\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)\left[s^{i0}_{+A}V_{0}^{-}+s^{ij}_{+A}V_{j}^{-} + 18s^{\alpha}_{\alpha +A}V^{i-}+s^{i0}_{+U}V^{-}_{0} +s^{ij}_{+U}V^{-}_{j} + 18s^{\alpha}_{\alpha +U}V^{i-}\right]+\\
-\sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left[s^{i0}_{-A}V_{0}^{+}+s^{ij}_{-A}V_{j}^{+} + 18s^{\alpha}_{\alpha -A}V^{i+}+s^{i0}_{-U}V^{+}_{0} +s^{ij}_{-U}V^{+}_{j} + 18s^{\alpha}_{\alpha -U}V^{i+}\right]
\end{multline}
and
\begin{multline}
j^{i}_{AL} = 2\beta_1\left\{S^{i0}_A\left(\mathbf{e}_{(11)}A_0 + \mathbf{e}_{(12)}U_0\right) + S^{ij}_A\left( \mathbf{e}_{(11)}A_j + \mathbf{e}_{(12)}U_j \right) + S^{\alpha}_{\alpha A}\left( \mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i \right)\right\}\\
+2\beta_2\left\{S^{i0}_U\left(\mathbf{e}_{(11)}A_0 + \mathbf{e}_{(12)}U_0\right) + S^{ij}_U\left( \mathbf{e}_{(11)}A_j + \mathbf{e}_{(12)}U_j \right) + S^{\alpha}_{\alpha U}\left( \mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i \right)\right\}\\
+34\rho_1S^{\alpha}_{\alpha A}\left(\mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i\right) + 34\rho_2S^{\alpha}_{\alpha U}\left(\mathbf{e}_{(11)}A^i + \mathbf{e}_{(12)}U^i\right) + \sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)[\beta_+S^{i0}_+ V_0^- +\\
+\beta_+S^{ij}_+V_j^- + \left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}V^{i-}] + \sqrt{2}\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)[\beta_-S^{i0}_- V_0^+ +\beta_-S^{ij}_-V_j^+ + \left(\beta_- + 17\rho_-\right)S^{\alpha}_{\alpha -}V^{i+}]
\end{multline}
A realistic quadruplet of four scalar photons appears. However, the scalar photon physics contains peculiarities. These scalar field-strength dynamics can be removed by the gauge-fixing term. A pure photonic sector is expressed through the London, conglomerate, and current terms.
The following conservation law is obtained:
\begin{equation}
\partial_{0}\rho_{L}^{A C} + \partial_{i}j^{i L}_{A C}=0
\end{equation}
where
\begin{equation}
\rho^{L}_{A C} = - l^{0}_{AL} - c^{0}_{AL} + j^{W 0}_{AL} + t_{11}\mathbf{m}^{2}_{U}U^{0}_{L} + \frac{1}{2}t_{11}J^{0}_{UL}
\end{equation}
\begin{equation}
j^{i L}_{A C} = -l^{i}_{AL} - c^{i}_{AL} - j^{i}_{AL} - t_{11}\mathbf{m}_{U}^2U^{i}-\frac{1}{2}t_{11}j^{i}_{UL}
\end{equation}
For $U^{L}_{\mu}$:
The spin-0 massive photon is associated with the following temporal and spatial equations,
\begin{eqnarray}
&&\partial_{0}\left[s_{22}S^{\alpha}_{\alpha U} + c_{22}\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-}\right)\right] - 2\mathbf{m}^{2}_{U}U^{0} + l_{UL}^{0}
\\
&&+c^{0}_{UL} = j^{0}_{UL} - k_{1}j^{W 0}_{AL} - J^{0}_{qL}
\end{eqnarray}
with
\begin{multline}
l^{0}_{UL} = -\mathbf{e}_{(12)}\left[s^{00}_{UU}A_0 + 4s^{0i}_{UU}A_{i} + 8s^{00}_{AU}A_{0} + 8s^{0i}_{AU}A_{i} + 72s^{\alpha}_{\alpha AA}A^{0} + 4s^{\alpha}_{\alpha AU}A^{0} + 68s^{\alpha}_{\alpha UU}A^{0}\right] - \\
-\mathbf{e}_{(22)}\left[s^{00}_{UU}U_0 + 4s^{0i}_{UU}U_{i} + 4s^{\alpha}_{\alpha UU}U^{0} + 4s^{00}_{AA}U_{0} + 4s^{0i}_{AA}U_{i} + 4s^{\alpha}_{\alpha AA}U^{0} + 72s^{\alpha}_{\alpha AU}U^{0}\right]+\\
+4\mathbf{e}_{(12)}\left[s^{00}_{AU}U_0 + s^{0i}_{AU}U_{i}\right],
\end{multline}
\begin{eqnarray}
&&c^{0}_{UL} = -72\mathbf{e}_{(12)}s^{\alpha}_{\alpha+-}A^{0}-\mathbf{e}_{(12)}s^{\alpha}_{\alpha +-}U^{0}-\mathbf{e}_{(22)}\left[4s^{00}_{+-}U_0 + 4s^{0i}_{+-}U_{i} + 68s^{\alpha}_{\alpha +-}U^{0} \right]+\nonumber
\\
&&-\sqrt{2}\left(\mathbf{e}_{(23)} -i\mathbf{e}_{(24)}\right)[s^{0i}_{+A}V^{-}_{i}+s^{00}_{+A}V^{-}_{0}+18s^{\alpha}_{\alpha +A}V^{0-} + s^{00}_{+U}V^{-}_{0} + s^{0i}_{+U}V^{-}_{i} \nonumber
\\
&&+ 18s^{\alpha}_{\alpha +U}V^{0-}]-\sqrt{2}\left(\mathbf{e}_{(23)} +i\mathbf{e}_{(24)}\right)[s^{0i}_{-A}V^{+}_{i}+s^{00}_{-A}V^{+}_{0}+18s^{\alpha}_{\alpha -A}V^{0+} + s^{00}_{-U}V^{+}_{0} \nonumber
\\
&&+ s^{0i}_{-U}V^{+}_{i} + 18s^{\alpha}_{\alpha -U}V^{0+}]
\end{eqnarray}
and
\begin{eqnarray}
&&j^{0L}_{U} = 2 \left(\mathbf{e}_{(22)} + \mathbf{e}_{(12)}\right) \big\{ \left(\beta_1\vec{S}_A + \beta_2 \vec{S}_U \right) \cdot \left(\vec{A} + \vec{U}\right) \nonumber
\\
&&+ \left[\left( \beta_1 + 17\rho_1 \right)S^{\alpha}_{\alpha A} + \left(\beta_2 +17\rho_2\right) S^{\alpha}_{\alpha U} \right] \left(\phi_A + \phi_U\right) \big\}\nonumber
\\
&&+\sqrt{2}\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)\left[\beta_+ \vec{S}^+\vec{V}^- + \left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}\phi^-\right]
\\
&&- \beta_1\mathbf{e}_{(22)}\left(S^{\alpha}_{\alpha U} \phi_U + 2 \vec{S}_U \cdot \vec{U}\right)-\beta_2\mathbf{e}_{(12)}\big(S^{\alpha}_{\alpha A}\phi_U +\nonumber
\\
&&S^{\alpha}_{\alpha U}\phi_A + \vec{S}_A \cdot \vec{U} + \vec{S}_U \cdot \vec{A}\big) - \beta_2\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right) \big(S^{\alpha}_{\alpha +}\phi^-\nonumber
\\
&&+ S^{\alpha}_{\alpha -}\phi^+ + \vec{S}_+\vec{V}^- + \vec{S}_-\vec{V}^+\big)\nonumber
\end{eqnarray}
The corresponding longitudinal Amp$\grave{e}$re law for the massive photon is
\begin{eqnarray}
&&\partial^{i}\left[s_{22}S^{\alpha}_{\alpha U} +c_{22}\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} + s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-}\right)\right] -2\mathbf{m}^{2}_{U}U^{i}_{L}+l^{i}_{UL} \nonumber
\\
&&+ c^{i}_{UL} = j^{i}_{UL} -k_{1}j^{i}_{AL}-j^{i}_{qL}
\end{eqnarray}
with
\begin{multline}
l^{i}_{UL} = -\mathbf{e}_{(12)}\left[4s^{0i}_{UU}A_{0}+4s^{ij}_{UU}A_{j} + 8s^{0i}_{AU}A_{0} + 8s^{ij}_{AU}A_{j} + 72s^{\alpha}_{\alpha AA}A^{i} + 4s^{\alpha}_{\alpha AU}A^{i}+68s^{\alpha}_{\alpha UU}A^{i}\right] - \\
\mathbf{e}_{(22)}\left[4s^{0i}_{UU}U_{0} + 4s^{ij}_{UU}U_{j} + 8s^{\alpha}_{\alpha UU}U^{i} + 4s^{0i}_{AA}U_0 + 4s^{ij}_{AA}U_{j} + 4s^{\alpha}_{\alpha AA}U^{i} +72s^{\alpha}_{\alpha AU}U^{i}\right]-\\
+4\mathbf{e}_{(12)}\left[s^{0i}_{AU}U_0 + s^{ij}_{AU}U_j\right],
\end{multline}
\begin{multline}
c^{i}_{UL} = -72\mathbf{e}_{(12)}s^{\alpha}_{\alpha +-}A^{i}-\mathbf{e}_{(22)}\left[4s^{0i}_{+-}U_0 +4s^{ij}_{+-}U_{j}+68s^{\alpha}_{\alpha +-}U^{i}\right]+\\
-\sqrt{2}\left(\mathbf{e}_{(23)}-i\mathbf{e}_{(24)}\right)\left[s^{0i}_{+A}V^{-}_{0}+s^{ij}_{+A}V^{-}_{j}+18s^{\alpha}_{\alpha +A}V^{-i}+s^{i0}_{+U}V_{0}^{-}+s^{ij}_{+U}V^{-}_{j}+18s^{\alpha}_{\alpha +U}V^{-i}\right]\\
-\sqrt{2}\left(\mathbf{e}_{(23)}+i\mathbf{e}_{(24)}\right)\left[s^{0i}_{-A}V^{+}_{0}+s^{ij}_{-A}V^{+}_{j}+18s^{\alpha}_{\alpha -A}V^{+i}+s^{i0}_{-U}V_{0}^{+}+s^{ij}_{-U}V^{+}_{j}+18s^{\alpha}_{\alpha -U}V^{+i}\right],
\end{multline}
and
\begin{multline}
j^{i}_{UL} = 2\beta_2\left\{S^{i0}_U\left(\mathbf{e}_{(22)}U_0 + \mathbf{e}_{(12)}A_0\right) + S^{ij}_U\left( \mathbf{e}_{(22)}U_j + \mathbf{e}_{(12)}A_j \right) + S^{\alpha}_{\alpha U}\left( \mathbf{e}_{(22)}U^i + \mathbf{e}_{(12)}A^i \right)\right\}\\
+2\beta_1\left\{S^{i0}_A\left(\mathbf{e}_{(22)}U_0 + \mathbf{e}_{(12)}A_0\right) + S^{ij}_A\left( \mathbf{e}_{(22)}U_j + \mathbf{e}_{(12)}A_j \right) + S^{\alpha}_{\alpha A}\left( \mathbf{e}_{(22)}U^i + \mathbf{e}_{(12)}A^i \right)\right\}\\
+34\rho_2S^{\alpha}_{\alpha U}\left(\mathbf{e}_{(22)}U^i + \mathbf{e}_{(12)}A^i\right) + 34\rho_1S^{\alpha}_{\alpha A}\left(\mathbf{e}_{(22)}U^i + \mathbf{e}_{(12)}A^i\right) + \sqrt{2}\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)[\beta_+S^{i0}_+ V_0^- +\\ +\beta_+S^{ij}_+V_j^- + \left(\beta_+ + 17\rho_+\right)S^{\alpha}_{\alpha +}V^{i-}] + \sqrt{2}\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)[\beta_-S^{i0}_- V_0^+ +\beta_-S^{ij}_-V_j^+ + \left(\beta_- + 17\rho_-\right)S^{\alpha}_{\alpha -}V^{i+}]
\end{multline}
The last pairs of longitudinal Gauss and Amp$\grave{e}$re laws are associated with the massive charged photons.
For $V^{L-}_{\mu}$: similarly, two longitudinal equations are obtained. They are
\begin{multline}
\partial^{0}\left\{s_{-}S^{\alpha}_{\alpha -} + c_{-}\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)\right\}-\mathbf{m}^{2}_{V}V^{0-}_{L} + l^{0-}_{VL} + c^{0-}_{VL} = j^{0-}_{VL}
\end{multline}
with
\begin{equation}
l^{0-}_{VL} = -34\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU}\right)V^{0-} -16\left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha--}V^{0+}
-32i\mathbf{e}_{(34)}s^{\alpha}_{\alpha--}V^{0+}
\end{equation}
\begin{multline}
c^{0-}_{VL} = -\sqrt{2}\left(\mathbf{e}_{(13)}+i\mathbf{e}_{(14)}\right)\left[s^{00}_{-A}A_{0} + s^{0i}_{-A}A_{i}+17\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)A^{0} +s^{0i}_{-U}A_{i}\right]\\
-\sqrt{2}\left(\mathbf{e}_{(23)}+i\mathbf{e}_{(24)}\right)\left[s^{00}_{-A}U_{0}+s^{0i}_{-A}U_{i} + 17\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)U^{0} + s^{0i}_{-U}U_{i}\right],
\end{multline}
\begin{multline}
j^{0-}_{VL} = \left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left[\left( \beta_1\vec{S}_{A} + \beta_2 \vec{S}_U\right)\cdot \vec{V}^{-} + \left(\beta_1S^{\alpha}_{\alpha A} + \beta_2S^{\alpha}_{\alpha U} + 17\rho_1S^{\alpha}_{\alpha A} + 17\rho_2S^{\alpha}_{\alpha U} \right)\phi^{-}\right]\\
+\sqrt{2}\left(\mathbf{e}_{(13)} + i \mathbf{e}_{(14)}\right)\left[\beta_- \vec{S}_{-}\cdot \vec{A} + \left(\beta_- + 17 \rho_- \right)S^{\alpha}_{\alpha -}\phi_A\right] +\sqrt{2}\left(\mathbf{e}_{(23)} + i \mathbf{e}_{(24)}\right)[\beta_- \vec{S}_{-}\cdot \vec{U} + \\
+\left(\beta_- + 17 \rho_- \right)S^{\alpha}_{\alpha -}\phi_U]
\end{multline}
and
\begin{equation}
\partial^{i}\left\{s_{-}S^{\alpha}_{\alpha -} + c_{-}\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)\right\} -\mathbf{m}^{2}_{V}V^{i-}_{L} + l^{i-}_{VL} + c^{i-}_{VL} = j^{i-}_{VL}
\end{equation}
with
\begin{multline}
l^{i-}_{VL} = -34\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU}\right)V^{i-} -16\left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha --}V^{i+} - 32i\mathbf{e}_{(34)}s^{\alpha}_{\alpha --}V^{i+}
\end{multline}
\begin{multline}
c^{i-}_{VL} = -\sqrt{2}\left(\mathbf{e}_{(13)} +i \mathbf{e}_{(14)}\right)\left[\left(s^{0i}_{-A} + s^{0i}_{-U}\right)A_0 + \left(s^{ij}_{-A} + s^{ij}_{-U}\right)A_{j} + 17\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)A^{i}\right]+\\
-\sqrt{2}\left(\mathbf{e}_{(23)} +i \mathbf{e}_{(24)}\right)\left[\left(s^{0i}_{-A} + s^{0i}_{-U}\right)U_{0} + \left(s^{ij}_{-A} +s^{ij}_{-U}\right)U_{j} + 17\left(s^{\alpha}_{\alpha -A} + s^{\alpha}_{\alpha -U}\right)U^{i} \right]+\\
-2\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)} \right)\left[\left(s^{0i}_{AA}+s^{0i}_{AU} + s^{0i}_{UU} + s^{0i}_{+-}\right)V^{-}_0 + \left(s^{ij}_{AA} + s^{ij}_{AU} + s^{ij}_{UU} + s^{ij}_{+-}\right)V^{-}_{j} + \left(s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-}\right)V^{i-}\right]\\
-\left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha --}V^{i+}-2i\mathbf{e}_{(34)}s^{ij}_{--}V^{+}_{j}
\end{multline}
\begin{multline}
j^{i-}_{VL} = \left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left[\left(\beta_1S_A^{0i} + \beta_2S_U^{0i}\right)V_0^{+} + \left(\beta_1S^{\alpha}_{\alpha}\right)V^{i+} + \left(\rho_1S^{\alpha}_{\alpha A} + \rho_2S^{\alpha}_{\alpha U}\right)V^{i+}\right]+\\
+\sqrt{2}\left(\mathbf{e}_{(13)} - i \mathbf{e}_{(14)}\right)\left[\beta_+S^{0i}_+A_0 + \beta_+S^{ji}_+A_j + \left(\beta_+ + 17 \rho_+\right)S^{\alpha}_{\alpha +}A^{i}\right] +\\
+\sqrt{2}\left(\mathbf{e}_{(23)} - i \mathbf{e}_{(24)}\right)\left[\beta_+S^{0i}_+U_0 + \beta_+S^{ji}_+U_j + \left(\beta_+ + 17 \rho_+\right)S^{\alpha}_{\alpha +}U^{i}\right] +\\
-\beta_-\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left\{\frac{1}{\sqrt{2}}\left[S^{\alpha}_{\alpha +}A^{i} + S^{\alpha}_{\alpha A}V^{i+}\right]+\sqrt{2}\left[S_+^{0i}A_0 + S_+^{ji}A_j + S^{0i}_AV_0^{+} + S^{ji}_A V^{i+}\right]\right\}\\
-\beta_+\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)\left\{\frac{1}{\sqrt{2}}\left[S^{\alpha}_{\alpha +}U^{i} + S^{\alpha}_{\alpha U}V^{i+}\right]+\sqrt{2}\left[S_+^{0i}U_0 + S_+^{ji}U_j + S^{0i}_UV_0^{+} + S^{ji}_U V^{i+}\right]\right\}
\end{multline}
For $V^{L+}_{\mu}$:
\begin{equation}
\partial^{0}\left\{s_{+}S^{\alpha}_{\alpha +} + c_{+}\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)\right\}-\mathbf{m}^{2}_{V}V^{0+}_{L} + l^{0+}_{VL} + c^{0+}_{VL} = j^{0+}_{VL}
\end{equation}
with
\begin{multline}
l^{0+}_{VL} = -34\left(\mathbf{e}_{(33)}+\mathbf{e}_{(44)}\right)\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU} \right)V^{0+} -16\left(\mathbf{e}_{(33)}-\mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha ++}V^{0-}-32i\mathbf{e}_{(34)}s^{\alpha}_{\alpha ++}V^{0-},
\end{multline}
\begin{multline}
c^{0+}_{VL} = -\sqrt{2}\left(\mathbf{e}_{(13)}-i\mathbf{e}_{(14)}\right)\left[\left(s^{00}_{+A} + s^{00}_{+U} \right)A_{0} + \left(s^{0i}_{+A} + s^{0i}_{+U}\right)A_{i} + 17\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)A^{0}\right]\\
-\sqrt{2}\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)\left[\left(s^{00}_{+A} + s^{00}_{+U}\right)U_{0} + \left(s^{0i}_{+A} + s^{0i}_{+U}\right)U_{i} + 17\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)U^{0}\right]\\-2\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left[\left(s^{00}_{AA} + s^{00}_{UU} + s^{00}_{AU} + s^{00}_{+-}\right)V_{0}^{+} + \left(s^{0i}_{AA} + s^{0i}_{UU} + s^{0i}_{AU} + s^{0i}_{+-}\right)V^{+}_{i} + s^{\alpha}_{\alpha +-}V^{0+}\right]\\
-\left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha ++}V^{0-} - 2i\mathbf{e}_{(34)}s^{00}_{++}V^{-}_{0}-2i\mathbf{e}_{(34)}s^{0i}_{++}V^{-}_{i},
\end{multline}
\begin{multline}
j^{0+}_{VL} = \left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left[\left( \beta_1S^{0i}_{A} + \beta_2 S^{0i}_U\right) V_{i}^{+} + \left(\beta_1S^{\alpha}_{\alpha A} + \beta_2S^{\alpha}_{\alpha U} + 17\rho_1S^{\alpha}_{\alpha A} + 17\rho_2S^{\alpha}_{\alpha U} \right)V^{0+}\right]\\
+\sqrt{2}\left(\mathbf{e}_{(13)} - i \mathbf{e}_{(14)}\right)\left[\beta_+ S^{0i}_{+}A_{i} + \left(\beta_+ + 17 \rho_+ \right)S^{\alpha}_{\alpha +}A_{0}\right] +\sqrt{2}\left(\mathbf{e}_{(23)} - i \mathbf{e}_{(24)}\right)[\beta_+ S^{0i}_{+}U_{i} + \\
+\left(\beta_+ + 17 \rho_+ \right)S^{\alpha}_{\alpha +}U_{0}].
\end{multline}
and
\begin{equation}
\partial^{i}\left[s_{+}S^{\alpha}_{\alpha +} + c_{+}\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U}\right)\right] -\mathbf{m}^{2}_{V}V^{i+}_{L} + l^{i+}_{VL} + c^{i+}_{VL} = j^{i+}_{VL}
\end{equation}
with
\begin{multline}
l^{i+}_{VL} = -34\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left(s^{\alpha}_{\alpha AA} + s^{\alpha}_{\alpha UU}\right)V^{i+} - 16\left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha ++}V^{i-}-32i\mathbf{e}_{(34)}s^{\alpha}_{\alpha ++}V^{i-},
\end{multline}
\begin{multline}
c^{i+}_{VL} = -\sqrt{2}\left(\mathbf{e}_{(13)} - i\mathbf{e}_{(14)}\right)\left[\left(s^{0i}_{+A} + s^{0i}_{+U}\right)A_{0} + \left(s^{ij}_{+A} + s^{ij}_{+U}\right)A_{j} + 17\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U} \right)A^{i}\right]\\
-\sqrt{2}\left(\mathbf{e}_{(23)} - i\mathbf{e}_{(24)}\right)\left[\left(s^{0i}_{+A} + s^{0i}_{+U}\right)U_{0} + \left(s^{ij}_{+A} + s^{ij}_{+U}\right)U_{j} + 17\left(s^{\alpha}_{\alpha +A} + s^{\alpha}_{\alpha +U} \right)U^{i}\right]\\
-2\left(\mathbf{e}_{(33)} +\mathbf{e}_{(44)}\right)\left[\left(s^{0i}_{AA} + s^{0i}_{UU} + s^{0i}_{AU} + s^{0i}_{+-}\right)V^{+}_{0} + \left(s^{ij}_{AA} + s^{ij}_{UU} + s^{ij}_{AU} + s^{ij}_{+-}\right)V^{+}_{j} + \left(s^{\alpha}_{\alpha AU} + s^{\alpha}_{\alpha +-}\right)V^{i+}\right]\\
-\left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)s^{\alpha}_{\alpha ++}V^{i-} -2i\mathbf{e}_{(34)}s^{0i}_{++}V^{-}_{0}-2i\mathbf{e}_{(34)}s^{ij}_{++}V_{j}^{-},
\end{multline}
\begin{multline}
j^{i+}_{VL}=\left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)\left[\left(\beta_1S_A^{0i} + \beta_2S_U^{0i}\right)V_0^{+} + \left(\beta_1S^{\alpha}_{\alpha}\right)V^{i+} + \left(\rho_1S^{\alpha}_{\alpha A} + \rho_2S^{\alpha}_{\alpha U}\right)V^{i+}\right]+\\
+\sqrt{2}\left(\mathbf{e}_{(13)} - i \mathbf{e}_{(14)}\right)\left[\beta_+S^{0i}_+A_0 + \beta_+S^{ji}_+A_j + \left(\beta_+ + 17 \rho_+\right)S^{\alpha}_{\alpha +}A^{i}\right] +\\
+\sqrt{2}\left(\mathbf{e}_{(23)} - i \mathbf{e}_{(24)}\right)\left[\beta_+S^{0i}_+U_0 + \beta_+S^{ji}_+U_j + \left(\beta_+ + 17 \rho_+\right)S^{\alpha}_{\alpha +}U^{i}\right]\\
-\beta_-\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)\left\{\frac{1}{\sqrt{2}}\left[S^{\alpha}_{\alpha +}A^{i} + S^{\alpha}_{\alpha A}V^{i+}\right]+\sqrt{2}\left[S_+^{0i}A_0 + S_+^{ji}A_j + S^{0i}_AV_0^{+} + S^{ji}_A V^{i+}\right]\right\}\\
-\beta_+\left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)\left\{\frac{1}{\sqrt{2}}\left[S^{\alpha}_{\alpha +}U^{i} + S^{\alpha}_{\alpha U}V^{i+}\right]+\sqrt{2}\left[S_+^{0i}U_0 + S_+^{ji}U_j + S^{0i}_UV_0^{+} + S^{ji}_U V^{i+}\right]\right\}.
\end{multline}
Eqs. (4.53-4.69) follow the same conservation law as eq. (4.46).
A four-four electromagnetism is generated at the fundamental level. The above equations respect the original EM postulates and expand EM behaviour, introducing, as EM completeness, four interconnected photons. A new EM dynamics is derived: it contains not only nonlinear granular but also collective EM fields, together with the presence of potential fields. Adimensional coupling constants $g_{I}$ between EM fields and vector potential fields are introduced, not depending on the electric charge. Mass is incorporated without the Higgs mechanism. Two physics appear, with spin-1 and spin-0, differing in observables and dynamics.
Thus, the above equations show how symmetry is more important than the constants of nature. Electric charge universality appears through its symmetry and no longer as a coupling constant. These equations introduce diverse coupling constants that do not depend on the electric charge, expressing that the EM principle rests on electric charge symmetry and not on its Millikan value.
\section{Collective Bianchi identities}
The EM quadruplet introduces collective Bianchi identities with sources. They are associated with each collective vector boson field, yielding antisymmetric and symmetric Bianchi identities.
For the antisymmetric sector:
\begin{equation}
\vec{\nabla} \times \vec{\mathbf{e}}_{AU} + \frac{\partial}{\partial t}\vec{b}_{AU} = \mathbf{e}_{[12]}\left(\vec{A} \times \vec{E}_U - \vec{U}\times \vec{E}_A + \phi_A\vec{B}_U - \phi_U\vec{B}_A\right)
\end{equation}
\begin{equation}
\vec{\nabla}\cdot \vec{b}_{AU} = \mathbf{e}_{[12]}\left(-\vec{A}\cdot \vec{B}_U - \vec{U} \cdot \vec{B}_A\right),
\end{equation}
with the following conservation law.
\begin{equation}
\frac{\partial}{\partial t}\left(\vec{A}\cdot\vec{B}_{U}+\vec{U}\cdot
\vec{B}_{A}\right) = \vec{\nabla}\cdot\left(\vec{A}\times\vec{E}_{U} - \vec{U}\times\vec{E}_{A} + \phi_{A}\vec{B}_{U} - \phi_{U}\vec{B}_{A}\right)
\end{equation}
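A minimal sketch (ours, relying only on the identity $\vec{\nabla}\cdot(\vec{\nabla}\times\vec{X})=0$) of how this law follows from eqs. (5.1)-(5.2): taking the divergence of eq. (5.1) removes the curl term, and substituting eq. (5.2) for $\vec{\nabla}\cdot\vec{b}_{AU}$ gives the stated balance, up to the overall sign convention:

```latex
% Divergence of eq. (5.1); the curl term vanishes identically:
\frac{\partial}{\partial t}\,\vec{\nabla}\cdot\vec{b}_{AU}
  = \mathbf{e}_{[12]}\,\vec{\nabla}\cdot\left(\vec{A}\times\vec{E}_U
    - \vec{U}\times\vec{E}_A + \phi_A\vec{B}_U - \phi_U\vec{B}_A\right)
% Substituting eq. (5.2),
% \nabla\cdot\vec{b}_{AU} = -\mathbf{e}_{[12]}(\vec{A}\cdot\vec{B}_U + \vec{U}\cdot\vec{B}_A):
\;\Rightarrow\;
\frac{\partial}{\partial t}\left(\vec{A}\cdot\vec{B}_U + \vec{U}\cdot\vec{B}_A\right)
  = -\,\vec{\nabla}\cdot\left(\vec{A}\times\vec{E}_U - \vec{U}\times\vec{E}_A
    + \phi_A\vec{B}_U - \phi_U\vec{B}_A\right)
```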
Thus, the magnetic monopole comes out naturally from the extended abelian symmetry. A structure similar to that of spin ice is expressed in terms of fields [35].
\begin{equation}
\vec{\nabla} \times \vec{\mathbf{e}}_{+-} + \frac{\partial}{\partial t} \vec{b}_{+-} = -i \mathbf{e}_{[34]}Im\left\{\vec{V}^{+} \times \vec{E}_{-} - \phi^{+}\vec{B}_-\right\}
\end{equation}
\begin{equation}
\vec{\nabla}\cdot \vec{b}_{+-} = i\mathbf{e}_{[34]}Im\left\{\vec{V}^{+}\cdot \vec{B}_-\right\},
\end{equation}
\begin{eqnarray}
&&\vec{\nabla} \times \left(\vec{\mathbf{e}}_{+A} +\vec{\mathbf{e}}_{-A}\right) + \frac{\partial}{\partial t}\left(\vec{b}_{+A} + \vec{b}_{-A}\right) = Re\big\{\frac{1}{\sqrt{2}}\left(\mathbf{e}_{[13]} + i\mathbf{e}_{[14]}\right)\big(\vec{A} \times \vec{E}_+\nonumber
\\
&&- \vec{V}^+ \times \vec{E}_{A} + \phi_A\vec{B}_+ - \phi_+ \vec{B}_A\big)\big\}
\end{eqnarray}
\begin{equation}
\vec{\nabla} \cdot \left(\vec{b}_{+A} + \vec{b}_{-A}\right) = Re\left\{\frac{1}{\sqrt{2}} \left(\mathbf{e}_{[13]} + i \mathbf{e}_{[14]}\right)\left(-\vec{A} \cdot \vec{B}_+ +\vec{V}^{+}\cdot \vec{B}_A\right)\right\},
\end{equation}
\begin{eqnarray}
&&\vec{\nabla} \times \left(\vec{\mathbf{e}}_{+U} +\vec{\mathbf{e}}_{-U}\right) + \frac{\partial}{\partial t}\left(\vec{b}_{+U} + \vec{b}_{-U}\right) = Re\big\{\frac{1}{\sqrt{2}}\left(\mathbf{e}_{[23]} + i\mathbf{e}_{[24]}\right)\big(\vec{U} \times \vec{E}_+\nonumber
\\
&&- \vec{V}^+ \times \vec{E}_{U} + \phi_U\vec{B}_+ - \phi_+ \vec{B}_U\big)\big\}
\end{eqnarray}
\begin{equation}
\vec{\nabla} \cdot \left(\vec{b}_{+U} + \vec{b}_{-U}\right) = Re\left\{\frac{1}{\sqrt{2}} \left(\mathbf{e}_{[23]} + i \mathbf{e}_{[24]}\right)\left(-\vec{U} \cdot \vec{B}_+ +\vec{V}^{+}\cdot \vec{B}_U\right)\right\},
\end{equation}
Eqs. (5.4)\textendash(5.9) obey conservation laws of the same form as eq. (5.3).
For the symmetric sector:
\begin{equation}
\partial^{i}s^{0j}_{AA} + \partial^{j}s^{i0}_{AA} + \partial^{0}s^{ij}_{AA} = \mathbf{e}_{(11)}\left\{A^{i}S^{0j}_{A} + A^{j}S^{0i}_{A} + A^{0}S^{ij}_{A}\right\}
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}\vec{s}_{AA} = \mathbf{e}_{(11)}\left\{\phi_{A} \vec{S}_{A} + \vec{A}S_{A}\right\},
\end{equation}
where the two equations above express the photon Faraday law. Similar identities hold for the other quadruplet fields:
\begin{equation}
\partial^{i}s^{0j}_{UU} + \partial^{j}s^{i0}_{UU} + \partial^{0}s^{ij}_{UU} = \mathbf{e}_{(22)}\left\{U^{i}S^{0j}_{U} + U^{j}S^{0i}_{U} + U^{0}S^{ij}_{U}\right\}
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}\vec{s}_{UU} = \mathbf{e}_{(22)}\left\{\phi_{U} \vec{S}_{U} + \vec{U}S_{U}\right\},
\end{equation}
\begin{equation}
\partial^{i}s^{0j}_{AU} + \partial^{j}s^{i0}_{AU} + \partial^{0}s^{ij}_{AU} = \mathbf{e}_{(12)}\left\{A^{i}S^{0j}_{U} + A^{j}S^{0i}_{U} + A^{0}S^{ij}_{U} + U^{i}S^{0j}_{A} + U^{j}S^{0i}_{A} + U^{0}S^{ij}_{A} \right\}
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}\vec{s}_{AU} = \mathbf{e}_{(12)}\left\{\phi_{A} \vec{S}_{U} + \vec{A}S_{U} + \phi_{U} \vec{S}_{A} + \vec{U}S_{A}\right\},
\end{equation}
\begin{equation}
\partial^{i}s^{0j}_{+-} + \partial^{j}s^{i0}_{+-} + \partial^{0}s^{ij}_{+-} = \left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)Re\left\{V^{i+}S^{0j}_{-} + V^{j+}S^{0i}_{-} + V^{0+}S^{ij}_-\right\}
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}\vec{s}_{+-} = \left(\mathbf{e}_{(33)} + \mathbf{e}_{(44)}\right)Re\left\{\phi_{+}\vec{S}_{-} + \vec{V}^{+}S_{-}\right\},
\end{equation}
\begin{eqnarray}
&&\partial^{i}\left(s^{0j}_{A+} + s^{0j}_{A-} \right)+ \partial^{j}\left(s^{i0}_{A+} + s^{i0}_{A-} \right) + \partial^{0}\left(s^{ij}_{A+} + s^{ij}_{A-} \right) = \nonumber
\\
&&\left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)Re\{V^{i+}S^{0j}_{A} + V^{j+}S^{0i}_{A} +V^{0+}S^{ij}_A + A^{i}S^{0j}_{+} + A^{j}S^{0i}_{+} + A^{0}S^{ij}_+\}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial}{\partial t}\left(\vec{s}_{A+} + \vec{s}_{A-} \right) = \left(\mathbf{e}_{(13)} + i\mathbf{e}_{(14)}\right)Re\left\{\phi_{+}\vec{S}_{A} + \vec{V}^{+}S_{A} + \phi_{A}\vec{S}_{+} + \vec{A}S_{+}\right\},
\end{eqnarray}
\begin{eqnarray}
&&\partial^{i}\left(s^{0j}_{U+} + s^{0j}_{U-} \right)+ \partial^{j}\left(s^{i0}_{U+} + s^{i0}_{U-} \right) + \partial^{0}\left(s^{ij}_{U+} + s^{ij}_{U-} \right) \nonumber
\\
&&= \left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)Re\{V^{i+}S^{0j}_{U} + V^{j+}S^{0i}_{U} +
V^{0+}S^{ij}_U + U^{i}S^{0j}_{+} + U^{j}S^{0i}_{+} + U^{0}S^{ij}_+\}
\end{eqnarray}
\begin{equation}
\frac{\partial}{\partial t}\left(\vec{s}_{U+} + \vec{s}_{U-} \right) = \left(\mathbf{e}_{(23)} + i\mathbf{e}_{(24)}\right)Re\left\{\phi_{+}\vec{S}_{U} + \vec{V}^{+}S_{U} + \phi_{U}\vec{S}_{+} + \vec{U}S_{+}\right\},
\end{equation}
\begin{eqnarray}
&&\partial^{i}\left(s^{0j}_{++} + s^{0j}_{--} \right)+ \partial^{j}\left(s^{i0}_{++} + s^{i0}_{--} \right) + \partial^{0}\left(s^{ij}_{++} + s^{ij}_{--} \right) = \nonumber
\\
&&\left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)\frac{1}{2}\{V^{i+}S^{0j}_{-} + V^{j+}S^{0i}_{-} +V^{0+}S^{ij}_- + V^{i-}S^{0j}_{+} + V^{j-}S^{0i}_{+} + V^{0-}S^{ij}_+\}
\end{eqnarray}
\begin{equation}
\frac{\partial}{\partial t}\left(\vec{s}_{++} + \vec{s}_{--} \right) = \left(\mathbf{e}_{(33)} - \mathbf{e}_{(44)}\right)\left\{\phi_{+}\vec{S}_{-} + \vec{V}^{+}S_{-} + \phi_{-}\vec{S}_{+} + \vec{V}^-S_{+}\right\},
\end{equation}
\begin{eqnarray}
&&\partial^{i}\left(s^{0j}_{++34} + s^{0j}_{--34} \right)+ \partial^{j}\left(s^{i0}_{++34} + s^{i0}_{--34} \right) + \partial^{0}\left(s^{ij}_{++34} + s^{ij}_{--34} \right) = \nonumber
\\
&& i\mathbf{e}_{(34)}Im\{V^{i+}S^{0j}_{+} + V^{j+}S^{0i}_{+} + V^{0+}S^{ij}_+\}
\end{eqnarray}
\begin{equation}
\frac{\partial}{\partial t}\left(\vec{s}_{++} + \vec{s}_{--} \right) = i\mathbf{e}_{(34)}Im\left\{\phi_{+}\vec{S}_{+} + \vec{V}^{+}S_{+} \right\}.
\end{equation}
The above equations introduce new physics into the theory, expanding the scope of electromagnetic induction. New Faraday laws with monopoles are obtained. Notice that the symmetric sector does not provide conservation laws.
\section{Final Considerations}
A fundamental EM beyond Maxwell is required. Nonlinearity [36], strong magnetic fields [37], and photonics [38] introduce new phenomenologies that call for electromagnetism to be extended. There is a more fundamental EM to be excavated under electric charge symmetry: consider Maxwell as just one EM sector, and discover new electric and magnetic fields with their corresponding equations.
A constitutive four-photon dynamics is proposed, and a new significance for electric charge symmetry appears. A quadruplet EM completeness emerges from charge transfer. The four bosons electromagnetism deploys new aspects of electric charge, light, and spin. Electric charge physics amounts to more than stipulating Maxwell EM fields, a continuity equation, and a coupling constant. A fundamental EM theory must be supported by a fundamental electric charge symmetry. It associates the four intermediate gauge bosons and arranges the fields quadruplet $\{A_{\mu}, U_{\mu}, V_{\mu}^{\pm}\}$ as a homothety for the triangle sides. New relationships between electric charge symmetry and EM fields are produced. Through the corresponding gauge homothety, it constitutes the Lagrangian [22-23], generating the new EM observables described in section 1 and three Noether identities in section 3.
The second argument for the four bosons electromagnetism as a candidate for a fundamental EM rests on primordial light. A new nature for light is proposed: a constitutive light associated with three other intermediate bosons. A primitive photon is encountered. The four bosons EM provides a light more primordial than just a Maxwell wave. The Lagrangian study introduces physics spanning sectors from Maxwell to photonics. It contains light invariance and ubiquitous, self-interacting photons. Photonics is derived: the photon acts as its own source, generating granular and collective field strengths, interconnected, sharing a quadruplet photon physics and self-interacting at tree level. An inductive photon Faraday law is proposed, as eqs. (2.6) and (5.10)-(5.11) show. A Lorentz force depending on the photon field is expressed [22]. Feynman vertices with trilinear and quadrilinear couplings are obtained [39]. Pure photonics is constituted. The third aspect is spin. From the heuristic Stern-Gerlach experiment [40], spin is incorporated into the fields formalism as in [24].
A methodology to analyse a model's significance is how far it is incorporated into the historical process. Just as relativity came out as an extension of Newtonian mechanics, a new EM model should appear inserted into the development of EM. That development may be viewed in three historical phases: first, charges and magnets, as described by William Gilbert in 1600 [41]; second, charges and fields, by Maxwell in 1864; and third, fields and fields, expected in this 21$^{st}$ century.
Like Faraday's, the four bosons EM penetrates into the fields-fields region. A pure EM field physics is expressed, surpassing Maxwell's limitations and the myths of matter, and of light as a mere EM wave. The first aspect was treated in the introduction. The myth of matter appears when we look at solid objects around us and ask what is going on. The usual perspective is to take the concept of matter as guiding nature; instead, one notices the presence of an empty space filled with fields. Something says that nature is made not only of moving particles but also of field dynamics. Faraday was the first to perceive this physics beyond matter, and an enlargement of Faraday's perception is expected from an EM development.
The four bosons EM nonlinearity advances the concept of matter as depending on fields. Following Faraday, it introduces a view where physical laws lie beyond matter. As a premise from electric charge symmetry, fields are taken as the most fundamental objects from which to construct the world, with the concept of matter derived from fields. The relationships between matter and fields are rewritten by developing field groupings, nonlinear fields, fields acting as their own sources, mass and electric charge depending on fields, diverse Faraday laws, field monopoles, and a Lorentz force depending on potential fields. Electric charge and mass are also expressed in terms of continuity equations depending on fields, as in section 3.
A consequence of matter depending on fields is the possibility of reinterpreting dark matter and dark energy in terms of field properties. Considering that the corresponding energy-momentum tensor produces a negative pressure depending just on scalar potential fields [22], it may be a candidate for dark energy. The diagonal term $T_{ii}$ may be responsible for the universe's expansion [42].
A fundamental electromagnetism is proposed. A three-charges EM is performed. An EM completeness given by four intermediate photons is established beyond QED [43]. A new EM energy is discovered and new EM regimes are obtained. Overall, seven interrelated EM sectors are developed based on the presence of new EM observables: Maxwell, systemic, nonlinear, neutral, spintronics, electroweak, and photonics.
\section{Introduction}
There is a history of attempts to use linear quantum interferometers to design a quantum computer. \v{C}ern\'{y} showed that a linear interferometer could solve NP\textendash complete problems in polynomial time but only with an exponential overhead in energy \cite{cerny}. Clauser and Dowling showed that a linear interferometer could factor large numbers in polynomial time but only with exponential overhead in both energy and spatial dimension \cite{clauser}. Cerf, Adami, and Kwiat showed how to build a programmable linear quantum optical computer but with an exponential overhead in spatial dimension \cite{cerf}.
Nonlinear optics provides a well\textendash known route to universal quantum computing \cite{milburn}. We include in this nonlinear class the so\textendash called \lq\lq{}linear\rq\rq{} optical approach to quantum computing \cite{knill}, because this scheme contains an effective Kerr nonlinearity \cite{lapaire}.
In light of these results there arose a widely held belief that linear interferometers alone, even with nonclassical input states, cannot provide a road to universal quantum computation and, as a corollary, that all such devices can be efficiently simulated classically. However, recently Aaronson and Arkhipov (AA) gave an argument that multimode, linear, quantum optical interferometers with arbitrary Fock\textendash state photon inputs very likely could not be simulated efficiently with a classical computer \cite{aaronson}. Their argument, couched in the language of quantum computer complexity class theory, is not easy to follow for those not skilled in that art. Nevertheless, White and collaborators, and several other groups, carried out experiments that demonstrated that the conclusion of AA holds up for small photon numbers \cite{broome,crespi,tillmann,spring}. Our goal here is to understand\textemdash from a physical point of view\textemdash why such a device cannot be simulated classically.
\begin{figure}
\centering
\includegraphics[height=4.5cm]{figure}
\caption{Quantum pachinko machine for numerical depth $L = 3$. We indicate an arbitrary bosonic dual\textendash Fock input $\ket{N}\ket{M}$ at the top of the interferometer and then the lattice of beam splitters ($B$), phase shifters ($\varphi$), and photon\textendash number\textendash resolving detectors ($\nabla$). The vacuum input modes $\ket{0}$ (dashed lines) and internal modes $\ket{\psi}$ (solid lines) are also shown. The notation is such that the superscripts label the level $\ell$ and the subscripts label the row element from left to right.}
\label{fig:fig1}
\end{figure}
In their paper, AA prove that both strong and weak simulation of such an interferometer is not efficient classically. In the context of Fock-state interferometers, a strong simulation implies the direct computation of the joint output probabilities of a system. However, one can consider a \lq\lq{}weak\rq\rq{} simulation where one could efficiently estimate the joint output probabilities to within some acceptably small margin of error. There are many examples of systems for which weak simulation is efficient even when strong simulation is not, such as finding the permanent of an $n\times n$ matrix with real, positive entries. But as our goal is to provide the most straightforward and physical explanation for this phenomenon, we do so only for the strong case. Since many classical systems cannot even be strongly simulated, it may at first seem unsurprising that this is the case. However, we note that here not only does our system's classical counterpart\textemdash Galton's board\textemdash admit an efficient strong simulation, but so do a myriad of other quantum interferometers with non\textendash Fock state inputs, as we will show.
We independently came to the same conclusion as AA in our recent analysis of multi\textendash photon quantum random walks in a particular multi\textendash mode interferometer called a quantum \lq\lq{}pachinko\rq\rq{} machine shown in Fig.~\ref{fig:fig1} \cite{gard}. The dual\textendash photon Fock state $\ket{N}\ket{M}$ is injected at the top of the interferometer and then the photons are allowed to cascade downwards through the lattice of beam splitters ($B$) and phase shifters ($\varphi$) to arrive at an array of photon\textendash number\textendash resolving detectors ($\nabla$) at the bottom. Our goal was to compute all the joint probabilities that, say, the $q^{th}$ detector receives $p$ photons while the $r^{th}$ detector receives $s$ photons, and so forth, for arbitrary input photon number and lattice dimension. We failed utterly. It is easy to see why.
Working in the Schr\"{o}dinger picture, we set out to compute the probability amplitudes at each detector by following the Feynman\textendash like paths of each photon through the machine, and then summing their contributions at the end. For a machine of numerical depth $L$, as shown in Fig. \ref{fig:fig1}, it is easy to compute that the number of such Feynman\textendash like paths is $2^{L(N+M)}$. So for even a meager number of photons and levels the solution to the problem by this Schr\"{o}dinger picture approach becomes rapidly intractable. For example, choosing $N= M = 9 $ and $L = 6$, we have $2^{288} \cong 5 \times 10^{86}$ total possible paths, which is about four orders of magnitude larger than the number of atoms in the observable universe. We were puzzled by this conclusion; we expected any passive linear quantum optical interferometer to be efficiently simulatable classically. With the AA result now in hand, we set out here to investigate the issue of the complexity of our quantum pachinko machine from an intuitive physical perspective. The most mathematics and physics we shall need is elementary combinatorics and quantum optics.
Following Feynman, we shall explicitly construct the pachinko machine's Hilbert state space for an arbitrary level $L$, and for arbitrary photon input number, and show that the space's dimension grows exponentially as a function of each of the physical resources needed to build and run the interferometer \cite{feynman}. Because interference only occurs when the input state has been symmetrized (with respect to interchange of mode), we compute the size of the symmetrized subspace and show that it too grows exponentially with the number of physical resources. We remark that while a classical pachinko machine (or \lq\lq{}Galton's board\rq\rq{}) will also have an exponentially large state space, because no interference occurs there is only a quadratic increase with $L$ in the number of calculations necessary to simulate the output (corresponding to the number of beam splitters in the interferometer). From this result we conclude that it is very likely that any classical computer that tries to simulate the operation of the quantum pachinko machine will always suffer an exponential slowdown. We will also show that no exponential growth occurs if Fock states are replaced with photonic coherent states or squeezed states, which elucidates part of the special nature of photonic Fock states. However an exponentially large Hilbert space, while necessary for classical non\textendash simulatability, is not sufficient. We then finally examine the physical symmetry requirements for bosonic versus fermionic multi\textendash particle states and show that in the bosonic case, in order to simulate the interferometer as a physics experiment, one must compute the permanent of a large matrix, which in turn is a problem in a computer algorithm complexity class strongly believed to be intractable on a classical or even a quantum computer. This concludes our elementary argument, which invokes only simple quantum mechanics, combinatorics, and a simplistic appeal to complexity theory.
\section{The Pachinko Machine Model}
As our argument is all about counting resources, we have carefully labeled all the components in the pachinko machine in Fig. \ref{fig:fig1} to help us with that reckoning. The machine has a total of $L$ levels of physical depth $d$ each. The input state at the top is the dual\textendash Fock state $\ket{N}^0_1\ket{M}^0_2$, where the superscripts label the level number and the subscripts the element in the row at that level (from left to right). We illustrate a machine of total numerical depth of $L = 3$. For $1\leq \ell < L$, we show the vacuum input modes along the edges of the machine. The resources we are most concerned about are energy, time, spatial dimension, and number of physical elements needed to construct the device. All of these scale either linearly or quadratically in either $L$ or $N + M$. The total physical depth is $D = L d$ and so the spatial area is $A = (\sqrt{2} D)^2=2L^2 d^2$. Using identical photons of frequency $\omega$, the energy per run is $E=(N+M)\hbar \omega$. The time it takes for the photons to arrive at the detectors is $T = \sqrt{2} L d / c$, where $c$ is the speed of light. At level $\ell$ the photons encounter $\ell$ beam splitters (BS), so the total number is $ \# B = \sum_{\ell=1}^{L} \ell = L(L+1)/2$. Below each BS (with the exception of the $L$th level) there are two independently tunable phase shifters (PS) for a total number of PS that is $ \# \varphi = \sum_{\ell=1}^{L-1} 2\ell = L(L-1)$. The total number of detectors is $\# \nabla = 2L$. The total number of input modes is equal to the total number of output modes and is $\#I = \#O = 2L$. The total number of internal modes is $\# \psi = \sum_{\ell=1}^{L-1} 2\ell = L(L-1)$. As promised, everything scales either linearly or quadratically in either $L$ or $N+M$.
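The component counts just derived can be tabulated directly, which makes the polynomial scaling explicit; a small sketch (names are ours):

```python
def resources(L: int):
    """Component counts for a depth-L quantum pachinko machine."""
    beam_splitters = L * (L + 1) // 2   # l beam splitters at level l
    phase_shifters = L * (L - 1)        # two below each BS, except level L
    detectors = 2 * L
    internal_modes = L * (L - 1)
    return beam_splitters, phase_shifters, detectors, internal_modes

print(resources(3))   # (6, 6, 6, 6) for the L = 3 machine of Fig. 1
```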
The input state may be written in the Heisenberg picture as $\ket{N}^0_1\ket{M}^0_2= (\hdag{1}{0})^N (\hdag{2}{0})^M \ket{0}^0_1 \ket{0}^0_2/\sqrt{N!M!}$, where $\hat{a}^{\dagger}$ is a modal creation operator. Each BS performs a forward unitary mode transformation, which we illustrate with $B^1_1$, of the form $\hat{a}_1^1=i r_1^1 \hat{a}_1^0+ t_1^1 \hat{a}_2^0$ and $\hat{a}_2^1= t_1^1\hat{a}_1^0+i r_1^1\hat{a}_2^0$ where the reflection and transmission coefficients $r$ and $t$ are positive real numbers such that $ r^2 +t^2 = R+T =1$. The choice $r = t = 1/\sqrt{2}$ implements a 50\textendash 50 BS. Each PS is implemented by, for example, applying the unitary operation $\textrm{exp}(i \varphi_1^1 \hat{n}_1^1)$ on mode $\ket{\psi}_1^1$, where $\hat{n}_1^1 := \hdag{1}{1}\hat{a}_1^1$ is the number operator, $\hat{a}_1^1$ is the annihilation operator conjugate to $\hdag{1}{1}$, and $\varphi_1^1$ is a real number. Finally the $2L$ detectors in the final level $L$ are each photon number resolving \cite{lee}.
To argue that this machine (or any like it) cannot be simulated classically, in general, it suffices to show that this is so for a particular simplified example. We now take $N$ and $L$ arbitrary but $M = 0$ and turn off all the phase shifts and make all the BS identical by setting $ \varphi_k^{\ell}=0$, $t_k^{\ell} = t$, and $r_k^{\ell} =r$ for all $(k,\ell)$. We then need the backwards BS transformation on the creation operators, which is, $\hdag{1}{0} = i r \hdag{1}{1} + t \hdag{2}{1}$ and $\hdag{2}{0} = t \hdag{1}{1} + i r \hdag{2}{1} $. Similar transforms apply down the machine at each level. With $M = 0$ the input simplifies to $\ket{N}_1^0\ket{0}_2^0 = (\hdag{1}{0})^N \ket{0}_1^0\ket{0}_2^0/\sqrt{N!}$ and now we apply the first backwards BS transformation $\ket{\psi}_1^1\ket{\psi}_2^1= (i r \hdag{1}{1} + t\hdag{2}{1})^N\ket{0}_1^0\ket{0}_2^0/\sqrt{N!}$ to get the state at level one.
At every new level each $\hat{a}^\dagger$ will again bifurcate according to the BS transformations for that level, with the total number of bifurcations equal to the total number of BS, and so the computation of all the terms at the final level involves a polynomial number of steps in $L$. It is instructive to carry this process out explicitly to level $L = 3$ to get,
\begin{equation}
\begin{split}
\ket{\psi}^3&= \frac{1}{\sqrt{N!}}(i r t^2\hdag{1}{3} -r^2 t\hdag{2}{3}+i r (t^2-r^2)\hdag{3}{3} \\
&-2r^2 t \hdag{4}{3}+i r t^2 \hdag{5}{3} +t^3 \hdag{6}{3})^N \prod_{\ell=1}^6\ket{0}_{\ell}^3,
\end{split}
\label{eq:prod1}
\end{equation}
where we have used a tensor product notation for the states. If $r\cong0$ or $r\cong1$ the state is easily computed. Since we are seeking a regime that cannot be simulated classically we work with $r\cong t \cong 1/\sqrt{2}$.
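In the Heisenberg picture the coefficients multiplying each $\hdag{\ell}{L}$ can be propagated level by level in time polynomial in $L$. The sketch below assumes the Galton\textendash board wiring of Fig. \ref{fig:fig1} as we read it (vacuum enters at both edges from level two onward, and each beam splitter feeds two adjacent modes of the next level); for $L = 3$ it reproduces the six coefficients of Eq. (\ref{eq:prod1}).

```python
from math import isclose, sqrt

def propagate(L, r, t):
    """Creation-operator coefficients alpha_l^L for input on mode 1 only.

    Backward BS rule: a coefficient c on the left input of a beam splitter
    maps to (i*r*c, t*c) on its outputs; on the right input, to (t*c, i*r*c).
    """
    alpha = [1.0 + 0j, 0j]                 # |N>|0> input: all weight on mode 1
    for level in range(1, L + 1):
        if level > 1:
            alpha = [0j] + alpha + [0j]    # vacuum enters at both edges
        out = []
        for j in range(level):             # beam splitter j at this level
            left, right = alpha[2 * j], alpha[2 * j + 1]
            out.append(1j * r * left + t * right)
            out.append(t * left + 1j * r * right)
        alpha = out
    return alpha

a = propagate(3, 1 / sqrt(2), 1 / sqrt(2))
assert isclose(sum(abs(c) ** 2 for c in a), 1.0)   # unitarity check
assert abs(a[2]) < 1e-12   # alpha_3 = i r (t^2 - r^2) vanishes at 50-50
```

At the 50\textendash 50 point the returned magnitudes squared are $(1/8, 1/8, 0, 1/2, 1/8, 1/8)$, matching Eq. (\ref{eq:prod1}) term by term.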
\section{Solution in the Heisenberg and Schr\"{o}dinger Pictures}
It is now clear from Eq.(\ref{eq:prod1}) what the general form of the solution will be. We define
\begin{equation}
\ket{\psi}^L := \underset{\mathclap{\begin{subarray}{c}
\lbrace n_\ell\rbrace \\
N=\sum_{\ell=1}^{2L} n_\ell
\end{subarray}}}
\sum \ket{\psi}_{\lbrace n_\ell\rbrace}^L \quad , \quad
\ket{0}^L := \prod_{\ell=1}^{2L}\ket{0}_{\ell}^L ,
\label{eq:definitions}
\end{equation}
and the general solution has the form,
\begin{equation}
\begin{split}
\ket{\psi}^L&=\frac{1}{\sqrt{N!}}\left( \sum_{\ell=1}^{2L}\alpha_{\ell}^L\hdag{\ell}{L} \right)^N \ket{0}^L \\
&=\frac{1}{\sqrt{N!}}\sum_{N=\sum_{\ell=1}^{2L} n_{\ell}}
\binom{N}{n_1, n_2,\mathellipsis, n_{2L}} \\
&\times \prod_{1\leq k \leq 2L} (\alpha_{k}^L \hdag{k}{L})^{n_k}\ket{0}^L ,
\end{split}
\label{eq:state}
\end{equation}
where all the coefficients $\alpha_{\ell}^L$ will be nonzero in general. Since all the operators commute, as they each operate on a different mode, we have expanded Eq. (\ref{eq:state}) using the multinomial theorem where the sum in the expansion is over all combinations of non\textendash negative integers constrained by $N=\sum_{\ell=1}^{2L} n_\ell$ and
\begin{equation}
\binom{N}{n_1, n_2,\mathellipsis, n_{2L}}= \frac{N!}{n_1! n_2! \mathellipsis n_{2L}!}
\label{eq:binom}
\end{equation}
is the multinomial coefficient \cite{nist}. The state $\ket{\psi}^L$ is highly entangled over the number\textendash path degrees of freedom. Each monomial in the expansion of Eq. (\ref{eq:state}) is unique and so the action of the set of all monomial operators on the vacuum will produce a complete orthonormal basis set for the Hilbert space at level $L$, given by $ \ket{\psi}_{\lbrace n_\ell \rbrace}^L:=\prod_{\ell=1}^{2L}\ket{n_\ell}_{\ell}^L$, where the $n_\ell$ are subject to the same sum constraint. Let us call the dimension of that Hilbert space dim$[H(N,L)]$, which is therefore the total number of such basis vectors.
Taking $L=3$ and $N=2$, we can use Eq.(\ref{eq:state}) to compute the probability a particular sequence of detectors will fire with particular photon numbers. What is the probability detector one gets one photon, detector two also gets one, and all the rest get zero? This is the modulus squared of the probability amplitude of the state $\ket{1}_1^3\ket{1}_2^3\ket{0}_3^3\ket{0}_4^3\ket{0}_5^3\ket{0}_6^3$. Setting $r=t=1/\sqrt{2}$ for the 50\textendash50 BS case, from Eq.(\ref{eq:prod1}) we read off $\alpha_1^3=irt^2=i/(2\sqrt{2})$ and $\alpha_2^3=-r^2t=-1/(2\sqrt{2})$, and so the probability of this event is given by $P_{110000}\cong 0.031$.
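The probability just quoted follows directly from Eq. (\ref{eq:state}): the basis state with occupations $\{n_\ell\}$ has amplitude $\sqrt{N!/\prod_\ell n_\ell!}\,\prod_\ell (\alpha_\ell^L)^{n_\ell}$. A minimal sketch (function and variable names are ours):

```python
from math import factorial, prod, sqrt

def prob(alphas, occupations):
    """Joint detection probability for photon counts `occupations`,
    given the creation-operator coefficients alpha_l^L of Eq. (3)."""
    N = sum(occupations)
    weight = factorial(N) / prod(factorial(n) for n in occupations)
    return weight * prod(abs(a) ** (2 * n) for a, n in zip(alphas, occupations))

# L = 3 coefficients read off from Eq. (1), 50-50 beam splitters
r = t = 1 / sqrt(2)
alphas = [1j * r * t**2, -r**2 * t, 1j * r * (t**2 - r**2),
          -2 * r**2 * t, 1j * r * t**2, t**3]
print(round(prob(alphas, (1, 1, 0, 0, 0, 0)), 3))   # 0.031
```

The exact value is $2\,|\alpha_1^3|^2 |\alpha_2^3|^2 = 1/32$.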
It turns out that it is possible (for general $L$ and $N$) to compute the single and binary joint probabilities, that detector $p$ gets $n$ photons and detector $q$ gets $m$ \cite{mayer}. However computing arbitrary joint probabilities between triplets, quadruplets, etc., of detectors rapidly becomes intractable. We can provide a closed form expression for dim$[H(N,L)]$ by realizing that it is the same as the number of different ways one can add up non\textendash negative integers that total to fixed $N$. More physically this is the number of possible ways that $N$ indistinguishable photons may be distributed over $2L$ detectors. The answer is well known in the theory of combinatorics and is:
\begin{equation}
\textrm{dim}[H(N,L)]=\binom{N+2L-1}{N},
\label{eq:dim1}
\end{equation}
where this is the ordinary binomial coefficient \cite{benjamin}. For our example with $L = 3$, $N = 2$, Eq.(\ref{eq:dim1}) implies that the number of distinct probabilities $P_{npqrst}^3$ to be tabulated is just this dimension, namely 21.
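Eq. (\ref{eq:dim1}) is cheap to evaluate even when the dimension itself is astronomically large; a sketch checking the examples used in the text:

```python
from math import comb

def hilbert_dim(N: int, L: int) -> int:
    """Ways to distribute N indistinguishable photons over 2L detectors."""
    return comb(N + 2 * L - 1, N)

assert hilbert_dim(2, 3) == 21        # the L = 3, N = 2 example
assert hilbert_dim(1, 5) == 10        # one photon: 2L outcomes
assert hilbert_dim(2, 5) == 55        # two photons: L(2L + 1)
# The computationally complex regime N = 2L - 1:
N = 137
print(f"{hilbert_dim(N, (N + 1) // 2):.2e}")   # of order 10^81
```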
We first examine two \lq\lq{}computationally simple\rq\rq{} examples. Taking $N$ arbitrary and $L = 1$ we get dim$[H(N,1)]=N+1$, which is easily seen to be the number of ways to distribute $N$ photons over two detectors. Next taking $N = 1$ and $L$ arbitrary we get dim$[H(1,L)]=2L$, which is the number of ways to distribute a single photon over $2L$ detectors. If we were to invoke Dirac's edict\textemdash \lq\lq{}Each photon then interferes only with itself.\rq\rq{}\textemdash we would then expect that adding a second photon should only double this latter result \cite{dirac1}. Instead the effect of two\textendash photon interference on the state space can be seen immediately by computing dim$[H(2,L)]=L(2L+1)$. That is, adding a second photon causes a quadratic (as opposed to linear) jump in the size of the Hilbert space. Dirac was wrong; photons do interfere with each other, and that multiphoton interference directly affects the computational complexity. All three of these cases are simulatable in a number of time steps polynomial in $N$ and $L$, but we see a quadratic jump in dimension as soon as we go from one to two photons. These jumps in complexity continue for each additional photon added and the dimension grows rapidly.
We therefore next investigate a \lq\lq{}computationally complex\rq\rq{} intermediate regime by fixing $N=2L-1$. That is, we build a machine with total number of levels $L$ and then choose an odd\textendash numbered photon input so that this restriction holds. Equation (\ref{eq:dim1}) becomes dim$[H(N)]=(2N)!/(N!)^2$. Deploying Stirling's approximation for large $N$, in the form $n! \cong (n/e)^n \sqrt{2 \pi n}$, we have dim$[H(N)]\cong2^{2N}/\sqrt{\pi N}$. This is one of our primary results. The Hilbert space dimension scales exponentially with $N=2L-1$. Since all the physical parameters needed to construct and run our quantum pachinko machine scale only linearly or quadratically with respect to $N$ or $L$, we have an exponentially large Hilbert space produced from a polynomial number of physical resources \textemdash Feynman's necessary condition for a potential universal quantum computer.
Let us suppose we build onto an integrated optical circuit a machine of depth $L = 69$ and fix $N = 2L-1 = 137$. Such a device is not too far off on the current quantum optical technological growth curve \cite{bonneau}. Then we have dim$[H(137)]\cong10^{81}$, which is again on the order of the number of atoms in the observable universe. Following Feynman's lead, we conclude that this exponentially large Hilbert space gives a necessary condition for a classical computer to be unable to efficiently simulate this device. However this is not a sufficient condition. From the Gottesman-Knill theorem we know that quantum circuits that access an exponentially large Hilbert space may sometimes be efficiently simulated \cite{gottesman}. We will strengthen our argument (below) by discussing the necessity of properly symmetrizing a multi-particle bosonic state and tying that physical observation back to the complexity of computing the permanent of a large matrix.
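For context on the closing remark, the permanent of an $n \times n$ matrix can be computed exactly by Ryser's inclusion\textendash exclusion formula, but only in exponential time. A direct rendering (the Gray\textendash code variant saves a further factor of $n$):

```python
from itertools import combinations

def permanent(A):
    """Ryser's formula: O(2^n * n^2) arithmetic operations."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):   # column subsets of size k
            prod_rows = 1
            for row in A:
                prod_rows *= sum(row[j] for j in cols)
            total += (-1) ** k * prod_rows
    return (-1) ** n * total

assert permanent([[1, 1], [1, 1]]) == 2
assert permanent([[1] * 3, [1] * 3, [1] * 3]) == 6   # perm of all-ones is n!
```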
Let us now compare our Heisenberg picture result to that of the Schr\"{o}dinger picture. In the computationally complex regime where $N=2L-1$ the number of distinct Feynman\textendash like paths we must follow in the Schr\"{o}dinger picture is $2^{LN}=2^{N(N+1)/2}\cong 2^{N^2/2}$. Taking $N = 137$ and $L = 69$, as in the previous example, we get an astounding $2^{9453}\cong 4\times10^{2845}$ total paths. Dirac proved that the Heisenberg and Schr\"{o}dinger pictures are mathematically equivalent, that they always give the same predictions, but we see here that they are not always necessarily \textit{computationally} equivalent \cite{dirac2}. Calculations in the Heisenberg picture are often \textit{much} simpler than in the Schr\"{o}dinger picture. The fact that the two pictures are not always computationally equivalent is implicit in the Gottesman\textendash Knill theorem; however, it is satisfying to see here just how that is so in a simple optical interferometer \cite{gottesman}.
\section{Sampling with Coherent \& Squeezed State Inputs}
To contrast this exponential overhead from the resource of bosonic Fock states, let us now carry out the same analysis with the bosonic coherent input state input $\ket{\beta}_1^0\ket{0}_2^0$, where we take the mean number of photons to be $\abs{\beta}^2=\overline{n}$. In the Heisenberg picture this input becomes $\hat{D}_1^0(\beta)\ket{0}_1^0\ket{0}_2^0$, where $\hat{D}_1^0(\beta)=\textrm{exp}(\beta \hdag{1}{0}-\beta^* \hat{a}_1^0)$ is the displacement operator \cite{gerry}. Applying the BS transformations down to final level $L$ we get
\begin{equation}
\begin{split}
\ket{\psi}^L&=\textrm{exp}\left(\beta \sum_{\ell=1}^{2L}\alpha_{\ell}^L \hdag{\ell}{L}- \beta^* \sum_{\ell=1}^{2L}\alpha_{\ell}^{L*} \hat{a}_{\ell}^L\right)\ket{0}^L \\
&=\prod_{\ell=1}^{2L}\textrm{exp}(\beta \alpha_{\ell}^L \hdag{\ell}{L}- \beta^*\alpha_{\ell}^{L*}\hat{a}_{\ell}^L)\ket{0}^L \\
&=\prod_{\ell=1}^{2L}\ket{\beta \alpha_{\ell}^L}_{\ell}^L .
\end{split}
\label{eq:state2}
\end{equation}
At the output we have $2L$ coherent states that have been modified in phase and amplitude. This is to be expected, as it is well known that linear interferometers transform a coherent state into another coherent state. Since all the coefficients $\alpha_{\ell}^L$ are computable in $\#B=L(L+1)/2$ steps, this result is obtained in polynomial time steps in $L$, independent of $\overline{n}$. The mean number of photons at each detector is then simply $\overline{n}_{\ell}^L=\abs{\beta \alpha_{\ell}^L}^2=\overline{n}\abs{\alpha_{\ell}^L}^2$.
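Because the output is the separable product of coherent states in Eq. (\ref{eq:state2}), simulating the coherent\textendash state machine reduces to the same polynomial bookkeeping of the $\alpha_{\ell}^L$. A sketch using the $L = 3$, 50\textendash 50 coefficients read off from Eq. (\ref{eq:prod1}) (the chosen $\overline{n}$ is arbitrary):

```python
from math import isclose, sqrt

# Level-3, 50-50 coefficients read off from Eq. (1)
r = t = 1 / sqrt(2)
alphas = [1j * r * t**2, -r**2 * t, 1j * r * (t**2 - r**2),
          -2 * r**2 * t, 1j * r * t**2, t**3]

n_bar = 100.0                                   # mean photons in |beta>
n_out = [n_bar * abs(a) ** 2 for a in alphas]   # mean counts per detector
assert isclose(sum(n_out), n_bar)               # mean energy is conserved
print([round(x, 1) for x in n_out])   # [12.5, 12.5, 0.0, 50.0, 12.5, 12.5]
```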
A similar analysis may be carried out for bosonic squeezed input states. Taking, for example, a single\textendash mode squeezed vacuum input $\ket{\xi}_1^0\ket{0}_2^0= \hat{S}_1^0(\xi)\ket{0}_1^0\ket{0}_2^0$, with the squeezing operator defined as $\hat{S}_1^0(\xi)=\textrm{exp}\lbrace [\xi^*(\hat{a}_1^0)^2-\xi(\hdag{1}{0})^2]/2\rbrace$, we arrive at,
\begin{equation}
\ket{\psi}^L=\textrm{exp}\left\{ \left[\xi^*\left(\sum_{\ell=1}^{2L}\alpha_{\ell}^*\hat{a}_{\ell}^L\right)^2-\xi \left(\sum_{\ell=1}^{2L}\alpha_{\ell}\hdag{\ell}{L}\right)^2\right]/2\right\} \ket{0}^L,
\label{eq:state3}
\end{equation}
which does not in general decompose into a separable product of single\textendash mode squeezers on each output port. Nevertheless the probability amplitudes may still be computed in a time polynomial in $L$ by noting that, from Eq.(\ref{eq:dim1}) with $N = 2$, there are at most $2L(L+1)$ terms in this exponent that must be evaluated. This result generalizes to arbitrary Gaussian state inputs \cite{bartlett,*veicht}. The output of the interferometer may then be calculated on the transformed device in a number of steps polynomial in $L$.
The exponential scaling comes from the bosonic Fock structure $\ket{N}=(\hat{a}^\dagger)^N\ket{0}/\sqrt{N!}$ and the rapid growth of the number\textendash path entanglement in the interferometer. It is well known that beam splitters can generate number\textendash path entanglement from separable bosonic Fock states. For example, the simplest version of the HOM effect at level one with separable input $\ket{1}_1^0\ket{1}_2^0$ becomes $\ket{\psi}_1^1 \ket{\psi}_2^1=(i\hdag{1}{1}+\hdag{2}{1})(\hdag{1}{1}+i\hdag{2}{1})\ket{0}_1^1\ket{0}_2^1/2=i[\ket{2}_1^1\ket{0}_2^1+\ket{0}_1^1\ket{2}_2^1]/\sqrt{2}$ a NOON state \cite{ou}. Such entangled NOON states violate a Bell inequality and are hence nonlocal even though the input was not \cite{wildfeuer}. For arbitrary bosonic Fock input states and interferometer size the amount of number\textendash path entanglement grows exponentially fast. However, even in the case of fermionic interferometers, where there is a restriction of two identical particles per mode, the Hilbert space can still grow exponentially fast (just not quite as fast as in the case of bosons) as we shall now show.
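The two\textendash photon HOM algebra above is simple enough to verify numerically by expanding the product of the transformed creation operators. The sketch below tracks monomials in the output\textendash mode creation operators and normalizes via $(\hat{a}^{\dagger})^{n}\ket{0}=\sqrt{n!}\,\ket{n}$; it reproduces the NOON state with zero amplitude on the $\ket{1}\ket{1}$ coincidence term.

```python
import math

isq = 1 / math.sqrt(2)
# 50:50 BS action on the input creation operators, as in the text:
# a1+ -> (i b1+ + b2+)/sqrt(2),  a2+ -> (b1+ + i b2+)/sqrt(2)
a1 = {(1, 0): 1j * isq, (0, 1): isq}
a2 = {(1, 0): isq, (0, 1): 1j * isq}

# Expand the product a1+ a2+ acting on vacuum; keys are
# (power of b1+, power of b2+).
out = {}
for (p, q), c in a1.items():
    for (r, s), d in a2.items():
        key = (p + r, q + s)
        out[key] = out.get(key, 0) + c * d

# (b+)^n |0> = sqrt(n!) |n>, so scale each monomial into a Fock amplitude.
amps = {k: v * math.sqrt(math.factorial(k[0]) * math.factorial(k[1]))
        for k, v in out.items()}
# amps is i/sqrt(2) on |2,0> and |0,2>, and 0 on |1,1>: a NOON state.
```

The vanishing $\ket{1}\ket{1}$ amplitude is exactly the HOM coincidence dip, and the two surviving amplitudes are equal in magnitude, as required for a NOON state.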
\section{Comparison of Bosonic to Fermionic Fock State Inputs}
We now compare the multimode bosonic Fock state interferometer to the multimode fermionic interferometer. We will restrict ourselves to spin\textendash1/2 neutral fermions such as neutrons that are commonly used in interferometry. Now the number of fermions per input mode is restricted to zero, one, or two and we can have two only if they have opposite spin states to be consistent with the Pauli exclusion principle. The exclusion principle is derived from the requirement that the total multi\textendash particle fermionic wave function, which is the product of the spin and spatial wave functions, is antisymmetric under the interchange of any two particle state labels. Likewise there is a constraint on the bosonic multi\textendash particle multi\textendash mode wave function that the total wave function be symmetric. The symmetry of the wave function must be enforced at each beam splitter where the particles become indistinguishable and the spatial part of the wave function experiences maximal overlap for multi\textendash particle interference to occur. For the sake of argument we take the coherence length of the particles to be infinite (or at least much larger than the depth of the interferometer $Ld$) so that enforcing the correct symmetry at each beam splitter requires enforcing the correct symmetry everywhere in space.
Some care must now be used in the notation. For example, when we write the bosonic spatial wave function input state $\ket{1}^{b}_{A_{\text{in}}} \ket{1}^b_{B_{\text{in}}}$, we are assuming both bosons have the same spin state, since this state is clearly spatially symmetric under particle interchange, its spin state must also be symmetric, so that the product of the two (total wave function) remains symmetric. To make this point explicit we instead write $\ket{\uparrow}^{b}_{A_{\text{in}}} \ket{\uparrow}^b_{B_{\text{in}}}$ to show the spin state. [More properly we should write $\psi^b(x_{A_{\text{in}}})\psi^b(x_{B_{\text{in}}})\ket{\uparrow}^{b}_{A_{\text{in}}} \ket{\uparrow}^b_{B_{\text{in}}}$ but this notation is a bit cumbersome.] Thence for a 50:50 BS the HOM effect for bosons in the same spin state can be written, $\ket{\uparrow}^{b}_{A_{\text{in}}} \ket{\uparrow}^b_{B_{\text{in}}}\overset{BS}{\rightarrow} \ket{\uparrow \uparrow}^b_{A_{\text{out}}}\ket{0}^b_{B_{\text{out}}} + \ket{0}^b_{A_{\text{out}}}\ket{\uparrow \uparrow}^b_{B_{\text{out}}}$ , so both bosons \lq\lq{}stick\rq\rq{} at the beam splitter and emerge together. This effect arises as a direct result of the fact that the \textit{spatial} part of the wave function, which gives rise to an effective attraction at the BS, is symmetric. We could instead prepare an antisymmetric bosonic singlet spin state input $\ket{\uparrow}^{b}_{A_{\text{in}}} \ket{\downarrow}^b_{B_{\text{in}}}- \ket{\downarrow}^{b}_{A_{\text{in}}} \ket{\uparrow}^b_{B_{\text{in}}}$, in which case the spatial wavefunction must also be antisymmetric, $\psi^b(x_{A_{\text{in}}})\psi^b(x_{B_{\text{in}}})-\psi^b(x_{B_{\text{in}}})\psi^b(x_{A_{\text{in}}})$ , so that the product of the two remains symmetric.
In this case the particles behave fermionically as far as the spatial wavefunction overlap is concerned at the BS and they repel each other in an anti\textendash HOM effect, always exiting out separate ports and never together; $\ket{\uparrow}^{b}_{A_{\text{in}}} \ket{\downarrow}^b_{B_{\text{in}}}- \ket{\downarrow}^{b}_{A_{\text{in}}} \ket{\uparrow}^b_{B_{\text{in}}} \overset{BS}{\rightarrow} \ket{\uparrow}^{b}_{A_{\text{out}}} \ket{\downarrow}^b_{B_{\text{out}}}- \ket{\downarrow}^{b}_{A_{\text{out}}} \ket{\uparrow}^b_{B_{\text{out}}}$ \cite{loudon1,loudon2}. The reverse happens for fermions.
For example, the symmetric spin input state $\ket{\uparrow}^{f}_{A_{\text{in}}} \ket{\uparrow}^f_{B_{\text{in}}}$ is allowed for fermions only if the spatial wave function is antisymmetric, $\psi^f(x_{A_{\text{in}}})\psi^f(x_{B_{\text{in}}})-\psi^f(x_{B_{\text{in}}})\psi^f(x_{A_{\text{in}}})$, so that the entire wave function product remains antisymmetric. Since the spatial part governs the HOM effect they repel at the BS and obey an anti\textendash HOM effect and always exit out separate ports, consistent with the exclusion principle, namely $\ket{\uparrow}^{f}_{A_{\text{in}}} \ket{\uparrow}^f_{B_{\text{in}}} \overset{BS}{\rightarrow}\ket{\uparrow}^{f}_{A_{\text{out}}} \ket{\uparrow}^f_{B_{\text{out}}}$. However, we can make the fermions behave spatially bosonically by preparing them in a spin\textendash antisymmetric singlet input state, which then must be symmetric in the spatial part, and so they behave as bosons as far as the spatial overlap is concerned, and we recover the usual HOM effect, where now they always exit the same port together: $\ket{\uparrow}^{f}_{A_{\text{in}}} \ket{\downarrow}^f_{B_{\text{in}}}- \ket{\downarrow}^{f}_{A_{\text{in}}} \ket{\uparrow}^f_{B_{\text{in}}} \rightarrow \ket{\uparrow \downarrow}^{f}_{A_{\text{out}}} \ket{0}^f_{B_{\text{out}}}- \ket{\downarrow \uparrow}^{f}_{A_{\text{out}}} \ket{0}^f_{B_{\text{out}}}$. There is no violation of the exclusion principle as they also always exit with opposite spins. (This type of effective spatial attraction between fermions in a spin singlet state explains why the ground state of the neutral hydrogen molecule is a bound state.) It is clear then that even fermions can experience number\textendash path entanglement in a linear interferometer, although not to the same degree as bosons. However this entanglement is still sufficient to lead to an exponential growth in the fermionic Hilbert space, as we shall now argue.
Now we are ready to apply our resource counting argument to the fermionic case. For fermions the computationally complex regime may be accessed when the number of input particles $N$ is half the number of input modes $2L$. The dimension of the Hilbert space may be computed as before and turns out to be, for this example, $\binom{2L}{N}$. Choosing $N = L$, this also grows exponentially as a function of the resources. Following the same Stirling's approximation argument as above we get exactly the same exponential formula for the Hilbert space dimension as with bosons, namely $ 2^{2N}/ \sqrt{\pi N}$.
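As a quick numerical sanity check of this asymptotic, the approximation $\binom{2N}{N}\approx 2^{2N}/\sqrt{\pi N}$ is already accurate to roughly $1/(8N)$ at modest $N$:

```python
import math

N = 10
exact = math.comb(2 * N, N)                      # fermionic Hilbert space dimension at N = L
approx = 2 ** (2 * N) / math.sqrt(math.pi * N)   # Stirling asymptotic 2^(2N)/sqrt(pi N)
rel_err = abs(approx / exact - 1)                # about 1/(8N) for large N
```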
So, in general, in both the fermionic and bosonic case the Hilbert space dimension grows exponentially with respect to the resources: particle number and mode number. However, Feynman's arguments notwithstanding, an exponential growth in the Hilbert space is only sufficient but not necessary to attain classical non\textendash simulatability. For example, from the Gottesman\textendash Knill theorem, we can construct a Clifford\textendash algebra\textendash based quantum computer circuit that accesses an exponentially large Hilbert space but still can be simulated efficiently classically \cite{gottesman}. Sometimes there are shortcuts through Hilbert space, as we shall now argue is the case here for fermions but not for bosons.
In order to access these large Hilbert spaces in the interferometer one must require that multi\textendash particle interference take place at each beam splitter, where the particles must be indistinguishable, and the spatial wave function overlap determines the type of particle\textendash mode entanglement that will result. The overall bosonic wave function (spatial multiplied by spin) must be totally symmetric and the overall fermionic wave function must be totally antisymmetric at each row of BS, and so they must have these symmetries everywhere in space and particularly at the input. Now if we give up on a complete tabulation of the Hilbert state space at level $L$, due to its exponential growth, and treat the interferometer using a standard quantum optical input\textendash output formalism, there is an efficient way to take a given multi\textendash particle, multi\textendash mode input state at the top of the interferometer to the bottom of the interferometer. This method is called matrix transfer and is accomplished by encoding each level of BS transformations in terms of $L$ matrices of size $(2L) \times (2L)$ and then multiplying them together. This can be done in the order of $O(L^3)$ steps and so it is efficient.
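The matrix\textendash transfer bookkeeping can be sketched as follows. The layout here (50:50 beam splitters on alternating adjacent mode pairs) is an illustrative choice, not necessarily the exact geometry of the pachinko machine in the text. Because each layer matrix acts by sparse $2\times 2$ blocks, accumulating the product costs $O(L^2)$ per layer and $O(L^3)$ in total, consistent with the efficiency claim above.

```python
import numpy as np

def apply_bs_layer(T, offset):
    """Left-multiply the transfer matrix T by a layer of 50:50 beam
    splitters acting on adjacent mode pairs starting at `offset`.
    Each 2x2 block costs O(n) to apply, so one layer costs O(n^2)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # symmetric 50:50 BS
    for m in range(offset, T.shape[0] - 1, 2):
        T[m:m + 2] = bs @ T[m:m + 2]
    return T

L = 4                        # number of levels
n = 2 * L                    # number of output modes
T = np.eye(n, dtype=complex)
for level in range(L):       # L layers -> O(L^3) steps in total
    T = apply_bs_layer(T, level % 2)

# T is unitary: the interferometer only redistributes amplitudes.
```

The resulting $(2L)\times(2L)$ matrix $T$ is exactly the object whose permanent or determinant must subsequently be evaluated for sampling.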
We must now address the issue of computing the sampling output of the interferometer. While the one\textendash and two\textendash particle joint detection probabilities at the detectors may be computed efficiently, computation of the higher\textendash order joint probabilities rapidly becomes intractable \cite{mayer}. In order to compute the complete joint probability distribution, we must compute the determinant (if the input is fermionic) or the permanent (if the input is bosonic) of the $(2L) \times (2L)$ matrix found above. Using the method of Laplace decomposition for constructing the determinant of a matrix, one decomposes the large determinant into a sum over ever\textendash smaller determinants, appending alternating plus and minus signs to each in a checkerboard pattern. Constructing the permanent follows the same process, but all the signs are now plus.
However, for the determinant, there is a polynomial shortcut through the exponential Hilbert space\textemdash the row\textendash reduction method. Hence for fermions we may always compute the output for the most general input state efficiently. On the other hand, there is no known method such as row reduction to compute the permanent of an arbitrary matrix efficiently. The most efficient known protocols for computing the permanent are variants of the Laplace decomposition and all scale exponentially with the size of the matrix. In the lingo of computational complexity theory, the problem of computing the permanent is in the class of \lq\lq{}\#P\textendash hard\rq\rq{} (sharp\textendash P\textendash hard) problems. All problems in this class are very strongly believed to be intractable on any classical computer and also strongly suspected to be intractable even on a quantum computer \cite{aaronson}. While some matrices have a special form for which the permanent can be more easily computed, for an arbitrary interferometer setup the matrix has no such form that we can exploit to shortcut the computation of the permanent. We are left with the task of using our most efficient, general, exact permanent\textendash computing algorithm (Ryser's formula), which requires $O(2^{2L}L^2)$ steps \cite{ryser}. Finally, we have reached the snag that undermines our ability to efficiently compute the output and so renders simulation of the device classically intractable.
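Ryser's formula evaluates $\mathrm{perm}(A)=(-1)^{n}\sum_{S\subseteq\{1,\dots,n\}}(-1)^{|S|}\prod_{i=1}^{n}\sum_{j\in S}a_{ij}$ for an $n\times n$ matrix. A direct, unoptimized sketch is below (the Gray\textendash code variant that shaves a further factor of $n$ off the inner loop is omitted); the exponential loop over all $2^{n}$ column subsets is exactly the scaling quoted above.

```python
def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula,
    O(2^n * n^2) for an n x n matrix (no Gray-code optimization)."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):              # nonempty column subsets S
        sign = (-1) ** bin(mask).count("1")    # (-1)^|S|
        prod = 1
        for row in A:                          # prod_i sum_{j in S} a_ij
            prod *= sum(row[j] for j in range(n) if mask >> j & 1)
        total += sign * prod
    return (-1) ** n * total
```

By contrast, the determinant differs only in its alternating signs yet admits the polynomial $O(n^3)$ row\textendash reduction shortcut; no analogous shortcut is known for the permanent.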
\section{Conclusion}
In conclusion, we have shown that a multi\textendash mode linear optical interferometer with arbitrary Fock input states is very likely not simulatable classically. Our result is consistent with the argument of AA. Without invoking much complexity theory, we have argued this by explicitly constructing the Hilbert state space of a particular such interferometer and showed that the dimension grows exponentially with the size of the machine. The output state is highly entangled in the photon number and path degrees of freedom. We have also shown that simulating the device has radically different computational overheads in the Heisenberg versus the Schr\"{o}dinger picture, illustrating just how the two pictures are not in general computationally equivalent within this simple linear optical example. Finally we supplement our Hilbert space dimension argument with a discussion of the symmetry requirements of multi\textendash particle interferometers and particularly tie the simulation of the bosonic device to the computation of the permanent of a large matrix, which is strongly believed to be intractable. It is unknown (but thought unlikely) if such bosonic multi\textendash mode interferometers as these are universal quantum computers, but regardless they will certainly not be fault tolerant. As pointed out by Rohde \cite{rohde}, it is well known that Fock states of high photon number are particularly sensitive to loss \cite{huver}. They are also super\textendash sensitive to dephasing as well \cite{qasimi}. This implies that even if such a machine turns out to be universal it would require some type of error correction to run fault tolerantly. Nevertheless, such devices could be interesting tools for studying the relationship between multi\textendash photon interference and quantum information processing for small numbers of photons. 
If we choose each of the PS and BS transformations independent of each other, we have a mechanism to program the pachinko machine by steering the output into any of the possible output states. Even if universality turns out to be lacking we may very well be able to exploit this programmability to make a special purpose quantum simulator for certain physics problems such as frustrated spin systems \cite{britton}.
\begin{acknowledgments}
B.T.G would like to acknowledge support from the NPSC/NIST fellowship program. J.P.D. would like to acknowledge support from the NSF. This work is also supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D12PC00527. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. We also would like to acknowledge interesting and useful discussions with S. Aaronson, S. T. Flammia, K. R. Motes, and P. P. Rohde.
\end{acknowledgments}
\section{Introduction}
Video contributed 75\% of all Internet traffic in 2017, and this share is expected to reach 82\% by 2022 \cite{cisco2018cisco}. Compressing video into a smaller size is an urgent requirement to reduce transmission cost. Currently, Internet video is usually compressed into the H.264 \cite{wiegand2003overview} or H.265 format \cite{sullivan2012overview}. New video coding standards like H.266 and AV1 are upcoming. While the new standards promise an improvement in compression ratio, this improvement comes with multiplied encoding complexity. Indeed, all the standards in use or on the way follow the same framework, that is motion-compensated prediction, block-based transform, and handcrafted entropy coding. This framework has been inherited for over three decades, and development within it is gradually saturating.
Recently, a series of studies try to build brand-new video compression schemes on top of trained deep networks. These studies can be divided into two classes according to their targeted scenarios. As for the first class, Wu \etal proposed a recurrent neural network (RNN) based approach for interpolation-based video compression \cite{wu2018video}, where the motion information is achieved by the traditional block-based motion estimation and is compressed by an image compression method.
Later on, Djelouah \etal also proposed a method for interpolation-based video compression, where the interpolation model combines motion information compression and image synthesis, and the same auto-encoder is used for image and residual \cite{Djelouah_2019_ICCV}.
Interpolation-based compression uses the previous and the subsequent frames as references to compress the current frame, which is valid in random-access scenarios like playback.
However, it is less applicable for low-latency scenarios like live transmission.
The second class of studies target low-latency case and restrict the network to use merely temporally previous frames as references.
For example, Lu \etal proposed DVC, an end-to-end deep video compression model that jointly learns motion estimation, motion compression, motion compensation, and residual compression functions \cite{lu2018dvc}.
In this model, only one previous frame is used for motion compensation, which may not fully exploit the temporal correlation in video frames.
Rippel \etal proposed another video compression model, which maintains a latent state to memorize the information of the previous frames \cite{Rippel_2019_ICCV}. Due to the presence of the latent state, the model is difficult to train and sensitive to transmission error.
In this paper, we are interested in low-latency scenarios and propose an end-to-end learned video compression scheme. Our key idea is to use the previous \emph{multiple} frames as references. Compared to DVC, which uses only one reference frame, using multiple reference frames enhances the prediction in two ways. First, given multiple reference frames and associated multiple motion vector (MV) fields, it is possible to derive multiple hypotheses for predicting the current frame; combining the hypotheses provides an ensemble. Second, given multiple MV fields, it is possible to extrapolate so as to predict the following MV field; using the MV prediction can reduce the coding cost of the MV field. Therefore, our method is termed Multiple frames prediction for Learned Video Compression (M-LVC). Note that in \cite{Rippel_2019_ICCV}, the information of the previous multiple frames is \emph{implicitly} used to predict the current frame through the latent state; but in our scheme, the multiple frames prediction is \emph{explicitly} addressed. Accordingly, our scheme is more scalable (\ie it can use more or fewer references), more interpretable (\ie the prediction is fulfilled by motion compensation), and easier to train per our observation.
Moreover, in our scheme, we design a MV refinement network and a residual refinement network. Since we use a deep auto-encoder to compress MV (resp. residual), the compression is lossy and incurs error in the decoded MV (resp. residual). The MV (resp. residual) refinement network is used to compensate for the compression error and to enhance the reconstruction quality. We also take use of the multiple reference frames and/or associated multiple MV fields in the residual/MV refinement network.
In summary, our technical contributions include:
\begin{itemize}
\item
We introduce four effective modules into end-to-end learned video compression: multiple frame-based MV prediction, multiple frame-based motion compensation, MV refinement, and residual refinement. Ablation study demonstrates the gain achieved by these modules.
\item
We use a single rate-distortion loss function, \emph{together with} a step-by-step training strategy, to jointly optimize all the modules in our scheme.
\item
We conduct extensive experiments on different datasets with various resolutions and diverse content. Our method outperforms the existing learned video compression methods for low-latency mode. Our method performs better than H.265 in both PSNR and MS-SSIM.
\end{itemize}
\vspace{-0.3cm}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=.39\linewidth]{DVC_Framework.pdf}
}
\subfigure[]{
\includegraphics[width=.58\linewidth]{framework1.pdf}}
\caption{(a) The scheme of DVC \cite{lu2018dvc}. (b) Our scheme. Compared to DVC, our scheme has four new modules that are highlighted in blue. In addition, our Decoded Frame Buffer stores multiple previously decoded frames as references. Our Decoded MV Buffer also stores multiple decoded MV fields. Four reference frames are depicted in the figure, which is the default setting in this paper.}
\label{fig:framework}
\vspace{-0.3cm}
\end{figure*}
\section{Related Work}
\subsection{Learned Image Compression}
Recently, deep learning-based image compression methods have achieved great progress \cite{johnston2018improved,toderici2015variable,toderici2017full,balle2016end,balle2018variational,minnen2018joint}. Instead of relying on handcrafted techniques like in conventional image codecs, such as JPEG \cite{wallace1992jpeg}, JPEG2000 \cite{skodras2001jpeg}, and BPG \cite{bellardbpg}, new methods can learn a non-linear transform from data and estimate the probabilities required for entropy coding in an end-to-end manner. In \cite{johnston2018improved,toderici2015variable,toderici2017full}, Long Short Term Memory (LSTM) based auto-encoders are used to progressively encode the difference between the original image and the reconstructed image. In addition, there are some studies utilizing convolutional neural network (CNN) based auto-encoders to compress images \cite{balle2016end,balle2018variational,minnen2018joint,theis2017lossy}. For example, Ball\'{e} \etal \cite{balle2016end} introduced a non-linear activation function, generalized divisive normalization (GDN), into a CNN-based auto-encoder and estimated the probabilities of latent representations using a fully-connected network. This method outperformed JPEG2000, but it does not use an input-adaptive entropy model. Ball\'{e} \etal later in \cite{balle2018variational} introduced an input-adaptive entropy model by using a zero-mean Gaussian distribution to model each latent representation, where the standard deviations are predicted by a parametric transform. More recently, Minnen \etal \cite{minnen2018joint} further improved the above input-adaptive entropy model by integrating a context-adaptive model; their method outperformed BPG. In this paper, the modules for compressing the motion vector and the residual are based on the image compression methods in \cite{balle2016end,balle2018variational}. We remark that new progress on learned image compression models can be easily integrated into our scheme.
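The GDN nonlinearity mentioned above divides each channel by a learned combination of the squared activations of all channels, $y_i = x_i / (\beta_i + \sum_j \gamma_{ij} x_j^2)^{1/2}$. A minimal NumPy sketch of the forward pass follows; the parameter values here are placeholders, whereas in the actual models $\beta$ and $\gamma$ are learned and constrained positive.

```python
import numpy as np

def gdn(x, beta, gamma):
    """Generalized divisive normalization over channels.
    x: (C, H, W) activations; beta: (C,); gamma: (C, C).
    y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2)."""
    denom = np.sqrt(beta[:, None, None] +
                    np.tensordot(gamma, x ** 2, axes=([1], [0])))
    return x / denom

x = np.random.randn(4, 8, 8)
y = gdn(x, beta=np.ones(4), gamma=np.eye(4))  # reduces to x / sqrt(1 + x^2)
```

With identity $\gamma$ and unit $\beta$ the transform collapses to an elementwise squashing; the learned cross-channel terms are what give GDN its decorrelating power.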
\subsection{Learned Video Compression}
Compared with learned image compression, related work for learned video compression is much less. In 2018, Wu \etal proposed a RNN-based approach for interpolation-based video compression \cite{wu2018video}. They first use an image compression model to compress the key frames, and then generate the remaining frames using hierarchical interpolation. The motion information is extracted by traditional block-based motion estimation and encoded by a traditional image compression method. Han \etal proposed to use variational auto-encoders (VAEs) for compressing sequential data \cite{han2018deep}. Their method jointly learns to transform the original video into lower-dimensional representations and to entropy code these representations according to a temporally-conditioned probabilistic model. However, their model is limited to low-resolution video. More recently, Djelouah \etal proposed a scheme for interpolation-based video compression, where the motion and blending coefficients are directly decoded from latent representations and the residual is directly computed in the latent space \cite{Djelouah_2019_ICCV}. But the interpolation model and the residual compression model are not jointly optimized.
While the above methods are designed for random-access mode, some other methods have been developed for low-latency mode. For example, Lu \etal proposed to replace the modules in the traditional video compression framework with CNN-based components, \ie motion estimation, motion compression, motion compensation, and residual compression \cite{lu2018dvc}. Their model directly compresses the motion information, and uses only one previous frame as reference for motion compensation. Rippel \etal proposed to utilize the information of multiple reference frames through maintaining a latent state \cite{Rippel_2019_ICCV}. Due to the presence of the latent state, their model is difficult to train and sensitive to transmission error. Our scheme is also tailored for low-latency mode and we will compare to \cite{lu2018dvc} more specifically in the following.
\section{Proposed Method}
{\bf Notations.}
Let $\mathcal{V}=\{x_{1},x_{2},\dots,x_{t},\dots\}$ denotes the original video sequence. $x_{t}$, $\bar{x}_{t}$, and $\hat{x}_{t}$ represent the original, predicted, and decoded/reconstructed frames at time step $t$, respectively. $r_{t}$ is the residual between the original frame $x_{t}$ and the predicted frame $\bar{x}_{t}$. $\hat{r}_{t}'$ represents the residual reconstructed by the residual auto-encoder, and $\hat{r}_{t}$ is the final decoded residual. In order to remove the temporal redundancy between video frames, we use pixel-wise motion vector (MV) field based on optical flow estimation. $v_{t}$, $\bar{v}_{t}$, and $\hat{v}_{t}$ represent the original, predicted, and decoded MV fields at time step $t$, respectively. $d_{t}$ is the MV difference (MVD) between the original MV $v_{t}$ and the predicted MV $\bar{v}_{t}$. $\hat{d}_{t}$ is the MVD reconstructed by the MVD auto-encoder, and $\hat{v}_{t}'$ represents the reconstructed MV by adding $\hat{d}_{t}$ to $\bar{v}_{t}$. Since auto-encoder represents transform, the residual $r_{t}$ and the MVD $d_{t}$ are transformed to $y_{t}$ and $m_{t}$. $\hat{y}_{t}$ and $\hat{m}_{t}$ are the corresponding quantized versions, respectively.
\subsection{Overview of the Proposed Method}
Fig.\ \ref{fig:framework} presents the scheme of DVC \cite{lu2018dvc} and our scheme for a side-by-side comparison. Our scheme introduces four new modules, which are all based on multiple reference frames. The specific compression workflow of our scheme is introduced as follows.
{\bf Step 1. Motion estimation and prediction.}
The current frame $x_{t}$ and the reference frame $\hat{x}_{t-1}$ are fed into a motion estimation network (ME-Net) to extract the motion information $v_{t}$. In this paper, the ME-Net is based on the optical flow network FlowNet2.0 \cite{ilg2017flownet}, which is at the state of the art. Instead of directly encoding the pixel-wise MV field $v_{t}$ like in Fig.\ \ref{fig:framework} (a), which incurs a high coding cost, we propose to use a MV prediction network (MAMVP-Net) to predict the current MV field, which can largely remove the temporal redundancy of MV fields. More information is provided in Section \ref{MAMVP-Net}.
{\bf Step 2. Motion compression and refinement.}
After motion prediction, we use the MVD encoder-decoder network to encode the difference $d_{t}$ between the original MV $v_{t}$ and the predicted MV $\bar{v}_{t}$.
Here the network structure is similar to that in \cite{balle2016end}.
This MVD encoder-decoder network can further remove the spatial redundancy present in $d_{t}$. Specifically, $d_{t}$ is first non-linearly mapped into the latent representations $m_{t}$, and then quantized to $\hat{m}_{t}$ by a rounding operation. The probability distributions of $\hat{m}_{t}$ are then estimated by the CNNs proposed in \cite{balle2016end}. In the inference stage, $\hat{m}_{t}$ is entropy coded into a bit stream using the estimated distributions. Then, $\hat{d}_{t}$ can be reconstructed from the entropy decoded $\hat{m}_{t}$ by the non-linear inverse transform. Since the decoded $\hat{d}_{t}$ contains error due to quantization, especially at low bit rates, we propose to use a MV refinement network (MV Refine-Net) to reduce quantization error and improve the quality. After that, the refined MV $\hat{v}_{t}$ is cached in the decoded MV buffer for next frames coding. More details are presented in Section \ref{MVR-Net}.
{\bf Step 3. Motion compensation.}
After reconstructing the MV, we use a motion compensation network (MMC-Net) to obtain the predicted frame $\bar{x}_{t}$. Instead of only using one reference frame for motion compensation like in Fig.\ \ref{fig:framework} (a), our MMC-Net can generate a more accurate prediction frame by using multiple reference frames. More information is provided in Section \ref{MC-Net}.
{\bf Step 4. Residual compression and refinement.}
After motion compensation, the residual encoder-decoder network is used to encode the residual $r_{t}$ between the original frame $x_{t}$ and the predicted frame $\bar{x}_{t}$.
The network structure is similar to that in \cite{balle2018variational}.
This residual encoder-decoder network can further remove the spatial redundancy present in $r_{t}$ by a powerful non-linear transform, which is also used in DVC \cite{lu2018dvc} because of its effectiveness. Similar to the $d_{t}$ compression, the residual $r_{t}$ is first transformed into $y_{t}$, and then quantized to $\hat{y}_{t}$. The probability distributions of $\hat{y}_{t}$ are then estimated by the CNNs proposed in \cite{balle2018variational}. In the inference stage, $\hat{y}_{t}$ is entropy coded into a bit stream using the estimated distributions. Then, $\hat{r}_{t}'$ can be reconstructed from the entropy decoded $\hat{y}_{t}$ by the non-linear inverse transform. The decoded $\hat{r}_{t}'$ contains quantization error, so we propose to use a residual refinement network (Residual Refine-Net) to reduce quantization error and enhance the quality. The details are presented in Section \ref{RR-Net}.
{\bf Step 5. Frame reconstruction.}
After refining the residual, the reconstructed frame $\hat{x}_{t}$ can be obtained by adding $\hat{r}_{t}$ to the predicted frame $\bar{x}_{t}$. $\hat{x}_{t}$ is then cached in the decoded frame buffer for next frames coding.
\subsection{Multi-scale Aligned MV Prediction Network}
\label{MAMVP-Net}
\begin{figure}
\begin{center}
\subfigure[ ]
{
\includegraphics[width=.9\linewidth]{MFNet.pdf}
}
\subfigure[ ]
{
\includegraphics[width=.93\linewidth]{MAMVPNet_paper_2.pdf}
}
\end{center}
\caption{The multi-scale aligned MV prediction network. Conv(3,16,1) denotes the hyper-parameters of a convolutional layer: kernel size is 3$\times$3, output channel number is 16, and stride is 1. Each convolutional layer is equipped with a leaky ReLU except the one indicated by green. (a) Multi-scale feature extraction part. 2$\times$ down-sampling is performed by a convolutional layer with a stride of 2, and $i$ is 0, 1, 2. (b) MV prediction part at the $l$-th level. $l$ is 0, 1, 2, 3, and the network at the $3$-th level does not condition on the previous level.}
\label{fig:MAMVPNet}
\vspace{-0.3cm}
\end{figure}
To address large and complex motion between frames, we propose a Multi-scale Aligned MV Prediction Network (MAMVP-Net), shown in Fig.\ \ref{fig:MAMVPNet}. We use the previous three reconstructed MV fields, \ie $\hat{v}_{t-3}$, $\hat{v}_{t-2}$, and $\hat{v}_{t-1}$, to obtain the MV prediction $\bar{v}_{t}$. More or fewer MV fields may be used depending on the size of the Decoded MV Buffer.
As shown in Fig.\ \ref{fig:MAMVPNet} (a), we first generate a multi-level feature pyramid for each previous reconstructed MV field, using a multi-scale feature extraction network (four levels are used for example),
\vspace{-0.07cm}
\begin{equation}\label{multi-scale_extractor}
\{f_{\hat{v}_{t-i}}^{l}|l=0,1,2,3\}=H_{mf}(\hat{v}_{t-i}), i=1,2,3
\end{equation}
where $f_{\hat{v}_{t-i}}^{l}$ represents the features of $\hat{v}_{t-i}$ at the $l$-th level.
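As a concrete illustration, the resolution hierarchy of this feature extraction can be sketched in plain Python. This is a simplified stand-in, not the actual network: the learned stride-2 convolutions are replaced by 2$\times$2 average pooling, and the per-level convolutional layers of Fig.\ \ref{fig:MAMVPNet} (a) are omitted.

```python
def downsample2x(fmap):
    """2x2 average pooling as a stand-in for the learned stride-2 convolution."""
    h, w = len(fmap), len(fmap[0])
    return [[(fmap[y][x] + fmap[y][x + 1] + fmap[y + 1][x] + fmap[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def feature_pyramid(mv_field, levels=4):
    """Return features at levels 0..levels-1, level 0 being full resolution.

    In the paper each level also passes through learned conv layers; here we
    only model the spatial resolution hierarchy.
    """
    pyr = [mv_field]
    for _ in range(levels - 1):
        pyr.append(downsample2x(pyr[-1]))
    return pyr
```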
Second, considering the previous reconstructed MV fields contain compression error, we choose to warp the feature pyramids of $\hat{v}_{t-3}$ and $\hat{v}_{t-2}$, instead of the MV fields themselves, towards $\hat{v}_{t-1}$ via:
\vspace{-0.1cm}
\begin{equation}\label{warp_mvp}
\begin{split}
f_{\hat{v}_{t-3}}^{l,w} &= Warp(f_{\hat{v}_{t-3}}^{l},\hat{v}_{t-1}^{l}+Warp(\hat{v}_{t-2}^{l},\hat{v}_{t-1}^{l})) \\
f_{\hat{v}_{t-2}}^{l,w} &= Warp(f_{\hat{v}_{t-2}}^{l},\hat{v}_{t-1}^{l}),l=0,1,2,3
\end{split}
\end{equation}
where $f_{\hat{v}_{t-3}}^{l,w}$ and $f_{\hat{v}_{t-2}}^{l,w}$ are the warped features of $\hat{v}_{t-3}$ and $\hat{v}_{t-2}$ at the $l$-th level. $\hat{v}_{t-1}^{l}$ and $\hat{v}_{t-2}^{l}$ are the down-sampled versions of $\hat{v}_{t-1}$ and $\hat{v}_{t-2}$ at the $l$-th level. $Warp$ stands for bilinear interpolation-based warping. Note that feature domain warping has been adopted in previous work because of its effectiveness, such as in \cite{niklaus2018context} for video frame interpolation and in \cite{sun2018pwc} for optical flow generation.
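For reference, the $Warp$ operator used throughout can be sketched as a minimal single-channel version with border clamping; the actual implementation operates on batched multi-channel tensors, and the flow sign convention here (sampling at $x+dx$, $y+dy$) is our assumption.

```python
def bilinear_warp(src, flow):
    """Warp a 2D map `src` by a per-pixel flow field.

    flow[y][x] = (dx, dy): output(y, x) samples src at (x + dx, y + dy)
    with bilinear interpolation; out-of-range samples are clamped to the border.
    """
    h, w = len(src), len(src[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            sx = min(max(x + dx, 0.0), w - 1.0)
            sy = min(max(y + dy, 0.0), h - 1.0)
            x0, y0 = int(sx), int(sy)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = sx - x0, sy - y0
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```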
Third, we use a pyramid network to predict the current MV field from coarse to fine based on the feature pyramid of $\hat{v}_{t-1}$ and the warped feature pyramids of $\hat{v}_{t-2}$ and $\hat{v}_{t-3}$. As shown in Fig.\ \ref{fig:MAMVPNet} (b), the predicted MV field $\bar{v}^{l}_{t}$ and the predicted features $f_{\bar{v}_{t}}^{l}$ at the $l$-th level can be obtained via:
\begin{equation}\label{predict_mvp}
\bar{v}^{l}_{t}, f_{\bar{v}_{t}}^{l} = H_{mvp}(\bar{v}^{l+1,u}_{t},f_{\bar{v}_{t}}^{l+1,u},f_{\hat{v}_{t-1}}^{l},f_{\hat{v}_{t-2}}^{l,w},f_{\hat{v}_{t-3}}^{l,w})
\end{equation}
where $\bar{v}^{l+1,u}_{t}$ and $f_{\bar{v}_{t}}^{l+1,u}$ are the 2$\times$ up-sampled MV field and features from those at the previous $(l+1)$-th level using bilinear interpolation. This process is repeated until the $0$-th level, resulting in the final predicted MV field $\bar{v}_{t}$.
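The coarse-to-fine recursion above can be sketched as follows. The learned network $H_{mvp}$ is abstracted as a callback, nearest-neighbour upsampling stands in for bilinear, and doubling the motion magnitudes at each finer level is our assumption (standard in flow pyramids, though not stated explicitly in the text).

```python
def upsample2x_mv(mv):
    """Nearest-neighbour 2x upsampling of an MV field (the paper uses bilinear);
    motion magnitudes are doubled to match the finer sampling grid."""
    out = []
    for row in mv:
        up_row = []
        for (dx, dy) in row:
            up_row += [(2 * dx, 2 * dy)] * 2
        out.append(up_row)
        out.append(list(up_row))
    return out

def coarse_to_fine_predict(pyramids, predict_level, levels=4):
    """Skeleton of the level-by-level MV prediction.

    `pyramids` holds the (aligned) feature pyramids; `predict_level` is a
    placeholder for the learned network H_mvp at one level and must return
    the refined MV field for that level.
    """
    mv = None  # the coarsest level does not condition on a previous one
    for l in range(levels - 1, -1, -1):
        mv_up = upsample2x_mv(mv) if mv is not None else None
        mv = predict_level(l, mv_up, pyramids)
    return mv
```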
\subsection{MV Refinement Network}
\label{MVR-Net}
After MVD compression, we can reconstruct the MV field $\hat{v}_{t}'$ by adding the decoded MVD $\hat{d}_{t}$ to the predicted MV $\bar{v}_{t}$. But $\hat{v}_{t}'$ contains compression error caused by quantization, especially at low bit rates. For example, we found there are many zeros in $\hat{d}_{t}$, since zero MVDs require fewer bits to encode. A similar result is also reported in DVC \cite{lu2018dvc} when compressing the MV field. But such zero MVDs lead to inaccurate motion compensation. Therefore, we propose to use an MV refinement network (MV Refine-Net) to reduce compression error and improve the accuracy of the reconstructed MV. As shown in Fig.\ \ref{fig:framework} (b), we use the previous three reconstructed MV fields, \ie $\hat{v}_{t-3}$, $\hat{v}_{t-2}$, and $\hat{v}_{t-1}$, and the reference frame $\hat{x}_{t-1}$ to refine $\hat{v}_{t}'$. Using multiple previous reconstructed MV fields allows a more accurate prediction of the current MV, which in turn helps the refinement. The reason for using $\hat{x}_{t-1}$ is that the subsequent motion compensation module depends on the refined $\hat{v}_{t}$ and $\hat{x}_{t-1}$ to obtain the predicted frame, so $\hat{x}_{t-1}$ can serve as a guidance for refining $\hat{v}_{t}'$. According to our experimental results (Section \ref{Ablation}), feeding $\hat{x}_{t-1}$ into the MV refinement network does improve the compression efficiency. More details of the MV Refine-Net can be found in the supplementary.
\subsection{Motion Compensation Network with Multiple Reference Frames}
\label{MC-Net}
\begin{figure}
\begin{center}
\includegraphics[width=.85\linewidth]{MCNet.pdf}
\end{center}
\caption{The motion compensation network. Each convolutional layer outside residual blocks is equipped with a leaky ReLU except the last layer (indicated by green). Each residual block consists of two convolutional layers, which are configured as follows: kernel size is 3$\times$3, output channel number is 64, the first layer has ReLU.}
\label{fig:MCNet}
\vspace{-0.3cm}
\end{figure}
In traditional video coding schemes, motion compensation using multiple reference frames is adopted in H.264/AVC \cite{wiegand2003overview} and inherited by the subsequent standards. For example, some coding blocks may use a weighted average of two motion-compensated predictions from different reference frames, which greatly improves the compression efficiency. Besides, in recent work on video super-resolution, methods based on multiple frames are also observed to perform much better than those based on a single frame \cite{wang2019edvr,Haris_2019_CVPR,Li_2019_CVPR}. Therefore, we propose to use multiple reference frames for motion compensation in our scheme.
The network architecture is shown in Fig.\ \ref{fig:MCNet}. In this module, we use the previous four reference frames, \ie $\hat{x}_{t-4}$, $\hat{x}_{t-3}$, $\hat{x}_{t-2}$ and $\hat{x}_{t-1}$ to obtain the predicted frame $\bar{x}_{t}$. More or fewer reference frames can be used depending on the size of the Decoded Frame Buffer.
First, we use a two-layer CNN to extract the features of each reference frame. Then, the extracted features and $\hat{x}_{t-1}$ are warped towards the current frame via:
\vspace{-0.3cm}
\begin{equation}\label{warp_frames}
\begin{split}
\hat{v}^{w}_{t-k} &= Warp(\hat{v}_{t-k},\hat{v}_{t}+\sum_{l=1}^{k-1}\hat{v}^{w}_{t-l}), k=1,2,3\\
\hat{x}^{w}_{t-1} &= Warp(\hat{x}_{t-1},\hat{v}_{t})\\
f_{\hat{x}_{t-i}}^{w} &= Warp(f_{\hat{x}_{t-i}},\hat{v}_{t}+\sum_{k=1}^{i-1}\hat{v}^{w}_{t-k}), i=1,2,3,4 \\
\end{split}
\end{equation}
where $\hat{v}^{w}_{t-k}$ is the warped version of $\hat{v}_{t-k}$ towards $\hat{v}_{t}$, and $f_{\hat{x}_{t-i}}^{w}$ is the warped feature of $\hat{x}_{t-i}$. Finally, as Fig.\ \ref{fig:MCNet} shows, the warped features and frames are fed into a CNN to obtain the predicted frame,
\begin{equation}\label{res-equation}
\bar{x}_{t} = H_{mc}(f_{\hat{x}_{t-4}}^{w}, f_{\hat{x}_{t-3}}^{w}, f_{\hat{x}_{t-2}}^{w}, f_{\hat{x}_{t-1}}^{w}, \hat{x}^{w}_{t-1}) + \hat{x}^{w}_{t-1}
\end{equation}
where the network is based on the U-Net structure \cite{ronneberger2015u} and integrates multiple residual blocks.
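The flow-chaining pattern in Eq.\ (\ref{warp_frames}) reduces, under the simplifying assumption of one global motion vector per field (so that warping a uniform vector field is the identity), to a cumulative sum over the reference list. This sketch only illustrates the chaining logic, not the per-pixel warping.

```python
def chain_motion(v_t, past_mvs):
    """Chain motion back through the reference list, simplified to one global
    (dx, dy) vector per MV field.

    v_t maps frame t to t-1; past_mvs = [v_{t-1}, v_{t-2}, v_{t-3}], where
    v_{t-k} maps frame t-k to t-k-1. Returns the accumulated flows used to
    warp the features of x_{t-1}, x_{t-2}, x_{t-3}, x_{t-4} towards frame t.
    """
    flows = [v_t]
    for v in past_mvs:
        flows.append((flows[-1][0] + v[0], flows[-1][1] + v[1]))
    return flows  # flows[i] warps x_{t-1-i} towards frame t
```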
\subsection{Residual Refinement Network}
\label{RR-Net}
After residual compression, the reconstructed residual $\hat{r}_{t}'$ contains compression error, especially at low bit rates. Similar to the case of MV Refine-Net, we propose a residual refinement network (Residual Refine-Net) to reduce compression error and improve quality. As shown in Fig.\ \ref{fig:framework} (b), this module utilizes the previous four reference frames, \ie $\hat{x}_{t-4}$, $\hat{x}_{t-3}$, $\hat{x}_{t-2}$ and $\hat{x}_{t-1}$, and the predicted frame $\bar{x}_{t}$ to refine $\hat{r}_{t}'$. More details of this network are provided in the supplementary.
\subsection{Training Strategy}
\label{Training_strategy}
{\bf Loss Function.}
Our scheme aims to jointly optimize the number of encoding bits and the distortion between the original frame $x_{t}$ and the reconstructed frame $\hat{x}_{t}$. We use the following loss function for training,
\begin{equation}\label{loss-equation}
J = D + \lambda R = d(x_{t},\hat{x}_{t}) + \lambda (R_{mvd}+R_{res})
\end{equation}
where $d(x_{t},\hat{x}_{t})$ is the distortion between $x_{t}$ and $\hat{x}_{t}$. We use the mean squared error (MSE) as distortion measure in our experiments. $R_{mvd}$ and $R_{res}$ represent the bit rates used for encoding the MVD $d_{t}$ and the residual $r_{t}$, respectively. During training, we do not perform real encoding but instead estimate the bit rates from the entropy of the corresponding latent representations $\hat{m}_{t}$ and $\hat{y}_{t}$. We use the CNNs in \cite{balle2016end} and \cite{balle2018variational} to estimate the probability distributions of $\hat{m}_{t}$ and $\hat{y}_{t}$, respectively, and then obtain the corresponding entropy. Since $\hat{m}_{t}$ and $\hat{y}_{t}$ are the quantized representations and the quantization operation is not differentiable, we use the method proposed in \cite{balle2016end}, where the quantization operation is replaced by adding uniform noise during training.
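A minimal sketch of this training objective follows, assuming a caller-supplied density model for the latents (the actual models are the CNNs of the cited works) and flattened single-channel signals for brevity.

```python
import math
import random

def rd_loss(x, x_hat, bits_mvd, bits_res, lam, num_pixels):
    """Rate-distortion cost J = MSE + lambda * rate, with rate in bits per pixel."""
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return mse + lam * (bits_mvd + bits_res) / num_pixels

def soft_quantize(y):
    """Training-time proxy for rounding: additive U(-0.5, 0.5) noise (Balle et al. 2016)."""
    return [v + random.uniform(-0.5, 0.5) for v in y]

def estimated_bits(y_tilde, pmf):
    """Cross-entropy estimate of the bits needed to encode y_tilde, given a
    density model pmf(v) -> probability mass of the (noisy) symbol v."""
    return sum(-math.log2(max(pmf(v), 1e-9)) for v in y_tilde)
```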
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=2.0in]{ClassUVG_PSNR.pdf}}
\subfigure[]{
\includegraphics[width=2.0in]{ClassB_PSNR.pdf}}
\subfigure[]{
\includegraphics[width=2.0in]{ClassD_PSNR.pdf}}
\subfigure[]{
\includegraphics[width=2.0in]{ClassUVG_MSSSIM.pdf}}
\subfigure[]{
\includegraphics[width=2.0in]{ClassB_MSSSIM.pdf}}
\subfigure[]{
\includegraphics[width=2.0in]{ClassD_MSSSIM.pdf}}
\caption{\textbf{Overall performance}. The compression results on the three datasets using H.264 \cite{wiegand2003overview}, H.265 \cite{sullivan2012overview}, DVC \cite{lu2018dvc}, Wu's method \cite{wu2018video} and the proposed method. We directly use the results reported in \cite{lu2018dvc} and \cite{wu2018video}. The results of H.264 and H.265 are cited from \cite{lu2018dvc}. Wu \cite{wu2018video} did not report on HEVC Class B and Class D. Top row: PSNR. Bottom row: MS-SSIM.}
\label{fig_RD Curve}
\vspace{-0.3cm}
\end{figure*}
{\bf Progressive Training.}
We tried to train the entire network from scratch, \ie with all the modules except the ME-Net randomly initialized (the ME-Net is readily initialized with FlowNet2.0). The results are not satisfactory, as the resulting bitrates are unbalanced: too little rate for the MVD and too much for the residual, making the compression inefficient (see the experimental results in Section \ref{Ablation}).
To address this problem, we use a step-by-step training strategy. First, we train the network including only the ME-Net and MMC-Net, while the ME-Net is the pre-trained model in \cite{ilg2017flownet} and remains unchanged. Then, the MVD and residual encoder-decoder networks are added for training, while the parameters of ME-Net and MMC-Net are fixed. After that, all of the above four modules are jointly fine-tuned. Next, we add the MAMVP-Net, MV Refine-Net and Residual Refine-Net one by one to the training system. Each time when adding a new module, we fix the previously trained modules and learn the new module specifically, and then jointly fine-tune all of them. It is worth noting that many previous studies that use step-by-step training usually adopt a different loss function for each step (\eg \cite{reda2018sdc,Yang_2018_CVPR}), while the loss function remains the same rate-distortion cost in our method.
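The schedule above can be sketched as follows; the stage grouping reflects our reading of the text, and the module identifiers are illustrative names rather than actual code symbols.

```python
def progressive_schedule():
    """Step-by-step training plan: each stage first trains only the newly
    added module(s) with earlier modules frozen, then jointly fine-tunes
    everything added so far, using the same rate-distortion loss throughout."""
    stages = [["MMC-Net"],                       # ME-Net is pre-trained and fixed
              ["MVD-Codec", "Residual-Codec"],
              ["MAMVP-Net"],
              ["MVRefine-Net"],
              ["ResidualRefine-Net"]]
    plan, active = [], ["ME-Net"]
    for new in stages:
        active = active + new
        plan.append(("train-new", list(new)))        # earlier modules frozen
        plan.append(("finetune-all", list(active)))  # joint fine-tuning
    return plan
```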
\vspace{-0.08cm}
\section{Experiments}
\subsection{Experimental Setup}
\vspace{-0.08cm}
{\bf Training Data.}
We use the Vimeo-90k dataset \cite{xue2019video}, and crop the large and long video sequences into 192$\times$192, 16-frame video clips.
{\bf Implementation Details.}
In our experiments, the coding structure is IPPP\dots~and all the P-frames are compressed by the same network. We do not implement a single image compression network but use H.265 to compress the only I-frame. For the first three P-frames, whose reference frames are less than four, we duplicate the furthest reference frame to achieve the required four frames. We train four models with different $\lambda$ values ($16, 24, 40, 64$) for multiple coding rates. The Adam optimizer \cite{kingma2014adam} with the momentum of $0.9$ is used. The initial learning rate is $5\times 10^{-5}$ for training newly added modules, and $1\times 10^{-5}$ in the fine-tuning stages. The learning rate is reduced by a factor of $2$ five times during training. Batch size is $8$ (\ie $8$ cropped clips). The entire scheme is implemented by TensorFlow and trained/tested on a single Titan Xp GPU.
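The duplication of the furthest reference frame for the first P-frames can be sketched as a small buffer-padding helper (an illustrative function, not part of the released code):

```python
def padded_references(decoded_buffer, n_refs=4):
    """Return the n_refs most recent decoded frames, duplicating the furthest
    available frame when the buffer is still short (the first three P-frames)."""
    recent = decoded_buffer[-n_refs:]
    while len(recent) < n_refs:
        recent = [recent[0]] + recent  # repeat the furthest frame
    return recent
```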
{\bf Testing Sequences.}
The HEVC common test sequences, including 16 videos of different resolutions known as Classes B, C, D, E \cite{bossen2011common}, are used for evaluation. We also use the seven sequences at 1080p from the UVG dataset \cite{uvgdata}.
{\bf Evaluation Metrics.}
Both PSNR and MS-SSIM \cite{wang2003multiscale} are used to measure the quality of the reconstructed frames in comparison to the original frames. Bits per pixel (bpp) is used to measure the number of bits for encoding the representations including MVD and residual.
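For clarity, the two rate-distortion axes can be computed as below (a sketch for flattened single-channel frames; real evaluation operates on full frames, and MS-SSIM requires a complete multi-scale implementation):

```python
import math

def psnr(orig, recon, max_val=255.0):
    """PSNR in dB between two equally sized pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def bits_per_pixel(total_bits, width, height):
    """bpp: all bits spent on MVD and residual, divided by the pixel count."""
    return total_bits / (width * height)
```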
\vspace{-0.08cm}
\subsection{Experimental Results}
To demonstrate the advantage of our proposed scheme, we compare with existing video codecs, in particular H.264 \cite{wiegand2003overview} and H.265 \cite{sullivan2012overview}. For easy comparison with DVC, we directly cite the compression results of H.264 and H.265 reported in \cite{lu2018dvc}. The results of H.264 and H.265 default settings can be found in the supplementary.
In addition, we compare with several state-of-the-art learned video compression methods, including Wu\_ECCV2018 \cite{wu2018video} and DVC \cite{lu2018dvc}. To the best of our knowledge, DVC \cite{lu2018dvc} reports the best compression performance in PSNR among the learning-based methods for low-latency mode.
Fig.\ \ref{fig_RD Curve} presents the compression results on the UVG dataset and the HEVC Class B and Class D datasets. It can be observed that our method outperforms the learned video compression methods DVC \cite{lu2018dvc} and Wu\_ECCV2018 \cite{wu2018video} by a large margin. On the HEVC Class B dataset, our method achieves about 1.2dB coding gain over DVC at the same bpp of 0.226. When compared with the traditional codec H.265, our method achieves better compression performance in both PSNR and MS-SSIM, with the gain in MS-SSIM being more significant. It is worth noting that our model is trained with the MSE loss, but the results show that it also performs well in terms of MS-SSIM.
More experimental results, including those on HEVC Class C and Class E and comparisons to other methods \cite{Djelouah_2019_ICCV,Rippel_2019_ICCV}, are given in the supplementary.\vspace{-0.08cm}
\subsection{Ablation Study}
\label{Ablation}
\begin{figure}
\begin{center}
\includegraphics[width=.8\linewidth]{ClassD_PSNR_Ablation_RF.pdf}
\end{center}
\caption{The compression results of using two or three reference frames in our trained models on the HEVC Class D dataset. The proposed model uses four by default and DVC \cite{lu2018dvc} uses only one.}
\label{fig:RF_Ablation}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.8\linewidth]{ClassB_PSNR_Ablation.pdf}
\end{center}
\caption{{\bf Ablation study.} The compression results of the following settings on the HEVC Class B dataset. (1) \texttt{Our Baseline}: The network contains ME-Net, MC-Net with only one reference frame, and the MV and residual encoder-decoder networks. (2) \texttt{Add MAMVP-Net}: The MAMVP-Net is added to (1). (3) \texttt{Add MVRefine-Net}: The MV Refine-Net is added to (2). (4) \texttt{Add MVRefine-Net-0}: $f_{\hat{x}_{t-1}}$ is removed from the MV Refine-Net in (3). (5) \texttt{Add MMC-Net}: The MC-Net with one reference frame in (3) is replaced by the MMC-Net with multiple reference frames. (6) \texttt{Proposed}: The Residual Refine-Net is added to (5). (7) \texttt{Scratch}: Training (6) from scratch.}
\label{fig:Ablation}
\vspace{-0.55cm}
\end{figure}
{\bf On the Number of Reference Frames.}
The number of reference frames is an important hyper-parameter in our scheme. Our default is four reference frames and their associated MV fields, which is also the default value in the H.265 reference software. To evaluate the effect of using fewer reference frames, we conduct a comparison experiment using two or three reference frames with our trained models. Fig.\ \ref{fig:RF_Ablation} presents the compression results on the HEVC Class D dataset. As observed, the marginal gain of each additional reference frame diminishes.
{\bf Multi-scale Aligned MV Prediction Network.}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=.17\linewidth]{image_prev_rec_save.pdf}
}
\subfigure[]{
\includegraphics[width=.17\linewidth]{image_cur_ori_save.pdf}}
\subfigure[]{
\includegraphics[width=.17\linewidth]{flow_ori_save.pdf}}
\subfigure[]{
\includegraphics[width=.17\linewidth]{flow_pred_save.pdf}}
\subfigure[]{
\includegraphics[width=.17\linewidth]{flow_diff_save.pdf}}
\caption{Visualized results of compressing the Kimono sequence using \texttt{Add MAMVP-Net} model with $\lambda =16$. (a) The reference frame $\hat{x}_{5}$. (b) The original frame $x_{6}$. (c) The original MV $v_{6}$. (d) The predicted MV $\bar{v}_{6}$. (e) The MVD $d_{6}$.}
\label{fig_mvp}
\end{figure*}
\begin{table*}
\caption{Average running time per frame of using our different models for a 320$\times$256 sequence.}
\label{Time}
\center
\begin{tabular}{c|c|c|c|c|c}
\hline
Model & \texttt{Our Baseline} & \texttt{Add MAMVP-Net} & \texttt{Add MVRefine-Net} & \texttt{Add MMC-Net} & \texttt{Proposed} \\
\hline
Encoding Time & 0.25s &0.31s &0.34s &0.35s & 0.37s \\
\hline
Decoding Time & 0.05s &0.11s &0.14s &0.15s & 0.17s \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[width=.40\linewidth]{Flow_Ori_Mag_Distribution.pdf}}
\subfigure[]{
\includegraphics[width=.40\linewidth]{Flow_Diff_Mag_Distribution.pdf}}
\end{center}
\caption{The distribution of MV magnitude. (a) The MV of Fig.\ \ref{fig_mvp} (c). (b) The MVD of Fig.\ \ref{fig_mvp} (e).}
\label{fig:Flow_dis}
\end{figure}
To evaluate its effectiveness, we perform a comparison experiment. The anchor is the network containing the ME-Net, the MC-Net with only one reference frame, and the MV and residual encoder-decoder networks. Here, the MC-Net with only one reference frame is almost identical to the MMC-Net shown in Fig.\ \ref{fig:MCNet}, except for removing $f_{\hat{x}_{t-4}}^{w}$, $f_{\hat{x}_{t-3}}^{w}$, $f_{\hat{x}_{t-2}}^{w}$ from the inputs. This anchor is denoted by \texttt{Our Baseline} (the green curve in Fig.\ \ref{fig:Ablation}). The tested network is constructed by adding the MAMVP-Net to \texttt{Our Baseline}, and is denoted by \texttt{Add MAMVP-Net} (the red curve in Fig.\ \ref{fig:Ablation}). It can be observed that the MAMVP-Net improves the compression efficiency significantly, achieving about $0.5\sim0.7$ dB gain at the same bpp.
In Fig.\ \ref{fig_mvp}, we visualize the intermediate results when compressing the Kimono sequence using \texttt{Add MAMVP-Net} model. Fig.\ \ref{fig:Flow_dis} shows the corresponding probability distributions of MV magnitudes for $v_{6}$ and $d_{6}$. It is observed that the magnitude of MV to be encoded is greatly reduced by using our MAMVP-Net. Quantitatively, it needs 0.042bpp for encoding the original MV $v_6$ using \texttt{Our Baseline} model, while it needs 0.027bpp for encoding the MVD $d_6$ using \texttt{Add MAMVP-Net} model. Therefore, our MAMVP-Net can largely reduce the bits for encoding MV and thus improve the compression efficiency. More ablation study results can be found in the supplementary.
{\bf MV Refinement Network.}
To evaluate the effectiveness, we perform another experiment by adding the MV Refine-Net to \texttt{Add MAMVP-Net}, leading to \texttt{Add MVRefine-Net} (the cyan curve in Fig.\ \ref{fig:Ablation}). Compared with the compression results of \texttt{Add MAMVP-Net}, at the same bpp, the MV Refine-Net achieves a compression gain of about 0.15dB at high bit rates and about 0.4dB at low bit rates. This is understandable as the compression error is more severe when the bit rate is lower. In addition, to evaluate the effectiveness of introducing $\hat{x}_{t-1}$ into the MV Refine-Net, we perform an experiment by removing $f_{\hat{x}_{t-1}}$ from the inputs of the MV Refine-Net (denoted by \texttt{Add MVRefine-Net-0}, the black curve in Fig.\ \ref{fig:Ablation}). We can observe that feeding $\hat{x}_{t-1}$ into the MV Refine-Net provides about 0.1dB gain consistently. Visual results of the MV Refine-Net can be found in the supplementary.
{\bf Motion Compensation Network with Multiple Reference Frames.}
To verify the effectiveness, we perform an experiment by replacing the MC-Net (with only one reference frame) in \texttt{Add MVRefine-Net} with the proposed MMC-Net using multiple reference frames (denoted by \texttt{Add MMC-Net}, the magenta curve in Fig.\ \ref{fig:Ablation}). We can observe that using multiple reference frames in MMC-Net provides about $0.1\sim0.25$dB gain. Visual results of the MMC-Net can be found in the supplementary.
{\bf Residual Refinement Network.}
We conduct another experiment to evaluate its effectiveness by adding the Residual Refine-Net to \texttt{Add MMC-Net} (denoted by \texttt{Proposed}, the blue curve in Fig.\ \ref{fig:Ablation}). We observe that the Residual Refine-Net provides about 0.3dB gain at low bit rates and about 0.2dB gain at high bit rates. Similar to MV Refine-Net, the gain of Residual Refine-Net is higher at lower bit rates because of more compression error. Visual results of the Residual Refine-Net can be found in the supplementary.
{\bf Step-by-step Training Strategy.}
To verify the effectiveness, we perform an experiment by training the \texttt{Proposed} model from scratch except the ME-Net initialized by the pre-trained model in \cite{ilg2017flownet} (denoted by \texttt{Scratch}, the yellow curve in Fig.\ \ref{fig:Ablation}). We can observe that the compression results are very poor. Quantitatively, when compressing the Kimono sequence using the \texttt{Scratch} model with $\lambda =16$, the bitrates are very unbalanced: 0.0002bpp for the MVD and 0.2431bpp for the residual. Our step-by-step training strategy overcomes this problem.
{\bf Encoding and Decoding Time.}
We use a single Titan Xp GPU to test the inference speed of our different models. The running time is presented in Table \ref{Time}. We can observe that the MAMVP-Net adds more encoding/decoding time than the other newly added modules. For a 352$\times$256 sequence, the overall encoding (resp.\ decoding) speed of our \texttt{Proposed} model is 2.7fps (resp.\ 5.9fps). Optimizing the network structure for computational efficiency to achieve real-time decoding remains our future work.
\vspace{-0.18cm}
\section{Conclusion}
In this paper, we have proposed an end-to-end learned video compression scheme for low-latency scenarios. Our scheme can effectively remove temporal redundancy by utilizing multiple reference frames for both motion compensation and motion vector prediction. We also introduce the MV and residual refinement modules to compensate for the compression error and to enhance the reconstruction quality. All the modules in our scheme are jointly optimized by using a single rate-distortion loss function, together with a step-by-step training strategy. Experimental results show that our method outperforms the existing learned video compression methods for low-latency mode. In the future, we anticipate that advanced entropy coding model can further boost the compression efficiency.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Our current state of knowledge about how stars and planets are
formed comes from an intertwined web of observations (spanning the
electromagnetic spectrum) and theoretical work.
The early stages of low-mass star formation comprise a wide
array of inferred physical processes, including disk accretion,
various kinds of outflow, and magnetohydrodynamic (MHD)
activity on time scales ranging from hours to millennia
(see, e.g., Lada 1985; Bertout 1989; Appenzeller \& Mundt 1989;
Hartmann 2000; K\"{o}nigl \& Pudritz 2000; McKee \& Ostriker 2007;
Shu et al.\ 2007).
A key recurring issue is that there is a great deal of {\em mutual
interaction and feedback} between the star and its circumstellar
environment.
This interaction can determine how rapidly the star rotates,
how active the star appears from radio to X-ray wavelengths, and
how much mass and energy the star releases into its
interplanetary medium.
A key example of circumstellar feedback is the magnetospheric
accretion paradigm for classical T Tauri stars
(Lynden-Bell \& Pringle 1974; Uchida \& Shibata 1984;
Camenzind 1990; K\"{o}nigl 1991).
Because of strong ($\sim$1 kG) stellar magnetic fields, the evolving
equatorial accretion disk does not penetrate to the stellar surface,
but instead is stopped by the stellar magnetosphere.
Accretion is thought to proceed via ballistic infall along magnetic
flux tubes threading the inner disk, leading to shocks and hot spots
on the surface.
The primordial accretion disk is dissipated gradually as the star
enters the weak-lined T Tauri star phase, with a likely transition
to a protoplanetary dust/debris disk.
Throughout these stages, solar-mass stars are inferred to exhibit
some kind of wind or jet-like outflow.
There are several possible explanations of how and where the outflows
arise, including extended disk winds, X-winds, impulsive
(plasmoid-like) ejections, and ``true'' stellar winds (e.g.,
Paatz \& Camenzind 1996; Calvet 1997; K\"{o}nigl \& Pudritz 2000;
Dupree et al.\ 2005; Edwards et al.\ 2006; Ferreira et al.\ 2006;
G\'{o}mez de Castro \& Verdugo 2007; Cai et al.\ 2008).
Whatever their origin, the outflows produce observational
diagnostics that indicate mass loss rates exceeding the Sun's present
mass loss rate by factors of $10^3$ to $10^6$.
It is of some interest to evaluate how much of the observed outflow
can be explained solely with {\em stellar} winds, since these flows
are locked to the star and thus are capable of removing angular
momentum from the system.
Recent work by Matt \& Pudritz (2005, 2007, 2008)
has suggested that if there is a stellar wind with a sustained mass
loss rate about 10\% of the accretion rate, the wind can carry away
enough angular momentum to keep T Tauri stars from being spun up
unrealistically by the accretion.
Despite many years of study, the dominant physical processes that
accelerate winds from cool stars have not yet been identified
conclusively.
For many stars, the large-scale energetics of the system---i.e., the
luminosity and the gravitational potential---seem to determine the
overall magnitude of the mass loss (Reimers 1975; 1977;
Schr\"{o}der \& Cuntz 2005).
Indeed, for the most luminous cool stars, radiation pressure seems
to provide a direct causal link between the luminosity $L_{\ast}$
and the mass loss rate $\dot{M}_{\rm wind}$ (e.g.,
Gail \& Sedlmayr 1987; H\"{o}fner 2005).
However, for young solar-mass stars, other mediating processes
(such as coronal heating, waves, or time-variable magnetic ejections)
are more likely to connect the properties of the stellar interior
to the surrounding outflowing wind.
For example, magnetohydrodynamic (MHD) waves have been studied for
several decades as a likely way for energy to be transferred
from late-type stars to their winds (Hartmann \& MacGregor 1980;
DeCampli 1981; Airapetian et al.\ 2000;
Falceta-Gon\c{c}alves et al.\ 2006; Suzuki 2007).
In parallel with the above work in improving our understanding of
stellar winds, there has been a great deal of progress toward
identifying and characterizing the processes that produce the
{\em Sun's} corona and wind.
It seems increasingly clear that closed magnetic loops in the
low solar corona are heated by small-scale, intermittent magnetic
reconnection that is driven by the continual stressing of their
footpoints by convective motions (e.g., Aschwanden 2006; Klimchuk 2006).
The open field lines that connect the Sun to interplanetary space,
though, appear to be energized by the dissipation of waves and
turbulent motions that originate at the stellar surface
(Tu \& Marsch 1995; Cranmer 2002; Suzuki 2006; Kohl et al.\ 2006).
Parker's (1958) classic paradigm of solar wind acceleration via
gas pressure in a hot ($T \sim 10^{6}$ K) corona still
seems to be the dominant mechanism, though waves and turbulence
have an impact as well.
A recent self-consistent model of turbulence-driven coronal heating
and solar wind acceleration has succeeded in reproducing a wide
range of observations {\em with no ad-hoc free parameters}
(Cranmer et al.\ 2007).
This progress on the solar front is a fruitful jumping-off point
for a better understanding of the basic physics of winds and
accretion in young stars.
The remainder of this paper is organized as follows.
{\S}~2 presents an overview of the general scenario of
accretion-driven MHD waves that is proposed here to be important
for driving T Tauri mass loss.
In {\S}~3 the detailed properties of an evolving solar-mass star
are presented, including the fundamental stellar parameters,
the accretion rate and disk geometry, and the properties of the
clumped gas in the magnetospheric accretion streams.
These clumped streams impact the stellar surface and create
MHD waves that propagate horizontally to the launching points of
stellar wind streams.
{\S}~4 describes how self-consistent models of these wind regions
are implemented, and {\S}~5 gives the results.
Finally, {\S}~6 contains a summary of the major results of
this paper and a discussion of the implications these results
may have on our wider understanding of low-mass star formation.
\section{Basal Wave Driving from Inhomogeneous Accretion}
\begin{figure}
\epsscale{1.11}
\plotone{cranmer_ttau_f01.eps}
\caption{Summary sketch of the accretion and wind geometry
discussed in this paper.
(1) Magnetospheric accretion streams are assumed to impact the
star at mid-latitudes.
(2) Dense clumps in the accretion streams generate MHD waves
that propagate horizontally over the stellar surface.
(3) Polar magnetic field lines exhibit enhanced photospheric
wave activity and thus experience stronger coronal heating and
stellar wind acceleration.}
\end{figure}
Figure 1 illustrates the proposed connection between accretion and
stellar wind driving that is explored below.
The models make use of the idea that magnetospheric accretion
streams are likely to be highly unstable and time-variable,
and thus much of the material deposited onto the star is expected to
be in the form of intermittent dense clumps.
The impact of each clump is likely to generate MHD waves on
the stellar surface (Scheurwater \& Kuijpers 1988) and these waves
can propagate across the surface, spread out the kinetic energy
of accretion, and inject some of it into the global magnetic
field.\footnote{%
A somewhat similar wave-generation scenario was suggested by
Vasconcelos et al.\ (2002) and Elfimov et al.\ (2004), but their
model was applied only to the energization of the accretion streams
themselves, not to the regions exterior to the streams.}
The models simulate the time-steady properties of the polar magnetic
flux tubes, wherein the source of wave/turbulence energy at the
photosphere comes from two sources:
(1) accretion-driven waves that have propagated over the surface
from the ``hot spots,'' and
(2) the ever-present sub-photospheric convection, which
shakes the magnetic field lines even without accretion, and is
the dominant source of waves for the present-day Sun.
There is substantial observational evidence that the accretion
streams of classical T Tauri stars are clumped and inhomogeneous
(see, e.g., Gullbring et al.\ 1996; Safier 1998;
Bouvier et al.\ 2003, 2004, 2007; Stempels 2003).
Strong variations in the accretion signatures take place over
time scales ranging between a few hours (i.e., the free-fall time
from the inner edge of the disk to the star) and a few days (i.e.,
the stellar rotation period).
Some of this variability may arise from instabilities in the
accretion shock (Chevalier \& Imamura 1982; Mignone 2005),
with the possibility of transitions between stable and unstable
periods of accretion (e.g., Romanova et al.\ 2008).
Observations of the bases of accretion streams indicate
the presence of turbulence (Johns \& Basri 1995) as well as discrete
events that could signal the presence of magnetic reconnection
(Giardino et al.\ 2007).
Alternately, some of the observed variability may come from
larger-scale instabilities in the torqued magnetic field
that is threaded by the disk (e.g., Long et al.\ 2007;
Kulkarni \& Romanova 2008) and which could also drive periodic
reconnection events (van Ballegooijen 1994).
The latter variations would not necessarily be limited to the
rotation time scale, since ``instantaneous'' topology changes can
happen much more rapidly.
The mechanisms by which impulsive accretion events (i.e.,
impacts of dense clumps) can create MHD waves on the stellar
surface are described in more detail in {\S}~3.4.
This paper uses the analytic results of
Scheurwater \& Kuijpers (1988) to set the energy flux in these
waves, based on the available kinetic energy in the impacts.
It is useful to point out, however, that there are relatively
well-observed examples of impulse-generated waves on the Sun
that help to justify the overall plausibility of this scenario.
These are the so-called ``Moreton waves'' and ``EIT waves'' that
are driven by solar flares and coronal mass ejections (CMEs).
Moreton waves are localized ripples seen in the H$\alpha$
chromosphere expanding out from strong flares and erupting
filaments at speeds of 500 to 2000 km s$^{-1}$ (Moreton \& Ramsey 1960).
These seem to be consistent with fast-mode MHD shocks associated with
the flaring regions (e.g., Uchida 1968; Balasubramaniam et al.\ 2007).
A separate phenomenon, discovered more recently in the extreme
ultraviolet and called EIT waves (Thompson et al.\ 1998),
is still not yet well understood.
These waves propagate more slowly than Moreton waves (typically
100 to 400 km s$^{-1}$), but they are often seen to traverse the
entire diameter of the Sun and remain coherent.
Explanations include fast-mode MHD waves (Wang 2000),
solitons (Wills-Davey et al.\ 2007), and sheared current layers
containing enhanced Joule heating (Delann\'{e}e et al.\ 2008).
These serve as ample evidence that impulsive phenomena can
generate MHD fluctuations that travel across a stellar surface.
Finally, it is important to clarify some of the limitations of the
modeling approach used in this paper.
The models include only a description of the magnetospheric accretion
streams (with an assumed dipole geometry) and the open flux tubes
at the north and south poles of the star.
Thus, there is no attempt to model either disk winds or the
closed-field parts of the stellar corona, despite the fact that
these are likely to contribute to many key observational diagnostics
of T Tauri circumstellar gas.
In addition, the accretion streams themselves are included only for
their net dynamical impact on the polar (non-accreting) regions, and
thus there is no need to describe, e.g., the temperature or ionization
state inside the streams.
The wind acceleration in the polar flux tubes is treated with a
one-fluid, time-steady approximation similar to that described
by Cranmer et al.\ (2007) for the solar wind.
To summarize, this paper is a ``proof of concept'' study to evaluate
how much about T Tauri outflows can be explained with {\em only}
polar stellar winds that are energized by accretion-driven waves.
\section{Baseline Solar-Mass Evolution Model}
\subsection{Stellar Parameters and Accretion Rate}
To begin exploring how accretion-driven waves affect the winds of
young stars, a representative evolutionary sequence of fundamental
parameters was constructed for a solar-type star.
The adopted parameters are not intended to reproduce any specific
observation in detail and should be considered illustrative.
The STARS evolution code\footnote{%
The December 2003 version was obtained from:$\,$
http://www.ast.cam.ac.uk/$\sim$stars/}
(Eggleton 1971, 1972, 1973; Eggleton et al.\ 1973;
Han et al.\ 1994; Pols et al.\ 1995) was used to evolve a
star having a constant mass $M_{\ast} = 1 \, M_{\odot}$ from
the Hayashi track to well past the current age of the Sun.
Neither mass accretion nor mass loss were included in the
evolutionary calculation, and a standard solar abundance mixture
($X = 0.70$, $Z = 0.02$) was assumed.
All other adjustable parameters of the code were held fixed at
their default values.
\begin{figure}
\epsscale{0.99}
\plotone{cranmer_ttau_f02.eps}
\caption{Basic properties of the modeled 1 $M_{\odot}$ star.
({\em{a}}) H-R diagram showing the main sequence ({\em gray band})
and the modeled evolutionary track, with several representative
ages ({\em open circles}) including the present-day Sun
({\em filled circle}).
({\em{b}}) Length scales, plotted in units of solar radii, as a
function of age: stellar radius ({\em solid line}),
inner edge of accretion disk ({\em dashed line}), accretion
clump radius ({\em dot-dashed line}), and photospheric pressure
scale height ({\em dotted line}).}
\end{figure}
Figure 2 presents a summary of the modeled stellar parameters.
A Hertzsprung-Russell (H-R) diagram is shown in Figure 2{\em{a}},
with the bolometric luminosity $L_{\ast}$ plotted as a function of
the effective temperature $T_{\rm eff}$ for both the evolutionary
model (with a subset of stellar ages $t$ indicated by symbols) and
an approximate main sequence band for luminosity class V stars
(de Jager \& Nieuwenhuijzen 1987).
The current age of the Sun is denoted by $t = 4.6 \times 10^{9}$ yr,
or $\log_{10} t = 9.66$.
Figure 2{\em{b}} shows the age dependence of a selection of
relevant length scales, including the stellar radius $R_{\ast}$
and the photospheric pressure scale height $H_{\ast}$.
The latter quantity is defined assuming a pure hydrogen gas, as
\begin{equation}
H_{\ast} \, = \, \frac{k_{\rm B} T_{\rm eff}}{m_{\rm H}}
\, \frac{R_{\ast}^2}{G M_{\ast}}
\label{eq:Hast}
\end{equation}
where $k_{\rm B}$ is Boltzmann's constant, $m_{\rm H}$ is the
mass of a hydrogen atom, and $G$ is the Newtonian gravitation
constant.
Note that for most of the applications below, only the relative
variation of $H_{\ast}$ with age is needed and not its absolute value.
The scale height is assumed to be proportional to the horizontal
granulation scale length (e.g., Robinson et al.\ 2004), which in
turn governs the perpendicular correlation length of Alfv\'{e}nic
turbulence in the photosphere (see {\S}~4).
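For reference, equation (\ref{eq:Hast}) is easy to evaluate numerically.
The following minimal Python sketch (using CGS constants; the
present-day solar parameters are illustrative assumptions, not outputs
of the STARS code) recovers the familiar $H_{\ast} \approx 1.7 \times
10^{7}$ cm, i.e., about $2.5 \times 10^{-4} \, R_{\odot}$, for the Sun.

```python
# CGS physical constants
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6735e-24     # hydrogen atom mass [g]
G   = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]

def scale_height(T_eff, R_star, M_star):
    """Photospheric pressure scale height for a pure hydrogen gas,
    H_* = (k_B T_eff / m_H) * R_*^2 / (G M_*)."""
    return (k_B * T_eff / m_H) * R_star**2 / (G * M_star)

# Illustrative present-day solar values:
R_sun = 6.957e10     # cm
M_sun = 1.989e33     # g
H = scale_height(5772.0, R_sun, M_sun)
print(f"H_* = {H:.3e} cm = {H/R_sun:.2e} R_sun")
```

Only the relative age variation of $H_{\ast}$ enters the turbulence
model, so the absolute normalization here is a consistency check only.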
The other major parameter to be specified as a function of age is
the mass accretion rate $\dot{M}_{\rm acc}$.
For classical T Tauri stars, it is generally accepted that the
accretion rate decreases with increasing stellar age $t$, but there
is a large spread in measured values that may be affected by
observational uncertainties in both $\dot{M}_{\rm acc}$ and $t$.
In order to determine a representative age dependence for
$\dot{M}_{\rm acc}(t)$, we utilized tabulated accretion rates
from Hartigan et al.\ (1995) and Hartmann et al.\ (1998),
who interpreted optical/UV continuum excesses as an ``accretion
luminosity'' that comes from the kinetic energy of accreted gas
impacting the star.
There is some overlap in the star lists from these two sources,
and differences in the diagnostic techniques resulted in different
values of both $\dot{M}_{\rm acc}$ and $t$ for some stars common to
both lists.
Both values have been retained here, and thus Figure 3{\em{a}}
shows a total of 88 data points from both lists.\footnote{%
This figure shows relatively nearby Galactic stars only.
In the LMC, the mass accretion rates seem to be much higher
at ages around 10 Myr (e.g., Romaniello et al.\ 2004), which could
indicate a substantial metallicity dependence for many properties of
the T Tauri phase.}
The observational uncertainties are not shown; they may be as large
as an order of magnitude in $\dot{M}_{\rm acc}$ and a factor of
$\sim$3 in the age (see, e.g., Figure 3 of Muzerolle et al.\ 2000).
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f03.eps}
\caption{Mass accretion rates and mass loss rates for classical
T Tauri stars.
({\em{a}}) Measured accretion rates from Hartigan et al.\ (1995)
and Hartmann et al.\ (1998) ({\em open circles}) with the subset
of roughly solar-mass stars highlighted ({\em filled circles}).
The best-fit curve from eq.~(\ref{eq:Maccfit}) is also shown
({\em solid line}) as well as two measured rates for the stars
TW Hya and MP Mus ({\em error bars}); see text for details.
({\em{b}}) Ratio of mass loss rate to mass accretion rate for
measured T Tauri stars, where the symbols have the same meanings
as in panel ({\em{a}}).}
\end{figure}
Highlighted in Figure 3{\em{a}}, as filled symbols, is the subset
of stars that appear to be on the 1 $M_{\odot}$ Hayashi track (i.e.,
they have $T_{\rm eff}$ between about 4000 and 4500 K).
An initial attempt to fit these 35 stars to a power-law age
dependence yielded
\begin{equation}
\dot{M}_{\rm acc} \, \approx \, 2.8 \times 10^{-8} \,
M_{\odot} \, \mbox{yr}^{-1} \, \left( \frac{t}{10^{6} \, \mbox{yr}}
\right)^{-1.22} \,\, .
\end{equation}
However, the theoretical accretion disk models of
Hartmann et al.\ (1998) found that the exponent $\eta$ in
$\dot{M}_{\rm acc} \propto t^{-\eta}$ is most likely to range
between about 1.5 and 2.8, with the lower end of that range being
most likely.
Thus, the fitted value of $\eta = 1.22$ above was judged to be
too low, and a more realistic fit was performed by fixing
$\eta = 1.5$ and finding
\begin{equation}
\dot{M}_{\rm acc} \, \approx \, 2.2 \times 10^{-8} \,
M_{\odot} \, \mbox{yr}^{-1} \, \left( \frac{t}{10^{6} \, \mbox{yr}}
\right)^{-1.5} \label{eq:Maccfit}
\end{equation}
which is used in the wind models below and is plotted in
Figure 3{\em{a}}.
The scatter around the mean fit curve has a roughly normal
distribution (in the logarithm of $\dot{M}_{\rm acc}$) with a
standard deviation ranging between a factor of 6 (for the 35
solar-mass stars) to 8 (for all 88 data points) on either side of
the representative trend given by equation (\ref{eq:Maccfit}).
The sensitivity of the resulting accretion-driven wind to this
scatter is explored further below in {\S}~5.4.
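As a quick numerical illustration of equation (\ref{eq:Maccfit}),
the fitted power law can be evaluated directly. The Python sketch
below (the sampled ages are arbitrary choices for illustration)
shows the factor of $10^{3}$ decline in $\dot{M}_{\rm acc}$ between
0.1 and 10 Myr implied by the $t^{-1.5}$ exponent.

```python
def mdot_acc(t_yr):
    """Fitted accretion rate from eq. (Maccfit):
    2.2e-8 * (t / 1 Myr)^(-1.5), in units of Msun/yr."""
    return 2.2e-8 * (t_yr / 1.0e6) ** -1.5

# Sample a few representative T Tauri ages:
for t in (1.0e5, 1.0e6, 1.0e7):
    print(f"t = {t:.0e} yr : Mdot_acc = {mdot_acc(t):.2e} Msun/yr")
```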
Classical T Tauri magnetospheric accretion is believed to end at an
age around 10 Myr (e.g., Strom et al.\ 1989; Bouvier et al.\ 1997).
This is probably coincident with the time when the inner edge of the
accretion disk grows to a larger extent than the Keplerian corotation
radius, with ``propeller-like'' outflow replacing the accretion
(Illarionov \& Sunyaev 1975; Ustyugova et al.\ 2006).
Rotation is not included in the models presented here, so this
criterion is not applied directly.
However, for advanced ages (e.g., $t \gtrsim 100$ Myr) equation
(\ref{eq:Maccfit}) gives increasingly weak accretion rates that
end up not having any significant effect on the stellar wind.
Thus, even though the Figures below plot the various accretion-driven
quantities up to an age of 1 Gyr ($\log_{10} t = 9$), the models
below do not apply any abrupt cutoff to the accretion rate.
Figure 3{\em{b}} plots mass loss rates $\dot{M}_{\rm wind}$ for a
subset of classical T Tauri stars that have measurements of
blueshifted emission in forbidden lines such as [\ion{O}{1}]
$\lambda$6300 (Hartigan et al.\ 1995).
The mass loss rates are shown as an efficiency ratio
$\dot{M}_{\rm wind}/\dot{M}_{\rm acc}$ which depends on the combined
uncertainties of both the outflow and accretion properties.
The largest ratio shown, at $\log_{10} t \approx 6.9$, is for
the jet of HN Tau, and the original value from
Hartigan et al.\ (1995) has been replaced with the slightly
lower revised value from Hartigan et al.\ (2004).
It is uncertain whether the measured outflows originate on the
stellar surface or in the accretion disk (see, e.g.,
Paatz \& Camenzind 1996; Calvet 1997; Ferreira et al.\ 2006),
but these rates can be used as upper limits for any
stellar wind component.
Figure 3 also gives additional information about two of the oldest
reported classical T Tauri stars: TW Hya and MP Mus.
Note, however, that their extremely uncertain accretion rates
were not included in the fitting for $\dot{M}_{\rm acc} (t)$.
The age of TW Hya is quite uncertain; it is often given as
$\sim$10 Myr (e.g., Muzerolle et al.\ 2000), but values as large
as 30 Myr have been computed (Batalha et al.\ 2002)
and the plotted error bar attempts to span this range.
Similarly, the age of MP Mus seems to be between 7 and 23 Myr
(e.g., Argiroffi et al.\ 2007).
The plotted upper and lower limits on $\dot{M}_{\rm acc}$ for
TW Hya come from Batalha et al.\ (2002) and
Muzerolle et al.\ (2000), respectively, and the range of values
for $\dot{M}_{\rm wind}$ in Figure 3{\em{b}} were taken from
Dupree et al.\ (2005).
The plotted ratio, however, divides the observational limits on
$\dot{M}_{\rm wind}$ by a mean value of $\dot{M}_{\rm acc} =
1.4 \times 10^{-9}$ $M_{\odot}$ yr$^{-1}$, in order to avoid
creating an unrealistically huge range.
For MP Mus, the mean accretion rate of $5 \times 10^{-11}$
$M_{\odot}$ yr$^{-1}$ was derived from X-ray measurements by
Argiroffi et al.\ (2007).
The plotted error bars for $\dot{M}_{\rm acc}$ were estimated
by using the 1$\sigma$ uncertainties on the X-ray emission measure
and column density.
\subsection{Magnetospheric Accretion Streams}
The dynamical properties of accretion streams are modeled here
using an axisymmetric dipole magnetic field, coupled with the
assumption of ballistic infall (e.g., Calvet \& Hartmann 1992;
Muzerolle et al.\ 2001).
Although it is almost certain that the actual magnetic fields of
T Tauri stars are not dipolar (Donati et al.\ 2007;
Jardine et al.\ 2008), this assumption
allows the properties of the accretion streams to be calculated
straightforwardly as a function of evolutionary age.
Stellar field lines that thread the accretion disk are bounded
between inner and outer radii $r_{\rm in}$ and $r_{\rm out}$ as
measured in the equatorial plane.
We assume the inner radius---also called the ``truncation radius''---is
described by K\"{o}nigl's (1991) application of neutron-star accretion
theory (see also Davidson \& Ostriker 1973; Elsner \& Lamb 1977;
Ghosh \& Lamb 1979a,b).
This expression for $r_{\rm in}$ is given by determining where the
magnetic pressure of the inner dipole region balances the gas pressure
in the outer accretion region.
Assuming free-fall in the accretion stream,
\begin{equation}
r_{\rm in} \, = \, \beta_{\rm GL} \left(
\frac{B_{\ast}^{4} R_{\ast}^{12}}
{2G M_{\ast} \dot{M}_{\rm acc}^{2}} \right)^{1/7}
\label{eq:K91}
\end{equation}
where the scaling constant $\beta_{\rm GL}$ describes the departure
from ideal magnetostatic balance; i.e., $\beta_{\rm GL} = 1$ gives
the standard ``Alfv\'{e}n radius'' at which the pressures balance.
Following K\"{o}nigl (1991), the value $\beta_{\rm GL} = 0.5$ is
used here.
The outer disk radius $r_{\rm out}$ may be as large as the Keplerian
corotation radius, but it may not extend that far in reality (e.g.,
Hartmann et al.\ 1994; Bessolaz et al.\ 2008).
For simplicity, the outer disk radius is assumed to scale with the
inner disk radius as
\begin{equation}
r_{\rm out} \, = \, r_{\rm in} (1 + \epsilon)
\end{equation}
where a constant value of $\epsilon = 0.1$ was adopted.
This value falls within the range of empirically constrained
outer/inner disk ratios used by Muzerolle et al.\ (2001) to model
H$\alpha$ profiles; they used effective values of $\epsilon$ between
0.034 and 0.36.
The intermediate value of 0.1 produces reasonable magnitudes for
the filling factors of accretion stream footpoints on the stellar
surface (see below).
Note that equation (\ref{eq:K91}) above requires the specification
of the surface magnetic field strength $B_{\ast}$.
In the models of the open flux tubes (in the {\em polar} regions)
used below, a solar-type magnetic field is adopted.
The photospheric value of $B_{\ast} \approx 1500$ G
is held fixed for the footpoints of the stellar wind flux tubes
(see Cranmer \& van Ballegooijen 2005).
For the {\em mid-latitude} field at the footpoints of the accretion
streams, though, a slightly weaker field of 1000 G is used.
Figure 2{\em{b}} shows the resulting age dependence for $r_{\rm in}$.
For the youngest modeled ``protostars'' ($t \lesssim 10^{4}$ yr),
the accretion rate is so large that $r_{\rm in}$ would be smaller
than the stellar radius itself.
In that case, the accretion disk would penetrate down to the
star and there would be no magnetospheric infall.
Thus, the youngest age considered for the remainder of this paper
is $t = 13.5$ kyr (i.e., $\log_{10} t = 4.13$), for which
$r_{\rm in} = R_{\ast}$.
Over most of the classical T Tauri age range (0.1--10 Myr),
$r_{\rm in}$ decreases slightly in absolute extent, but the
ratio $r_{\rm in}/R_{\ast}$ increases and is approximately
proportional to $t^{0.2}$.
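Equation (\ref{eq:K91}) can be evaluated directly in CGS units. The
sketch below uses illustrative inputs appropriate to roughly
$t \sim 1$ Myr ($B_{\ast} = 1000$ G for the field threading the
streams, $R_{\ast} = 2 \, R_{\odot}$, and the fitted accretion rate);
these particular values are assumptions for the example, not outputs
of the evolutionary model, and they yield $r_{\rm in}$ of a few
stellar radii, consistent with Figure 2{\em{b}}.

```python
G     = 6.674e-8          # CGS gravitational constant
M_sun = 1.989e33          # g
R_sun = 6.957e10          # cm
yr    = 3.156e7           # s

def r_in(B_star, R_star, M_star, Mdot, beta_GL=0.5):
    """Truncation radius from eq. (K91):
    r_in = beta_GL * (B^4 R^12 / (2 G M Mdot^2))^(1/7), in cm."""
    return beta_GL * (B_star**4 * R_star**12
                      / (2.0 * G * M_star * Mdot**2)) ** (1.0 / 7.0)

# Illustrative values near t ~ 1 Myr (assumptions):
R_star = 2.0 * R_sun
Mdot   = 2.2e-8 * M_sun / yr          # g/s
rin    = r_in(1000.0, R_star, M_sun, Mdot)
print(f"r_in = {rin:.2e} cm = {rin / R_star:.2f} R_star")
```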
For any radius $r > R_{\ast}$ in the equatorial disk, an aligned
dipole field line impacts the stellar surface at colatitude $\theta$,
where $R_{\ast} = r \sin^{2}\theta$.
This allows the colatitudes $\theta_{\rm in}$ and $\theta_{\rm out}$
to be computed from $r_{\rm in}$ and $r_{\rm out}$, and these can
be used to compute the fraction of stellar surface area $\delta$
that is filled by accretion streams, with
\begin{equation}
\delta \, = \, \cos\theta_{\rm out} - \cos\theta_{\rm in}
\label{eq:delta}
\end{equation}
(see also Lamzin 1999).
Note that both the northern and southern polar ``rings'' are taken
into account in the above expression.
The filling factor $\delta$ depends sensitively on the geometry of
the accretion volume; i.e., for a dipole field, it depends only on
the outer/inner radius parameter $\epsilon$ and the ratio
$r_{\rm in} / R_{\ast}$.
Similarly, the fractional area subtended by both polar caps,
to the north and south of the accretion rings, is given by
\begin{equation}
\delta_{\rm pol} \, = \, 1 - \cos\theta_{\rm out} \,\, .
\end{equation}
Dupree et al.\ (2005) called this quantity $\phi$ and estimated
it to be $\sim$0.3 for TW Hya.
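The dipole footpoint geometry above reduces to a few lines of
arithmetic. The following sketch computes $\delta$ and
$\delta_{\rm pol}$ for an assumed (illustrative) truncation radius of
$r_{\rm in} = 5 \, R_{\ast}$ with $\epsilon = 0.1$, giving a
percent-level accretion filling factor of the kind observed by
Calvet \& Gullbring (1998).

```python
import math

def footpoint_colatitude(r_over_R):
    """Colatitude where an aligned dipole field line anchored at
    equatorial radius r meets the surface: R = r sin^2(theta)."""
    return math.asin(math.sqrt(1.0 / r_over_R))

def filling_factors(rin_over_R, eps=0.1):
    """Accretion-ring filling factor (eq. delta, both hemispheres)
    and the fractional area of both polar caps."""
    th_in  = footpoint_colatitude(rin_over_R)
    th_out = footpoint_colatitude(rin_over_R * (1.0 + eps))
    delta     = math.cos(th_out) - math.cos(th_in)
    delta_pol = 1.0 - math.cos(th_out)
    return delta, delta_pol

# Illustrative truncation radius of 5 stellar radii (an assumption):
d, d_pol = filling_factors(5.0)
print(f"delta = {d:.4f}, delta_pol = {d_pol:.4f}")
```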
Accreting gas is assumed to be accelerated from rest at the
inner edge of the disk, and thus is flowing at roughly the ballistic
free-fall speed at the stellar surface,
\begin{equation}
v_{\rm ff} \, = \, \left[ \frac{2 G M_{\ast}}{R_{\ast}}
\left( 1 - \frac{R_{\ast}}{r_{\rm in}} \right) \right]^{1/2}
\,\, .
\label{eq:vff}
\end{equation}
This slightly underestimates the mean velocity, since in reality
the streams come from radii between $r_{\rm in}$ and $r_{\rm out}$.
Using this expression allows the ram pressure at the stellar surface
to be computed, with
\begin{equation}
P_{\rm ram} \, = \, \frac{\rho v_{\rm ff}^2}{2} \, = \,
\frac{v_{\rm ff} \dot{M}_{\rm acc}}{8 \pi \delta R_{\ast}^2}
\label{eq:Pram}
\end{equation}
(e.g., Hartmann et al.\ 1997; Calvet \& Gullbring 1998).
The accretion is assumed to be stopped at the point where
$P_{\rm ram}$ is balanced by the stellar atmosphere's gas pressure.
A representative T Tauri model atmosphere (with gray opacity and
local thermodynamic equilibrium) was used to determine the height
dependence of density and gas pressure.
The atmospheric density $\rho_{\rm sh}$ was thus defined as that of
the highest point in the atmosphere that remains undisturbed by either
the shock or its (denser) post-shock cooling zone.
The age dependence of $\rho_{\rm sh}$ was computed from the
condition of ram pressure balance.
Examination of the resulting values, though, yielded a useful
approximation for this quantity.
For most of the evolutionary ages considered for the solar-mass
T Tauri star, the accretion is stopped a few scale heights above
the photosphere, at which the temperature has reached its minimum
radiative equilibrium value of $\sim 0.8 T_{\rm eff}$.
This value can be used in the definition of gas pressure to
obtain a satisfactory estimate for $\rho_{\rm sh}$; i.e., one can
solve for
\begin{equation}
\rho_{\rm sh} \, \approx \,
\frac{5 m_{\rm H} P_{\rm ram}}{4 k_{\rm B} T_{\rm eff}}
\label{eq:rhosh}
\end{equation}
using equation (\ref{eq:Pram}) above and obtain a value within
about 10\% to 20\% of the value interpolated from the full stellar
atmosphere model.
All of these values are only order-of-magnitude estimates, of course,
since neither the post-shock heating nor the thermal equilibrium
{\em inside} the accretion columns has been taken into account
consistently.
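The chain of estimates in equations (\ref{eq:vff}), (\ref{eq:Pram}),
and (\ref{eq:rhosh}) can be illustrated numerically. The sketch below
adopts mid-T-Tauri inputs ($R_{\ast} = 2 \, R_{\odot}$,
$r_{\rm in} = 5 \, R_{\ast}$, $\dot{M}_{\rm acc} = 2.2 \times 10^{-8}
\, M_{\odot}$ yr$^{-1}$, $\delta = 0.01$, $T_{\rm eff} = 4250$ K);
these are assumptions for the example only, and they give
$v_{\rm ff} \approx 390$ km s$^{-1}$, consistent with
order-of-magnitude expectations for ballistic infall.

```python
import math

G, k_B, m_H = 6.674e-8, 1.380649e-16, 1.6735e-24   # CGS constants
M_sun, R_sun, yr = 1.989e33, 6.957e10, 3.156e7

def v_freefall(M, R, r_in):
    """Ballistic infall speed at the stellar surface, eq. (vff)."""
    return math.sqrt(2.0 * G * M / R * (1.0 - R / r_in))

def ram_pressure(v_ff, Mdot, delta, R):
    """Accretion ram pressure at the surface, eq. (Pram)."""
    return v_ff * Mdot / (8.0 * math.pi * delta * R**2)

def rho_shock(P_ram, T_eff):
    """Approximate density where accretion is stopped, eq. (rhosh)."""
    return 5.0 * m_H * P_ram / (4.0 * k_B * T_eff)

# Illustrative mid-T-Tauri inputs (assumptions):
R    = 2.0 * R_sun
vff  = v_freefall(M_sun, R, 5.0 * R)
Pram = ram_pressure(vff, 2.2e-8 * M_sun / yr, 0.01, R)
rsh  = rho_shock(Pram, 4250.0)
print(f"v_ff = {vff/1e5:.0f} km/s, P_ram = {Pram:.2e} dyn/cm^2, "
      f"rho_sh = {rsh:.2e} g/cm^3")
```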
In addition to the density at the shock, the models below also
require specifying the density at the visible stellar surface.
A representative photospheric mass density $\rho_{\ast}$ was
computed from the criterion that the Rosseland mean optical
depth should have a value of approximately one:
\begin{equation}
\tau_{\rm R} \, \approx \,
\kappa_{\rm R} \rho_{\ast} H_{\ast} \, = \, 1
\end{equation}
where $H_{\ast}$ is the photospheric scale height as defined
above and $\kappa_{\rm R}$ is the Rosseland mean opacity
(in cm$^2$ g$^{-1}$) interpolated as a function of temperature
and pressure from the table of Kurucz (1992).
The resulting ratio $\rho_{\rm sh} / \rho_{\ast}$ varies
strongly over the T Tauri evolution, from a value of order unity
(for the youngest stars with the strongest accretion) down to
$10^{-3}$ at the end of the accretion phase.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f04.eps}
\caption{Accretion stream properties as a function of age.
({\em{a}}) Filling factor of accretion columns that thread the
disk ({\em solid line}), filling factor of polar caps containing
open field lines ({\em dashed line}), and the reciprocal of the
number of inhomogeneous accretion tubes in one hemisphere
({\em dotted line}).
({\em{b}}) Mass densities at the stellar photosphere
({\em solid line}) and at the accretion shock (exact:
{\em dashed line,} approximation from eq.~[\ref{eq:rhosh}]:
{\em dotted line}), and mean pre-shock density in the accretion
stream ({\em dot-dashed line}).}
\end{figure}
Figure 4 shows several of the accretion-stream quantities defined above.
Figure 4{\em{a}} shows how the fractional areas of both the
accretion streams ($\delta$) and the polar caps ($\delta_{\rm pol}$)
decrease similarly with increasing age.
Calvet \& Gullbring (1998) gave observationally determined values of
$\delta$ for a number of T Tauri stars, but there was no overall
trend with age; $\delta$ was distributed (apparently) randomly
between values of about 0.001 and 0.1 for ages between
$\log_{10} t =4$ and 7.
Dupree et al.\ (2005) estimated $\delta_{\rm pol}$ to be about 0.3,
which agrees with the youngest T Tauri stars modeled here.
Figure 4{\em{b}} plots the age dependence of both the photospheric
density $\rho_{\ast}$ and the density at the accretion shock
$\rho_{\rm sh}$ (computed both numerically and using
eq.~[\ref{eq:rhosh}]).
\subsection{Properties of Inhomogeneous Accretion (``Clumps'')}
Observational evidence for intermittency and time variability
in the accretion streams was discussed in {\S}~2.
Here, the flow along magnetospheric accretion columns is modeled
as an idealized ``string of pearls,'' where infalling dense
clumps (i.e., blobs or clouds) are surrounded by ambient gas of
much lower density.
The clumps are assumed to be roughly spherical in shape with roughly
the same extent in latitude as the magnetic flux tubes connecting
the star and accretion disk.
Thus, once they reach the stellar surface, the clumps have a
radius given by
\begin{equation}
r_{c} \, = \,
\frac{R_{\ast} (\theta_{\rm in} - \theta_{\rm out})}{2}
\end{equation}
where the angles $\theta_{\rm in}$ and $\theta_{\rm out}$ must be
expressed in radians.
Figure 2{\em{b}} shows how $r_c$ varies with age.
The dense clumps are assumed to impact the star with a velocity
$v_{c}$ equivalent to the free-fall speed $v_{\rm ff}$ defined in
equation (\ref{eq:vff}).
Scheurwater \& Kuijpers (1988) defined a fiducial shock-crossing time
for clumps impacting a stellar surface, with
$t_{c} = 1.5 r_{c} / v_{c}$.
This time scale is of the same order of magnitude as the time it
would take the blob to pass through the stellar surface if it were
not stopped.
It is useful to assume that the clumps are spread out along the
flux tube with a constant periodicity; i.e., that along a given
flux tube, clumps impact the star at a time interval
$\Delta t = \zeta t_{c}$, where the dimensionless intermittency
factor $\zeta$ must be larger than 1.
When considering the mass accreted by infalling clumps, we follow
Scheurwater \& Kuijpers (1988) and define the density interior
to each clump as $\rho_c$ and the (lower) ambient inter-clump
density as $\rho_{0}$.
For the purpose of mass flux conservation along the magnetic
flux tubes, these densities are defined at the stellar surface.
Although the ratio $\rho_{c}/\rho_{0}$ appears in
several of the quantities derived below, it is always canceled
by other factors that depend separately on $\rho_{c}$ and
$\rho_{0}$, so its numerical value never
needs to be specified directly.
A more relevant quantity for accretion is the mean density
$\langle \rho \rangle$ in the flux tube, which is by definition
larger than $\rho_0$ and smaller than $\rho_c$.
The clump ``overdensity ratio'' ($\rho_{c}/ \langle \rho \rangle$)
can be computed by comparing the volume subtended by one clump
with the volume traversed by a clump over time $\Delta t$.
In other words, if one assumes that $\rho_{c} \gg \rho_{0}$, one
can find the mean density $\langle \rho \rangle$ by spreading
out the gas in each clump to fill its own portion of the flux tube.
Thus,
\begin{equation}
\frac{\rho_{c}}{\langle \rho \rangle} \, = \,
\frac{v_{c} \Delta t \, \pi r_{c}^{2}}{4\pi r_{c}^{3} / 3}
\, = \, \frac{9\zeta}{8} \,\, .
\label{eq:overden}
\end{equation}
The overdensity ratio (or, equivalently, $\zeta$) is a free
parameter of this system, and a value of
$\rho_{c}/ \langle \rho \rangle = 3$ was chosen more or less
arbitrarily.
This sets $\zeta = 8/3$.
Below, the resulting Alfv\'{e}n wave amplitude (caused by the
periodic clump impacts) is found to depend on an overall factor of
$\zeta^{1/2}$; i.e., it is relatively insensitive to the chosen
value of the overdensity ratio.
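The geometric origin of equation (\ref{eq:overden}) can be verified
directly: spreading one clump's mass over the length of flux tube it
occupies during one interval $\Delta t$ reproduces the $9\zeta/8$
ratio. The short Python check below does this with placeholder values
of $r_c$ and $v_c$, confirming that the result is independent of both.

```python
import math

def overdensity_ratio(zeta, r_c=1.0, v_c=1.0):
    """rho_c / <rho> from eq. (overden): the flux-tube volume swept
    per clump (v_c * Delta_t * pi r_c^2, with Delta_t = zeta * 1.5 r_c / v_c)
    divided by the clump volume (4 pi r_c^3 / 3)."""
    dt    = zeta * 1.5 * r_c / v_c
    swept = v_c * dt * math.pi * r_c**2
    clump = 4.0 * math.pi * r_c**3 / 3.0
    return swept / clump

zeta = 8.0 / 3.0   # value chosen in the text so that rho_c/<rho> = 3
print(overdensity_ratio(zeta))   # independent of r_c and v_c
```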
In order to relate the inhomogeneous properties along a given
flux tube to the total mass accretion rate $\dot{M}_{\rm acc}$,
the total number of flux tubes impacting the star must be
calculated.
The quantity $N$ is defined as the number of flux tubes
{\em in either the northern or southern hemisphere} that contain
accreting clumps.
This definition is specific to the assumption of an aligned
dipole magnetic field, and it is convenient because the summed
effect of infalling clumps measured at the north [south] pole is
assumed to depend only on the flux tubes in the northern [southern]
hemisphere.
The total number of flux tubes impacting the star is thus assumed
to be $2N$.
One can compute $N$ by comparing two different ways of expressing
the mass accretion rate.
First, we know that in a time-averaged sense, mass is accreted
with a local flux $\langle \rho \rangle v_{c}$ over a subset of
the stellar surface $\delta$ given by equation (\ref{eq:delta}).
Thus,
\begin{equation}
\dot{M}_{\rm acc} \, = \, 4\pi \delta R_{\ast}^{2}
\langle \rho \rangle v_{c} \,\, .
\label{eq:Mblob1}
\end{equation}
Alternately, the mass in the $2N$ flux tubes can be summed up
by knowing that each clump deposits a mass
$m_{c} = 4\pi r_{c}^{3} \rho_{c} / 3$, with
\begin{equation}
\dot{M}_{\rm acc} \, = \, \frac{2N m_{c}}{\Delta t} \, = \,
\frac{16\pi N}{9\zeta} \, \rho_{c} v_{c} r_{c}^{2} \,\,.
\label{eq:Mblob2}
\end{equation}
Equations (\ref{eq:Mblob1}) and (\ref{eq:Mblob2}) must give the
same total accretion rate, so equating them gives a useful
expression for
\begin{equation}
N \, = \, 2 \delta \left( \frac{R_{\ast}}{r_c} \right)^{2}
\end{equation}
where equation (\ref{eq:overden}) was also used.
This quantity is used below to compute the total effect of waves
at the poles from the individual impact events.
Figure 4{\em{a}} shows $N^{-1}$ (rather than $N$ itself, to keep
it on the same scale as the other plotted quantities), and it
is interesting that $N$ remains reasonably constant around
100--150 over most of the classical T Tauri phase of evolution.
This mean value (for any instant of time) is large, but it is not
so large that any fluctuations around the mean would be
unresolvable due to averaging over the star.
If the distribution of flux tubes is assumed to follow some
kind of Poisson-like statistics, a standard deviation of order
$N^{1/2}$ would be expected.
In other words, for $N \approx 100$ there may always be something
like a 10\% level of fluctuations in the magnetospheric accretion
rate.
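Combining the dipole geometry with $N = 2\delta (R_{\ast}/r_{c})^{2}$
gives the above estimate directly. The sketch below assumes an
illustrative $r_{\rm in} = 5 \, R_{\ast}$ and $\epsilon = 0.1$ (the
same hypothetical geometry used earlier), and recovers
$N \approx 150$ together with the $\sim$10\% Poisson fluctuation level.

```python
import math

def clump_count(rin_over_R, eps=0.1):
    """Number of clump-carrying flux tubes per hemisphere,
    N = 2 delta (R_*/r_c)^2, with r_c = R_* (th_in - th_out)/2 and
    delta = cos(th_out) - cos(th_in) for an aligned dipole."""
    th_in  = math.asin(math.sqrt(1.0 / rin_over_R))
    th_out = math.asin(math.sqrt(1.0 / (rin_over_R * (1.0 + eps))))
    delta  = math.cos(th_out) - math.cos(th_in)
    rc_over_R = 0.5 * (th_in - th_out)
    return 2.0 * delta / rc_over_R**2

N = clump_count(5.0)   # illustrative r_in = 5 R_* (an assumption)
print(f"N = {N:.0f}, Poisson fluctuation level ~ {1.0/math.sqrt(N):.1%}")
```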
The equations above also allow $\langle \rho \rangle$ and $\rho_c$
to be computed from the known accretion rate $\dot{M}_{\rm acc}$.
Figure 4{\em{b}} shows $\langle \rho \rangle$ to typically be several
orders of magnitude smaller than both $\rho_{\rm sh}$ and $\rho_{\ast}$.
This large difference arises because the ram pressure inside the
accretion stream is much larger than the gas pressure in the stream.
The ratio $\langle \rho \rangle / \rho_{\rm sh}$ at the stellar
surface is given very roughly by $(c_{s} / v_{c})^2$, where
$c_{s}$ is the sound speed corresponding to $T_{\rm eff}$, and
$c_{s} \ll v_{c}$.
For the youngest T Tauri stars modeled, however,
$c_{s} \approx v_{c}$ and thus
$\langle \rho \rangle \approx \rho_{\rm sh}$.
\subsection{Properties of Impact-Generated Waves}
This section utilizes the results of Scheurwater \& Kuijpers (1988),
who computed the flux of magnetohydrodynamic (MHD) waves that
arise from the impact of a dense clump onto a stellar surface.
Scheurwater \& Kuijpers (1988) derived the total energy released
in both Alfv\'{e}n and fast-mode MHD waves from such an impact
in the ``far-field'' limit (i.e., at horizontal distances large
compared to the clump size $r_c$).
Slow-mode MHD waves were not considered because of the use of
the cold-plasma approximation.
The models below utilize only their Alfv\'{e}n wave result, since
the efficiency of fast-mode wave generation was found to be
significantly lower than for Alfv\'{e}n waves.
Also, due to their compressibility, the magnetosonic (fast and slow
mode) MHD waves are expected to dissipate more rapidly than
Alfv\'{e}n waves in stellar atmospheres (e.g., Kuperus et al.\ 1981;
Narain \& Ulmschneider 1990, 1996; Whang 1997).
Thus, even if fast-mode and Alfv\'{e}n waves were generated in equal
amounts at the impact site, the fast-mode waves may not survive
their journey to the polar wind-generation regions of the star
without being strongly damped.
For simplicity, we assume that waves propagate away from the
impact site with an overall energy given by the
Scheurwater \& Kuijpers (1988) Alfv\'{e}n wave result, and that
the waves undergo negligible dissipation.
The wave energy released in one impact event is given by
\begin{equation}
E_{\rm A} \, = \, 0.06615 \,\, \frac{\rho_c}{\rho_0}
\left( \frac{v_c}{V_{\rm A}} \right)^{3} m_{c} v_{c}^{2}
\label{eq:EA}
\end{equation}
where the numerical factor in front was given approximately as 0.07
in equation (57) of Scheurwater \& Kuijpers (1988), but has been
calculated more precisely from their Bessel-function integral.
In this context, the Alfv\'{e}n speed $V_{\rm A}$ is defined as that
of the ambient medium, with
\begin{equation}
V_{\rm A} \, = \, \frac{B_{0}}{\sqrt{4\pi\rho_0}}
\end{equation}
and $B_{0} \approx 1000$ G is the ambient magnetic field strength at
the stellar surface, where the accretion streams connect with the star.
Scheurwater \& Kuijpers (1988) assumed that $v_{c} < V_{\rm A}$
(i.e., that the impacting clumps do not strongly distort the
background magnetic field).
This tends to lead to a very low efficiency of wave generation,
such that equation (\ref{eq:EA}) may be considered a ``conservative''
lower limit to the available energy of fluctuations.
For a given accretion column, the wave power that is emitted
continuously by a stream of clumps is given by $E_{\rm A}/ \Delta t$.
In order to compute the wave energy density at other points on the
stellar surface, we make the assumption that the waves propagate
out {\em horizontally} from the impact point.
It is important to note that ideal Alfv\'{e}n waves do not
propagate any energy perpendicularly to the background magnetic field.
There are several reasons, however, why this does not disqualify the
adopted treatment of wave energy propagation over the horizontal
stellar surface.
First, the true evolution of the impact pulse is probably nonlinear.
Much like the solar Moreton and EIT waves discussed in {\S}~2,
nonlinear effects such as mode coupling, shock steepening, and
soliton-like coherence are likely to be acting to help convey the
total ``fluctuation yield'' of the impact across the stellar surface.
Second, for any real star, the background magnetic field is never
completely radial, and it will always have some component horizontal
to the surface (along which even linear Alfv\'{e}n waves can propagate).
Thus, the dominant end-result of multiple cloud impacts is assumed
here to be {\em some} kind of transverse field-line perturbations that
are treated for simplicity with the energetics of Alfv\'{e}n waves.
Considering waves that spread out in circular ripples from
the impact point, the goal is to compute the horizontal wave flux
(power per unit area) at a distance $x$ away from the central point.
For this purpose, the stellar surface can be treated as a flat plane.
The wave power is assumed to be emitted into an expanding cylinder
with an approximate height of $2 r_c$ (the clump diameter) and a
horizontal radius $x$.
The horizontal flux $F_{\rm A}$ of waves into the vertical
wall of the cylinder is given by dividing the power by the
area of the surrounding wall, with
\begin{equation}
F_{\rm A} \, = \, \frac{E_{\rm A} / \Delta t}{4\pi x r_{c}}
\, = \, 0.0147 \,\,
\frac{r_{c} \rho_{c}^{2} v_{c}^{3}}{x \rho_{0} \zeta}
\left( \frac{v_c}{V_{\rm A}} \right)^{3} \,\, .
\end{equation}
Note that the flux decreases linearly with increasing $x$, as is
expected for a cylindrical geometry.
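The cylindrical dilution of the flux can be verified with a short numerical check; the wave power and geometric inputs below are hypothetical placeholders for $E_{\rm A}/\Delta t$, $x$, and $r_c$.

```python
from math import pi

def horizontal_wave_flux(P_wave, x, r_c):
    """Flux through the wall of an expanding cylinder of height 2 r_c
    and radius x; wall area = (2 pi x)(2 r_c) = 4 pi x r_c."""
    return P_wave / (4.0 * pi * x * r_c)

# Hypothetical wave power E_A / dt and clump radius (cgs):
P_wave, r_c = 1.0e30, 1.0e9
F_near = horizontal_wave_flux(P_wave, 1.0e11, r_c)
F_far  = horizontal_wave_flux(P_wave, 2.0e11, r_c)   # twice the distance
```

Doubling $x$ halves the flux, as expected for the cylindrical geometry.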
The total wave flux from the effect of multiple clump impacts is
computed at the north or south pole of the star.
The accretion stream impact points are assumed to be distributed
circularly in a ring around the pole.
Thus, using the geometric quantities derived earlier, this implies
$x = R_{\ast} (\theta_{\rm in} + \theta_{\rm out})/2$.
Also, because each accretion column is assumed to be at the same
horizontal distance from the target point at the pole, the total
wave flux is given straightforwardly by $N$ times the flux due
to one stream of clumps.
The total accretion-driven wave flux arriving at either the north
or south pole is thus given by $F_{\rm A, tot} = N F_{\rm A}$.
It is assumed that the waves do not damp appreciably between where
they are generated (at the bases of the accretion streams) and their
destination (at the pole).
The standard definition of the flux of Alfv\'{e}n waves, in a
medium where the bulk flow speed is negligible in comparison to
the Alfv\'{e}n speed, is
\begin{equation}
F_{\rm A, tot} \, = \, \rho_{0} v_{\perp}^{2} V_{\rm A}
\,\, .
\end{equation}
This can be compared with the total wave flux derived above to
obtain the perpendicular Alfv\'{e}n wave velocity amplitude
$v_{\perp}$.
In units of the clump infall velocity $v_c$, the wave amplitude
is thus given by
\begin{equation}
\frac{v_{\perp}}{v_c} \, = \, 0.1715 \,\, \frac{\rho_c}{\rho_0}
\left( \frac{N r_c}{\zeta x} \right)^{1/2}
\left( \frac{v_c}{V_{\rm A}} \right)^{2} \,\, .
\label{eq:vperpsh}
\end{equation}
Note that there is no actual dependence on $\rho_0$, since
the explicit factor of $\rho_0$ in the denominator is canceled
by the density dependence of $V_{\rm A}^{-2}$.
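The cancellation of $\rho_0$ noted above can be confirmed numerically: doubling the ambient density leaves the amplitude ratio of equation (\ref{eq:vperpsh}) unchanged. All parameter values below are hypothetical illustrations.

```python
from math import pi, sqrt

def vperp_over_vc(rho_c, rho_0, N, r_c, zeta, x, v_c, B_0):
    """Eq. [vperpsh]: v_perp/v_c = 0.1715 (rho_c/rho_0)
    * sqrt(N r_c / (zeta x)) * (v_c/V_A)^2, with ambient V_A."""
    V_A = B_0 / sqrt(4.0 * pi * rho_0)
    return (0.1715 * (rho_c / rho_0)
            * sqrt(N * r_c / (zeta * x)) * (v_c / V_A)**2)

# Hypothetical parameters; doubling rho_0 should leave the ratio
# unchanged, since the explicit 1/rho_0 cancels against V_A^(-2),
# which is proportional to rho_0.
args = dict(rho_c=1.0e-12, N=4, r_c=1.0e9, zeta=1.5,
            x=1.0e11, v_c=2.0e7, B_0=1000.0)
ratio_a = vperp_over_vc(rho_0=1.0e-11, **args)
ratio_b = vperp_over_vc(rho_0=2.0e-11, **args)
```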
The wave amplitude derived in equation (\ref{eq:vperpsh}) is the
value at the shock impact height, which formally can be above
or below the photosphere.
The stellar wind models below, though, require the Alfv\'{e}n wave
amplitude to be specified exactly at the photosphere.
If we assume that the wave energy density
$U_{\rm A} = \rho v_{\perp}^{2}$ is conserved between the
shock height and the photosphere, then the densities at those
heights can be used to scale one to the other, with
\begin{equation}
v_{\perp \ast} \, = \, v_{\perp}
\left( \frac{\rho_{\rm sh}}{\rho_{\ast}} \right)^{1/2}
\label{eq:vpast}
\end{equation}
where $v_{\perp\ast}$ is the photospheric wave amplitude, and
the densities $\rho_{\ast}$ and $\rho_{\rm sh}$ were defined
above in {\S}~3.2.
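The density scaling of equation (\ref{eq:vpast}) amounts to holding $U_{\rm A}$ fixed between the two heights, which a small sketch makes explicit (densities below are hypothetical, cgs):

```python
from math import sqrt

def photospheric_amplitude(v_perp_sh, rho_sh, rho_star):
    """Eq. [vpast]: scale the wave amplitude from the shock height to
    the photosphere, assuming U_A = rho v_perp^2 is conserved."""
    return v_perp_sh * sqrt(rho_sh / rho_star)

# Hypothetical densities; the photosphere is denser, so the amplitude
# there is smaller for the same wave energy density.
rho_sh, rho_star = 3.0e-9, 2.0e-7
v_sh = 1.0e6                     # 10 km/s at the shock
v_star = photospheric_amplitude(v_sh, rho_sh, rho_star)
```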
Although the overall energy budget of accretion-driven waves is
treated under the assumption that the waves are Alfv\'{e}nic,
it is likely that the strongly nonlinear stream impacts also give
rise to compressible waves of some kind.
As mentioned above, the analysis of Scheurwater \& Kuijpers (1988)
did not take account of slow-mode MHD waves that, for parallel
propagation and a strong background field, are identical to
hydrodynamic acoustic waves.
There is evidence, however, that another highly nonlinear
MHD phenomenon---turbulent subsurface convection---gives
rise to both longitudinal (compressible) and transverse
(incompressible) MHD waves with roughly comparable energy
densities (e.g., Musielak \& Ulmschneider 2002).
Thus, the models below are given a photospheric source of
accretion-driven {\em acoustic waves} that are in energy
equipartition with the accretion-driven Alfv\'{e}n waves; i.e.,
$U_{s} = U_{\rm A}$.
The upward flux of acoustic waves is thus given by
$F_{s} = c_{s} U_{s}$, where $c_{s}$ is the sound
speed appropriate for $T_{\rm eff}$.
For both the acoustic and Alfv\'{e}n waves at the photosphere,
the accretion-driven component is added to the intrinsic
(convection-driven) component.
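The equipartition prescription can be written compactly. In the sketch below, the convective baseline of $10^8$ erg cm$^{-2}$ s$^{-1}$ is the solar value quoted in this section; the photospheric density, amplitude, and sound speed are hypothetical.

```python
def acoustic_flux(rho, v_perp, c_s):
    """Equipartition source: U_s = U_A = rho v_perp^2, so the upward
    acoustic flux is F_s = c_s U_s."""
    return c_s * rho * v_perp**2

def total_acoustic_flux(F_accretion, F_convection=1.0e8):
    """Accretion-driven flux added to the fixed convection-driven
    solar baseline (erg cm^-2 s^-1), which acts as a floor."""
    return F_convection + F_accretion

# Hypothetical photospheric values (cgs):
F_acc = acoustic_flux(rho=2.0e-7, v_perp=1.0e5, c_s=8.0e5)
F_tot = total_acoustic_flux(F_acc)
```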
A key assumption of this paper is that the convection-driven
component is held fixed, as a function of age, at the values
used by Cranmer et al.\ (2007) for the present-day Sun.
This results in minimum values for the Alfv\'{e}n wave amplitude
(0.255 km s$^{-1}$) and the acoustic wave flux
($10^8$ erg cm$^{-2}$ s$^{-1}$) below which the models never go.
There are hints that rapidly rotating young stars may undergo
more intense subsurface convection than the evolved slowly
rotating Sun (K\"{a}pyl\"{a} et al.\ 2007; Brown et al.\ 2007;
Ballot et al.\ 2007), but the implications of these additional
rotating Sun (K\"{a}pyl\"{a} et al.\ 2007; Brown et al.\ 2007;
Ballot et al.\ 2007), but the implications of these additional
variations with age are left for future work (see also {\S}~6).
\begin{figure}
\epsscale{1.08}
\plotone{cranmer_ttau_f05.eps}
\caption{Velocities related to accretion streams as a function
of age: free-fall clump speed ({\em thin solid line}), ambient
Alfv\'{e}n speed ({\em dot-dashed line}), photospheric sound
speed ({\em dotted line}).
Plotted Alfv\'{e}n wave amplitudes are those due to accretion
streams, measured at the shock ({\em dashed line}), and those due
to both accretion and convection, measured at the photosphere
({\em thick solid line}).}
\end{figure}
Figure 5 displays various velocity quantities used in the
accretion-driven wave scenario.
The clump free-fall speed $v_{c} = v_{\rm ff}$ always remains
smaller than the ambient Alfv\'{e}n speed $V_{\rm A}$, which
was an assumption that Scheurwater \& Kuijpers (1988) had to make
in order for the background magnetic field to remain relatively
undisturbed by the clumps.
The accretion-driven Alfv\'{e}n wave amplitude at the shock is
larger than that measured at the photosphere (see
eq.~[\ref{eq:vpast}]), and at ages later than about 60 Myr
the accretion-driven waves at the photosphere grow weak enough
to be overwhelmed by the convection-driven waves.
For a limited range of younger ages
($2 \times 10^{4} < t < 5 \times 10^5$ yr) the Alfv\'{e}n wave
motions are supersonic in the photosphere, with a peak value
of $v_{\perp\ast} = 18$ km s$^{-1}$ at $t = 5 \times 10^{4}$ yr.
The sharp decrease in wave amplitude at the youngest ages is
due to the inner edge of the accretion disk coming closer to
the stellar surface.
In that limit, the ballistic infall speed $v_{\rm ff}$
becomes small and the latitudinal distance $x$ traversed by
the waves becomes large, thus leaving negligible energy in
the waves once they reach the pole.
Finally, it is worthwhile to sum up how the above values depend
on the relatively large number of assumptions made about the
accretion streams.
First, the use of an aligned dipole field for the accretion
streams, the K\"{o}nigl (1991) expression for $r_{\rm in}$,
and the ``string of pearls'' model of clumps along the flux
tubes are all somewhat simplistic and should be replaced by
more realistic conditions in future work.
Second, there are three primary parameters that had to be
specified in order to determine numerical values for the
accretion properties. These are:
(1) the relative size of the outer disk radius with respect to
the inner disk radius; i.e., $\epsilon = 0.1$, (2) the clump
overdensity ratio $\rho_{c} / \langle\rho\rangle = 3$, and
(3) the magnetic field strength at the base of the accretion
streams $B_{0} = 1000$ G.
Third, probably the most idealistic assumption in the modeled
evolutionary sequence is the use of a single monotonic relation
for $\dot{M}_{\rm acc}$ versus $t$ (e.g., eq.~[\ref{eq:Maccfit}]).
{\S}~6 describes how these assumptions can be relaxed in
subsequent modeling efforts.
\section{Implementation in Stellar Wind Models}
The steady-state outflow models presented here are numerical
solutions to one-fluid conservation equations for mass, momentum,
and energy along a polar flux tube.
For the specific case of the solar wind, Cranmer et al.\ (2007)
presented these equations and described their self-consistent
numerical solution using a computer code called ZEPHYR.
The T Tauri wind models are calculated with an updated version of
ZEPHYR, with specific differences from the solar case described
below.
\subsection{Conservation Equations and Input Physics}
The equation of mass conservation along a magnetic flux tube is
\begin{equation}
\frac{1}{A} \frac{\partial}{\partial r} \left( \rho u A \right)
\, = \, 0
\label{eq:drhodt}
\end{equation}
where $u$ and $\rho$ are the wind speed and mass density specified
as functions of radial distance $r$, and $A$ is the cross-sectional
area of the polar flux tube.
Magnetic flux conservation demands that the product $B_{0}A$ is
constant along the flux tube, where $B_{0}(r)$ is the field strength
that is specified explicitly.
The equation of momentum conservation is
\begin{equation}
u \frac{\partial u}{\partial r}
+ \frac{1}{\rho} \frac{\partial P}{\partial r} \, = \,
- \frac{GM_{\ast}}{r^2} + D
\end{equation}
where $P$ is the gas pressure and $D$ is the bulk acceleration
on the plasma due to {\em wave pressure;} i.e., the nondissipative
net ponderomotive force due to wave propagation through the
inhomogeneous medium (Bretherton \& Garrett 1968; Belcher 1971;
Jacques 1977).
A complete expression for $D$ in the presence of damped acoustic
and Alfv\'{e}n waves was given by Cranmer et al.\ (2007).
The simpler limit of wave pressure due to dissipationless
Alfv\'{e}n waves is discussed in more detail in {\S\S}~5.2--5.3.
For the pure hydrogen plasma assumed here, the gas pressure is
given by $P = (1 + x) \rho k_{\rm B} T / m_{\rm H}$, where $x$ is
the hydrogen ionization fraction.
Note that although the pressure is calculated for a hydrogen gas,
the radiative cooling rate $Q_{\rm rad}$ used in the energy equation
is dominated by metals.\footnote{%
Although this is formally inconsistent, the resulting properties of
the plasma are not expected to be far from values computed with a
more accurate equation of state.}
Cranmer et al.\ (2007) tested the ZEPHYR code with two separate
assumptions for the ionization balance.
First, a self-consistent, but computationally intensive solution
was used, which implemented a three-level hydrogen atom.
In that model, the $n=1$ and $n=2$ levels were assumed to remain in
relative local thermodynamic equilibrium (LTE) and the full rate
equation between $n=2$ and the continuum was solved iteratively.
Second, a simpler approach used a tabulated ionization fraction
$x(T)$ taken from a semi-empirical non-LTE model of the solar
photosphere, chromosphere, and transition region
(e.g., Avrett \& Loeser 2008).
For both the solar and T Tauri star applications, the results for
the two cases were extremely similar, and thus the simpler
tabulated function was used in the models described below.
For solar-type winds, the key equation for both heating the corona
and setting the mass loss rate is the conservation of internal energy,
\begin{equation}
u \frac{\partial E}{\partial r}
+ \left( \frac{E+P}{A} \right)
\frac{\partial}{\partial r} \left( u A \right)
\, = \, Q_{\rm rad} + Q_{\rm cond} + Q_{\rm A} + Q_{\rm S}
\label{eq:dEdt}
\end{equation}
where $E$ is the internal energy density and the terms on the
right-hand side are volumetric heating/cooling rates due to
radiation, conduction, Alfv\'{e}n wave damping, and acoustic (sound)
wave damping.
The terms on the left-hand side that depend on $u$ are responsible
for enthalpy transport and adiabatic cooling.
In a partially ionized plasma, the definition of $E$ is
convention-dependent; we use the same one as
Ulmschneider \& Muchmore (1986) and Mullan \& Cheng (1993),
which is $E = (3P/2) + x \rho I_{\rm H} / m_{\rm H}$, where
$I_{\rm H} = 13.6$ eV.
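The equation of state for the pure-hydrogen plasma can be transcribed directly (cgs units; the physical constants are standard values):

```python
K_B = 1.380649e-16    # Boltzmann constant, erg K^-1
M_H = 1.6726e-24      # hydrogen mass, g
I_H = 2.179e-11       # hydrogen ionization energy (13.6 eV), erg

def gas_pressure(rho, T, x):
    """P = (1 + x) rho k_B T / m_H for a pure hydrogen plasma with
    ionization fraction x."""
    return (1.0 + x) * rho * K_B * T / M_H

def internal_energy(rho, T, x):
    """E = 3P/2 + x rho I_H / m_H (convention of Ulmschneider &
    Muchmore 1986; Mullan & Cheng 1993)."""
    return 1.5 * gas_pressure(rho, T, x) + x * rho * I_H / M_H
```

Full ionization doubles the pressure relative to the neutral gas at the same $\rho$ and $T$, and the ionization term adds to $E$ without affecting $P$.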
The net heating/cooling from conduction ($Q_{\rm cond}$) is given
by a gradual transition between classical Spitzer-H\"{a}rm
conductivity (when the electron collision rate is fast compared
to the wind expansion rate) and Hollweg's (1974, 1976) prescription
for free-streaming heat flux in the collisionless heliosphere
(when electron collisions are negligible).
All of these terms were described in more detail by
Cranmer et al.\ (2007).
The volumetric radiative heating/cooling rate $Q_{\rm rad}$ has
been modified slightly from the earlier solar models.
The solar rate is first computed as in Cranmer et al.\ (2007),
but then it is multiplied by a correction factor similar to that
suggested by Hartmann \& MacGregor (1980) for massive,
{\em optically thick} chromospheres of late-type stars.
The correction factor $f$ is assumed to be proportional to the
escape probability $P_{\rm esc}$ for photons in the core of
the \ion{Mg}{2} $\lambda$2803 ($h$) and $\lambda$2796 ($k$) lines.
A simple expression that bridges the optically thin and thick
limits, for a line with Voigt wings, is
\begin{equation}
f \, = \, 2 P_{\rm esc} \, \approx \,
\frac{1}{1 + \tau_{\rm hk}^{1/2}}
\end{equation}
(e.g., Mihalas 1978), which may err on the side of overestimating
the escape probability and thus would give a conservative
undercorrection to $Q_{\rm rad}$.
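The bridging formula has the expected limiting behavior, which a one-line transcription makes easy to check: $f \rightarrow 1$ as $\tau_{\rm hk} \rightarrow 0$ and $f \rightarrow \tau_{\rm hk}^{-1/2}$ for large optical depth.

```python
from math import sqrt

def opacity_correction(tau_hk):
    """f = 2 P_esc ~ 1 / (1 + tau_hk^(1/2)); approaches 1 in the
    optically thin limit and tau^(-1/2) when tau >> 1."""
    return 1.0 / (1.0 + sqrt(tau_hk))
```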
Hartmann \& MacGregor (1980) assumed that the optical depth of
low-ionization metals would scale as $g^{-1/2}$, where $g$ is the
stellar surface gravity.
Here, we compute the optical depth in the core of the $h$ and $k$
lines ($\tau_{\rm hk}$) more exactly by integrating over the radial
grid of density and temperature in the wind model,
\begin{equation}
\tau_{\rm hk} \, = \, \int \chi_{\rm hk}
\left( \frac{R_{\ast}}{r} \right)^{2} dr \,\, ,
\end{equation}
where we include the spherical correction factor suggested by
Lucy (1971, 1976) for extended atmospheres.
The line-center extinction coefficient is given approximately by
\begin{equation}
\chi_{\rm hk} \, = \, 0.0153 \,
\left( \frac{n_{\rm Mg \, II}}{10^{10} \,\, \mbox{cm}^{-3}} \right)
\left( \frac{T}{10^{4} \,\, \mbox{K}} \right)^{-1/2}
\, \mbox{cm}^{-1}
\end{equation}
and the number density $n_{\rm Mg \, II}$ of ground-state ions
is given by the product of the hydrogen number density
($\rho / m_{\rm H}$), the Mg abundance with respect to hydrogen
($3.4 \times 10^{-5}$; Grevesse et al.\ 2007), and the ionization
fraction of Mg$^{+}$ with respect to all Mg species.
For temperatures in excess of $10^4$ K, the latter is given by
coronal ionization equilibrium (Mazzotta et al.\ 1998); for
temperatures below $10^4$ K, the coronal equilibrium curve is
matched onto a relation determined from LTE Saha ionization
balance.
Following Hartmann \& MacGregor (1980), the opacity correction
factor $f$ is ramped up gradually as a function of temperature.
When $T < 7160$ K, $Q_{\rm rad}$ is multiplied by the full
factor $f$.
For $7160 < T < 8440$ K, the multiplier is varied by taking
$f^{(8440-T)/1280}$, which brings the correction smoothly to unity
by the time the temperature rises to 8440 K.
For temperatures in excess of 8440 K, the correction factor is
not used.
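The full $Q_{\rm rad}$ correction pipeline can be sketched as follows (Python, cgs). The extinction coefficient and temperature ramp are direct transcriptions of the expressions above; the trapezoidal quadrature for $\tau_{\rm hk}$ is one simple way to evaluate the integral over the radial grid, not necessarily the scheme used in ZEPHYR.

```python
def chi_hk(n_mg2, T):
    """Line-center extinction coefficient (cm^-1) of the Mg II h and k
    lines: 0.0153 * (n_MgII / 1e10 cm^-3) * (T / 1e4 K)^(-1/2)."""
    return 0.0153 * (n_mg2 / 1.0e10) * (T / 1.0e4)**-0.5

def tau_hk(r, chi, R_star):
    """Trapezoidal integral of chi * (R_star/r)^2 dr over the radial
    grid (including the Lucy spherical correction factor)."""
    tau = 0.0
    for i in range(len(r) - 1):
        f0 = chi[i] * (R_star / r[i])**2
        f1 = chi[i + 1] * (R_star / r[i + 1])**2
        tau += 0.5 * (f0 + f1) * (r[i + 1] - r[i])
    return tau

def qrad_multiplier(f, T):
    """Temperature ramp for the opacity correction: the full factor f
    below 7160 K, f^((8440-T)/1280) in between, and no correction
    (unity) above 8440 K."""
    if T < 7160.0:
        return f
    if T < 8440.0:
        return f ** ((8440.0 - T) / 1280.0)
    return 1.0
```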
The evolution and damping of Alfv\'{e}n and acoustic waves---which
affect both $D$ in the momentum equation and $Q_{\rm A}$ and
$Q_{\rm S}$ in the energy equation---are described in much more
detail by Cranmer et al.\ (2007).
Given a specified frequency spectrum of acoustic and Alfv\'{e}n wave
power in the photosphere, equations of wave action conservation are
solved to determine the radial evolution of the wave amplitudes.
Acoustic waves are damped by both heat conduction and entropy gain
at shock discontinuities.
Alfv\'{e}n waves are damped by MHD turbulence, for which we only
specify the net transport of energy from large to small eddies,
assuming the cascade must terminate in an irreversible conversion
of wave energy to heat.
Coupled with the wave action equations are also non-WKB transport
equations to determine the degree of linear reflection of the
Alfv\'{e}n waves (e.g., Heinemann \& Olbert 1980).
This is required because the turbulent dissipation rate depends on
differences in energy density between upward and downward
traveling waves (see also Matthaeus et al.\ 1999;
Dmitruk et al.\ 2001, 2002; Cranmer \& van Ballegooijen 2005).
The resulting values of the Elsasser amplitudes $Z_{\pm}$, which
denote the energy contained in upward ($Z_{-}$) and downward
($Z_{+}$) propagating waves, were then used to constrain the
energy flux in the cascade.
The Alfv\'{e}n wave heating rate (erg s$^{-1}$ cm$^{-3}$) is
given by a phenomenological relation that has evolved from analytic
studies and numerical simulations; i.e.,
\begin{equation}
Q_{\rm A} \, = \, \rho \, \left(
\frac{1}{1 + t_{\rm eddy}/t_{\rm ref}} \right) \,
\frac{Z_{-}^{2} Z_{+} + Z_{+}^{2} Z_{-}}{4 \ell_{\perp}}
\label{eq:Qdmit}
\end{equation}
(see also Hossain et al.\ 1995; Zhou \& Matthaeus 1990;
Oughton et al.\ 2006).
The transverse length scale $\ell_{\perp}$ is an effective
perpendicular correlation length of the turbulence, and
Cranmer et al.\ (2007) used a standard assumption that
$\ell_{\perp}$ scales with the cross-sectional width of the
flux tube (Hollweg 1986).
The term in parentheses in equation (\ref{eq:Qdmit}) is an
efficiency factor that accounts for situations in which the
turbulent cascade does not have time to develop before the waves
or the wind carry away the energy (Dmitruk \& Matthaeus 2003).
The cascade is quenched when the nonlinear eddy time scale
$t_{\rm eddy}$ becomes much longer than the macroscopic wave
reflection time scale $t_{\rm ref}$.
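Equation (\ref{eq:Qdmit}) transcribes to a few lines; the coronal values below are hypothetical illustrations. Note that the cascade rate vanishes if either Elsasser population is absent, and is quenched when $t_{\rm eddy} \gg t_{\rm ref}$.

```python
def alfven_heating(rho, Z_minus, Z_plus, ell_perp, t_eddy, t_ref):
    """Eq. [Qdmit]: Q_A = rho * [1 / (1 + t_eddy/t_ref)]
    * (Z_-^2 Z_+ + Z_+^2 Z_-) / (4 ell_perp)."""
    efficiency = 1.0 / (1.0 + t_eddy / t_ref)
    return (rho * efficiency
            * (Z_minus**2 * Z_plus + Z_plus**2 * Z_minus)
            / (4.0 * ell_perp))

# Hypothetical coronal values (cgs):
Q_fast = alfven_heating(1.0e-16, 1.0e7, 1.0e6, 1.0e9,
                        t_eddy=10.0, t_ref=100.0)
Q_slow = alfven_heating(1.0e-16, 1.0e7, 1.0e6, 1.0e9,
                        t_eddy=1000.0, t_ref=100.0)
```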
\subsection{Model Inputs and Numerical Procedures}
The ZEPHYR code was designed to utilize as few free parameters
as possible.
For example, the coronal heating rate and the spatial length
scales of wave dissipation are computed {\em internally} from
straightforward physical principles and are not input as
adjustable parameters.
However, there are quantities that do need to be specified
prior to solving the equations given in {\S}~4.1.
An important input parameter to ZEPHYR is the radial dependence
of the background magnetic field $B_{0}(r)$.
Although many stellar field-strength measurements have been made,
relatively little is known about how rapidly $B_0$ decreases
with increasing height above the photosphere, or how fragmented
the flux tubes become on the granulation scale.
Some information about this kind of structure is contained in
``filling factors'' that can be determined observationally
(e.g., Saar 2001).
However, these measurements may be biased toward the bright
closed-field active regions and not the footpoints of stellar
wind streams.
Thus, as in other areas of this study, the present-day solar case
was adopted as a baseline on which to vary the evolving properties
of the 1 $M_{\odot}$ model star.
\begin{figure}
\epsscale{1.08}
\plotone{cranmer_ttau_f06.eps}
\caption{Background magnetic field $B_{0}$ over the stellar poles as a
function of the height above the photosphere (measured in
stellar radii).
Present-day solar field strength ({\em thick solid line}) is
compared with the field at ages $\log_{10} t = 5.5$
({\em dotted line}), 6.0 ({\em dot-dashed line}), and 6.5
({\em dashed line}).
Also shown is an ideal dipolar field having the same strength
at the photosphere as the other cases ({\em thin solid line}).}
\end{figure}
The multi-scale polar coronal hole field of
Cranmer \& van Ballegooijen (2005) was used as the starting
point for the younger T Tauri models.
The lower regions of that model were derived for a magnetostatic
balance between the strong-field (low density) solar wind flux
tube and the weak-field (high density) surrounding atmosphere.
The radial dependence of magnetic pressure $B_{0}^{2}/8\pi$
thus scales with the gas pressure $P$.
In the T Tauri models, then, the lower part of the magnetic field
was stretched in radius in proportion with the pressure scale
height $H_{\ast}$.
Figure 6 shows $B_{0}(r)$ for several ages, along with an
idealized dipole that does not take account of the lateral
expansion of the flux tube close to the surface.
In addition to the global magnetic field strength, the ZEPHYR
models also require three key wave-driving parameters to be
specified at the lower boundary:
\begin{enumerate}
\item
The {\em photospheric acoustic flux} $F_{s}$ mainly affects
the heating at chromospheric temperatures ($T \sim 10^{4}$ K).
The solar value of $10^8$ erg cm$^{-2}$ s$^{-1}$ was summed with
the accretion-driven acoustic flux as discussed in {\S}~3.4.
The power spectrum of acoustic waves was adopted from the solar
spectrum given in Figure 3 of Cranmer et al.\ (2007), and the
frequency scale was shifted up or down with the photospheric value
of the acoustic cutoff frequency,
$\omega_{\rm ac} = c_{s} / 2H_{\ast}$, which evolves over time.
\item
The {\em photospheric Alfv\'{e}n wave amplitude} $v_{\perp \ast}$
is specified instead of the upward energy flux $F_{\rm A}$ because
the latter depends on the cancellation between upward and downward
propagating waves that is determined as a part of the
self-consistent solution.
As discussed in {\S}~3.4, the solar value of 0.255 km s$^{-1}$
was supplemented by the accretion-driven wave component.
The shape of the Alfv\'{e}n wave frequency spectrum was kept fixed
at the solar model because it is unclear how it should scale with
varying stellar properties.
\item
The {\em photospheric Alfv\'{e}n wave correlation length}
$\ell_{\perp \ast}$ sets the scale of the turbulent heating rate
$Q_{\rm A}$ (eq.\ [\ref{eq:Qdmit}]).
Once this parameter is set, the value of $\ell_{\perp}$ at larger
distances is determined by the assumed proportionality with
$A^{1/2}$.
The solar value of 75 km (Cranmer et al.\ 2007) was evolved in
proportion with changes in the photospheric scale height $H_{\ast}$.
The justification for this is that the horizontal scale
of convective granulation is believed to be set by the scale height
(e.g., Robinson et al.\ 2004), and the turbulent mixing scale is
probably related closely to the properties of the granulation.
\end{enumerate}
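The scalings used for the three boundary parameters can be summarized in a short sketch. The solar values (75 km for $\ell_{\perp\ast}$) are those quoted above; the general scaling relations are the ones stated in the list, written out explicitly.

```python
def acoustic_cutoff(c_s, H_star):
    """Photospheric acoustic cutoff frequency, omega_ac = c_s / (2 H_*)."""
    return c_s / (2.0 * H_star)

def ell_perp_photosphere(H_star, H_sun, ell_sun=75.0e5):
    """Photospheric correlation length, evolved in proportion to the
    scale height from the solar value of 75 km (cgs)."""
    return ell_sun * (H_star / H_sun)

def ell_perp(ell_star, A, A_star):
    """At larger distances, ell_perp grows with the flux-tube
    width, i.e., in proportion to A^(1/2)."""
    return ell_star * (A / A_star)**0.5
```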
The numerical relaxation method used by ZEPHYR was discussed by
Cranmer et al.\ (2007).
In the absence of explicit specification here, the new T Tauri
wind models use the same parameters as given in that paper.
However, the new models use a slightly stronger form for the code's
iterative undercorrection than did the original solar models.
From one iteration to the next, ZEPHYR replaces old values with
a fractional step toward the newly computed values, rather than
using the new values themselves.
This technique was motivated by globally convergent backtracking
methods for finding roots of nonlinear equations (e.g.,
Dennis \& Schnabel 1983).
The solar models used a constant minimum undercorrection exponent
$\epsilon_{0} = 0.17$, as defined in equation (65) of
Cranmer et al.\ (2007).
The T Tauri models started with this value, but gradually decreased
it over time by multiplying $\epsilon_0$ by 0.97 after each iteration.
The value of $\epsilon_{0}$ was not allowed to be smaller than
an absolute minimum of 0.001.
This represented an additional kind of ``annealing'' that helped
the parameters reach their time-steady values more rapidly
and robustly.
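The annealed undercorrection idea can be illustrated on a toy fixed-point problem. The sketch below uses a simple fractional linear step; ZEPHYR's exact undercorrection form (eq.\ [65] of Cranmer et al.\ 2007) differs in detail, but the decay factor of 0.97 per iteration and the floor of 0.001 follow the description above.

```python
from math import cos

def relax(update, x0, eps0=0.17, decay=0.97, eps_min=0.001, n_iter=250):
    """Fixed-point iteration with a decaying undercorrection factor:
    each step moves only a fraction eps toward the newly computed
    value, and eps anneals downward by 0.97 per iteration, floored
    at an absolute minimum of 0.001."""
    x, eps = x0, eps0
    for _ in range(n_iter):
        x = x + eps * (update(x) - x)
        eps = max(eps * decay, eps_min)
    return x

# Toy problem: the damped iteration still finds the root of x = cos(x).
x_fix = relax(cos, 1.0)
```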
\section{Results}
\subsection{Standard Age Sequence}
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f07.eps}
\caption{Age-dependent model properties.
({\em{a}}) Mass loss rates of time-steady ZEPHYR models
({\em solid line with symbols}) compared with the Reimers (1975, 1977)
relation ({\em dashed line}), analytic/cold models of {\S}~5.3
({\em dotted line}), and adopted accretion rate
({\em dot-dashed line}).
({\em{b}}) Terminal wind speed for ZEPHYR models
({\em thick solid line with symbols}), surface escape speed
({\em dashed line}), photospheric sound speed ({\em dot-dashed line}),
and wind speed at the wave-modified critical point for ZEPHYR
models ({\em thin solid line}) and analytic/cold models
({\em dotted line}).
({\em{c}}) Peak temperatures of ZEPHYR models
({\em solid line with symbols}), temperatures at critical point
for analytic/cold models ({\em dotted line}), and stellar
$T_{\rm eff}$ ({\em dot-dashed line}).}
\end{figure}
A series of 24 ZEPHYR models was created with ages ranging
between $\log_{10} t = 5.65$ and 9.66.
These models all converged to steady-state mass-conserving wind
outflow solutions within 250 numerical iterations.
The relative errors in the energy equation, averaged over the radial
grid, were of order 1\% for the final converged models.
For ages younger than $\log_{10} t = 5.65$ (i.e., 0.45 Myr),
it was found that time-steady solutions to
equations (\ref{eq:drhodt})--(\ref{eq:dEdt}) do not exist, and the
best we can do is to estimate the mass flux that is driven up through
the Parker critical point (see {\S\S}~5.2--5.3 below).
Figure 7 presents a summary of various scalar properties for the
wind models as a function of age.
The mass loss rate $\dot{M}_{\rm wind}$, shown in Figure 7{\em{a}},
was calculated by multiplying the mass flux $\rho u$ at the largest
grid radius ($1200 \, R_{\ast}$ for all models) by the full spherical
area $4\pi r^{2}$ at that radius.
This is a slight overestimate, since the polar flux tubes only cover
a finite fraction of the stellar surface ($\delta_{\rm pol} < 1$).
However, it is unknown whether these regions expand outward or inward
(i.e., to larger or smaller solid angle coverage) with increasing
distance from the star.
The actual large-scale geometry is likely to depend on how the pressure
(gas and magnetic) in the field lines that thread the accretion disk
is able to confine or collimate the polar flux tubes.
As the age is decreased, $\dot{M}_{\rm wind}$ for the time-steady
models increases by four orders of magnitude, from the present-day solar
value of about $2 \times 10^{-14}$ $M_{\odot}$ yr$^{-1}$ up to
$4 \times 10^{-10}$ $M_{\odot}$ yr$^{-1}$ at the youngest modeled
age of 0.45 Myr.
Note that these values exceed the mass loss rates predicted by the
empirical scaling relation of Reimers (1975, 1977) at all ages (i.e.,
using $\dot{M}_{\rm wind} \propto L_{\ast} R_{\ast} / M_{\ast}$ and
normalizing to the present-day mass loss rate), but for
$t\gtrsim 100$ Myr there is rough agreement.
The stellar wind velocity at the largest radius is denoted as
an ``asymptotic'' or terminal speed $u_{\infty}$ and is shown
in Figure 7{\em{b}}.
The wind speed remains roughly constant for most of the later phase
of the evolution (with $u_{\infty} \approx V_{\rm esc}$), but it
drops precipitously in the youngest models.
The dominant physical processes that drive the evolving stellar winds
are revealed when examining the maximum temperatures in the models,
as shown in Figure 7{\em{c}}.
The older ($t \gtrsim 1.75$ Myr) models with high wind speeds and
solar-like mass loss rates have hot coronae, with peak temperatures
between $10^6$ and $2 \times 10^6$ K.
The younger models undergo a rapid drop in temperature, ultimately
leading to ``extended chromospheres'' with peak temperatures
around $10^4$ K.
This transition occurs because of a well-known {\em thermal
instability} in the radiative cooling rate $Q_{\rm rad}$
(e.g., Parker 1953; Field 1965; Rosner et al.\ 1978; Suzuki 2007).
At temperatures below about $10^5$ K, the cooling rate decreases
with decreasing $T$, and small temperature perturbations are
easily stabilized.
However, above $10^5$ K, $Q_{\rm rad}$ decreases as $T$ increases,
which gives any small increase in temperature an unstable
runaway toward larger values.
This is the same instability that helps lead to the sharp transition
region between the $10^4$ K solar chromosphere and the $10^6$ K corona
(see also Hammer 1982; Withbroe 1988; Owocki 2004).
The reason that the young and old models in Figure 7 end up on
opposite sides of the thermal instability is that the total rates
of energy input (i.e., the wave heating rates $Q_{\rm A}$ and
$Q_{\rm S}$) vary strongly as a function of age.
The older models have lower wave amplitudes and thus weaker
heating rates, which leads to relatively low values of
$\dot{M}_{\rm wind}$.
The correspondingly low atmospheric densities in these models give
rise to weak radiative cooling (because $Q_{\rm rad} \propto \rho^{2}$)
that cannot suppress the coronal heating.
The coronal winds are driven by comparable contributions from gas
pressure and wave pressure.
On the other hand, the younger models have larger wave amplitudes,
more energy input (as well as more wave pressure, which expands the
density scale height), and thus more massive winds with
stronger radiative cooling.
These chromospheric winds are driven mainly by wave pressure.
Related explanations for cool winds from low-gravity stars
have been discussed by, e.g., Antiochos et al.\ (1986),
Rosner et al.\ (1991, 1995), Willson (2000), Killie (2002), and
Suzuki (2007).
The processes that determine the quantitative value of
$\dot{M}_{\rm wind}$ are different on either side of the
thermal bifurcation.
The mass flux for the older, solar-like models can be explained well
by the energy balance between heat conduction, radiative losses,
and the upward enthalpy flux.
The atmosphere finds the height of the transition region that
best matches these sources and sinks of energy in a time-steady way,
and the resulting gas pressure at this height sets the mass flux.
Various analytic solutions for this balance have been given by
Hammer (1982), Withbroe (1988), Leer et al.\ (1998), and
Cranmer (2004).
The younger, more massive wind models are found to approach the
limit of ``cold'' wave-driven outflows, where the Alfv\'{e}n wave
pressure replaces the gas pressure as the primary means of
canceling out the stellar gravity.
In this limit, the analytic approach of Holzer et al.\ (1983),
discussed further in {\S}~5.3, has been shown to provide a
relatively simple estimate for the mass loss rate.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f08.eps}
\caption{Additional age-dependent model properties.
({\em{a}}) Wave-modified critical radii and Alfv\'{e}n radii
({\em upper solid lines, labeled}), ratio of sound speed to
wind speed at critical radius ({\em dot-dashed line}), and
ratio of wind luminosity to photon luminosity, measured at
the largest modeled radius ({\em lower solid line}) and at
the critical point ({\em dashed line}).
Results for the analytic/cold models are also shown
({\em dotted lines}).
({\em{b}}) Areas denote the relative contribution of terms to the
energy conservation equation at the critical point; see labels.
($Q_{\rm adv}$ denotes the advection terms on the left-hand
side of equation (\ref{eq:dEdt}); other terms are defined
in the text.) Also plotted is the exponent $\xi$ versus age
({\em dot-dashed line}).}
\end{figure}
Figure 8 illustrates a selection of other parameters of the ZEPHYR
models.
The heights of the wave-modified critical point (see
eq.~[\ref{eq:ucrit}] below) and the Alfv\'{e}n point (where
$u = V_{\rm A}$) are shown in units of the stellar radius.
The ratio of the isothermal sound speed $a$ to the wind speed at
the critical point is also shown.
This ratio is close to unity for the older, less massive models
(indicating that gas pressure dominates the wind acceleration) and
falls below unity for the younger (more wave-driven) models.
Note also that the older ``coronal'' models have critical points in
the sub-Alfv\'{e}nic wind ($u < V_{\rm A}$) and the younger
``extended chromosphere'' models have larger critical radii that
are in the super-Alfv\'{e}nic ($u > V_{\rm A}$) part of the wind.
Also plotted in Figure 8{\em{a}} is the so-called wind luminosity
$L_{\rm wind}$, which we estimate from the sum of the energy required
to lift the wind out of the gravitational potential and the
remaining kinetic energy of the flow, i.e.,
\begin{equation}
L_{\rm wind} \, = \, \dot{M}_{\rm wind} \left(
\frac{GM_{\ast}}{R_{\ast}} + \frac{u^2}{2} \right)
\label{eq:Lwind}
\end{equation}
(e.g., Clayton 1966).
This expression ignores thermal energy, ionization energy,
magnetic energy, and waves, which are expected to be small
contributors at and above the critical point.
Curves for the ratio $L_{\rm wind}/L_{\ast}$ are shown both for
the wind at its largest height (i.e., using $u_{\infty}$ in
eq.~[\ref{eq:Lwind}]) and at the critical point (using
$u_{\rm crit}$).
The latter is helpful to compare with analytic estimates for
younger ages described below---for which nothing above the
critical point was computed.
Figure 8{\em{b}} shows the terms that dominate the energy
conservation equation (at the critical point) as a function of age.
The areas plotted here were computed by normalizing the absolute
values of the individual terms in equation (\ref{eq:dEdt}) by the
maximum absolute value for each model and then ``stacking'' them so
that together they fill the region between 0 and 1.
The older coronal models have a three-part balance between
Alfv\'{e}n wave heating, heat conduction, and the upward advection
of enthalpy due to the terms on the left-hand side of
equation (\ref{eq:dEdt}).
For the younger models, radiative losses become important because
of the higher densities at the critical point, and heat conduction
disappears because the temperatures are so low.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f09.eps}
\caption{Radial dependence of wind parameters for three selected
ZEPHYR models:
({\em{a}}) wind outflow velocity, ({\em{b}}) temperature, and
({\em{c}}) total hydrogen number density.
In all panels, models are shown for ages $\log_{10} t = 6.0$
({\em dotted lines}), 6.5 ({\em dashed lines}), and 9.66
({\em solid lines}).
Wave-modified critical radii ({\em circles}) and
Alfv\'{e}n radii ({\em triangles}) are also shown.}
\end{figure}
Figure 9 displays the radial dependence of wind speed, temperature,
and density for three selected ages: (1) the present-day polar
outflow of the Sun, (2) a younger, but still low-$\dot{M}_{\rm wind}$
coronal wind, and (3) an even younger model that has made the
transition to a higher mass loss rate and an extended chromosphere.
The plotted hydrogen number density $n_{\rm H}$ is that of all
hydrogen nuclei (both neutral and ionized).
Note that the model with the lowest temperature would be expected
to have a small density scale height (eq.~[\ref{eq:Hast}]) and
thus a rapid radial decline in $n_{\rm H}$.
However, Figure 9{\em{c}} shows the opposite to be the case:
the coolest model has the largest {\em effective} density scale height
of the three because of a much larger contribution from wave pressure.
There are two additional points of comparison with other work that
should be made in the light of the main results shown in Figures 7--9.
\begin{enumerate}
\item
There is a relatively flat age dependence of the predicted mass loss
rate for the post-T~Tauri phase (i.e., zero-age main sequence [ZAMS]
stars with $t \gtrsim 50$ Myr).
This stands in contrast to the observationally inferred power-law
decrease for these stages, which is approximately
$\dot{M}_{\rm wind} \propto t^{-2}$ (e.g., Wood et al.\ 2002, 2005).
Note, however, that the internal (convection-related) source of
MHD wave energy at the photosphere was assumed in the models to be
set at the present-day solar level and not to vary with age.
As described in {\S}~6, the inclusion of {\em rotation-dependent}
convection may give rise to a stronger variation in the
turbulent fluctuation amplitude with age, and thus also a more
pronounced age dependence of the rates of coronal heating and
mass loss.
\item
Suzuki (2007) modeled the extended atmospheres and winds of
low-gravity cool stars, and found that the onset of thermal
instability gave rise to large amounts of time variability.
These models often showed dense and cool shells that coexist with
spontaneously produced hot and tenuous ``bubbles.''
Suzuki (2007) noted that this dynamical instability began to occur
once the escape velocity $V_{\rm esc}$ (measured in the wind
acceleration region) dropped to the point of being of the same
order of magnitude as the sound speed corresponding to the
thermal instability temperature of $\sim 10^5$ K.
It is unclear, though, whether this variability is triggered only
for relatively moderate rates of turbulent energy input, like the
amplitudes derived by Suzuki (2007) from non-rotating convection
models.
The larger amplitudes used in the present models of accretion-driven
waves may be sufficient to drive the winds much more robustly
{\em past} the thermal instability and into time-steady
extended chromospheres (see also Killie 2002).
\end{enumerate}
\subsection{Disappearance of Time-Steady Solutions}
The ZEPHYR code could not find time-independent wind solutions for
ages younger than about 0.45 Myr (i.e., for accretion rates larger
than about $7 \times 10^{-8}$ $M_{\odot}$ yr$^{-1}$).
This was found {\em not} to be a numerical effect.
Instead, it is an outcome of the requirement that time-steady winds
must have a sufficient amount of outward acceleration (either due
to gas pressure or some other external forcing, like waves) to
drive material out of the star's gravitational potential well
to a finite coasting speed at infinity.
This was realized for {\em polytropic} gas-pressure-driven winds
very early on (Parker 1963; Holzer \& Axford 1970),
such that if the temperature decreases too rapidly with increasing
distance (i.e., with decreasing density) the wind would become
``stalled'' and not have a time-steady solution.
To illustrate how this effect occurs for the youngest, wave-driven
models, let us write and analyze an approximate equation of momentum
conservation.
For simplicity, the wind temperature $T$ is assumed to be constant,
and the radii of interest are far enough from the star such that
the flux-tube expansion can be assumed to be spherical, with
$A \propto r^{2}$.
Also, for these models the acoustic wave pressure can be ignored
(since compressive waves damp out rather low in the atmosphere), and
the radial behavior of the Alfv\'{e}n wave amplitude can be modeled
roughly in the dissipationless limit.
Thus, one can follow Jacques (1977) and write the
momentum equation as a modified critical point equation
\begin{equation}
\left( u - \frac{u_{\rm crit}^2}{u} \right) \frac{du}{dr} \, = \,
- \frac{GM_{\ast}}{r^2} + \frac{2 u_{\rm crit}^{2}}{r} \,\, .
\label{eq:ucold}
\end{equation}
At the wave-modified critical point ($r_{\rm crit}$), the wind speed
$u$ equals the critical speed $u_{\rm crit}$, which is defined as
\begin{equation}
u_{\rm crit}^{2} \, = \, a^{2} + \frac{v_{\perp}^2}{4} \left(
\frac{1 + 3 M_{\rm A}}{1 + M_{\rm A}} \right)
\label{eq:ucrit}
\end{equation}
where the squared isothermal sound speed is
$a^{2} = (1+x) k_{\rm B}T/m_{\rm H}$,
and the bulk-flow Alfv\'{e}n Mach number $M_{\rm A} = u/V_{\rm A}$.
A more general version of equation (\ref{eq:ucold}) that also contains
damping and acoustic wave pressure is given in, e.g., {\S}~6 of
Cranmer et al.\ (2007).
The above expressions show how the outward pressure, which balances
gravity at the critical point, can be dominated either by $a^2$
(gas pressure) or by a term proportional to $v_{\perp}^{2}$
(wave pressure).
For gas pressure that can be described as a polytrope (i.e.,
$a^{2} \propto \rho^{\gamma-1}$), the polytropic index $\gamma$ at
the critical point must be smaller than 1.5 in order for there to
be a time-steady acceleration from the critical point to infinity
(Parker 1963; see also Velli 2001; Owocki 2004).
Larger polytropic indices $\gamma \geq 1.5$ imply that $a^2$ drops
too rapidly with increasing distance to provide sufficient acceleration
for a parcel accelerated through the critical point to escape
to infinity.
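To make the transonic behavior of equation (\ref{eq:ucold}) concrete, the sketch below (an illustration added here, not taken from the original work) integrates the momentum equation outward in dimensionless units, holding $u_{\rm crit}$ constant with radius as a simplifying assumption; applying L'H\^{o}pital's rule at the critical point gives the slope $du/dr = u_{\rm crit}/r_{\rm crit}$ used to step off the singularity.

```python
# Integrate (u - uc^2/u) du/dr = -GM/r^2 + 2 uc^2/r outward from the
# wave-modified critical point, in units with GM = uc = 1, so that
# r_crit = GM / (2 uc^2) = 0.5.  Holding uc constant with radius is a
# simplifying assumption made only for this illustration.
GM, uc = 1.0, 1.0
r_crit = GM / (2.0 * uc**2)

def dudr(r, u):
    return (-GM / r**2 + 2.0 * uc**2 / r) / (u - uc**2 / u)

h = 5.0e-4
r = r_crit + h
u = uc + (uc / r_crit) * h       # step off using the L'Hopital slope
speeds = [u]
while r < 20.0 * r_crit:         # RK4 march out to 20 critical radii
    k1 = dudr(r, u)
    k2 = dudr(r + 0.5*h, u + 0.5*h*k1)
    k3 = dudr(r + 0.5*h, u + 0.5*h*k2)
    k4 = dudr(r + h, u + h*k3)
    u += (h / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
    r += h
    speeds.append(u)
```

The resulting $u(r)$ accelerates monotonically through and beyond the critical point, the behavior expected when the effective pressure declines slowly enough with distance.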
For winds dominated by wave pressure, it is possible to use the
equation of wave action conservation (eq.~[\ref{eq:wact}])
to examine the density dependence of the wave amplitude in a similar
way as above (e.g., Jacques 1977; Heinemann \& Olbert 1980;
Cranmer \& van Ballegooijen 2005).
The exponent $\xi$ in the scaling relation
$v_{\perp} \propto \rho^{\xi}$ is known to be a slowly varying
function of distance.
Close to the star, where $u \ll V_{\rm A}$, the exponent
$\xi \approx -0.25$.
In the vicinity of the Alfv\'{e}n point ($u \approx V_{\rm A}$),
$\xi$ increases to zero and $v_{\perp}(r)$ has a local maximum.
Far from the star, where $u \gg V_{\rm A}$, the exponent $\xi$
grows toward an asymptotic value of $+0.25$.
The dimensionless quantity in parentheses in equation (\ref{eq:ucrit})
varies only between 1 and 3 over the full range of distances, and can
be assumed to be roughly constant compared to the density
dependence of $v_{\perp}$.
Thus, by comparing the polytropic, gas-pressure dominated expression
for $u_{\rm crit}^2$ to the wave-pressure dominated version, it
becomes possible to write an {\em effective polytropic index} for
the latter case as $\gamma_{\rm eff} = 2\xi + 1$.
The unstable region of $\gamma \geq 1.5$ corresponds to $\xi \geq 0.25$,
and this value is reached at the critical point only when
$r_{\rm crit}$ is well into the super-Alfv\'{e}nic part of the wind.
Indeed, this corresponds to the youngest models shown in Figures 7--9.
As $\dot{M}_{\rm wind}$ increases (with decreasing age), the density
in the wind increases, and this leads to a sharp decline in the value
of $V_{\rm A}$ at the critical point.
(Just over the span of ages going from $\log_{10} t = 6.25$ to 6.0,
$V_{\rm A}$ at the critical point drops by a factor of 150.)
Figure 8{\em{b}} shows $\xi$ versus age for the ZEPHYR models, where
this exponent was first computed for all heights via
\begin{equation}
\xi \, = \, \frac{\partial (\ln v_{\perp}) / \partial r}
{\partial (\ln \rho) / \partial r}
\end{equation}
and the value shown is that interpolated to the location of the
wave-modified critical point.
It is clear that $\xi$ approaches the unstable limiting value of 0.25
just at the point where the time-steady solutions disappear.
What happens to a stellar outflow when ``too much'' mass is driven
up past the critical point to maintain a time-steady wind?
An isolated parcel of gas with an outflow speed $u = u_{\rm crit}$
at the critical point would be decelerated to stagnation at some
height above the critical point, and it would want to fall back
down towards the star.
In reality, this parcel would collide with other parcels that
are still accelerating, and a stochastic collection of shocked
clumps is likely to result.
Interactions between these parcels may result in an extra degree
of collisional heating that could act as an extended source of
gas pressure to help maintain a mean net outward flow.
Situations similar to this have been suggested to occur in the
outflows of both pulsating cool stars (e.g., Bowen 1988;
Willson 2000; Struck et al.\ 2004) and luminous blue variables
(Owocki \& van Marle 2008).
The models presented here suggest that the most massive stellar
winds ($\dot{M}_{\rm wind} \gtrsim 10^{-9}$ $M_{\odot}$ yr$^{-1}$)
of young T Tauri stars may exist in a similar kind of superposition
of outflowing and inflowing shells.
\subsection{Analytic Estimate of Wave-Driven Mass Loss}
For the youngest T Tauri models with (seemingly) no steady state,
it is possible to use an analytic technique to estimate how much
mass gets accelerated up to the wave-modified critical point.
As described above, it is not certain whether all of this mass can
be accelerated to infinity.
However, the ability to determine a mass flux that applies to the
finite region between the stellar surface and the critical radius
may be sufficient to predict many observed mass loss diagnostics.
Figure 8{\em{a}} showed that $r_{\rm crit} \gg R_{\ast}$
for these young ages, and thus the ``subcritical volume'' is
relatively large.
The modified critical point equation given above for the limiting
case of dissipationless Alfv\'{e}n waves has been studied for
several decades (e.g., Jacques 1977; Hartmann \& MacGregor 1980;
DeCampli 1981; Holzer et al.\ 1983; Wang \& Sheeley 1991).
Equation (\ref{eq:ucold}) can be solved for the critical point radius
by setting the right-hand side to zero, to obtain
\begin{equation}
r_{\rm crit} \, = \, \frac{GM_{\ast}}{2 u_{\rm crit}^2}
\,\, , \,\,\,\,\,
u_{\rm crit}^{2} \, = \, a^{2} + \frac{3 v_{\perp {\rm crit}}^2}{4}
\end{equation}
where it is assumed that $u \gg V_{\rm A}$ (or $M_{\rm A} \gg 1$) at
the critical point, which applies in the youngest T Tauri models.
In order to compute a value for the critical point radius, however,
we would have to know the value of $v_{\perp {\rm crit}}$, the
Alfv\'{e}n wave amplitude at the critical point.
No straightforward (a~priori) method was found to predict
$v_{\perp {\rm crit}}$ from the other known quantities derived
in {\S}~3.
Thus, we relied on an empirical fitting relation that was produced
from the 6 youngest ZEPHYR models (with $0.45 \leq t \leq 1.4$ Myr),
\begin{equation}
\frac{v_{\perp {\rm crit}}}{18.9 \,\, \mbox{km} \,\, \mbox{s}^{-1}}
\, = \, \left(
\frac{v_{\perp \ast}}{10 \,\, \mbox{km} \,\, \mbox{s}^{-1}}
\right)^{-2.37}
\end{equation}
where $v_{\perp \ast}$ is the Alfv\'{e}n wave amplitude at the
photosphere.
The fit is good to 5\% accuracy for the 6 youngest numerical models,
but it is not known whether this level of accuracy persists when
it is extrapolated to ages younger than $\sim$0.45 Myr.
The assumption that the Alfv\'{e}n waves are dissipationless allows
the conservation of wave action to be used, i.e.,
\begin{equation}
\frac{\rho v_{\perp}^{2} (u + V_{\rm A})^{2} A}{V_{\rm A}} \, = \,
\mbox{constant} \,\, .
\label{eq:wact}
\end{equation}
This can be simplified by noting that, for the youngest T Tauri
models, the photosphere always exhibits $u \ll V_{\rm A}$, and the
wave-modified critical point always exhibits $u \gg V_{\rm A}$.
Thus, with these assumptions, equation (\ref{eq:wact}) can be
applied at these two heights to obtain
\begin{equation}
\rho_{\ast} v_{\perp \ast}^{2} V_{{\rm A} \ast} A_{\ast}
\, \approx \,
\frac{\rho_{\rm crit} v_{\perp {\rm crit}}^{2}
u_{\rm crit}^{2} A_{\rm crit}}{V_{\rm A \, crit}} \,\, .
\end{equation}
The main unknown is the density $\rho_{\rm crit}$ at the critical
point, which can be solved for in terms of the other known quantities,
\begin{equation}
\left( \frac{\rho_{\rm crit}}{\rho_{\ast}} \right)^{3/2}
\, \approx \,
\frac{B_{\rm crit}^2}{4\pi \rho_{\ast} u_{\rm crit}^2}
\left( \frac{v_{\perp \ast}}{v_{\perp {\rm crit}}} \right)^{2}
\end{equation}
and thus the mass flux is determined uniquely at the critical point
(Holzer et al.\ 1983).
The mass loss rate is
$\dot{M}_{\rm wind} = \rho_{\rm crit} u_{\rm crit} A_{\rm crit}$,
where the critical flux-tube area $A_{\rm crit}$ is normalized such
that as $r \rightarrow \infty$, $A \rightarrow 4\pi r^{2}$.
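The chain of estimates above can be strung together numerically. In the sketch below, every input value (photospheric density, critical-point field strength and speed, wave amplitudes) is a hypothetical placeholder chosen for dimensional illustration only, not a value taken from the models:

```python
from math import pi

# Chain: r_crit = GM/(2 u_crit^2); then the wave-action relation
#   (rho_crit/rho_star)^(3/2) = B_crit^2 / (4 pi rho_star u_crit^2)
#                               * (v_perp_star / v_perp_crit)^2
# gives rho_crit, and Mdot = rho_crit * u_crit * 4 pi r_crit^2.
G, M_SUN, YEAR = 6.674e-8, 1.989e33, 3.156e7       # cgs

def mdot_wave_driven(rho_star, B_crit, u_crit, v_perp_star, v_perp_crit,
                     M_star=M_SUN):
    r_crit  = G * M_star / (2.0 * u_crit**2)       # cold, super-Alfvenic limit
    ratio32 = (B_crit**2 / (4.0 * pi * rho_star * u_crit**2)
               * (v_perp_star / v_perp_crit)**2)
    rho_crit = rho_star * ratio32 ** (2.0 / 3.0)
    mdot = rho_crit * u_crit * 4.0 * pi * r_crit**2          # g s^-1
    return rho_crit, mdot * YEAR / M_SUN                     # (g/cm^3, Msun/yr)

# Hypothetical placeholder inputs (cgs): rho_star = 3e-7 g/cm^3,
# B_crit = 0.05 G, u_crit = 50 km/s, v_perp_star = 10 km/s,
# v_perp_crit = 18.9 km/s.
rho_c, mdot = mdot_wave_driven(3.0e-7, 0.05, 5.0e6, 1.0e6, 1.89e6)
```

With these placeholder inputs the estimate lands in the broad range spanned by strongly wave-driven T Tauri winds, but the specific numbers carry no physical significance.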
Although the above provides a straightforward algorithm for estimating
$\dot{M}_{\rm wind}$ for a wave-driven wind, there is one other
unspecified quantity: the isothermal sound speed $a$ at the
critical point.
A completely ``cold'' wave-driven model would set $a = 0$, but
it was found that the extended chromospheres in the youngest ZEPHYR
models do contribute some gas pressure to the overall acceleration.
The present set of approximate models began with an initial guess
for the critical point temperature ($T_{\rm crit} \approx 10^4$ K)
and then iterated to find a more consistent value.
The iteration process involved alternating between solving for
$\dot{M}_{\rm wind}$ at the critical point (as described above)
and recomputing $T_{\rm crit}$ by assuming a balance between
Alfv\'{e}n-wave turbulent heating and radiative cooling at
the critical point.
The turbulent heating rate $Q_{\rm A}$ was computed as in equation
(\ref{eq:Qdmit}), and the cooling rate $Q_{\rm rad}$ was assumed to
remain proportional to its optically thin limit $\rho^{2} \Lambda(T)$
(i.e., ignoring the $\tau_{\rm hk}$ correction factor described in
{\S}~4.1).
The temperature at which this balance occurred was found by
inverting the tabulated radiative loss function $\Lambda(T)$,
shown in Figure 1 of Cranmer et al.\ (2007).
For each age---between 13.5 kyr and 0.45 Myr---this process was run
for 20 iterations, but in all cases it converged rapidly within the
first 5 iterations.
The results of this iterative estimation of $\dot{M}_{\rm wind}$,
$T_{\rm crit}$, and $u_{\rm crit}$ are shown by the dotted curves
in Figures 7 and 8.
The maximum mass loss rate of $2.4 \times 10^{-8}$
$M_{\odot}$ yr$^{-1}$ occurs at an age of about 40 kyr.
At the youngest ages, the mass loss rate declines because the
accretion-driven wave power also declines (see Fig.~5).
The most massive wind corresponds to the minimum value of the
temperature at the critical point, which is indistinguishable
from the stellar $T_{\rm eff}$ at that age.
At the oldest ages for which these solutions were attempted
($t \gtrsim 1.5$ Myr), the various approximations made above no
longer hold (e.g., the critical point is no longer
super-Alfv\'{e}nic) and the agreement between the analytic
estimates and the ZEPHYR models breaks down.
\subsection{Varying the Accretion Rate}
As shown in Figure 3 above, the measured values for $\dot{M}_{\rm acc}$
exhibit a wide spread around the mean relation that was used to
derive the properties of accretion-driven waves.
In order to explore how variability in the accretion rate affects
the resulting stellar wind, a set of additional ZEPHYR models was
constructed with a factor of two decrease and increase in
$\dot{M}_{\rm acc}$ in comparison to equation (\ref{eq:Maccfit}).
These models were constructed mainly for ages around the thermal
bifurcation at $\log_{10} t \approx 6.25$.
It was found that the older ``hot corona'' models were insensitive
to variations in the accretion rate, even at ages where the
accretion-driven component to $v_{\perp\ast}$ exceeded
the internal convective component.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f10.eps}
\caption{Age-dependent mass loss rates ({\em{a}}) and peak wind
temperatures ({\em{b}}) for ZEPHYR models produced with the
standard accretion rate from eq.~(\ref{eq:Maccfit})
({\em solid lines}), and accretion rates that are double
({\em dashed lines}) and half ({\em dotted lines}) the
standard values.
Individual ZEPHYR models are denoted by symbols.}
\end{figure}
Figure 10 compares the mass loss rate and maximum wind temperature
for the three sets of ZEPHYR models: the original standard
models, ones with half of the accretion rate given by
equation (\ref{eq:Maccfit}), and ones with double that rate.
The specific age at which the thermal bifurcation occurs changes by
only a small amount over the range of modeled accretion rates.
(Indeed, the standard model and the double-$\dot{M}_{\rm acc}$
model undergo thermal bifurcation at nearly the exact same age.)
The youngest and coolest models respond more strongly to the varying
accretion rate than do the older hot models.
The younger models with higher accretion rates have larger
photospheric MHD wave amplitudes, and thus they give rise to larger
mass loss rates and cooler extended chromospheres.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f11.eps}
\caption{Mass accretion and mass loss rates for the 3 sets of
ZEPHYR models shown in Figure 10 (with the same line styles) shown
versus the photospheric Alfv\'{e}n wave amplitude $v_{\perp\ast}$.
Mass loss rates for the analytic/cold models of {\S}~5.3 are also
shown for the standard accretion rate case ({\em dot-dashed line}).}
\end{figure}
Figure 11 shows how both $\dot{M}_{\rm acc}$ and
$\dot{M}_{\rm wind}$ vary as a function of the photospheric
Alfv\'{e}n wave amplitude $v_{\perp\ast}$.
This latter quantity is a key intermediary between the mid-latitude
accretion and the polar mass loss, and thus it is instructive to see
how both mass flux quantities scale with its evolving magnitude.
The three sets of ZEPHYR models from Figure 10 are also shown in
Figure 11, and it is noteworthy that there is {\em not} a simple
one-to-one correspondence between $\dot{M}_{\rm acc}$ and
$v_{\perp\ast}$.
The small spread arises because the fundamental stellar parameters
(e.g., $R_{\ast}$ and $\rho_{\ast}$) are different at the three
ages that correspond to the same value of $\dot{M}_{\rm acc}$ in
the three models.
Thus, it is not surprising that $\dot{M}_{\rm wind}$ is not a
``universal'' function of the wave amplitude either.
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f12.eps}
\caption{Ratio of mass loss rate to mass accretion rate for
measured T Tauri stars (symbols as in Fig.~3) and for the ZEPHYR
and analytic models (line styles as in Figs.~10 and 11).}
\end{figure}
Figure 12 plots the key ``wind efficiency'' ratio
$\dot{M}_{\rm wind}/\dot{M}_{\rm acc}$ as a function of age for
the three sets of ZEPHYR models and the analytic estimates derived
in {\S}~5.3.
Note that, in contrast to Figure 10, when plotted as a ratio,
the models on the young/cool side of the thermal bifurcation all
seem to collapse onto a single curve, while the older/hotter models
separate (based on differing accretion rates in the denominator).
For both the youngest ages ($t \lesssim 0.5$ Myr) and for the specific
case of TW Hya, the model values shown in Figure 12 seem to agree
reasonably well with the observationally determined ratios that
are also shown in Figure 3{\em{b}}.
The models having ages between 0.5 and 10 Myr clearly fall well below
the measured mass loss rates.
However, even the limited agreement with the data is somewhat
surprising, since these measured values come from the [\ion{O}{1}]
$\lambda$6300 forbidden line diagnostic that is widely believed
to sample the much larger-scale disk wind (Hartigan et al.\ 1995).
It is thus possible that stellar winds may contribute to
observational signatures that previously have been assumed to
probe only the (disk-related) bipolar jets.
\subsection{X-Ray Emission}
Many aspects of the dynamics and energetics of young stars and their
environments are revealed by high-energy measurements such as X-ray
emission (e.g., Feigelson \& Montmerle 1999).
It is thus worthwhile to determine the level of X-ray flux that the
modeled polar winds are expected to generate.
This has been done in an approximate way to produce order-of-magnitude
estimates, and should be followed up by more exact calculations in
future work.
The optically thin radiative loss rate $Q_{\rm rad}$ described in
{\S}~4.1 was used as a starting point to ``count up'' the total
number of photons generated by each radial grid zone in the ZEPHYR
models.
(Finite optical depth effects in $Q_{\rm rad}$ were ignored here
because they contribute mainly to temperatures too low to affect
X-ray fluxes.)
This rate depends on the plasma temperature $T$, the density $\rho$,
and the hydrogen ionization fraction $x$.
The total radiative loss rate is multiplied by a fraction $F$ that
gives only those photons that would be observable as X-rays.
This fraction is estimated for each radial grid zone as
\begin{equation}
F \, = \, \frac{\int d\lambda \,\, B_{\lambda}(T) \, S(\lambda)}
{\int d\lambda \,\, B_{\lambda}(T)}
\end{equation}
where $B_{\lambda}(T)$ is the Planck blackbody function and
$S(\lambda)$ is an X-ray sensitivity function, for which we use that
of the {\em ROSAT} Position Sensitive Proportional Counter (PSPC)
instrument, as specified by Judge et al.\ (2003).
The function $S(\lambda)$ is nonzero between about 0.1 and 2.4 keV,
with a minimum around 0.3 keV that separates the hard and soft bands.
The integration in the denominator is taken over all wavelengths.
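As an added numerical illustration (not from the original analysis), the fraction $F$ can be approximated by replacing $S(\lambda)$ with a crude top hat that is unity between 0.1 and 2.4 keV and zero elsewhere; the true PSPC response is of course not flat:

```python
import numpy as np

# F = int B_lambda(T) S(lambda) dlambda / int B_lambda(T) dlambda,
# with S(lambda) crudely approximated as a top hat over 0.1--2.4 keV.
h, c, kB = 6.626e-27, 2.998e10, 1.381e-16    # cgs
SIGMA_SB = 5.670e-5                          # Stefan-Boltzmann, cgs
KEV = 1.602e-9                               # erg per keV

def planck(lam, T):
    """Planck function B_lambda in erg s^-1 cm^-2 cm^-1 sr^-1."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def xray_fraction(T, e_lo=0.1, e_hi=2.4, n=20000):
    lam_lo = h * c / (e_hi * KEV)            # 2.4 keV -> shortest wavelength
    lam_hi = h * c / (e_lo * KEV)            # 0.1 keV -> longest wavelength
    lam = np.linspace(lam_lo, lam_hi, n)
    f = planck(lam, T)
    band = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))   # trapezoid rule
    total = SIGMA_SB * T**4 / np.pi          # analytic integral of B_lambda
    return band / total

F_half, F_two = xray_fraction(5.0e5), xray_fraction(2.0e6)
```

A blackbody at coronal temperatures radiates mostly within this soft X-ray band, so $F$ is already substantial at 0.5 MK and approaches unity by 2 MK.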
The use of the blackbody function, rather than a true optically
thin emissivity, was validated by comparison with wavelength-limited
X-ray radiative loss rates given by Raymond et al.\ (1976).
Fractions of emissivity (relative to the total loss rate) in specific
X-ray wavebands were computed for $T = 0.5,$ 1, 2, and 5 MK
and compared with the plots of Raymond et al.\ (1976).
The agreement between the models and the published curves was always
better than a factor of two.\footnote{%
Although this is obviously not accurate enough for quantitative
comparisons with specific observations, it allows the correct order of
magnitude of the X-ray emission to be estimated; see Figure 13.
Also, the factor-of-two validation should be taken in the context
of the factor of $\sim$1000 variation in these fractions over the
modeled coronal temperatures.}
\begin{figure}
\epsscale{1.05}
\plotone{cranmer_ttau_f13.eps}
\caption{Ratio of X-ray luminosity to total bolometric luminosity
for the standard run of ZEPHYR models ({\em solid line}), and for
observations of solar-type stars ({\em asterisks}), nearby clusters
({\em thin error bars}), and the Sun ({\em thick error bar}); see
text for details.}
\end{figure}
Figure 13 shows the simulated ratio of X-ray luminosity $L_{\rm X}$
to the bolometric luminosity $L_{\ast}$ for the modeled wind regions.
For each ZEPHYR model, the radiative losses were integrated over
an assumed spherical volume for the stellar wind (i.e., the same
assumption used to compute $\dot{M}_{\rm wind}$; see {\S}~5.1) to
produce $L_{\rm X}$. No absorption of X-rays was applied.
Figure 13 also shows a collection of observed X-ray luminosity
ratios for individual solar-type stars
(from G\"{u}del et al.\ 1998; Garc\'{\i}a-Alvarez et al.\ 2005)
and clusters of various ages (Flaccomio et al.\ 2003b;
Preibisch \& Feigelson 2005; Jeffries et al.\ 2006).
For the latter, the error bars indicate the $\pm 1 \sigma$ spread
about the mean values reported in these papers.
The range of values for the present-day Sun is taken from
Judge et al.\ (2003).
It is clear from Figure 13 that the modeled polar wind regions do
not produce anywhere near enough X-rays to explain the observations
of young stars.
For the present-day Sun, the computed value slightly underestimates
the lower limit of the observed range of X-ray luminosities.
This latter prediction does make some sense, since the X-ray emission
is essentially computed for the dark ``polar coronal holes.''
On the Sun, these never occupy more than about 20\% of the surface.
Most notably, though, there is no strong power-law decrease in
$L_{\rm X}/L_{\ast}$ for young ZAMS stars---just as there is no
power-law decrease in $\dot{M}_{\rm wind}$---because there has been
no attempt to model the rotation-age-activity relationship
(Skumanich 1972; Noyes et al.\ 1984).
The ZEPHYR models neglect the closed-field coronae of these stars
(both inside and outside the polar-cap regions) that are likely to
dominate the X-ray emission, like they do for the Sun
(e.g., Schwadron et al.\ 2006).
It is somewhat interesting that the ZEPHYR models undergo the
thermal bifurcation to extended chromospheres (and thus disappear
from Figure 13 because of the lack of X-ray emitting temperatures)
at about the same age where there seems to exist a slight decline
in the X-ray emission of young T Tauri stars.
Although there is some evidence for such a deficit of X-rays in
classical T Tauri stars, with respect to the older class of
weak-lined T Tauri stars (e.g., Flaccomio et al.\ 2003a,b;
Telleschi et al.\ 2007), the existence of a distinct peak in X-ray
emission at intermediate ages has not been proven rigorously.
If the polar wind regions indeed produce negligible X-ray emission
compared to the closed-field regions, we note that the polar cap area
$\delta_{\rm pol}$ grows largest for the youngest T Tauri stars
(see Fig.~4).
Thus, the cooler wind material may occupy a significantly larger
volume than the hotter (closed-loop) coronal plasma at the youngest
ages. This may be partly responsible for the observed X-ray trends.
\section{Discussion and Conclusions}
The primary aim of this paper has been to show how accretion-driven
waves on the surfaces of T Tauri stars may help contribute to the
strong rates of atmospheric heating and large mass loss rates
inferred for these stars.
The ZEPHYR code, which was originally developed to model the solar
corona and solar wind (Cranmer et al.\ 2007), has been applied to
the T Tauri stellar wind problem.
A key aspect of the models presented above is that the only true free
parameters are: (1) the properties of MHD waves injected at the
photospheric lower boundary, and (2) the background magnetic geometry.
Everything else (e.g., the radial dependence of the rates of
chromospheric and coronal heating, the temperature structure of the
atmosphere, and the wind speeds and mass fluxes) emerges naturally
from the modeling process.
For solar-mass T Tauri stars, time-steady stellar winds were found to
be supportable for all ages older than about 0.45 Myr, with accretion
rates less than $7 \times 10^{-8}$ $M_{\odot}$ yr$^{-1}$ driving
mass loss rates less than $4 \times 10^{-10}$ $M_{\odot}$ yr$^{-1}$.
Still younger T Tauri stars (i.e., ages between about 13 kyr and
0.45 Myr) may exhibit time-variable winds with mass loss rates that
extend upward by roughly two more orders of magnitude, to
$\sim 2 \times 10^{-8}$ $M_{\odot}$ yr$^{-1}$.
The transition between time-steady and variable winds occurs when the
critical point of the flow migrates far enough past the Alfv\'{e}n
point (at which the wind speed equals the Alfv\'{e}n speed) such
that the Alfv\'{e}n wave amplitude begins to decline rapidly with
increasing radius.
When this happens, the outward wave-pressure acceleration is quickly
``choked off;'' i.e., parcels of gas that make it past the critical
point cannot be accelerated to infinity, and stochastic collisions
between upflowing and downflowing parcels must begin to occur.
The maximum wind efficiency ratio
$\dot{M}_{\rm wind}/\dot{M}_{\rm acc}$ for the T Tauri models
computed here was approximately 1.4\%, computed for ages of order
0.1 Myr.
This is somewhat smaller than the values of order 10\% required by
Matt \& Pudritz (2005, 2007, 2008) to remove enough angular momentum
from the young solar system to match present-day conditions.
It is also well below the observational ratios derived by, e.g.,
Hartigan et al.\ (1995, 2004), which can reach up to 20\% for
T Tauri stars between 1 and 10 Myr old (see Fig.~12).
These higher ratios, though, may be the product of both stellar
winds and disk winds (possibly even dominated by the disk wind
component).
Additionally, it is possible that future observational analysis
will result in these empirical ratios being revised {\em upward}
with more accurate (lower) values of $\dot{M}_{\rm acc}$
(S.\ Edwards 2008, private communication).
The accretion-driven solutions for $\dot{M}_{\rm wind}$ depend
crucially on the properties of the waves in the polar regions.
It is important to note that the calculation of MHD
wave properties was based on several assumptions that should
be examined in more detail.
The relatively low MHD wave efficiency used in equation (\ref{eq:EA})
is an approximation based on the limiting case of waves being far
from the impact site.
A more realistic model would have to contain additional information
about both the nonlinearities of the waves themselves and the
vertical atmospheric structure through which the waves propagate.
It seems likely that a better treatment of the wave generation would
lead to larger wave energies at the poles.
On the other hand, the assumption that the waves do not damp
significantly between their generation point and the polar wind
regions may have led to an assumed wave energy that is too high.
It is unclear how strong the waves will be in a model that takes
account of all of the above effects.
In any case, the ZEPHYR results presented here are the first
self-consistent models of T Tauri stellar winds that produce
wind efficiency ratios that even get into the right ``ballpark''
of the angular momentum requirements.
Additional improvements in the models are needed to make further progress.
For example, the effects of stellar rotation should be included, both in
the explicit wind dynamics (e.g., ``magneto-centrifugal driving,'' as
recently applied by Holzwarth \& Jardine 2007) and in the modified
subsurface convective activity that is likely to affect photospheric
amplitudes of waves and turbulence.
Young and rapidly rotating stars are likely to have qualitatively
different convective motions than are evident in standard
(nonrotating, mixing-length) models.
It is uncertain, though, whether rapid rotation gives rise to larger
(K\"{a}pyl\"{a} et al.\ 2007; Brown et al. 2007; Ballot et al. 2007)
or smaller (Chan 2007) convection eddy velocities at the latitudes
of interest for T Tauri stellar winds.
In any case, rapid rotation can also increase the buoyancy of
subsurface magnetic flux elements, leading to a higher rate of flux
emergence (Holzwarth 2007).
Also, the lower gravities of T Tauri stars may give rise to a larger
fraction of the convective velocity reaching the surface as wave
energy (e.g., Renzini et al.\ 1977), or the convection may even
penetrate directly into the photosphere (Montalb\'{a}n et al.\ 2004).
Future work must involve not only increased physical realism for the
models, but also quantitative comparisons with observations.
The methodology outlined in this paper should be applied to a set
of real stars, rather than to the idealized evolutionary sequence
of representative parameters.
Measured stellar masses, radii, and $T_{\rm eff}$ values, as well as
accretion rates, magnetic field strengths, and hot spot filling factors
($\delta$), should be used as constraints on individual calculations
for the stellar wind properties.
It should also be possible to use measured three-dimensional magnetic
fields (e.g., Donati et al.\ 2007; Jardine et al.\ 2008) to more
accurately map out the patterns of accretion stream footpoints, wave
fluxes, and the flux tubes in which stellar winds are accelerated.
This work helps to accomplish the larger goal of understanding the
physics responsible for low-mass stellar outflows and the feedback
between accretion disks, winds, and stellar magnetic fields.
In addition, there are links to more interdisciplinary studies of
how stars affect objects in young solar systems.
For example, the coronal activity and wind of the young Sun is likely to
have created many of the observed abundance and radionuclide patterns in
early meteorites (Lee et al.\ 1998) and possibly affected the Earth's
atmospheric chemistry to the point of influencing the development of
life (see, e.g., Ribas et al.\ 2005; G\"{u}del 2007;
Cuntz et al.\ 2008).
The identification of key physical processes in young stellar winds
is important not only for understanding stellar and planetary evolution,
but also for being able to model and predict the present-day impacts of
solar variability and ``space weather'' (e.g.,
Feynman \& Gabriel 2000; Cole 2003).
\acknowledgments
I gratefully acknowledge Andrea Dupree, Adriaan van Ballegooijen,
Nancy Brickhouse, and Eugene Avrett for many valuable discussions.
This work was supported by the National Aeronautics and
Space Administration (NASA) under grant {NNG\-04\-GE77G}
to the Smithsonian Astrophysical Observatory.
\section{Introduction}
\label{sec:introduction}
This paper concerns connections attached to $(J^{2}=\pm 1)$-metric manifolds. A manifold will be said to have an $(\alpha ,\varepsilon )$-structure if $J$ is an almost complex ($\alpha =-1$) or almost product ($\alpha =1$) structure which is an isometry ($\varepsilon =1$) or an anti-isometry ($\varepsilon =-1$) with respect to $g$. Thus, there exist four kinds of $(\alpha ,\varepsilon )$-structures:
\[
J^{2} = \alpha Id, \quad g(JX,JY)= \varepsilon g(X,Y), \quad \forall X, Y \in \mathfrak{X}(M).
\]
As is well known, these four geometries have been intensively studied. The corresponding manifolds are known as:
\begin{enumerate}
\renewcommand*{\theenumi}{\roman{enumi})}
\renewcommand*{\labelenumi}{\theenumi}
\item Almost Hermitian manifolds if it has a $(-1,1)$-structure. We will consider throughout this paper the case of $g$ being a Riemannian metric. See e.g., \cite{gray-hervella}.
\item Almost anti-Hermitian or almost Norden manifolds if it has a $(-1,-1)$-structure. The metric $g$ is semi-Riemannian having signature $(n,n)$. See e.g., \cite{ganchev-borisov}.
\item Almost product Riemannian manifolds if it has a $(1,1)$-structure. We will consider throughout this paper the case of $g$ being a Riemannian metric, and the trace of $J$ vanishing, which in particular means these manifolds have even dimension. See e.g., \cite{naveira} and \cite{staikova}.
\item Almost para-Hermitian manifolds if it has a $(1,-1)$-structure. The metric $g$ is semi-Riemannian, having signature $(n,n)$. See e.g., \cite{gadea}.
\end{enumerate}
\noindent The above cited papers focused on the classification of manifolds belonging to the different kinds of geometries. Strong relations between these four geometries were established by us in \cite{debrecen}. Also, relations with
biparacomplex structures and 3-webs were shown, and we proved that any almost para-Hermitian structure admits an almost Hermitian structure with the same fundamental form.
If the structure $J$ is integrable, i.e., if the Nijenhuis tensor $N_{J}$ vanishes, the corresponding manifolds are called Hermitian, Norden, Product Riemannian and para-Hermitian (without the word ``almost''). Integrability means $M$ is a complex manifold in cases i) and ii), and that $M$ has two complementary foliations in cases iii) and iv).
As we have pointed out, in the cases where $\varepsilon = 1$ the metric could be non-Riemannian, as in \cite{barros-romero} for the almost complex case and in \cite{pripoae} for the almost product one. We will focus only on Riemannian metrics. Besides, in all the cases except the $(1,1)$ one, the trace of $J$ vanishes and the dimension of the manifold is even. We will impose in the case $(1,1)$ the condition $\mathrm{trace} \, J=0$ (of course, then $\mathrm{dim}\, M$ is also even) in order to have a unified treatment of these four geometries.
Observe that in the case $\alpha =1$ the tangent bundle of $M$
can be decomposed (assuming $\mathrm{trace} \, J=0$) as the Whitney sum of two equidimensional subbundles corresponding to the eigenspaces of $J$; i.e., $\mathrm{dim}\, T^+_p (M) = \mathrm{dim}\, T^-_p (M)$, for all $p \in M$,
where
\[
T_p^+(M)=\{ v \in T_p(M) \colon Jv=v\}, \quad T_p^-(M)=\{ v \in T_p(M) \colon Jv=-v\}.
\]
The manifold $M$ is said to have an almost paracomplex structure (see \cite{cruceanuetal}).
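The four defining conditions can be checked numerically on the canonical flat models on $\mathbb{R}^{2n}$. The following Python sketch is an illustrative aside, not part of the original development; the model matrices are our own standard choices. In matrix form, $g(JX,JY)=\varepsilon g(X,Y)$ reads $J^{T}GJ=\varepsilon G$.

```python
import numpy as np

n = 2
I, O = np.eye(n), np.zeros((n, n))

# Canonical flat models on R^{2n} (our illustrative choices):
J_complex = np.block([[O, -I], [I, O]])   # J^2 = -Id, trace 0
J_product = np.block([[I, O], [O, -I]])   # J^2 = +Id, trace 0
g_riem    = np.eye(2 * n)                 # Riemannian metric
g_neutral = np.block([[O, I], [I, O]])    # neutral metric, signature (n, n)

def alpha_epsilon(J, g):
    """Return (alpha, epsilon) if (J, g) is an (alpha, epsilon)-structure."""
    for a in (-1, 1):
        if np.allclose(J @ J, a * np.eye(2 * n)) and np.isclose(np.trace(J), 0):
            for e in (-1, 1):
                # g(JX, JY) = epsilon g(X, Y)  <=>  J^T g J = epsilon g
                if np.allclose(J.T @ g @ J, e * g):
                    return (a, e)
    return None

print(alpha_epsilon(J_complex, g_riem))     # (-1, 1):  almost Hermitian
print(alpha_epsilon(J_complex, g_neutral))  # (-1, -1): almost Norden
print(alpha_epsilon(J_product, g_riem))     # (1, 1):   almost product Riemannian
print(alpha_epsilon(J_product, g_neutral))  # (1, -1):  almost para-Hermitian
```

The same almost complex structure thus realizes $\varepsilon=1$ or $\varepsilon=-1$ depending on the metric paired with it, and likewise for the almost product structure.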
\bigskip
In many papers, natural or adapted connections for an $(\alpha ,\varepsilon )$-structure have been defined and studied. A \textit{natural} or \textit{adapted connection} is understood as a connection parallelizing both $J$ and $g$. Observe that the Levi Civita connection $\nabla^{\mathrm{g}}$ of $g$ is not a natural connection in the general case, although it is used to define the K\"{a}hler-type condition $\nabla^{\mathrm{g}} J =0$. On the other hand, as one can easily guess, there is no unique natural connection. In order to have a distinguished connection among all the natural ones, one must add some extra requirements. The first example of such a connection is what is nowadays called the Chern connection.
In 1946 Chern introduced $\nabla^{\mathrm{c}}$ as the unique linear connection in a Hermitian manifold $(M,J,g)$ such that $\nabla^{\mathrm{c}} J=0$, $\nabla^{\mathrm{c}} g=0$ and having torsion $\mathrm{T}^{\mathrm{c}}$ satisfying
\[
\mathrm{T}^{\mathrm{c}}(JX,JY)= -\mathrm{T}^{\mathrm{c}}(X,Y), \quad \forall X, Y \in \mathfrak{X}(M).
\]
\noindent This connection also works in the non-integrable case, i.e., if $(M,J,g)$ is an almost Hermitian manifold, as is shown, e.g., in \cite{gray}. Besides, the Chern connection has been taken as a model in some other geometries, but the possible ways to define connections depend on the values of $(\alpha ,\varepsilon)$.
\bigskip
In the present paper we will introduce a different approach to define distinguished natural connections. We will focus our attention on the $G$-structure defined by each $(\alpha ,\varepsilon )$-structure, showing that all of them admit such a connection. This connection $\nabla^{\mathrm{w}}$ is called \textit{the well adapted connection} and it is a functorial connection, see e.g., \cite{munoz2}. The well adapted connection is in some sense the most natural connection one can define on a manifold with a $G$-structure, because it measures the integrability of the $G$-structure: the structure is integrable if and only if the torsion and the curvature tensors of the well adapted connection vanish (see \cite[Theor.\ 2.3]{valdes}). Surprisingly, the well adapted connection of an $(\alpha ,\varepsilon)$-structure has not been deeply studied. In \cite{valdes} and in \cite{brassov} the well adapted connections of almost Hermitian (resp. almost para-Hermitian) manifolds were determined, but there are no results about the other two geometries. In this paper we want to fill that gap, obtaining expressions of the well adapted connection of an $(\alpha ,\varepsilon )$-structure in a way as unified and as independent of the values $(\alpha ,\varepsilon)$ as possible.
Thus, the main purposes of this paper are to prove the existence of the well adapted connection of any $(\alpha ,\varepsilon )$-structure, to obtain its expression as a derivation law, and to find the relationship between Chern-type connections and the well adapted connection. We will recover results obtained by several mathematicians, referred to throughout this paper as the ``Bulgarian School'' because most of them are from that country. They have looked for distinguished natural connections in the case of almost Norden and almost product Riemannian manifolds \textit{\`{a} la Chern}, i.e., imposing conditions on the torsion tensor field (see \cite{ganchev-mihova}, \cite{mihova}, \cite{staikova}, \cite{teofilova}). We will prove that their distinguished connections coincide with the well adapted connection in the corresponding geometry.
\bigskip
The organization of the paper is as follows:
In Section~\ref{sec:welladapted} we will recover the basic results about the well adapted connection of a $G$-structure, when it exists. We will recall a sufficient condition for its existence, which will be the key to proving the existence of the well adapted connection of an $(\alpha ,\varepsilon)$-structure.
In Section~\ref{sec:existenceofwelladapted} we will obtain the $G$-structure associated to $(J^{2}=\pm 1)$-metric manifolds. We will consider the group $G_{(\alpha,\varepsilon )}$ of the $G$-structure associated to an $(\alpha ,\varepsilon)$-structure $(J, g)$ and we will prove the existence of the well adapted connection in each of the four $(\alpha ,\varepsilon)$ geometries (Theorem \ref{teor:ae-functorial}). The well adapted connection is found as a principal connection $\Gamma ^{\mathrm{w}}$ on the principal $G_{(\alpha,\varepsilon )}$-bundle.
In Section~\ref{sec:expressionofthewelladapted} we will obtain the expression of its derivation law $\nabla^{\mathrm{w}}$ (Theorem \ref{teor:bienadaptada-ae-estructura}). We will prove that $\nabla^{\mathrm{w}}$ is the unique adapted connection satisfying
\[
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) = -\varepsilon (g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)),
\]
\noindent for all vector fields $X, Y, Z$ on $M$. Then, in Section~\ref{sec:particularizingthewelladaptedconnection}, we will specialize the expression in each of the four geometries, showing that the well adapted connections in almost Norden and almost product Riemannian manifolds coincide with connections previously introduced by the ``Bulgarian School'' with different techniques. Finally we will characterize the equality between the well adapted and the Levi Civita connection (Theorem \ref{Levi Civita2}): they coincide if and only if $M$ is a K\"{a}hler-type manifold.
In Section~\ref{sec:chernconnection} we will show that the classical Chern connection can be extended to the other geometry satisfying $\alpha \varepsilon = -1$, i.e., that of almost para-Hermitian manifolds, under the assumption (Theorem \ref{teor:chern-connection})
\[
\mathrm{T}^{\mathrm{c}}(JX,JY)= \alpha \mathrm{T}^{\mathrm{c}}(X,Y), \quad \forall X, Y \in \mathfrak{X}(M).
\]
\noindent Then the connection coincides with that introduced in \cite{etayo}. We will end the paper characterizing the coincidence between the Chern and the well adapted connection (Theorem \ref{teor:quasi}).
\bigskip
We will consider smooth manifolds and operators of class $C^{\infty }$. As in this introduction, $\mathfrak{X}(M)$ denotes the module of vector fields on a manifold $M$. Endomorphisms of the tangent bundle, and in particular almost complex and almost paracomplex structures, are tensor fields of type (1,1). All the manifolds in the paper have even dimension $2n$. The general linear group will be denoted as usual by $GL(2n,\mathbb{R})$. The identity (resp. null) square matrix of order $n$ will be denoted by $I_n$ (resp. $O_n$).
Some proofs in the paper may seem repetitive, because one must check four similar cases. Nevertheless, these proofs need to be done with maximum care. In Remark \ref{noChern} we will show that the Chern-type connection, which works well in the cases $\alpha \varepsilon =-1$, does not work in the case $\alpha \varepsilon =1$. This is a warning to be extremely careful.
\section{The well adapted connection of a $G$-structure}
\label{sec:welladapted}
As is well known, principal connections on the principal bundle of linear frames correspond to linear connections on the manifold, each expressed by a derivation law. We will try to avoid the general constructions on frame bundles, but we need them to show what the well adapted connection means. We assume the theory of $G$-structures and reducible connections is known \cite{KN}.
The key results in order to define the well adapted connection of a $G$-structure, when it exists, are the following ones:
\begin{teor}[{\cite[Theor.\ 1.1]{valdes}}]
\label{teor:metodo}
Let $G\subseteq GL(n,\mathbb{R})$ be a Lie subgroup and let $\mathfrak g$ be its Lie algebra. The following two assertions are equivalent:
\begin{enumerate}
\renewcommand*{\theenumi}{\roman{enumi})}
\renewcommand*{\labelenumi}{\theenumi}
\item For every $G$-structure $\pi \colon \mathcal S \to M$, there exists a unique linear connection
$\Gamma^{\mathrm{w}}$ reducible to the $G$-structure such that, for every endomorphism $S$ given by a section of the adjoint bundle of $\mathcal S$ and every vector field $X \in \mathfrak{X}(M)$ one has
\[ \mathrm{trace}\, (S \circ i_X \circ \mathrm{T}^{\mathrm{w}}) = 0, \]
where $\mathrm{T}^{\mathrm{w}}$ is the torsion tensor of the derivation law $\nabla ^{\mathrm{w}}$ of $\Gamma^{\mathrm{w}}$.
\item If $S \in \mathrm{Hom} (\mathbb{R}^{n} , \mathfrak g)$ satisfies
\[
i_v \circ \mathrm{Alt} (S) \in \mathfrak g^{\perp}, \quad \forall v \in \mathbb{R}^{n},
\]
then $S = 0$, where $\mathfrak g^{\perp}$ is the orthogonal subspace of $\mathfrak g$ in $\mathfrak{gl}(n,\mathbb{R})$
with respect to the Killing-Cartan metric and
\[ \mathrm{Alt} (S) (v,w) = S(v) w - S(w)v , \quad \forall v , w \in \mathbb{R}^{n}.\]
\end{enumerate}
The linear connection $\Gamma^{\mathrm{w}}$, if there exists, is called the well adapted connection to the $G$-structure $\pi \colon \mathcal S \to M$.
\end{teor}
This connection is a functorial connection in the sense of \cite{munoz2}. We do not develop the theory of functorial connections here, since we aim at a direct introduction to the well adapted connection of an $(\alpha, \varepsilon )$-structure. Papers \cite{munoz}, \cite{munoz2} and \cite{valdes} cover that theory. In Section~\ref{sec:expressionofthewelladapted} we will explain and use all the elements introduced in the above theorem.
\bigskip
The second result we need is the following one:
\begin{teor}[{\cite[Theor.\ 2.4]{munoz2}, \cite[Theor. 2.1]{valdes}}]
\label{teor:suficiente}
Let $G\subseteq GL(n,\mathbb{R})$ be a Lie group and let $\mathfrak g$ be its Lie algebra. If $\mathfrak g^{(1)} = 0$ and $\mathfrak g$ is invariant under matrix transpositions, then condition $ii)$ of Theorem \ref{teor:metodo} is satisfied, where $\mathfrak g^{(1)}=\{ S \in \mathrm{Hom} (\mathbb{R}^{n}, \mathfrak g) \colon S(v) w - S(w) v = 0, \forall v, w \in \mathbb{R}^{n} \}$ denotes the first prolongation of the Lie algebra $\mathfrak g$.
\end{teor}
The well adapted connection, if it exists, is a functorial connection and measures the integrability of the $G$-structure, as is proved in \cite[Theor.\ 2.3]{valdes}. Functoriality also means that it is preserved by direct image. We do not use this property in the paper.
\bigskip
These results allow us to check that the well adapted connection of any $(\alpha ,\varepsilon )$-structure exists. In fact, the following result will be useful:
\begin{prop}
\label{teor:ppp-heridataria}
Let $G\subseteq GL(n,\mathbb{R})$ be a Lie group and let $\mathfrak g$ be its Lie algebra, and assume $\mathfrak g^{(1)}=0$. If $H\subseteq G$ is a Lie subgroup, then $\mathfrak h^{(1)}=0$, where $\mathfrak h$ denotes the Lie algebra of $H$.
\end{prop}
{\bf Proof.} As $H\subseteq G$ one also has $\mathfrak h \subseteq \mathfrak g$. Remember that
\[ \mathfrak h^{(1)}=
\{ S \in \mathrm{Hom} (\mathbb{R}^{n}, \mathfrak h) \colon S(v) w - S(w) v = 0, \forall v, w \in \mathbb{R}^{n} \}.
\]
Let $S \in \mathfrak h^{(1)}$. As $\mathfrak h \subseteq \mathfrak g$, one has $S \in \mathrm{Hom} (\mathbb{R}^{n}, \mathfrak g)$, and by the definition of $\mathfrak h^{(1)}$ it satisfies
\[
S(v) w - S(w) v = 0, \quad \forall v, w \in \mathbb{R}^{n},
\]
thus $S \in \mathfrak g^{(1)}=0$, which means $\mathfrak h^{(1)}=0$. $\blacksquare$
\bigskip
In Section~\ref{sec:existenceofwelladapted} we will describe the $G$-structures corresponding to $(J^{2}=\pm 1)$-metric manifolds and we will prove that they fulfill the sufficient conditions of Theorem \ref{teor:suficiente}, so that the well adapted connection can be defined.
\section{Existence of the well adapted connection}
\label{sec:existenceofwelladapted}
We introduce the formal definition of $(\alpha, \varepsilon )$-structures.
\begin{defin}
Let $M$ be a manifold, $J$ a tensor field of type (1,1), $g$ a semi-Riemannian metric on $M$ and $\alpha ,\varepsilon \in \{-1,1\}$. Then $(J, g)$ is called an $(\alpha ,\varepsilon )$-structure on $M$ if
\[
J^{2} = \alpha Id, \quad \mathrm{trace}\, J=0, \quad g(JX,JY)= \varepsilon g(X,Y), \quad \forall X, Y \in \mathfrak{X}(M),
\]
\noindent $g$ being a Riemannian metric if $\varepsilon =1$. Then $(M,J,g)$ is called a $(J^{2}=\pm 1)$-metric manifold.
\end{defin}
As we have said in the introduction, then $M$ has even dimension $2n$ and it is an almost complex (resp. almost paracomplex) manifold if $\alpha =-1$ (resp. $\alpha =1$).
\bigskip
We want to determine the $G$-structure associated to each $(\alpha, \varepsilon )$-structure. We use the following notation:
\begin{enumerate}
\item ${\mathcal F} (M)\to M$ is the principal bundle of linear frames of $M$; it has Lie structure group $GL(2n,\mathbb{R})$.
\item ${\mathcal C}_{(\alpha ,\varepsilon )}\to M$ is the bundle of adapted frames. It is a subbundle of the frame bundle.
\item $G_{(\alpha ,\varepsilon )}$ is the structure group of ${\mathcal C}_{(\alpha ,\varepsilon )}\to M$, i.e., the Lie group of the corresponding $G$-structure.
\item $\mathfrak{g}_{(\alpha, \varepsilon )}$ is the Lie algebra of $G_{(\alpha, \varepsilon )}$.
\end{enumerate}
\bigskip
Bearing in mind the almost Hermitian case, which is well known, it is not difficult to find all the above elements. In fact, we have:
\begin{prop}[{\cite[Vol.\ II, Cap.\ IX]{KN}}]
Let $(J,g)$ be a $(-1,1)$-structure in a $2n$-dimensional manifold $M$. The corresponding $G_{(-1,1)}$-structure is given by
\[
{\mathcal C}_{(-1,1)} = \bigcup_{p \in M} \left\{ (X_1, \ldots, X_n, Y_1, \ldots , Y_n) \in {\mathcal F}_p (M) \colon
\begin{array}{l}
Y_i=JX_i, \forall i =1, \ldots n, \\
g(X_i, X_j) = g(Y_i, Y_j)=\delta_{ij},\\
g(X_i, Y_j)=0, \forall i, j =1, \ldots, n
\end{array}
\right\},
\]
\[
G_{(-1,1)}=\left\{ \left( \begin{array}{cc}
A&B\\
-B&A
\end{array}\right) \in GL (2n; \mathbb{R}) \colon A, B \in \mathfrak{gl} (n; \mathbb{R}), \begin{array}{l}
A^t A +B^t B = I_n\\
B^t A-A^tB = O_n
\end{array} \right\},
\]
and
\begin{equation}
\mathfrak{g}_{(-1,1)} = \left\{ \left( \begin{array}{cc}
A&B\\
-B&A
\end{array}\right) \in \mathfrak{gl} (2n; \mathbb{R}) \colon A, B \in \mathfrak{gl} (n; \mathbb{R}), \begin{array}{l}
A+A^t = O_n\\
B-B^t = O_n
\end{array} \right\}.
\label{eq:(-11)-algebra}
\end{equation}
\end{prop}
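The two block conditions defining $G_{(-1,1)}$ are exactly the unitarity conditions for the complex matrix $Q=A-\mathrm{i}B$. As an illustrative numerical aside (this code and its sample element are our own, not part of the paper), one can generate a random unitary matrix and check that its real block form satisfies the conditions above, is orthogonal, and commutes with the almost complex model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(Z)        # a random unitary matrix
A, B = Q.real, -Q.imag        # write Q = A - iB

O, I = np.zeros((n, n)), np.eye(n)
M = np.block([[A, B], [-B, A]])
J = np.block([[O, -I], [I, O]])

# Q unitary  <=>  the two block conditions defining G_{(-1,1)}:
assert np.allclose(A.T @ A + B.T @ B, I)
assert np.allclose(B.T @ A - A.T @ B, O)
# Consequently M is orthogonal and commutes with J:
assert np.allclose(M.T @ M, np.eye(2 * n))
assert np.allclose(M @ J, J @ M)
print("G_{(-1,1)} block conditions verified")
```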
\begin{prop}
Let $(J,g)$ be a $(-1,-1)$-structure in a $2n$-dimensional manifold $M$. The corresponding $G_{(-1,-1)}$-structure is given by
\[
{\mathcal C}_{(-1,-1)} = \bigcup_{p \in M} \left\{ (X_1, \ldots, X_n, Y_1, \ldots , Y_n) \in {\mathcal F}_p (M) \colon
\begin{array}{l}
Y_i=JX_i, \forall i =1, \ldots n, \\
g(X_i, X_j) = g(Y_i, Y_j)=0,\\
g(X_i, Y_j)=\delta_{ij}, \forall i, j =1, \ldots, n
\end{array}
\right\},
\]
\[
G_{(-1,-1)}=\left\{ \left( \begin{array}{cc}
A&B\\
-B&A
\end{array}\right)\in GL (2n; \mathbb{R}) \colon A, B \in \mathfrak{gl} (n; \mathbb{R}), \begin{array}{l}
A^t A -B^t B = I_n\\
A^t B + B^t A = O_n
\end{array} \right\}
\]
and
\begin{equation}
\mathfrak{g}_{(-1,-1)} = \left\{ \left( \begin{array}{cc}
A&B\\
-B&A
\end{array}\right) \in \mathfrak{gl} (2n; \mathbb{R}) \colon A, B \in \mathfrak{gl} (n; \mathbb{R}), \begin{array}{c}
A+A^t = O_n\\
B+B^t = O_n
\end{array} \right\}.
\label{eq:(-1-1)-algebra}
\end{equation}
\end{prop}
\begin{prop}[{\cite{naveira}}]
Let $(J, g)$ be a $(1,1)$-structure in a $2n$-dimensional manifold $M$. The corresponding $G_{(1,1)}$-structure is given by
\[
{\mathcal C}_{(1,1)} = \bigcup_{p \in M}
\left\{ (X_1, \ldots, X_n, Y_1, \ldots , Y_n) \in {\mathcal F}_p (M) \colon
\begin{array}{l}
X_i \in T_p^+(M), Y_i \in T^-_p(M), \forall i =1, \ldots n \\
g(X_i, X_j) = g(Y_i, Y_j)=\delta_{ij}\\
g(X_i, Y_j)=0, \forall i, j =1, \ldots , n
\end{array}
\right\},
\]
\[
G_{(1,1)}=\left\{ \left( \begin{array}{cc}
A&O_n\\
O_n&B
\end{array}\right) \in GL (2n; \mathbb{R}) \colon A, B \in O(n; \mathbb{R}) \right\},
\]
where $O(n; \mathbb{R})$ denotes the real orthogonal group of order $n$,
$O(n; \mathbb{R})=\{ N \in GL (n; \mathbb{R}) \colon N^t=N^{-1}\}$, and
\begin{equation}
\mathfrak{g}_{(1,1)}=\left\{ \left( \begin{array}{cc}
A&O_n\\
O_n&B
\end{array}\right) \in \mathfrak{gl} (2n; \mathbb{R}) \colon A, B \in \mathfrak{gl} (n; \mathbb{R}) \colon \begin{array}{l}
A +A^t = O_n\\
B + B^t = O_n
\end{array} \right\}.
\label{eq:(11)-algebra}
\end{equation}
\end{prop}
\begin{prop}[{\cite{cruceanuetal}, \cite{gadea}}]
Let $(J, g)$ be a $(1,-1)$-structure in a $2n$-dimensional manifold $M$. The corresponding $G_{(1,-1)}$-structure is given by
\[
{\mathcal C}_{(1,-1)} = \bigcup_{p \in M}
\left\{ (X_1, \ldots, X_n, Y_1, \ldots , Y_n) \in {\mathcal F}_p (M) \colon
\begin{array}{l}
X_i \in T_p^+(M), Y_i \in T^-_p(M), \forall i =1, \ldots n \\
g(X_i, X_j) = g(Y_i, Y_j)=0\\
g(X_i, Y_j)=\delta_{ij}, \forall i, j =1, \ldots , n
\end{array}
\right\},
\]
\[
G_{(1,-1)}=\left\{ \left( \begin{array}{cc}
A&O_n\\
O_n&(A^t)^{-1}
\end{array}\right) \in GL (2n; \mathbb{R}) \colon A \in GL (n; \mathbb{R}) \right\},
\]
and
\begin{equation}
\mathfrak{g}_{(1,-1)}=\left\{ \left( \begin{array}{cc}
A&O_n\\
O_n&-A^t
\end{array}\right) \in \mathfrak{gl} (2n; \mathbb{R}) \colon A \in \mathfrak{gl} (n; \mathbb{R}) \right\}.
\label{eq:(1-1)-algebra}
\end{equation}
\end{prop}
In the four cases, we have emphasized the Lie algebras, because they are the key elements to prove the existence of the well adapted connection. Now we are going to study the relationships among different structural groups we have found. The same relationships will have their Lie algebras. Let us remember the orthogonal and neutral orthogonal Lie groups and algebras:
\begin{eqnarray*}
O(2n; \mathbb{R})&=&\{N \in GL (2n; \mathbb{R}) \colon N^t = N^{-1}\},\\
\mathfrak{o} (2n; \mathbb{R}) &=& \{ N \in \mathfrak{gl} (2n; \mathbb{R}) \colon N+N^t=O_{2n}\},\\
O(n,n;\mathbb{R})&=&\left\{
\left(
\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)
\in GL (2n; \mathbb{R}) \colon A, B, C, D \in \mathfrak{gl} (n; \mathbb{R}),
\begin{array}{l}
C^tA+A^tC=O_n\\
C^tB+A^tD=I_n\\
D^tB+B^tD=O_n
\end{array}
\right\},\\
\mathfrak{o}(n,n;\mathbb{R})&=&\left\{
\left(
\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)
\in \mathfrak{gl} (2n; \mathbb{R}) \colon A, B, C, D \in \mathfrak{gl} (n; \mathbb{R}),
\begin{array}{l}
A+D^t=O_n\\
B+B^t=O_n\\
C+C^t=O_n
\end{array}
\right\}.
\end{eqnarray*}
The following result will be relevant in order to prove the existence of the well adapted connection to an $(\alpha ,\varepsilon )$-structure.
\begin{prop}
\label{teor:prolongacionesnulas}
Let $n \in \mathbb N$. The first prolongations of the Lie algebras of $O(2n;\mathbb{R})$ and $O(n,n;\mathbb{R})$ vanish; {\em i.e.},
\[
\mathfrak o(2n;\mathbb{R})^{(1)}=0, \quad \mathfrak o(n,n;\mathbb{R})^{(1)}=0.
\]
\end{prop}
{\bf Proof.} One can prove the result by a straightforward computation. Alternatively, one can deduce the result from some properties of functorial connections:
\begin{enumerate}
\item The well adapted connection is a functorial connection.
\item Manifolds endowed with a Riemannian or a neutral metric admit the well adapted connection, which is the Levi Civita connection (see \cite[Theor.\ 3.1 and Rem.\ 3.2]{munoz}).
\item If a manifold admits a functorial connection associated to a $G$-structure, then $\mathfrak g^{(1)}=0$ (see \cite[Theor.\ 2.1]{munoz2}).
\end{enumerate}
The result is proved. $\blacksquare$
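The vanishing of these first prolongations can also be checked by direct linear algebra for small $n$. The following Python sketch (a numerical aside with helper names of our own choosing, not part of the paper) computes $\dim \mathfrak g^{(1)}$ as the nullity of the linear system $S(e_i)e_j = S(e_j)e_i$:

```python
import itertools
import numpy as np

def first_prolongation_dim(basis):
    """dim g^(1) for g spanned by the given n x n matrices, where
    g^(1) = { S in Hom(R^n, g) : S(v)w = S(w)v for all v, w }."""
    n, m = basis[0].shape[0], len(basis)
    rows = []
    # Unknowns c[i][k] with S(e_i) = sum_k c[i][k] basis[k];
    # impose every component of S(e_i)e_j - S(e_j)e_i = 0 for i < j.
    for i, j in itertools.combinations(range(n), 2):
        for r in range(n):
            row = np.zeros(n * m)
            for k, B in enumerate(basis):
                row[i * m + k] += B[r, j]
                row[j * m + k] -= B[r, i]
            rows.append(row)
    return n * m - np.linalg.matrix_rank(np.array(rows))

def metric_algebra_basis(eta):
    """Basis of { N : N^T eta + eta N = 0 }, computed numerically."""
    n = eta.shape[0]
    cond = np.array([(M.reshape(n, n).T @ eta + eta @ M.reshape(n, n)).ravel()
                     for M in np.eye(n * n)]).T
    _, s, Vt = np.linalg.svd(cond)
    return [v.reshape(n, n) for v in Vt[np.sum(s > 1e-10):]]

eta_neutral = np.block([[np.zeros((2, 2)), np.eye(2)],
                        [np.eye(2), np.zeros((2, 2))]])
print(first_prolongation_dim(metric_algebra_basis(np.eye(4))))   # o(4):   0
print(first_prolongation_dim(metric_algebra_basis(eta_neutral))) # o(2,2): 0
```

Both nullities are zero, in agreement with the proposition for $n=2$.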
\bigskip
One can easily prove the following relationships:
\begin{prop}
\label{contenidos}
Assuming the above notations, one has the following subsets:
\begin{enumerate}
\item $G_{(-1,1)}\subseteq O(2n; \mathbb{R}), \quad G_{(1,1)}\subseteq O(2n;\mathbb{R}), \quad G_{(-1,-1)}\subseteq O(n,n;\mathbb{R}), \quad G_{(1,-1)} \subseteq O(n,n;\mathbb{R}).$
\item $\mathfrak{g}_{(-1,1)}\subseteq \mathfrak{o}(2n; \mathbb{R}), \quad \mathfrak{g}_{(1,1)}\subseteq \mathfrak{o}(2n;\mathbb{R}), \quad
\mathfrak{g}_{(-1,-1)}\subseteq \mathfrak{o}(n,n;\mathbb{R}), \quad \mathfrak{g}_{(1,-1)} \subseteq \mathfrak{o}(n,n;\mathbb{R}).$
\end{enumerate}
Besides, one has the following equalities:
\begin{eqnarray*}
G_{(-1,1)}= GL (n; \mathbb C ) \cap O(2n; \mathbb{R}), &\quad& G_{(1,1)}=\left( GL (n; \mathbb{R})\times GL (n; \mathbb{R})\right) \cap O(2n;\mathbb{R}),\\
G_{(-1,-1)}=GL (n; \mathbb C ) \cap O(n,n;\mathbb{R}),&\quad& G_{(1,-1)} = \left( GL (n; \mathbb{R})\times GL (n; \mathbb{R})\right) \cap O(n,n;\mathbb{R}).
\end{eqnarray*}
\end{prop}
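These inclusions can be illustrated numerically. The following Python sketch (an aside with sample elements of our own choosing, not part of the paper) checks that a generic element of $\mathfrak{g}_{(-1,1)}$ lies in $\mathfrak{o}(2n;\mathbb{R})$ and commutes with the almost complex model, and that a generic element of $\mathfrak{g}_{(1,-1)}$ lies in $\mathfrak{o}(n,n;\mathbb{R})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M0 = rng.standard_normal((n, n))
A, B = M0 - M0.T, M0 + M0.T          # A skew-symmetric, B symmetric
O, I = np.zeros((n, n)), np.eye(n)

J_c = np.block([[O, -I], [I, O]])    # almost complex model
eta = np.block([[O, I], [I, O]])     # neutral metric

# Element of g_{(-1,1)}: [[A, B], [-B, A]] with A + A^T = 0, B - B^T = 0
X = np.block([[A, B], [-B, A]])
assert np.allclose(X + X.T, 0)                # X in o(2n; R)
assert np.allclose(X @ J_c, J_c @ X)          # X commutes with J: X in gl(n; C)

# Element of g_{(1,-1)}: [[M0, 0], [0, -M0^T]] with M0 arbitrary
Y = np.block([[M0, O], [O, -M0.T]])
assert np.allclose(Y.T @ eta + eta @ Y, 0)    # Y in o(n, n; R)
print("inclusions verified")
```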
Taking into account the above results we can prove:
\begin{teor}
\label{teor:ae-functorial}
Let $M$ be a $2n$-dimensional manifold with an $(\alpha ,\varepsilon )$-structure. Then $M$ admits the well adapted connection.
\end{teor}
{\bf Proof.} Let $(\alpha ,\varepsilon ) \in \{(-1,1), (-1,-1), (1,1), (1,-1)\}$. The four Lie algebras $\mathfrak g_{(\alpha ,\varepsilon )}$ corresponding to the four Lie groups $G_{(\alpha ,\varepsilon)}$ are given by formulas (\ref{eq:(-11)-algebra}), (\ref{eq:(-1-1)-algebra}), (\ref{eq:(11)-algebra}) and (\ref{eq:(1-1)-algebra}). They are invariant under matrix transpositions, as one can easily check. Taking into account Proposition \ref{contenidos}, we obtain, by
Propositions \ref{teor:ppp-heridataria} and \ref{teor:prolongacionesnulas}, that $\mathfrak g_{(\alpha,\varepsilon)}^{(1)}=0$. Then, by Theorem \ref{teor:suficiente}, we conclude the result. $\blacksquare$
\bigskip
In Section~\ref{sec:expressionofthewelladapted} we will study this well adapted connection carefully. In order to pass from the bundles to the manifolds, we need the following result, similar to the one given in \cite[Vol. II, Prop. 4.7]{KN}.
\begin{prop}
\label{teor:aeconexiones}
Let $(M, J, g)$ be a $(J^2=\pm1)$-metric manifold and let $\pi \colon {\mathcal C}_{(\alpha ,\varepsilon )} \to M$ be the $G_{(\alpha ,\varepsilon )}$-structure on $M$ defined by $(J, g)$. Let $\Gamma$ be a linear connection on $M$ and let $\nabla$ be the corresponding derivation law. Then $\Gamma$ is a connection reducible to $\pi \colon {\mathcal C}_{(\alpha ,\varepsilon )} \to M$ if and only if $\nabla J = 0$ and $\nabla g = 0$.
\end{prop}
Thus, reducible connections correspond to natural or adapted connections. Among them, we have a distinguished one: the well adapted connection. In the next section we will study the well adapted connection as a derivation law.
\bigskip
\begin{obs}
The above results could suggest that every known $G$-structure admits a well adapted connection. This is not the case. For example, the $G$-structure determined by an almost complex or an almost paracomplex structure does not admit a well adapted connection. In both cases one can use an idea from the proof of Proposition \ref{teor:prolongacionesnulas}: if there exists a functorial connection associated to a $G$-structure, then $\mathfrak g^{(1)}=0$. The corresponding structure groups are $GL(n,\mathbb{C})$ and $GL(n,\mathbb{R})\times GL(n,\mathbb{R})$. The first prolongations of the corresponding Lie algebras do not vanish, thus proving that these $G$-structures do not admit a functorial connection; in particular, they do not admit the well adapted connection.
\end{obs}
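The failure of the prolongation condition in the remark can be seen explicitly already for $n=1$: viewing $\mathfrak{gl}(1,\mathbb{C})=\mathrm{span}\{I_2, J\}$ inside $\mathfrak{gl}(2,\mathbb{R})$, the symmetry condition $S(e_1)e_2=S(e_2)e_1$ leaves a two-parameter family of solutions. A Python sketch of this computation (our own illustrative aside, not part of the paper):

```python
import numpy as np

# Basis of gl(1, C) viewed inside gl(2, R): span{ Id, J }
Id = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
basis = [Id, J]

# Unknowns c[i][k]: S(e_i) = sum_k c[i][k] basis[k].
# One symmetry condition: S(e_0) e_1 = S(e_1) e_0 (two scalar equations).
rows = []
for r in range(2):
    row = np.zeros(4)
    for k, B in enumerate(basis):
        row[0 * 2 + k] += B[r, 1]   # contribution to S(e_0) e_1
        row[1 * 2 + k] -= B[r, 0]   # contribution to S(e_1) e_0
    rows.append(row)
dim_prolongation = 4 - np.linalg.matrix_rank(np.array(rows))
print(dim_prolongation)   # 2: nonzero, so no functorial connection exists
```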
\section{Expression of the well adapted connection}
\label{sec:expressionofthewelladapted}
Let $(M, J, g)$ be a $(J^{2}=\pm 1)$-metric manifold and let $\nabla^{\mathrm{w}}$ be the covariant operator or derivation law defined by the well adapted connection of the corresponding $G_{(\alpha ,\varepsilon )}$-structure. Taking into account Proposition \ref{teor:aeconexiones}, we know that $\nabla^{\mathrm{w}} J=0$ and $\nabla^{\mathrm{w}} g=0$. As $\nabla^{\mathrm{w}}$ is uniquely determined, there should exist a further condition which determines it among all the adapted connections. We are looking for this condition, so the natural way to proceed is to study the set of all the natural connections of a $(J^{2}=\pm 1)$-metric manifold.
\begin{defin}
Let $(M, J, g)$ be a $(J^2=\pm1)$-metric manifold. A covariant derivative or derivation law $\nabla^{\mathrm{a}}$ on $M$ is said to be natural or adapted to $(J, g)$ if
\[
\nabla^{\mathrm{a}} J=0, \quad \nabla^{\mathrm{a}} g=0;
\]
i.e., if it is the derivation law of a linear connection on $M$ reducible to the $G_{(\alpha ,\varepsilon )}$-structure on $M$ defined by $(J, g)$.
\end{defin}
\begin{defin}
\label{teor:tensorpotencial}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold, let $\nabla^{\mathrm{g}}$ be the derivation law of the Levi Civita connection of $g$ and let $\nabla^{\mathrm{a}}$ be a derivation law adapted to $(J,g)$. The potential tensor of $\nabla^{\mathrm{a}}$ is the tensor $S\in \mathcal T^1_2 (M)$ defined as
\[
S(X,Y)=\nabla^{\mathrm{a}}_X Y -\nabla^{\mathrm{g}}_X Y, \quad \forall X, Y \in \mathfrak{X} (M).
\]
\end{defin}
Then we can parametrize all the adapted connections to $(J,g)$:
\begin{lema}
\label{teor:natural}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold. The set of derivation laws adapted to $(J,g)$ is:
\[
\left\{ \nabla^{\mathrm{g}}+S \colon S \in \mathcal T^1_2 (M),
\begin{array}{l}
JS(X,Y)-S(X,JY)=(\nabla^{\mathrm{g}}_X J) Y, \\
g(S(X,Y),Z)+g(S(X,Z),Y)=0,
\end{array}
\quad
\forall X, Y, Z \in \mathfrak{X} (M) \right\}.
\]
\end{lema}
{\bf Proof.} Let $S$ be the potential tensor of $\nabla^{\mathrm{a}}$; then
\[
\nabla^{\mathrm{a}}_X Y = \nabla^{\mathrm{g}}_X Y + S(X,Y), \quad \forall X, Y \in \mathfrak{X} (M).
\]
Given $X, Y \in \mathfrak{X} (M)$, the condition $\nabla^{\mathrm{a}}_X JY = J \nabla^{\mathrm{a}}_X Y$ reads
\[
\nabla^{\mathrm{g}}_X JY + S(X,JY) = J \nabla^{\mathrm{g}}_X Y + J S(X,Y);
\]
hence the condition $\nabla^{\mathrm{a}} J=0$ is equivalent to the following identity
\[
JS(X,Y)-S(X,JY) =\nabla^{\mathrm{g}}_X JY - J\nabla^{\mathrm{g}}_X Y =(\nabla^{\mathrm{g}}_X J) Y, \quad \forall X, Y \in \mathfrak{X} (M).
\]
Given the vector fields $X, Y, Z$ on $M$, as $\nabla^{\mathrm{g}} g=0$ one has
\begin{eqnarray*}
(\nabla^{\mathrm{a}}_X g) (Y,Z) &=& (\nabla^{\mathrm{a}}_X g) (Y,Z)- (\nabla^{\mathrm{g}}_X g) (Y,Z)\\
&=& -g(\nabla^{\mathrm{a}}_X Y, Z)-g(\nabla^{\mathrm{a}}_X Z, Y)+ g(\nabla^{\mathrm{g}}_X Y, Z)+g(\nabla^{\mathrm{g}}_X Z, Y)\\
&=&-(g(S(X,Y),Z)+g(S(X,Z),Y)),
\end{eqnarray*}
thus proving that $\nabla^{\mathrm{a}}$ parallelizes the metric $g$ if and only if
\[
g(S(X,Y),Z)+g(S(X,Z),Y)=0, \quad \forall X, Y, Z \in \mathfrak{X} (M). \ \blacksquare
\]
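Although not part of the argument, the metric condition in Lemma \ref{teor:natural} admits a quick numerical sanity check: on $\mathbb{R}^n$ with a constant metric (so $\nabla^{\mathrm{g}}$ is the ordinary directional derivative), any tensor $S$ with $g(S(X,Y),Z)$ antisymmetric in $(Y,Z)$ makes $\nabla^{\mathrm{g}}+S$ parallelize $g$. The dimension, seed and construction of $S$ below are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                                     # dimension of the toy model
g = rng.normal(size=(n, n)); g = g @ g.T + n * np.eye(n)  # a fixed SPD metric

# Choose C[a, j, k] = g(S(e_a, e_j), e_k) antisymmetric in (j, k), as the lemma
# requires, and recover the components of S by raising the last index with g^{-1}.
C = rng.normal(size=(n, n, n))
C = C - C.transpose(0, 2, 1)                              # C[a, j, k] = -C[a, k, j]
S = np.einsum('mk,ajk->maj', np.linalg.inv(g), C)         # S[m, a, j] = S(e_a, e_j)^m

X, Y, Z = rng.normal(size=(3, n))
SXY = np.einsum('maj,a,j->m', S, X, Y)
SXZ = np.einsum('maj,a,j->m', S, X, Z)

# With constant g, (nabla^a_X g)(Y, Z) = -g(S(X,Y), Z) - g(S(X,Z), Y); it must vanish.
nabla_a_g = -(SXY @ g @ Z) - (SXZ @ g @ Y)
print(abs(nabla_a_g) < 1e-9)
```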
The fundamental result in this section is the following one:
\begin{teor}
\label{teor:bienadaptada-ae-estructura}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold. The derivation law $\nabla^{\mathrm{w}}$ of the well adapted connection $\Gamma ^{\mathrm{w}}$ is the unique derivation law satisfying $\nabla^{\mathrm{w}} J=0, \nabla^{\mathrm{w}} g=0$ and
\begin{equation}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) = -\varepsilon (g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)), \quad \forall X,Y, Z \in \mathfrak{X} (M).
\label{eq:welladapted}
\end{equation}
\end{teor}
The relations $\nabla^{\mathrm{w}} J=0$, $\nabla^{\mathrm{w}} g=0$ hold because of Proposition \ref{teor:aeconexiones}, $\Gamma ^{\mathrm{w}}$ being reducible to the corresponding $G_{(\alpha, \varepsilon)}$-structure. The hard part of the proof is formula (\ref{eq:welladapted}). The strategy we will follow is this: we must prove that condition $i)$ in Theorem \ref{teor:metodo} is equivalent to formula (\ref{eq:welladapted}), because that condition characterizes the well adapted connection. This can be done working with local adapted frames defined in local charts (which will be introduced in Definition \ref{teor:ae-baselocal}), but first we should show what a section of the adjoint bundle means in our context of $G_{(\alpha ,\varepsilon )}$-structures. This is our first auxiliary result.
\bigskip
Let $\pi \colon \mathcal C_{(\alpha,\varepsilon)}\to M$ be the bundle of adapted frames, and let $\mathrm{ad} \mathcal C_{(\alpha,\varepsilon)} =(\mathcal C_{(\alpha,\varepsilon)} \times \mathfrak g_{(\alpha,\varepsilon)})/G_{(\alpha,\varepsilon)}$ be the adjoint bundle. The structural group $G_{(\alpha,\varepsilon)}$ acts on $\mathcal C_{(\alpha,\varepsilon)} \times \mathfrak g_{(\alpha,\varepsilon)}$ as:
\[
(u_p, A) \cdot N = (u_p \cdot N, N^{-1} A N), \quad \forall p\in M, u_p \in {\mathcal C_{(\alpha,\varepsilon)}} _p, N \in G_{(\alpha,\varepsilon)}, A \in \mathfrak g_{(\alpha,\varepsilon)}.
\]
As $\mathfrak g_{(\alpha,\varepsilon)}\subseteq \mathfrak{gl} (n; \mathbb{R})$, one has
$\mathrm{ad} \mathcal C_{(\alpha,\varepsilon)} \subseteq \mathrm{End} (TM)$, where $\mathrm{End} (TM)$ denotes the set of endomorphisms of the tangent bundle of $M$. Then we have:
\begin{prop}
\label{teor:adjunto-ae-estructura}
Let $(M, J, g)$ be a $(J^2=\pm1)$-metric manifold of dimension $2n$. The sections of the adjoint bundle $\mathrm{ad} \mathcal C_{(\alpha,\varepsilon)}$ are the endomorphisms of the tangent bundle of $M$ satisfying the following two conditions:
\[
J\circ S = S \circ J, \quad g(SX, Y)=-g(X,SY), \quad \forall X, Y \in \mathfrak{X} (M).
\]
\end{prop}
{\bf Proof.} Observe that given $p\in M$, an element $S \in (\mathrm{ad} \mathcal C_{(\alpha,\varepsilon)})_p$ is an endomorphism $S\colon T_p (M)\to T_p(M)$ whose coordinate matrix belongs to the Lie algebra $\mathfrak g_{(\alpha,\varepsilon)}$ when expressed with respect to the adapted frame $u_p=(X_1, \ldots, X_n, Y_1, \ldots, Y_n) \in {\mathcal C_{(\alpha,\varepsilon)}}_p$. Then we will prove both implications working at a point $p\in M$.\medskip
$\Rightarrow)$ We will first consider the case $\alpha=1$, with the two subcases $\varepsilon =\pm 1$, and after that we will take $\alpha=-1$ with the corresponding subcases. The proof follows from a careful analysis of the four situations. The condition on $\alpha$ determines the commutativity of $J$ and $S$; the condition on $\varepsilon$ allows us to obtain the formula linking $g$ and $S$.
\medskip
Assuming $\alpha=1$ there exist $A, B \in \mathfrak{gl} (n; \mathbb{R})$ such that the endomorphism $S$ has the following matrix with respect to the adapted frame $u_p$
\[
\left(
\begin{array}{cc}
A&O_n\\
O_n &B
\end{array}
\right) \in \mathfrak{gl} (2n; \mathbb{R}),
\]
i.e.,
\[
SX_j = \sum_{i=1}^n a_{ij}X_i, \quad SY_j = \sum_{i=1}^n b_{ij} Y_i, \quad \forall j=1, \ldots , n,
\; {\rm where}\;
JX_i =X_i, \quad JY_i =-Y_i, \quad \forall i=1, \ldots, n.
\]
Then, for each $j =1, \ldots, n$, one has
\begin{eqnarray*}
JSX_j = J \left( \sum_{i=1}^n a_{ij}X_i\right) = \sum_{i=1}^n a_{ij}X_i, &\quad&
JSY_j = J \left( \sum_{i=1}^n b_{ij}Y_i\right) = - \sum_{i=1}^n b_{ij}Y_i,\\
SJX_j= SX_j = \sum_{i=1}^n a_{ij}X_i, &\quad&
SJY_j= -SY_j = -\sum_{i=1}^n b_{ij} Y_i,
\end{eqnarray*}
thus proving $(J\circ S) (u) = (S \circ J) (u)$ for all $u\in T_p(M)$.
\medskip
If $\varepsilon =1$ then $A+A^t=O_n$ and $B+B^t=O_n$, thus obtaining
\[
a_{ij}=-a_{ji}, \quad b_{ij}=-b_{ji}, \quad \forall i, j =1,\ldots , n.
\]
Moreover, the adapted frame satisfies
\[
g(X_i,X_j)=g(Y_i, Y_j)= \delta_{ij}, \quad g(X_i, Y_j)= 0, \quad \forall i, j =1, \ldots, n.
\]
For each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g\left(\sum_{k=1}^n a_{ki}X_k, X_j \right)= a_{ji}, &\quad & g(X_i, SX_j)= g\left(X_i, \sum_{k=1}^n a_{kj}X_k \right)= a_{ij}=-a_{ji},\\
g(SY_i, Y_j)= g\left(\sum_{k=1}^n b_{ki}Y_k, Y_j \right)= b_{ji}, &\quad& g(Y_i, SY_j)= g\left(Y_i, \sum_{k=1}^n b_{kj}Y_k \right)= b_{ij}=-b_{ji},\\
g(SX_i, Y_j)= g\left(\sum_{k=1}^n a_{ki}X_k, Y_j \right)= 0, & \quad & g(X_i, SY_j)= g\left(X_i, \sum_{k=1}^n b_{kj}Y_k \right)= 0,
\end{eqnarray*}
thus proving $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$.
\medskip
If $\varepsilon =-1$ then $B=-A^t$, thus obtaining
\[
b_{ij}=-a_{ji}, \quad \forall i, j =1,\ldots , n.
\]
Moreover, the adapted frame satisfies
\[
g(X_i,X_j)=g(Y_i, Y_j)= 0, \quad g(X_i, Y_j)= \delta_{ij}, \quad \forall i, j =1, \ldots, n.
\]
For each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g\left(\sum_{k=1}^n a_{ki}X_k, X_j \right)= 0, &\quad& g(X_i, SX_j)= g\left(X_i, \sum_{k=1}^n a_{kj}X_k \right)= 0,\\
g(SY_i, Y_j)= g\left(\sum_{k=1}^n b_{ki}Y_k, Y_j \right)= 0, &\quad& g(Y_i, SY_j)= g\left(Y_i, \sum_{k=1}^n b_{kj}Y_k \right)= 0,\\
g(SX_i, Y_j)= g\left(\sum_{k=1}^n a_{ki}X_k, Y_j \right)= a_{ji}, &\quad& g(X_i, SY_j)= g\left(X_i, \sum_{k=1}^n b_{kj}Y_k \right)= b_{ij}=-a_{ji},
\end{eqnarray*}
thus proving $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$.
\medskip
Assuming $\alpha=-1$ there exist $A, B \in \mathfrak{gl} (n; \mathbb{R})$ such that the endomorphism $S$ has the following matrix with respect to the adapted frame $u_p$
\[
\left(
\begin{array}{cc}
A&B\\
-B &A
\end{array}
\right) \in \mathfrak{gl} (2n; \mathbb{R}),
\]
i.e.,
\[
SX_j = \sum_{i=1}^n a_{ij}X_i -\sum_{i=1}^n b_{ij} Y_i, \quad SY_j = \sum_{i=1}^n b_{ij} X_i+ \sum_{i=1}^n a_{ij} Y_i, \quad \forall j=1, \ldots , n,
\]
where
\[
JX_i =Y_i, \quad JY_i =-X_i, \quad \forall i=1, \ldots, n.
\]
For each $j =1, \ldots, n$, one has
\begin{eqnarray*}
JSX_j &=& J \left( \sum_{i=1}^n a_{ij}X_i -\sum_{i=1}^n b_{ij} Y_i \right) = \sum_{i=1}^n b_{ij}X_i +\sum_{i=1}^n a_{ij} Y_i , \\
JSY_j &=& J \left( \sum_{i=1}^n b_{ij} X_i+ \sum_{i=1}^n a_{ij} Y_i\right) = -\sum_{i=1}^n a_{ij} X_i+ \sum_{i=1}^n b_{ij} Y_i,\\
SJX_j&=& SY_j = \sum_{i=1}^n b_{ij} X_i+ \sum_{i=1}^n a_{ij} Y_i, \\
SJY_j&=& -SX_j = -\sum_{i=1}^n a_{ij}X_i +\sum_{i=1}^n b_{ij} Y_i,
\end{eqnarray*}
thus proving $(J\circ S) (u) = (S \circ J) (u)$ for all $u\in T_p(M)$.
\medskip
If $\varepsilon =1$ then $A+A^t=O_n$ and $B-B^t=O_n$, thus obtaining
\[
a_{ij}=-a_{ji}, \quad b_{ij}=b_{ji}, \quad \forall i, j =1,\ldots , n.
\]
Moreover, the adapted frame satisfies
\[
g(X_i,X_j)=g(Y_i, Y_j)= \delta_{ij}, \quad g(X_i, Y_j)= 0, \quad \forall i, j =1, \ldots, n.
\]
For each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g\left(\sum_{k=1}^n a_{ki}X_k -\sum_{k=1}^n b_{ki} Y_k, X_j \right)= a_{ji},
&\quad &
g(X_i, SX_j)= g\left(X_i, \sum_{k=1}^n a_{kj}X_k -\sum_{k=1}^n b_{kj} Y_k \right)= a_{ij}=-a_{ji},\\
g(SY_i, Y_j)= g\left(\sum_{k=1}^n b_{ki}X_k +\sum_{k=1}^n a_{ki} Y_k, Y_j \right)= a_{ji},
&\quad&
g(Y_i, SY_j)= g\left(Y_i, \sum_{k=1}^n b_{kj} X_k+ \sum_{k=1}^n a_{kj} Y_k \right)= a_{ij}=-a_{ji},\\
g(SX_i, Y_j)= g\left(\sum_{k=1}^n a_{ki}X_k -\sum_{k=1}^n b_{ki} Y_k, Y_j \right)= -b_{ji},
&\quad&
g(X_i, SY_j)= g\left(X_i,\sum_{k=1}^n b_{kj} X_k+ \sum_{k=1}^n a_{kj} Y_k \right)= b_{ij}=b_{ji},
\end{eqnarray*}
thus proving $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$.
\medskip
If $\varepsilon =-1$ then $A+A^t=O_n$ and $B+B^t=O_n$, thus obtaining
\[
a_{ij}=-a_{ji}, \quad b_{ij}=-b_{ji}, \quad \forall i, j =1,\ldots , n.
\]
Moreover, the adapted frame satisfies
\[
g(X_i,X_j)=g(Y_i, Y_j)= 0, \quad g(X_i, Y_j)= \delta_{ij}, \quad \forall i, j =1, \ldots, n.
\]
For each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g\left(\sum_{k=1}^n a_{ki}X_k -\sum_{k=1}^n b_{ki} Y_k, X_j \right)= -b_{ji},
&\quad&
g(X_i, SX_j)= g\left(X_i, \sum_{k=1}^n a_{kj}X_k -\sum_{k=1}^n b_{kj} Y_k \right)= -b_{ij}=b_{ji},\\
g(SY_i, Y_j)= g\left(\sum_{k=1}^n b_{ki}X_k +\sum_{k=1}^n a_{ki} Y_k, Y_j \right)= b_{ji},
&\quad&
g(Y_i, SY_j)= g\left(Y_i, \sum_{k=1}^n b_{kj} X_k+ \sum_{k=1}^n a_{kj} Y_k \right)= b_{ij}=-b_{ji},\\
g(SX_i, Y_j)= g\left(\sum_{k=1}^n a_{ki}X_k -\sum_{k=1}^n b_{ki} Y_k, Y_j \right)= a_{ji},
&\quad&
g(X_i, SY_j)= g\left(X_i,\sum_{k=1}^n b_{kj} X_k+ \sum_{k=1}^n a_{kj} Y_k \right)= a_{ij}=-a_{ji},
\end{eqnarray*}
thus proving $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$.
\medskip
$\Leftarrow)$ Let $S$ be an endomorphism of the tangent bundle of $M$. Its matrix with respect to an adapted frame $u_{p}\in {\mathcal C_{(\alpha,\varepsilon)}}_p$ will be
\[
\left(
\begin{array}{cc}
A&B\\
C &D
\end{array}
\right) \in \mathfrak{gl} (2n; \mathbb{R}),
\quad {\rm i.e.,}\quad
SX_j=\sum_{i=1}^n a_{ij} X_i + \sum_{i=1}^n c_{ij} Y_i, \quad SY_j = \sum_{i=1}^n b_{ij} X_i + \sum_{i=1}^n d_{ij} Y_i, \quad \forall j=1, \ldots, n.
\]
We must prove that if $S$ satisfies the two relations with $J$ and $g$, then $S$ is a section of the adjoint bundle, which is equivalent to prove that the above matrix of $S$ belongs to the corresponding Lie algebra $\mathfrak g_{(\alpha,\varepsilon)}$. As in the other implication we begin assuming $\alpha=1$ with the two subcases $\varepsilon =\pm 1$ and after that we will take $\alpha =-1$ with the corresponding subcases.
\medskip
Let $\alpha=1$. Then $JX_i =X_i, JY_i =-Y_i, \forall i=1, \ldots, n$, thus obtaining
\begin{eqnarray*}
SJX_j= \sum_{i=1}^n a_{ij} X_i + \sum_{i=1}^n c_{ij} Y_i,
&\quad&
SJY_j=-\sum_{i=1}^n b_{ij} X_i - \sum_{i=1}^n d_{ij} Y_i, \\
JSX_j= \sum_{i=1}^n a_{ij} X_i - \sum_{i=1}^n c_{ij} Y_i,
&\quad&
JSY_j= \sum_{i=1}^n b_{ij} X_i - \sum_{i=1}^n d_{ij} Y_i.
\end{eqnarray*}
As $J\circ S = S \circ J$ one has $c_{ij}=0, d_{ij}=0, \forall i, j =1, \ldots, n$,
thus proving the matrix of $S$ has the following expression:
\[
\left(
\begin{array}{cc}
A&O_n\\
O_n &D
\end{array}
\right) \in \mathfrak{gl} (2n; \mathbb{R}),
\quad {\rm i.e.}, \quad
SX_j=\sum_{i=1}^n a_{ij} X_i, \quad SY_j = \sum_{i=1}^n d_{ij} Y_i, \quad \forall j=1, \ldots, n.
\]
Let $\varepsilon=1$. Thus $g(X_i,X_j)=g(Y_i, Y_j)= \delta_{ij}, g(X_i, Y_j)= 0, \forall i, j =1, \ldots, n$, and then for each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g \left( \sum_{k=1}^n a_{ki} X_k , X_j \right)=a_{ji},
&\quad&
g(X_i, SX_j)= g \left( X_i, \sum_{k=1}^n a_{kj} X_k \right)=a_{ij},\\
g(SY_i, Y_j)= g \left( \sum_{k=1}^n d_{ki} Y_k , Y_j \right)=d_{ji},
&\quad&
g(Y_i, SY_j)= g \left( Y_i, \sum_{k=1}^n d_{kj} Y_k \right)=d_{ij}.
\end{eqnarray*}
As $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$ one has
$a_{ij}=-a_{ji}, \quad d_{ij}=-d_{ji}, \quad \forall i, j=1, \ldots, n$, thus proving the matrix of $S$ belongs to
$\mathfrak g_{(1,1)}$ (see (\ref{eq:(11)-algebra})).
\medskip
Let $\varepsilon=-1$. Thus $g(X_i,X_j)=g(Y_i, Y_j)= 0, g(X_i, Y_j)= \delta_{ij}, \forall i, j =1, \ldots, n$, and then for each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, Y_j)= g \left( \sum_{k=1}^n a_{ki} X_k , Y_j \right)=a_{ji}, \quad
g(X_i, SY_j)= g \left( X_i, \sum_{k=1}^n d_{kj} Y_k \right)=d_{ij}.
\end{eqnarray*}
As $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$ one has
$a_{ij}=-d_{ji}, \quad \forall i, j=1, \ldots, n$, thus proving the matrix of $S$ belongs to $\mathfrak g_{(1,-1)}$ (see (\ref{eq:(1-1)-algebra})).
\medskip
Let $\alpha=-1$. Then $JX_i =Y_i, JY_i =-X_i, \forall i=1, \ldots, n$, thus obtaining
\begin{eqnarray*}
SJX_j= \sum_{i=1}^n b_{ij} X_i + \sum_{i=1}^n d_{ij} Y_i,
&\quad&
SJY_j=-\sum_{i=1}^n a_{ij} X_i - \sum_{i=1}^n c_{ij} Y_i,\\
JSX_j= -\sum_{i=1}^n c_{ij} X_i +\sum_{i=1}^n a_{ij} Y_i,
&\quad&
JSY_j=-\sum_{i=1}^n d_{ij} X_i + \sum_{i=1}^n b_{ij} Y_i.
\end{eqnarray*}
As $J\circ S = S \circ J$ one has $b_{ij}=-c_{ij}, d_{ij}=a_{ij}, \forall i, j =1, \ldots, n$, thus proving the matrix of $S$ has the following expression:
\[
\left(
\begin{array}{cc}
A&B\\
-B &A
\end{array}
\right) \in \mathfrak{gl} (2n; \mathbb{R}),
\quad {\rm i.e.,}\quad
SX_j=\sum_{i=1}^n a_{ij} X_i - \sum_{i=1}^n b_{ij} Y_i, \quad SY_j = \sum_{i=1}^n b_{ij} X_i + \sum_{i=1}^n a_{ij} Y_i, \quad \forall j=1, \ldots, n.
\]
Let $\varepsilon=1$. Thus
$g(X_i,X_j)=g(Y_i, Y_j)= \delta_{ij}, g(X_i, Y_j)= 0, \forall i, j =1, \ldots, n$, and then for each pair
$i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g \left( \sum_{k=1}^n a_{ki} X_k -\sum_{k=1}^n b_{ki} Y_k , X_j \right)=a_{ji},
&\quad&
g(X_i, SX_j)= g \left( X_i, \sum_{k=1}^n a_{kj} X_k - \sum_{k=1}^n b_{kj} Y_k \right)=a_{ij},
\\
g(SX_i, Y_j)= g \left( \sum_{k=1}^n a_{ki} X_k -\sum_{k=1}^n b_{ki} Y_k, Y_j \right)=-b_{ji},
&\quad&
g(X_i, SY_j)= g \left( X_i, \sum_{k=1}^n b_{kj} X_k + \sum_{k=1}^n a_{kj} Y_k \right)=b_{ij}.
\end{eqnarray*}
As $g(Su, v)=-g(u, Sv)$ for all $u, v \in T_p (M)$ one has $a_{ij}=-a_{ji}, b_{ij}=b_{ji}, \forall i, j=1, \ldots, n$, thus proving the matrix of $S$ belongs to $\mathfrak g_{(-1,1)}$ (see (\ref{eq:(-11)-algebra})).
\medskip
Finally, let $\varepsilon=-1$. Thus $g(X_i,X_j)=g(Y_i, Y_j)= 0, g(X_i, Y_j)= \delta_{ij}, \forall i, j =1, \ldots, n$, and then for each pair $i, j =1, \ldots , n$, one has
\begin{eqnarray*}
g(SX_i, X_j)= g \left( \sum_{k=1}^n a_{ki} X_k -\sum_{k=1}^n b_{ki} Y_k , X_j \right)=-b_{ji},
&\quad&
g(X_i, SX_j)= g \left( X_i, \sum_{k=1}^n a_{kj} X_k - \sum_{k=1}^n b_{kj} Y_k \right)=-b_{ij},
\\
g(SX_i, Y_j)= g \left( \sum_{k=1}^n a_{ki} X_k -\sum_{k=1}^n b_{ki} Y_k, Y_j \right)=a_{ji},
&\quad&
g(X_i, SY_j)= g \left( X_i, \sum_{k=1}^n b_{kj} X_k + \sum_{k=1}^n a_{kj} Y_k \right)=a_{ij}.
\end{eqnarray*}
As $g(Su, v)=-g(u, Sv)$, for all $u, v \in T_p (M)$ one has $
a_{ij}=-a_{ji}, b_{ij}=-b_{ji}, \forall i, j=1, \ldots, n$, thus proving the matrix of $S$ belongs to $\mathfrak g_{(-1,-1)}$ (see (\ref{eq:(-1-1)-algebra})). $\blacksquare$
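As a numerical illustration of Proposition \ref{teor:adjunto-ae-estructura} (a sanity check, not part of the argument), one can verify in the standard $(-1,1)$ model on $\mathbb{R}^{2n}$, with $J$ and $g$ written in a canonical adapted frame, that any matrix $\left(\begin{smallmatrix} A&B\\ -B&A\end{smallmatrix}\right)$ with $A$ skew-symmetric and $B$ symmetric commutes with $J$ and is $g$-skew. The dimension and seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)); A = A - A.T        # skew-symmetric block
B = rng.normal(size=(n, n)); B = B + B.T        # symmetric block

# Standard (-1,1) model on R^{2n}: J(X_j) = Y_j, J(Y_j) = -X_j, g = identity.
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])
S = np.block([[A, B], [-B, A]])                 # a matrix in the Lie algebra g_(-1,1)

commutes = np.allclose(J @ S, S @ J)            # J o S = S o J
g_skew = np.allclose(S.T, -S)                   # g(Su, v) = -g(u, Sv) with g = Id
print(commutes and g_skew)
```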
\begin{defin}
\label{teor:ae-baselocal}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold of dimension $2n$ and let $U\subseteq M$ be an open subset. A family $(X_1, \ldots, X_n, Y_1, \ldots, Y_n)$, $X_1, \ldots, X_n, Y_1, \ldots, Y_n \in \mathfrak{X} (U)$, is called an adapted local frame to the $G_{(\alpha,\varepsilon)}$-structure defined by $(J,g)$ in $U$ if $(X_1(p), \ldots, X_n(p), Y_1(p),\ldots, Y_n(p)) \in {\mathcal C_{(\alpha,\varepsilon)}}_p, \forall p \in U$.
Its dual local frame is the family $(\eta_1, \ldots, \eta_n, \omega_1, \ldots, \omega_n)$, $\eta_1, \ldots, \eta_n, \omega_1, \ldots, \omega_n \in \bigwedge^1 (U)$, satisfying
\[
\eta_i(X_j)= \omega_{i} (Y_j)=\delta_{ij}, \quad \eta_i (Y_j)=\omega_i(X_j)=0, \quad \forall i, j =1, \ldots, n.
\]
\end{defin}
In the case $\varepsilon=1$ the dual frame is given by
\[
\eta_i (X)= g(X_i,X), \quad \omega_i (X)= g(Y_i,X), \quad i=1, \ldots, n, \quad \forall X\in \mathfrak{X} (U),
\]
while in the case $\varepsilon=-1$ it is given by
\[
\eta_i (X)= g(Y_i,X), \quad \omega_i (X)= g(X_i,X), \quad i=1, \ldots, n,\quad \forall X\in \mathfrak{X} (U).
\]
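These formulas follow at once from the values of $g$ on an adapted frame. As a quick numerical check (the canonical frame on $\mathbb{R}^{2n}$ with the standard $\varepsilon=-1$ metric is an illustrative choice), the 1-forms $\eta_i = g(Y_i,\cdot)$ and $\omega_i = g(X_i,\cdot)$ do satisfy the duality relations of Definition \ref{teor:ae-baselocal}:

```python
import numpy as np

n = 3
# Standard epsilon = -1 model on R^{2n}: X_i = e_i, Y_i = e_{n+i}, and g pairs
# the X's with the Y's: g(X_i, Y_j) = delta_ij, g(X_i, X_j) = g(Y_i, Y_j) = 0.
g = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]])
X = [np.eye(2 * n)[:, i] for i in range(n)]
Y = [np.eye(2 * n)[:, n + i] for i in range(n)]

eta = [lambda v, i=i: Y[i] @ g @ v for i in range(n)]     # eta_i   = g(Y_i, .)
omega = [lambda v, i=i: X[i] @ g @ v for i in range(n)]   # omega_i = g(X_i, .)

ok = all(np.isclose(eta[i](X[j]), float(i == j))
         and np.isclose(omega[i](Y[j]), float(i == j))
         and np.isclose(eta[i](Y[j]), 0.0)
         and np.isclose(omega[i](X[j]), 0.0)
         for i in range(n) for j in range(n))
print(ok)
```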
\medskip
The following result allows us to obtain a local basis of sections of the adjoint bundle:
\begin{prop}
\label{teor:base-adjunto-ae-estructura}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold of dimension $2n$ and let $U\subseteq M$ be an open subset. Let $(X_1, \ldots, X_n, Y_1, \ldots, Y_n)$ be an adapted local frame to the $G_{(\alpha,\varepsilon)}$-structure defined by $(J,g)$ in $U$ and let $(\eta_1, \ldots, \eta_n, \omega_1, \ldots, \omega_n)$ be its dual local frame.
\begin{enumerate}
\renewcommand*{\theenumi}{\roman{enumi})}
\renewcommand*{\labelenumi}{\theenumi}
\item If $(J,g)$ is a $(-1,1)$-structure then
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b +\omega_b \otimes Y_a -\omega_a\otimes Y_b\\
S'_{ab}=\eta_b \otimes Y_a +\eta_a \otimes Y_b -\omega_b \otimes X_a -\omega_a\otimes X_b\\
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}
\]
is a local basis of sections in $U$ of the adjoint bundle $\mathrm{ad} \mathcal C_{(-1,1)} = (\mathcal C_{(-1,1)} \times \mathfrak g_{(-1,1)})/G_{(-1,1)}$.
\item If $(J,g)$ is a $(-1,-1)$-structure then
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b +\omega_b \otimes Y_a -\omega_a\otimes Y_b\\
S'_{ab}=\eta_b \otimes Y_a -\eta_a \otimes Y_b -\omega_b \otimes X_a +\omega_a\otimes X_b\\
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}
\]
is a local basis of sections in $U$ of the adjoint bundle $\mathrm{ad} \mathcal C_{(-1,-1)} = (\mathcal C_{(-1,-1)} \times \mathfrak g_{(-1,-1)})/G_{(-1,-1)}$.
\item If $(J,g)$ is a $(1,1)$-structure then
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b \\
S'_{ab}=\omega_b \otimes Y_a -\omega_a \otimes Y_b \\
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}
\]
is a local basis of sections in $U$ of the adjoint bundle $\mathrm{ad} \mathcal C_{(1,1)} = (\mathcal C_{(1,1)} \times \mathfrak g_{(1,1)})/G_{(1,1)}$.
\item If $(J,g)$ is a $(1,-1)$-structure then
\[
\left\{
S_{ab}=\eta_b \otimes X_a -\omega_a \otimes Y_b \colon a, b \in \{1, \ldots, n\}
\right\}
\]
is a local basis of sections in $U$ of the adjoint bundle
$\mathrm{ad} \mathcal C_{(1,-1)} = (\mathcal C_{(1,-1)} \times \mathfrak g_{(1,-1)})/G_{(1,-1)}$.
\end{enumerate}
\end{prop}
{\bf Proof.} Trivial, taking into account Proposition \ref{teor:adjunto-ae-estructura}. $\blacksquare$
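For instance (again only as a numerical sanity check, under the same canonical-frame assumption on $\mathbb{R}^{2n}$), the sections $S_{ab}$ and $S'_{ab}$ of case $i)$ satisfy the two conditions of Proposition \ref{teor:adjunto-ae-estructura} in the standard $(-1,1)$ model, where $\eta_i, \omega_i$ are the dual canonical covectors and $g$ is the identity:

```python
import numpy as np

n, a, b = 3, 0, 1                 # check one pair (a, b), 0-indexed for convenience
E = np.eye(2 * n)

def outer(i, j):
    # matrix of the rank-one endomorphism (j-th dual covector) tensor (i-th frame vector)
    return np.outer(E[:, i], E[:, j])

# S_ab and S'_ab from Proposition i), written in the canonical adapted frame:
# X_a = e_a, Y_a = e_{n+a}, eta_a = e_a^t, omega_a = e_{n+a}^t.
S = outer(a, b) - outer(b, a) + outer(n + a, n + b) - outer(n + b, n + a)
Sp = outer(n + a, b) + outer(n + b, a) - outer(a, n + b) - outer(b, n + a)

J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])
ok = all(np.allclose(J @ M, M @ J) and np.allclose(M.T, -M) for M in (S, Sp))
print(ok)
```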
\bigskip
Then, we can prove the main Theorem of this Section:
\bigskip
\textbf{Proof of Theorem \ref{teor:bienadaptada-ae-estructura}.}
We must prove that the derivation law $\nabla^{\mathrm{w}}$ is characterized by parallelizing $g$ and $J$ and by formula (\ref{eq:welladapted}). Theorem \ref{teor:metodo} shows that the well adapted connection is characterized as the unique natural connection satisfying $ \mathrm{trace}\, (S \circ i_X \circ \mathrm{T}^{\mathrm{w}}) = 0 $, for every section $S$ of the adjoint bundle, where $\mathrm{T}^{\mathrm{w}}$ is the torsion tensor of the derivation law $\nabla ^{\mathrm{w}}$. Sections of the adjoint bundle have been characterized in Proposition \ref{teor:adjunto-ae-estructura}, and local bases of sections of the adjoint bundle have been obtained in Proposition \ref{teor:base-adjunto-ae-estructura}. Combining all the above results we will be able to prove the Theorem.
As in the proof of Proposition \ref{teor:adjunto-ae-estructura}, we will distinguish four cases. We begin with $\alpha =-1$ and the two subcases $\varepsilon =\pm 1$ and after that we will study the case $\alpha =1$ and the corresponding two subcases. As our proof will be local, we assume that $(X_1, \ldots, X_n, Y_1, \ldots, Y_n)$ is an adapted local frame to the $G_{(\alpha,\varepsilon)}$-structure defined by $(J,g)$ in an open subset $U$ of $M$.
\medskip
Let $\alpha=-1$ and let us denote $V_1=<X_1, \ldots, X_n>$, $V_2=<Y_1,\ldots, Y_n > = <JX_1, \ldots, JX_n> = JV_1$. For any two vector fields $X, Z$ in $U$ there exist $X^1, X^2, Z^1, Z^2 \in V_1$ such that
\[
X= X^1+JX^2, \quad Z =Z^1+JZ^2, \quad JX = -X^2 +JX^1, \quad JZ = -Z^2+JZ^1,
\]
thus obtaining
\begin{eqnarray}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)
&=& g(\mathrm{T}^{\mathrm{w}}(X^1,Y),Z^1)+g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),Z^1) \nonumber\\
&+&g(\mathrm{T}^{\mathrm{w}}(X^1,Y),JZ^2)+g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),JZ^2) \nonumber\\
&-& g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),X^1) \nonumber\\
&-&g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^2)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),JX^2), \label{eq:functorial-11} \\
-\varepsilon(g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX))
&=& -\varepsilon \biggl( g(\mathrm{T}^{\mathrm{w}}(X^2,Y),Z^2)-g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^2) \nonumber\\
&-&g(\mathrm{T}^{\mathrm{w}}(X^2,Y),JZ^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1) \nonumber\\
&-& g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),X^2)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^2) \nonumber\\
&+&g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),JX^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1) \biggl). \label{eq:functorial-12}
\end{eqnarray}
If $(\alpha,\varepsilon)=(-1,1)$, taking into account Proposition \ref{teor:base-adjunto-ae-estructura} $i)$, a local basis of sections of $\mathrm{ad} \mathcal C_{(-1,1)}$ is
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b +\omega_b \otimes Y_a -\omega_a\otimes Y_b\\
S'_{ab}=\eta_b \otimes Y_a +\eta_a \otimes Y_b -\omega_b \otimes X_a -\omega_a\otimes X_b\\
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}.
\]
Then, by Theorem \ref{teor:metodo}, $\nabla^{\mathrm{w}}$ is the unique natural derivation law satisfying
\[
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \forall Y\in \mathfrak{X} (M),
\ {\rm for}\ {\rm all}\ a,b\ {\rm with}\ 1\leq a <b \leq n.\]
Taking $Y\in \mathfrak{X} (M)$ and $a,b$ with $1\leq a <b \leq n$ one has
\begin{eqnarray*}
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=& \eta_b (\mathrm{T}^{\mathrm{w}}(Y, X_a))-\eta_a(\mathrm{T}^{\mathrm{w}}(Y,X_b))+\omega_b (\mathrm{T}^{\mathrm{w}}(Y,Y_a))-\omega_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))\\
&=& g(\mathrm{T}^{\mathrm{w}}(Y, X_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),Y_a),\\
\mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S'_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S'_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=&- \omega_b (\mathrm{T}^{\mathrm{w}}(Y, X_a))-\omega_a(\mathrm{T}^{\mathrm{w}}(Y,X_b))+\eta_b (\mathrm{T}^{\mathrm{w}}(Y,Y_a))+\eta_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))\\
&=& - g (\mathrm{T}^{\mathrm{w}}(Y, X_a), Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),Y_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),X_b)+g(\mathrm{T}^{\mathrm{w}}(Y,Y_b), X_a).
\end{eqnarray*}
From the above conditions one deduces:
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(Y, X_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),Y_a) &=&0,\\
- g (\mathrm{T}^{\mathrm{w}}(Y, X_a), Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),Y_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),X_b)+g(\mathrm{T}^{\mathrm{w}}(Y,Y_b), X_a)&=&0,
\end{eqnarray*}
for all $a,b$ with $ 1\leq a < b\leq n$. Given $X^1, Z^1 \in V_1$ one has
\begin{eqnarray}
g(\mathrm{T}^{\mathrm{w}}(X^1, Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)+g (\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1) &=&0, \label{eq:functorial-1+11}\\
-g(\mathrm{T}^{\mathrm{w}}(X^1, Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^1)+g (\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^1)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^1) &=&0. \label{eq:functorial-1+12}
\end{eqnarray}
And given $X^1, X^2, Z^1, Z^2 \in V_1$, from equation (\ref{eq:functorial-1+11}), one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)&=& -g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1),\\
g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),JZ^2)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),JX^2)&=& -g(\mathrm{T}^{\mathrm{w}}(X^2,Y),Z^2)+g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),X^2),
\end{eqnarray*}
while from equation (\ref{eq:functorial-1+12}) one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),JZ^2)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),X^1)&=& -g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),JX^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^2),\\
g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^2)&=& -g(\mathrm{T}^{\mathrm{w}}(X^2,Y),JZ^1)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^2).
\end{eqnarray*}
These last equalities combined with (\ref{eq:functorial-11}) and (\ref{eq:functorial-12}) give the expected relation
\[
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)= -(g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)), \quad \forall X, Y, Z \in \mathfrak{X} (M),
\]
which is formula (\ref{eq:welladapted}) in the case $\varepsilon=1$.
Observe that the last equation, in the case $X=X^1$ and $Z=Z^1$, reads as
\[
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1)=0,
\]
i.e., it coincides with formula (\ref{eq:functorial-1+11}), while in the case $X=-X^1$ and $Z=JZ^1$ it reads as
\[
-g(\mathrm{T}^{\mathrm{w}}(X^1,Y),JZ^1)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^1)=0
\]
and thus coincides with formula (\ref{eq:functorial-1+12}).
\medskip
If $(\alpha,\varepsilon)=(-1,-1)$, taking into account Proposition \ref{teor:base-adjunto-ae-estructura} $ii)$, a local basis of sections of $\mathrm{ad} \mathcal C_{(-1,-1)}$ is
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b +\omega_b \otimes Y_a -\omega_a\otimes Y_b\\
S'_{ab}=\eta_b \otimes Y_a -\eta_a \otimes Y_b -\omega_b \otimes X_a +\omega_a\otimes X_b\\
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}
\]
Then, by Theorem \ref{teor:metodo}, $\nabla^{\mathrm{w}}$ is the unique natural derivation law satisfying
\[
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \forall Y\in \mathfrak{X} (M),
\ {\rm for}\ {\rm all}\ a,b\ {\rm with}\ 1\leq a <b \leq n.\]
Taking $Y\in \mathfrak{X} (M)$ and $a,b$ with $1\leq a <b \leq n$ one has
\begin{eqnarray*}
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=& \eta_b (\mathrm{T}^{\mathrm{w}}(Y, X_a))-\eta_a(\mathrm{T}^{\mathrm{w}}(Y,X_b))+\omega_b (\mathrm{T}^{\mathrm{w}}(Y,Y_a))-\omega_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))\\
&=& g(\mathrm{T}^{\mathrm{w}}(Y, X_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),Y_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),X_a),
\\
\mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S'_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S'_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=&- \omega_b (\mathrm{T}^{\mathrm{w}}(Y, X_a))+\omega_a(\mathrm{T}^{\mathrm{w}}(Y,X_b))+\eta_b (\mathrm{T}^{\mathrm{w}}(Y,Y_a))-\eta_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))\\
&=& - g (\mathrm{T}^{\mathrm{w}}(Y, X_a), X_b)+g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b), Y_a).
\end{eqnarray*}
From the above conditions one deduces:
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(Y, X_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),Y_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),X_a) &=&0,\\
-g (\mathrm{T}^{\mathrm{w}}(Y, X_a), X_b)+g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a)+g (\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b), Y_a)&=&0,
\end{eqnarray*}
for all $a,b$ with $1\leq a < b\leq n$. Given $X^1, Z^1 \in V_1$ one has
\begin{eqnarray}
g(\mathrm{T}^{\mathrm{w}}(X^1, Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^1)+g (\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^1) &=&0, \label{eq:functorial-1+13}\\
-g(\mathrm{T}^{\mathrm{w}}(X^1, Y),Z^1)+g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)+g (\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1) &=&0. \label{eq:functorial-1+14}
\end{eqnarray}
And given $X^1, X^2, Z^1, Z^2 \in V_1$, from equation (\ref{eq:functorial-1+13}), one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),JZ^2)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),X^1)&=& g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),JX^1)-g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^2),\\
g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^2)&=& -g(\mathrm{T}^{\mathrm{w}}(X^2,Y),JZ^1)+g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^2),
\end{eqnarray*}
while from equation (\ref{eq:functorial-1+14}) one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)&=& g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1),\\
g(\mathrm{T}^{\mathrm{w}}(JX^2,Y),JZ^2)-g(\mathrm{T}^{\mathrm{w}}(JZ^2,Y),JX^2)&=& g(\mathrm{T}^{\mathrm{w}}(X^2,Y),Z^2)-g(\mathrm{T}^{\mathrm{w}}(Z^2,Y),X^2).
\end{eqnarray*}
These last equalities combined with (\ref{eq:functorial-11}) and (\ref{eq:functorial-12}) give the expected relation
\[
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)= g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX), \quad \forall X, Y, Z \in \mathfrak{X} (M),
\]
which is formula (\ref{eq:welladapted}) in the case $\varepsilon=-1$.
Observe that the last equation in the case $X=X^1$ and $Z=JZ^1$ reads as
\[
g(\mathrm{T}^{\mathrm{w}}(X^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),X^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),Z^1)-g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),JX^1)=0,
\]
i.e., it coincides with formula (\ref{eq:functorial-1+13}), while in the case $X=-X^1$ and $Z=Z^1$ it reads as
\[
-g(\mathrm{T}^{\mathrm{w}}(X^1,Y),Z^1)+g(\mathrm{T}^{\mathrm{w}}(Z^1,Y),X^1)+g(\mathrm{T}^{\mathrm{w}}(JX^1,Y),JZ^1)-g(\mathrm{T}^{\mathrm{w}}(JZ^1,Y),JX^1)=0
\]
and thus coincides with formula (\ref{eq:functorial-1+14}).
\medskip
Let $\alpha=1$. Let us denote $T^+_J(U)=<X_1, \ldots, X_n>$, $T^-_J(U)=<Y_1,\ldots, Y_n >=<JX_1, \ldots, JX_n>$, and
\[
J^+=\frac{1}{2} (Id+J), \quad J^-=\frac{1}{2}(Id-J).
\]
Given two vector fields $X, Z$ in $U$ one has
\[
X=J^+X + J^-X, \quad Z=J^+Z+J^-Z, \quad JX = J^+X-J^-X, \quad JZ =J^+Z-J^-Z,
\]
thus obtaining
\begin{eqnarray}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)&=& g(\mathrm{T}^{\mathrm{w}}( J^+X + J^-X,Y),J^+Z + J^-Z)\nonumber \\
&-&g(\mathrm{T}^{\mathrm{w}}(J^+Z + J^-Z,Y),J^+X + J^-X) \nonumber\\
&=& g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^+Z) \nonumber\\
&+&g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)+g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z) \nonumber\\
&-& g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X) \nonumber\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^-X)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X), \label{eq:functorial+11}
\end{eqnarray}
\begin{eqnarray}
-\varepsilon(g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX))&=& -\varepsilon\biggl(g(\mathrm{T}^{\mathrm{w}}( J^+X - J^-X,Y),J^+Z - J^-Z)\nonumber \\
&-& g(\mathrm{T}^{\mathrm{w}}(J^+Z - J^-Z,Y),J^+X - J^-X) \biggl)\nonumber\\
&=& -\varepsilon \biggl( g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)-g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^+Z) \nonumber\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)+g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z) \nonumber\\
&-& g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X)+g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X) \nonumber\\
&+&g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^-X)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X) \biggl). \label{eq:functorial+12}
\end{eqnarray}
If $(\alpha,\varepsilon)=(1,1)$ taking into account Proposition \ref{teor:base-adjunto-ae-estructura} $iii)$, a local basis of sections of $\mathrm{ad} \mathcal C_{(1,1)}$ is
\[
\left\{
\begin{array}{c}
S_{ab}=\eta_b \otimes X_a -\eta_a \otimes X_b \\
S'_{ab}=\omega_b \otimes Y_a -\omega_a \otimes Y_b
\end{array} \colon a, b \in \{1, \ldots, n\}, 1 \leq a <b \leq n
\right\}
\]
Then, by Theorem \ref{teor:metodo} $\nabla^{\mathrm{w}}$ is the unique natural derivation law satisfying
\[
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \forall Y\in \mathfrak{X} (M),
\ {\rm for}\ {\rm all}\ a,b\ {\rm with}\ 1\leq a <b \leq n.\]
Taking $Y\in \mathfrak{X} (M)$ and $a,b$ with $1\leq a <b \leq n$ one has
\begin{eqnarray*}
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=& \eta_b(\mathrm{T}^{\mathrm{w}}(Y,X_a))-\eta_a(\mathrm{T}^{\mathrm{w}}(Y,X_b))= g(\mathrm{T}^{\mathrm{w}}(Y,X_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a),
\\
\mathrm{trace}\, (S'_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S'_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S'_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=& \omega_b(\mathrm{T}^{\mathrm{w}}(Y,Y_a))-\omega_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))= g(\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g (\mathrm{T}^{\mathrm{w}}(Y,Y_b),Y_a).
\end{eqnarray*}
From the above conditions one deduces
\[
g(\mathrm{T}^{\mathrm{w}}(Y,X_a),X_b)-g(\mathrm{T}^{\mathrm{w}}(Y,X_b),X_a) =0, \quad
g(\mathrm{T}^{\mathrm{w}}(Y,Y_a),Y_b)-g (\mathrm{T}^{\mathrm{w}}(Y,Y_b),Y_a)=0,
\]
for all $a,b$ with $1\leq a < b\leq n$. Given vector fields $X, Z$ in $U$ one has
\begin{eqnarray}
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)-g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X) &=&0, \label{eq:functorial+1+11}\\
g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X)&=&0. \label{eq:functorial+1+12}
\end{eqnarray}
By applying (\ref{eq:functorial+1+11}) and (\ref{eq:functorial+1+12}) to (\ref{eq:functorial+11}) one deduces
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) &=& g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X) -g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^-X),
\end{eqnarray*}
while applying (\ref{eq:functorial+1+11}) and (\ref{eq:functorial+1+12}) to (\ref{eq:functorial+12}) one obtains
\begin{eqnarray*}
-(g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX))&=& g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X) -g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^-X),
\end{eqnarray*}
thus proving
\[
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)= -(g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)), \quad \forall X, Y, Z \in \mathfrak{X} (M).
\]
Observe that in the above equation, when $(X,Z)$ is replaced by $(J^+X,J^+Z)$, one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)-g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X)&=&-g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X),\\
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z) &=& g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X),
\end{eqnarray*}
i.e., one obtains formula (\ref{eq:functorial+1+11}), while replacing $(X,Z)$ by $(J^-X,J^-Z)$ one has
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X)&=& - g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)+g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X),\\
g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)&=& g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X),
\end{eqnarray*}
which is formula (\ref{eq:functorial+1+12}).
\medskip
If $(\alpha,\varepsilon)=(1,-1)$ taking into account Proposition \ref{teor:base-adjunto-ae-estructura} $iv)$, a local basis of sections of $\mathrm{ad} \mathcal C_{(1,-1)}$ is
\[
\left\{
S_{ab}=\eta_b \otimes X_a -\omega_a \otimes Y_b \colon a, b \in \{1, \ldots, n\}
\right\}.
\]
Then, by Theorem \ref{teor:metodo} $\nabla^{\mathrm{w}}$ is the unique natural derivation law satisfying
\[
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) = 0, \quad \forall Y\in \mathfrak{X} (M),
\ {\rm for}\ {\rm all}\ a,b\ {\rm with}\ 1\leq a,b \leq n.\]
Taking $Y\in \mathfrak{X} (M)$ and $a,b$ with $1\leq a,b \leq n$ one has
\begin{eqnarray*}
\mathrm{trace}\, (S_{ab} \circ i_Y \circ \mathrm{T}^{\mathrm{w}}) &=& \sum_{i=1}^n \eta_i (S_{ab} (\mathrm{T}^{\mathrm{w}}(Y,X_i)))+ \sum_{i=1}^n \omega_i (S_{ab}(\mathrm{T}^{\mathrm{w}}(Y,Y_i)))\\
&=& \eta_b(\mathrm{T}^{\mathrm{w}}(Y,X_a))-\omega_a(\mathrm{T}^{\mathrm{w}}(Y,Y_b))=g(\mathrm{T}^{\mathrm{w}}(Y,X_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),X_a),
\end{eqnarray*}
and then
\[
g(\mathrm{T}^{\mathrm{w}}(Y,X_a),Y_b)-g(\mathrm{T}^{\mathrm{w}}(Y,Y_b),X_a)=0, \quad \forall a, b=1, \ldots, n.
\]
Consequently, given two vector fields $X, Z$ in $U$ one has
\begin{equation}
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X) =0.
\label{eq:functorial+1+13}
\end{equation}
By applying (\ref{eq:functorial+1+13}) to (\ref{eq:functorial+11}) one deduces
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) &=& g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X) -g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X),
\end{eqnarray*}
while applying (\ref{eq:functorial+1+13}) to (\ref{eq:functorial+12}) one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)&=& g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^+Z)+g(\mathrm{T}^{\mathrm{w}}(J^-X,Y),J^-Z)\\
&-&g(\mathrm{T}^{\mathrm{w}}(J^+Z,Y),J^+X) -g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^-X),
\end{eqnarray*}
thus proving
\[
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)= g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX), \quad \forall X, Y, Z \in \mathfrak{X} (M).
\]
Observe that in the above equation, when $(X,Z)$ is replaced by $(J^+X,J^-Z)$, one obtains
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)-g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X)&=&-g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)+g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X),\\
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z) &=& g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X),
\end{eqnarray*}
i.e., one obtains formula (\ref{eq:functorial+1+13}). $\blacksquare$
\section{Particularizing the well adapted connection}
\label{sec:particularizingthewelladaptedconnection}
The expression (\ref{eq:welladapted}) in Theorem \ref{teor:bienadaptada-ae-estructura} is common to the four well adapted connections corresponding to the four classes of $(J^2=\pm1)$-metric manifolds. We will study them carefully in order to recover connections first introduced in the literature under other names.
\bigskip
\textbf{ Almost para-Hermitian manifolds or $(1,-1)$-structures.}
If $(J,g)$ is a $(1,-1)$-structure, then the expression (\ref{eq:functorial+1+13})
\[
g(\mathrm{T}^{\mathrm{w}}(J^+X,Y),J^-Z)=g(\mathrm{T}^{\mathrm{w}}(J^-Z,Y),J^+X), \quad \forall X, Y, Z \in \mathfrak{X} (M),
\]
characterizes $\nabla^{\mathrm{w}}$. This equation and the conditions $\nabla^{\mathrm{w}} J=0$ and $\nabla^{\mathrm{w}} g=0$ are the characterization of the well adapted connection that we previously obtained in \cite[Theor.\ 3.8]{brassov}.
\bigskip
\textbf{ Almost Hermitian manifolds or $(-1,1)$-structures.}
In this case, $\varepsilon =1$, formula (\ref{eq:welladapted}) reads as
\begin{equation}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) +g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)=0, \quad \forall X,Y,Z\in \mathfrak{X} (M).
\label{eq:welladapted-1}
\end{equation}
This property and the parallelization of $J$ and $g$ are used in \cite[Sect.\ 3.1]{valdes} to characterize $\nabla^{\mathrm{w}}$.
By using functorial connections, in \cite[Theor.\ 3.1]{munoz}, the authors obtained the expression of $\nabla^{\mathrm{w}}$ in terms of complex frames.
\bigskip
In the above two cases, explicit expressions of $\nabla^{\mathrm{w}}$ had been obtained. The situation is different in the other two cases: there the expressions had been obtained under the name of canonical connections, because the starting point was not that of the well adapted connections. Authors of the ``Bulgarian school'' defined adapted connections which coincide with the well adapted one. Let us see it.
\bigskip
\textbf{ Almost Norden manifolds or $(-1,-1)$-structures and almost product Riemannian manifolds or $(1,1)$-structures.}
If $\varepsilon =-1$ formula (\ref{eq:welladapted}) reads
\begin{equation}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X) -g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)+g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)=0, \quad \forall X,Y,Z\in \mathfrak{X} (M).
\label{eq:welladapted+1}
\end{equation}
Equations (\ref{eq:welladapted-1}) and (\ref{eq:welladapted+1}) are also the conditions defining the well adapted connections in the almost product Riemannian and almost Norden manifolds, because formula (\ref{eq:welladapted}) depends on $\varepsilon$ but does not depend on $\alpha$. In the almost Norden case, this condition coincides with the equations obtained in \cite[Theor. 5]{ganchev-mihova}, and in the almost semi-Riemannian product case of signature $(n,n)$ with those of \cite[Theor.\ 4]{mihova}. In these papers these connections are called canonical connections.
Besides, these cases correspond to the $(J^2=\pm1)$-metric manifolds having $\alpha \varepsilon=1$. The authors of the quoted papers introduced the canonical connection by using the tensor of type $(1,2)$ defined as the difference of the Levi Civita connections of the metrics $g$ and $\widetilde g$,
where $\widetilde g$ is the twin metric defined as
\[
\widetilde g(X,Y)= g(JX,Y), \quad \forall X, Y \in \mathfrak{X} (M).
\]
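A quick check (ours) shows where the hypothesis $\alpha\varepsilon=1$ enters: using the compatibility condition $g(JX,JY)=\varepsilon g(X,Y)$ of the $(\alpha,\varepsilon)$-structure together with $J^2=\alpha\, Id$, one obtains
\[
\widetilde g(Y,X)=g(JY,X)=g(X,JY)=\alpha\varepsilon\, g(JX,Y)=\alpha\varepsilon\, \widetilde g(X,Y), \quad \forall X, Y \in \mathfrak{X} (M),
\]
so that $\widetilde g$ is symmetric precisely when $\alpha\varepsilon=1$.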
The twin metric is semi-Riemannian of signature $(n,n)$ and has a r\^{o}le similar to that of the K\"{a}hler form in the case $\alpha \varepsilon=-1$.
\bigskip
We finish this section by studying the difference $\nabla^{\mathrm{w}} - \nabla^{\mathrm{g}}$.
\begin{prop}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold. Then $\nabla^{\mathrm{w}} = \nabla^{\mathrm{g}}$ if and only if
\[
g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)=g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX), \quad \forall X, Y, Z \in \mathfrak{X} (M).
\]
In that case, $(M,J,g)$ is a K\"{a}hler type manifold.
\end{prop}
\textbf{Proof.} The Levi Civita connection $\nabla^{\mathrm{g}}$ is the unique torsionless connection parallelizing $g$. It is easy to prove that these conditions are equivalent to $\nabla^{\mathrm{g}}g=0$ and
\begin{equation*}
g(\mathrm{T}^{\mathrm{g}}(X,Y),Z) = g(\mathrm{T}^{\mathrm{g}}(Z,Y),X), \quad \forall X,Y, Z \in \mathfrak{X} (M).
\end{equation*}
As $\nabla^{\mathrm{w}}$ is an adapted connection one has $\nabla^{\mathrm{w}}g=0$. If $\nabla^{\mathrm{w}}$ satisfies the condition
\begin{equation}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z) = g(\mathrm{T}^{\mathrm{w}}(Z,Y),X), \quad \forall X,Y, Z \in \mathfrak{X} (M),
\label{eq:ba-ae-torsion2}
\end{equation}
\noindent then $\nabla^{\mathrm{w}}=\nabla^{\mathrm{g}}$. Taking into account formula (\ref{eq:welladapted}), the equality (\ref{eq:ba-ae-torsion2}) is satisfied if and only if
\[
g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)=g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX), \quad \forall X, Y, Z \in \mathfrak{X} (M),
\]
\noindent as we wanted.
Finally, as $\nabla^{\mathrm{w}}$ is an adapted connection one has $\nabla^{\mathrm{w}}J=0$, and then if both connections coincide, the manifold is of K\"{a}hler type because $\nabla^{\mathrm{g}}J=0$, thus finishing the proof. $\blacksquare$
\begin{teor}
\label{Levi Civita2}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold. Then $\nabla^{\mathrm{w}} = \nabla^{\mathrm{g}}$ if and only if $(M,J,g)$ is a K\"{a}hler type manifold.
\end{teor}
\textbf{Proof.} The manifold $(M,J,g)$ is a K\"{a}hler type manifold if and only if $\nabla^{\mathrm{g}} J=0$. As the Levi Civita connection of $g$ satisfies $\nabla^{\mathrm{g}} g = 0$ and $\mathrm{T}^{\mathrm{g}}=0$, then the derivation law $\nabla^{\mathrm{g}}$ also satisfies all conditions in Theorem \ref{teor:bienadaptada-ae-estructura}, thus proving the Levi Civita and the well adapted connections coincide.
Note that the other implication has been proved in the previous proposition, thus finishing the proof. $\blacksquare$
\bigskip
The above results show that the well adapted connection is the most natural extension of the Levi Civita connection to $(J^2=\pm1)$-metric manifolds.
\section{The Chern connection of an $(\alpha ,\varepsilon)$-manifold satisfying $\alpha \varepsilon =-1$}
\label{sec:chernconnection}
As we have pointed out in Section~\ref{sec:introduction}, several papers have been published in which the authors look for a \textit{canonical} connection in some of the four geometries, generalizing the Levi Civita connection. In the case of $(\alpha,\varepsilon)$-structures with $\alpha\varepsilon=1$ the connections obtained in \cite{ganchev-mihova} and \cite{mihova} coincide with the well adapted connection, as we have seen in the above Section. In the case $\alpha\varepsilon=-1$ one can define Chern-type connections, which in general do not coincide with the well adapted connection. In this Section we are going to define Chern-type connections for $\alpha\varepsilon=-1$, proving that one cannot define them in the case $\alpha\varepsilon=1$, and finally we will characterize when such a connection coincides with the well adapted connection.
\bigskip
The Chern connection was introduced in \cite{chern} in the setting of Hermitian manifolds. It can also be defined in the non-integrable case, because one has:
\begin{teor}[{\cite[Theor. 6.1]{gray}}]
Let $(M,J,g)$ be an almost Hermitian manifold, i.e., a manifold endowed with a $(-1,1)$-structure. Then there exists a unique linear connection $\nabla^{\mathrm{c}}$ in $M$ satisfying $\nabla^{\mathrm{c}} J=0$, $\nabla^{\mathrm{c}} g=0$ and
\begin{equation*}
\mathrm{T}^{\mathrm{c}}(JX,JY)= -\mathrm{T}^{\mathrm{c}}(X,Y), \quad \forall X, Y \in \mathfrak{X} (M),
\end{equation*}
where $\mathrm{T}^{\mathrm{c}}$ denotes the torsion tensor of $\nabla^{\mathrm{c}}$.
\end{teor}
In the almost para-Hermitian case such a connection can also be defined. In order to prove it, we need the following result:
\begin{lema}\label{teor:chernparahermitian}
Let $(M,J,g)$ be an almost para-Hermitian manifold, i.e., a manifold endowed with a $(1,-1)$-structure. Let $\nabla$ be any linear connection in $M$ with torsion tensor $\mathrm{T}$. Then, the following conditions are equivalent:
\begin{enumerate}
\renewcommand*{\theenumi}{\roman{enumi})}
\renewcommand*{\labelenumi}{\theenumi}
\item $\mathrm{T}(JX,JY)= \mathrm{T}(X,Y)$, for all vector fields $X, Y$ in $M$.
\item $ \mathrm{T}(X,Y) =0$, for all vector fields $X \in T^+_J (M)$ and $Y \in T^-_J(M)$.
\end{enumerate}
\end{lema}
{\bf Proof.}
$ i) \Rightarrow ii)$ Given $X \in T^+_J (M), Y \in T^-_J(M)$ one has $JX=X, JY=-Y$. Then,
\[
\mathrm{T}(JX,JY) = \mathrm{T}(X,Y)
\Rightarrow
-\mathrm{T}(X,Y)=\mathrm{T}(X,Y) \Rightarrow \mathrm{T}(X,Y) =0.
\]
$ii) \Rightarrow i)$ Given two vector fields $X, Y$ in $M$ one has the decompositions
\[
X=J^+X + J^-X, \quad Y = J^+Y+J^-Y, \quad JX = J^+X -J^-X, \quad JY=J^+Y-J^-Y,
\]
and then taking into account $ii)$ one has
\begin{eqnarray*}
\mathrm{T}(X,Y)&=& \mathrm{T} (J^+X,J^+Y)+\mathrm{T}(J^+X,J^-Y)+\mathrm{T}(J^-X,J^+Y)+\mathrm{T}(J^-X,J^-Y)\\
&=& \mathrm{T}(J^+X,J^+Y)+\mathrm{T}(J^-X,J^-Y),\\
\mathrm{T}(JX,JY)&=& \mathrm{T}(J^+X,J^+Y)-\mathrm{T}(J^+X,J^-Y)-\mathrm{T}(J^-X,J^+Y)+\mathrm{T}(J^-X,J^-Y)\\
&=& \mathrm{T}(J^+X,J^+Y)+\mathrm{T}(J^-X,J^-Y),
\end{eqnarray*}
thus proving $\mathrm{T}(JX,JY)= \mathrm{T}(X,Y)$. $\blacksquare$
\bigskip
In \cite[Prop. 3.1]{etayo} Cruceanu and one of us had defined a connection in an almost para-Hermitian manifold as the unique natural connection satisfying condition $ii)$ in the above lemma. Then, we can define the Chern-type connection on a $(J^2=\pm1)$-metric manifold satisfying $\alpha \varepsilon=-1$ as the connection determined in the following:
\begin{teor}[{\cite[Prop. 3.1]{etayo}, \cite[Theor. 6.1]{gray}}]
\label{teor:chern-connection}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold with $\alpha\varepsilon =-1$. Then there exists a unique linear connection $\nabla^{\mathrm{c}}$ in $M$ reducible to the $G_{(\alpha,\varepsilon)}$-structure defined by $(J,g)$ whose torsion tensor $\mathrm{T}^{\mathrm{c}}$ satisfies
\[
\mathrm{T}^{\mathrm{c}}(JX,JY)= \alpha \mathrm{T}^{\mathrm{c}}(X,Y), \quad \forall X, Y \in \mathfrak{X} (M).
\]
This connection will be named the Chern connection of the manifold $(M,J,g)$.
\end{teor}
\begin{obs}
\label{noChern}
We are going to check that in a $(1,1)$-metric manifold there is no unique reducible connection satisfying
\[
\mathrm{T}(X,Y) =0, \quad \forall X \in T^+_J (M), \forall Y \in T^-_J(M).
\]
Observe that Lemma \ref{teor:chernparahermitian} is also true in the case of a $(1,1)$-metric manifold, because the metric does not appear in the result. Thus, this Remark shows that a Chern connection cannot be defined in a $(1,1)$-metric manifold.
\bigskip
Let us prove the result. Taking an adapted local frame $(X_1, \ldots, X_n, Y_1,\ldots, Y_n)$ to the $G_{(1,1)}$-structure in $U$, one has
\[
\mathrm{T}(X_i,Y_j)= \nabla_{X_i} Y_j -\nabla_{Y_j} X_i -[X_i, Y_j]=0, \quad \forall i, j =1, \ldots, n.
\]
As the linear connection $\nabla$ is reducible, then it is determined in $U$ by the following functions
\[
\nabla_{X_i} X_j =\sum_{k=1}^n \Gamma_{ij}^k X_k, \quad \nabla_{Y_i} X_j = \sum_{k=1}^n \bar \Gamma_{ij}^k X_k, \quad
\nabla_{X_i} Y_j =\sum_{k=1}^n \Gamma_{ij}^{k+n} Y_k, \quad \nabla_{Y_i} Y_j =\sum_{k=1}^n \bar \Gamma_{ij}^{k+n} Y_k, \quad i, j, k =1, \ldots, n.
\]
From the condition $\mathrm{T}(X_i,Y_j)=0$ one obtains
\begin{eqnarray*}
g(\nabla_{X_i} Y_j, X_k) - g(\nabla_{Y_j} X_i, X_k) &=& g([X_i, Y_j], X_k),\\
g(\nabla_{X_i} Y_j, Y_k) - g(\nabla_{Y_j} X_i, Y_k) &=& g([X_i, Y_j], Y_k), \quad \forall i, j, k =1, \ldots, n,
\end{eqnarray*}
and then
\[
\Gamma_{ij}^{k+n} = g([X_i,Y_j],Y_k),\quad \bar \Gamma_{ji}^k = g([Y_j,X_i],X_k), \quad \forall i, j, k =1, \ldots, n.
\]
These equalities do not impose any condition on the functions
\[
\Gamma_{ij}^k, \bar \Gamma_{ij}^{k+n}, \quad i,j,k=1, \ldots, n,
\]
and then, the condition is not enough to determine a unique linear connection.
\end{obs}
Finally, we will obtain the relation between the Chern connection $\nabla^{\mathrm{c}}$ and the well adapted connection $\nabla^{\mathrm{w}}$, in the case $\alpha \varepsilon =-1$. We will use the potential tensor of
$\nabla^{\mathrm{w}}$, which is given by
\[
S^{\mathrm{w}}(X,Y)=\nabla^{\mathrm{w}}_X Y -\nabla^{\mathrm{g}}_X Y, \quad \forall X, Y \in \mathfrak{X} (M),
\]
according to Definition \ref{teor:tensorpotencial}. The first result we need is the following:
\begin{prop}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold. The following conditions are equivalent:
\begin{enumerate}
\renewcommand*{\theenumi}{\roman{enumi})}
\renewcommand*{\labelenumi}{\theenumi}
\item $\mathrm{T}^{\mathrm{w}}(X,Y) +\varepsilon \mathrm{T}^{\mathrm{w}}(JX,JY)=0$, for all vector fields $X,Y$ in $M$.
\item $S^{\mathrm{w}}(X,Y)= \frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_X J) JY$, for all vector fields $X,Y$ in $M$.
\end{enumerate}
\end{prop}
{\bf Proof.} As $\nabla^{\mathrm{w}}$ is a natural connection, by Lemma \ref{teor:natural}, one has
\[
\nabla^{\mathrm{w}}_X Y = \nabla^{\mathrm{g}}_X Y + S^{\mathrm{w}} (X,Y), \quad \forall X,Y \in \mathfrak{X} (M).
\]
Taking into account $\nabla^{\mathrm{g}}$ is torsionless, one easily checks the following relation
\[
\mathrm{T}^{\mathrm{w}}(X,Y)=S^{\mathrm{w}}(X,Y)-S^{\mathrm{w}}(Y,X), \quad \forall X,Y \in \mathfrak{X} (M).
\]
Given vector fields $X,Y,Z$ in $M$ one has
\begin{eqnarray*}
g(\mathrm{T}^{\mathrm{w}}(X,Y),Z)-g(\mathrm{T}^{\mathrm{w}}(Z,Y),X)&=& g(S^{\mathrm{w}}(X,Y),Z)-g(S^{\mathrm{w}}(Y,X),Z)\\
&-&g(S^{\mathrm{w}}(Z,Y),X)+g(S^{\mathrm{w}}(Y,Z),X)\\
&=& -g(S^{\mathrm{w}}(X,Z),Y)+g(S^{\mathrm{w}}(Z,X),Y)\\
&+&g(S^{\mathrm{w}}(Y,Z),X)+g(S^{\mathrm{w}}(Y,Z),X)\\
&=& g(\mathrm{T}^{\mathrm{w}}(Z,X),Y)+2g(S^{\mathrm{w}}(Y,Z),X),\\
g(\mathrm{T}^{\mathrm{w}}(JX,Y),JZ)-g(\mathrm{T}^{\mathrm{w}}(JZ,Y),JX)&=& g(\mathrm{T}^{\mathrm{w}}(JZ,JX),Y)+2g(S^{\mathrm{w}}(Y,JZ),JX)\\
&=& g(\mathrm{T}^{\mathrm{w}}(JZ,JX),Y)+2\alpha\varepsilon g(JS^{\mathrm{w}}(Y,JZ),X).
\end{eqnarray*}
Taking into account the above formulas and formula (\ref{eq:welladapted}) one obtains the following relation satisfied by the well adapted connection:
\[
g(\mathrm{T}^{\mathrm{w}}(Z,X)+\varepsilon\mathrm{T}^{\mathrm{w}}(JZ,JX),Y)=-2g(S^{\mathrm{w}}(Y,Z)+\alpha JS^{\mathrm{w}}(Y,JZ),X), \quad \forall X,Y, Z \in\mathfrak{X} (M).
\]
Then one has the following chain of equivalences:
\begin{eqnarray*}
\mathrm{T}^{\mathrm{w}}(Z,X)+\varepsilon \mathrm{T}^{\mathrm{w}}(JZ,JX) =0 &\Leftrightarrow& S^{\mathrm{w}}(Y,Z)+\alpha JS^{\mathrm{w}}(Y,JZ)=0\\
&\Leftrightarrow& JS^{\mathrm{w}}(Y,Z)+S^{\mathrm{w}}(Y,JZ)=0\\
&\Leftrightarrow& JS^{\mathrm{w}}(Y,Z)=- S^{\mathrm{w}}(Y,JZ)\\
&\Leftrightarrow& (\nabla^{\mathrm{g}}_Y J)Z = JS^{\mathrm{w}}(Y,Z)-S^{\mathrm{w}}(Y,JZ)=2JS^{\mathrm{w}}(Y,Z)\\
&\Leftrightarrow& S^{\mathrm{w}}(Y,Z)=\frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_Y J) JZ,
\end{eqnarray*}
for all vector fields $X,Y,Z$ in $M$. Then one has
\begin{equation}
\mathrm{T}^{\mathrm{w}}(X,Y) +\varepsilon \mathrm{T}^{\mathrm{w}}(JX,JY)=0 \Leftrightarrow \nabla^{\mathrm{w}}_X Y = \nabla^{\mathrm{g}}_X Y + \frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_X J) JY, \quad \forall X,Y \in \mathfrak{X} (M). \ \blacksquare
\label{eq:nw=no}
\end{equation}
\begin{teor}
\label{teor:quasi}
Let $(M,J,g)$ be a $(J^2=\pm1)$-metric manifold with $\alpha\varepsilon=-1$. Then the well adapted connection and the Chern connection coincide if and only if
\[
\nabla^{\mathrm{w}}_X Y = \nabla^{\mathrm{g}}_X Y + \frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_X J) JY, \quad \forall X,Y \in \mathfrak{X} (M).
\]
\end{teor}
{\bf Proof. } As $\alpha ,\varepsilon \in \{ -1,1\} $ and $\alpha\varepsilon=-1$, then $\alpha =-\varepsilon $, and then equation (\ref{eq:nw=no}) reads as
\[
\mathrm{T}^{\mathrm{w}}(X,Y) -\alpha \mathrm{T}^{\mathrm{w}}(JX,JY)=0 \Leftrightarrow \nabla^{\mathrm{w}}_X Y = \nabla^{\mathrm{g}}_X Y + \frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_X J) JY, \quad \forall X,Y \in \mathfrak{X} (M),
\]
and the result trivially follows from Theorem \ref{teor:chern-connection}. $\blacksquare$
\bigskip
If $(M,J,g)$ is a $(J^2=\pm1)$-metric manifold with $\alpha\varepsilon=-1$, the connection given by the derivation law
\[
\nabla^{0}_X Y = \nabla^{\mathrm{g}}_X Y + \frac{(-\alpha)}{2} (\nabla^{\mathrm{g}}_X J) JY, \quad \forall X,Y \in \mathfrak{X} (M),
\]
is called the first canonical connection of $(M,J,g)$. Then the Chern connection and the well adapted connection are the same if and only if both connections coincide with the first canonical connection of $(M,J,g)$.
\section{Introduction}
A major challenge in the theory of quantum error correcting codes is to design codes that are
well suited for fault tolerant quantum computing. Such codes have many stringent requirements
imposed on them, constraints that are usually not considered in the design of classical codes.
An important metric that captures the suitability of a family of codes for
fault tolerant quantum computing is the threshold of that family of codes.
Informally, the threshold of a family of codes of increasing length is the maximum error rate that can be tolerated as the length of the codes in the family grows.
The threshold is affected by numerous factors and there is no single parameter that we can optimize to design codes with high threshold. Furthermore, in the literature thresholds are reported under
various assumptions. As the authors of \cite{landahl11} noted, there are three thresholds that
are of interest: i) the code threshold which assumes there are no measurement errors, ii) the phenomenological threshold which incorporates to some extent the effect of measurement errors, and iii) the circuit
threshold which incorporates all errors due to gates and measurements. For a given family of codes,
invariably the code threshold is the highest and the circuit threshold the lowest.
One of the nonidealities that lowers thresholds is the introduction of measurement errors. So codes which have the same code threshold, such as the toric codes and color codes, can end up with different circuit thresholds \cite{landahl11,fowler11}. At this point one can attempt
to improve the circuit threshold by
designing codes that have efficient recovery schemes and are more robust
to measurement errors among other things. An important development in this direction has come in the
form of subsystem codes, also called operator error correcting codes \cite{bacon06a,kribs05, kribs05b,kribs05c,poulin05,ps08}. By providing additional degrees
of freedom subsystem codes allow us to design recovery schemes which are
more robust to circuit nonidealities. That they can improve the threshold has already been reported
in the literature \cite{aliferis06}.
A class of codes that have been found to be suitable for fault tolerant computing are the
topological codes. These codes have local stabilizer generators, enabling the design of a local
architecture for computing with them and also have the highest thresholds reported so far \cite{raussen07}. It is
tempting to combine the benefits of these codes with the ideas of subsystem codes. This was first achieved in the work of Bombin \cite{bombin10}, followed by Suchara, Bravyi and Terhal \cite{suchara10}.
However, the code thresholds reported in \cite{suchara10} were lower than the
thresholds of the toric codes and color codes. Nonetheless, this should not lead us to a hasty conclusion that the topological subsystem codes are not as good as the toric codes. There are at least
two reasons why topological subsystem codes warrant further investigation. Firstly, the threshold reported in \cite{suchara10} is about 2\% while \cite{andrist12} showed that the topological subsystem
codes can have a threshold as high as 5.5\%. This motivates further study of decoders for topological subsystem codes that perform closer to their theoretical limits, as well as the study of subsystem codes that have higher code thresholds.
The second point that must be borne in mind is the rather surprising fact that the circuit threshold of the color code on the square octagon lattice is lower than that of the toric codes. Both of these codes have a code
threshold of about 11\%. But the circuit threshold of the color codes is about an order of magnitude
lower than that of the toric codes. Both codes enable local architectures for fault tolerant quantum computing, both architectures realize gates by code
deformation techniques, and both achieve universality in quantum computation through magic state
distillation. Moreover, the color codes considered in \cite{landahl11}, unlike the surface code, can even realize the entire Clifford group transversally. Despite this apparent advantage over the toric codes,
the color codes lose out to the surface codes in one crucial aspect---the weight of the check operators.
Some of the check operators for the square octagon color code have a
weight that is twice the weight of the check operators in the toric codes. Even though these higher weight check operators account for only about a fifth of the operators, they appear to be the dominant reason for the lower circuit threshold of the color codes.
The preceding discussion indicates that measurement errors can severely undermine the performance of a code with many good properties including a good code threshold.
Thus any improvement in circuit techniques or error recovery schemes to make the circuits
more robust to these errors are likely to yield significant improvements in the circuit thresholds.
This is precisely where topological subsystem codes come into the picture. Because they can be designed to
function with just two-body measurements, these codes can greatly mitigate the detrimental effects of
measurement errors. A strong case in favor of the suitability of the subsystem codes
for current quantum information technologies has already been made in \cite{suchara10}.
For all these reasons topological subsystem codes are worth further investigation.
This work is aimed at realizing the potential of topological subsystem codes. Our main contribution
in this paper is to give large classes of topological subsystem codes, which were not previously
known in the literature.
Our results put at our disposal a huge arsenal of topological subsystem codes, which aids in the
evaluation of their promise for fault tolerant quantum computing. In addition to building upon the
work of \cite{suchara10} it also sheds light on color codes, an area of independent interest.
The paper is structured as follows. After reviewing the necessary background on subsystem codes
in Section~\ref{sec:bg}, we give our main results in Section~\ref{sec:tsc}. Then in
Section~\ref{sec:decoding} we show how to measure the stabilizer for the proposed codes in a consistent fashion.
We conclude with a brief discussion on the significance of these results in Section~\ref{sec:summary}.
\section{Background and Previous Work}\label{sec:bg}
\subsection{Subsystem codes}
In the standard model of quantum error-correction, information is protected by encoding it into a
subspace of the system Hilbert space.
In the subsystem model of error correction \cite{bacon06a,kribs05, kribs05b,kribs05c,poulin05,ps08}, the subspace is further decomposed as
$L\otimes G$. The subsystem $L$ encodes the logical information, while the subsystem $G$ provides
additional degrees of freedom; it is also called the gauge subsystem and said to encode the gauge qubits. The notation $[[n,k,r,d]]$ is used to denote a subsystem code on $n$ qubits, with $\dim L= 2^k$ and $\dim G= 2^r$ and able to detect errors of weight up to $d-1$ on the
subsystem $L$. In this model an $[[n,k,d]]$ quantum code is the same as an $[[n,k,0,d]]$ subsystem code.
The introduction of the gauge subsystem allows us to simplify the error
recovery schemes \cite{bacon06a,aliferis06} since errors that act only on the gauge subsystem need not be corrected. Although sometimes this comes at the expense of a reduced encoding rate, in cases such as the Bacon-Shor code it can substantially improve the performance with respect to the associated stabilizer code without affecting the rate \cite{bacon06a}.
We assume that the reader is familiar with the stabilizer formalism for quantum codes \cite{calderbank98, gottesman97}; we briefly review its extension to subsystem codes \cite{poulin05,ps08}. A subsystem code is defined by a (generally nonabelian) subgroup of the Pauli group;
it is called the gauge group $\mc{G}$ of the subsystem code. We denote by $S'=Z(\mc{G})$ the centre of $\mc{G}$, and let $S$ be such that $\langle i{\bf I}, S\rangle =S'$. The subsystem code is simply the space stabilized by $S$. (Henceforth, we shall ignore phase factors and treat $S$ as equivalent to $S'$.)
The bare logical operators of the code are given by the elements in $C(\mc{G})$, the centralizer of $\mc{G}$. (We view the identity also as a logical operator.) These logical operators act only on the information subsystem and not on the gauge subsystem. The operators in $C(S)$ are called dressed logical operators; in general they act on the gauge subsystem as well.
For an $[[n,k,r,d]]$ subsystem code, with the stabilizer dimension $\dim S = s$, we have the following relations:
\begin{eqnarray}
n&= & k+r+s,\\
\dim \mc{G} &= &2r+s,\label{eq:dim-G}\\
\dim C(\mc{G}) & = & 2k+s,\label{eq:dim-CG}\\
d & = & \min \{\wt(e) \mid e \in C(Z(\mc{G}))\setminus \mc{G} \}.\label{eq:distance}
\end{eqnarray}
The notation $\wt(e)$ is used to denote the number of qubits on which the error $e$ acts nontrivially.
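The relations above are linear and straightforward to check numerically. The following minimal sketch (not part of the paper) verifies them on a worked instance, the $3\times 3$ Bacon-Shor code $[[9,1,4,3]]$ mentioned earlier, which has $s=4$ independent stabilizer generators.

```python
# Sanity check of the subsystem-code relations
#   n = k + r + s,  dim G = 2r + s,  dim C(G) = 2k + s,
# using the 3x3 Bacon-Shor code [[9, 1, 4, 3]] (s = 4) as an example.

def subsystem_relations(n, k, r, s):
    """Return (dim G, dim C(G)) implied by the relations, after
    verifying n = k + r + s."""
    assert n == k + r + s, "parameters violate n = k + r + s"
    return 2 * r + s, 2 * k + s

dim_G, dim_CG = subsystem_relations(n=9, k=1, r=4, s=4)
print(dim_G, dim_CG)   # 12 6
```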
\subsection{Color codes}
In the discussion on topological codes, it is tacitly assumed that the code is associated to a graph
which is embedded on some suitable surface.
Color codes \cite{bombin06} are a class of topological codes derived from 3-valent graphs with the additional property that they are 3-face-colorable. Such graphs are called 2-colexes. The stabilizer of the color code
associated to such a 2-colex is generated by operators defined as follows:
\begin{eqnarray}
B_{f}^{\sigma} = \prod_{i\in f} \sigma_i, \sigma \in \{X,Z \},
\end{eqnarray}
where $f$ ranges over the faces of the 2-colex and $i$ runs over the vertices of $f$.
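As an illustrative aside (not part of the construction), one can check numerically that these face operators pairwise commute: two Pauli strings commute precisely when they anticommute on an even number of qubits, and in a 2-colex any two faces share an even number of vertices. The face sizes and qubit labels below are toy values chosen for the sketch.

```python
import numpy as np

# Face operators B_f^X and B_f^Z on a toy set of 6 qubits:
# face f = {0,1,2,3}, neighboring face f' = {2,3,4,5} (overlap of 2 qubits).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def pauli_string(ops, n):
    """Tensor product of single-qubit Paulis; ops maps qubit index -> matrix."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I))
    return out

n = 6
Bf_X = pauli_string({q: X for q in [0, 1, 2, 3]}, n)
Bf_Z = pauli_string({q: Z for q in [0, 1, 2, 3]}, n)
Bfp_Z = pauli_string({q: Z for q in [2, 3, 4, 5]}, n)

print(np.allclose(Bf_X @ Bf_Z, Bf_Z @ Bf_X))    # True: overlap on 4 qubits
print(np.allclose(Bf_X @ Bfp_Z, Bfp_Z @ Bf_X))  # True: overlap on 2 qubits
```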
A method to construct 2-colexes from standard graphs was proposed in \cite{bombin07b}. Because of its relevance for us we
briefly review it here.
\renewcommand{\thealgorithm}{\Alph{algorithm}}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[H]
\floatname{algorithm}{Construction}
\caption{{\ensuremath{\mbox{ Topological color code construction}}}}\label{proc:tcc-bombin}
\begin{algorithmic}[1]
\REQUIRE {An arbitrary graph $\Gamma$.}
\ENSURE {A 2-colex $\Gamma_2$.}
\STATE Color each face of the embedding by $x\in \{r,b,g\}$.
\STATE Split each edge into two edges and color the enclosed face by $y\in \{ r,b,g\}\setminus \{x\}$ as shown below.
\begin{center}
\begin{tikzpicture}
\draw [color=black, thick](0,0)--(1,0);
\draw[fill =gray] (0,0) circle (2pt);
\draw[fill =gray] (1,0) circle (2pt);
\draw [color=black, fill=green!30, thick](2,0)..controls (2.5,0.25)..(3,0)--(3,0)..controls (2.5,-0.25)..(2,0);
\draw[fill =gray] (2,0) circle (2pt);
\draw[fill =gray] (3,0) circle (2pt);
\end{tikzpicture}
\end{center}
\STATE Transform each vertex of degree $d$ into a face containing $d$ edges and color it
$z\in \{r,b,g \} \setminus \{ x,y\}$.
Denote this graph by $\Gamma_2$.
\begin{center}
\begin{tikzpicture}
\draw[fill=blue!30, color=blue!30] (1,0)--(-0.5,sin{60})--(-0.5,-sin{60});
\draw [color=black, fill=green!30, thick](0,0)..controls (0.5,0.25)..(1,0)--(1,0)..controls (0.5,-0.25)..(0,0);
\draw[fill =gray] (0,0) circle (2pt);
\draw[fill =gray] (1,0) circle (2pt);
\draw [color=black, fill=green!30, thick, rotate=120](0,0)..controls (0.5,0.25)..(1,0)--(1,0)..controls (0.5,-0.25)..(0,0);
\draw [color=black, fill=green!30, thick, rotate=-120](0,0)..controls (0.5,0.25)..(1,0)--(1,0)..controls (0.5,-0.25)..(0,0);
\draw[fill =gray] (0,0) circle (2pt);
\draw[fill =gray, rotate=120] (1,0) circle (2pt);
\draw[fill =gray,rotate=-120] (1,0) circle (2pt);
\draw [fill=red] (3,0)+(-30:0.5)--+(30:0.5)--+(90:0.5)--+(150:0.5)--+(210:0.5)--+(270:0.5)--+(-30:0.5);
\foreach \i in {0,120,...,300}
{
\draw [color=green!30,fill=green!30] (3,0) +(\i-30:0.5)--+(\i-15:1) -- +(\i+15:1)-- +(\i+30:0.5)-- +(\i-30:0.5) ;
\draw [color=red, thick](3,0) +(\i+15:1)--+(\i+30:0.5);
\draw [color=red, thick](3,0) +(\i-30:0.5)--+(\i-30+15:1);
\draw [color=green, thick](3,0) +(\i+15:1)--+(\i+15:1.5);
\draw [color=green, thick](3,0) +(\i-30+15:1.5)--+(\i-30+15:1);
\draw[color=blue, thick] (3,0) +(\i+15:1)--+(\i-15:1);
\draw [fill =gray] (3,0) +(\i+15:1) circle (2pt);
\draw [fill =gray] (3,0) +(\i-15:1) circle (2pt);
}
\foreach \i in {0,120,...,240}
{
\draw [color=blue, thick](3,0) +(\i-30:0.5)--+(\i+30:0.5);
\draw [color=green, thick](3,0) +(\i+30:0.5)--+(\i+90:0.5);
}
\foreach \i in {0,60,...,300}
{
\draw [fill =gray] (3,0) +(\i+30:0.5) circle (2pt);
}
\end{tikzpicture}
\end{center}
\end{algorithmic}
\end{algorithm}
Notice that in the above construction, every vertex, face and edge in $\Gamma$ leads to a face in
$\Gamma_2$. Because of this correspondence, we shall call a face in $\Gamma_2$ a $v$-face if its parent
in $\Gamma$ was a vertex, an $f$-face if its parent was a face, and an $e$-face if its parent
was an edge. Note that an $e$-face is always 4-sided.
\subsection{Topological subsystem codes via color codes}
At the outset it is fitting to distinguish topological subsystem codes from non-topological codes
such as the Bacon-Shor codes that are nonetheless local. A more precise definition can be found in
\cite{bravyi09,bombin10}, but for our purposes it suffices to state it in the following terms.
\begin{compactenum}[(i)]
\item The stabilizer $S$ (and the gauge group) have local generators with support on $O(1)$ qubits.
\item Errors in $C(S)$ that have a trivial homology on the surface are in the stabilizer,
while the undetectable errors have a nontrivial homology on the surface.
\end{compactenum}
We denote the vertex set and edge set of a graph $\Gamma$ by $V(\Gamma)$, $E(\Gamma)$ respectively.
We denote the set of edges incident on a vertex $v$ by $\delta(v)$ and the edges that constitute the
boundary of a face by $\partial(f)$. We denote the Euler characteristic of a graph by $\chi$, where
$\chi= |V(\Gamma)| -| E(\Gamma)|+|F(\Gamma)|$.
The dual of a graph is the graph obtained by replacing
every face $f$ with a vertex $f^\ast$, and for every edge in the boundary of two faces $f_1$
and $f_2$, creating a dual edge connecting $f_1^\ast$ and $f_2^\ast$.
The subsystem code construction due to \cite{bombin10} takes the dual of a 2-colex,
and modifies it to obtain a subsystem code.
The procedure is outlined below:
\begin{algorithm}[H]
\floatname{algorithm}{Construction}
\caption{{\ensuremath{\mbox{ Topological subsystem code construction}}}}\label{proc:tsc-bombin}
\begin{algorithmic}[1]
\REQUIRE {An arbitrary 2-colex $\Gamma_2$.}
\ENSURE{Topological subsystem code. }
\STATE Take the dual of $\Gamma_2$. It is a 3-vertex-colorable graph.
\STATE Orient each edge as a directed edge as per the following:
\begin{center}
\begin{tikzpicture}
\draw [color=black, ->, thick](0,0)--(1,0);
\draw[fill =red] (0,0) circle (2pt);
\draw[fill =blue] (1,0) circle (2pt);
\draw [color=black, ->, thick](2,0)--(3,0);
\draw[fill =blue] (2,0) circle (2pt);
\draw[fill =green] (3,0) circle (2pt);
\draw [color=black, ->, thick](4,0)--(5,0);
\draw[fill =green] (4,0) circle (2pt);
\draw[fill =red] (5,0) circle (2pt);
\end{tikzpicture}
\end{center}
\STATE Transform each (directed) edge into a 4-sided face.
\begin{center}
\begin{tikzpicture}
\draw [color=black, ->, thick](0,0)--(1,0);
\draw[fill =gray] (0,0) circle (2pt);
\draw[fill =black] (1,0) circle (2pt);
\draw [color=blue, thick] (1.5,0.5)--(2.5,0.5);
\draw [color=blue, thick] (1.5,-0.5)--(2.5,-0.5);
\draw[color=red, ultra thick] (1.5,0.5)--(1.5,-0.5);
\draw[color=green, ultra thick] (2.5,0.5)--(2.5,-0.5);
\draw[color=gray, ->] (1.25,0) -- (2.75,0);
\draw[fill =gray] (1.5,0.5) circle (2pt);
\draw[fill =gray] (1.5,-0.5) circle (2pt);
\draw[fill =gray] (2.5,0.5) circle (2pt);
\draw[fill =gray] (2.5,-0.5) circle (2pt);
\end{tikzpicture}
\end{center}
\STATE Transform each vertex into a face with as many sides as its degree. (The preceding splitting of edges implicitly accomplishes this. Each of these faces has a boundary of alternating blue and red edges.) Denote this expanded graph as $\overline{\Gamma}$.
\begin{center}
\begin{tikzpicture}
\foreach \i in {0,60,...,300}
{
\draw [color=gray](0,0)--(\i:1);
}
\draw [fill =gray] (0,0) circle (2pt);
\foreach \i in {0,120,...,240}
{
\draw [color=green, thick](3,0) +(\i-30:0.5)--+(\i+30:0.5);
\draw [color=red, thick](3,0) +(\i+30:0.5)--+(\i+90:0.5);
}
\foreach \i in {0,60,...,300}
{
\draw [color=gray](3,0)--+(\i:1);
\draw [color=blue](3,0) +(\i+30:0.5)--+(\i+atan{0.25}:1);
\draw [color=blue](3,0) +(\i+30:0.5)--+(\i+60-atan{0.25}:1);
\draw [fill =gray] (3,0) +(\i+30:0.5) circle (2pt);
}
\end{tikzpicture}
\end{center}
\STATE With every edge $e=(u,v)$, associate a link operator $\overline{K}_e\in \{X_u X_v, Y_u Y_v, Z_u Z_v \}$ depending on the color of the edge.
\STATE The gauge group is given by $\mc{G} = \langle \overline{K}_e \mid e\in E(\overline{\Gamma})\rangle$.
\end{algorithmic}
\end{algorithm}
Our presentation slightly differs from that of \cite{bombin10} with respect to step 2.
\begin{theorem}[\cite{bombin10}]
Let $\Gamma_2$ be a 2-colex embedded on a surface of genus $g$. The
subsystem code derived from $\Gamma_2$ via Construction~\ref{proc:tsc-bombin} has
the following parameters:
\begin{eqnarray}
[[3|V(\Gamma_2)|,2g,2|V(\Gamma_2)|+2g-2,d\geq \ell^\ast]],\label{eq:tsc-bombin}
\end{eqnarray}
where $\ell^\ast$ is the length of smallest nontrivial cycle in $\Gamma_2^\ast$.
\end{theorem}
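The parameters above can be tabulated directly; the following sketch also recovers the implied number of independent stabilizer generators $s = n - k - r$. The vertex count used in the example is an arbitrary illustrative value, not data from the paper.

```python
def tsc_bombin_params(V, g):
    """Parameters (n, k, r, s) of the subsystem code of Construction B for a
    2-colex with V vertices embedded on a genus-g surface:
    n = 3V, k = 2g, r = 2V + 2g - 2, and s = n - k - r = V - 4g + 2."""
    n, k, r = 3 * V, 2 * g, 2 * V + 2 * g - 2
    return n, k, r, n - k - r

# e.g. a 2-colex on a torus (g = 1) with 24 vertices (illustrative value):
print(tsc_bombin_params(24, 1))   # (72, 2, 48, 22)
```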
The cost of the two-body measurements is reflected to some extent in the increased overhead for the
subsystem codes.
Compared with the parameters of the color codes, this construction uses three times as many physical qubits as
the associated color code while encoding only half as many logical qubits. Our codes
offer a different tradeoff between the overhead and distance.
\subsection{Subsystem codes from 3-valent hypergraphs}
In this section we review a general construction for (topological) subsystem codes based on hypergraphs proposed in \cite{suchara10}. A hypergraph $\Gamma_h$ is an ordered pair $(V,E)$, where $E\subseteq 2^V$ is a collection of
subsets of $V$. The set $V$ is called the vertex set while $E$ is called the edge set. If all the
elements of $E$ are subsets of size 2, then $\Gamma_h$ is a standard graph. Any element of $E$ whose size is greater than 2 is
called a hyperedge and its rank is its size. The rank of a hypergraph is the maximum rank of its
edges. A hypergraph is said to be of degree $k$ if exactly $k$ edges are incident on every vertex.
A hypercycle in a hypergraph is a set of edges such that on every vertex in the support of these edges
an even number of edges are incident \footnote{ There are various other definitions of hypercycles, see
for instance \cite{duke85} for an overview.}. Note that this definition of hypercycle includes the
standard cycles consisting of rank-2 edges. A hypercycle is said to have trivial homology if we can
contract it to a point, by contracting its edges. Homological equivalence of cycles is somewhat more complicated
than in standard graphs.
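Since a hypercycle is just a set of edges meeting every vertex an even number of times, the space of hypercycles is the GF(2) null space of the vertex-edge incidence matrix, and its dimension can be computed by Gaussian elimination. A minimal sketch on a toy hypergraph (a triangle of rank-2 edges plus one rank-3 edge on the same three vertices; not an example from this paper):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask over the edges."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                      # lowest set bit = pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# vertices 0,1,2; edges e0=(0,1), e1=(1,2), e2=(2,0), rank-3 edge e3=(0,1,2)
edges = [{0, 1}, {1, 2}, {2, 0}, {0, 1, 2}]
incidence = [sum(1 << j for j, e in enumerate(edges) if v in e) for v in range(3)]

dim_cycles = len(edges) - gf2_rank(incidence)
print(dim_cycles)   # 1: the triangle {e0, e1, e2} is the only hypercycle
```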
The following construction is due to \cite{suchara10}. Let $\Gamma_h$ be a hypergraph satisfying the following conditions:
\begin{compactenum}[H1)]
\item $\Gamma_h$ has only rank-2 and rank-3 edges.
\item Every vertex is trivalent.
\item Two edges intersect at most at one vertex\footnote{Condition H3 implies that the hypergraphs that we are interested in are also reduced hypergraphs. A reduced hypergraph is one in which no edge is a subset of another edge.
}.
\item Two rank-3 edges are disjoint.
\end{compactenum}
We assume that at every vertex there is a qubit. For each rank-2 edge $e=(u,v)$ define a link
operator $K_e$ where
$K_e\in\{ X_u X_v, Y_u Y_v, Z_u Z_v\}$ and for each rank-3 edge $(u,v,w)$
define
\begin{eqnarray} K_e= Z_u Z_v Z_w.\label{eq:rank3LinkOp}
\end{eqnarray} The assignment of these link operators is such that
\begin{eqnarray}
K_e K_{e'}= (-1)^{|e\cap e'|} K_{e'}K_e. \label{eq:commuteRelns}
\end{eqnarray}
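This commutation relation can be spot-checked in the binary symplectic representation, where two Pauli strings commute iff their symplectic inner product vanishes mod 2; for the link operators this reduces to the parity of $|e\cap e'|$. A small sketch with hypothetical edge data:

```python
# Spot check of K_e K_e' = (-1)^{|e n e'|} K_e' K_e in symplectic form.

def commutes(p1, p2):
    """p = (x, z) bitmasks of a Pauli string; True iff the strings commute."""
    x1, z1 = p1
    x2, z2 = p2
    return bin((x1 & z2) ^ (z1 & x2)).count("1") % 2 == 0

def link(kind, u, v):
    """Link operator X_uX_v, Y_uY_v, or Z_uZ_v as (x, z) bitmasks."""
    x = z = 0
    if kind in ("X", "Y"):
        x = (1 << u) | (1 << v)
    if kind in ("Z", "Y"):
        z = (1 << u) | (1 << v)
    return x, z

print(commutes(link("X", 0, 1), link("Y", 1, 2)))   # False: |e n e'| = 1
print(commutes(link("X", 0, 1), link("Z", 2, 3)))   # True:  |e n e'| = 0
print(commutes(link("X", 0, 1), link("Y", 0, 1)))   # True:  |e n e'| = 2
```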
We denote the space of hypercycles of $\Gamma_h$ by $\Sigma_{\Gamma_h}$.
Let $\sigma$ be a hypercycle in $\Gamma_h$, then we associate a (cycle) operator $W(\sigma)$
to it as follows:
\begin{eqnarray}
W(\sigma)& = &\prod_{e\in \sigma} K_e\label{eq:loopOperator}.
\end{eqnarray}
The group of these cycle operators is denoted $\mc{L}_{\Gamma_h} $ and defined as
\begin{eqnarray}
\mc{L}_{\Gamma_h} &=& \langle W(\sigma)\mid \sigma \mbox{ is a hypercycle in } \Gamma_h \rangle \label{eq:cycleGroup}
\end{eqnarray}
It is immediate that $\dim \mc{L}_{\Gamma_h} = \dim \Sigma_{\Gamma_h}$.
\begin{algorithm}[H]
\floatname{algorithm}{Construction}
\caption{{\ensuremath{\mbox{ Topological subsystem code via hypergraphs}}}}\label{proc:tsc-suchara}
\begin{algorithmic}[1]
\REQUIRE {A hypergraph $\Gamma_h$ satisfying assumptions H1--4}
\ENSURE{A subsystem code specified by its gauge group $\mc{G}$. }
\STATE Color all the rank-3 edges, say with $r$; then extend this to a 3-edge-coloring of $\Gamma_h$ using $\{r,g,b\}$.
\STATE Define a graph $\overline{\Gamma}$ whose vertex set is the same as that of $\Gamma_h$.
\STATE For each rank-2 edge $(u,v)$ in $\Gamma_h$ assign an edge $(u,v)$ in $\overline{\Gamma}$ and
a link operator $\overline{K}_{u,v}=K_{u,v}$ as
\begin{eqnarray*}
\overline{K}_{u,v} =\left\{ \begin{array}{cl} X_u X_v & (u,v) \text{ is }r\\
Y_u Y_v & (u,v) \text{ is } g\\
Z_u Z_v & (u,v) \text{ is } b \end{array}\right.
\end{eqnarray*}
\STATE For each rank-3 edge $(u,v,w)$ assign three edges in $\overline{\Gamma}$, namely, $(u,v), (v,w), (w,u)$ and three link operators $\overline{K}_{u,v}=Z_uZ_v$, $\overline{K}_{v,w}=Z_vZ_w$, and $\overline{K}_{w,u}=Z_wZ_u$.
\STATE Define the gauge group $\mc{G} = \langle \overline{K}_e \mid e\in \overline{\Gamma}\rangle $.
\end{algorithmic}
\end{algorithm}
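Steps 3--5 of this construction are purely combinatorial, so they can be sketched directly; the sketch below uses the color-to-Pauli assignment of step 3 ($r\mapsto XX$, $g\mapsto YY$, $b\mapsto ZZ$) on a toy hypergraph whose edge data is purely illustrative.

```python
# Sketch of steps 3-5 of Construction C: expand a colored hypergraph into
# the link-operator labels of the derived graph.  Edge data is illustrative.

PAULI_OF_COLOR = {"r": "X", "g": "Y", "b": "Z"}

def gauge_generators(rank2, rank3):
    """rank2: list of ((u, v), color); rank3: list of (u, v, w).
    Returns link operators of the derived graph as (pauli, (u, v)) pairs."""
    gens = []
    for (u, v), c in rank2:
        gens.append((PAULI_OF_COLOR[c], (u, v)))
    for (u, v, w) in rank3:          # a rank-3 edge expands to three ZZ links
        gens += [("Z", (u, v)), ("Z", (v, w)), ("Z", (w, u))]
    return gens

gens = gauge_generators(rank2=[((0, 1), "r"), ((1, 2), "g")], rank3=[(2, 3, 4)])
print(gens)
# [('X', (0, 1)), ('Y', (1, 2)), ('Z', (2, 3)), ('Z', (3, 4)), ('Z', (4, 2))]
```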
\begin{theorem}[\cite{suchara10}]\label{th:suchara-Const}
A hypergraph $\Gamma$ satisfying the conditions H1-4, leads to a subsystem code whose gauge group is
the centralizer of $\Sigma_{\Gamma_h}$, i.e., $\mc{G} = C(\mc{L}_{\Gamma_h})$.
\end{theorem}
Since $S=\mc{G}\cap C(\mc{G})$, a subgroup of cycles corresponds to the stabilizer. Let us denote
this subgroup of cycles by $\Delta_{\Gamma_h}$.
Note that we have slightly simplified the construction proposed in \cite{suchara10}, in that
we let our link operators be only $\{X\otimes X, Y\otimes Y, Z\otimes Z \}$. But we
expect that this results in no loss in performance, because the number of encoded qubits and the distance are topological invariants and are not affected by these choices.
Our notation is slightly different
from that of \cite{suchara10}. We distinguish between the link operators associated with the
hypergraph $\Gamma_h$ and the derived graph $\overline{\Gamma}_h$; they coincide for the rank-2 edges.
Because the hypergraph is 3-edge-colorable, we can partition the edge set of the hypergraph as
$E(\Gamma_h) = E_r\cup E_g\cup E_b$ depending on the color. The derived graph $\overline{\Gamma}_h$
is not 3-edge-colorable, but we group its edges by the color of their parent edges in $\Gamma_h$.
Thus we can partition the edges of $\overline{\Gamma}_h$ also in terms of color as
$E(\overline{\Gamma}_h) = \overline{E}_r\cup \overline{E}_g\cup \overline{E}_b$.
The following result is a consequence of the definitions of $\mc{G}$, $\Sigma_{\Gamma_h}$ and Theorem~\ref{th:suchara-Const}.
\begin{corollary}\label{co:rank2Cycle}
If $\sigma$ is a cycle in $\Gamma_h$ and
consists of only rank-2 edges, then $W(\sigma)\in S$.
\end{corollary}
An obvious question posed by Theorem~\ref{th:suchara-Const} is how one constructs hypergraphs
that satisfy these constraints. This question will occupy us in the next section. A related question
is the syndrome measurement schedule for the associated subsystem code. This will be addressed in
Section~\ref{sec:decoding}.
\renewcommand{\thetheorem}{\arabic{theorem}}
\setcounter{theorem}{0}
\section{Proposed topological codes}\label{sec:tsc}
\subsection{Color codes}
While our main goal is to construct subsystem codes, our techniques use color codes as
intermediate objects. The previously known methods \cite{bombin07b} for color codes do not exhaust all possible color codes. Therefore we make a brief digression to propose a new method to construct color codes. Then we will return to the question of building subsystem codes.
The constructions presented in this paper assume that the associated
graphs and hypergraphs are connected, have no loops and all embeddings are such that the faces are homeomorphic to unit discs, in other words, all our embeddings are 2-cell embeddings.
\renewcommand{\thealgorithm}{\arabic{algorithm}}
\setcounter{algorithm}{0}
\renewcommand{\thealgorithm}{\arabic{algorithm}}
\begin{algorithm}[H]
\floatname{algorithm}{Construction}
\caption{{\ensuremath{\mbox{ Topological color code construction}}}}\label{proc:tcc-new}
\begin{algorithmic}[1]
\REQUIRE {An arbitrary bipartite graph $\Gamma$.}
\ENSURE {A 2-colex $\Gamma_2$.}
\STATE Consider the embedding of the bipartite graph $\Gamma$ on some surface. Take the dual of $\Gamma$, denote it $\Gamma^\ast$.
\STATE Since $\Gamma$ is bipartite, $\Gamma^\ast$ is a 2-face-colorable graph.
\STATE Replace every vertex of $\Gamma^\ast$ by a face with as many sides as its degree such that
every new vertex has degree 3.
\begin{center}
\begin{tikzpicture}
\foreach \i in {0,60,...,300}
{
\draw [color=gray](0,0)--(\i+30:1);
}
\draw [fill =gray] (0,0) circle (2pt);
\foreach \i in {0,120,...,240}
{
\draw [color=blue, ultra thick](3,0) +(\i-30:0.5)--+(\i+30:0.5);
\draw [color=red, ultra thick](3,0) +(\i+30:0.5)--+(\i+90:0.5);
}
\foreach \i in {0,60,...,300}
{
\draw [color=green, ultra thick](3,0) +(\i+30:0.5)--+(\i+30:1);
\draw [color=black, fill =gray] (3,0) +(\i+30:0.5) circle (2pt);
}
\end{tikzpicture}
\end{center}
\STATE The resulting graph is a 2-colex.
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Color codes from bipartite graphs]\label{th:tcc-new}
Every 2-colex can be generated via Construction~\ref{proc:tcc-new} from some bipartite graph.
\end{theorem}
\begin{proof}
Assume that there is a 2-colex that cannot be generated by Construction~\ref{proc:tcc-new}. Assuming that the faces and the edges are 3-colored using $\{r,g,b\}$, pick any color
$c\in\{ r,g,b\}$. Then contract all the edges of the remaining colors, namely $\{r,g,b\}\setminus c$. This process shrinks the faces that are coloured $c$.
The $c$-colored faces become the vertices of the resultant 2-face-colorable graph.
The dual of this graph is 2-vertex-colorable, hence bipartite. But this is precisely the reverse of the process described above. Therefore, the 2-colex must have arisen from a bipartite graph.
\end{proof}
Note that there need not be a unique bipartite graph that generates a color code. In fact,
three distinct bipartite graphs may generate the same color code, using the above construction.
We also note that the 2-colexes obtained via Construction~\ref{proc:tcc-bombin} have the property that for one of the colors, all the faces are of size 4.
The following result shows the relation between our result and Construction~\ref{proc:tcc-bombin}. The proof is straightforward and omitted.
\begin{corollary}\label{co:2valent}
The color codes arising from Construction~\ref{proc:tcc-bombin} can be obtained from Construction~\ref{proc:tcc-new} using bipartite graphs which have the property that one bipartition of vertices contains only vertices of degree two.
\end{corollary}
\subsection{Subsystem codes via color codes}
Here we outline a procedure to obtain a subsystem code from a color code. This uses the construction of
\cite{suchara10}.
We first construct a hypergraph that satisfies H1--4. We start with a 2-colex that
has an additional restriction, namely it has a nonempty set of faces each of which has a doubly even
number of vertices.
\begin{algorithm}[H]
\floatname{algorithm}{Construction}
\caption{{\ensuremath{\mbox{ Topological subsystem code construction}}}}\label{proc:tsc-new}
\begin{algorithmic}[1]
\REQUIRE {A 2-colex $\Gamma_2$, assumed to have a 2-cell embedding.}
\ENSURE {A topological subsystem code specified by the hypergraph $\Gamma_h$.}
\STATE We assume that the faces of $\Gamma_2$ are colored $r$, $b$, and $g$.
Let $\rm{F}_r$ be the collection of $r$-colored faces of $\Gamma_2$, and $\rm{F}\subseteq \rm{F}_r$ such
that $|f|\equiv 0 \bmod 4$ and $|f|>4$ for all $f\in \rm{F}$.
\FOR{$f \in \rm{F}$}
\STATE Add a face $f'$ inside $f$ such that
$|f|=2|f'|$.
\STATE Take a collection of alternating edges in the boundary of $f$. These are $|f|/2$ in number
and are all colored either $b$ or $g$.
\STATE Promote them to rank-3 edges by adding a vertex from $f'$ so
that the resulting hyperedges do not ``cross'' each other. In other words, the rank-3 edge is a
triangle and the triangles are disjoint. Two possible methods of inserting the rank-3 edges are
illustrated in Fig.~\ref{fig:insertRank3}. In the first method, the hyperedges can be inserted so that they are in the
boundary of the $g$ colored faces, see Fig.~\ref{fig:promotedFace-1}. Alternatively, the hyperedges can be inserted so that they are in
the boundary of the $b$ colored faces, see Fig.~\ref{fig:promotedFace-2}.
\STATE Color the rank-3 edge with the same color as the parent rank-2 edge.
\STATE Color the edges of $f'$ using colors distinct from the color of the rank-3 edges incident on
$f'$.
\ENDFOR
\STATE Denote the resulting hypergraph $\Gamma_h$ and use it to construct the
subsystem code as in Construction~\ref{proc:tsc-suchara}.
\end{algorithmic}
\end{algorithm}
\begin{figure}[htb]
\centering
\subfigure[A face $f$ in $\rm{F}$]{
\includegraphics{fig-unpromotedFace}
\label{fig:unpromotedFace}
}
\subfigure[Inserting rank-3 edges in $f$ by promoting the $b$-edges to rank-3 edges.]{
\includegraphics{fig-promotedFace-1}
\label{fig:promotedFace-1}
}
%
\subfigure[Inserting rank-3 edges in $f$ by promoting the $g$-edges to rank-3 edges.]{
\includegraphics{fig-promotedFace-2}
\label{fig:promotedFace-2}
}
\caption[Inserting rank-3 edges to convert $\Gamma_2$ to a hypergraph.]{%
(Color online) Inserting rank-3 edges in the faces of $\Gamma_2$ to obtain the hypergraph $\Gamma_h$. The rank-3 edges
correspond to triangles.}
\label{fig:insertRank3}
\end{figure}
\begin{theorem}[Subsystem codes from color codes]\label{th:tsc-new}
Construction~\ref{proc:tsc-new} gives hypergraphs which satisfy the constraints H1-4 and therefore
give rise to 2-local subsystem codes whose cycle group $\Sigma_{\Gamma_h} $ is defined as in Eq.~\eqref{eq:cycleGroup}
and gauge group is $\mc{G}=C(\mc{L}_{\Gamma_h})$.
\end{theorem}
\begin{proof}
Requirement H1 is satisfied because, by construction, only rank-3 hyperedges are added to $\Gamma_2$,
which contains only rank-2 edges.
The hypergraph has two types of vertices: those that come from $\Gamma_2$ and those that are
added due to introduction of the hyperedges. Since all hyperedges come by promoting an edge to a
hyperedge, it follows that the hypergraph is trivalent on the original vertices inherited from
$\Gamma_2$. By construction, the vertices in $V(\Gamma_h)\setminus V(\Gamma_2)$ are trivalent and
thus $\Gamma_h$ satisfies H2. Note that $|f|\equiv0 \bmod 4$ and $|f|>4$, therefore $f'$ can be assigned an edge coloring that ensures that $\Gamma_h$ is 3-edge colorable. Since $|f|>4$ we also ensure that no two edges intersect in more than one site, and H3 holds. By construction, all rank-3 edges are disjoint. This satisfies requirement H4. \end{proof}
Let us illustrate this construction using a small example.
It is based on the 2-colex shown in Fig.~\ref{fig:4-6-12lat}. The hypergraph derived
from this 2-colex is shown in Fig.~\ref{fig:4-6-12-hg}.
Its rate is nonzero.
\begin{center}
\begin{figure}[htb]
\includegraphics[scale=0.5,angle=90]{tsc4-6-12lat}
\caption{(Color online) Color code on a torus from a 4-6-12 lattice. Opposite sides are identified.}\label{fig:4-6-12lat}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[htb]
\includegraphics[scale=0.5,angle=90]{tsc4-6-12}
\caption{(Color online) Illustrating Construction~\ref{proc:tsc-new}.}\label{fig:4-6-12-hg}
\end{figure}
\end{center}
At this point, Theorem~\ref{th:tsc-new} is still quite general and we do not have expressions for
the code parameters in closed form. Neither is the structure of the stabilizer and the logical operators very apparent. We impose some constraints on the set $\rm{F}$ so that we
can remedy this situation. These restrictions still lead to a large class of subsystem codes.
\begin{compactenum}[(i)]
\item $\rm{F}=\rm{F}_c$ is the set of all the faces
of a given color; see Theorem~\ref{th:tsc-1}.
\item $\rm{F}$ is an alternating set and $\rm{F}_c$ and $\rm{F}\setminus \rm{F}_c$ form a bipartite graph (in a sense which will be made precise shortly); see Theorem~\ref{th:tsc-2}.
\end{compactenum}
Before we can evaluate the parameters of these codes, we need some additional results
with respect to the structure of the stabilizer and the centralizer of the gauge group.
The stabilizers vary depending on the set $\rm{F}$; nevertheless, we can make some general statements
about a subset of these stabilizers.
\begin{figure}[ht]
\centering
\subfigure[A hypercycle $\sigma_1$ in $f$ (shown in bold edges) consisting of only rank-2 edges.]{
\includegraphics{fig-v-face-cycle-1}
\label{fig:rank2Cycle}
}
\subfigure[A hypercycle $\sigma_2$ in $f$ (shown in bold edges) with both rank-2 and rank-3 edges.]{
\includegraphics{fig-v-face-cycle-2}
\label{fig:hyperCycle-1}
}
\subfigure[A dependent hypercycle
$\sigma_3$ which is a combination of $\sigma_1$ and $\sigma_2$ over $\mathbb{F}_2$.]{
\includegraphics{fig-v-face-cycle-3}
\label{fig:hyperCycle-2}
}
\caption{(Color online) Stabilizer generators from a face in $\rm{F}$ for the subsystem codes of Construction~\ref{proc:tsc-new}; one of them is dependent. We shall view $\sigma_1$
and $\sigma_2$ as the two independent hypercycles associated with $f$.}\label{fig:stabGen-v-face-1}
\end{figure}
\begin{lemma}\label{lm:stabGens-tsc-1}
Suppose that $f$ is a face in $\rm{F}$ in Construction~\ref{proc:tsc-new}.
Then there are two independent hypercycles that we can associate with this face and consequently
two independent stabilizer generators, as shown in Fig.~\ref{fig:stabGen-v-face-1}.
\end{lemma}
\begin{proof}
We use the same notation as in Construction~\ref{proc:tsc-new}, which adds a new face $f'$ to $\Gamma_2$ in the interior of $f$. Let $\sigma_1$ be the
cycle formed by the rank-2 edges in the boundary of $f'$, see Fig.~\ref{fig:rank2Cycle}.
By Corollary~\ref{co:rank2Cycle}, $W(\sigma_1) \in S $.
Now let $\sigma_2$, see Fig.~\ref{fig:hyperCycle-1}, be the hypercycle consisting of all the edges in the boundary of $f$ and
an alternating set of rank-2 edges in the boundary of $f'$. In other words, $\sigma_2$ consists of all the rank-3 edges inserted in $f$, the
rank-2 edges in the boundary of $f$, and an alternating set of rank-2 edges in $f'$. Because $|f|\equiv 0 \bmod 4$,
the boundary of $f'$ is 2-edge-colorable.
To prove that $W(\sigma_2)$ can be generated by the elements of
$\mc{G}$, observe that $W(\sigma_2)$ can be split as
\begin{eqnarray*}
W(\sigma_2) & = & \prod_{e\in \partial(f)} K_e \prod_{e\in \partial(f')\cap E_r} K_e,
\end{eqnarray*}
where $E_r$ refers to the $r$-colored edges in $\Gamma_h$ and the boundary is with respect to
$\Gamma_h$. We can also rewrite this in terms of the link operators
in $\overline{\Gamma}_h$.
\begin{eqnarray*}
W(\sigma_2) &= & \prod_{e\in \partial(f) } \overline{K}_e \prod_{e\in \partial(f') \cap \overline{E}_r}\overline{K}_e
\end{eqnarray*}
where the boundary is with respect to
$\overline{\Gamma}_h$ and $\overline{E}_r$ now refers to the $r$-colored edges in $\overline{\Gamma}_{h}$.
This is illustrated in Fig.~\ref{fig:hyperCycleDecompos}.
The third cycle $\sigma_3$, see Fig.~\ref{fig:hyperCycle-2}, can be easily seen to be a combination of the cycles $\sigma_1$ and $\sigma_2$ over $\mathbb{F}_2$.
\end{proof}
\begin{figure}[ht]
\centering
\subfigure[Decomposing the hypercycle $\sigma_1$.]{
\includegraphics{fig-rank2CycleDecompos}
\label{fig:rank2CycleDecompos}
}
\subfigure[Decomposing the hypercycle $\sigma_2$.]{
\includegraphics{fig-hyperCycleDecompos}
\label{fig:hyperCycleDecompos}
}
\caption{(Color online) Decomposing $\sigma_i$ in Fig.~\ref{fig:stabGen-v-face-1} so that $W(\sigma_i)$
can be generated using the elements of $\mc{G}$. In each of the above $W(\sigma_i)$
can be generated as the product of link operators corresponding to the bold edges.
Note that these decompositions are with respect to the link operators of the derived graph $\overline{\Gamma}_h$ while
the cycles are defined with respect to the hypergraph $\Gamma_h$.
}\label{fig:stabGen-v-face-decompose}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{fig-stabGen-f-face-1}
\caption{(Color online) A cycle $\sigma_1$ of rank-2 edges in the boundary of $f$, shown in bold, when $f$ has no rank-3 edges in its boundary. Some of the edges incident on $f$ may be rank-3, but none in the boundary are.}
\label{fig:stabGen-f-face-1}
\end{figure}
\begin{figure}[ht]
\subfigure[A cycle $\sigma_1$ of rank-2 edges in the boundary of $f$, shown in bold;
note that, unlike Fig.~\ref{fig:stabGen-f-face-1}, a rank-3 edge is incident on every vertex of $f$.]{
\includegraphics{fig-cycle-f-face-1}
\label{fig:cycle-f-face-1}
}
\subfigure[A cycle $\sigma_2$ of rank-2 and rank-3 edges, shown in bold; $\sigma_2$ differs from the cycle in Fig.~\ref{fig:hyperCycle-1}, in that the ``outer'' rank-2 edges may be either $r$ or $g$.]{
\includegraphics{fig-cycle-f-face-2}
\label{fig:cycle-f-face-2}
}
\subfigure[Decomposing the hypercycle $\sigma_2$ so that $W(\sigma_2)$
can be generated using the elements of $\mc{G}$. Note the decomposition refers to $\overline{\Gamma}_h$.]{
\includegraphics{fig-cycle-f-face-2-D}
\label{fig:cycle-f-face-2-D}
}
\caption{ (Color online) Stabilizer generators for a face which has no rank-3 edges in its boundary when $\rm{F}=\rm{F}_r $ and $f\not\in \rm{F}$.}
\label{fig:stabGen-f-face-2}
\end{figure}
\begin{lemma}\label{lm:stabGens-tsc-1-f}
Suppose that $f$ has no rank-3 edges in its boundary $\partial(f)$ as in Fig.~\ref{fig:stabGen-f-face-1}. Then $W(\partial(f))$ is in $S$.
Further, if $\rm{F}=\rm{F}_r$ and $f\not\in \rm{F}$, then we can associate another hypercycle $\sigma_2$ to $f$,
as in Fig.~\ref{fig:stabGen-f-face-2}, such that $W(\sigma_2)$ is in $S$.
\end{lemma}
\begin{proof}
If $f$ has no rank-3 edges in its boundary, then $W(\partial(f))$ is in $S$ by Corollary~\ref{co:rank2Cycle}. It is possible that some rank-3 edges are incident on $f$ even though they
are not in its boundary. This is illustrated in Fig.~\ref{fig:stabGen-f-face-1}.
If $\rm{F}=\rm{F}_r$, and $f\not\in \rm{F}$, then a rank-3 edge is incident on
every vertex of $f$ and we can form another cycle by considering all the rank-3 edges, and rank-2 edges
connecting all pairs of rank-3 edges, see Fig.~\ref{fig:cycle-f-face-2}. This includes an alternating set of edges in the boundary of $f$.
This is different from the hypercycle in Fig.~\ref{fig:hyperCycle-1} in that the ``outer'' rank-2
edges connecting the rank-3 edges may be of a different color.
Nonetheless, by an argument similar to that in the proof of Lemma~\ref{lm:stabGens-tsc-1},
and using the decomposition shown in Fig.~\ref{fig:cycle-f-face-2-D} we can show that
$W(\sigma_2)$ is in $S$.
\end{proof}
\begin{remark}(Canonical cycles.) For the faces which have two stabilizer generators associated
with them, we make the following canonical choice for the stabilizer generators.
The first basis cycle $\sigma_1$ always refers to the cycle consisting of the rank-2 edges forming the
boundary of a face. The second basis cycle for $f$ is chosen to be the cycle in which the rank-3 edges
are paired with an adjacent rank-3 edge such that both the rank-2 edges pairing them are of the same
color.
\end{remark}
The decomposition as illustrated in Fig.~\ref{fig:cycle-f-face-2-D} works even when the stabilizer
is for a face which is adjacent to itself.
Next we prove a bound on the distance of the codes obtained via Construction~\ref{proc:tsc-new}.
The distance is determined by the cycles in the space $\Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$. Recall that
$W(\sigma)\in S$ if $\sigma\in \Delta_{\Gamma_h}$.
\begin{lemma}(Bound on distance)\label{lm:tsc-distance}
The distance of the subsystem code obtained from Construction~\ref{proc:tsc-new} is upper bounded by
the number of rank-3 edges in the hypercycle with minimum number of rank-3 edges
in $\Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$.
\end{lemma}
\begin{proof}
Every undetectable error of the subsystem code can be written as $gW(\sigma)$ for some $g\in \mc{G}$
and $\sigma\in \Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$. It suffices therefore, to check by how much the weight of
$W(\sigma)$ can be reduced by acting with elements of $\mc{G}$.
In particular, we can reduce $W(\sigma)$ such that only the
rank-3 edges remain, and obtain an equivalent operator of lower weight.
We can further act on this so that corresponding to every rank-3 edge in $\sigma$
the modified error has support only on one of its vertices.
This reduced error operator has weight equal to the number of rank-3 edges in $\sigma$.
Thus the distance of the code is upper bounded by the number of rank-3 edges in the hypercycle with minimum number of rank-3 edges in $\Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$.
\end{proof}
It appears that this bound is tight, in that the distance is actually no less than the one specified
above.
\begin{theorem}\label{th:tsc-1}
Suppose that $\Gamma$ is a graph in which every vertex has even degree greater than 2. Construct the
2-colex $\Gamma_2$ from $\Gamma$ using Construction~\ref{proc:tcc-bombin}, and then apply Construction~
\ref{proc:tsc-new} with $\rm{F}$ being the set of $v$-faces of
$\Gamma_2$ and with the rank-3 edges being in the boundaries of the $e$-faces of $\Gamma_2$.
Let $\ell$ be the minimum number of rank-3 edges in a hypercycle in $\Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$.
Then we obtain a
\begin{eqnarray}
[[6e,1+\delta_{\Gamma^\ast,\text{bipartite}}-\chi, 4e-\chi, d \leq \ell]]\label{eq:tscParams-1}
\end{eqnarray} subsystem code
where $e=|E(\Gamma)| $ and $\delta_{\Gamma^\ast,\text{bipartite}}=1$ if $\Gamma^\ast$ is bipartite and zero otherwise.
\end{theorem}
\begin{proof}
Assume that $\Gamma$ has $v$ vertices, $f$ faces and
$e$ edges. Let us denote this by the tuple $(v,f,e)$, then $\chi=v+f-e$. On applying Construction~\ref{proc:tcc-bombin}, we obtain a $2$-colex, $\Gamma_2$ with the parameters $(4e, v+f+e, 6e)$.
When we apply Construction~\ref{proc:tsc-new} to $\Gamma_2$, the resulting hypergraph $\Gamma_h$ has
$2e$ new vertices added to it. Further, $2e$ edges are promoted to hyperedges, and as many new rank-2 edges
are created. Thus we have a hypergraph with $6e$ vertices, $2e$ hyperedges, and $6e$ rank-2 edges.
The important thing to note is that the dimension of the hypercycle space of $\Gamma_h$ is related to
$I_{\Gamma_{h}}$, the vertex-edge
incidence matrix of $\Gamma_h$. Let $E(\Gamma_h)$ denote the edges of $\Gamma_h$ including the
hyperedges.
Then
\begin{eqnarray}
\dim \mc{L}_{\Gamma_h} = |E(\Gamma_h)| - \rk_2(I_{\Gamma_h}), \label{eq:dimCycleSpace}
\end{eqnarray}
where $\rk_2$ denotes the binary rank, \cite{duke85}.
By Lemma~\ref{lm:rankH-1},
$
\rk_2(I_{\Gamma_{h}}) = |V(\Gamma_h)|-1-\delta_{\Gamma^\ast,\text{bipartite}}.
$
It now follows that
\begin{eqnarray*}
\dim \mc{L}_{\Gamma_h} &=& |E(\Gamma_h)| - |V(\Gamma_h)| +1+\delta_{\Gamma^\ast,\text{bipartite}}\\
&=& 8e - 6e+1+\delta_{\Gamma^\ast,\text{bipartite}}\\
&=& 2e+1+\delta_{\Gamma^\ast,\text{bipartite}}.
\end{eqnarray*}
By Lemmas~\ref{lm:stabGens-tsc-1}~and~\ref{lm:stabGens-tsc-1-f}, every $v$-face and $f$-face of $\Gamma_2$ leads to two hypercycles in $\Gamma_h$. These are $2v+2f$ in number.
However, of these
only $s=2v+2f-1-\delta_{\Gamma^\ast,\text{bipartite}}$ are independent hypercycles. The dependencies
are as given below:
\begin{eqnarray}
\prod_{f\in v\text{-faces}} W(\sigma_1^f)& = & \prod_{f\in f\text{-faces}} W(\sigma_2^f).\label{eq:vfaceDep0}
\end{eqnarray}
If $\Gamma^\ast$ is bipartite then we have the following additional dependency. Let $\Gamma$ be
face-colored black and white so that
$F(\Gamma) = F_1\cup F_2$, where $F_1$ and $F_2$ are the collections of black and white faces. Then
\begin{eqnarray}
\prod_{f\in f\text{-faces}} W(\sigma_1^f) \prod_{f\in F_1} W(\sigma_2^f) & = & \prod_{f\in v\text{-faces}} W(\sigma_2^f) \label{eq:vfaceDep1}\\
\prod_{f\in f\text{-faces}} W(\sigma_1^f) \prod_{f\in F_2} W(\sigma_2^f)&=& \prod_{f\in v\text{-faces}} W(\sigma_1^f)W(\sigma_2^f) \label{eq:vfaceDep2}
\end{eqnarray}
(Note that among equations~\eqref{eq:vfaceDep0}--\eqref{eq:vfaceDep2} only two are independent.)
All these are of trivial homology. There are no other independent cycles of trivial homology.
Furthermore, Lemma~\ref{lm:nontrivialCycleProp-1}~and~\ref{lm:nontrivialCycleProp-2} show that hypercycles of nontrivial homology are not in the gauge group. Thus all the remaining (nontrivial) hypercycles are not in the stabilizer.
We can now compute the number of encoded qubits as follows.
\begin{eqnarray*}
2k&=&\dim \mc{L}_{\Gamma_h}-s\\
& = & 2e+1+\delta_{\Gamma^\ast,\text{bipartite}} - (2v+2f-1-\delta_{\Gamma^\ast,\text{bipartite}})\\
& =& 2+2\delta_{\Gamma^\ast,\text{bipartite}}+2(e- v-f),
\end{eqnarray*}
which gives $k= 1+\delta_{\Gamma^\ast,\text{bipartite}}-\chi$ encoded qubits.
The number of gauge qubits $r$ can now be computed as follows:
\begin{eqnarray*}
r &=& n-k-s\\
& = & 6e - (1+\delta_{\Gamma^\ast,\text{bipartite}} -\chi) - (2v+2f-1-\delta_{\Gamma^\ast,\text{bipartite}})\\
& = & 6e-2v-2f+\chi = 4e-\chi.
\end{eqnarray*}
The bound on distance follows from Lemma~\ref{lm:tsc-distance}.
\end{proof}
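As a concrete illustration of these parameters (the particular lattice here is chosen by us purely as an example), let $\Gamma$ be the $L\times L$ square lattice on the torus with $L$ even: every vertex has degree 4 and $(v,f,e)=(L^2,L^2,2L^2)$, so that $\chi=0$. The dual $\Gamma^\ast$ is again a square lattice on the torus and is bipartite since $L$ is even, whence $\delta_{\Gamma^\ast,\text{bipartite}}=1$. Substituting into Eq.~\eqref{eq:tscParams-1} gives a $[[12L^2, 2, 8L^2, d\leq \ell]]$ subsystem code; as a check, $n=k+r+s$ holds with $s=2v+2f-1-\delta_{\Gamma^\ast,\text{bipartite}}=4L^2-2$.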
\begin{remark}
Note that there are no planar non-bipartite graphs $\Gamma^\ast$ which satisfy the constraint in
Theorem~\ref{th:tsc-1}.
\end{remark}
\begin{remark}
A variation on the above is possible, namely, adding the hyperedges in the
$f$-faces as opposed to the $v$-faces. This, however, does not lead to any new codes that are not constructible using Theorem~\ref{th:tsc-1}. Adding them in the $f$-faces is equivalent to applying Theorem~\ref{th:tsc-1}
to the dual of $\Gamma$.
\end{remark}
In Theorem~\ref{th:tsc-1}, when $\Gamma^\ast$ is bipartite, the subsystem codes coincide with those obtained from \cite{bombin10}.
However in this situation, a different choice of
$F$ in Construction~\ref{proc:tsc-new} gives another family of codes that differ from \cite{bombin10}
and Theorem~\ref{th:tsc-1}. These codes are considered next. But first we need an intermediate result
about the hypercycles in $\Delta_{\Gamma_h}$, that is, those that define the stabilizer. Some of them, such as those in Fig.~\ref{fig:stabGen-v-face-tsc-2-a}, are similar to those in Fig.~\ref{fig:stabGen-v-face-1}, but some, such as those in
Fig.~\ref{fig:stabGen-v-face-tsc-2-b}, are not.
\begin{figure}[htb]
\subfigure[A hypercycle $\sigma_1$ for a $v$-face in $F$.]{
\includegraphics{fig-rank2Cycle-th2}
\label{fig:rank2Cycle-th2}
}
\subfigure[A cycle $\sigma_2$ of rank-2 and rank-3 edges, shown in bold.]
{
\includegraphics{fig-rank3Cycle-th2}
\label{fig:rank3Cycle-th2}
}
\subfigure[Decomposing $\sigma_2$ so that $W(\sigma_2)$
can be generated using the elements of $\mc{G}$.]
{
\includegraphics{fig-rank3Cycle-decomp}
\label{fig:rank3Cycle-decomp}
}
\caption{(Color online) Stabilizer generators for a $v$-face in $\rm{F}$, for the subsystem codes in Theorem~\ref{th:tsc-2}. Also shown is the decomposition for $W(\sigma_2)$. The decomposition for $W(\sigma_1)$ is same as in Fig.~\ref{fig:rank2Cycle-th2}.}
\label{fig:stabGen-v-face-tsc-2-a}
\end{figure}
\begin{figure}[htb]
\includegraphics{fig-stabGen-v-face-tsc-2-b}
\caption{(Color online) Stabilizer generators for a $v$-face in $\rm{F}_r\setminus \rm{F}$, for the subsystem codes in Theorem~\ref{th:tsc-2}. i) $\sigma_1= \partial(f)$ (not shown) and ii) $\sigma_2$ (in bold) consists of the rank-3 edges of all the adjacent $f$-faces in $\rm{F}$ adjacent through an $e$-face and the rank-2 edges connecting them. The decomposition for $W(\sigma_2)$ is shown in Fig.~\ref{fig:stabGen-v-face-tsc-2-c}.} \label{fig:stabGen-v-face-tsc-2-b}
\end{figure}
Before we give the next construction, we briefly recall the definition of a medial graph. The medial
graph of a graph $\Gamma$ is obtained by placing a vertex on every edge of $\Gamma$ and adding an edge between
two vertices if and only if the associated edges in $\Gamma$ are incident on the same vertex.
We denote the medial graph of $\Gamma$ by $\Gamma_m$.
\begin{figure}[htb]
\includegraphics{fig-stabGen-v-face-tsc-2-c}
\caption{(Color online) Decomposition for $W(\sigma_2)$. The product of the link operators shown in bold edges gives $W(\sigma_2)$.
}
\label{fig:stabGen-v-face-tsc-2-c}
\end{figure}
\begin{theorem}\label{th:tsc-2}
Let $\Gamma$ be a graph whose vertices have even degrees greater than 2 and $\Gamma_m$ its medial graph. Construct the
2-colex $\Gamma_2$ from $\Gamma_m^\ast$ using Construction~\ref{proc:tcc-bombin}.
Since $\Gamma_m^\ast$ is bipartite, the set of $v$-faces of $\Gamma_2$, denoted $\rm{F}_r$, forms a bipartition $\rm{F}_v\cup \rm{F}_f$, where $|\rm{F}_v| = |V(\Gamma)|$.
Apply Construction~\ref{proc:tsc-new} with $\rm{F}=\rm{F}_v\subsetneq \rm{F}_r$, such that
the rank-3 edges are not in the boundaries of the $e$-faces of $\Gamma_2$.
Let $\ell$ be the minimum number of rank-3 edges in a hypercycle in $\Sigma_{\Gamma_h}\setminus \Delta_{\Gamma_h}$.
Then we obtain a
\begin{eqnarray}
[[10e,1-\chi+\delta_{\Gamma^\ast,\text{bipartite}}, 6e-\chi, d\leq\ell]]\label{eq:tscParams-2}
\end{eqnarray} subsystem code, where $e=|E(\Gamma)|$.
\end{theorem}
\begin{proof}
The proof is somewhat similar to that of Theorem~\ref{th:tsc-1}, but there are important differences.
Suppose that $\Gamma$ has $v$ vertices, $f$ faces and $e$ edges. Let us denote this as
the tuple $(v,f,e)$. The medial graph $\Gamma_m$ is 4-valent and has $e$ vertices, $v+f$ faces and $2e$
edges. The dual graph $\Gamma_m^\ast$ has the parameters $(v+f,e,2e)$. Furthermore, $\Gamma_m^\ast$ is bipartite. The 2-colex
$\Gamma_2$ has the parameters $(8e,v+f+3e, 12e)$. Of the $v+f+3e$ faces,
$v+f$ are $v$-type, $e$ are $f$-type and $2e$ are $e$-type.
The hypergraph has
$10e$ vertices because a new vertex is added for every pair of rank-2 edges incident on the $v$-faces
in $\rm{F}_v$. These incident edges are all of one color and account for a third of the edges of
$\Gamma_2$, i.e., $12e/3=4e$ edges.
Since a rank-3 edge is added at only one end of each such pair, $2e$
edges are promoted to rank-3 edges, and as many new vertices and
new rank-2 edges are added to form the hypergraph $\Gamma_h$.
By Lemma~\ref{lm:rankH-1}, the rank of the vertex-edge incidence matrix of $\Gamma_h$ is $|V(\Gamma_h)|-1-\delta_{\Gamma^\ast,\text{bipartite}} = 10e-1-\delta_{\Gamma^\ast,\text{bipartite}}$. The total number
of edges of $\Gamma_h$ is $14e$ including the rank-3 edges. Thus the rank of the cycle space of
$\Gamma_h$ is
\begin{eqnarray*}
\dim \mc{L}_{\Gamma_h} &= &14e-10e+1+\delta_{\Gamma^\ast,\text{bipartite}}\\
&=&4e+1+\delta_{\Gamma^\ast,\text{bipartite}}.
\end{eqnarray*}
The stabilizer generators of this code are somewhat different from those in Theorem~\ref{th:tsc-1}.
Recall that the $v$-faces form a bipartition, $\rm{F}_v\cup \rm{F}_f=\rm{F} \cup (\rm{F}_r\setminus \rm{F})$, where $|\rm{F}_v|=v$
and $|\rm{F}_f|=f$. We insert the rank-3 edges only in the faces in $\rm{F}$, and by Lemma~\ref{lm:stabGens-tsc-1} each of these faces leads to two stabilizer generators. These are illustrated in Fig.~\ref{fig:stabGen-v-face-tsc-2-a}. The remaining $v$-faces, namely those in $\rm{F}_r\setminus \rm{F}$, have no rank-3 edges
in their boundary. Therefore, by Lemma~\ref{lm:stabGens-tsc-1-f} there is a stabilizer generator
associated with the boundary of the face. The other generator associated to a face in $\rm{F}_r\setminus \rm{F}$ is slightly more complicated. It is illustrated in Fig.~\ref{fig:stabGen-v-face-tsc-2-b}. The idea behind decomposing it as an element of the gauge group is illustrated in
Fig.~\ref{fig:stabGen-v-face-tsc-2-c}.
Thus both types of $v$-faces of $\Gamma_2$ give rise to two stabilizer generators each.
Since these are $v+f$ in number, we have $2(v+f)$ generators due to them.
Each of the $e$-faces gives rise to one stabilizer generator, giving $2e$ more generators. Thus there are
$2(v+f)+2e$ in total. However, there are some dependencies.
\begin{eqnarray}
\prod_{f\in \rm{F}_v} W(\sigma_1^f)&=& \prod_{f\in e\text{-faces}} W(\sigma_1^f)\prod_{f\in \rm{F}_f} W(\sigma_1^f)W(\sigma_2^f)
\end{eqnarray}
When $\Gamma^\ast$ is bipartite, it induces a bipartition on the
$v$-faces in $\rm{F}_v=F_1\cup F_2$, as well as on the
$e$-faces, depending on whether the $e$-face is adjacent to a $v$-face in
$F_1$ or $F_2$. Denote this bipartition of the $e$-faces as $E_1\cup E_2$.
Then the following hold:
\begin{eqnarray*}
\prod_{f\in \rm{F}_v} W(\sigma_2^f) &=& \prod_{f\in E_1}W(\sigma_1^f)\prod_{f\in F_1} W(\sigma_2^f)
\prod_{f\in F_2} W(\sigma_1^f)\\
\prod_{f\in \rm{F}_v} W(\sigma_1^f) W(\sigma_2^f)&=& \prod_{f\in E_2} W(\sigma_1^f)\prod_{f\in F_1} W(\sigma_1^f)\prod_{f\in F_2} W(\sigma_2^f)
\end{eqnarray*}
Observe, though, that there is only one new dependency when $\Gamma^\ast$ is bipartite.
The $f$-faces do not give rise to any more independent
generators. Thus there are $s=2(v+f+e)-1-\delta_{\Gamma^\ast,\text{bipartite}}$ independent
cycles of trivial homology. The remaining cycles are of nontrivial homology. By Lemma~\ref{lm:nontrivialCycleProp-1}~and~\ref{lm:nontrivialCycleProp-2}, these cycles are not in the gauge group.
Therefore the number of encoded qubits is given by
\begin{eqnarray*}
2k& =& \dim \mc{L}_{\Gamma_h}- s\\
&=& 4e+1+\delta_{\Gamma^\ast,\text{bipartite}}- 2(v+f+e)+1+\delta_{\Gamma^\ast,\text{bipartite}}\\
&=& 2+2\delta_{\Gamma^\ast,\text{bipartite}}+2(e-v-f)\\
&=& 2+2\delta_{\Gamma^\ast,\text{bipartite}}- 2\chi
\end{eqnarray*}
Thus $k=1-\chi+\delta_{\Gamma^\ast,\text{bipartite}}$. It is now straightforward to compute the number of gauge qubits as
$r=n-k-s = 10e-(1+\delta_{\Gamma^\ast,\text{bipartite}}-\chi)-2(v+f+e)+1+\delta_{\Gamma^\ast,\text{bipartite}} = 6e-\chi$.
The bound on distance follows from Lemma~\ref{lm:tsc-distance}.
\end{proof}
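The parameter bookkeeping in the two constructions above can be verified mechanically. The following short script is an illustrative aid on our part (the function names and the test lattice are ours, not part of the constructions); it evaluates the tuples $(n,k,r,s)$ of Theorems~\ref{th:tsc-1}~and~\ref{th:tsc-2} and checks the identity $n=k+r+s$.

```python
# Illustrative consistency check (ours) for the code parameters of the two
# subsystem-code families; (v, f, e) are the vertex, face, and edge counts
# of the starting graph Gamma, and dual_bipartite flags whether its dual
# graph is bipartite.

def params_tsc1(v, f, e, dual_bipartite):
    """[[6e, 1 + delta - chi, 4e - chi]] with s = 2v + 2f - 1 - delta."""
    chi = v + f - e
    delta = 1 if dual_bipartite else 0
    n, k = 6 * e, 1 + delta - chi
    s = 2 * v + 2 * f - 1 - delta
    r = 4 * e - chi
    return n, k, r, s

def params_tsc2(v, f, e, dual_bipartite):
    """[[10e, 1 + delta - chi, 6e - chi]] with s = 2(v + f + e) - 1 - delta."""
    chi = v + f - e
    delta = 1 if dual_bipartite else 0
    n, k = 10 * e, 1 + delta - chi
    s = 2 * (v + f + e) - 1 - delta
    r = 6 * e - chi
    return n, k, r, s

# Square lattice on the torus with L even: (v, f, e) = (L^2, L^2, 2L^2),
# and the dual graph is bipartite.
L = 4
v = f = L * L
e = 2 * L * L
for fn in (params_tsc1, params_tsc2):
    n, k, r, s = fn(v, f, e, True)
    assert n == k + r + s  # physical qubits = logical + gauge + stabilizer
```

For the $L=4$ torus this yields the instances $(n,k,r)=(192,2,128)$ and $(320,2,192)$ for the two families respectively.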
Theorem~\ref{th:tsc-2} can be strengthened without having to go through a medial graph but rather
starting with an arbitrary graph $\Gamma$ and then constructing a 2-colex via Construction~\ref{proc:tcc-bombin}.
We now demonstrate that Construction~\ref{proc:tsc-new} gives rise to subsystem codes that
are different from those obtained in \cite{bombin10}.
\begin{lemma}\label{lm:bombinHypergraphProperty}
Suppose that we have a topological subsystem code obtained by Construction~\ref{proc:tsc-bombin}
from a 2-colex $\Gamma$. Then in the associated hypergraph shrinking
the hyperedges to a vertex gives a 6-valent graph and further replacing any multiple edges by a single edge gives us a 2-colex.
\end{lemma}
\begin{proof}
Construction~\ref{proc:tsc-bombin} adds a rank-3 edge in every face of $\Gamma^\ast$. On contracting
these rank-3 edges we end up with a graph whose vertices coincide with the faces of $\Gamma^\ast$.
Each of these vertices is now 6-valent and between any two adjacent vertices there are two edges. On
replacing these multiple edges by a single edge, we end up with a cubic graph. Observe that the
vertices of this graph are in one to one correspondence with the faces of $\Gamma^\ast$ while the edges
are also in one to one correspondence
with the edges of $\Gamma^\ast$. Further an edge is present only if two faces are adjacent. This is
precisely the definition of the dual graph. Therefore, the resulting graph is the same as $\Gamma$,
which is a 2-colex.
\end{proof}
\begin{theorem}
Construction~\ref{proc:tsc-new} results in codes which cannot be constructed using Construction~\ref{proc:tsc-bombin}. In particular, all the codes of Theorem~\ref{th:tsc-2} are distinct
from those of Construction~\ref{proc:tsc-bombin}, and the codes of Theorem~\ref{th:tsc-1} are distinct
when $\Gamma^\ast$ therein is non-bipartite.
\end{theorem}
\begin{proof}
Let us assume that Construction~\ref{proc:tsc-new} does not give us {\em any} new codes. Then every
code constructed using this method is already obtainable using
Construction~\ref{proc:tsc-bombin}. Lemma~\ref{lm:bombinHypergraphProperty} informs us that
contracting the rank-3 edges results in a 6-valent graph, which on replacing the multiple edges by
single edges gives us a 2-colex.
But note that if we applied the same procedure to a graph that is obtained from the proposed
construction, then we do not always satisfy this criterion. In particular, this is the case for the
subsystem codes of Theorem~\ref{th:tsc-2}. These codes do not give rise to a 6-valent lattice on
shrinking the rank-3 edges to a single vertex.
When we consider the codes of Theorem~\ref{th:tsc-1}, on contracting the rank-3 edges, we end up with
a 6-valent graph with double edges, and replacing them leads to a cubic graph. In order that these codes
do not arise from Construction~\ref{proc:tsc-bombin}, it is necessary that this cubic graph is not a
2-colex. If it were a 2-colex, then further reducing the $v$-faces of this graph should give us a
2-face-colorable graph.
But this reduction results in the graph we started out with, namely, $\Gamma^\ast$. Thus when
$\Gamma^\ast$ is non-bipartite, our codes are distinct from those in \cite{bombin10}.
\end{proof}
\begin{lemma}\label{lm:rankH-1}
The vertex-edge incidence matrices of the hypergraphs in Theorems~\ref{th:tsc-1}~and~\ref{th:tsc-2} have rank $|V(\Gamma_h)|-1-\delta_{\Gamma^\ast,\text{bipartite}}$.
\end{lemma}
\begin{proof}
We use the same notation as in Theorems~\ref{th:tsc-1}~and~\ref{th:tsc-2}. Denote the vertex-edge incidence matrix of
$\Gamma_2$ as $I_{\Gamma_{2}}$.
Depending on whether an edge in $\Gamma_2$ is promoted to
a hyperedge in $\Gamma_h$ we can distinguish two types of edges in $\Gamma_2$.
Suppose that the edges in $\{e_1,\ldots, e_l \}$ are not promoted while the edges in
$\{e_{l+1},\ldots, e_m \}$ are promoted.
\begin{eqnarray}
I_{\Gamma_2} = \kbordermatrix{
&e_1&\cdots&e_l&\vrule &e_{l+1}&\cdots& e_m \\
&i_{11} & \cdots & i_{1l}&\vrule &\cdot &\cdots&i_{1m}\\
& \vdots & \vdots & \ddots &\vrule& \vdots&\ddots&\vdots\\
& i_{n1} & \cdots &i_{nl }&\vrule&\cdot& \cdots& i_{nm}
}\label{eq:incidence2-colex}
\end{eqnarray}
The vertex-edge incidence matrix of $\Gamma_{h}$ is related to $I_{\Gamma_2}$ as follows:
\begin{eqnarray}
I_{\Gamma_h} &= &\kbordermatrix{
&e_1&\cdots&e_l&\vrule &e_{l+1}&\cdots& e_m &\vrule&e_{m+1}&\cdots & e_{q}\\
&i_{11} & \cdots & i_{1l}&\vrule &\cdot &\cdots&i_{1m}&\vrule&\\
& \vdots & \vdots & \ddots &\vrule& \vdots&\ddots&\vdots&\vrule&&\bf{0}\\
& i_{n1} & \cdots &i_{nl }&\vrule&\cdot& \cdots& i_{nm}&\vrule&\\
& & \bf{0} & &\vrule& &\bf{I}& &\vrule& &I_{\Gamma_h\setminus \Gamma_2}\\
}\nonumber\\
& = & \left[\begin{array}{ccc}\multicolumn{2}{c}{I_{\Gamma_2}}& 0 \\ 0 & I&I_{\Gamma_h\setminus \Gamma_2} \end{array}\right],\label{eq:incidenceHyper}
\end{eqnarray}
where $I_{\Gamma_h\setminus \Gamma_2}$ is the incidence matrix of the subgraph obtained by restricting
to the vertices $ V(\Gamma_h)\setminus V(\Gamma_2)$. We already know that $\rk_2(I_{{\Gamma}_2})$
is $|V(\Gamma_2)|-1$.
Suppose there is an additional linear dependence among the rows of $I_{\Gamma_h}$.
More precisely, let
\begin{eqnarray}
b=\sum_{v\in V(\Gamma_2)} a_v \delta(v) =
\sum_{v\in V(\Gamma_h)\setminus V(\Gamma_2)} a_v \delta(v),\label{eq:b}
\end{eqnarray}
where $\delta(v)$ is the vertex-edge incidence vector of $v$.
Then $b$ must have no support on the edges in $\{e_1, \ldots, e_l \}\cup\{e_{m+1},\ldots, e_q \}$. It must have support only on the rank-3 edges of $\Gamma_h$.
Every rank-3 edge has the property that it is incident on exactly one vertex $u\in V(\Gamma_h)\setminus V(\Gamma_2)$ and exactly two vertices
$v,w\in V(\Gamma_2)$. Thus if a rank-3 edge has nonzero support in $b$, then $a_u\neq0$ and either
$a_v\neq 0$ or $a_w\neq0$, but not both.
\begin{center}
\begin{figure}[htb]
\includegraphics{fig-dependency-hg}
\caption{ (Color online) If $b$ defined in Eq.~\eqref{eq:b} has support on one rank-3 edge of a $v$-face, then it has support on all the rank-3 edges of the $v$-face. Further, $\{a_{w_0}, a_{w_2}, a_{w_4}, \ldots \}\cup \{a_{v_1},a_{v_3},\ldots \}$ are all nonzero
or $\{a_{w_1}, a_{w_3}, \ldots \}\cup \{a_{v_0},a_{v_2},\ldots \}$ are all nonzero.}\label{fig:dependency-hg}
\end{figure}
\end{center}
Suppose that a vertex $u_0 \in V(\Gamma_h)\setminus V(\Gamma_2)$ is such that $a_{u_0}\neq 0$.
Then, because $b$ has no support on the edges in $\{e_{m+1},\ldots,e_{q} \}$,
all the rank-2 neighbors of $u_0$, that is, those connected to it by rank-2 edges, also satisfy $a_{u_i}\neq 0$. This implies that in a given $v$-face $f'$, all the vertices $u_i\in (V(\Gamma_h)\setminus V(\Gamma_2)) \cap f'$ satisfy $a_{u_i}\neq 0$. Further, only one of the rank-3 neighbors of $u_i$, namely $v_i$ or $w_i$, can have $a_{v_i}\neq 0 $ or $a_{w_i}\neq 0$, but not both. Additionally, pairs of these vertices must be adjacent, as $b$ has no support on the rank-2 edges.
Thus either $\{a_{w_0}, a_{w_2}, a_{w_4}, \ldots \}\cup \{a_{v_1},a_{v_3},\ldots \}$ are all nonzero
or $\{a_{w_1}, a_{w_3}, \ldots \}\cup \{a_{v_0},a_{v_2},\ldots \}$ are all nonzero.
Alternatively, we can say only the vertices in the support of an alternating set of rank-2 edges in the
boundary of the face can have nonzero $a_v$ in $b$. Consequently these vertices belong to an alternating
set of $f$-faces in the boundary of $f$.
Consider now the construction in Theorem~\ref{th:tsc-1}; in this case rank-3 edges
are in the boundary of every $v$-face and $e$-face of $\Gamma_2$. Further, they are all connected.
Consider two adjacent $v$-faces as shown in Fig.~\ref{fig:dependency-hg-th1}.
\begin{figure}[htb]
\includegraphics{fig-dependency-hg-th1}
\caption{(Color online) For the hypergraph in Theorem~\ref{th:tsc-1}, if $b$ has support on one rank-3 edge, then it has support on all rank-3 edges in
$\Gamma_h$. }\label{fig:dependency-hg-th1}
\end{figure}
If $ a_p\neq 0$, it implies that $a_r=0=a_s$ and $a_q\neq 0$.
If the rank-3 edge $e_j$ has support in $b$, then all the rank-3 edges incident on $f_2$ must also be present.
Since all the $v$-faces are connected, $b$ has support on all the rank-3 edges.
Also note that the $f$-face $f_3$ has vertices in its boundary which are in the support of $b$.
In order that no edge from its boundary is in the support of $b$, all the vertices in its boundary
must be such that $a_v\neq 0$. The opposite holds for the vertices in $f_4$: none of these
vertices can have $a_v\neq 0$. Thus the $f$-faces are partitioned into two types, and a consistent
assignment of $a_v$ is possible if and only if the $f$-faces form a bipartition. In other words,
$\Gamma^\ast$ is bipartite. Thus the additional linear dependency exists only when $\Gamma^\ast$
is bipartite.
Let us now consider the graph in Theorem~\ref{th:tsc-2}. In this case $\rm{F}$ and $\rm{F}_r\setminus \rm{F}$ form
a bipartition, and only the
$v$-faces in $\rm{F}$ have the rank-3 edges in their boundary. Consider two adjacent $v$-faces of
$\Gamma_2$, $f_1\in \rm{F}_r\setminus \rm{F}$ and $f_2 \in \rm{F}$, as shown in Fig.~\ref{fig:dependency-hg-th2}.
\begin{figure}[htb]
\includegraphics{fig-dependency-hg-th2}
\caption{(Color online) For the hypergraph in Theorem~\ref{th:tsc-2}, if $b$ has support on one rank-3 edge, then it has support on all rank-3 edges in
$\Gamma_h$. }\label{fig:dependency-hg-th2}
\end{figure}
In this case $a_p=a_q=a_r=a_s$. So either all the vertices of $f_1$ are present
or none at all. This creates a bipartition of the $v$-faces which do not have rank-3 edges in their
boundary. Thus a consistent assignment of $a_v$ is possible if and only if the rest of the
$v$-faces, those in $\rm{F}_r\setminus \rm{F}$, form a bipartition.
Since these arise from the faces of $\Gamma$, this means that
an additional linear dependency exists if and only if $\Gamma^\ast$ is bipartite.
\end{proof}
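The binary ranks appearing in Lemma~\ref{lm:rankH-1} and the dimension formula of Eq.~\eqref{eq:dimCycleSpace} are easy to check numerically for small instances. Below is a minimal sketch on our part (the bitmask encoding of incidence rows is an arbitrary choice; a hyperedge column is handled identically and simply has three nonzero entries):

```python
# Sketch (ours): the cycle space of a (hyper)graph has dimension
# |E| - rank_2(I), where I is the vertex-edge incidence matrix over F_2.

def gf2_rank(rows):
    """Rank over F_2; each row of the incidence matrix is an int bitmask."""
    basis = []
    for row in rows:
        for b in basis:               # reduce the row against the current basis
            row = min(row, row ^ b)
        if row:                       # a nonzero residue extends the basis
            basis.append(row)
    return len(basis)

def cycle_space_dim(rows, num_edges):
    """dim of the cycle space = |E| - rank_2(incidence matrix)."""
    return num_edges - gf2_rank(rows)

# Triangle K3 with edge order (01, 12, 20); bit i of a row marks edge i.
K3 = [0b101,  # vertex 0 lies on edges 01 and 20
      0b011,  # vertex 1 lies on edges 01 and 12
      0b110]  # vertex 2 lies on edges 12 and 20
assert gf2_rank(K3) == 2              # |V| - 1 for a connected graph
assert cycle_space_dim(K3, 3) == 1    # the single triangle cycle
```

The rank of $|V|-1$ for the connected triangle matches the generic case of Lemma~\ref{lm:rankH-1}; the extra dependency for bipartite $\Gamma^\ast$ would show up as one additional drop in rank.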
\begin{lemma}\label{lm:nontrivialCycleProp-1}
Suppose that $\sigma$ is a homologically nontrivial hypercycle of $\Gamma_h$ in Theorem~\ref{th:tsc-1}~or~\ref{th:tsc-2}.
Then $\sigma$ must contain some rank-3 edge(s).
\end{lemma}
\begin{proof}
We use the notation as in Construction~\ref{proc:tsc-new}.
We can assume that such a cycle does not contain a vertex from $V(\Gamma_h)\setminus V(\Gamma_2)$.
If such a vertex is part of the hypercycle, then all the vertices that belong to that $v$-face
are also part of it, and there exists another cycle $\sigma'$ which consists of rank-2 edges and is not
incident on the vertices in $V(\Gamma_h)\setminus V(\Gamma_2)$.
Suppose on the contrary that $\sigma$ contains only rank-2 edges of $\Gamma_h$. In the hypergraphs
of Theorem~\ref{th:tsc-1}, every vertex in $\Gamma_h$ has one rank-3 edge incident on it; further, each
vertex of $\Gamma_h$ is trivalent and 3-edge-colorable with the rank-3 edges all colored the same. Therefore,
$\sigma$ consists of rank-2 edges which are alternating in color. Every vertex is in the boundary of some $f$-face of $\Gamma_2$, say $\Delta$. Note that an $f$-face does not have any rank-3 edge in its boundary although such an edge is incident on its vertices. This implies that $\sigma$ is the boundary of $\Delta$ and therefore a homologically trivial cycle, in contradiction to our assumption. Therefore, $\sigma$ must contain some rank-3 edges. This proves the statement for the graphs in Theorem~\ref{th:tsc-1}.
Suppose now that $\sigma$ is a cycle in the hypergraphs from Theorem~\ref{th:tsc-2}.
Assume that there is a vertex
in $\sigma$ that is in a $v$-face which has rank-3 edges in its boundary. Such a rank-3 edge is incident
on two vertices $u$, $v$ for which the rank-3 edges are in the boundary while the rank-2 edges are
outgoing and form the boundary of the 4-sided $e$-face incident on $u$, $v$. Therefore, the hypercycle
$\sigma$ can be modified so that it is not incident on any $v$-face which has a rank-3 edge in its boundary. This implies that from the $e$-faces only those edges are present in $\sigma$ that are in the
boundary of an $e$-face and a $v$-face that has no rank-3 edges in its boundary. Such an edge is also colored the
same as the $f$-faces in $\Gamma_2$. Further, $\sigma$ cannot have any edges that are
of the same color as the $v$-faces. Thus $\sigma$ must consist of the edges that are colored $b$ and
$g$, the colors of the $f$-faces and $e$-faces respectively. But this implies that $\sigma$ is the
union of the boundaries of $v$-faces, because only if there are edges of $r$-type can it leave the
boundary of a $v$-face. This contradicts the assumption that $\sigma$ is homologically nontrivial.
\end{proof}
\begin{lemma}\label{lm:nontrivialCycleProp-2}
Suppose that $\sigma$ is a homologically nontrivial hypercycle of $\Gamma_h$ in Theorem~\ref{th:tsc-1}~or~\ref{th:tsc-2}. Then $W(\sigma)$ is not in the gauge group.
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $\sigma$ has a minimal number of rank-3 edges in it.
If not, we can compose it with another cycle in $\Delta_{\Gamma_h}$ to obtain a cycle $\sigma'$ with fewer rank-3 edges. Note
that $W(\sigma) \in \mc{G}$ if and only if $W(\sigma')\in \mc{G}$.
Assume now that $W(\sigma)$ is in the gauge group. Let $E_2$ be the set of rank-2 edges and
$E_3$ be the set of rank-3 edges in $\Gamma_h$.
\begin{eqnarray*}
W(\sigma) =\prod_{e\in E_2\cap \sigma}K_e\prod_{e\in E_3\cap\sigma} K_e
\end{eqnarray*}
The edges in $E_2\cap \sigma $ are also edges in $\overline{\Gamma}_h$ and the associated link
operators are the same. Therefore, it implies that the $Z$-only operator
$ O_{\sigma} = \prod_{e\in E_3\cap\sigma} K_e$
is generated by the gauge group consisting of operators of the form $\{ X\otimes X, Y\otimes Y, Z\otimes Z\}$.
The operator $O_\sigma$ consists of (disjoint) rank-3 edges alone and therefore,
for any edge $(u,v,w)$ in the support of $O_\sigma$,
for each of the qubits $u,v,w$, one of the following must be true:
\begin{compactenum}[(i)]
\item Exactly one of the operators $Z_uZ_v$,
$Z_vZ_w$, $Z_wZ_u$ is required to generate $Z_iZ_j$ on a pair of the qubits, where
$i,j\in \{u,v,w\}$. The $Z$ operator on the
remaining qubit is generated by gauge generators of the form $X_iX_j$ and $Y_iY_k$, where $i$ is one of
$\{u,v,w\}$.
\item The support on all the qubits is generated by
$X_iX_j$ and $Y_iY_k$, where $i$ is one of $\{u,v,w\}$.
\end{compactenum}
For a qubit not in the support of $O_{\sigma}$, either no generator acts on it or all the three
gauge operators $X_uX_i$, $Y_u Y_j$, and $Z_u Z_v$ act on it. In the latter case, it follows
that $u,v$ must be in the support of the same rank-3 edge and that $v$ is also not in the support of
$O_\sigma$.
Suppose that we can generate $O_{\sigma}$ as follows:
\begin{eqnarray*}
O_{\sigma} &= & K^{(x,y)} K^{(z)},
\end{eqnarray*}
where $K^{(x,y)}$ consists of only operators of the form $X\otimes X, Y\otimes Y$ and $K^{(z)}$
only of operators of the form $Z\otimes Z$.
From Lemma~\ref{lm:nontrivialCycleProp-1}, we see that $O_{\sigma} K^{(z)}$
must be homologically trivial. The rank-3 edges incident on the
support of $O_{\sigma} K^{(z)}$ are either in the support of $\sigma$ or not.
\begin{figure}[htb]
\includegraphics{fig-cycle-expansion}
\caption{A rank-3 edge which is not in the support of $O_\sigma$. The solid edges indicate link operators which are in the support of $K^{(x,y)}K^{(z)}$, while the dashed edges indicate those which are not.
The edge must occur in two cycles, one which encloses $f_a$, and another which encloses $f_b$. If the same
cycle encloses both $f_a$ and $f_b$, then the edge occurs twice in that cycle. If we consider the
stabilizer associated with these cycles then it has no support on this edge.
} \label{fig:nonOccuring}
\end{figure}
A rank-3 edge $e$ which is not in
$\sigma$ must be such that exactly two vertices from $e$ occur in the support of $O_{\sigma}K^{(z)}$.
There are two faces $f_a$ and $f_b$ associated \footnote{The two faces $f_a$ and $f_b$ could be the same face. In this case the associated cycle contains both the vertices.} with these two vertices, see Fig~\ref{fig:nonOccuring}.
There is a hypercycle that encloses $f_a$ whose support contains $e$ and another that encloses $f_b$
and whose support contains $e$.
The product of these two stabilizer elements has no support on $e$ but has support on the
edges in $O_\sigma$. We can therefore find an appropriate combination of such elements, associated with the
trivial cycles in the support of $K^{(x,y)}$, such that $\sigma$ has fewer rank-3 edges. But this contradicts the minimality of the number of rank-3 edges in $\sigma$. Therefore, it is not possible to
generate $W(\sigma)$ within the gauge group if $\sigma$ is homologically nontrivial.
\end{proof}
{
\section{Syndrome measurement in topological subsystem codes}\label{sec:decoding}
One of the motivations for subsystem codes is the possibility of simpler recovery schemes. In this
section, we show how the many-body stabilizer generators can be measured using only two-body
measurements. This could help lower the errors due to measurement and relax the error tolerance
required of the measurements.
The proposed topological subsystem codes are not CSS-type unlike the Bacon-Shor codes. In CSS-type
subsystem codes, the measurement of check operators is somewhat simpler than in the present case.
The check operators are either $X$-type or $Z$-type. Suppose that we measure the $X$-type check
operators first. We can simply measure all the $X$-type gauge generators and then combine the outputs
classically to obtain the necessary stabilizer outcome.
When we subsequently measure the $Z$-type check operators, we measure the $Z$-type gauge operators and once again
combine the outcomes classically. This time, because the $Z$-type gauge operators
anti-commute with some of the $X$-type gauge operators, there is uncertainty in the individual $Z$-type
observables. Nonetheless, because the $Z$-type check operator commutes with the $X$-type
gauge generators, it can still be measured without inconsistency.
When we deal with non-CSS-type subsystem codes, the situation is not so simple. We need to find
an appropriate decomposition of the stabilizers in terms of the gauge generators so that the individual
gauge outcomes can be combined consistently. So it must be demonstrated that the syndrome measurement can be performed by measuring the gauge generators and that a schedule exists for all the stabilizer generators.
A condition that ensures that a certain decomposition of the stabilizer in terms of the
gauge generators is consistent was shown in \cite{suchara10}.
\renewcommand{\thetheorem}{\Alph{theorem}}
\setcounter{theorem}{3}
\begin{theorem}[Syndrome measurement \cite{suchara10}]\label{lm:stabDecomp}
Suppose we have a decomposition of a check operator $S$ as an ordered product of link operators $K_i$
such that
\begin{eqnarray}
S = K_m\cdots K_2 K_1 \mbox{ where } K_j \mbox{ is the link operator } K_{e_j}\\
\left[K_j, K_{j-1}\cdots K_1 \right] = 0 \mbox{ for all } j = 2,\cdots, m.
\end{eqnarray}
Let $s\in \mathbb{F}_2$ be the outcome of measuring $S$. Then to measure $S$, measure the link operators $K_i$ for $i=1$ to $m$
and compute $s=\oplus_{i=1}^{m} g_i$, where $g_i\in \mathbb{F}_2$ is the outcome of measuring $K_i$.
\end{theorem}
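As an aside, the commutation condition of Theorem~\ref{lm:stabDecomp} is mechanical to check in the binary symplectic representation of Pauli operators, in which two Paulis commute iff their symplectic inner product vanishes mod 2. The following sketch is purely illustrative (the three link operators in the usage example are invented, not taken from one of our lattices):

```python
import numpy as np

def commute(p, q):
    # Paulis as pairs of binary vectors (x, z); two Paulis commute iff the
    # symplectic inner product x1.z2 + z1.x2 vanishes mod 2.
    (x1, z1), (x2, z2) = p, q
    return (np.dot(x1, z2) + np.dot(z1, x2)) % 2 == 0

def product(p, q):
    # Product of two Paulis, ignoring phases (sufficient for commutation checks).
    (x1, z1), (x2, z2) = p, q
    return ((x1 + x2) % 2, (z1 + z2) % 2)

def valid_schedule(links):
    # Condition of Theorem A: each K_j must commute with K_{j-1}...K_1.
    acc = links[0]
    for K in links[1:]:
        if not commute(K, acc):
            return False
        acc = product(K, acc)
    return True

def stabilizer_outcome(gauge_outcomes):
    # s = XOR of the individual gauge-operator outcomes g_i.
    s = 0
    for g in gauge_outcomes:
        s ^= g
    return s
```

For instance, on three qubits the sequence $X_1X_2$, $X_2X_3$, $Z_1Z_3$ is a valid schedule, while measuring $X_2X_3$ after $Z_1Z_2$ is not, since the two anti-commute.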
\renewcommand{\thetheorem}{\arabic{theorem}}
\setcounter{theorem}{8}
\begin{theorem}\label{th:syndrome}
The syndrome measurement of the subsystem codes in Theorems~\ref{th:tsc-1}~and~\ref{th:tsc-2} can be performed in three rounds using the following procedure, using the decompositions given in Fig.~\ref{fig:stabGen-v-face-decompose},~\ref{fig:stabGen-f-face-2}, for Theorem~\ref{th:tsc-1} and
Fig.~\ref{fig:stabGen-v-face-tsc-2-a},~\ref{fig:stabGen-v-face-tsc-2-c} for Theorem~\ref{th:tsc-2}.
\begin{compactenum}[(i)]
\item Let a stabilizer generator $W(\sigma) =\prod_{i} K_i \in S$ be decomposed as follows
\begin{eqnarray}
W(\sigma)=\prod_{i\in E_r}K_i \prod_{j\in E_g} K_j \prod_{k\in E_b} K_k = S_b S_g S_r
\end{eqnarray} where $K_i$ is a link operator and $E_r$, $E_g$, $E_b$ are the sets of edges coloured $r$, $g$, $b$ respectively.
\item In each round, measure the gauge operators corresponding to the edges of one color.
\item Combine the outcomes as per the decomposition of $W(\sigma)$.
\end{compactenum}
\end{theorem}
\begin{proof}
In the subsystem codes of Theorem~\ref{th:tsc-1}, there are two stabilizer generators associated with
the $v$-face and $f$-face. Those associated with the $v$-face are shown in
Fig.~\ref{fig:stabGen-v-face-decompose}. Consider the first type of stabilizer generator
$W(\sigma_1)$. Clearly, $W(\sigma_1)$ consists of two kinds of link operators, $r$ type and
$g$ type. The link operators corresponding to the $r$-type edges are all disjoint and can therefore be measured in one round. In the second round, we can measure the link operators corresponding to
$g$-type edges. Since this is an even cycle we clearly have
$[S_g,S_r]=0$. Note that $E_b=\emptyset$ because there are no $b$-edges in $\sigma_1$.
A similar reasoning holds for the generator $W(\sigma_1)$ shown in Fig.~\ref{fig:stabGen-f-face-2}
corresponding to an $f$-face.
For the second type of the stabilizer generators $W(\sigma_2)$, observe that as illustrated in
Fig.~\ref{fig:stabGen-v-face-decompose}, the $r$-edges are disjoint with the ``outer'' $b$ and $g$-edges
and can be measured in the first round. The ``outer'' $g$-edges being disjoint from the $r$-edges,
we satisfy the condition $[S_g,S_r]=0$. In the last round, when we measure the $b$-edges, since the
$b$-edges and $g$-edges overlap an even number of times and being disjoint with the $r$-edges
we have $[S_b,S_gS_r]=0$. Thus by Theorem~\ref{lm:stabDecomp}, this generator can be measured
by measuring the gauge operators.
The same reasoning can be used to measure $W(\sigma_2)$ corresponding to the $f$-faces, but with
one difference. The outer edges are not all of the same color; however, this does not pose a problem
because in this case as well we can easily verify that $[S_g,S_r]=0$, since they are
disjoint. Although the $b$-edges overlap with both the $r$ and $g$-edges note that each of them
individually commutes with $S_gS_r$ because they overlap exactly twice. Thus $[S_b,S_gS_r]=0$
as well and we can measure $W(\sigma_2)$ through the gauge operators.
Syndrome measurements of two disjoint stabilizers obviously do not interfere with each other. However,
even when two generators have overlapping support, they do not interfere, as demonstrated below.
Note that every vertex of $\Gamma_h$ in Theorem~\ref{th:tsc-1} has a rank-3 edge incident on it.
As illustrated in Fig.~\ref{fig:nonIntereference}, edges which are not shared are essentially the rank-3 edges and each one of them figures in only one of the stabilizer generators, but because they
all commute they can be measured in the same round. The $r$ and $g$ edges are shared and
appear in the support of the stabilizer generators of two adjacent faces. Nonetheless, because edges of
each color are disjoint they can be measured simultaneously. As has already been demonstrated the
edges of each color are such that for each stabilizer generator $[S_g,S_r]=0$ and $[S_b,S_gS_r]=0$.
\begin{figure}[htb]
\includegraphics{fig-nonInterference}
\caption{Noninterference of syndrome measurement. The faces $f_a$, $f_b$, $f_c$ have stabilizer generators that have overlapping support. The edges are labelled with the round in which they are
measured, the subscripts indicate the faces with which the edge is associated. Thus $3_a$ indicates that this edge should be measured in the third round and it is used in the stabilizer generator $W(\sigma_2)$ of the face $f_a$. } \label{fig:nonIntereference}
\end{figure}
A similar argument can be made for the codes in Theorem~\ref{th:tsc-2}, the proof is omitted.
\end{proof}
The argument above shows that the subsystem codes of Theorems~\ref{th:tsc-1}~and~\ref{th:tsc-2} can be measured in three rounds using the same procedure outlined in Theorem~\ref{th:syndrome} if we assume that a single qubit can be involved in two distinct measurements
simultaneously. If this is not possible, then we need four time steps to measure all the checks.
The additional time step is due to the fact that a rank-3 edge results in three link operators.
However only two of these are independent and they overlap on a single qubit. To measure both operators, we need two
time steps. Thus the overall time complexity is no more than four time steps. This is in contrast
to the schedule in \cite{bombin10}, which takes up to six time steps.
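Operationally, the three-round schedule amounts to grouping the link operators by edge color and checking that operators measured in the same round are vertex-disjoint, so that they act on disjoint sets of qubits. A small illustrative sketch (the edge lists in the example are invented for illustration, not taken from a particular lattice):

```python
from collections import defaultdict
from itertools import combinations

def rounds_by_color(edges):
    # edges: iterable of (color, vertex_set); one measurement round per color.
    rounds = defaultdict(list)
    for color, verts in edges:
        rounds[color].append(frozenset(verts))
    return dict(rounds)

def simultaneously_measurable(rounds):
    # Link operators scheduled in the same round must act on
    # pairwise disjoint sets of vertices.
    return all(a.isdisjoint(b)
               for group in rounds.values()
               for a, b in combinations(group, 2))
```

For an even cycle whose edges alternate between two colors, each color class is a perfect matching of the cycle, so both rounds pass the disjointness check.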
\section{Conclusion and Discussion}\label{sec:summary}
\subsection{Significance of proposed constructions}
To appreciate the usefulness of our results, it is helpful to understand Theorem~\ref{th:suchara-Const}
in more detail.
First of all, consider the complexity of finding hypergraphs which satisfy the requirements therein.
Determining whether a cubic graph is 3-edge-colorable is known to be NP-complete \cite{holyer81}.
Thus determining whether a 3-valent hypergraph is 3-edge-colorable is at least as hard. In view of the
hardness of this problem, the usefulness of our results becomes apparent.
One such family of codes is due to \cite{bombin10}. In this paper we provide new families of subsystem codes.
Although they are also derived from color codes, they lead to subsystem codes with different parameters. With respect to the results of \cite{suchara10}, our constructions play a role similar to the one that a specific construction of quantum codes, say Kitaev's topological codes, plays with respect to the general CSS construction.
Secondly, the parameters of the subsystem code constructed using Theorem~\ref{th:suchara-Const} depend on
the graph and the embedding of the graph. They are not immediately available in closed form for all
hypergraphs. We give two specific families of hypergraphs where the parameters can be given in closed form. In addition our class of hypergraphs naturally includes the hypergraphs arising in Bombin's
construction.
Thirdly, Theorem~\ref{th:suchara-Const}
does not distinguish between the case when the stabilizer is local and when the stabilizer
is non-local. Let us elaborate on this point. The subsystem code on the honeycomb lattice, for
instance, can be viewed as a hypergraph albeit with no edges of rank-3. In the associated subsystem
code the stabilizer can have support over a cycle which is nontrivial homologically.
In fact, we can even provide examples of subsystem codes derived from true hypergraphs, in that there
exist edges of rank greater than two, whose stabilizer can have elements associated to nontrivial
cycles of the surface. Consider, for instance, the 2-colex shown in Fig.~\ref{fig:4-8-colex}.
The hypergraph derived from this 2-colex is shown in Fig.~\ref{fig:4-8-hg}. This particular code has a nonzero rate even though its stabilizer includes cycles that are homologically nontrivial.
\begin{figure}[htb]
\subfigure[2-colex.]{
\includegraphics{fig-4-8-colex}
\label{fig:4-8-colex}
}
\subfigure[Subsystem code.]{
\includegraphics{fig-4-8-hg}
\label{fig:4-8-hg}
}
\caption{(Color online) A subsystem code in which some of the stabilizer generators are nonlocal. This is derived from the color code on a torus from a square octagon lattice. Opposite sides are identified. }\label{fig:4-8-colex-2-hg}
\end{figure}
In contrast, the subsystem codes proposed by Bombin all have local stabilizers. It can be conceded that the locality of the stabilizer simplifies decoding for stabilizer codes. But this is not necessarily a restriction for subsystem codes.
A case in point is the family of Bacon-Shor codes, which have non-local stabilizer generators.
It would be important to know what effect the non-locality of the stabilizer generators has on the threshold.
Although we do not provide a criterion as to when the subsystem codes are topological in the sense
of having a local stabilizer, our constructions provide a partial answer in this direction. It would
be certainly more useful to have this criterion for all the codes of Theorem~\ref{th:suchara-Const}.
Not every cubic graph allows us to define a subsystem code. This is possible only if the graph satisfies
the commutation relations, namely Eq.~\eqref{eq:commuteRelns}. As pointed out in
\cite{suchara10}, the bipartiteness of the graph plays a role. The
Petersen graph, being a cubic graph with no hyperedges, satisfies H1--4.
But it does not admit a subsystem code because there is no consistent
assignment of colours that enables the definition of the gauge group. In other words, we cannot assign
the link operators such that Eq.~\eqref{eq:commuteRelns} is satisfied. We therefore add the
3-edge-colorability requirement to the hypergraph construction of Suchara et al.~\cite{suchara10}.
\begin{center}
\begin{figure}[htb]
\includegraphics{fig-peterson}
\caption{The Petersen graph although cubic and satisfying H1--4, does not lead to a subsystem code via
Construction~\ref{proc:tsc-suchara}; it is not 3-edge colorable.}
\end{figure}
\end{center}
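For small graphs the 3-edge-colorability obstruction can be verified directly by exhaustive search. The sketch below is a naive backtracking check (illustrative only; as noted above, the general problem is NP-complete), confirming that the Petersen graph admits no proper 3-edge-coloring while, for instance, $K_4$ does:

```python
def three_edge_colorable(edges):
    # Backtracking search for a proper 3-edge-coloring: edges sharing a
    # vertex must receive distinct colors.
    adj = [[i for i, f in enumerate(edges)
            if i != j and set(f) & set(e)]
           for j, e in enumerate(edges)]
    colors = [None] * len(edges)

    def assign(j):
        if j == len(edges):
            return True
        for c in range(3):
            if all(colors[k] != c for k in adj[j]):
                colors[j] = c
                if assign(j + 1):
                    return True
        colors[j] = None
        return False

    return assign(0)

# Petersen graph: outer 5-cycle (0..4), spokes, inner pentagram (5..9).
PETERSEN = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
            (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),
            (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]
```

The same check run on the hypergraph skeletons of our constructions would succeed, since the colorings there are supplied by the 2-colex structure.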
Fig.~\ref{fig:contrib} illustrates our contributions in relation to previous work.
\begin{center}
\begin{figure}[htb]
\includegraphics{fig-contrib}
\caption{Proposed constructions in context. Note that some of the hypergraph based subsystem codes may have homologically nontrivial stabilizer generators
}\label{fig:contrib}
\end{figure}
\end{center}
\begin{acknowledgments}
This work was supported by the Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity through Department of Interior contract D11PC20167. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.
\end{acknowledgments}
\section{Introduction}
Extensive
experimental work has recently been aimed towards electrostatically
defining and controlling semiconductor quantum dots~\cite{kouwenhoven-report-1,prb-ciorga:16315,prb-elzerman:161308,prl-petta:186802,prb-pioro-ladriere:125307}.
These efforts are impelled by proposals
for using localized electron spin~\cite{pra-loss:120} or charge
states~\cite{jjap-vanderwiel:2100}, respectively, as qubits, the elementary
registers of the hypothetical quantum computer.
The complete control of the
quantum dot charge down to the limit of only one trapped
conduction band electron was demonstrated by monitoring single electron tunneling (SET) current
through the device as well as by a nearby charge
detector~\cite{prb-ciorga:16315, prl-field:1311, prl-sprinzak:176805}.
In this article, we present data on an electron droplet in which the charge can
be controlled all the way to the limit of one electron. The quantum dot is
defined electrostatically by using split gates on top of an epitaxially grown
AlGaAs/GaAs heterostructure. We observe a wide tunability of the electronic
transport properties of our device. Recent work
focused either on the case of weak coupling between a quantum dot and its
leads~\cite{prb-ciorga:16315}, or on the Kondo regime of strong coupling to the
leads~\cite{prl-sprinzak:176805}. Here, we explore a structure that can be
fully tuned between these limits. In addition, we demonstrate how the shape of
the quantum dot confinement potential can be
distorted within the given gate geometry \cite{prb-kyriakidis:035320} all the way into a double well
potential describing a double quantum dot~\cite{anticrossing,ep2ds,kondo}. The charge
of the electron droplet can be monitored during the deformation process.
The heterostructure used for the measurements embeds a two-dimensional
electron system (2DES) $120\un{nm}$ below the crystal surface. The electron
sheet density and mobility in the 2DES at the temperature of $T=4.2\un{K}$ are
$n_\text{s} \simeq 1.8\times 10^{15}\,\text{m}^{-2}$ and $\mu \simeq 75
\,\text{m}^2/\text{Vs}$, respectively. We estimate the 2DES temperature
to be of the order $T_\text{2DES} \sim 100\,\text{mK}$.
Our gate electrode geometry for defining a quantum dot, shown in the SEM micrograph of
\begin{figure}[tb]
\begin{center}
\epsfig{file=figure1, width=9cm}
\end{center}
\vspace*{-0.4cm}
\caption{
(Color online) (a) SEM micrograph of the gate electrodes used to
electrostatically define a quantum dot (marked as QD) and a quantum point
contact (marked as QPC). (b) Exemplary measurement of the absolute value of
the SET current $I$ through the quantum dot as a
function of the center gate voltage $\ensuremath{U_\text{gC}}$ and the bias voltage $\ensuremath{U_\text{SD}}$. (c)
Differential transconductance $G_\text T(\ensuremath{U_\text{gC}})$ of the QPC measured at
identical parameters as in (b) but for $\ensuremath{U_\text{SD}}=0$. The numerals $N=0,\,1,\,2,\,3$
in (b) and (c) depict the actual number of conduction band electrons trapped
in the quantum dot.
}
\label{fig1}
\end{figure}
Fig.~\ref{fig1}(a), is designed following a geometry introduced by Ciorga
{\it et al.}~\cite{prb-ciorga:16315}. Because of the triangular shape of
the confinement potential, an increasingly negative voltage on the plunger gate
\ensuremath{\mathrm g_\text{C}}\ depletes the quantum dot and simultaneously shifts the potential minimum
towards the tunnel barriers defined by gates \ensuremath{\mathrm g_\text{X}}\ and \ensuremath{\mathrm g_\text{L}}, or \ensuremath{\mathrm g_\text{X}}\ and \ensuremath{\mathrm g_\text{R}},
respectively. This way, the tunnel barriers between the leads and the
electron droplet can be kept transparent enough to allow the detection of SET
current through the quantum dot even for an arbitrarily small number of trapped
conduction band electrons~\cite{prb-ciorga:16315}.
Fig.~\ref{fig1}(b) shows an exemplary color scale plot of the
measured quantum dot SET current $\left| I \right|$ as a function of the gate voltage \ensuremath{U_\text{gC}}\ and
the source drain voltage \ensuremath{U_\text{SD}}. Within the diamond-shaped light regions in
Fig.~\ref{fig1}(b) SET is hindered by
Coulomb blockade and the charge of the quantum dot is constant. The gates marked
\ensuremath{\mathrm g_\text{R}}\ and \ensuremath{\mathrm g_\text{QPC}}\ in Fig.~\ref{fig1}(a) are used to define a quantum point contact
(QPC). As demonstrated in Refs.~\cite{prl-field:1311} and
\cite{prl-sprinzak:176805}, a nearby QPC can provide a non-invasive way to
detect the charge of a quantum dot electron by electron. The result of such a
measurement is shown in Fig.~\ref{fig1}(c), where the transconductance $G_\text
T=\text{d}I_\text{QPC} / \text{d}\ensuremath{U_\text{gC}}$ obtained using a lock-in amplifier is plotted for $\ensuremath{U_\text{SD}}\simeq 0$, along the
corresponding horizontal trace in Fig.~\ref{fig1}(b). Note that
Figs.~\ref{fig1}(b) and (c) have identical $x$ axes.
The advantage of using a QPC charge
detector is that its sensitivity is almost independent of the quantum dot charge
state. In contrast, the current through the quantum dot decreases as it is discharged
electron by electron, because of an increase of the tunnel barriers between the
quantum dot and the leads. This can be clearly seen by a comparison of the
magnitude of the current oscillations in Fig.~\ref{fig1}(b) with the
transconductance minima in Fig.~\ref{fig1}(c).\footnote{An apparent double
peak structure in Fig.~\ref{fig1}(b) around $\ensuremath{U_\text{SD}} \sim 0$ can be explained
by noise rectification effects.} The QPC
transconductance measurement plotted in Fig.~\ref{fig1}(c) shows no pronounced
local minima corresponding to changes of the quantum dot charge for $\ensuremath{U_\text{gC}} <
-1\un{V}$. This indicates that the quantum dot is here entirely
uncharged. This observation has been confirmed by further careful tests as
e.g.\ variation of the tunnel barriers or variation of the QPC lock-in
frequency and QPC bias. The inferred number of conduction band
electrons $N=0,\,1\,,\dots$ trapped in the quantum dot is indicated in
the Coulomb blockade regions in Figs.~\ref{fig1}(b) and
\ref{fig1}(c). \footnote{The SET current shown in Fig.~\ref{fig1}(b) between
$N=0$ and
$N=1$ can not be resolved for $\ensuremath{U_\text{SD}}\sim 0$. We ascribe this to an asymmetric
coupling of the quantum dot to the leads.}
In the following we demonstrate the flexibility provided by the use of voltage
tunable top-gates for a lateral confinement of a 2DES. We first focus on the
regime of a few electron quantum dot weakly coupled to its leads, where the
shell structure of an artificial two-dimensional atom in the circularly
symmetric case is described by the
Fock--Darwin states~\cite{zphys-fock:446,proccam-darwin:86}. Secondly, we present measurements with the quantum dot
strongly coupled to its leads. Here we observe Kondo features. Finally, we
explore the deformation of the few electron droplet into a serial double
quantum dot by means of changing gate voltages. The transport spectrum of this
artificial molecule has been described in previous publications for the
low electron number limit ($0\le N\le 2$)~\cite{anticrossing,ep2ds,kondo,mauterndorf}.
\section{Weak coupling to the leads}
The regime of a few electron quantum dot weakly coupled to its leads is reached
for gate voltages of $\ensuremath{U_\text{gL}}=-0.52\un{V}$, $\ensuremath{U_\text{gR}}=-0.565\un{V}$, and
$\ensuremath{U_\text{gX}}=-0.3\un{V}$. The observed Coulomb blockade oscillations are shown in
Fig.~\ref{fig2}(a),
\begin{figure}[th]\begin{center}
\epsfig{file=figure2, width=9cm}
\end{center}
\vspace*{-0.4cm}
\caption{
(Color online) (a) Differential conductance $G$ of the quantum dot in dependence
on a magnetic field \ensuremath{B_\perp}\ perpendicular to the 2DES and the voltage on gate
\ensuremath{\mathrm g_\text{C}}. All other gate voltages are kept fixed (see main text).
(b) \ensuremath{B_\perp}-field dependence of a relative energy corresponding to the local
maxima of G. The traces are numerically obtained from the measurement
shown in (a) after a conversion of the gate voltage to energy and subtraction
of an arbitrary but \ensuremath{B_\perp}-field independent
energy, respectively. Black arrows mark common features of all traces.
A gray vertical line indicates the first ground state transition of the quantum
dot for $N \gtrsim 4$. Inset: Qualitative prediction for the traces, using a
Fock-Darwin potential and the constant interaction model.
}
\label{fig2}
\end{figure}
where the differential conductance $G\equiv\text{d}I/\text{d}\ensuremath{U_\text{SD}}$ of the quantum dot is
plotted in a logarithmic (color) scale as a function of center gate voltage
\ensuremath{U_\text{gC}}\ and magnetic field perpendicular to the 2DES \ensuremath{B_\perp}. The absolute
number $N$ of trapped electrons within the Coulomb blockade regions,
derived by means of the QPC charge detection, is indicated
by numerals.
The characteristic \ensuremath{B_\perp}-field dependence of the local maxima of differential
conductance in Fig.~\ref{fig2}(a), marking the Coulomb oscillations of SET, has
also been observed via capacitance spectroscopy of lateral quantum
dots~\cite{prl-ashoori:613} and via transport spectroscopy of vertically etched
quantum dots~\cite{prl-tarucha:3613}.
The addition energy of a quantum dot for each electron number $N$ can be derived from the
vertical distance (in \ensuremath{U_\text{gC}}) between the local SET maxima,
by converting the gate voltage scale \ensuremath{U_\text{gC}}\ into a local potential energy. The
conversion factor for the present quantum dot has been obtained from nonlinear transport
measurements; a constant conversion factor is used as first-order
approximation~\cite{kouwenhoven-report-1}. Accordingly,
in Fig.~\ref{fig2}(b) the \ensuremath{B_\perp}\ dependence of the differential conductance
maxima positions is plotted after conversion to energy scale. The traces are obtained by
numerically tracking the local SET maxima in Fig.~\ref{fig2}(a). An arbitrary but
\ensuremath{B_\perp}-independent energy is subtracted from each trace, such that all
traces are equidistant at $\ensuremath{B_\perp}=1\un{T}$ -- i.e.\ at a magnetic field high
enough such that orbital effects are not relevant to the \ensuremath{B_\perp}\ dependence of the addition energy
anymore. For a direct comparison the inset of
Fig.~\ref{fig2}(b) displays the \ensuremath{B_\perp}-dependence expected within the so-called constant
interaction model~\cite{kouwenhoven-report-1}, that approximates many particle
effects with a classical capacitance term, for the so-called Fock-Darwin states.
These are solutions of the single particle Schr\"odinger equation of a ``two-dimensional atom''.
In detail the vector potential of \ensuremath{B_\perp}\ and the Fock-Darwin potential
$V= m^\ast\omega_0^2 r^2/2$ are considered. The latter describes a
two-dimensional harmonic oscillator with
characteristic frequency $\omega_0$ and effective electron mass $m^\ast$, at the
distance $r$ from its potential minimum~\cite{zphys-fock:446,proccam-darwin:86}. The
harmonic approximation is justified for a few electron quantum dot with a
relatively smooth electrostatic confinement as usually provided by remote gate
electrodes.
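For reference, the Fock-Darwin levels have the closed form $E_{n,l} = (2n+|l|+1)\,\hbar\Omega - \tfrac{l}{2}\hbar\omega_c$ with $\Omega = \sqrt{\omega_0^2+\omega_c^2/4}$, $n=0,1,\dots$ and $l\in\mathbb Z$. The sketch below evaluates this spectrum numerically; the GaAs conversion $\hbar\omega_c \approx 1.73\un{meV}$ per tesla is a standard value for $m^\ast = 0.067\,m_e$, and the crossing $E_{0,-1}=E_{0,+2}$ occurs exactly at $\omega_0 = \sqrt 2\,\omega_c$, the relation used below to extract $\hbar\omega_0$:

```python
import math

def fock_darwin(n, l, hw0, hwc):
    # Single-particle Fock-Darwin level E_{n,l} (same units as hw0, hwc):
    # E = (2n + |l| + 1) * hbar*Omega - (l/2) * hbar*omega_c,
    # with hbar*Omega = sqrt((hbar*omega0)^2 + (hbar*omega_c)^2 / 4).
    hW = math.hypot(hw0, hwc / 2.0)
    return (2 * n + abs(l) + 1) * hW - 0.5 * l * hwc

def hwc_gaas(B):
    # Cyclotron energy hbar*omega_c = hbar*e*B/m* in meV for GaAs
    # (m* = 0.067 m_e), about 1.73 meV per tesla.
    return 1.728 * B
```

At $B=0$ the levels form shells at $(2n+|l|+1)\hbar\omega_0$ with orbital degeneracies $1,2,3,\dots$; with spin this reproduces closed shells at $N=2,6,12,\dots$, consistent with the addition-energy maximum observed at $N=6$.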
Although not necessarily expected for lateral quantum dots, where the tunnel
barriers to the 2DES leads automatically induce symmetry breaking,
for electron numbers $1 \le N\le 7$ the measured \ensuremath{B_\perp}\ dependence
(Fig.~\ref{fig2}(b)) resembles these model expectations (inset). The observed and
predicted pairing of SET differential conductance maxima corresponds to an
alternating filling of two-fold spin-degenerate levels~\cite{prl-tarucha:3613,
nature-fuhrer:822, prl-luscher:2118}.
A local maximum of addition energy is visible at $N=6$, which would correspond
to a filled shell in a circular symmetric potential~\cite{prl-tarucha:3613}.
For $4\le N\le 7$ the first orbital ground
state transition is visible as cusps at $0.25\un{T} \lesssim \ensuremath{B_\perp} \lesssim
0.3\un{T}$. The cusps are marked by a vertical gray line in Fig.~\ref{fig2}(b) and its
inset, respectively. The magnetic field at which this
transition happens allows us to estimate the characteristic energy scale of the
confinement potential~\cite{rpp-kouwenhoven:701}
$\hbar \omega_0= \sqrt{2} \, \hbar \omega_c(\ensuremath{B_\perp}) \sim 680\,\mu\text{eV}$.
The expected maximum slopes of the $E(\ensuremath{B_\perp})$ traces are given by the
orbital energy shift and expected to be on the order of $\text d E/\text d \ensuremath{B_\perp}
= \pm \hbar\omega_c /2B$, where $\omega_c = e \ensuremath{B_\perp}/m^\ast$ is the cyclotron
frequency in GaAs. These expected maximum slopes are indicated in the upper left
corner of Fig.~\ref{fig2}(b) and agree well with our observations.
For the $4\le N\le 5$ transition and at a small magnetic field $\ensuremath{B_\perp} \lesssim
0.2\un{T}$ our data exhibit a pronounced cusp marking a slope reversal, as indicated
by a gray ellipsoid in Fig.~\ref{fig2}(b). Assuming a circularly symmetric
potential, this deviation from the
prediction within the constant interaction model can be understood in terms of
Hund's rules by taking into account the exchange coupling of two electron
spins~\cite{prl-tarucha:3613}. Along this model the exchange energy can be
estimated to be $J\sim 90\,\mu\text{eV}$ for the involved states. Interestingly,
a corresponding deviation from the constant interaction model for the $3\le N \le
4$ transition~\cite{prl-tarucha:3613} predicted by Hund's rules is not observed
in our measurement. This behavior might be related to a possibly more asymmetric
confinement potential at lower electron number, lifting the required orbital level degeneracy.
For $N\ge 7$ the $E(\ensuremath{B_\perp})$ traces no longer resemble the
Fock-Darwin state predictions. We attribute this to modifications of the transport spectrum
caused by electron-electron interactions. In addition, the measurements
plotted in Fig.~\ref{fig2}(a) indicate strong co-tunneling currents within
the Coulomb blockade regions for $N\gtrsim 7$. This can be seen by the
growing conductance in the Coulomb blockade regions as the electron
number is increased.
At the magnetic fields of $\ensuremath{B_\perp} \simeq 0.88\,\text{T}$ and $\ensuremath{B_\perp} \simeq
1.17\,\text{T}$ all traces exhibit a common shift, as marked by black arrows
in Fig.~\ref{fig2}(b). This may be explained by an abrupt change of the
chemical potential in the leads, since at these magnetic fields the 2DES in
the leads reaches even integer filling factors of $\ensuremath{\nu_\text{2DES}}=8$ and $\ensuremath{\nu_\text{2DES}}=6$,
respectively.\footnote{A step-like feature in the data at $\ensuremath{B_\perp} \simeq 1.75\un{T}$ can be identified
with the filling factor $\ensuremath{\nu_\text{2DES}}=4$ (gray arrow in Fig.~\ref{fig2}(b)),
however here the observation is far less clear
than at $\ensuremath{\nu_\text{2DES}}=6$ and $\ensuremath{\nu_\text{2DES}}=8$.
At higher filling factors $\ensuremath{\nu_\text{2DES}}=10, 12, \dots$ (also gray arrows) the effect
diminishes and is
partially shadowed by the orbital transitions.} The integer filling factors of the 2DES have been identified in
the Coulomb blockade measurements up to $\ensuremath{\nu_\text{2DES}}=1$ at $\ensuremath{B_\perp} \simeq 7.1\un{T}$,
where, as in previous publications~\cite{prb-ciorga:16315}, a shift is also observed at odd
\ensuremath{\nu_\text{2DES}}\ (data not shown).
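The filling-factor assignments can be cross-checked from the sheet density via $\ensuremath{\nu_\text{2DES}} = n_\text{s}h/e\ensuremath{B_\perp}$. A quick numerical sketch (using the nominal $4.2\un{K}$ density $n_\text{s}=1.8\times10^{15}\,\text m^{-2}$ quoted above; the small offsets from the measured fields are consistent with a slightly lower density at millikelvin temperatures):

```python
PLANCK = 6.62607015e-34     # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def filling_factor(ns, B):
    # Landau-level filling factor nu = ns * h / (e * B),
    # with ns in 1/m^2 and B in tesla.
    return ns * PLANCK / (E_CHARGE * B)

def field_at_filling(ns, nu):
    # Perpendicular field at which the filling factor nu is reached.
    return ns * PLANCK / (E_CHARGE * nu)
```

With the nominal density this places $\ensuremath{\nu_\text{2DES}}=8$ near $0.93\un{T}$, $\ensuremath{\nu_\text{2DES}}=6$ near $1.24\un{T}$ and $\ensuremath{\nu_\text{2DES}}=1$ near $7.4\un{T}$, close to the fields of the observed features.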
\section{Strong coupling to the leads}
By increasing the voltages on the side gates \ensuremath{U_\text{gL}}\ and \ensuremath{U_\text{gR}}\ the quantum dot in the
few electron limit is tuned into a regime of strong coupling to the leads.
During this process the position of the SET differential conductance maxima is
tracked so that the quantum dot charge state remains well known. At strong
coupling we observe enhanced differential conductance in Coulomb blockade
regions due to the Kondo effect~\cite{ptp-kondo:37, nature-goldhaber-gordon:156,
prl-goldhaber-gordon:5225}.
Fig.~\ref{fig3}(a)
\begin{figure}[th]
\begin{center}
\epsfig{file=figure3, width=9cm}
\end{center}
\vspace*{-0.4cm}
\caption{
(Color online) (a) Differential conductance $G$ at strong coupling to the leads
as a function of perpendicular magnetic field \ensuremath{B_\perp}\ and gate voltage \ensuremath{U_\text{gC}}. A
distinct chessboard-like pattern of enhanced conductance is observed (see dotted
lines). Black arrows mark Shubnikov-de-Haas conductance minima of the 2DES in the leads. (b)
Conductance traces $G(\ensuremath{U_\text{gC}})$ at constant $\ensuremath{B_\perp}=495\un{mT}$ for different
cryostat temperatures. The traces are measured along the vertical line marked with ``B''
in (a). (c) Cryostat temperature dependence of the conductance $G$ at $\ensuremath{B_\perp}=495\un{mT}$
and $\ensuremath{U_\text{gC}}=-0.635\un{V}$ (vertical gray line in (b)). The solid line is a model
curve for a Kondo temperature of $T_\text{K}=1.9\un{K}$ (see text for
details).
}
\label{fig3}
\end{figure}
shows part of the transport spectrum of the quantum dot as a
function of \ensuremath{B_\perp}\ and \ensuremath{U_\text{gC}}\ at $\ensuremath{U_\text{gL}}=-0.508\un{V}$,
$\ensuremath{U_\text{gR}}=-0.495\un{V}$, and $\ensuremath{U_\text{gX}}=-0.3\un{V}$. Compared to the weak coupling case
displayed in Fig.~\ref{fig2} the SET differential conductance maxima (almost
horizontal lines) are broader in Fig.~\ref{fig3}. This broadening can be
explained by a much stronger coupling to the leads.
In addition, a background differential conductance increases
monotonically towards more positive gate voltage \ensuremath{U_\text{gC}}. This background is
independent of the Coulomb blockade oscillations. The quantum dot is here
near the mixed valence regime where charge quantization within the
confinement potential is lost. Thus, the conductance background is explained by
direct scattering of electrons across the quantum dot. Vertical lines of decreased
differential conductance, marked in Fig.~\ref{fig3}(a) with black arrows,
indicate minima in the density of states at the Fermi energy of the lead 2DES
caused by Shubnikov-de-Haas oscillations.
Between the maxima of SET differential conductance Coulomb blockade is
expected. Instead we observe a distinct chessboard-like
pattern of areas of enhanced or suppressed differential conductance in the
region highlighted by the white dashed or dotted lines in Fig.~\ref{fig3}(a). This
feature is independent of the Shubnikov-de-Haas oscillations (vertical lines).
Similar phenomena have already been observed in many-electron quantum dots and
have been identified as a \ensuremath{B_\perp}-dependent Kondo effect~\cite{prl-schmid:5824}. The magnetic
field perpendicular to the 2DES leads to the formation of Landau-like core and
ring states in the quantum dot, as sketched in
Fig.~\ref{fig3}(c)~\cite{prl-sprinzak:176805, prb-keller:033302}. The electrons
occupying the lowermost Landau level effectively form an outer ring and dominate
the coupling of the quantum dot to its leads, whereas the higher Landau-like
levels form a nearly isolated electron state in the core of the quantum
dot~\cite{prb-mceuen:11419, prl-vaart:320, prl-stopa:046601}. On one hand, with
increasing magnetic field one electron after
the other moves from the core into the outer ring, and hence the total spin of the
strongly coupled outer ring can oscillate between $S=0$ and $S=1/2$. Only for
a finite spin the Kondo-effect causes an enhanced differential conductance. On the
other hand, a change in \ensuremath{U_\text{gC}}\ eventually results in a change of the total number
and total spin of the conduction band electrons trapped in the quantum dot.
In addition, charge redistributions between the Landau-like levels of the quantum
dot may influence the SET maxima positions~\cite{prl-sprinzak:176805, prb-mceuen:11419,
prl-stopa:046601}. The combination
of these effects explains the observed chessboard-like pattern of enhanced and
suppressed differential conductance through the quantum dot.
For a higher magnetic field where the filling factor falls below $\nu=2$ inside the
electron droplet the separation in outer ring and core state does not exist
anymore. The chessboard-like
pattern disappears and the Kondo effect is expected to depend monotonically on
\ensuremath{B_\perp}. Indeed, for \ensuremath{B_\perp}\ larger than the field marked by the dashed white line in
Fig.~\ref{fig3}(a) the Kondo current stops oscillating as a function of \ensuremath{B_\perp}.
From this we conclude that the dashed white line in
Fig.~\ref{fig3}(a) identifies the $\nu=2$ transition inside the quantum dot.
Fig.~\ref{fig3}(b) displays exemplary traces $G(\ensuremath{U_\text{gC}})$ of the differential
conductance as a function of the gate voltage \ensuremath{U_\text{gC}}\ at a fixed magnetic field
$\ensuremath{B_\perp}=495\un{mT}$ for different cryostat temperatures. These traces are taken
along the black vertical line in Fig.~\ref{fig3}(a) marked by `B'. The vertical
line in Fig.~\ref{fig3}(b) marks the expected position of a minimum of the
differential conductance due to Coulomb blockade, as indeed observed for the
traces recorded at high temperature. At low temperature, instead of a
minimum an enhanced differential conductance is measured due to the Kondo
effect. Note that the two minima of the differential conductance adjacent to
the Kondo feature in
Fig.~\ref{fig3}(b) show the usual temperature behavior, indicating that here the
Kondo effect is absent (in accordance with the chessboard-like pattern in
Fig.~\ref{fig3}(a)). Fig.~\ref{fig3}(c) displays the differential conductance at
the center of the Coulomb blockade region marked by the vertical line in
Fig.~\ref{fig3}(b), as a function of the cryostat temperature. The solid line is
a model curve given by $G(T)= G_0 \left(
{T_\text{K}'^2}/ \left( T^2 + T_\text{K}'^2 \right) \right)^{s} + \ensuremath{G_\text{offset}}$ with
$T_\text{K}'=\ensuremath{T_\text{K}} /{\sqrt{2^{1/s}-1}}$~\cite{prl-goldhaber-gordon:5225}. The low temperature limit of the
Kondo differential conductance $G_0$ is taken as a free parameter, as well as an offset
$\ensuremath{G_\text{offset}}$ that has been introduced to take into account the effect of the
temperature-independent background current described above. For $s=0.22$ as
expected for spin-$1/2$ Kondo effect~\cite{prl-goldhaber-gordon:5225} we find
best agreement between the model and our data at a Kondo temperature of $\ensuremath{T_\text{K}}
=1.9\un{K}$, a limit Kondo conductance $G_0=0.41 \, e^2/h$ and a conductance offset
$\ensuremath{G_\text{offset}}=0.73 \, e^2/h$. All nearby areas of enhanced Kondo differential conductance
display a similar behaviour with Kondo temperatures in the range of $1.2\un{K}
\lesssim \ensuremath{T_\text{K}} \lesssim 2.0\un{K}$.
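As a consistency check on the quoted fit, note that the parametrization above is constructed so that at $T=\ensuremath{T_\text{K}}$ the Kondo contribution falls to exactly half of its zero-temperature value $G_0$. A short numerical sketch using the quoted fit values (the code and names below are ours, purely illustrative):

```python
import numpy as np

# Fit parameters quoted in the text (illustrative only), in units of e^2/h and K
G0, G_offset, T_K, s = 0.41, 0.73, 1.9, 0.22

def kondo_conductance(T):
    """Empirical Kondo form G(T) = G0 * (Tk'^2 / (T^2 + Tk'^2))^s + G_offset."""
    T_Kp = T_K / np.sqrt(2.0 ** (1.0 / s) - 1.0)
    return G0 * (T_Kp**2 / (T**2 + T_Kp**2)) ** s + G_offset

# T -> 0 recovers the full low-temperature limit G0 + G_offset
print(round(kondo_conductance(0.0), 3))            # 1.14
# at T = T_K the Kondo part equals G0/2, by construction of T_K'
print(round(kondo_conductance(T_K) - G_offset, 4)) # 0.205
```

The second line is the defining convention for $\ensuremath{T_\text{K}}$ in this parametrization: the substitution $T_\text{K}'=\ensuremath{T_\text{K}}/\sqrt{2^{1/s}-1}$ forces $G(\ensuremath{T_\text{K}})-\ensuremath{G_\text{offset}}=G_0/2$.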
In addition, the dependence of the differential conductance $G$ on the
source-drain voltage \ensuremath{U_\text{SD}}\ has been measured for different regions of the
parameter range in Fig.~\ref{fig3}(a) (data not shown). These measurements are
fully consistent with above results. They display a zero-bias
conductance anomaly in the high conductance 'Kondo' regions, that can be
suppressed by changing the magnetic field \ensuremath{B_\perp}.
\section{Deformation into a double quantum dot}
The shape of the confinement potential of our quantum
dot can be modified by changing the voltages applied to the
split gate electrodes. This is a general feature of electrostatically defined
structures in a 2DES. A non-parabolic confinement potential is e.g.\ discussed
by the authors of Ref.~\cite{prb-kyriakidis:035320}. Here, we demonstrate a controlled
deformation of the confinement potential, transforming one local minimum, i.e.\ a quantum
dot, into a double well potential describing a double quantum dot.
Such a transition is shown in Fig.~\ref{fig4},
\begin{figure}[tb]\begin{center}
\epsfig{file=figure4, width=9cm}
\end{center}
\vspace*{-0.4cm}
\caption{
(Color online) Differential conductance of the electron droplet as a function of \ensuremath{U_\text{gC}}\
(x axis) and the simultaneously varied side gate voltages $\ensuremath{U_\text{gL}} \propto \ensuremath{U_\text{gR}}$
(y axis). As the gate voltage is decreased below $\ensuremath{U_\text{gC}} \simeq -1.2\un{V}$
lines of conductance maxima form pairs with smaller distance, indicating the
deformation of the quantum dot into a double quantum dot (see text). Insets: A
SEM micrograph of the top gates with sketches of the approximate potential
shapes of the quantum dot or double quantum dot. The third inset shows a
sketch of the stability diagram as expected for the case of a double quantum
dot. The thick solid lines are guides for the eye.}
\label{fig4}
\end{figure}
which plots Coulomb blockade oscillations of differential conductance (color scale) in
dependence of the center gate voltage \ensuremath{U_\text{gC}}\ along the x-axis. We aim to
transform a quantum dot charged by $N=0,\,1,\, 2,\,...$ electrons into a
peanut-shaped double quantum dot with the same charge
(see insets of Fig.~\ref{fig4}). This is done by creating a high
potential ridge between gates \ensuremath{\mathrm g_\text{X}}\ and \ensuremath{\mathrm g_\text{C}}, i.e.\ by making \ensuremath{U_\text{gC}}\ more
negative. In order to keep the overall charge of our device constant, both side
gate voltages \ensuremath{U_\text{gL}}\ and \ensuremath{U_\text{gR}}\ (y-axis) are changed in the opposite direction
than \ensuremath{U_\text{gC}}. For the opposed center gate \ensuremath{\mathrm g_\text{X}}\ we choose $\ensuremath{U_\text{gX}}=-0.566\un{V}$,
causing a significantly higher potential than in the previous measurements.
For $\ensuremath{U_\text{gC}}\gtrsim -1\un{V}$ the Coulomb oscillations are, to first order,
quasiperiodic, as can be seen in the upper right quarter of
Fig.~\ref{fig4}.
This is expected for a single quantum dot with addition energies large compared
to the orbital quantization energies.
In contrast, for more negative \ensuremath{U_\text{gC}}\ an onset of
a doubly periodic behavior is observed. That is, along the thick
solid horizontal line in the lower left corner of Fig.~\ref{fig4} the
distance between adjacent conductance maxima oscillates, most clearly visible for $N<4$.
Such a doubly periodic behaviour is expected for a double quantum dot in case of a symmetric
double well potential. This is the case along the thick solid line in
the inset of Fig.~\ref{fig4} sketching the double quantum dot's stability
diagram. In a simplified picture, if the double
quantum dot is charged by an odd number of electrons the charging energy for the
next electron is approximately given by the int{\sl er}dot Coulomb repulsion of
two electrons separated by the tunnel barrier between the adjacent quantum
dots. However, for an even number of electrons the charging energy for the next
electron corresponds to the larger int{\sl ra}dot Coulomb repulsion between two
electrons confined within the same quantum dot. Therefore, the difference
between int{\sl er}dot and int{\sl ra}dot Coulomb repulsion on a double
quantum dot causes the observed doubly periodic oscillation.
The asymmetry of the double quantum dot with respect to the potential minima of
the double well potential can be controlled by means of the side gate voltages
\ensuremath{U_\text{gL}}\ and \ensuremath{U_\text{gR}}. Coulomb blockade results in a stability diagram characteristic
for a double quantum dot as sketched in an inset of Fig.~\ref{fig4} in
dependence of the side gate voltages~\cite{prb-hofmann:13872, prl-blick:4032,
rmp-wiel:1}. Gray lines separate areas of stable charge configurations. The
corners where three different stable charge configurations coexist are called
triple points of the stability diagram. For a serial double quantum dot with
weak interdot tunnel coupling, the charge of both quantum dots can fluctuate
only near the triple points and only here current is expected to flow. The
bisector of the stability diagram (solid bold line in the inset) defines a
symmetry axis, along which the double well potential and, hence, the charge
distribution in the double quantum dot is symmetric. In the case of two (one)
trapped conduction band electrons we identify our structure as an artificial
two-dimensional helium (hydrogen) atom that can be continuously transformed into
an (ionized) molecule consisting of two hydrogen atoms.
To prove the presence of a few electron double quantum dot after performing the
described transition, we plot in Fig.~\ref{fig5}
\begin{figure}[tb]\begin{center}
\epsfig{file=figure5, width=9cm}
\end{center}
\vspace*{-0.4cm}
\caption{
(Color online) (a) Dc-current through the double quantum dot, (b)
transconductance $G_\text{T} \equiv \text{d}I_\text{QPC}/\text{d}U_\text{gL}$ of the nearby QPC
used as a double quantum dot charge sensor, with identical axes \ensuremath{U_\text{gL}}\ and
\ensuremath{U_\text{gR}}. The additional gate voltages are in both plots chosen as $\ensuremath{U_\text{gC}}=-1.4\un{V}$,
$\ensuremath{U_\text{gX}}=-0.566\un{V}$, and $\ensuremath{U_\text{gQPC}}=-0.458\un{V}$.
}
\label{fig5}
\end{figure}
the measured stability diagram of our device. Fig.~\ref{fig5}(a) shows the linear
response dc current through the device ($\ensuremath{U_\text{SD}}=50\,\mu\text{V}$) as a function of the side gate voltages
\ensuremath{U_\text{gL}}\ and \ensuremath{U_\text{gR}}. Fig.~\ref{fig5}(b) displays the QPC
transconductance $G_\text{T}\equiv \text{d}I_\text{QPC}/\text{d}U_\text{gL}$. The areas of stable charge configurations are
marked by numerals indicating the number of conduction band electrons in the
left / right quantum dot~\cite{anticrossing, ep2ds}. Both plots clearly feature
areas of stable charge configurations separated by either a current maximum (in
(a)) or a transconductance minimum (in (b)), respectively.
The transconductance measurement confirms the electron numbers obtained from
the single quantum dot case, as even for a very asymmetric confinement potential no
further discharging events towards more negative gate voltages \ensuremath{U_\text{gL}}\ and \ensuremath{U_\text{gR}}\ are observed.
In comparison to
the gray lines in the inset of Fig.~\ref{fig4} the edges of the hexagon pattern are here
strongly rounded. This indicates a sizable interdot tunnel coupling that cannot be
neglected compared to the interdot Coulomb interaction~\cite{anticrossing,
ep2ds}. A large interdot tunnel coupling results in molecular states delocalized
within the double quantum dot. This additionally explains the observation of finite current
not only on the triple points of the stability diagram, but also along edges of
stable charge configurations in Fig.~\ref{fig5}(a). Here the total charge
of the molecule fluctuates, allowing current via a delocalized state. In
previous publications the low-energy spectrum of the observed double well potential was
analyzed and the tunability of the tunnel coupling demonstrated~\cite{anticrossing, ep2ds}.
\section*{Summary}
Using a triangular gate geometry, a highly versatile few electron quantum dot has been
defined in the 2DES of a GaAs/AlGaAs heterostructure. The couplings between the
quantum dot and its leads can be tuned in a wide range. For weak quantum dot --
lead coupling, the shell structure of the states for $1 \lesssim N \lesssim 7$
trapped conduction band electrons is observed. The transport spectrum supports
the assumption of a Fock-Darwin like trapping potential and subsequent filling
of spin-degenerate states. A deviation from the model prediction can be
partially explained by the alignment of spins according to Hund's rule. For
strong quantum dot -- lead coupling, a chessboard pattern of regions of
enhanced zero bias conductance in dependence of a magnetic field perpendicular
to the 2DES is observed. The enhanced conductance regions are explained in terms
of the Kondo effect, induced by the formation of Landau-like core and ring
states in the quantum dot. Finally, for strongly negative center gate voltages,
the quantum dot trapping potential can be distorted at constant charge into a
peanut shaped double quantum dot with strong interdot tunnel coupling.
\section*{Acknowledgements}
We would like to thank L. Borda and J.\,P.\ Kotthaus for helpful discussions. We
acknowledge financial support by the Deutsche Forschungs\-ge\-mein\-schaft via
the SFB 631 ``Solid state based quantum information processing'' and the
Bundesministerium f\"ur Bildung und Forschung via DIP-H.2.1. A.\, K.\ H\"uttel
thanks the Stiftung Maximilianeum for support.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
Lattice QCD (LQCD) is one of the original computational grand
challenges~\cite{grandChallenge}. Increasingly accurate numerical
solutions of this quantum field theory are being used in tandem with
experiment and observation to gain a deeper quantitative understanding
for a range of phenomena in nuclear and high energy physics.
Advances during the last quarter century required prodigious
computational power, the development of sophisticated algorithms, and
highly optimized software. As a consequence LQCD is one of the driver
applications that have stimulated the evolution of new architectures
such as the BlueGene series~\cite{QCDOCtoBG}. Graphics processing
unit (GPU) clusters challenge us to adapt lattice field theory
software and algorithms to exploit this potentially transformative
technology. Here we present methods allowing QCD linear solvers to
scale to hundreds of GPUs with high efficiency. The resulting
multi-teraflop performance is now comparable to typical QCD codes
running on capability machines such as the Cray and the BlueGene/P
using several thousand cores.
GPU computing has enjoyed a rapid growth in popularity in recent years
due to the impressive performance to price ratio offered by the
hardware and the availability of free software development tools and
example codes. Currently, the fastest supercomputer in the world,
Tianhe-1A, is a GPU-based system, and several large-scale GPU
systems are either under consideration or are in active development.
Examples include the Titan system proposed for the Oak Ridge Leadership
Computing Facility (OLCF) and the NSF Track 2 Keeneland system to be
housed at the National Institute for Computational Sciences (NICS).
Such systems represent a larger trend toward heterogeneous
architectures, characterized not only by multiple processor types (GPU
and conventional CPU) with very different capabilities, but also by a
deep memory hierarchy exhibiting a large range of relevant bandwidths
and latencies. These features are expected to typify at least one
path (or ``swim-lane'') toward exascale computing. The intrinsic
imbalance between different subsystems can present bottlenecks and a
real challenge to application developers. In particular, the PCI-E
interface currently used to connect CPU, GPU, and communications fabric
on commodity clusters can prove a severe impediment for strong-scaling
the performance of closely coupled, nearest-neighbor stencil-like
codes into the capability computing regime. Overcoming such
limitations is vital for applications to succeed on future large-scale
heterogeneous resources.
We consider the challenge of scaling LQCD codes to a large number of
GPUs. LQCD is important in high energy and nuclear physics as it is
the only currently known first-principles non-perturbative approach for calculations
involving the strong force. Not only is LQCD important from the point
of view of physics research, but historically LQCD codes have often
been used to benchmark and test large scale computers. The balanced
nature of QCD calculations, which require approximately 1 byte/flop
in single precision, as well as their regular memory access and nearest
neighbor communication patterns have meant that LQCD codes could be
deployed quickly, scaled to large partitions, and used to exercise the
CPU, memory system, and communications fabric. In GPU computing, LQCD
has been highly successful in using various forms of data compression and mixed
precision solvers \cite{Clark:2009wm} to alleviate memory bandwidth
constraints on the GPU in order to attain high performance. Our
multi-GPU codes
\cite{Babich:2010:PQL:1884643.1884695,Gottlieb:2010zz,Shi:2011ipdps}
are in production, performing capacity analysis calculations on systems at several
facilities, including Lincoln and EcoG (NCSA), Longhorn (TACC), the ``9g''
and ``10g'' clusters at Jefferson Laboratory (JLab), and Edge (LLNL).
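The mixed-precision idea can be illustrated with a generic defect-correction (iterative refinement) sketch, in which the bulk of the work is done in single precision while residuals are recomputed in double precision. This is an illustration of the principle only, not QUDA's actual reliable-update implementation; all names are ours:

```python
import numpy as np

def mixed_precision_refine(A, b, tol=1e-12, max_cycles=20):
    """Defect correction: cheap solves in float32, residuals in float64."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b)
    for _ in range(max_cycles):
        r = b - A @ x                     # high-precision residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        # low-precision correction (stand-in for a float32 Krylov solve)
        e = np.linalg.solve(A32, r.astype(np.float32))
        x = x + e.astype(np.float64)
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 64))
A = B @ B.T / 64 + np.eye(64)             # well-conditioned SPD test matrix
b = rng.standard_normal(64)
x = mixed_precision_refine(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-12)  # True
```

Each cycle contracts the error by roughly the single-precision rounding level times the condition number, so a handful of cheap cycles recovers double-precision accuracy for well-conditioned systems.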
The challenge is now to scale LQCD computations into the O(100) GPU
regime, which is required if large GPU systems are to replace the more
traditional massively parallel multi-core supercomputers that are
used for the gauge generation step of LQCD. Here, capability-class
computing is required since the algorithms employed consist of single
streams of Monte Carlo Markov chains, and so require strong scaling.
Our previous multi-GPU implementations utilized a strategy of
parallelizing in the time direction only, using traditional
iterative Krylov solvers (e.g., conjugate gradients). This severely
limits the number of GPUs that can be used and thus maximum
performance. In order to make headway, it has become important to
parallelize in additional dimensions in order to give sublattices with
improved surface-to-volume ratios and to explore algorithms that
reduce the amount of communication altogether such as domain
decomposed approaches.
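A little arithmetic makes the surface-to-volume argument concrete. In a nearest-neighbor halo exchange, each partitioned dimension contributes two faces of communicated sites, so the communicated fraction grows as the local extents shrink. A small sketch (the helper below is ours, purely illustrative):

```python
import numpy as np

def halo_sites(local_dims, partitioned):
    """Sites on the communicated surface of a local sublattice.

    local_dims:  extents of the local (per-GPU) sublattice
    partitioned: indices of the dimensions split across GPUs
    """
    vol = int(np.prod(local_dims))
    # two faces per partitioned dimension, vol/extent sites per face
    return sum(2 * vol // local_dims[d] for d in partitioned)

# 32^4 global lattice on 16 GPUs:
# time-slicing only leaves a local extent of 2 -> every site is on the surface
print(halo_sites((32, 32, 32, 2), [3]) / (32 * 32 * 32 * 2))   # 1.0
# splitting all four dimensions (16^4 local) halves the surface fraction
print(halo_sites((16, 16, 16, 16), [0, 1, 2, 3]) / 16**4)      # 0.5
```

The multi-dimensional decomposition also removes the hard limit on GPU count imposed by the extent of a single lattice dimension.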
In this paper, we make the following contributions: (i) we parallelize
the application of the discretized Dirac operator in the QUDA library
to communicate in multiple dimensions, (ii) we investigate the utility
of an additive Schwarz domain-decomposed preconditioner for the
Generalized Conjugate Residual (GCR) solver, and (iii) we perform
performance tests using (i) and (ii) on partitions of up to 256 GPUs.
The paper is organized as follows: we present an outline of LQCD
computations in Sec. \ref{s:LQCD} and discuss iterative linear solvers
in Sec. \ref{s:Linear}. A brief overview of previous and related work
is given in Sec. \ref{s:Previous}, and in Sec. \ref{s:QUDA} we
describe the QUDA library upon which this work is based. The
implementation of the multi-dimensional, multi-GPU parallelization is
discussed in Sec. \ref{s:MultiDim}. The construction of our optimized
linear solvers are elaborated in Sec. \ref{s:Solver}. We present
performance results in Sec. \ref{s:Results}. Finally, we summarize and
conclude in Sec. \ref{s:Conclusions}.
\section{Lattice QCD} \label{s:LQCD}
Weakly coupled field theories such as quantum electrodynamics
can by handled with perturbation theory.
In QCD, however, at low
energies perturbative expansions fail and a non-perturbative
method is required. Lattice QCD is the only known, model
independent, non-perturbative tool currently available to perform
QCD calculations.
LQCD calculations are typically Monte-Carlo evaluations of a
Euclidean-time path integral. A sequence of configurations of the gauge fields
is generated in a process known as {\em
configuration generation}. The gauge configurations are
importance-sampled with respect to the lattice action and
represent a snapshot of the QCD vacuum. Configuration generation is
inherently sequential as one configuration is generated from the
previous one using a stochastic evolution process. Many variables can
be updated in parallel and the focused power of capability computing
systems has been essential. Once the field configurations have been
generated, one moves on to the second stage of the calculation, known
as {\em analysis}. In this phase, observables of interest are
evaluated on the gauge configurations in the ensemble, and the results
are then averaged appropriately, to form {\em ensemble averaged}
quantities. It is from the latter that physical results such as
particle energy spectra can be extracted. The analysis phase can be
task parallelized over the available configurations in an ensemble and
is thus extremely suitable for capacity level work on clusters, or
smaller partitions of supercomputers.
\subsection{Dirac PDE discretization}
The fundamental interactions of QCD, those taking place between quarks
and gluons, are encoded in the quark-gluon interaction differential
operator known as the Dirac operator. A proper discretization of the
Dirac operator for lattice QCD requires special care. As is common in PDE
solvers, the derivatives are replaced by finite differences. Thus on
the lattice, the Dirac operator becomes a large sparse matrix, \(M\),
and the calculation of quark physics is essentially reduced to many
solutions to systems of linear equations given by
\begin{equation}
Mx = b.
\label{eq:linear}
\end{equation}
Computationally, the brunt of the numerical work in LQCD for both the
gauge generation and analysis phases involves solving such linear
systems.
A small handful of discretizations are in common use, differing in
their theoretical properties. Here we focus on two of the most widely-used forms
for $M$, the Sheikholeslami-Wohlert
\cite{Sheikholeslami:1985ij} form (colloquially known as {\em
Wilson-clover}), and the improved staggered form, specifically the
$a^2$ tadpole-improved (\emph{asqtad}) formulation \cite{RevModPhys.82.1349}.
\subsection{Wilson-clover matrix}
The Wilson-clover matrix is a central-difference discretization of the
Dirac operator, with the addition of a diagonally-scaled Laplacian to
remove the infamous fermion doublers (which arise due to the red-black
instability of the central-difference approximation). When acting in
a vector space that is the tensor product of a 4-dimensional
discretized Euclidean spacetime, {\it spin} space, and {\it color}
space it is given by
\begin{align}
M_{x,x'}^{WC} &= - \frac{1}{2} \displaystyle \sum_{\mu=1}^{4} \bigl(
P^{-\mu} \otimes U_x^\mu\, \delta_{x+\hat\mu,x'}\, + P^{+\mu} \otimes
U_{x-\hat\mu}^{\mu \dagger}\, \delta_{x-\hat\mu,x'}\bigr) \nonumber \\
&\quad\, + (4 + m + A_x)\delta_{x,x'} \nonumber \\
&\equiv - \frac{1}{2}D_{x,x'}^{WC} + (4 + m + A_{x}) \delta_{x,x'}.
\label{eq:Mclover}
\end{align}
Here \(\delta_{x,y}\) is the Kronecker delta; \(P^{\pm\mu}\) are
\(4\times 4\) matrix projectors in {\it spin} space; \(U\) is the QCD
gauge field which is a field of special unitary $3\times 3$ (i.e.,
SU(3)) matrices acting in {\it color} space that live between the
spacetime sites (and hence are referred to as link matrices); \(A_x\)
is the \(12\times12\) clover matrix field acting in both spin and
color space,\footnote{Each clover matrix has a Hermitian block
diagonal, anti-Hermitian block off-diagonal structure, and can be
fully described by 72 real numbers.} corresponding to a first order
discretization correction; and \(m\) is the quark mass parameter. The
indices \(x\) and \(x'\) are spacetime indices (the spin and color
indices have been suppressed for brevity). This matrix acts on a
vector consisting of a complex-valued 12-component \emph{color-spinor}
(or just {\em spinor}) for each point in spacetime. We refer to the
complete lattice vector as a spinor field.
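The spin projectors $P^{\pm\mu}$ are rank two, which underlies common data-compression tricks: only two spin components per site need to be transferred and the rest reconstructed. The defining algebra is easy to check numerically. For any Hermitian $\gamma$ with $\gamma^2=1$, the combinations $(1\pm\gamma)/2$ are complementary rank-two projectors; the concrete matrix below is an arbitrary representative, not a specific gamma-matrix convention from this paper:

```python
import numpy as np

# Any Hermitian gamma with gamma^2 = 1 works; build one from Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = np.kron(sx, sz)             # 4x4, Hermitian, squares to the identity

P_plus = (np.eye(4) + gamma) / 2    # sign conventions for P^{+-mu} vary
P_minus = (np.eye(4) - gamma) / 2

print(np.allclose(P_plus @ P_plus, P_plus))   # True: idempotent
print(np.allclose(P_plus @ P_minus, 0))       # True: complementary
print(np.trace(P_plus).real)                  # 2.0: rank two -> half-spinors
```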
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.5in]{dslash}
\end{center}
\caption{\label{fig:dslash}The nearest neighbor stencil part of the
lattice Dirac operator $D$, as defined in (\ref{eq:Mclover}), in the $\mu-\nu$
plane. The \emph{color-spinor} fields are located on the
sites. The SU(3) color matrices $U^\mu_x$ are associated with the links. The
nearest neighbor nature of the stencil suggests a natural even-odd (red-black)
coloring for the sites.}
\end{figure}
\subsection{Improved staggered matrix}
The staggered matrix again is a central-difference discretization of
the Dirac operator; however, the fermion doublers are removed
through ``staggering'' the spin degrees of freedom onto neighboring
lattice sites. This essentially reduces the number of spin degrees of
freedom per site from four to one, which reduces the computational
burden significantly. This transformation, however, comes at the
expense of increased discretization errors, and breaks the so-called
quark-flavor symmetry. To reduce these discretization errors, the
gauge field that connects nearest neighboring sites on the lattice
(\(U^\mu_x\) in Equation \ref{eq:Mclover}) is smeared, which essentially is
a local averaging of the field. There are many prescriptions for this
averaging; here we employ the popular asqtad
procedure \cite{RevModPhys.82.1349}. The errors are
further reduced through the inclusion of third-neighbor spinors
in the derivative approximation. The asqtad matrix is given by
\begin{align}
M_{x,x'}^{IS} &= - \frac{1}{2} \displaystyle \sum_{\mu=1}^{4} \bigl(
\hat{U}_x^\mu\, \delta_{x+\hat\mu,x'}\, + \hat{U}_{x-\hat\mu}^{\mu \dagger}\, \delta_{x-\hat\mu,x'} +\nonumber\\
& \quad\quad\quad\quad \check{U}_x^\mu\, \delta_{x+3\hat\mu,x'}\, + \check{U}_{x-3\hat\mu}^{\mu \dagger}\, \delta_{x-3\hat\mu,x'}\bigr) + m\delta_{x,x'} \nonumber \\
&\equiv - \frac{1}{2}D_{x,x'}^{IS} + m\delta_{x,x'}.
\label{eq:Mstag}
\end{align}
Unlike \(M^{WC}\), the matrix \(M^{IS}\) consists solely of a
derivative term \(D^{IS}\) and the mass term. There are two gauge
fields present: \(\hat{U}^\mu_x\) is the {\it fat} gauge field, and is
the field produced from locally averaging \(U^\mu_x\); and
\(\check{U}^\mu_x\) is the {\it long} gauge field produced by taking
the product of the links \(U^\mu_x U^\mu_{x+\hat{\mu}}
U^\mu_{x+2\hat{\mu}}\). While both of these fields are functions of
the original field \(U^\mu_x\), in practice, these fields are
pre-calculated before the application of \(M^{IS}_{x,x'}\) since
iterative solvers will require the application of \(M^{IS}_{x,x'}\)
many hundreds or thousands of times. Since there are no separate spin
degrees of freedom at each site, this matrix acts on a vector of
complex-valued 3-component color vectors; however, for convenience we nevertheless
refer to the complete lattice vector as a spinor field.
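As a concrete illustration of the stencil in Eq.~(\ref{eq:Mstag}), the toy numpy implementation below applies the fat-link and long-link hopping terms with periodic boundaries. It is a direct transcription of the formula for illustration only: the staggered phase factors are taken to be absorbed into the link fields, and no even-odd structure or parallelization is exploited:

```python
import numpy as np

def asqtad_apply(U_fat, U_long, psi, m):
    """Toy application of M^{IS}: out = -1/2 * D^{IS} psi + m * psi.

    U_fat, U_long: shape (4, *dims, 3, 3) link fields
    psi:           shape (*dims, 3) color field
    """
    out = m * psi
    for mu in range(4):
        for U, hop in ((U_fat[mu], 1), (U_long[mu], 3)):
            # forward hop: U_x psi(x + hop*mu)
            fwd = np.einsum('...ab,...b->...a', U, np.roll(psi, -hop, axis=mu))
            # backward hop: U^dag_{x - hop*mu} psi(x - hop*mu)
            Udag = np.conj(np.swapaxes(U, -1, -2))
            bwd = np.roll(np.einsum('...ab,...b->...a', Udag, psi), hop, axis=mu)
            out = out - 0.5 * (fwd + bwd)
    return out

# free-field check: with identity links and a constant field,
# each of the 8 hopping terms contributes -1/2 psi, so out = (m - 8) psi
dims = (4, 4, 4, 4)
eye = np.broadcast_to(np.eye(3, dtype=complex), (4, *dims, 3, 3)).copy()
psi = np.ones((*dims, 3), dtype=complex)
out = asqtad_apply(eye, eye, psi, m=0.5)
print(np.allclose(out, (0.5 - 8.0) * psi))   # True
```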
\section{Iterative solvers}
\label{s:Linear}
\subsection{Krylov solvers}
For both discretizations under consideration, \(M\) is a large sparse matrix, and iterative Krylov
solvers are typically used to obtain solutions to Equation (\ref{eq:linear}),
requiring many repeated evaluations of the sparse matrix-vector
product. The Wilson-clover matrix is non-Hermitian, so either
Conjugate Gradients \cite{Hestenes:1952} on the normal equations (CGNE
or CGNR) is used, or more commonly, the system is solved directly
using a non-symmetric method, e.g., BiCGstab \cite{vanDerVorst:1992}.
Even-odd (also known as red-black) preconditioning is almost always
used to accelerate the solution finding process for this system, where
the nearest neighbor property of the \(D^{WC}\) matrix is exploited
to solve the Schur complement system~\cite{Degrand1990211}.
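The even-odd reduction can be sketched generically: ordering sites by parity puts the matrix into $2\times 2$ block form, where the nearest-neighbor term connects opposite parities only, and one solves the smaller Schur-complement system on one parity before back-substituting for the other. A numpy illustration with random stand-in blocks (not an actual Dirac matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # sites per parity (toy size)
M_ee = np.eye(n) * 4.0                   # diagonal "mass/clover" blocks
M_oo = np.eye(n) * 4.0
M_eo = rng.standard_normal((n, n)) * 0.2 # parity-changing hopping blocks
M_oe = rng.standard_normal((n, n)) * 0.2
b_e, b_o = rng.standard_normal(n), rng.standard_normal(n)

# Schur-complement system on the even sublattice
M_oo_inv = np.linalg.inv(M_oo)
S = M_ee - M_eo @ M_oo_inv @ M_oe
x_e = np.linalg.solve(S, b_e - M_eo @ (M_oo_inv @ b_o))
x_o = M_oo_inv @ (b_o - M_oe @ x_e)      # back-substitution for the odd sites

# agrees with solving the full 2n x 2n system directly
M = np.block([[M_ee, M_eo], [M_oe, M_oo]])
print(np.allclose(np.concatenate([x_e, x_o]),
                  np.linalg.solve(M, np.concatenate([b_e, b_o]))))  # True
```

The payoff in practice is that the Krylov iteration runs on a system of half the size with an improved condition number.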
The staggered fermion matrix is anti-Hermitian, and has the
convenient property that when multiplied by its Hermitian conjugate,
the even and odd lattices are decoupled and can be solved independently
from each other. There are no commonly used preconditioners for the
staggered matrix. When simulating asqtad fermions, for both the gauge
field generation and for the analysis stages, one is confronted with
solving problems of the form
\begin{equation}
(M^\dagger M + \sigma_i I) x_i = b\qquad{i=1\ldots N}
\label{eq:multishift}
\end{equation}
where \(\sigma_i\) is a constant scalar and \(I\) is the
identity matrix. This is equivalent to solving \(N\) different linear
systems at different mass parameters for a constant source \(b\).
Since the generated Krylov spaces are the same for each of these
linear systems, one can use a multi-shift solver (also known as a
multi-mass solver) to produce all \(N\) solutions simultaneously in
the same number of iterations as the smallest shift (least well
conditioned)~\cite{Jegerlehner:1996pm}.
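The mechanics of such a multi-shift CG solver can be sketched for a generic Hermitian positive-definite matrix: the unshifted (seed) system is iterated as in ordinary CG, while the shifted solutions are advanced with scalar recurrences that exploit the collinearity of the shifted residuals, so that all shifts share the single matrix-vector product per iteration. The numpy sketch below follows the shifted-CG recurrences in our own notation and is an illustration, not production code:

```python
import numpy as np

def multishift_cg(A, b, shifts, tol=1e-10, maxiter=1000):
    """Solve (A + sigma*I) x = b for every sigma with one Krylov space."""
    x = {s: np.zeros_like(b) for s in shifts}
    p = {s: b.copy() for s in shifts}
    zeta = {s: 1.0 for s in shifts}         # zeta_n
    zeta_prev = {s: 1.0 for s in shifts}    # zeta_{n-1}
    r, p0 = b.copy(), b.copy()              # seed residual and direction
    rr, alpha_old, beta_old = r @ r, 1.0, 0.0
    for _ in range(maxiter):
        Ap = A @ p0
        alpha = rr / (p0 @ Ap)              # seed step length
        gamma = alpha * beta_old / alpha_old
        for s in shifts:
            # collinearity factor for the shifted residual r_n^s = zeta_n^s r_n
            z = zeta[s] / (1.0 + alpha * s + gamma * (1.0 - zeta[s] / zeta_prev[s]))
            x[s] = x[s] + (alpha * z / zeta[s]) * p[s]
            zeta_prev[s], zeta[s] = zeta[s], z
        r = r - alpha * Ap
        rr_new = r @ r
        beta = rr_new / rr
        for s in shifts:
            p[s] = zeta[s] * r + beta * (zeta[s] / zeta_prev[s]) ** 2 * p[s]
        p0 = r + beta * p0
        alpha_old, beta_old, rr = alpha, beta, rr_new
        if np.sqrt(rr) < tol:
            break
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40 * np.eye(40)               # SPD stand-in for M^dag M
b = rng.standard_normal(40)
sol = multishift_cg(A, b, shifts=[0.0, 0.5, 2.0])
```

Note that the shift $\sigma=0$ reproduces ordinary CG exactly, and the convergence of the seed (least well conditioned) system bounds the iteration count for all shifts.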
In both cases, the quark mass controls the condition number of the
matrix, and hence the convergence of such iterative solvers.
Unfortunately, physical quark masses correspond to nearly indefinite
matrices. Given that current lattice volumes are at least \(10^8\)
degrees of freedom in total, this represents an extremely
computationally demanding task. For both the gauge generation and
analysis stages, the linear solver accounts for 80--99\% of the
execution time.
\subsection{Additive Schwarz preconditioner}
As one scales lattice calculations to large core counts, on
leadership-class partitions, one is faced with a strong-scaling
challenge: as core counts increase, for a fixed global lattice volume
the local sub-volume per core decreases and the surface-to-volume ratio
of the local sub--lattice increases. Hence, the ratio of communication
to local computation also grows. For sufficiently many cores, it
becomes impossible to hide communication by local computation and the
problem becomes communications bound, at the mercy of the system
interconnect. This fact, together with the need for periodic global reduction
operations in the Krylov solvers, leads to a slowdown of their
performance in the large scaling limit. For GPU-based clusters, where
inter-GPU communication is gated by the PCI-E bus, this limitation is
substantially more pronounced and can occur in partitions as small as
O(10) GPUs or less.
This challenge of strong scaling of traditional Krylov solvers
motivates the use of solvers which minimize the amount of
communication. Such solvers are commonly known as
domain-decomposition solvers and two forms of them are commonly used:
multiplicative Schwarz and additive Schwarz processes~\cite{schwarz}.
In this work we focus upon the additive Schwarz method. Here, the
entire domain is partitioned into blocks which may or may not overlap.
The system matrix is then solved within these blocks, imposing
Dirichlet (zero) boundary conditions at the block boundaries. The
imposition of Dirichlet boundary conditions means that no
communication is required between the blocks, and that each block can
be solved independently. It is therefore typical to assign the blocks
to match the sub-domain assigned to each processor in a parallel
decomposition of a domain. A tunable parameter in these solvers is
the degree of overlap of the blocks, with a greater degree of overlap
corresponding to increasing the size of the blocks, and hence the
amount of computation required to solve each block. A larger overlap
typically leads to fewer iterations being required to reach
convergence since, heuristically, the larger sub-blocks
approximate the original matrix better and hence their inverses
form better preconditioners. Note that an additive Schwarz solver
with non-overlapping blocks is equivalent to a block-Jacobi solver.
Typically, Schwarz solvers are not used as a standalone solver, but rather
they are employed as preconditioners for an outer Krylov method.
Since each local system matrix is usually solved using an iterative
solver, this requires that the outer solver be a {\em flexible} solver.
Generalized conjugate residual (GCR) is such a solver, and
we shall employ it for the work in this paper.
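To make this structure concrete, the following numpy sketch combines a flexible GCR outer iteration with a non-overlapping additive Schwarz (i.e., block-Jacobi) preconditioner. The block solves are exact here; in practice each would itself be an inexact inner iteration, which is precisely why a flexible outer solver is needed. All function names are ours:

```python
import numpy as np

def schwarz_preconditioner(A, blocks):
    """Non-overlapping additive Schwarz: exact solves on diagonal blocks."""
    inv = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in blocks]
    def apply(r):
        z = np.zeros_like(r)
        for idx, Ainv in inv:       # blocks are independent: no communication
            z[idx] = Ainv @ r[idx]
        return z
    return apply

def gcr(A, b, precond, tol=1e-10, maxiter=200):
    """Flexible GCR: minimizes the residual over preconditioned directions."""
    x, r = np.zeros_like(b), b.copy()
    P, AP = [], []
    for _ in range(maxiter):
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        Az = A @ z
        for pi, Api in zip(P, AP):  # orthogonalize A*z against previous A*p
            c = Api @ Az
            z, Az = z - c * pi, Az - c * Api
        nrm = np.linalg.norm(Az)
        p, Ap = z / nrm, Az / nrm
        a = Ap @ r
        x, r = x + a * p, r - a * Ap
        P.append(p)
        AP.append(Ap)
    return x

rng = np.random.default_rng(3)
n = 32
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.3  # diagonally dominant
b = rng.standard_normal(n)
blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]  # two local "domains"
x = gcr(A, b, schwarz_preconditioner(A, blocks))
print(np.allclose(A @ x, b))   # True
```

Because the preconditioner application involves no exchange between blocks, its cost is purely local, which is the property exploited to reduce inter-GPU communication.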
\section{Overview of related work} \label{s:Previous}
Lattice QCD calculations on GPUs were originally reported in
\cite{Egri2007631} where the immaturity of using GPUs for general
purpose computation necessitated the use of graphics APIs. Since the
advent of CUDA in 2007, there has been rapid uptake by the LQCD
community (see \cite{Clark:2009pk} for an overview). More recent work
includes \cite{Alexandru:2011ee}, which targets the computation of
multiple systems of equations with Wilson fermions where the systems
of equations are related by a linear shift. Solving such systems is
of great utility in implementing the overlap formulation of QCD. This
is a problem we target in the staggered-fermion solver below. The
work in \cite{Chiu:2011rc} targets the domain-wall fermion formulation
of LQCD. The present work concerns the QUDA library \cite{Clark:2009wm},
which we describe in Sec.\ \ref{s:QUDA} below.
Most work to date has concerned single-GPU LQCD implementations, and
beyond the multi-GPU parallelization of
QUDA~\cite{Babich:2010:PQL:1884643.1884695,Shi:2011ipdps} and the
work in \cite{Alexandru:2011sc} which targets a multi-GPU
implementation of the overlap formulation, there has been little
reported in the literature, though we are aware of other
implementations which are in production~\cite{Borsani}.
Domain-decomposition algorithms were first introduced to LQCD in
\cite{Luscher:2003qa}, through an implementation of the Schwarz
Alternating Procedure preconditioner, which is a multiplicative
Schwarz preconditioner. More akin to the work presented here is the
work in \cite{Osaki:2010vj} where a restricted additive Schwarz
preconditioner was implemented for a GPU cluster. However, the work
reported in \cite{Osaki:2010vj} was carried out on a rather small
cluster containing only 4 nodes and connected with Gigabit Ethernet.
The work presented here aims for scaling to O(100) GPUs using a QDR
Infiniband interconnect.
\section{QUDA}\label{s:QUDA}
The QUDA library is a package of optimized CUDA kernels and wrapper
code for the most time-consuming components of an LQCD computation.
It has been designed to be easy to interface to existing code bases,
and in this work we exploit this interface to use the popular LQCD
applications Chroma and MILC. The QUDA library has attracted a
diverse developer community of late and is being used in production at
LLNL, Jlab and other U.S.\ national laboratories, as well as in Europe.
The latest development version is always available in a publicly accessible
source code repository~\cite{githubQUDA}.
QUDA implements optimized linear solvers, which when running on a
single GPU achieve up to 24\% of the GPU's peak performance through
aggressive optimization. The general strategy is to assign a single
GPU thread to each lattice site; each thread is then responsible for
all memory traffic and operations required to update that site on the lattice
given the stencil operator. Maximum memory bandwidth is obtained
by reordering the spinor and gauge fields to achieve memory coalescing
using structures of float2 or float4 arrays, and using the texture
cache where appropriate. Memory traffic reduction is employed where
possible to overcome the relatively low arithmetic intensity of the
Dirac matrix-vector operations, which would otherwise limit
performance. Strategies include: (a) using compression for the
\(SU(3)\) gauge matrices to reduce the 18 real numbers to 12 (or 8)
real numbers at the expense of extra computation; (b) using similarity
transforms to increase the sparsity of the Dirac matrices; (c)
utilizing a custom 16-bit fixed-point storage format (hereafter referred
to as half precision) together with mixed-precision linear solvers to
achieve high speed with no loss in accuracy.
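The 12-number compression in (a) exploits the fact that the third row of an \(SU(3)\) matrix is determined by the first two: it is the complex-conjugate cross product of the first two rows. The sketch below (NumPy, in double precision for clarity; the actual kernels work in single or half precision, and the function names are our own) demonstrates the compression and reconstruction:

```python
import numpy as np

def random_su3(rng):
    # Build a random SU(3) matrix: QR-orthonormalize a complex
    # Gaussian matrix, then remove the overall determinant phase.
    m = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    q, _ = np.linalg.qr(m)
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def compress12(u):
    # Store only the first two rows: 2 x 3 complex = 12 real numbers.
    return u[:2, :].copy()

def reconstruct12(rows):
    # The third row of an SU(3) matrix is the conjugate cross
    # product of the first two, so it need not be stored.
    a, b = rows
    c = np.conj(np.cross(a, b))
    return np.vstack([a, b, c])
```

The trade-off is exactly as described in the text: 6 real numbers of memory traffic are saved per link at the cost of recomputing the third row on the fly.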
Other important computational kernels provided in the library include
gauge field smearing routines for constructing the fat gauge field used
in the asqtad variant of the improved staggered discretization, as well as
force term computations required for gauge field generation.
The extension of QUDA to support multiple GPUs was reported in
\cite{Babich:2010:PQL:1884643.1884695}, where both strong and weak
scaling was performed on up to 32 GPUs using a lattice volume of
\(32^3\times256\) with Wilson-clover fermions. This employed
partitioning of the lattice along the time dimension only, and was
motivated by expediency and the highly asymmetric nature of
the lattices being studied. While this strategy was sufficient to achieve excellent
(artificial) weak scaling performance, it severely limits the strong
scaling achievable for realistic volumes because of the increase in
surface-to-volume ratio. The application of these strategies to the
improved staggered discretization was described in \cite{Shi:2011ipdps},
where strong scaling was achieved on up to 8 GPUs using a lattice
volume of \(24^3\times96\). Here the single dimensional
parallelization employed restricts scaling more severely than for
Wilson-clover because of the 3-hop stencil of the improved staggered
operator which decreases the locality of the operator.
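The effect of the partitioning scheme on the surface-to-volume ratio is easy to quantify. The following sketch counts halo (ghost) sites per GPU under two simplifying assumptions of our own: a depth-1 halo (the 3-hop improved staggered stencil would triple the surface term, which can be modeled by the `depth` parameter) and no corner contributions.

```python
def surface_to_volume(global_dims, grid, depth=1):
    """Ratio of halo (ghost) sites to local sites for a per-GPU
    sub-lattice; `grid` gives the number of GPUs along each of
    the X, Y, Z, T dimensions. Corner sites are ignored."""
    local = [g // p for g, p in zip(global_dims, grid)]
    volume = 1
    for d in local:
        volume *= d
    # Two faces (forward and backward) per partitioned dimension.
    surface = sum(2 * depth * volume // d
                  for d, p in zip(local, grid) if p > 1)
    return surface / volume
```

For the \(64^3\times192\) lattice on 64 GPUs, for example, a T-only split gives a ratio of \(2/3\), while a \(2\times2\times2\times8\) grid gives roughly \(0.27\) — which is why multi-dimensional partitioning becomes mandatory at scale.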
\section{Multi-dimensional partitioning} \label{s:MultiDim}
\subsection{General strategy}
Because lattice discretizations of the Dirac operator generally only couple
sites of the lattice that are nearby in spacetime, the first step in any
parallelization strategy is to partition the lattice.
As indicated above, prior to this work multi-GPU parallelization of
the QUDA library had been carried out with the lattice partitioned
along only a single dimension. The time ($T$) dimension was
chosen, first because typical lattice volumes are asymmetric with $T$
longest, and secondly because this dimension corresponds to the
slowest varying index in memory in our implementation, making it
possible to transfer the boundary face from GPU to host with a
straightforward series of memory copies. Going beyond this approach
requires much more general handling of data movement, computation,
and synchronization, as we explore here.
In the general case, upon partitioning the lattice each GPU is
assigned a 4-dimensional subvolume that is bounded by at most eight
3-dimensional ``faces.'' Updating sites of the spinor field on or
near the boundary requires data from neighboring GPUs. The data
received from a given neighbor is stored in a dedicated buffer on the
GPU which we will refer to as a ``ghost zone'' (since it shadows data
residing on the neighbor). Computational kernels are modified so as
to be aware of the partitioning and read data from the appropriate
location --- either from the array corresponding to the local subvolume
or one of the ghost zones. Significant attention is paid to
maintaining memory coalescing and avoiding thread divergence, as
detailed below.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{spinor_layout}
\end{center}
\caption{\label{fig:spinor_layout} Spinor field layout in host and GPU
memory for the staggered discretization (consisting of 6 floating
point numbers per site). Here Vh is half the local volume
of the lattice, corresponding to the number of sites in an even/odd
subset. Layout for the Wilson-clover discretization is similar,
wherein the spinor field consists of 24 floating point numbers per
site.}
\end{figure}
Ghost zones for the spinor field are placed in memory after the local
spinor field so that BLAS-like routines, including global reductions,
may be carried out efficiently. While the ghost spinor data for the T
dimension is contiguous and can be copied without a gather operation,
the ghost spinor data for the other three dimensions must be collected
into contiguous GPU memory buffers by a GPU kernel before it can be
transferred to host memory. The ghost zone buffers are then exchanged
between neighboring GPUs (possibly residing in different nodes). Once
inter-GPU communication is complete, the ghost zones are copied to
locations adjoining the local array in GPU memory. Allocation of
ghost zones and data exchange in a given dimension only takes place
when that dimension is partitioned, so as to ensure that GPU memory as well as
PCI-E and interconnect bandwidth are not wasted. Layout of the local
spinor field, ghost zones, and padding regions are shown in
Fig.~\ref{fig:spinor_layout}. The padding region is of adjustable
length and serves to reduce partition
camping~\cite{Clark:2009wm,Ruetsch:2009pc} on previous-generation
NVIDIA GPUs.\footnote{This is less a concern for the Tesla M2050 cards
used in this study, as the Fermi memory controller employs
address hashing to alleviate this problem.} The gauge field is
allocated with a similar padding region, and we use this space to
store ghost zones for the gauge field, which must only be transferred
once at the beginning of a solve. The layout is illustrated in
Fig.~\ref{fig:gauge_layout}.
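The gather step for the X, Y, and Z dimensions amounts to copying non-contiguous boundary slices into contiguous send buffers. A NumPy stand-in for the gather kernels (array-based, and ignoring the even/odd site ordering used in practice) is:

```python
import numpy as np

def gather_ghost_faces(field, dim, depth=1):
    """Collect the backward and forward boundary faces of `field`
    along dimension `dim` into contiguous send buffers, as the
    gather kernels do before the transfer to host memory."""
    lo = [slice(None)] * field.ndim
    hi = [slice(None)] * field.ndim
    lo[dim] = slice(0, depth)                        # face for the - neighbor
    hi[dim] = slice(field.shape[dim] - depth, None)  # face for the + neighbor
    return (np.ascontiguousarray(field[tuple(lo)]),
            np.ascontiguousarray(field[tuple(hi)]))
```

For the slowest-varying (T) dimension the faces are already contiguous in memory, which is why no gather is needed there.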
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{gauge_layout}
\end{center}
\caption{\label{fig:gauge_layout}Gauge field layout in host and GPU
memory. The gauge field consists of 18 floating point numbers per
site (when no reconstruction is employed) and is ordered on the GPU
so as to ensure that memory accesses in both interior and
boundary-update kernels are coalesced to the extent possible.}
\end{figure}
For communication, our implementation is capable of employing either
of two message-passing frameworks -- MPI or QMP. The latter ``QCD
message-passing'' standard was originally developed to provide a
simplified subset of communication primitives most used by LQCD codes,
allowing for optimized implementations on a variety of architectures,
including purpose-built machines that lack MPI. Here we rely on the
reference implementation, which serves as a thin layer over MPI itself
(but nevertheless serves a purpose as the communications interface used
natively by Chroma, for example). Accordingly, performance with the
two frameworks is virtually identical. At present, we assign GPUs to
separate processes which communicate via message-passing. Exploration
of peer-to-peer memory copies, recently added in CUDA 4.0, and host-side
multi-threading is underway.
\subsection{Interior and exterior kernels}
In~\cite{Babich:2010:PQL:1884643.1884695,Shi:2011ipdps}, where only
the time dimension of the lattice was partitioned, we separated the
application of the Dirac operator into two kernels, one to update
sites on the boundaries of the local sublattice (the \emph{exterior
kernel}) and one to perform all remaining work (the \emph{interior
kernel}). Here we extend this approach by introducing one exterior
kernel for every dimension partitioned, giving a total of four
exterior kernels in the most general case. The interior kernel
executes first and computes the spinors interior to the subvolume, as
well as any contributions to spinors on the boundaries that do not
require data from the ghost zones. For example, if a spinor is
located only on the T+ boundary, the interior kernel computes
contributions to this spinor from all spatial dimensions, as well as
that of the negative T direction. The contribution from the positive
T direction will be computed in the T exterior kernel using the ghost
spinor and gauge field from the T+ neighbor. Since spinors on the
corners belong to multiple boundaries, they receive contributions from
multiple exterior kernels. This introduces a data dependency between
exterior kernels, which must therefore be executed sequentially.
Another consideration is the ordering used for assigning threads to
sites. For the interior kernel and T exterior kernel, the
one-dimensional thread index (given in CUDA C by {\tt
(blockIdx.x*blockDim.x + threadIdx.x)}) is assigned to sites of the
four-dimensional sublattice in the same way that the spinor and gauge
field data is ordered in memory, with X being the fastest varying
index and T the slowest. It is thus guaranteed that all spinor and
gauge field accesses are coalesced. In the X,Y,Z exterior kernels,
however, only the destination spinors are indexed in this way, while
the ghost spinor and gauge field are indexed according to a different
mapping. This makes it impossible to guarantee coalescing for both
reads and writes; one must choose one order or the other for assigning
the thread index. We choose to employ the standard T-slowest mapping
for the X,Y,Z exterior kernels to minimize the penalty of uncoalesced
accesses, since a greater fraction of the data traffic comes from the
gauge field and source spinors.
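The T-slowest, X-fastest lexicographical mapping between the one-dimensional thread index and the four-dimensional site coordinates can be written down explicitly. This is an illustrative reimplementation, ignoring the even/odd (checkerboard) decomposition used by the actual kernels:

```python
def site_index(x, y, z, t, dims):
    """T-slowest / X-fastest lexicographical site index, matching
    the memory ordering of the spinor and gauge fields."""
    X, Y, Z, T = dims
    return ((t * Z + z) * Y + y) * X + x

def site_coords(idx, dims):
    """Inverse mapping from linear index to (x, y, z, t)."""
    X, Y, Z, T = dims
    x = idx % X
    y = (idx // X) % Y
    z = (idx // (X * Y)) % Z
    t = idx // (X * Y * Z)
    return x, y, z, t
```

Because consecutive thread indices differ only in x, loads through this mapping are coalesced, which is the property the interior and T exterior kernels rely on.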
\subsection{Computation, communication, and streams}
Our implementation employs CUDA streams to overlap computation with
communication, as well as to overlap GPU-to-host with inter-node
communication. Two streams per dimension are used, one for gathering
and exchanging spinors in the forward direction and the other in the
backward direction. One additional stream is used for executing the
interior and exterior kernels, giving a total of 9 streams as shown in
Fig.\ \ref{fig:overlap}. The gather kernels for all dimensions are
launched on the GPU immediately so that communication in all
directions can begin. The interior kernel is executed after all
gather kernels finish, overlapping completely with the communication.
We use different streams for different dimensions so that the
different communication components can overlap with each other,
including the device-to-host memory copy, the copy from pinned host
memory to pageable host memory, the MPI send and receive, the memory
copy from pageable memory to pinned memory on the receiving side, and
the final host-to-device memory copy. While the interior kernel can
be overlapped with communications, the exterior kernels must wait for
arrival of the ghost data. As a result, the interior kernel and
subsequent exterior kernels are placed in the same stream, and each
exterior kernel blocks waiting for communication in the corresponding
dimension to finish. For small subvolumes, the total communication
time over all dimensions is likely to exceed the interior kernel run
time, resulting in some interval when the GPU is idle (see
Fig.\ \ref{fig:overlap}) and thus degrading overall performance.
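This degradation can be captured by a deliberately pessimistic back-of-the-envelope model of our own, in which the per-dimension communication times accumulate against the single interior kernel they are overlapped with:

```python
def exposed_comm_time(t_interior, t_comm_per_dim):
    """Idle time when the summed per-dimension communication
    (gather, PCI-E transfer, MPI, scatter) outlasts the interior
    kernel it is overlapped with. Pessimistic: assumes the
    per-dimension transfers do not overlap with one another."""
    return max(0.0, sum(t_comm_per_dim) - t_interior)
```

For small subvolumes the interior kernel time shrinks faster than the communication time, so the exposed time grows and eventually dominates.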
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{overlap}
\end{center}
\caption{\label{fig:overlap}Usage of CUDA streams in the application
of the Dirac operator, illustrating the multiple stages of
communication. A single stream is used for the interior and
exterior kernels, and two streams per dimension are used for gather
kernels, PCI-E data transfer, host memory copies, and inter-node
communication.}
\end{figure}
When communicating over multiple dimensions with small subvolumes, the
communication cost dominates over computation, and so any reduction in
the communication is likely to improve performance. The two host
memory copies are required because GPU pinned memory is
not compatible with memory pinned by MPI implementations; GPU-Direct
\cite{gpudirect} was not readily available on the cluster used in this
study. We expect to be able to remove these extra memory copies in
the future when better support from GPU and MPI vendors is
forthcoming. CUDA 4.0, recently released, includes a promising
GPU-to-GPU direct communication feature that we will explore in the
future to further reduce the communication cost.
\section{Dirac operator performance}
\subsection{Hardware description}
For the numerical experiments discussed in this paper we used the Edge
visualization cluster installed at Lawrence Livermore National
Laboratory. Edge is comprised of a total of 216 nodes, of which 206
are compute nodes available for batch jobs. Each compute node is
comprised of dual-socket six-core Intel X5660 Westmere CPUs running at
2.8GHz and two NVIDIA Tesla M2050 GPUs, running with error correction
(ECC) enabled. The two GPUs share a single x16 PCI-E connection to the
I/O hub (IOH) via a switch. Eight of the remaining PCI-E lanes serve
a quad data rate (QDR) InfiniBand interface which can thus run at full
bandwidth. The compute nodes run a locally maintained derivative of a
CentOS 5 kernel with revision {\tt 2.6.18-chaos103}.
To build and run our software we used OpenMPI version 1.5 built on top
of the system GNU C/C++ compiler version 4.1.2. To build and link
against the QUDA library we used release
candidate 1 of CUDA version 4.0 with driver version 270.27.
\subsection{Wilson-clover}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{clover_dslash}
\end{center}
\caption{\label{fig:clover-dslash}Strong-scaling results for the Wilson-clover
operator in single (SP) and half (HP) precisions (\(V = 32^3\times256\), 12 gauge reconstruction).}
\end{figure}
In Fig.\ \ref{fig:clover-dslash}, we show the strong scaling of the
Wilson-clover operator on up to 256 GPUs. We see significant departures
from ideal scaling for more than 32 GPUs, as increasing the surface-to-volume
ratio increases the amount of time spent in communication, versus
computation. It seems that, for more than 32
GPUs, we are no longer able to sufficiently overlap computation with
communication, and the implementation becomes communications
bound. We note also that as the communications overhead grows,
the performance advantage of the half precision operator over the
single precision operator appears diminished. The severity of the
scaling violations seen here highlights the imbalance between the
communications and compute capability of GPU clusters. To overcome
this constraint, algorithms which reduce communication, such
as the domain-decomposition algorithms described below, are absolutely
essential.
\subsection{Improved staggered}
In Fig.\ \ref{fig:asqtad-dslash}, we plot the performance per GPU for
the asqtad volume used in this study. A number of interesting
observations can be made about this plot. At a relatively low number
of GPUs, where we are less communications-bound, having faster kernel
performance is more important than the optimal surface-to-volume
ratio. As the number of GPUs is increased, the minimization of the
surface-to-volume ratio becomes increasingly important, and the XYZT
partitioning scheme, which has the worst single-GPU performance,
obtains the best performance on 256 GPUs.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{stag_dslash}
\end{center}
\caption{\label{fig:asqtad-dslash}Strong-scaling performance
for the asqtad operator in double (DP) and single (SP) precision.
The legend labels denote which dimensions are partitioned between
GPUs (\(V = 64^3\times192\), no gauge reconstruction). }
\end{figure}
\section{Building scalable solvers}
\label{s:Solver}
\subsection{Wilson-clover additive Schwarz preconditioner}
The poor scaling of the application of the Wilson-clover matrix at a
high number of GPUs at this volume motivates the use of
communication-reducing algorithms. In this initial work, we
investigate using a non-overlapping additive Schwarz preconditioner
for GCR. In the text that follows, we refer to this algorithm as
GCR-DD.
Implementation of the preconditioner is simple: essentially, we just
have to switch off the communications between GPUs. This means that
in applying the Dirac operator, the sites that lie along the
communication boundaries only receive contributions from sites local
to that node. Additionally, since the solution in each domain is
independent from every other, the reductions required in each of the
domain-specific linear solvers are restricted to that domain only.
As a result, the solution of the preconditioner linear system
will operate at similar efficiency to the equivalent single-GPU
performance at this local volume. The imposition of the Dirichlet
boundary conditions upon the local lattice leads to a vastly reduced
condition number. This, coupled with the fact that only a very loose
approximation of the local system matrix is required, means that only
a small number of steps of minimum residual (MR) are required to
achieve satisfactory accuracy.
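As an illustration, a bare minimal-residual iteration looks as follows (NumPy, for a real matrix whose symmetric part is positive definite; the relaxation parameter and function name are our own choices). In the preconditioner it is run for only a handful of steps:

```python
import numpy as np

def mr_solve(A, b, steps=10, omega=1.0):
    """A few steps of minimal residual (MR): each step minimizes
    the residual norm along the direction A r."""
    x = np.zeros_like(b)
    r = b.copy()
    for _ in range(steps):
        Ar = A @ r
        denom = np.dot(Ar, Ar)
        if denom == 0.0:
            break  # residual is in the null space of A; stop
        alpha = omega * np.dot(Ar, r) / denom
        x += alpha * r
        r -= alpha * Ar
    return x
```

Because the Dirichlet blocks are well conditioned, even such a crude inner solve is adequate as a preconditioner.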
The outer GCR solver builds a Krylov subspace, within which the
residual is minimized and the corresponding solution estimate is
obtained. Unlike solvers with explicit recurrence relations, e.g.,
CG, the GCR algorithm for the general matrix problem requires explicit
orthogonalization at each step with respect to the previously
generated Krylov space. Thus, the size of the Krylov space is limited
by the computational and memory costs of orthogonalization. After the
limit of the Krylov space has been reached, the algorithm is
restarted, and the Krylov space is rebuilt.
Similar to what was reported for BiCGstab~\cite{Clark:2009wm}, we
have found that using mixed precision provides significant
acceleration of the GCR-DD solver algorithm. We exclusively use
half precision for solving the preconditioned system. This is natural
since only a very loose approximation is required. Additionally, the
restarting mechanism of GCR provides a natural opportunity for using
mixed precision: the Krylov space is built up in low precision and
restarted in high precision. This approach also conserves the limited
GPU memory, allowing for larger Krylov spaces to be built, albeit in
lower precision. We follow the implicit solution update scheme
described in \cite{Luscher:2003qa} since this reduces the
orthogonalization overhead. For the physics of interest, the
inherent noise present in the Monte Carlo gauge generation process is such
that single-precision accuracy is sufficient. Thus we have found best
performance using a single-half-half solver, where the GCR restarting
is performed in single precision, the Krylov space construction and
accompanying orthogonalization is done in half precision, and
the preconditioner is solved in half precision. In minimizing
the residual in half precision, there is the inherent risk of the iterated
residual straying too far from the true residual. Thus, we have added
an early termination criterion for the Krylov subspace generation,
where if the residual is decreased by more than a given tolerance
\(\delta\) from the start of the Krylov subspace generation, the
algorithm is restarted. The mixed-precision GCR-DD solver is
illustrated in Algorithm \ref{alg:gcr}.
\begin{algorithm}[htb]
\SetKwData{True}{true}
\SetAlgoLined
\DontPrintSemicolon
\(k=0\)\;
\(r_0 = b - Mx_0\)\;
\(\hat{r}_0 = r_0\)\;
\(\hat{x} = 0\)\;
\While{\(||r_0|| > tol\)}{
\(\hat{p}_k = \hat{K} \hat{r}_k\)\;
\(\hat{z}_k = \hat{M}\hat{p}_k\)\;
\tcp{Orthogonalization}
\For{$i\leftarrow 0$ \KwTo $k-1$}{
\(\beta_{i,k} = (\hat{z}_i, \hat{z}_k)\)\;
\(\hat{z}_k = \hat{z}_k - \beta_{i,k}\hat{z}_i\)\;
}
\(\gamma_k = ||\hat{z}_k||\)\;
\(\hat{z}_k = \hat{z}_k / \gamma_k\)\;
\(\alpha_k = (\hat{z}_k, \hat{r}_k)\)\;
\(\hat{r}_{k+1} = \hat{r}_k - \alpha_{k}\hat{z}_k\)\;
\(k=k+1\)\;
\BlankLine
\tcp{High precision restart}
\If{\(k=k_{max}\) {\bf or} \(||\hat{r}_k||/||r_0|| < \delta\) {\bf or} \(||\hat{r}_k|| < tol \)} {
\For{$l\leftarrow k-1$ {\bf down} \KwTo \(0\)}{
{\bf solve} \(\gamma_l\chi_l + \sum_{i=l+1}^{k-1} \beta_{l,i} \chi_i = \alpha_l\) {\bf for} \(\chi_l\)\;
}
\(\hat{x} = \sum_{i=0}^{k-1} \chi_i \hat{p}_i\)\;
\(x\) = \(x+\hat{x}\)\;
\(r_{0}\) = \(b - Mx\)\;
\(\hat{r}_0 = r_0\)\;
\(\hat{x} = 0\)\;
\(k=0\)\;
}
}
\label{alg:gcr}
\caption{Mixed-precision GCR-DD solver. Low-precision fields are
indicated with a hat (\(\;\hat{}\;\)); e.g., \(x\) and \(\hat{x}\)
correspond to high- and low-precision respectively. The
domain-decomposed preconditioner matrix is denoted by \(K\), and the
desired solver tolerance is given by \(tol\). The parameter
\(k_{max}\) denotes the maximum size of the Krylov subspace.}
\end{algorithm}
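For concreteness, a serial NumPy transcription of Algorithm \ref{alg:gcr} is given below, using float32 for the low-precision fields and float64 for the high-precision restart. The `precond` argument stands in for the domain-decomposed solve \(\hat{K}\); this is a sketch of the algorithmic structure only, not of the QUDA implementation.

```python
import numpy as np

def gcr_dd(A, b, precond, tol=1e-6, k_max=20, delta=1e-2):
    """Mixed-precision restarted GCR: the Krylov space is built in
    low (float32) precision and the solution and residual are
    restarted in high (float64) precision."""
    x = np.zeros_like(b, dtype=np.float64)
    A_lo = A.astype(np.float32)
    r0 = b - A @ x
    b_norm = np.linalg.norm(b)
    while np.linalg.norm(r0) > tol * b_norm:
        r_hat = r0.astype(np.float32)
        r_start = np.linalg.norm(r_hat)
        p, z, alpha, gamma, beta = [], [], [], [], {}
        k = 0
        while True:
            p_k = precond(r_hat).astype(np.float32)
            z_k = A_lo @ p_k
            for i in range(k):                 # orthogonalization
                beta[i, k] = np.dot(z[i], z_k)
                z_k = z_k - beta[i, k] * z[i]
            g = np.linalg.norm(z_k)
            z_k = z_k / g
            a = np.dot(z_k, r_hat)
            r_hat = r_hat - a * z_k
            p.append(p_k); z.append(z_k)
            alpha.append(a); gamma.append(g)
            k += 1
            r_norm = np.linalg.norm(r_hat)
            if k == k_max or r_norm < delta * r_start or r_norm < tol * b_norm:
                break
        chi = np.zeros(k)                      # back-substitution
        for l in range(k - 1, -1, -1):
            s = sum(beta[l, i] * chi[i] for i in range(l + 1, k))
            chi[l] = (alpha[l] - s) / gamma[l]
        x = x + sum(chi[l] * p[l].astype(np.float64) for l in range(k))
        r0 = b - A @ x                         # high-precision restart
    return x
```

Note how every restart recomputes the true residual in high precision, which is what keeps the half-precision Krylov construction from limiting the final accuracy.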
\subsection{Improved staggered}
While multi-shift CG solvers can lead to significant speedups
on traditional architectures, their use is less clear-cut on GPUs:
multi-shift solvers cannot be {\it restarted}, meaning that using a
mixed-precision strategy is not possible; the extra BLAS1-type linear
algebra incurred is extremely bandwidth intensive and so can reduce
performance significantly; multi-shift solvers come with much larger
memory requirements since one has to keep both the \(N\) solution and
direction vectors in memory.
With these restrictions in mind, we have employed a modified
multi-shift solver strategy where we solve Equation
(\ref{eq:multishift}) using a pure single-precision multi-shift CG
solver and then use mixed-precision sequential CG, refining each of
the \(x_i\) solution vectors until the desired tolerance has been
reached.\footnote{Unfortunately, such an algorithm is not amenable to
the use of half precision since the solutions produced from the
initial multi-shift solver would be too inaccurate, demanding a
large degree of correction in the subsequent sequential
refinements.} This allows us to perform most of the operations in
single-precision arithmetic while still achieving double-precision
accuracy. Since double precision is not introduced until after the
multi-shift solver is completed, the memory requirements are much
lower than if a pure double-precision multi-shift solver were
employed, allowing the problem to be solved on a smaller number of
GPUs. When compared to doing just sequential mixed-precision CG, the
sustained performance measured in flops is significantly lower because
of the increased linear algebra; however, the time to solution is
significantly shorter.
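The refinement strategy can be sketched as follows: a single-precision CG (standing in here for the single-precision multi-shift solve) produces initial guesses, which are then polished by double-precision defect correction with single-precision inner solves. Function names and tolerances are our own illustrative choices.

```python
import numpy as np

def cg32(A, b, tol=1e-5, max_iter=500):
    """Plain conjugate gradients, carried out in single precision."""
    A = A.astype(np.float32)
    b = b.astype(np.float32)
    b_norm = np.linalg.norm(b)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = np.dot(r, r)
    for _ in range(max_iter):
        Ap = A @ p
        a = rr / np.dot(p, Ap)
        x += a * p
        r -= a * Ap
        rr_new = np.dot(r, r)
        if np.sqrt(rr_new) < tol * b_norm:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def refine_shifted(A, b, shifts, x0s, tol=1e-10):
    """Sequential mixed-precision refinement: polish each
    single-precision solution x0 of (A + sigma) x = b with
    double-precision defect correction, using single-precision
    CG for the inner solves."""
    out = []
    for sigma, x0 in zip(shifts, x0s):
        M = A + sigma * np.eye(A.shape[0])
        x = x0.astype(np.float64)
        for _ in range(50):                    # defect-correction cycles
            r = b - M @ x                      # high-precision residual
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            x = x + cg32(M, r).astype(np.float64)
        out.append(x)
    return out
```

Since double precision enters only through the residual computation and the accumulated correction, the memory footprint stays close to that of a pure single-precision solve, as described above.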
\section{Solver performance results} \label{s:Results}
\subsection{Wilson-clover}
Our Wilson-clover solver benchmarks were run with the QUDA library
being driven by the Chroma \cite{Edwards:2004sx} code. The solves were
performed on a lattice of volume $32^3 \times 256$ sites from
a recent large scale production run, spanning several facilities
including Cray machines at NICS and OLCF, as well as BlueGene/L facilities
at LLNL and a BlueGene/P facility at Argonne National Laboratory
(ANL). The quark mass used in the generation of the configuration
corresponds to a pion mass of $\simeq$230 MeV in physical units
\cite{Lin:2008pr}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{clover_flops}
\end{center}
\caption{\label{fig:clover_flops}Sustained strong-scaling performance in
Gflops of the Wilson-clover mixed precision BiCGstab and GCR-DD solvers (\(V =
32^3\times 256\), 10 steps of MR used to evaluate the preconditioner).}
\end{figure}
In Fig.\ \ref{fig:clover_flops}, we plot the sustained performance in
Tflops of both the BiCGstab and GCR-DD solvers. For the BiCGstab
solver, we can see that despite the multi-dimensional parallelization,
we are unable to effectively scale past 32 GPUs because of the
increased surface-to-volume ratio. The GCR-DD solver does not suffer
from such problems and scales to 256 GPUs. As described above, the
raw flop count is not a good metric of actual speed since the
iteration count is a function of the local block size. In
Fig.\ \ref{fig:clover_time}, we compare the actual time to solution
between the two solvers. While at 32 GPUs BiCGstab is a superior
solver, past this point GCR-DD exhibits significantly reduced time to
solution, improving performance over BiCGstab by 1.52x, 1.63x, and
1.64x at 64, 128, and 256 GPUs respectively. Despite the improvement
in scaling, we see that at 256 GPUs we have reached the limit of this
algorithm. While we have vastly reduced communication overhead by
switching to GCR-DD, there is still a significant fraction of the
computation that requires full communication. This causes an Amdahl's
law effect to come into play, which is demonstrated by the fact that
the slope of the slowdown for GCR and BiCGstab is identical in moving
from 128 to 256 GPUs. Additionally, we note that if we perform a
single-GPU run with the same per-GPU volume as considered here for 256
GPUs, performance is almost a factor of two slower than that for a run
corresponding to 16 GPUs. Presumably this is due to the GPU not being
completely saturated at this small problem size.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{clover_time}
\end{center}
\caption{\label{fig:clover_time} Sustained strong-scaling time to
solution in seconds of the Wilson-clover mixed precision BiCGstab and GCR-DD solvers (\(V =
32^3\times 256\), 10 steps of MR used to evaluate the preconditioner).}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{arch_compare}
\end{center}
\caption{\label{fig:arch_compare} Strong-scaling benchmarks on a
lattice of size $32^3 \times 256$ from Cray XT4 (Jaguar), Cray XT5
(JaguarPF) and BlueGene/P (Intrepid). Solves were done to
double-precision accuracy. The Cray solvers used mixed
(double-single) precision; the BG/P solver used pure double
precision.}
\end{figure}
In terms of raw performance, it can be seen that the GCR-DD solver
achieves greater than 10 Tflops on partitions of 128 GPUs
and above. Thinking more conservatively, one can use the improvement
factors in the time to solution between BiCGstab and GCR to assign an
``effective BiCGstab performance'' number to the GCR solves. On 128 GPUs
GCR performs as if it were BiCGstab running at 9.95 Tflops, whereas on
256 GPUs it is as if it were BiCGstab running at 11.5 Tflops. To put
the performance results reported here into perspective, in
Fig.\ \ref{fig:arch_compare} we show a strong-scaling benchmark from a
variety of leadership computing systems on a lattice of the same
volume as used here. Results are shown for the Jaguar Cray XT4
(recently retired) and Jaguar PF Cray XT5 systems at OLCF, as well as the
Intrepid BlueGene/P facility at ANL. The performance range of 10-17
Tflops is attained on partitions of size greater than 16,384 cores on
all these systems. Hence, we believe it is fair to say that the
results obtained in this work are on par with capability-class
systems.
\subsection{Improved staggered}
The results for improved staggered fermions were obtained using the
QUDA library driven by the publicly available MIMD Lattice
Collaboration (MILC) code~\cite{MILC}. The \(64^3\times 192\) gauge
fields used for this study correspond to a pion mass of $\simeq$320
MeV in physical units~\cite{Bazavov:2009bb}.
In Fig.\ \ref{fig:stag_cg}, we plot the performance of the mixed-precision
multi-shift CG algorithm. When running the full solver, the minimum
number of GPUs that can accommodate the task is 64. Reasonable strong
scaling is observed, where we achieve a speed-up of 2.56x in moving
from 64 to 256 GPUs.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{stag_cg}
\end{center}
\caption{\label{fig:stag_cg}Sustained strong-scaling performance in
Gflops of the asqtad mixed-precision multi-shift solver. The legend
labels denote which dimensions are partitioned between GPUs (\(V =
64^3\times 192\)).}
\end{figure}
With 256 GPUs, we achieve 5.49 Tflops with double-single mixed
precision. In Ref.\ \cite{Shi:2011ipdps}, we observed an approximately
20\% increase in iteration count for the mixed precision solver,
compared to the pure double-precision one. To put this in
perspective, the CPU version of MILC running on Kraken, a Cray XT5
system housed at NICS, achieves 942 Gflops with 4096 CPU cores for the
double precision multi-shift solver. This means one GPU computes
approximately as fast as 74 CPU cores in large-scale runs.
\section{Conclusions} \label{s:Conclusions}
Our main result is to demonstrate that by the use of
multi-dimensional parallelization and an additive Schwarz
preconditioner, the Wilson-clover solver for lattice QCD can be
successfully strong-scaled to over 100 GPUs. This is a significant
achievement demonstrating that GPU clusters are capable of delivering
in excess of 10 teraflops of performance, a minimal capability
required to apply GPU clusters to the generation of lattice ensembles.
Additionally, multi-dimensional parallelization has enabled the use of
GPUs for asqtad solvers at leading-edge lattice volumes, a feat which
was previously not possible because of the decreased locality of the
asqtad operator.
Clearly, the present use of a simple non-overlapping additive Schwarz
preconditioner is only a first step. It is very likely that more
sophisticated methods, with overlapping domains or multiple levels of
Schwarz-type blocking to exploit the multiple levels of
memory locality that a GPU cluster offers, can be devised to improve
the scaling substantially. Moreover, we view GPUs and the use of the
Schwarz preconditioner as parts of a larger restructuring of
algorithms and software to address the inevitable future of
heterogeneous architectures with deep memory hierarchies. We
anticipate that the arsenal of tools needed for the future of lattice
QCD and similarly structured problems (e.g., finite difference
problems, material simulations, etc.) at the exascale will include
domain decomposition, mixed-precision solvers and data
compression/recomputation strategies.
\section{Acknowledgments}
We gratefully acknowledge the use of the Edge GPU cluster at Lawrence
Livermore National Laboratory. This work was supported in part by NSF
grants OCI-0946441, OCI-1060012, OCI-1060067, and PHY-0555234, as well
as DOE grants DE-FC02-06ER41439, DE-FC02-06ER41440, DE-FC02-06ER41443,
DE-FG02-91ER40661, and DE-FG02-91ER40676. BJ additionally
acknowledges support under DOE grant DE-AC05-06OR23177, under which
Jefferson Science Associates LLC manages and operates Jefferson Lab.
GS is funded through the Institute for Advanced Computing Applications
and Technologies (IACAT) at the University of Illinois at
Urbana-Champaign. The U.S. Government retains a non-exclusive,
paid-up, irrevocable, world-wide license to publish or reproduce this
manuscript for U.S. Government purposes.
\bibliographystyle{utcaps}
\section{Introduction}
The Standard Model (SM) has been very successful in predicting and
fitting all the experimental measurements to date, over
energies spanning many orders of magnitude\cite{Beringer:1900zz}.
Unfortunately the SM is only a patchwork where several sectors
remain totally unconnected. Flavor physics, for example, involves
quark masses, mixing angles and CP-violating phases appearing in
the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing
matrix\cite{Cabibbo:1963yz,Kobayashi:1973fv}. These parameters
matrix\cite{Cabibbo:1963yz,Kobayashi:1973fv}. These parameters
unavoidably have to be measured and are independent of the
parameters present in other sectors like electroweak symmetry
breaking, Quantum Chromodynamics, etc. Other sectors remain to be
tested, like CP violation in the up-quark sector, and tensions
with experimental measurements remain to be clarified (see for
instance
refs.\cite{Bona:2009cj,Hara:2010dk,Aubert:2009wt,Lees:2012xj}).
This is why it is important to find processes where the SM
predictions are very well known and a simple measurement can
reveal a discrepancy. Examples are rare decays and
other \lq null' tests, which correspond to observables strictly
equal to zero within the SM. Any deviation from zero in these \lq
null' test observables is therefore a clear signal of physics beyond the SM.
This is the case of Cabibbo-Favored (CF) and Double Cabibbo
Suppressed (DCS) non-leptonic charm decays where the direct CP
violation is very suppressed given that penguin diagrams are
absent\cite{Ryd:2009uf,Artuso:2008vf,Antonelli:2009ws}.
Even with the observation of $D^0$ oscillations
\cite{Aaltonen:2007ac,Staric:2007dt,Aubert:2007wf,:2012di,Lees:2012qh,Asner:2012xb}
and the first signal of CP violation in $D\to 2\pi,\ 2K$ (Singly
Cabibbo-Suppressed (SCS) modes)
\cite{Frabetti:1993fw,Aitala:1996sh,Aitala:1997ff,Bonvicini:2000qm,Link:2001zj,Csorna:2001ww,Arms:2003ra,
Acosta:2004ts,Aubert:2006sh,Aubert:2007if,Staric:2008rx,Mendez:2009aa,Ko:2010ng,Aaij:2011in,Collaboration:2012qw,
Charles:2012rn}, it is not clear that the SM
\cite{Burdman:2001tf,Grossman:2006jg,Golowich:2006gq,Golowich:2007ka,Bigi:2011re,Hochberg:2011ru,
Brod:2011re,Cheng:2012wr,Giudice:2012qq} can correctly describe
CP violation in the up-quark sector. The task is made even more
difficult by long-distance contributions, which are important and
hard to evaluate
\cite{Donoghue:1985hh,Golowich:1998pz,Falk:2004wg,Cheng:2010rv,Gronau:2012kq}.
From the point of view of New Physics (NP), CP violation in CF and
DCS modes is an excellent opportunity given that it is very
suppressed in the SM and it is not easy to find a NP model able to
produce a sizeable CP violation signal. Thus observing CP
violation in these channels would be a very clear signal of New Physics.
Up to now, only $D^0\leftrightarrow \bar D^0$ oscillations have
been observed and their parameters have been
measured\cite{Beringer:1900zz,Aaltonen:2007ac,Staric:2007dt,Aubert:2007wf,:2012di,Lees:2012qh,Asner:2012xb}:
\begin{eqnarray}
x\equiv {\Delta m_D\over \Gamma_D}=0.55^{+0.12}_{-0.13} \ \ & , & \ \ y\equiv {\Delta \Gamma_D\over 2\Gamma_D}=0.83(13), \\
\left|{q\over p}\right|=0.91^{+0.18}_{-0.16} \ \ &,&\ \ \phi\equiv {\rm arg}\ (q/p) =-\left(10.2^{+9.4}_{-8.9}\right)^\circ
\end{eqnarray}
where $x\neq 0$ or/and $y\neq 0$ mean oscillations have been
observed, while $|q/p|\neq 1$ and/or $\phi\neq 0$ are necessary to
have CP violation. Theoretical estimates of these
parameters\cite{Beringer:1900zz} are not easy and carry large
uncertainties, given that the $c$ quark is not heavy enough to
apply the heavy quark expansion (HQE) (as in $B$
physics)\cite{Georgi:1992as}, nor light enough to
use Chiral Perturbation Theory (CPTh) (as in kaon physics).
Besides there are cancellations due to the GIM
mechanism\cite{Cabibbo:1963yz,Glashow:1970gm}. Theoretically, CP
violation in the charm sector is smaller than in the $B$ and kaon
sectors. This is due to a combination of factors: the CKM matrix elements
($\left|V_{ub}V_{cb}^*/V_{us}V_{cs}^*\right|^2\sim 10^{-6}$) and
the fact that the $b$ quark mass is small compared to the top mass. CP
violation in the $b$-quark sector is driven by the large top quark
mass, while in the kaon sector it comes from a combination of the charm and
top quarks.
Experimental data should improve within the next years with
LHCb \cite{Gersabeck:2011zz} and the different charm factory
projects \cite{Aushev:2010bq}. In Table \ref{table1} the
experimentally measured branching ratios and CP asymmetries are
given for different non-leptonic $D$ decays.
\begin{table}
\centering
\begin{tabular}{|l|l|l||l|l|l|} \hline
Mode & BR[\%] &$A_{\rm CP}$ [\%] & Mode & BR[\%] &$A_{\rm CP}$ [\%] \\ \hline
$D^0 \to K^-\pi^+$\ CF &3.95(5) & - &
$D^0 \to \bar K^0\pi^0$\ CF &2.4(1) & - \\ \hline
$D^0 \to \bar K^0\eta$\ CF &0.96(6) & - &
$D^0 \to \bar K^0\eta'$\ CF &1.90(11) & - \\ \hline
$D^+ \to \bar K^0\pi^+$\ CF &3.07(10) & - &
$D_s^+ \to K^+\bar K^0$\ CF &2.98(8) & - \\ \hline
$D_s^+ \to \pi^+\eta$\ CF &1.84(15) & - &
$D_s^+ \to \pi^+\eta'$\ CF &3.95(34) & - \\ \hline \hline
$D^0 \to K^+\pi^-$\ DCS &$1.48(7)\cdot 10^{-4}$ & - &
$D^0 \to K^0\pi^0$\ DCS &- & - \\ \hline
$D^0 \to K^0\eta$\ DCS &- & - &
$D^0 \to K^0\eta'$\ DCS &- & - \\ \hline
$D^+ \to K^0\pi^+$\ DCS &- & - &
$D^+ \to K^+\pi^0$\ DCS &$1.72(19)\cdot 10^{-2}$ & - \\ \hline
$D^+ \to K^+\eta$\ DCS &$1.08(17)\cdot 10^{-2}$ & - &
$D^+ \to K^+\eta'$\ DCS &$1.76(22)\cdot 10^{-2}$ & - \\ \hline
$D_s^+ \to K^+K^0$\ DCS &- & - &&& \\ \hline\hline
$D^0 \to \pi^-\pi^+$ &0.143(3) & 0.22(24)(11) & & & \\ \hline
$D^0 \to K^- K^+$ &0.398(7) & -0.24(22)(9) &
$A_{\rm CP}(K^+K^-)-A_{\rm CP}(\pi^+\pi^-)$ & -- & -0.65(18) \\ \hline
$D^+ \to K_S^0\pi^+$ &1.47(7) & -0.71(19)(20) &
$D^\pm \to \pi^+\pi^- \pi^\pm$ &0.327(22) & 1.7(42) \\ \hline
$D^\pm \to K^\mp\pi^\pm \pi^\pm$ &9.51(34) & -0.5(4)(9) &
$D^\pm \to K_s^0\pi^\pm \pi^0$ &6.90(32) & 0.3(9)(3) \\ \hline
$D^\pm \to K^+K^- \pi^\pm$ &0.98(4) & 0.39(61) & & & \\ \hline
\end{tabular}
\caption{Direct CP violation in $D$ non-leptonic decays, from the Heavy Flavor Averaging Group (HFAG) \cite{hafg,Beringer:1900zz}}
\label{table1}
\end{table}
In this paper, we study in detail the CP asymmetry for the CF
$D^0 \rightarrow K^- \pi^+$ decay. In sect. II, we give the
general description of the effective Hamiltonian describing this
decay within the SM and show how to evaluate the strong phases
needed to obtain CP-violating observables. These strong phases are
generated through Final State Interaction (FSI). In sect. III, we
evaluate the SM prediction for the CP asymmetry and show that,
within the SM, such a CP asymmetry is far below experimental reach. In
sect. IV, New Physics models are introduced and their
contributions to the CP asymmetry are evaluated. Finally, we conclude
in sect. V.
\section{ General Description of CF non leptonic $D^0$ decays into $K^-$ and $\pi^+$}
In general the Hamiltonian describing $D^0 \to K^- \pi^+$ is given by
\begin{eqnarray}
{\cal H}_{\rm eff} &=& {G_F\over\sqrt{2}}V_{cs}^*V_{ud} \left[\sum_{i,\ a,\ b} c_{1ab}^i \bar s\Gamma^ic_a\bar u \Gamma_id_b+ \sum_{i,\ a,\ b} c_{2ab}^i \bar u\Gamma^ic_a\bar s \Gamma_id_b \right]
\end{eqnarray}
with $i=$S, V and T for scalar (S), vector (V) and tensor (T)
operators respectively. The Latin indices $a,\ b=L,\ R$ and
$q_{L,\ R}=(1\mp \gamma_5)q$.
Within the SM, only two operators contribute to the effective
Hamiltonian for this
process\cite{Ryd:2009uf,Artuso:2008vf,Antonelli:2009ws}. The
remaining operators can only be generated through new physics.
\begin{eqnarray}
{\cal H} &=& {G_F\over\sqrt{2}}V_{cs}^*V_{ud}\left(c_1\bar s\gamma_\mu c_L\bar u \gamma^\mu d_L+c_2\bar u \gamma_\mu c_L\bar s\gamma^\mu d_L\right)+{\rm h.c.} \\
&=& {G_F\over\sqrt{2}}V_{cs}^*V_{ud}\left(c_1{\cal O}_1+c_2{\cal O}_2\right)+{\rm h.c.}
\label{SMH}
\end{eqnarray}
where $a_1\equiv c_1+ c_2/N_c =1.2\pm 0.1$ and $a_2\equiv
c_2-c_1/N_c=-0.5\pm
0.1$\cite{Ryd:2009uf,Artuso:2008vf,Antonelli:2009ws}, with $N_c$
the number of colors. For the case $D\to
K\pi$\cite{Ryd:2009uf,Artuso:2008vf,Antonelli:2009ws} one has that
\begin{eqnarray}
A_{D^0\to K^-\pi^+} &=&-i{G_F\over \sqrt{2}}V_{cs}^*V_{ud} \left[a_1X^{\pi^+}_{D^0K^-}+a_2X^{D^0}_{K^-\pi^+} \right], \label{1}\\
BR&=&{\tau_D p_K\over 8\pi m_D^2}|A|^2
\end{eqnarray}
where BR is the branching ratio of the process, $\tau_D$ is the $D$
lifetime, $p_K$ is the kaon momentum and $m_D$ is the $D$ meson
mass. The amplitudes $X^{\pi^+}_{D^0K^-}$ and $X^{D^0}_{K^-\pi^+}$ can be
expressed in the following way:
\begin{eqnarray}
X^{P_1}_{P_2P_3}= if_{P_1}\Delta_{P_2P_3}^2 F_0^{P_2P_3}(m_{P_1}^2),\ \Delta_{P_2P_3}^2=m_{P_2}^2-m_{P_3}^2
\end{eqnarray}
where $f_{\pi}$ and $f_{D}$ are the decay constants of the $\pi$ and
$D$ mesons respectively, and $F_0^{DK}$ and $F_0^{K\pi}$ are the
corresponding form factors. These amplitudes have been computed
within the so-called naive factorization approximation (NFA)
without including Final State Interaction (FSI). In the NFA no
strong CP-conserving phases are obtained (and therefore no CP
violation is predicted), but it is well known that FSI effects are very
important in these channels
\cite{Buccella:1994nf,Falk:1999ts,Rosner:1999xd,Gao:2006nb,Cheng:2010ry}.
In principle there are many FSI contributions: resonances, other
intermediate states, rescattering, and so on. Resonances are
especially important in this region, where they are abundant.
They can be included and seem to produce appropriate strong
phases \cite{Cheng:2010ry}. However, the other contributions
mentioned above have to be included too, rendering the theoretical
prediction cumbersome. A more practical, although less
predictive, approach is obtained by fitting the experimental data
\cite{Buccella:1994nf,Cheng:2010ry}. This is the so-called quark
diagram approach, within which the amplitude is
decomposed into parts corresponding to generic quark diagrams.
The main contributions are the tree-level quark diagram (T),
the exchange quark diagram (E), and the color-suppressed quark diagram (C).
The results can be summarized in the following way for the
process under consideration\cite{Cheng:2010ry}:
\begin{eqnarray}
A_{D^0\to K^-\pi^+} &\equiv & V_{cs}^*V_{ud}(T+E) \label{4}
\end{eqnarray}
with \begin{eqnarray}
T &=& (3.14\pm 0.06)\cdot 10^{-6}{\rm GeV}\nonumber \\
E &=& 1.53^{+0.07}_{-0.08}\cdot 10^{-6}\cdot {\rm e}^{(122\pm 2)^\circ\ i}\ {\rm GeV} \label{amplitude}
\end{eqnarray}
where in NFA they can be approximately written as
\begin{eqnarray}
T& \simeq & {G_F\over \sqrt{2}}a_1f_\pi(m_D^2-m_K^2)F_0^{DK}(m_\pi^2) \\
E& \simeq & -{G_F\over \sqrt{2}}a_2f_D(m_K^2-m_\pi^2)F_0^{K\pi}(m_D^2)
\end{eqnarray}
In the rest of this work we use the values obtained
from the experimental fit, given in eq. (\ref{amplitude}).
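As a numerical cross-check of the branching-ratio formula above with the fitted amplitudes of eq. (\ref{amplitude}), the sketch below evaluates BR$(D^0\to K^-\pi^+)$ directly. The meson masses, $D^0$ lifetime and $|V_{cs}V_{ud}|$ used here are assumed PDG-like inputs not quoted in the text; the sketch is illustrative only.

```python
import cmath
import math

# Fitted amplitudes of eq. (amplitude) [GeV]
T = 3.14e-6
E = 1.53e-6 * cmath.exp(1j * math.radians(122))

# Assumed inputs (GeV; lifetime in s converted with hbar = 6.582e-25 GeV s)
m_D, m_K, m_pi = 1.8648, 0.4937, 0.1396
tau_D = 4.101e-13 / 6.582e-25      # D0 lifetime in GeV^-1
Vcs_Vud = 0.973 * 0.974            # |V_cs V_ud|

# Kaon momentum in the D rest frame (two-body phase space)
p_K = math.sqrt((m_D**2 - (m_K + m_pi)**2)
                * (m_D**2 - (m_K - m_pi)**2)) / (2 * m_D)

A = Vcs_Vud * (T + E)              # eq. (4)
BR = tau_D * p_K / (8 * math.pi * m_D**2) * abs(A)**2
print(f"BR(D0 -> K- pi+) ~ {BR:.3f}")  # close to the measured 3.95e-2
```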
\section{ CP asymmetry in $D^0 \to K^- \pi^+$ within SM}
\begin{figure}[tbhp]
\includegraphics[width=4.5cm]{Wbox}\hspace{1.cm}
\caption{Feynman diagram for CF processes: Box contribution.}
\end{figure}
In the case of CF (and DCS) processes the corrections are very
small (see the diagrams in figs. 1 and 2); they are generated through
box and di-penguin
diagrams\cite{Donoghue:1986cj,Petrov:1997fw,box}. In this section,
we evaluate these contributions.
The box contribution is given as \cite{box,He:2009rz}
\begin{eqnarray}
\Delta {\cal H} &=& \frac{G_F^2m_W^2}{ 2\pi^2} V_{cD}^*V_{uD} V_{Us}^*V_{Ud}f(x_U,\ x_D) \bar u\gamma_\mu c_L\bar s\gamma^\mu d_L \\
&=&\frac{G_F^2m_W^2}{ 2\pi^2} \lambda^D_{cu}\lambda^U_{sd}f(x_U,\ x_D){\cal O}_2 \\
&=&\frac{G_F^2m_W^2}{ 2\pi^2} b_x {\cal O}_2\nonumber
\end{eqnarray}
where
\begin{eqnarray}
b_x &\equiv & \lambda^D_{cu}\lambda^U_{sd}f(x_U,\ x_D) \\
&=& V_{cd}^*V_{ud}\left( V_{us}^*V_{ud}f_{ud}+V_{cs}^*V_{cd}f_{cd}+V_{ts}^*V_{td}f_{td} \right)
\nonumber \\
&& + V_{cs}^*V_{us}\left( V_{us}^*V_{ud}f_{us}+V_{cs}^*V_{cd}f_{cs}+V_{ts}^*V_{td}f_{ts} \right) +V_{cb}^*V_{ub}\left( V_{us}^*V_{ud}f_{ub}+V_{cs}^*V_{cd}f_{cb}+V_{ts}^*V_{td}f_{tb} \right)
\nonumber \\
&=& V_{cs}^*V_{us}\left[ V_{cs}^*V_{cd}\left(f_{cs}-f_{cd}-f_{us}+f_{ud}\right)+V_{ts}^*V_{td}\left(f_{ts}-f_{td} -f_{us}+f_{ud} \right) \right] \nonumber \\
& & + V_{cb}^*V_{ub}\left[V_{cs}^*V_{cd}\left(f_{cb}-f_{cd} -f_{ub}+ f_{ud}\right)+V_{ts}^*V_{td}\left(f_{tb}-f_{td}-f_{ub}+f_{ud} \right) \right]
\end{eqnarray}
with $\lambda_{DD'}^U \equiv V_{UD}^*V_{UD'}$, $\lambda_{UU'}^D \equiv V_{UD}^*V_{U'D}$, $U=u,\ c,\ t$ and $D=d,\ s,\ b$, $x_q=(m_q/m_W)^2$ and $f_{UD} \equiv f(x_U,x_D)$ \cite{inami}
\begin{eqnarray}
f(x,\ y) ={7xy-4\over 4(1-x)(1-y)} +{1\over x-y}\left[ {y^2\log y\over (1-y)^2}\left(1-2x+{xy\over 4}\right)- {x^2\log x\over (1-x)^2}\left(1-2y+{xy\over 4}\right) \right] \nonumber
\end{eqnarray}
Numerically, one obtains
\begin{equation}
b_x \simeq 3.6\cdot 10^{-7} {\rm e}^{0.07\cdot i}
\end{equation}
The quark masses are evaluated at the $m_c$ scale, as given in \cite{Beringer:1900zz}.
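For numerical work the box function can be transcribed directly. The minimal sketch below only checks two properties that follow from the expression above: $f(x,y)$ is symmetric in its arguments, and it tends to $-1$ when both internal masses are negligible (which is why a GIM-like cancellation suppresses $b_x$).

```python
import math

def f_box(x, y):
    """Inami-Lim box function f(x, y) as written above, x = (m/m_W)^2.

    Valid for x != y (the x = y case needs the limiting form)."""
    t1 = (7 * x * y - 4) / (4 * (1 - x) * (1 - y))
    t2 = (y**2 * math.log(y) / (1 - y)**2) * (1 - 2 * x + x * y / 4)
    t3 = (x**2 * math.log(x) / (1 - x)**2) * (1 - 2 * y + x * y / 4)
    return t1 + (t2 - t3) / (x - y)

# Symmetric in its arguments, and f -> -1 for vanishing masses
print(f_box(0.1, 0.2), f_box(0.2, 0.1), f_box(1e-8, 2e-8))
```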
The other contribution to the effective Hamiltonian is the di-penguin, which gives \cite{Donoghue:1986cj,Petrov:1997fw,Chia:1983hd}
\begin{figure}[tbhp]
\includegraphics[width=7.5cm]{dipenguin}
\caption{Feynman diagram for CF processes: di-penguin contribution.}
\end{figure}
\begin{eqnarray}
\Delta {\cal H} &=& -{G_F^2\alpha_S\over 8\pi^3}\left[ \lambda^D_{cu} E_0(x_D)\right] \left[ \lambda^U_{sd} E_0(x_U)\right] \bar s \gamma_\mu T^a d_L \left(g^{\mu \nu} \Box-\partial^\mu\partial^\nu \right) \bar u \gamma_\nu T^a c_L\nonumber \\
&=& -{G_F^2\alpha_S\over 8\pi^3}p_g \bar s \gamma_\mu T^a d_L \left(g^{\mu \nu} \Box-\partial^\mu\partial^\nu \right) \bar u \gamma_\nu T^a c_L \\
&\equiv & {G_F^2\alpha_S\over 16\pi^3} p_g {\cal O} \nonumber \\
p_g&\equiv & \left[ \lambda^D_{cu} E_0(x_D)\right] \left[ \lambda^U_{sd} E_0(x_U)\right]=\left[ V_{cs}^*V_{us}\left(E_0(x_s)-E_0(x_d)\right)+ V_{cb}^* V_{ub}\left( E_0(x_b)-E_0(x_d) \right) \right]
\nonumber \\
&& \left[ V_{cd}V_{cs}^* \left(E_0(x_c)-E_0(x_u)\right) + V_{td}V_{ts}^* \left(E_0(x_t)-E_0(x_u)\right)\right]
\end{eqnarray} where $T^a$ are the generators of $SU(3)_C$.
Numerically, $p_g \simeq -1.62\,{\rm e}^{-0.002 i}$, and the Inami--Lim function is given by
\begin{eqnarray}
E_0(x) &=& {1\over 12(1-x)^4}\left[ x(1-x)(18-11x-x^2)-2(4-16x+9x^2)\log(x)\right]
\end{eqnarray}
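A direct transcription of $E_0(x)$ makes the light-quark dominance of the di-penguin quantitative: for $x \to 0$ the function grows like $-(2/3)\ln x$. The sketch below is illustrative and only checks this asymptotic behaviour.

```python
import math

def E0(x):
    """Gluon-penguin Inami-Lim function E0(x) as given above."""
    return (x * (1 - x) * (18 - 11 * x - x**2)
            - 2 * (4 - 16 * x + 9 * x**2) * math.log(x)) / (12 * (1 - x)**4)

# For a light quark (x -> 0) the function grows like -(2/3) ln x,
# so the di-penguin combination is dominated by the light flavours.
print(E0(1e-6), E0(0.25))
```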
The operator ${\cal O}$ can be reduced as
\begin{eqnarray}
{\cal O} &=& \bar s \gamma_\mu T^a d_L \left(g^{\mu \nu} \Box-\partial^\mu\partial^\nu \right) \bar u \gamma_\nu T^a c_L = \bar s \gamma_\mu T^a d_L \Box \left(\bar u \gamma^\nu T^a c_L\right) + \bar s \partial \hskip-0.2cm /\ T^a d_L \bar u \partial \hskip-0.2cm /\ T^a c_L
\nonumber \\
&=& -q^2 \bar s \gamma_\mu T^a d_L \bar u \gamma^\mu T^a c_L-\left(m_s \bar s T^a d_{S-P}+ m_d\bar s T^a d_{S+P} \right) \cdot \left(m_c \bar u T^a c_{S+P}+m_u \bar u T^a c_{S-P}\right) \nonumber \\
&&-q^2 \bar s \gamma_\mu T^a d_L \bar u \gamma^\mu T^a c_L -m_s m_c \bar s T^a d_L \bar u T^a c_R - m_d m_u\bar s T^a d_R \bar u T^a c_L \nonumber \\
&& -m_s m_u\bar s T^a d_L \bar u T^a c_L-m_d m_c\bar s T^a d_R \bar u T^a c_R
\end{eqnarray}
where $q$ is the gluon momentum and $N$ is the number of colors. This expression can be simplified using the fact that
\begin{eqnarray}
\bar s \gamma_\mu T^a d_L \bar u \gamma^\mu T^a c_L &=& {1\over 2}\left({\cal O}_1-{1\over N}{\cal O}_2\right) \nonumber \\
\bar s T^a d_L \bar u T^a c_R &=&-{1\over 4}\bar s \gamma_\mu c_R \bar u \gamma^\mu d_L -{1\over 2N}\bar s d_L \bar u c_R \nonumber \\
\bar s T^a d_R \bar u T^a c_L &=& -{1\over 4}\bar s \gamma_\mu c_L \bar u \gamma^\mu d_R -{1\over 2N}\bar s d_R \bar u c_L \nonumber \\
\bar s T^a d_L \bar u T^a c_L &=&-{1\over 4}\bar s c_L \bar u d_L -{1\over 16}\bar s\sigma_{\mu\nu} c_L \bar u \sigma^{\mu\nu} d_L -{1\over 2N}\bar s d_L \bar u c_L \nonumber \\
\bar s T^a d_R \bar u T^a c_R &=&-{1\over 4}\bar s c_R \bar u d_R -{1\over 16}\bar s\sigma_{\mu\nu} c_R \bar u \sigma^{\mu\nu} d_R-{1\over 2N}\bar s d_R \bar u c_R
\end{eqnarray}
Once taking the expectation values, one obtains
\begin{eqnarray}
\left<{\cal O}\right> &=&-q^2 \left<\bar s \gamma_\mu T^a d_L \bar u \gamma^\mu T^a c_L\right> -m_s m_c \left<\bar s T^a d_L \bar u T^a c_R\right>
-m_d m_u\left<\bar s T^a d_R \bar u T^a c_L\right> \nonumber \\
&&-m_s m_u\left<\bar s T^a d_L \bar u T^a c_L\right>-m_d m_c\left<\bar s T^a d_R \bar u T^a c_R\right>
\nonumber \\
&\simeq &-{q^2\over 2}\left(1-{1\over N^2}\right)X^{\pi^+}_{D^0K^-}+{m_sm_c\over 4}\left(1-{1\over N}\right)X^{\pi^+}_{D^0K^-}+{5m_d\over 8Nm_s}m_D^2X^{D^0}_{K^- \pi^+}
\end{eqnarray}
Hence, one gets for the Wilson coefficients
\begin{eqnarray}
\Delta a_1 &=& -{G_Fm_W^2\over \sqrt{2}\ \pi^2V_{cs}^*V_{ud}N } b_x- {G_F \alpha_S\over 4\sqrt{2} \pi^3V_{cs}V_{us}^*}\left[{q^2\over 2}\left(1-{1\over N^2}\right)-{m_cm_s\over 4}\left(1-{1\over N}\right) \right] p_g \nonumber \\
&\simeq & 2.8\cdot 10^{-8}{\rm e}^{-0.004i} \nonumber \\
\Delta a_2 &=& -{G_Fm_W^2\over \sqrt{2}\ \pi^2V_{cs}^*V_{ud} } b_x- {G_F \alpha_S\over 4\sqrt{2} \pi^3V_{cs}V_{us}^*}{5m_d m_D^2\over 8Nm_s}p_g \nonumber \\
& \simeq & -2.0\cdot 10^{-9}{\rm e}^{0.07i}
\end{eqnarray}
where, to obtain the last result, we have used the fact that for
the decay $D^0\to K^-\pi^+$ one can approximate $q^2=(p_c \mp
p_u)^2=(p_s\pm p_d)^2\simeq
(p_D-p_\pi/2)^2=(m_D^2+m_K^2)/2+3m_\pi^2/4$, assuming
$p_c\simeq p_D$, $p_u\simeq p_\pi/2$ and $\alpha_S\simeq 0.3$.
Note that the box contribution is dominated by the
heavy quarks, while the di-penguin is dominated by the light ones. The direct CP
asymmetry is then
\begin{eqnarray}
A_{CP} &=& {|A|^2-|\bar A|^2 \over |A|^2+|\bar A|^2 }= {2|r|\sin(\phi_2-\phi_1)\sin(\alpha_E)\over |1+ r|^2 }= 1.4\cdot 10^{-10}
\end{eqnarray}
with $r=E/T$, $a_i\to a_i+\Delta a_i=a_i+|\Delta a_i|\exp[i\Delta
\phi_i]$ and $\phi_i\simeq|\Delta a_i| \sin\Delta \phi_i/a_i$, where
$\alpha_E$ is the CP-conserving strong phase which appears in
eq.(\ref{amplitude}).
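The structure of this asymmetry can be checked numerically from the definition $A_{CP}=(|A|^2-|\bar A|^2)/(|A|^2+|\bar A|^2)$: with the fitted $T$ and $E$ of eq. (\ref{amplitude}) and a small weak phase $\phi$ inserted on $T$, the exact asymmetry reproduces the compact formula above. The value of $\phi$ used here is purely illustrative, not the SM prediction.

```python
import cmath
import math

def direct_acp(A, Abar):
    """A_CP = (|A|^2 - |Abar|^2) / (|A|^2 + |Abar|^2)."""
    a2, ab2 = abs(A)**2, abs(Abar)**2
    return (a2 - ab2) / (a2 + ab2)

# Fitted amplitudes of eq. (amplitude): T real, E with strong phase 122 deg
T, E = 3.14e-6, 1.53e-6 * cmath.exp(1j * math.radians(122))

# Insert a tiny illustrative weak phase phi on T: it flips sign under CP,
# while the strong phase of E does not.
phi = 1e-5
A = T * cmath.exp(1j * phi) + E
Abar = T * cmath.exp(-1j * phi) + E

acp = direct_acp(A, Abar)
r = E / T
approx = 2 * abs(r) * math.sin(phi) * math.sin(math.radians(122)) / abs(1 + r)**2
print(acp, approx)  # the two agree for a small weak phase
```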
\section{New Physics}
With New Physics, the general Hamiltonian is not restricted to
${\cal O}_{1,2}$. The expressions for the expectation values of
the additional operators can be found in the appendix. It is important to
notice that, as expected, only two form-factor combinations appear, namely
$\chi^{D^0}_{K^- \pi^+}$ and $\chi^{\pi^+}_{D^0 K^-}$. This
allows the FSI to be taken into account, since the first
is identified with the E contribution and the second is
identified with the T contribution. In the next subsections, we
calculate the Wilson coefficients for different models of New
Physics. The first case assumes an extra SM fermion
family. The second computes the CP asymmetry
generated by a new charged gauge boson, as appears for instance
in models based on the gauge group $SU(2)_L \times SU(2)_R \times
U(1)_{B-L}$. The last subsection is dedicated to the effects on the CP
asymmetry coming from new charged Higgs-like scalar fields,
applied to two-Higgs-doublet extensions of the SM (types II and III).
\subsection{Contributions to $A_{CP}$ from an extra SM fermion family}
A simple extension of the SM is the introduction of a new
sequential generation of quarks and leptons (SM4). A fourth
generation is not excluded by precision
data\cite{Frampton:1999xi,Maltoni:1999ta, He:2001tp,
Novikov:2002tk, Kribs:2007nz, Hung:2007ak,
Bobrowski:2009ng,Hashimoto:2010at}. Recent reviews on
consequences of a fourth generation can be found in
\cite{Holdom:1986rn,Hill:1990ge,
Carpenter:1989ij,Hung:1997zj,Ham:2004xh, Fok:2008yg,Hou:2008xd,
Kikukawa:2009mu, Hung:2009hy,Holdom:2009rf,Hung:2009ia}.
The $B\to K \pi$ CP asymmetry puzzle is easily solved by a fourth generation
\cite{Soni:2008bc,Hou:2005hd,Hou:2006jy} with a mass in the following range\cite{Soni:2008bc}:
\begin{eqnarray}
400\; \mathrm{ GeV} <& m_{u_4} & < 600\; \mathrm{ GeV} .
\end{eqnarray}
The values of the SM4 parameters compatible with the high-precision LEP measurements
\cite{Maltoni:1999ta,He:2001tp,Novikov:2002tk,Bobrowski:2009ng} are
\begin{eqnarray}\label{eq:benchmarks}
m_{u_4} - m_{d_4} &&\simeq
\left( 1 + \frac{1}{5} \ln \frac{m_H}{115 \; \mathrm{GeV}}
\right) \times 50 \; \mathrm{GeV} \\
|V_{u d_4}|,|V_{u_4 d}| &&\lsim 0.04
\end{eqnarray}
where $V$ is the CKM quark mixing matrix which is now a $4\times 4$ unitary matrix.
The direct search limits from LEPII and CDF \cite{Achard:2001qw,Lister:2008is,Aaltonen:2009nr} are given by:
\begin{eqnarray}\label{LEP-CDF}
m_{u_4} & >& 311\; \mathrm{GeV}\; \\
m_{d_4} & >& 338\; \mathrm{GeV}. \nonumber
\end{eqnarray}
Direct searches by the ATLAS and CMS collaborations have excluded $m_{d_4}<480$
GeV and $m_{q_4}<350$ GeV
\cite{Magnin:2009zz,Eberhardt:2012ck,Alok:2010zj}, above the tree-level
unitarity limit $m_{u_4}<\sqrt{4\pi/3}\ v\simeq 504$ GeV.
However, SM4 is far from being completely understood, and most of the
experimental constraints are model-dependent. For instance, it has
been shown in \cite{Geller:2012wx} that the bound on $m_{u_4}$
should be relaxed to $m_{u_4} > 350$ GeV if the decay $u_4
\rightarrow ht$ dominates. The recent LHC results, which observe an
excess in $H \rightarrow \gamma \gamma$ corresponding to a
Higgs mass around 125 GeV \cite{ATLAS:2012ae,Chatrchyan:2012tx},
seem to exclude the SM4 scenario \cite{Djouadi:2012ae}. This
conclusion, however, relies on the fact that once the next-to-leading-order
electroweak corrections are included, the rate
$\sigma(gg\rightarrow H)\times Br(H\rightarrow \gamma \gamma)$ is
suppressed by more than 50\% compared to the rate including only
the leading-order corrections \cite{Georgi:1977gs,
Djouadi:1991tka,Denner:2011vt,
Djouadi:2012ae,Kuflik:2012ai,Eberhardt:2012sb}. This could be a
signal of a non-perturbative regime, which in SM4 can easily be
reached at this scale due to the strong fourth-generation Yukawa
couplings. Therefore, direct and model-independent searches for
fourth-generation families at colliders are still necessary
to completely exclude the SM4 scenario.
The CP asymmetry in a model with a fourth family is easy to compute, as the
contributions come from the same diagrams as in the SM with the addition of an
extra $u_4\equiv t'$ and $d_4\equiv b'$. In ref.\cite{Alok:2010zj},
the new CKM matrix elements were found to be (all consistent with zero, for $m_{b'}=600$ GeV)
\begin{eqnarray}
s_{14} &=&|V_{ub'}|=0.017(14),\ s_{24}={|V_{cb'}|\over c_{14}}={0.0084(62)\over c_{14}},\ s_{34}={|V_{tb'}|\over c_{14}c_{24}}= {0.07(8)\over c_{14}c_{24}} \nonumber \\
|V_{t'd}| &=& |V_{t's}| = 0.01(1),\ |V_{t'b}| = 0.07(8),\ |V_{t'b'}|=0.998(6),\ |V_{tb}|\geq 0.98 \nonumber \\
\tan \theta_{12} &=& \left|{V_{us}\over V_{ud}}\right|,\ s_{13}={|V_{ub}| \over c_{14}},\ \delta_{13}=\gamma=68^\circ \nonumber \\
|V_{cb}| &=& |c_{13}c_{24}s_{23}-u_{13}^*u_{14}u_{24}^*|\simeq c_{13}c_{24}s_{23} \label{ckm4}
\end{eqnarray}
The two remaining phases ($\phi_{14}$ and $\phi_{24}$) are
unbounded. Thus the absolute values of the CKM elements for the
three families remain almost unchanged but not their phases. From
these values one obtains
\begin{eqnarray}
s_{13} &=& 0.00415,\ s_{12}=0.225,\ s_{23}=0.04,\ s_{14}=0.016,\ s_{24}=0.006,\ s_{34}=0.04
\end{eqnarray}
For a fourth sequential family, the maximal value of the CP asymmetry is obtained as
\begin{eqnarray}
A_{{\rm CP}} &\simeq&-1.1\cdot 10^{-7}
\end{eqnarray}
where one uses $|V_{ub'}| = 0.06,\ |V_{cb'}|=0.03,\ |V_{tb'}|=0.25,\ \phi_{14}=-2.9,\ \phi_{24}=1.3$.
This maximal value is obtained when the parameters mentioned
above are varied in the range allowed by the experimental
constraints, according to eq. (\ref{ckm4}), at the three-sigma level.
The phases are varied over the whole range, from $-\pi$ to $\pi$.
Thus one can obtain an enhancement of about a thousand, which is large
but still very far from experimental reach.
\subsection{A new charged gauge boson as Left Right models}
In this section, we examine the effect on
the CP asymmetry of a new charged gauge boson coupled to
quarks and leptons. As an example of such models, we apply our
formalism to a well-known extension of the Standard Model obtained
by enlarging the SM gauge group with a gauge $SU(2)_R$
\cite{Pati:1973rp,Mohapatra:1974hk,Mohapatra:1974gc,Senjanovic:1975rk,Senjanovic:1978ev}.
The gauge group defining the electroweak interaction is then
$SU(2)_L \times SU(2)_R \times U(1)_{B-L}$. This SM
extension has been extensively studied in previous works (see for
instance refs.
\cite{Beall:1981ze,Cocolicchio:1988ac,Langacker:1989xa,Cho:1993zb,Babu:1993hx})
and its parameters have been strongly constrained by
experiments
\cite{Beringer:1900zz,Alexander:1997bv,Acosta:2002nu,Abazov:2006aj,Abazov:2008vj,Abazov:2011xs}.
Recently, CMS \cite{Chatrchyan:2012meb,Chatrchyan:2012sc} and
ATLAS \cite{Aad:2011yg,Aad:2012ej} at the LHC have improved
the bound on the $W_R$ gauge boson mass
\cite{Maiezza:2010ic}. The new diagrams contributing to $D \to K
\pi$ are similar to the SM tree-level diagrams with $W_L$
replaced by $W_R$. Assuming no mixing between the $W_L$
and $W_R$ gauge bosons, these diagrams contribute to the effective
Hamiltonian in the following way:
\begin{eqnarray}
{\cal H}_{\rm LR} &=& {G_F\over\sqrt{2}}\left({g_R m_W\over g_L m_{W_R}}\right)^2V_{Rcs}^*V_{Rud}\left(c_1'\bar s\gamma_\mu c_R\bar u \gamma^\mu d_R+c_2'\bar u \gamma_\mu c_R\bar s\gamma^\mu d_R\right)+{\rm h.c.} \nonumber \\
&=& {G_F\over\sqrt{2}}\left({g_R m_W\over g_L m_{W_R}}\right)^2V_{Rcs}^*V_{Rud}\left(c_1'{\cal O}_1'+c_2'{\cal O}_2'\right)+{\rm h.c.}
\end{eqnarray}
where $g_{L}$ and $g_{R}$ are the $SU(2)_{L}$ and
$SU(2)_{R}$ gauge couplings respectively, $m_W$ and $m_{W_R}$ are
the $SU(2)_{L}$ and $SU(2)_{R}$ charged gauge boson masses, and $V_R$
is the right-handed quark mixing matrix appearing in the right sector of
the Lagrangian, the analogue of the CKM quark mixing matrix. This new
contribution can enhance the SM prediction for the CP asymmetry,
but it is still suppressed by the limit on $M_{W_R}$, which has
to be of order $2.3$ TeV \cite{Maiezza:2010ic} in
left-right models without mixing.
In refs.\cite{Chen:2012usa,Lee:2011kn} it has been shown
that the mixing between the left and right gauge bosons can
strongly enhance CP violation in the charm and muon sectors.
This LR mixing is restricted by deviations from unitarity of the
CKM quark mixing matrix: the left-right
(LR) mixing angle $\xi$ has to be smaller than
0.005\cite{Wolfenstein:1984ay} and the right scale $M_R$ larger than
2.5 TeV\cite{Maiezza:2010ic}. If the left-right symmetry is not manifest
(essentially, $g_R$ may differ from $g_L$ at the
unification scale), the limit on the $M_R$ scale is much less
restrictive and the right gauge boson could be as light as $0.3$
TeV \cite{Olness:1984xb}. In such a case, $\xi$ can be as large
as $0.02$ if large CP-violating phases in the right sector are
present \cite{Langacker:1989xa}, still compatible with experimental
data \cite{Jang:2000rk,Badin:2007bv,Lee:2011kn}. Recently,
precision measurements of the muon decay parameters by the TWIST
collaboration \cite{MacDonald:2008xf,TWIST:2011aa} put a model-independent
limit on $\xi$ of 0.03 (taking
$g_L=g_R$). Let us now compute the effect of LR gauge boson mixing
on our CP asymmetry. First, one defines the charged
current mixing matrix\cite{Chen:2012usa}
\begin{eqnarray}
\left(\begin{array}{c}
W_L \\
W_R
\end{array}\right) =
\left(\begin{array}{cc}
\cos \xi & -\sin \xi \\
{\rm e}^{i\omega}\sin \xi & {\rm e}^{i\omega}\cos \xi
\end{array}\right)
\left(\begin{array}{c}
W_1 \\
W_2
\end{array}\right)\simeq
\left(\begin{array}{cc}
1 & - \xi \\
{\rm e}^{i\omega}\xi & {\rm e}^{i\omega}
\end{array}\right)
\left(\begin{array}{c}
W_1 \\
W_2
\end{array}\right)
\end{eqnarray}
where $W_1$ and $W_2$ are the mass eigenstates and $\xi\sim 10^{-2}$.
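As a quick consistency check, the small-angle form of this mixing matrix is unitary up to terms of order $\xi^2$. The sketch below verifies this numerically; the values of $\xi$ and $\omega$ are illustrative.

```python
import cmath

# Small-angle W_L-W_R mixing matrix (illustrative xi and omega)
xi, omega = 1e-2, 0.7
ph = cmath.exp(1j * omega)

M = [[1.0, -xi],
     [ph * xi, ph]]

def m_mdagger(M):
    """Return M M^dagger for a 2x2 complex matrix given as nested lists."""
    return [[sum(M[i][k] * M[j][k].conjugate() for k in range(2))
             for j in range(2)] for i in range(2)]

P = m_mdagger(M)
print(P)  # diagonal 1 + xi^2, off-diagonal zero: unitary up to O(xi^2)
```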
Thus the charged-current interaction part becomes
\begin{eqnarray}
{\cal L} &\simeq & -{1\over \sqrt{2}} \bar U \gamma_\mu \left(g_LVP_L+g_R\xi \bar V^RP_R\right)DW_1^\dagger-
{1\over \sqrt{2}} \bar U \gamma_\mu \left(-g_L\xi VP_L+g_R\bar V^RP_R\right)DW_2^\dagger
\end{eqnarray}
where $V=V_{\rm CKM}$ and $\bar V^R={\rm e}^{i\omega}V^R$.
Once one integrates out $W_1$ in the usual way and
neglects the $W_2$ contributions, given that its mass is much higher,
one obtains the effective Hamiltonian responsible for our process:
\begin{eqnarray}
{\cal H}_{\rm eff.} &=& {4G_F\over \sqrt{2}}\left[c_1 \ \bar{s}\gamma_\mu \left(V^*P_L+{g_R\over g_L}\xi \bar V^{R*}P_R \right)_{cs}c \ \ \bar{u}\gamma^\mu \left(VP_L+{g_R\over g_L}\xi \bar V^RP_R \right)_{ud} d \right.
\nonumber \\
&& \left. +\ c_2 \ \bar s_\alpha \gamma_\mu \left(V^*P_L+{g_R\over g_L}\xi \bar V^{R*}P_R \right)_{cs}c_\beta\bar u_\beta\gamma^\mu \left(VP_L+{g_R\over g_L}\xi \bar V^RP_R \right)_{ud} d_\alpha
\right]+{\rm h.\ c.}
\nonumber \\
\end{eqnarray}
where $\alpha,\beta$ are color indices. It is easy to check that
in the limit $\xi \to 0$ one recovers eq.(\ref{SMH}); the
only difference arises in the $c_2$ terms, to which a Fierz transformation
has been applied. The terms of the effective Hamiltonian
proportional to $\xi$ are: \bea \Delta {\cal H}_{\rm eff}&\simeq &
{G_F\over \sqrt{2}}{g_R\over g_L}\xi \left[c_1\bar s
\gamma_\mu V_{cs}^*c_L\bar u \gamma^\mu
\bar V^R_{ud}d_R+ c_1 \bar s\gamma_\mu \bar
V^{R*}_{cs}c_R \bar u \gamma^\mu V_{ud}d_L
\right.
\nonumber \\
&& \left. +\ c_2\bar s_\alpha \gamma_\mu V_{cs}^*c_{L\beta} \bar u_\beta \gamma^\mu
\bar V^R_{ud}d_{R\alpha}+ c_2 \bar s_\alpha \gamma_\mu \bar V^{R*}_{cs}c_{R\beta} \bar u_\beta \gamma^\mu V_{ud}d_{L\alpha}
\right]+{\rm h.\ c.}
\eea
The contribution to the amplitude proportional to $\xi$ is then given by:
\begin{eqnarray}
\Delta A &= & -{iG_F\over \sqrt{2}}{g_R\over g_L}\xi \left[-c_1 V_{cs}^*\bar V^R_{ud}
\left( X^{\pi^+}_{D^0K^-}+{2\over N}\chi^{D^0} X^{D^0}_{K^-\pi^+} \right)+ c_1 \bar V^{R*}_{cs} V_{ud} \left( X^{\pi^+}_{D^0K^-}+{2\over N}\chi^{D^0} X^{D^0}_{K^-\pi^+} \right) \right.
\nonumber \\
&& \left. -c_2 V_{cs}^*\bar V^R_{ud} \left(2\chi^{D^0} X^{D^0}_{K^-\pi^+}+{1\over N} X^{\pi^+}_{D^0K^-} \right)+ c_2\bar V^{R*}_{cs}V_{ud} \left(2 \chi^{D^0} X^{D^0}_{K^-\pi^+}+{1\over N} X^{\pi^+}_{D^0K^-} \right)\right]
\nonumber \\
&= & {iG_F\over \sqrt{2}}{g_R\over g_L}\xi \left(V_{cs}^*\bar V^R_{ud}-\bar V^{R*}_{cs} V_{ud}\right)\left( a_1 X^{\pi^+}_{D^0K^-}+ 2\chi^{D^0} a_2X^{D^0}_{K^-\pi^+} \right) \nonumber \\
&=&-{g_R\over g_L}\xi \left(\bar V^{R*}_{cs} V_{ud}-V_{cs}^*\bar V^R_{ud}\right)\left( T- 2\chi^{D^0} E \right)
\end{eqnarray}
where$\chi^{\pi^+} $ and $\chi^{D^0}$ are defined as \bea \chi^{\pi^+}&=&{m_\pi^2\over (m_c-m_s)(m_u+m_d)}\nonumber\\
\chi^{D^0}&=& {m_D^2\over (m_c+m_u)(m_s-m_d)}\eea
The CP asymmetry becomes
\begin{eqnarray}
A_{\rm CP} ={4(g_R/g_L)\xi \over V_{cs}^*V_{ud}|1+r|^2}\left(1+2\chi^{D^0}\right)
{\rm Im}\left(\bar V_{cs}^{R*}V_{ud}-V_{cs}^* \bar V_{ud}^R \right)
{\rm Im}(r)
\end{eqnarray}
with $r=E/T$. For a value as large as $\xi\sim 10^{-2}$, the
asymmetry can be as large as 0.1. We should also note that, to
obtain this result, we have used the fact that the chiralities do
not mix under strong interactions if the quark masses are
neglected. This is approximately the case in the evolution of the
Wilson coefficients from $m_W$ to $m_c$, since the quarks in the
loop are down-type quarks. This is contrary to processes like
$b \to s \gamma$, where the quarks in the QCD corrections are
up-type and a strong effect from the top quark could be expected
\cite{Cho:1991cj,Buras:1993xp,Chetyrkin:1996vx,Buras:2011we}. In
our case, as a first approximation, the QCD corrections to the
Wilson coefficients coming from the renormalization group running
from $m_W$ to $m_c$ can be safely neglected.
\subsection{Models with Charged Higgs contributions}\label{SMHeff}
Our last example of new physics considers contributions to the
effective Hamiltonian responsible for the $D^0 \to K^- \pi^+$
process due to new charged Higgs fields. The simplest SM
extensions which include new charged Higgs fields are the two
Higgs doublet models (2HDM) \cite{Haber:1978jt,Abbott:1979dt}.
These 2HDM are usually classified into three types: type I,
II or III (for a review see Ref.~\cite{Branco:2011iw}). In 2HDM
type II models (like the Minimal Supersymmetric Standard Model), one
Higgs doublet couples to the down-type quarks and charged leptons
and the other couples to up-type quarks. LEP performed a direct
search for a charged Higgs in the type II 2HDM and obtained a lower
bound of $78.6$ GeV \cite{Searches:2001ac}. Recent results on $B\to \tau
\nu$ obtained by BELLE \cite{Hara:2010dk} and BABAR
\cite{Aubert:2009wt} have strongly improved the indirect
constraints on the charged Higgs mass in the type II 2HDM
\cite{Baak:2011ze}:
\begin{equation}
m_{H^+}> 240~{\rm GeV} \quad {\rm at} \ \ 95\%~{\rm CL}
\end{equation}
The 2HDM type III is a general model where both Higgs doublets
couple to up and down quarks. This means that the 2HDM type III
can induce flavor violation in neutral currents, which can be
used to strongly constrain the new parameters of the model. We
shall focus on the two Higgs doublet model of type III, as the
other two types can be obtained from type III by taking suitable
limits. In the 2HDM of type III, the Yukawa Lagrangian can be
written as \cite{Crivellin:2010er,Crivellin:2012ye}:
\begin{eqnarray}
\mathcal{L}^{eff}_Y &=& \bar{Q}^a_{f\,L} \left[
Y^{d}_{fi} \epsilon_{ab}H^{b\star}_d\,-\,\epsilon^{d}_{fi} H^{a}_u \right]d_{i\,R}\\
&-&\bar{Q}^a_{f\,L} \left[ Y^{u}_{fi}
\epsilon_{ab} H^{b\star}_u \,+\, \epsilon^{ u}_{fi} H^{a}_d
\right]u_{i\,R}\,+\,\rm{H.c}. \,,\nonumber
\end{eqnarray}
where $\epsilon_{ab}$ is the totally antisymmetric tensor, and
$\epsilon^q_{ij}$ parametrizes the non-holomorphic corrections
which couple up (down) quarks to the down (up) type Higgs doublet.
After electroweak symmetry breaking, $\mathcal{L}^{eff}_Y$ gives
rise to the following charged Higgs-quark interaction
Lagrangian:
\begin{equation}
\mathcal{L}^{eff}_{H^\pm} = \bar{u}_f {\Gamma_{u_f d_i
}^{H^\pm\,LR\,\rm{eff} } }P_R d_i
+ \bar{u}_f {\Gamma_{u_f d_i }^{H^\pm\,RL\,\rm{eff} } }P_L d_i\, ,\\
\label{Higgs-vertex}
\end{equation}
with \cite{Crivellin:2012ye} \bea {\Gamma_{u_f d_i
}^{H^\pm\,LR\,\rm{eff} } } &=& \sum\limits_{j = 1}^3 {\sin\beta\,
V_{fj} \left( \frac{m_{d_i }}{v_d} \delta_{ji}-
\epsilon^{ d}_{ji}\tan\beta \right), }
\nonumber\\
{\Gamma_{u_f d_i }^{H^ \pm\,RL\,\rm{eff} } } &=& \sum\limits_{j =
1}^3 {\cos\beta\, \left( \frac{m_{u_f }}{v_u} \delta_{jf}-
\epsilon^{ u\star}_{jf}\tan\beta \right)V_{ji}}
\label{Higgsv}
\eea
Here $v_u$ and $v_d$ are the vacuum expectation values of the
neutral components of the Higgs doublets, $V$ is the CKM matrix and $\tan\beta = v_u/v_d$.
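The flavor sums in Eq.(\ref{Higgsv}) are simple to transcribe. Below is a minimal sketch (the function and variable names are ours, and any numerical inputs would be placeholders) that encodes the two effective vertices for generic $\epsilon^{u,d}$ matrices:

```python
import numpy as np

def gamma_LR(f, i, V, m_d, v_d, eps_d, tan_beta):
    """Gamma^{H+,LR}_{u_f d_i} = sin(beta) * sum_j V_{fj} (m_{d_i}/v_d delta_{ji} - eps^d_{ji} tan(beta))."""
    sin_b = tan_beta / np.sqrt(1.0 + tan_beta**2)
    return sin_b * sum(V[f, j] * (m_d[i] / v_d * (j == i) - eps_d[j, i] * tan_beta)
                       for j in range(3))

def gamma_RL(f, i, V, m_u, v_u, eps_u, tan_beta):
    """Gamma^{H+,RL}_{u_f d_i} = cos(beta) * sum_j (m_{u_f}/v_u delta_{jf} - eps^{u*}_{jf} tan(beta)) V_{ji}."""
    cos_b = 1.0 / np.sqrt(1.0 + tan_beta**2)
    return cos_b * sum((m_u[f] / v_u * (j == f) - np.conj(eps_u[j, f]) * tan_beta) * V[j, i]
                       for j in range(3))
```

For vanishing $\epsilon$ matrices these reduce to the Yukawa terms $\sin\beta\,V_{fi}\,m_{d_i}/v_d$ and $\cos\beta\,V_{fi}\,m_{u_f}/v_u$, which provides an easy consistency check.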
Using the Feynman rules derived from Eq.~(\ref{Higgs-vertex}), we can
compute the effective Hamiltonian resulting from the tree-level
charged Higgs exchange diagram that governs the process under
consideration, namely
\be {\mathcal H}_{eff}= \frac{ G_F}{\sqrt{2}}V^*_{cs}V_{ud}
\sum^4_{i=1} C^H_i(\mu) Q^H_i(\mu),\ee where $C^H_i$ are the
Wilson coefficients obtained by perturbative QCD running from
$M_{H^{\pm}}$ scale to the scale $\mu$ relevant for hadronic decay
and $Q^H_i$ are the relevant local operators at low energy scale
$\mu\simeq m_c$. The operators can be written as
\bea
Q^H_1 &=&(\bar{s} P_R c)(\bar{u} P_L d),\nonumber\\
Q^H_2 &=&(\bar{s} P_L c)(\bar{u} P_R d),\nonumber\\
Q^H_3 &=&(\bar{s} P_L c)(\bar{u} P_L d),\nonumber\\
Q^H_4 &=&(\bar{s} P_R c)(\bar{u} P_R d),
\eea
The Wilson coefficients $C^H_i$ at the electroweak scale
are given by
\begin{eqnarray}
C^H_1 &=& \frac {\sqrt{2} }{ G_F V^*_{cs}V_{ud}
m^2_H} \bigg(\sum\limits_{j = 1}^3
{\cos\beta\, V_{j1} \left( \frac{m_u }{v_u} \delta_{j1}-
\epsilon^{ u\star}_{j1}\tan\beta \right)}\bigg)\bigg(
\sum\limits_{k= 1}^3 {\cos\beta\,V^{\star}_{k2}} \left(
\frac{m_c}{v_u} \delta_{k2}-\epsilon^{ u}_{k2}\tan\beta
\right)\bigg),\nonumber\\
C^H_2 &=& \frac {\sqrt{2} }{ G_F V^*_{cs}V_{ud}
m^2_H} \bigg(\sum\limits_{j = 1}^3
{\sin\beta\,V_{1j} \left( \frac{m_d }{v_d} \delta_{j1}-
\epsilon^{ d}_{j1}\tan\beta \right)}\bigg)\bigg( \sum\limits_{k=
1}^3 {\sin\beta\,V^{\star}_{2k}} \left( \frac{m_s}{v_d}
\delta_{k2}-\epsilon^{ d\star}_{k2}\tan\beta
\right)\bigg)\nonumber\\
C^H_3 &=& \frac {\sqrt{2} }{ G_F V^*_{cs}V_{ud}
m^2_H} \bigg(\sum\limits_{j = 1}^3 {\cos\beta\, V_{j1} \left(
\frac{m_u }{v_u} \delta_{j1}- \epsilon^{ u\star}_{j1}\tan\beta
\right)}\bigg)\bigg( \sum\limits_{k= 1}^3
{\sin\beta\,V^{\star}_{2k}} \left( \frac{m_s}{v_d}
\delta_{k2}-\epsilon^{ d\star}_{k2}\tan\beta
\right)\bigg),\nonumber\\
C^H_4 &=& \frac {\sqrt{2} }{ G_F V^*_{cs}V_{ud}
m^2_H}\bigg( \sum\limits_{k= 1}^3
{\cos\beta\,V^{\star}_{k2}} \left( \frac{m_c}{v_u}
\delta_{k2}-\epsilon^{ u}_{k2}\tan\beta
\right)\bigg)\bigg(\sum\limits_{j = 1}^3 {\sin\beta\,V_{1j} \left(
\frac{m_d }{v_d} \delta_{j1}- \epsilon^{ d}_{j1}\tan\beta
\right)}\bigg) \nonumber \\
\label{Higgsw}
\end{eqnarray}
We now discuss the experimental constraints on the $\epsilon^{
q}_{ij}$, where $q=d,u$. The flavor-changing elements
$\epsilon^d_{ij}$ for $i\neq j$ are strongly constrained by
FCNC processes in the down sector because of tree-level neutral
Higgs exchange. Thus, we are left with only
$\epsilon^d_{11}$ and $\epsilon^d_{22}$. Concerning the elements
$\epsilon^u_{ij}$, only
$\epsilon^u_{11}$ and $\epsilon^u_{22}$ can significantly affect the
Wilson coefficients without any CKM suppression. The other
$\epsilon^u_{ij}$ terms are suppressed by CKM factors of order
$\lambda$, $\lambda^2$ or higher, and so we
neglect them in our analysis.
One of the important constraints on the $\epsilon^{
q}_{ij}$, where $q=d,u$, can be obtained by applying
the naturalness criterion of 't Hooft to the quark masses.
According to this criterion, the
smallness of a quantity is only natural if a symmetry is gained in
the limit in which this quantity is zero \cite{Crivellin:2012ye}.
Thus it is unnatural to have large accidental cancellations
without a symmetry forcing these cancellations. Applying the
naturalness criterion of 't Hooft to the quark masses in the 2HDM
of type III, we find \cite{Crivellin:2012ye} that
\begin{eqnarray}
|v_{u(d)} \epsilon^{d(u)}_{ij}|\leq \left|V_{ij}\right|\,{\rm max
}\left[m_{d_i(u_i)},m_{d_j(u_j)}\right]\,.
\end{eqnarray}
This leads to
\begin{eqnarray}
|\epsilon^{d(u)}_{ij}|\leq \frac{\left|V_{ij}\right|\,{\rm max
}\left[m_{d_i(u_i)},m_{d_j(u_j)}\right]}{|v_{u(d)}|}\,.\label{constr}
\end{eqnarray}
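As a quick numerical sketch of this bound for $\epsilon^u_{22}$ (taking $v\approx 174$ GeV, $|V_{cs}|\approx 0.973$ and $m_c\approx 1.27$ GeV as illustrative inputs; these are our assumptions rather than values fixed in the text):

```python
import math

V_CS, M_C, V = 0.973, 1.27, 174.0   # illustrative inputs, in GeV where dimensionful

def eps_u22_bound(tan_beta):
    """Naturalness bound |eps^u_22| <= |V_22| * m_c / v_d, with v_d = v * cos(beta)."""
    v_d = V / math.sqrt(1.0 + tan_beta**2)
    return V_CS * M_C / v_d

for tb in (10, 100):
    print(f"tan(beta) = {tb:3d}: |eps^u_22| <= {eps_u22_bound(tb):.3f}")
```

The bound relaxes by roughly a factor of ten between $\tan\beta=10$ and $\tan\beta=100$, consistent with the allowed regions in Fig.(\ref{higsplane}).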
\begin{figure}
\includegraphics[width=6.5cm]{natrcribeta10}
\includegraphics[width=6.5cm]{natrcribeta100}
\caption{Constraints on $\epsilon^u_{22}$. The left plot
corresponds to $\tan\beta =10$, while the right plot corresponds
to $\tan\beta =100$.}\label{higsplane}
\end{figure}
It is clear from the previous equation that
$\epsilon^u_{11},\epsilon^d_{11},\epsilon^d_{22}$ are severely
constrained by the corresponding small quark masses, while
$\epsilon^u_{22}$ is less constrained. As Eq.(\ref{constr})
shows, the constraints imposed on $\epsilon^u_{22}$ are
$\tan\beta$ dependent. We now apply the constraints on the real
and imaginary parts of $\epsilon^{u}_{22}$ for two different
values of $\tan\beta$, namely $\tan\beta =10$ and $\tan\beta
=100$, using Eq.(\ref{constr}). In Fig.(\ref{higsplane}) we show
the allowed regions for the two cases. Clearly the constraints are
sensitive to the value of $\tan\beta$: they are weak for large
values of $\tan\beta$. Since $C^H_1$ and $C^H_4$ are proportional
to $\epsilon^u_{22}$, they will be several orders of magnitude
larger than $C^H_2$ and $C^H_3$. This conclusion can also be seen
from Eq.(\ref{Higgsw}), and thus in our analysis we drop $C^H_2$
and $C^H_3$. Other possible constraints on $\epsilon^u_{22}$ can
be obtained from $D-\bar{D}$ mixing and $K-\bar{K}$ mixing.
For $K-\bar{K}$ mixing, the new contribution from charged Higgs
mediation with a top quark running in the loop dominates over the
contribution with a charm quark in the loop. This is due to the
dependence of the contribution on the ratio of the mass of the
quark running in the loop to the charged Higgs mass. Thus the
expected constraints from $K-\bar{K}$ mixing may be relevant for
$\epsilon^u_{32}$ and $\epsilon^u_{31}$, but not for
$\epsilon^u_{22}$. In fact, as mentioned in
Ref.\cite{Crivellin:2012ye}, the constraints on
$\epsilon^u_{32}$ and $\epsilon^u_{31}$ are weak, and these
elements can be sizeable. By a similar argument we can use neither
the process $b\rightarrow s\gamma$ nor the electric dipole moment
(EDM) to constrain $\epsilon^u_{22}$.
Regarding $D-\bar{D}$ mixing, one expects a situation similar to
that of $K-\bar{K}$ mixing regarding the dominance of the top
quark contribution. However, due to the CKM suppression factors,
the top quark contribution is in fact smaller than the charm
contribution.
\subsubsection{$D-\bar{D}$ mixing constraints}\label{DDbar}
We take into account only the box diagram contributing to
$D-\bar{D}$ mixing mediated by strange quark and charged Higgs
exchange. Other contributions from box diagrams mediated by
down or bottom quarks and a charged Higgs are suppressed by CKM
factors. Since the SM contribution to $D-\bar{D}$ mixing is very
small, we neglect it as well as its interference with the
charged Higgs contribution. The effective Hamiltonian
for this case can thus be written as:
\be {\mathcal H}^{|\Delta
C|=2}_{H^{\pm}}= \frac{1}{m^2_{H^{\pm}}}\sum^4_{i=1} C_i(\mu)
Q_i(\mu)+\tilde{C}_i(\mu) \tilde{Q}_i(\mu),
\ee where $C_i,\tilde{C}_i$ are the Wilson coefficients obtained by
perturbative QCD running from the $M_{H^{\pm}}$ scale to the scale $\mu$
relevant for the hadronic decay, and $Q_i,\tilde{Q}_i$ are the relevant
local operators at the low energy scale
\bea
Q_1 &=&(\bar{u}\gamma^{\mu} P_L c)(\bar{u} \gamma_{\mu} P_L c),\nonumber\\
Q_2 &=&(\bar{u} P_L c)(\bar{u} P_L c),\nonumber\\
Q_3 &=&(\bar{u}\gamma^{\mu} P_L c)(\bar{u} \gamma_{\mu} P_R c),\nonumber\\
Q_4 &=&(\bar{u} P_L c)(\bar{u} P_R c),\nonumber\\
\label{QDoper}
\eea
where we drop color indices; the operators $\tilde{Q}_i$ can be
obtained from $Q_i$ by exchanging the chiralities $L\leftrightarrow
R$. The Wilson coefficients $C_i$
are given by
\begin{eqnarray}
C_1 &=& \frac{ I_1(x_s)}{64 \pi^2 } \bigg(\sum\limits_{j = 1}^3
{\sin\beta\,V^*_{2j} \left( \frac{m_s}{v_d} \delta_{j2}-
\epsilon^{ d}_{j2}\tan\beta \right)}\bigg)^2\bigg(\sum\limits_{k =
1}^3 {\sin\beta\,V_{1k}
\left( \frac{m_s}{v_d}\delta_{k2}- \epsilon^{ d}_{k2}\tan\beta \right)}\bigg)^2,\nonumber\\
C_2 &=& \frac{m^2_s I_2(x_s)}{16 \pi^2 m^2_{H^{\pm}}}
\bigg(\sum\limits_{j = 1}^3 {\sin\beta\,V^*_{2j} \left(
\frac{m_s}{v_d} \delta_{j2}- \epsilon^{ d}_{j2}\tan\beta
\right)}\bigg)^2 \bigg(\sum\limits_{k = 1}^3 {\cos\beta\, V_{k2}
\left( \frac{m_u }{v_u} \delta_{k1}- \epsilon^{
u\star}_{k1}\tan\beta
\right)}\bigg)^2 ,\nonumber\\
C_3 &=& \frac{ I_1(x_s)}{64 \pi^2 } \bigg(\sum\limits_{j = 1}^3 {\sin\beta\,V^*_{2j} \left(
\frac{m_s}{v_d} \delta_{j2}- \epsilon^{ d}_{j2}\tan\beta
\right)}\bigg)\bigg(\sum\limits_{k = 1}^3 {\sin\beta\,V_{1k}
\left( \frac{m_s}{v_d}\delta_{k2}- \epsilon^{ d}_{k2}\tan\beta
\right)}\bigg)\nonumber\\&\times&\bigg(\sum\limits_{l = 1}^3
{\cos\beta\, V_{l2} \left( \frac{m_u }{v_u} \delta_{l1}-
\epsilon^{ u\star}_{l1}\tan\beta
\right)}\bigg)\bigg(\sum\limits_{n = 1}^3 {\cos\beta\, V^*_{n2}
\left( \frac{m_c }{v_u} \delta_{n2}-
\epsilon^{ u\star}_{n2}\tan\beta \right)}\bigg),\nonumber\\
C_4 &=& \frac{m^2_s I_2(x_s)}{16 \pi^2 m^2_{H^{\pm}}}
\bigg(\sum\limits_{j = 1}^3 {\sin\beta\,V^*_{2j} \left(
\frac{m_s}{v_d} \delta_{j2}- \epsilon^{ d}_{j2}\tan\beta
\right)}\bigg)\bigg(\sum\limits_{k = 1}^3 {\sin\beta\,V_{1k}
\left( \frac{m_s}{v_d}\delta_{k2}- \epsilon^{ d}_{k2}\tan\beta
\right)}\bigg)\nonumber\\&\times&\bigg(\sum\limits_{l = 1}^3
{\cos\beta\, V_{l2} \left( \frac{m_u }{v_u} \delta_{l1}-
\epsilon^{ u\star}_{l1}\tan\beta
\right)}\bigg)\bigg(\sum\limits_{n = 1}^3 {\cos\beta\, V^*_{n2}
\left( \frac{m_c }{v_u} \delta_{n2}-
\epsilon^{ u\star}_{n2}\tan\beta \right)}\bigg).\nonumber\\
\label{DDWilson}
\end{eqnarray}
where $x_s=m^2_s/m^2_{H^{\pm}}$ and the integrals are defined as
follows:
\bea I_1 (x_s) &=& \frac{x_s+1 }{(x_s-1)^2}+\frac{-2 x_s\ln
(x_s)}{(x_s-1)^3},\nonumber\\
I_2(x_s)&=& \frac{-2 }{(x_s-1)^2}+\frac{(x_s+1)\ln
(x_s)}{(x_s-1)^3} \eea
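As a quick sanity check (our own, not taken from the text), both loop functions are finite at the degenerate point $x_s\to 1$, approaching $1/3$ and $1/6$ respectively, while $I_1\to 1$ as $x_s\to 0$:

```python
import math

def I1(x):
    # I1(x) = (x+1)/(x-1)^2 - 2 x ln(x)/(x-1)^3
    return (x + 1.0) / (x - 1.0)**2 - 2.0 * x * math.log(x) / (x - 1.0)**3

def I2(x):
    # I2(x) = -2/(x-1)^2 + (x+1) ln(x)/(x-1)^3
    return -2.0 / (x - 1.0)**2 + (x + 1.0) * math.log(x) / (x - 1.0)**3

print(I1(1.001), I2(1.001))   # close to 1/3 and 1/6
print(I1(1e-6))               # close to 1
```

For $x_s=m_s^2/m_{H^{\pm}}^2\ll 1$, as relevant here, $I_1\approx 1$ while $I_2$ grows only logarithmically.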
The Wilson coefficients $\tilde{C}_i$ are
given by
\begin{eqnarray}
\tilde{C}_1 &=& \frac{ I_1(x_s)}{64 \pi^2 } \bigg(\sum\limits_{j =
1}^3 {\cos\beta\, V_{j2} \left( \frac{m_u }{v_u} \delta_{j1}-
\epsilon^{ u\star}_{j1}\tan\beta
\right)}\bigg)^2\bigg(\sum\limits_{k = 1}^3 {\cos\beta\, V^*_{k2}
\left( \frac{m_c }{v_u} \delta_{k2}-
\epsilon^{ u\star}_{k2}\tan\beta \right)}\bigg)^2, \nonumber\\
\tilde{C}_2 &=& \frac{m^2_s I_2(x_s)}{16 \pi^2 m^2_{H^{\pm}}}
\bigg(\sum\limits_{j = 1}^3 {\cos\beta\, V^*_{j2} \left( \frac{m_c
}{v_u} \delta_{j2}- \epsilon^{ u\star}_{j2}\tan\beta
\right)}\bigg)^2\bigg(\sum\limits_{k = 1}^3 {\sin\beta\,V_{1k}
\left( \frac{m_s}{v_d} \delta_{k2}- \epsilon^{ d}_{k2}\tan\beta
\right)} \bigg)^2 \nonumber\\
\tilde{C}_3 &=& C_3 ,\nonumber\\
\tilde{C}_4 &=& C_4. \label{DDWilsontilde}
\end{eqnarray}
Our operators $Q_1$, $Q_2$ and $Q_4$ given in Eq.(\ref{QDoper}) are
equivalent to the corresponding operators of
Refs.\cite{Petrov:2010gy,Petrov:2011un}, while the operators
$\tilde{Q}_1$ and $\tilde{Q}_2$ are equivalent to $Q_6$ and $Q_7$
of the same references, respectively. Moreover, $Q_3$ of
Eq.(\ref{QDoper}) can be related to $Q_5$ of
Refs.\cite{Petrov:2010gy,Petrov:2011un} by a Fierz identity. The
remaining operators, $\tilde{Q}_3$ and $\tilde{Q}_4$, are
equivalent to $Q_5$ and $Q_4$ of
Refs.\cite{Petrov:2010gy,Petrov:2011un}, since their matrix
elements are equal. Thus our Wilson coefficients are subject
to the constraints given in
Refs.\cite{Petrov:2010gy,Petrov:2011un}, and we find that
\bea \mid C_{1} \mid &\leq& 5.7\times
10^{-7}\bigg[\frac{m_{H^{\pm}}}{1\, TeV}\bigg]^2\nonumber\\
\mid C_{2} \mid &\leq& 1.6\times
10^{-7}\bigg[\frac{m_{H^{\pm}}}{1\,
TeV}\bigg]^2\nonumber\\
\mid C_{3} \mid &\leq& 3.2\times
10^{-7}\bigg[\frac{m_{H^{\pm}}}{1\, TeV}\bigg]^2\nonumber\\
\mid C_{4} \mid &\leq& 5.6\times
10^{-8}\bigg[\frac{m_{H^{\pm}}}{1\,
TeV}\bigg]^2\nonumber\\\label{DC1til}\eea
The constraints on $\tilde{C}_1-\tilde{C}_4$ are similar to those
on $C_1-C_4$. As can be seen from Eq.(\ref{DC1til}), the constraints
on the Wilson coefficients become strong for small charged Higgs
masses.
We can now proceed to derive the constraints on $\epsilon^{
u}_{22}$ using, for instance, the upper bound on $\tilde{C}_2$.
Keeping terms of first order in the CKM parameter $\lambda$,
we find, for $m _{H^{\pm}}=300$ GeV and
$\tan\beta=55$, \bea
\tilde{C}_2\times 10^{12} &\simeq& 3 \,\bigg(-53.6 \,\epsilon^{ d}_{12} - 12.7 \,\epsilon^{
d}_{22}+0.007\bigg)^2
\bigg( -12.4 \, \epsilon^{ u\,*}_{12} -53.4 \,
\epsilon^{u\,*}_{22} +0.007 \bigg)^2\label{Ctild22}\eea
While for $m _{H^{\pm}}=300$ GeV and $\tan\beta=500$ we find
\bea
\tilde{C}_2\times 10^{14} &\simeq& 3.6 \,\bigg(-487.1 \,\epsilon^{ d}_{12} -115.0 \,\epsilon^{
d}_{22}+0.06\bigg)^2
\bigg( -112.5\, \epsilon^{ u\,*}_{12} -486.7 \,
\epsilon^{u\,*}_{22} +0.007 \bigg)^2 \label{Ctild21}\eea
In both Eqs.(\ref{Ctild22},\ref{Ctild21}) we can, to a good
approximation, drop the terms proportional to $\epsilon^{ u\,*}_{12}$,
as they have small coefficients in comparison to $\epsilon^u_{22}$,
and since $\epsilon^{ u,d}_{ij}$ with $i\neq j$ are always smaller
than the diagonal elements $\epsilon^{ u,d}_{ii}$. On the other
hand, $\epsilon^d_{12}$ cannot be large, in order not to induce
large flavor changing neutral currents, so we can also drop the
terms proportional to $\epsilon^d_{12}$ in
Eqs.(\ref{Ctild22},\ref{Ctild21}) to a good approximation.
Thus we are left with $\epsilon^d_{22}$ and $\epsilon^u_{22}$ in
both Eqs.(\ref{Ctild22},\ref{Ctild21}). Comparing their
coefficients shows that $\epsilon^u_{22}$ has the larger
coefficient, and thus we can drop the $\epsilon^d_{22}$ terms.
An alternative way is to assume that the $\epsilon^{ u}_{22}$
terms dominate over the other $\epsilon^{ u,d}_{ij}$ terms and
proceed to set upper bounds on $\epsilon^{ u}_{22}$. In fact,
even if we consider Wilson coefficients other than $\tilde{C}_2$,
this conclusion is not altered. Under the assumption
$\epsilon^{d}_{12} = \epsilon^{d}_{22}=\epsilon^{u}_{12}=0$, the
upper bound on $\tilde{C}_{2}$ corresponding to $m _{H^{\pm}}=300$
GeV, obtained from Eq.(\ref{DC1til}), reads
\bea \mid \tilde{C}_{2} \mid &\leq& 1.4 \times
10^{-8}\label{DC1ti2}\eea
It is clear from Eqs.(\ref{Ctild22},\ref{Ctild21},\ref{DC1ti2})
that the bounds that can be obtained on $\epsilon^{ u}_{22}$ are
loose, and thus $D-\bar{D}$ mixing cannot lead to strong
constraints on $\epsilon^{ u}_{22}$.
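To make this explicit, one can keep only $\epsilon^u_{22}$ in Eq.(\ref{Ctild22}), drop the small constant terms, and solve for the value that saturates the bound of Eq.(\ref{DC1ti2}). The short sketch below only re-solves the numerical expression quoted above:

```python
import math

# Eq.(Ctild22) with only eps^u_22 kept (tan(beta) = 55, m_H = 300 GeV):
#   C2_tilde ~ 3e-12 * (0.007)^2 * (53.4 * |eps^u_22|)^2
# Saturating the D-Dbar mixing bound |C2_tilde| <= 1.4e-8 gives:
prefactor = 3e-12 * 0.007**2 * 53.4**2
bound = math.sqrt(1.4e-8 / prefactor)
print(f"|eps^u_22| <= {bound:.0f}")   # an O(100) 'bound': no real constraint
```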
\subsubsection{$D_q\to\tau\nu $ constraints}\label{Dtaunu}
The decay modes $D_q\to\tau\nu$, where $q=d$ or $q=s$, are
generated in the SM at tree level via W boson mediation. Within
the 2HDM of type III under consideration, the charged Higgs can
also mediate these decay modes at tree level, and hence the total
branching ratios, following a notation similar to that of
Ref.\cite{Crivellin:2012ye}, can be expressed as
\bea {\mathcal B}(D^+_q\to\tau^+\nu) & =&
\frac{G_F^2|V_{cq}|^2}{8\pi} m_\tau^2 f_{D_q}^2 m_{D_q}
\left(1-\frac{m_\tau^2}{m_{D_q}^2}\right)^2 \tau_{D_q} \nonumber \\
&&\times
\left| 1+ \frac{m_{D_q }^{2}}{(m_c+m_q )\, m_{\tau}}
\frac{(C_{R}^{cq\,*}-C_L^{cq\,*})}{C_{SM}^{cq\,*}} \right|^2\,.
\eea
Here we have used \cite{Na:2012uh}
\be \langle 0|\bar{q} \gamma^5 c |D_q\rangle= \frac{f_{D_q}m_{D_q
}^{2}}{(m_c+m_q )}\ee
The SM Wilson coefficient is given by $ C_{{\rm SM}}^{cq} =
{ 4 G_{F}} \; V^{}_{cq}/{\sqrt{2}}$, and the Wilson coefficients
$C_{L}^{cq}$ and $C_{R}^{cq}$ at the matching scale are given by
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{array}{l}
C_{R(L)}^{cq} = \frac{{ -1}}{M_{H^{\pm}}^{2}} \; \Gamma_{cq}^{LR(RL),H^{\pm}} \; \frac{m_\tau}{v}\tan\beta \,,
\end{array}
\label{CRQ}\end{equation}
with the vacuum expectation value $v\approx 174~{\rm GeV}$;
$\Gamma_{cq}^{LR(RL),H^{\pm}}$ can be read off from Eq.(\ref{Higgsv}).
Setting the charged Higgs contribution to zero and using
$f_{D_s}=(248\pm 2.5)$ MeV \cite{Davies:2010ip}, we find ${\mathcal
B}^{SM}(D^+_d\to\tau^+\nu) \simeq 9.5\times 10^{-4} $ and
${\mathcal B}^{SM}(D^+_s\to\tau^+\nu) = (5.11\pm 0.11)\times
10^{-2} $, in close agreement with the results of
Refs.\cite{Akeroyd:2007eh,Mahmoudi:2007vz,Mahmoudi:2008tp}. The
experimental values of these branching ratios are
${\mathcal B}(D^+_d\to\tau^+\nu) < 2.1\times 10^{-3} $
\cite{Rubin:2006nt} and ${\mathcal B}(D^+_s\to\tau^+\nu) =
(5.38\pm 0.32)\times 10^{-2} $ \cite{Asner:2010qj}. Keeping the
terms proportional to the dominant CKM elements, we find
for $ q=d $ \bea {\Gamma_{c d }^{H^ \pm\,RL\,\rm{eff} } } &=&
{\cos\beta\,V_{11}} \left( -\epsilon^{ u\,^*}_{12}\tan\beta
\right)\nonumber\\
{\Gamma_{c d }^{H^ \pm\,LR\,\rm{eff} } } &=& {\sin\beta\,V_{11}
\left( \frac{m_d }{v_d} - \epsilon^{ d}_{11}\tan\beta
\right)}\nonumber\\
\label{Higgsww}
\eea
While for $ q=s $ we find
\bea {\Gamma_{c s }^{H^ \pm\,RL\,\rm{eff} } } &=&
{\cos\beta\,V_{22}} \left(\frac{m_c}{v_u} -\epsilon^{
u\,^*}_{22}\tan\beta
\right)\nonumber\\
{\Gamma_{c s }^{H^ \pm\,LR\,\rm{eff} } } &=& {\sin\beta\,V_{22}
\left( \frac{m_s }{v_d} - \epsilon^{ d}_{22}\tan\beta
\right)}\nonumber\\
\label{Higgswww}
\eea
It is clear from the last two equations that we need to consider
the decay mode $D^+_s\to\tau^+\nu $ to constrain
$\epsilon^{ u}_{22}$. For $\tan \beta = 10$ we find that
\bea {\Gamma_{c s }^{H^ \pm\,RL\,\rm{eff} } }\times 10^{-3}
&\simeq& 0.71 - 968.6\, \epsilon^{ u}_{22}
\nonumber\\
{\Gamma_{c s }^{H^ \pm\,LR\,\rm{eff} } }\times 10^{-3} &\simeq&
5.3 - 9686.0 \, \epsilon^{ d}_{22}
\label{Higgsnum}
\eea
Clearly the coefficient of $\epsilon^{ d}_{22}$ is one order of
magnitude larger than that of $\epsilon^{ u}_{22}$, and for larger
$\tan \beta$ one expects it to be larger still. However,
$\epsilon^{ d}_{22}$ is severely constrained by the naturalness
criterion, and thus we expect the term proportional to
$\epsilon^{ u}_{22}$ to dominate. In our analysis we can
therefore drop the $\epsilon^{
d}_{22}$ term and proceed to obtain the required constraints.
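As a cross-check of the quoted SM value ${\mathcal B}^{SM}(D^+_s\to\tau^+\nu)\simeq 5.1\times 10^{-2}$, the tree-level formula above can be evaluated directly. The inputs below ($f_{D_s}$, the $D_s$ lifetime and the CKM element) are standard approximate values that we assume for illustration; they are not fixed by the text:

```python
import math

# All dimensionful inputs in GeV (natural units); lifetime converted via hbar.
G_F = 1.1663787e-5                 # GeV^-2
V_cs, m_tau, m_Ds = 0.973, 1.77686, 1.96835
f_Ds = 0.2485                      # ~248.5 MeV
tau_Ds = 5.04e-13 / 6.582119e-25   # seconds -> GeV^-1

B_Ds = (G_F**2 * V_cs**2 / (8.0 * math.pi) * m_tau**2 * f_Ds**2 * m_Ds
        * (1.0 - m_tau**2 / m_Ds**2)**2 * tau_Ds)
print(f"B(Ds -> tau nu) ~ {B_Ds:.3f}")   # ~0.05
```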
\begin{figure}[tbhp]
\includegraphics[width=6.5cm]{Dstau200}
\includegraphics[width=6.5cm]{Dstau500}
\caption{Constraints on $\epsilon^u_{22}$ from ${\mathcal
B}(D^+_s\to\tau^+\nu)$. The left plot corresponds to $\tan\beta
=200$, while the right plot corresponds to $\tan\beta =500$. In both
cases we take $m_{H^{\pm}}=200$ GeV.} \label{higsplaneDs1}
\end{figure}
\begin{figure}[tbhp]
\includegraphics[width=6.5cm]{Dstaubeta350mH300}
\includegraphics[width=6.5cm]{Dstaubeta500mH300}
\caption{Constraints on $\epsilon^u_{22}$ from ${\mathcal
B}(D^+_s\to\tau^+\nu)$. The left plot corresponds to $\tan\beta
=350$, while the right plot corresponds to $\tan\beta =500$. In both
cases we take $m_{H^{\pm}}=300$ GeV.}\label{higsplaneDs2}
\end{figure}
We show in Figs.(\ref{higsplaneDs1},\ref{higsplaneDs2}) the
allowed regions for the real and imaginary parts of
$\epsilon^{u}_{22}$ corresponding to two different values of the
charged Higgs mass, namely $m_{H^{\pm}}=200$ GeV and
$m_{H^{\pm}}=300$ GeV, and for different values of $\tan\beta$.
Our objective here is to show the dependence of the constraints on
$m_{H^{\pm}}$ and $\tan\beta$. We see from the figures that, for
$\tan\beta =500$, the constraints become loose as $m_{H^{\pm}}$
increases. This is expected, as the charged Higgs Wilson
coefficients are inversely proportional to the square of
$m_{H^{\pm}}$, so their contributions to ${\mathcal
B}(D^+_s\to\tau^+\nu)$ become small for large $m_{H^{\pm}}$, which
in turn loosens the constraints. Another remark is that the
constraints become stronger as $\tan\beta$ increases, as expected
from Eq.(\ref{CRQ}). This is in contrast to the constraints derived
from the naturalness criterion, which become loose as $\tan\beta$
increases.
\subsubsection{CP violation in Charged Higgs}
The total amplitude including SM and charged Higgs contribution
can be written as
\be {\mathcal A}= \bigg(C^{SM}_1+\frac{1}{N} C^{SM}_2 +
\chi^{\pi^+}(C^H_1-C^H_4)\bigg)X_{D^0K^-}^{\pi^+}-
\bigg(C^{SM}_2+{1\over N} C^{SM}_1
+\frac{1}{2N}\big(C^H_1-\chi^{D^0}
C^H_4\big)\bigg)X_{K^-\pi^+}^{D^0}\label{HigsT}\ee with
$X^{P_1}_{P_2P_3}= if_{P_1}\Delta_{P_2P_3}^2
F_0^{P_2P_3}(m_{P_1}^2)$, $\Delta_{P_2P_3}^2=m_{P_2}^2-m_{P_3}^2$
and $\chi^{\pi^+}$ and $\chi^{D^0}$ are as defined previously, \bea \chi^{\pi^+}&=&{m_\pi^2\over (m_c-m_s)(m_u+m_d)}\nonumber\\
\chi^{D^0}&=& {m_D^2\over (m_c+m_u)(m_s-m_d)}\eea The form of
the amplitude ${\mathcal A}$ shows that the charged Higgs
contribution affects only the short-distance physics (the Wilson
coefficients), with no new effect on the long-distance physics
(the hadronic parameters). Thus the strong phase is not affected
by including the charged Higgs contributions, while the weak
phase is. We can rewrite Eq.(\ref{HigsT}) in terms of the
amplitudes $T$ and $E$ introduced before in the SM case as
follows: \be {\mathcal A}= V^*_{cs}V_{ud}(T^{SM+H}+E^{SM+H})
\label{HigsTt}\ee where \bea T^{SM+H}= 3.14\times 10^{-6}&\simeq&
\frac{G_F}{\sqrt{2}}a^{SM+H}_1
f_{\pi}(m^2_D-m^2_K)F^{DK}_0(m^2_{\pi})\nonumber\\
E^{SM+H}= 1.53\times 10^{-6}e^{122^{\circ}i} &\simeq&
\frac{G_F}{\sqrt{2}}a^{SM+H}_2
f_D(m^2_K-m^2_{\pi})F^{K\pi}_0(m^2_D)\eea
where \bea a^{SM+H}_1&=&\bigg(C^{SM}_1+\frac{1}{N} C^{SM}_2 +
\chi^{\pi^+}(C^H_1-C^H_4)\bigg)\nonumber\\
&=&\bigg(a_1+\Delta a_1+
\chi^{\pi^+}(C^H_1-C^H_4)\bigg)\label{a1t}\eea \bea a^{SM+H}_2=-
\bigg(a_2 +\Delta a_2+\frac{1}{2N}\big(C^H_1-\chi^{D^0}
C^H_4\big)\bigg)\label{a2t}\eea
The CP asymmetry can be obtained using the relation
\begin{eqnarray}
A_{CP} &=& {|{\mathcal A}|^2-|\bar {\mathcal A}|^2 \over
|{\mathcal A}|^2+|\bar {\mathcal A}|^2 } ={2|T^{SM+H}||
E^{SM+H}|\sin(\phi_1-\phi_2)\sin(-\alpha_E)\over |T^{SM+H} +
E^{SM+H} |^2 }
\end{eqnarray}
with $\phi_i= {\rm Arg} [a^{SM+H}_i] $ and $\alpha_E= {\rm Arg}(\chi_E)$. As an example, let us take
$Re(\epsilon^u_{22})=0.04$, $Im(\epsilon^u_{22})=0.03$, which is
an allowed point for $\tan\beta =10$. In this case we find
$ A_{CP}\simeq - 3.7 \times 10^{-5}$ for $m_{H^{\pm}}=500 $ GeV,
and $ A_{CP}\simeq - 1 \times 10^{-4}$ for $m_{H^{\pm}}=300 $ GeV.
Let us take another example,
$Re(\epsilon^u_{22})=-0.1$, $Im (\epsilon^u_{22})=-0.3$,
which is an allowed point for $\tan\beta =500$ and $m_{H^{\pm}}=300
$ GeV. Repeating the same steps as above we find $
A_{CP}\simeq 5.3\times 10^{-2}$. Clearly, in charged Higgs models
the predicted CP asymmetry is very sensitive to the value of
$\tan\beta$ and to the charged Higgs mass.
\section{Conclusion}
In this paper, we have studied the Cabibbo favored non-leptonic
$D^0$ decays into $ K^- \pi^+$. We have shown that the Standard
Model prediction for the corresponding CP asymmetry is strongly
suppressed and out of experimental reach, even taking into account
the large strong phases coming from final state interactions.
We then explored new physics models, considering three
possible extensions, namely an extra family, extra gauge bosons
within Left-Right grand unification models, and extra Higgs fields.
The fourth family model strongly improves the SM prediction for the
CP asymmetry, but the predicted asymmetry is still far from the
reach of LHCb or a SuperB factory such as SuperKEKB. The most
promising models are non-manifest Left-Right extensions of the SM,
where the LR mixing between the gauge bosons permits a strong
enhancement of the CP asymmetry. In such a model, it is possible
to obtain a CP asymmetry of order $10 \%$, which is within the
range of LHCb and the next generation of charm and B factories.
The non-observation of such a large CP asymmetry would strongly
constrain the parameters of this model. Among multi-Higgs
extensions of the SM, the 2HDM type III is the most attractive, as
it can at the same time address the puzzle coming from $B \to \tau
\nu$ and give a large contribution to this CP asymmetry, depending
on the charged Higgs masses and couplings. A maximal value of $5\%$
can be reached with a Higgs mass of $300$ GeV and large
$\tan\beta$.
\section*{Acknowledgements}
G.F. thanks A. Crivellin for useful discussion. D. D. is grateful to
Conacyt (M\'exico) S.N.I. and Conacyt project (CB-156618), DAIP
project (Guanajuato University) and PIFI (Secretaria de Educacion
Publica, M\'exico) for financial support. G.F.'s work is supported
by research grants NSC 99-2112-M-008-003-MY3, NSC
100-2811-M-008-036 and NSC 101-2811-M-008-022 of the National
Science Council of Taiwan.
\section{Introduction}
If $f:\C\to\C$ is a transcendental entire function (that is, a non-polynomial holomorphic
self-map of the complex plane), the \emph{escaping set} of $f$ is defined as
\[ I(f) := \{z\in\C: f^n(z)\to\infty\}. \]
(Here $f^n=\underset{n\text{ times}}{\underbrace{f\circ\dots\circ f}}$
denotes the $n$-th iterate of $f$, as usual.)
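As a purely numerical illustration of this definition (it plays no role in any proof; the escape threshold and iteration cap below are ad hoc choices), one can iterate $f_0(z)=e^z$ and watch real starting points escape almost immediately:

```python
import cmath

def escape_time(z, a=0.0, radius=1e5, max_iter=50):
    """First n with |f_a^n(z)| > radius for f_a(z) = exp(z) + a, else None."""
    for n in range(1, max_iter + 1):
        if z.real > 700:   # exp(z) would overflow a double, so z has escaped
            return n
        z = cmath.exp(z) + a
        if abs(z) > radius:
            return n
    return None

print(escape_time(complex(1, 0)))    # the orbit 1, e, e^e, ... blows up quickly
print(escape_time(complex(-1, 0)))
```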
This set has recently received much attention in the study of transcendental dynamics,
due to the structure it provides to the dynamical plane of such functions. It is neither an
open nor a closed subset of the complex plane and tends to have interesting
topological properties. In the simplest cases
(see \cite{devaneykrych,baranskihyperbolic,strahlen,boettcher}), the set $I(f)$ is homeomorphic
to a subset of a ``Cantor Bouquet'' (a certain uncountable disjoint union of curves to
$\infty$), and in particular $I(f)$ is disconnected for these functions. It has recently come to light that there are many situations where $I(f)$ is in fact
connected. Rippon and Stallard showed that this is the case for any entire function
having a multiply-connected wandering domain \cite{ripponstallardfatoueremenko} and also for
many entire functions of small order of growth \cite{ripponstallardsmallgrowth}.
These examples have
infinitely many critical values. The latter condition is not necessary, as there
are even maps with connected escaping set in the family
\[ f_{a}:\C\to\C;\quad z\mapsto \exp(z)+a\]
of \emph{exponential maps}, which may be considered
the simplest parameter space
of transcendental entire functions. (These maps have no critical points, and
exactly one \emph{asymptotic value}, namely the omitted value $a$.)
Indeed, it was shown in \cite{escapingconnected}
that the escaping set is connected for the standard exponential map $f_0$, while
all path-connected components of $I(f)$ are relatively closed and nowhere dense. The proof
uses previously known results about this particular function, thus leaving open the
possibility of the connectedness of $I(f_0)$ being a rather unusual phenomenon.
Motivated by this result, Jarque \cite{jarqueconnected} showed that $I(f_a)$ is connected
whenever $a$ is a \emph{Misiurewicz parameter}, i.e.\ when
the singular value $a$ is preperiodic. In this note, we extend his proof
to a wider class of parameters. Our results suggest that connectedness of the escaping
set is in fact true for ``most'' parameters for which the singular value belongs to the Julia
set $J(f_a)=\cl{I(f_a)}$.\footnote{
The Julia set is defined as the set of non-normality of the
family of iterates of $f$. For certain entire functions, including all exponential maps,
it coincides with the closure of $I(f_a)$ by
\cite{alexescaping,alexmisha}.}
If $a\notin J(f_a)$, then $f_a$ has an attracting or parabolic
periodic orbit, and it is well-known that the Julia set,
and hence the escaping set, is a disconnected
subset of $\C$.
The main condition used in our paper is the following combinatorial notion, first introduced
in \cite{nonlanding}.
\begin{defn} \label{defn:accessible}
We say that the singular value
$a$ of an exponential map $f=f_a$
is \emph{accessible} if $a\in J(f)$ and
there is an injective curve
$\gamma:[0,\infty)\to J(f)$ with $\gamma(0)=a$, $\gamma(t)\in I(f)$ for $t>0$
and $\re \gamma(t)\to\infty$ as $t\to\infty$.
\end{defn}
\begin{remark}[Remark 1]
It follows from \cite[Corollary 4.3]{markuslassedierk} that this definition
is indeed equivalent to the one given in \cite{nonlanding}, and in particular that
the requirement
$\re \gamma(t)\to\infty$ could be omitted.
\end{remark}
\begin{remark}[Remark 2]
It is not known whether the condition that the singular value $a$ is accessible
is always satisfied when $a$ belongs to the Julia set (as far as we know,
this is an open question even for quadratic polynomials). Known cases include all
Misiurewicz parameters, all parameters for which the singular value escapes
and a number of others. Compare \cite[Remark 2 after Definition 2.2]{nonlanding}.
\end{remark}
If $\gamma$ is as in this definition, then every component of
$f^{-1}(\gamma)$ is a curve
tending to $\infty$ in both directions.
The set $\C\setminus f^{-1}(\gamma)$ consists of
countably many ``strips'' $S_k$ ($k\in\Z$), which we will assume are
labelled such that
$S_k = S_0 + 2\pi i k$ for all $k$. For our purposes, it does not matter
which strip is labelled
as $S_0$, although it is customary to use one of two conventions: either
$S_0$ is the strip that contains the points $r+\pi i$ for sufficiently large
$r$, or alternatively the strip containing the singular value $a$ (provided
that $f(a)\notin \gamma$).
For any point $z\in \C\setminus I(f)$, there is a sequence $\extaddress{\u}=\u_0\u_1 \u_2 \dots $
of integers, called the \emph{itinerary} with respect to
this
partition, such that
$f^{j}(z)\in S_{\u_j}$ for all $j\geq 0$. Every escaping point
whose orbit does not
intersect the curve $\gamma$ also has such an itinerary. The itinerary
of the singular value (if it exists) is called the \emph{kneading sequence}
of $f$.
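As an illustration only (not used in any proof), an approximate itinerary can be computed numerically by replacing the true partition, whose boundaries are the preimage curves of $\gamma$, with horizontal strips of height $2\pi$; the function below and its labelling convention are our own simplification.

```python
import cmath
import math

def approx_itinerary(z, steps, a=0.0):
    """Approximate itinerary of z under f(z) = exp(z) + a.

    Stand-in partition: strip k is {|Im z - 2*pi*k| <= pi}.  The true
    strips are bounded by the preimage curves of gamma and agree with
    these horizontal strips only asymptotically (to the right).  Orbits
    escape extremely fast, so only a few steps stay representable.
    """
    seq = []
    for _ in range(steps):
        seq.append(round(z.imag / (2 * math.pi)))
        z = cmath.exp(z) + a
    return seq
```

For the real starting point $z=1$ under $z\mapsto e^z$, the orbit stays on the real axis, so all approximate entries are $0$.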
\begin{thm} \label{thm:main}
Let $f(z)=\exp(z)+a$ be an exponential map. If
\begin{enumerate}
\item the singular value $a$ belongs to the escaping set, or
\item the singular value $a$ belongs to $J(f)\setminus I(f)$ and
is accessible with non-periodic kneading
sequence,
\end{enumerate}
then $I(f)$ is a connected subset of $\C$.
\end{thm}
\begin{remark}[Remark 1]
All \emph{path-connected} components of $I(f)$ are nowhere dense
under the hypotheses of the theorem \cite[Lemma 4.2]{nonlanding}.
\end{remark}
\begin{remark}[Remark 2]
The theorem applies, in particular, to the exponential map $f=\exp$; this gives an
alternative proof of the main result of \cite{escapingconnected}.
\end{remark}
Conjecturally, if $f_a$ has a Siegel disk with bounded-type rotation number, then the
singular value $a$ is accessible in our sense, and furthermore accessible from the
Siegel disk. In this case, the kneading sequence would be periodic and the Julia set
(and hence the escaping set) disconnected. On the other hand, it is plausible that
the escaping set of $f_a$ is connected whenever $f_a$ does not have a nonrepelling periodic
orbit.
The second half of Theorem \ref{thm:main} does
not have a straightforward generalization to
other families. This is because the proof relies on the fact that
$f^{-1}(\gamma)\subset I(f)$ because the singular value $a$ is omitted, and hence connected
sets of
nonescaping points cannot cross the partition boundaries.
In fact, Mihaljevi\'c-Brandt \cite{helenaconjugacy} has
shown that for many postcritically preperiodic
entire functions, including those in the complex cosine family
$z\mapsto a\exp(z) + b\exp(-z)$, the escaping set is disconnected.
On the other hand, the proof of the first part of our theorem should apply to
much more general functions;
in particular, to all cosine maps for which both critical values escape.
We recall that \emph{Eremenko's conjecture} \cite{alexescaping}
states that every connected component of the escaping set of a transcendental entire function
is unbounded. This is true for all exponential maps \cite{expescaping} and indeed
for much larger classes of entire functions \cite{strahlen,eremenkoproperty}.
Despite progress, the question remains open in general, while it is now known
that some related but stronger properties may fail
(compare e.g.\ \cite{strahlen}). The connectivity of the escaping set for a wide variety of
exponential maps illustrates some of the counterintuitive properties one may encounter in the
study of connected components of a planar set that is neither
open nor closed (and exposes the difficulties of constructing a counterexample
should the conjecture turn out to be false). It seems likely that a better
understanding of these
phenomena will provide further insights into Eremenko's conjecture.
\subsection*{Structure of the article.} In Section \ref{sec:escapingpoints},
we collect some background about the escaping set of an exponential map.
In Section \ref{sec:setsofnonescapingpoints}, we establish an important
preliminary result. The proof of Theorem \ref{thm:main} is then carried
out in Section \ref{sec:proof}, separated into two different cases
(Theorems \ref{thm:nonperiodic} and \ref{thm:nonendpoint}).
\subsection*{Basic notation.} As usual, we denote the complex plane by
$\C$, and the Riemann sphere by $\Ch=\C\cup\{\infty\}$. The closure
of a set $A$ in $\C$ and in $\Ch$ will be
denoted $\cl{A}$ resp.\ $\hat{A}$.
Boundaries will be understood to be
taken in $\Ch$, unless explicitly stated otherwise.
\section{Escaping points of exponential maps}\label{sec:escapingpoints}
It was shown by Schleicher and Zimmer \cite{expescaping}
that the escaping set $I(f_a)$ of any exponential map is
organized in curves to infinity, called \emph{dynamic rays} or
\emph{hairs}, which come equipped with a combinatorial structure and
ordering. We do not require a precise understanding of this
structure. Instead, we
take an axiomatic approach, collecting here only
those properties that will be used in
our proofs.
\begin{prop} \label{prop:properties}
Let $f(z)=\exp(z)+a$ be an exponential map.
\begin{enumerate}
\item If $a\in I(f)$, then $a$ is accessible in the sense of
Definition \ref{defn:accessible}.
\item If $U\subset\C$ is an open set with $U\cap J(f)\neq\emptyset$, then
there is a curve $\gamma:[0,\infty)\to I(f)$ with $\gamma(0)\in U$ and
$\re \gamma(t)\to \infty$ as $t\to\infty$. \label{item:toplusinfinity}
\end{enumerate}
\end{prop}
\begin{proof}
The first statement
follows from \cite[Theorem 6.5]{expescaping}.
To prove the second claim, we use the fact that there is a collection of
uncountably many pairwise disjoint curves to $\infty$ in the
escaping set. (This also follows from \cite{expescaping}, but has
been known much longer: see \cite{dgh,devaneytangerman}.)
Hence there is a curve $\alpha:[0,\infty)\to I(f)$ with
$\lim_{t\to\infty}|\alpha(t)|=\infty$ and
$f^j(a)\notin \alpha$ for all $j\geq 0$. In particular, $f^{-1}(\alpha(0))$ is an infinite set,
and by Montel's theorem, there exist
$n\geq 1$ and
some $z_0\in U$ such that $f^n(z_0)=\alpha(0)$. We can analytically
continue the branch of $f^{-n}$ that takes $\alpha(0)$ to $z_0$ to obtain
a curve $\gamma:[0,\infty)\to I(f)$ with
$f^n\circ\gamma = \alpha$ and $\gamma(0)=z_0\in U$.
We have $|f(\gamma(t))|\to\infty$ as
$t\to\infty$. As $|f(z)|\leq \exp(\re(z))+|a|$ for all $z$, we thus have
$\re\gamma(t)\to+\infty$ as $t\to\infty$, as
claimed.
\end{proof}
\subsection*{Exponentially bounded itineraries}
For the rest of this section, fix an exponential map $f(z)=\exp(z)+a$
with accessible
singular value, and an associated partition into itinerary strips
$S_j$. Recall that we defined the itinerary of a point only if its orbit
never belongs to the strip boundaries.
It simplifies terminology if we can speak of itineraries for
\emph{all} points. Hence we adopt the (slightly non-standard)
convention that any sequence
$\extaddress{\u}=\u_0 \u_1 \u_2 \dots$ with $f^j(z)\in\cl{S_{\u_j}}$ for all $j\geq 0$ is called an
itinerary of $z$. Thus $z$ has a \emph{unique itinerary} if and only if
its orbit does not enter the strip boundaries.
\begin{defn}
An itinerary $\extaddress{\u}=\u_0 \u_1 \u_2 \dots$ is \emph{exponentially bounded} if there is
a number $x\geq 0$ such
that
$2\pi |\u_j| \leq \exp^j(x)$
for all $j\geq 0$.
\end{defn}
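A finite prefix of an itinerary can be tested against this definition numerically; the following sketch (function name and overflow cap are our own choices) compares each entry against the tower $\exp^j(x)$, which leaves floating-point range after very few iterations.

```python
import math

def is_exp_bounded_prefix(itinerary, x):
    """Check 2*pi*|u_j| <= exp^j(x) for a finite prefix u_0, u_1, ... .

    exp^j denotes the j-fold iterate of exp.  Once the tower exceeds a
    safe cap, exp would overflow a double, and the bound is treated as
    satisfied for every entry of realistic size.
    """
    tower = float(x)               # exp^0(x) = x
    for u in itinerary:
        if 2.0 * math.pi * abs(u) > tower:
            return False
        if tower > 700.0:          # exp(tower) would overflow a double
            return True
        tower = math.exp(tower)
    return True
```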
\begin{remark}[Remark 1]
At first glance it may seem that the itinerary of every point
$z\in\C$ is exponentially bounded,
since certainly $|f^n(z)|$, and thus $|\im f^n(z)|$, are exponentially bounded
sequences.
However,
in general, we have no a priori control over how the imaginary parts in the strips $S_j$ behave
as the real parts tend to $-\infty$.
Nonetheless, it seems plausible that all points have exponentially bounded
itineraries;
certainly this is true for well-controlled cases such as Misiurewicz
parameters.
We leave this question aside, as its resolution is not
required for our purposes.
\end{remark}
\begin{remark}[Remark 2]
If $z$ does not have a unique itinerary, we take the statement ``$z$ has exponentially
bounded itinerary'' to mean that all itineraries of $z$ are exponentially bounded.
However, two itineraries of $z$
differ by at most $1$ in every entry, so this is equivalent to
saying that $z$ has at least one exponentially bounded itinerary.
\end{remark}
\begin{prop} \label{prop:expbounded}
If $z\in\C$ belongs to the closure of some
path-connected component of $I(f)$ (in particular, if $z\in I(f)$ or $z=a$), then
$z$ has exponentially bounded itinerary.
\end{prop}
\begin{proof}
Let $z_0\in I(f)$, and let $\extaddress{\u}$ be an itinerary of $z_0$.
Then $\re f^j(z_0)\to+\infty$, and in particular there exists $R\in\R$ such that
$\re f^j(z_0)\geq R$ for all $j\geq 0$.
The domain $S_0$ is bounded by two components of $f^{-1}(\gamma)$. Each of these
has bounded imaginary parts
in the direction where the real parts tend to $+\infty$.
(In fact, each preimage component is asymptotic to a straight line
$\{\im z = 2\pi k\}$ for some $k\in\Z$, but we do not require this fact.)
In particular,
\[ M := \sup\{ |\im z|:\ z\in \cl{S_0},\ \re z \geq R\} <\infty.\]
Then it follows that
$|\im f^j(z_0)-2\pi \u_j| \leq M$ for all $j$, and hence
\[ 2\pi |\u_j| \leq |\im f^j(z_0)|+M. \]
Set
$\alpha := \ln(3(|a|+M+2))$. Elementary calculations give
\begin{align} \notag
\exp(|z|+\alpha)&= 3(|a|+M+2)\exp(|z|) \geq
\exp(|z|) + 2(|a|+M+2) \\ \label{eqn:elementary}&\geq
\exp(\re z) + |a| + M + \ln 3 + (|a|+M+2) \\ \notag&\geq
\exp(\re z) + |a|+ M + \alpha \geq |f(z)|+M+\alpha
\end{align}
for all $z\in\C$. It follows that
\[ 2\pi |\u_j| \leq |\im f^j(z_0)|+M \leq |f^j(z_0)|+M \leq \exp^j(|z_0|+\alpha) \]
for all $j\geq 0$, so $z_0$ has exponentially bounded itinerary.
Also, it is shown in \cite{markuslassedierk}
that the partition boundaries, i.e.\ the components of
$f^{-1}(\gamma)$, are path-connected components of $I(f)$ (where $\gamma\subset I(f)$
is the curve connecting the singular value to infinity). So if $C$ is the path-connected
component of $I(f)$ containing $z_0$, then
$f^j(C)\subset\cl{S_{\u_j}}$, and hence $f^j(\cl{C})\subset\cl{S_{\u_j}}$, for all
$j\geq 0$. So all points in $\cl{C}$ have exponentially bounded itinerary, as claimed.
\end{proof}
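The elementary estimate used in the proof above can also be spot-checked numerically; the following sketch (parameter values in the test are arbitrary, chosen only for illustration) verifies $\exp(|z|+\alpha)\geq |f(z)|+M+\alpha$ on random samples.

```python
import cmath
import math
import random

def check_elementary(a, M, trials=1000, box=50.0, seed=0):
    """Spot-check exp(|z| + alpha) >= |f(z)| + M + alpha for
    f(z) = exp(z) + a, with alpha = ln(3(|a| + M + 2)), on random z
    drawn from a square of side 2*box.  Numerical illustration only.
    """
    rng = random.Random(seed)
    alpha = math.log(3 * (abs(a) + M + 2))
    for _ in range(trials):
        z = complex(rng.uniform(-box, box), rng.uniform(-box, box))
        lhs = math.exp(abs(z) + alpha)
        rhs = abs(cmath.exp(z) + a) + M + alpha
        if lhs < rhs:
            return False
    return True
```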
\subsection*{Escaping endpoints}
There are two types of escaping points:
\begin{defn} \label{defn:endpoint}
Suppose that $f_a$ is an exponential map and $z\in I(f)$.
We say that $z$ is a \emph{non-endpoint} if there is an
injective curve $\gamma:[-1,1]\to I(f_a)$ with
$\gamma(0)=z$; otherwise $z$ is called an \emph{endpoint}.
\end{defn}
It follows from \cite{markuslassedierk} that this coincides with
the classification into ``escaping endpoints of rays'' and
``points on rays'' given in \cite{expescaping}; we use the above
definition here because it is easier to state. In \cite{expescaping},
escaping endpoints were completely classified; we only require the
following fact.
\begin{prop}[{\cite{expescaping}}]
Let $f_a$ be an exponential map with $a\in I(f)$ (so in particular $a$ is accessible), and suppose
$a$ is an endpoint.
Then the kneading sequence of $f$ is unique and unbounded.
\end{prop}
For exponential maps with an attracting fixed point, any non-endpoint is
inaccessible from the
attracting basin \cite{devgoldberg}. The following is a variant of this fact that holds
for every exponential map.
\begin{prop} \label{prop:comb}
Suppose that $f=\exp(z)+a$ is an exponential map and suppose that
$z\in I(f)$ is not an endpoint. Then any closed connected
set $A\subset\C$ with $z\in A$ and $\#A>1$
contains uncountably many escaping points.
\end{prop}
\begin{proof}[Sketch of proof]
The idea is that any
path-connected component of $I(f)$ is accumulated on
both from above and below by other such components. This is by
now a well-known argument; see e.g.\
\cite[Lemma 3.3]{nonlanding} and \cite[Lemma 13]{expbifurcationlocus},
where it is used in a slightly different context. We
provide a few more details for completeness.
We may assume that $A$ intersects only countably many
different path-connected
components of $I(f)$; otherwise we are done. Let
$\gamma:[-1,1]\to I(f)$ be as in Definition \ref{defn:endpoint},
with $\gamma(0)=z$.
Then there are two sequences $\gamma^+_n:[-1,1]\to I(f)$ and
$\gamma^-_n:[-1,1]\to I(f)$ of curves that do not intersect $A$ and
that converge locally
uniformly to $\gamma$ from both sides of $\gamma$. Since $A$ is closed,
it follows that we must have either $\gamma\bigl([-1,0]\bigr)\subset A$
or $\gamma\bigl([0,1]\bigr)\subset A$.
\end{proof}
\section{Closed subsets of non-escaping points}
\label{sec:setsofnonescapingpoints}
Let us say that a set $A\subset\C$ \emph{disconnects} the set
$C\subset\C$ if
$C\cap A=\emptyset$ and
(at least) two different connected components of $\C\setminus A$ intersect $C$.
The following lemma was used in \cite{jarqueconnected} to prove the connectivity
of the escaping set for Misiurewicz exponential maps.
\begin{lem}\label{lem:disconnected}
Let $C\subset\C$. Then $C$ is disconnected if and only if there is a closed connected set
$A\subset\C$ that disconnects $C$.
\end{lem}
\begin{proof}
The ``if'' part is trivial. If $C$ is disconnected, then by the
definition of connectivity
there are two points $z,w\in C$ and an open set $U\subset\C$ with
$\partial U\cap C=\emptyset$
such that $z\in U$ and $w\notin U$. By passing to a connected component
if necessary, we may assume that $U$ is connected. Let
$V$ be the connected component of
$\Ch\setminus \hat{U}$ that contains $w$. Then $V$ is simply connected
with $\partial V\subset \partial U\subset\Ch\setminus C$. It follows that
$\C\setminus V$ has exactly one connected component. Thus
$A := \partial V\cap \C$ is a closed connected set that
disconnects $z$ and $w$, as required.
\end{proof}
Thus, in order to prove the connectedness of the escaping set,
we need to study closed connected sets of non-escaping points and show
that these cannot disconnect $I(f)$. The following proposition
will be the main ingredient in this argument.
\begin{prop} \label{prop:boundedimage}
Let $f=f_a$ be an exponential map with accessible singular value $a$.
Let $A\subset\C$ be closed and connected. Suppose that furthermore the points in
$A$ have uniformly exponentially bounded itineraries, i.e.\ there exists
a number $x$ with the following property: if $n\geq 0$ and $\u\in\Z$ such that
$f^n(A)\cap \cl{S_{\u}}\neq\emptyset$, then $2\pi|\u|\leq \exp^n(x)$.
If $A\cap I(f)$ is bounded,
then there is $n\geq 0$ such that $f^n(A)$ is bounded.
\end{prop}
This
is essentially a (simpler) variant
of \cite[Lemma 6.5]{nonlanding}, and
can be proved easily in the same manner using
the combinatorial terminology of that paper.
Instead, we give an alternative proof
--- quite similar to the proof of the main theorem of
\cite{jarqueconnected} --- that does not require
familiarity with these concepts.
\begin{proof}
We prove the contrapositive, so suppose that $f^n(A)$ is unbounded for all $n$.
(In particular, $A$ is nonempty.)
We need to show that $A\cap I(f)$ is unbounded.
As in the proof of Proposition \ref{prop:expbounded}, set
\[ M := \sup\{ |\im z|:\ z\in S_0,\ \re z \geq 0\} \]
and $\alpha := \ln(3(|a|+M+2))$. Also pick some $z_0\in A$ and let
$x_0 \geq \max(|z_0|,x)+\alpha$ be arbitrary.
The hypotheses and (\ref{eqn:elementary}) imply that
\begin{equation}\label{eqn:imparts}
|\im f^n(z)| \leq \exp^n(x) + M \leq \exp^n(x_0) \end{equation}
whenever $z\in A$ and $n\geq 0$ such that $\re f^n(z)\geq 0$. Also,
again by (\ref{eqn:elementary}),
\begin{equation}\label{eqn:z0} |f^n(z_0)|\leq \exp^n(x_0). \end{equation}
Let $n\in\N$. Recall that $f^n(A)$ is connected and unbounded by assumption.
Hence by (\ref{eqn:z0}), there exists some
$z_n\in A$ with
\[ |f^n(z_n)| = \exp^n(x_0). \]
We claim that
\begin{equation} \label{eqn:pullback}
\exp^j(x_0)-1\leq |f^j(z_n)| \leq 2\exp^j(x_0)+1
\end{equation}
for $j=0,\dots,n$. Indeed, if $j<n$ is such that
(\ref{eqn:pullback}) is true for $j+1$, then
\begin{align*}
\re f^j(z_n) &= \ln|f^{j+1}(z_n)-a| \geq
\ln(|f^{j+1}(z_n)|-|a|) \\
&\geq \ln(\exp^{j+1}(x_0)-|a|-1) =
\exp^j(x_0) - \ln\frac{\exp^{j+1}(x_0)}{\exp^{j+1}(x_0)-|a|-1} \\
&\geq \exp^j(x_0) - \ln 2 > \exp^j(x_0) - 1. \end{align*}
Similarly, we see that
\[
\re f^j(z_n) \leq \exp^j(x_0)+1. \]
Together with \eqref{eqn:imparts}, this yields~\eqref{eqn:pullback} for $j$.
Now let $z$ be any accumulation point of the sequence $z_n$; since
$A$ is closed (and the sequence is bounded), we have $z\in A$.
By continuity, \eqref{eqn:pullback} holds also for $z$, and hence
$z\in A\cap I(f)$. As $x_0$ can be chosen arbitrarily large, we have shown that
$A\cap I(f)$ is unbounded, as required.
\end{proof}
\section{Proof of Theorem \ref{thm:main}} \label{sec:proof}
The following two lemmas study the properties of sets that can
disconnect the escaping set of an exponential map with accessible singular value.
\begin{lem} \label{lem:Aunbounded}
Let $f$ be an exponential map and
suppose that $A\subset\C\setminus I(f)$
disconnects the escaping set. Then the real parts of $A$ are not bounded
from above.
\end{lem}
\begin{proof}
This follows immediately from
Proposition~\ref{prop:properties}~(\ref{item:toplusinfinity}).
\end{proof}
\begin{lem} \label{lem:stillseparating}
Let $f=f_a$ be an exponential map with accessible singular
value $a$.
Suppose that $A\subset\C\setminus I(f)$ is connected and
disconnects the escaping set. Then
\begin{enumerate}
\item If the real parts of $A$ are bounded from
below, then $f(A)$ also disconnects the escaping set.
\item The common itinerary of the points in $A$ is exponentially bounded.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\gamma$ be the curve from Definition \ref{defn:accessible}.
Let $U$ be the component of $\C\setminus A$ that contains a left half plane, and let
$V\neq U$ be another component of $\C\setminus A$ with $V\cap I(f)\neq\emptyset$.
(Such a component exists by assumption.) Every component of
$f^{-1}(\gamma)$ intersects every left half plane. Thus $f^{-1}(\gamma)\subset U$, and
in particular $\cl{V}\cap f^{-1}(\gamma)=\emptyset$.
This means that $\cl{V}$ is contained in a single itinerary domain $S_j$
and the real parts in $V$ are bounded from below.
As $f|_{S_j}$ is
a conformal isomorphism between $S_j$ and $\C\setminus \gamma$, it follows
that $f(V)$ is a component of $\C\setminus f(A)$ that intersects
the escaping set but does not intersect $\gamma$. Hence $f(A)$ disconnects
the escaping set, which proves the first claim.
Note that the second claim is trivial if $\cl{A}$ intersects the escaping set, since every
escaping point has exponentially bounded itinerary by Proposition
\ref{prop:expbounded}. So we may suppose that
$A$ is a closed set. Also recall that $a$ has exponentially bounded itinerary, which means that
we may assume that
$a\notin \cl{f^n(A)}$ for all $n\geq 0$. Then the real parts of $f^n(A)$ are
bounded from below for all $n$.
Hence for all $n\geq 0$,
$f^n(A)$ is a closed subset of $\C$ with real parts bounded from below and disconnecting
$I(f)$. Let us say that $f^n(A)$ \emph{surrounds} a set $X\subset\C\setminus f^n(A)$ if
$X$ does not belong to the component of $\C\setminus f^n(A)$ that contains a left half plane.
By assumption, $A$ surrounds some escaping point $z_0$. We claim that, for every $n\geq 0$,
\begin{enumerate}
\item[(*)] $f^n(A)$ surrounds either $f^n(z_0)$ or $f^j(a)$ for some $j< n$.
\end{enumerate}
This follows by induction, using the same argument as in the first part of
the proof.
Indeed,
let $w=f^n(z_0)$ or $w=f^j(a)$ be the point surrounded by $f^n(A)$ by the
induction
hypothesis, let $U$ be the component of $\C\setminus f^n(A)$ containing $w$, and let
$S_j$ be the itinerary strip containing $f^n(A)$ and hence $U$. Now
$f:S_j\to\C\setminus\gamma$ is a conformal isomorphism. Thus either $f(U)$, and hence
$f(w)$, is surrounded
by $f^{n+1}(A)$, or $U$ is mapped to the component of $\C\setminus f^{n+1}(A)$ that contains
a left half plane and $f^{n+1}(A)$ surrounds $\gamma$,
and hence $a$. The induction is complete in either case.
Because
$z_0$ and $a$ both have exponentially bounded itineraries, it follows from (*) that all points in
$A$ do also.
\end{proof}
Now we are ready to prove Theorem \ref{thm:main}. We begin by treating the
case where $f$ has a unique and non-periodic kneading sequence. This includes
the second case of Theorem \ref{thm:main}, as well as the case of all
escaping endpoints.
\begin{thm} \label{thm:nonperiodic}
Suppose that $f=f_a$ is an exponential map with accessible singular value with
unique kneading sequence $\extaddress{\u}=\u_0 \u_1 \u_2 \dots$.
If $\extaddress{\u}$ is not periodic, then $I(f)$ is connected.
\end{thm}
\begin{proof}
We prove the contrapositive. So suppose that $I(f)$ is disconnected; we must show
that $\extaddress{\u}$ is periodic. By Lemma \ref{lem:disconnected}, there exists
a closed connected set $A\subset\C$ that disconnects the set of escaping points.
Then all
points of $A$ have a common itinerary $\extaddress{\u}'$, and this itinerary is exponentially
bounded by Lemma \ref{lem:stillseparating}. Note that
\begin{enumerate}
\item[(*)] \emph{If $k\geq 0$ is such that $f^k(A)$ is unbounded to the left,
then $a\in\cl{f^{k+1}(A)}$, and hence $\sigma^{k+1}(\extaddress{\u}')=\extaddress{\u}$.}
\end{enumerate}
(Here $\sigma$ denotes the shift map; i.e.\
$\sigma(\u_0 \u_1 \u_2 \dots) = \u_1 \u_2 \dots$.)
By Proposition \ref{prop:boundedimage}, $f^k(A)$ is bounded for some $k$;
let $k_1$ be minimal with this property.
Since $A$ is unbounded by Lemma \ref{lem:Aunbounded}, we must have
$k_1>0$, and since $f^{k_1-1}(A)$ is contained in one of the domains
$S_j$, it follows that $f^{k_1-1}(A)$ is unbounded to the left.
Now let $k_0$ be the minimal number for which $f^{k_0}(A)$ is unbounded
to the left. By Lemma \ref{lem:stillseparating},
$f^{k_0}(A)$ also disconnects the escaping set, and hence is unbounded
to the right by Lemma \ref{lem:Aunbounded}. Thus
$f^{k_0+1}(A)$ is unbounded, and therefore $k_0+1<k_1$ by definition.
So (*) implies that
\[ \sigma^{k_1}(\extaddress{\u}')= \extaddress{\u} = \sigma^{k_0+1}(\extaddress{\u}'), \]
and hence $\sigma^{k_1-k_0-1}(\extaddress{\u})=\extaddress{\u}$. Thus we have seen that
$\extaddress{\u}$ is periodic, as claimed.
\end{proof}
We now complete the proof of Theorem \ref{thm:main} by covering the case
where the singular value is escaping but not an endpoint. (Note that there
are parameters that satisfy the hypotheses of both Theorem
\ref{thm:nonperiodic} and Theorem \ref{thm:nonendpoint}.)
\begin{thm} \label{thm:nonendpoint}
Suppose that $f(z)=\exp(z)+a$ is an exponential map with $a\in I(f)$ such that $a$ is
a non-endpoint. Then $I(f)$ is connected.
\end{thm}
\begin{proof}
The singular value is accessible by Proposition \ref{prop:properties}. As shown in the proof of
Theorem \ref{thm:nonperiodic}, if $I(f)$ were disconnected,
there would be an unbounded, closed, connected set
$A\subset \C\setminus I(f)$ and some number $k_0$ such that
$f^{k_0+1}(A)\cup\{a\}$ is closed and connected. But this is impossible
by Proposition \ref{prop:comb}.
\end{proof}
\bibliographystyle{hamsplain}
\subsection{\textsc{Starformer}}
\textsc{Starformer} is a multi-task transformer model designed to perform segmentation, tracking, action recognition and re-identification (STAR) in livestock by extending the popular DETR object detection model \cite{carion2020end, zhu2020deformable}. DETR introduced the concept of \emph{object queries} -- a fixed number of learned positional embeddings that can be extracted as representations of possible object instances in an image. Motivated by this, \textsc{starformer} extends DETR to learn individual embeddings that are more discerning of each instance via STAR multi-task learning.
The base model for \textsc{starformer} is a DETR model with detection and segmentation heads pretrained on the COCO detection and panoptic segmentation datasets. Since pigs are not among the 80 classes of the COCO dataset, we trained a new classification head by re-training the base DETR\footnote{https://github.com/facebookresearch/detr} architecture with both its encoder and decoder unfrozen.
Figure \ref{main_fig} represents the architecture of \textsc{starformer} -- a ResNet-101 backbone followed by a transformer comprising 6 encoder-decoder layers with a fixed number of positional encodings (object queries).
Based on \emph{a priori} knowledge of the number of pigs, the transformer module generates $N$ latent embeddings, each corresponding to an individual pig. The key idea of \textsc{starformer} is to improve the embeddings by designing four heads, each optimizing a loss function of the STAR tasks.
For segmentation, \textsc{starformer} uses a multi-head attention layer and a feature pyramid network (FPN)-style CNN.
The detection head consists of a Feed Forward Network (FFN) -- a 3-layer perceptron with ReLU activations -- and a linear projection layer. The FFN predicts a bounding box, and the linear layer assigns a label to each pig. Actions are detected by parsing the instance-level embeddings (the decoder output) through another FFN, which classifies each pig into the ``Active'' (standing) or ``Inactive'' (sitting/lying) class. As shown in Figure \ref{main_fig}, for the tracking head we devise a spatio-temporal contrastive training approach that aims to increase the similarity of an individual pig across the temporal direction while ensuring that the embeddings of different pigs within the same frame are dissimilar. To further enhance these embeddings for long-term pig re-identification, we extend
the spatio-temporal contrastive training approach to non-consecutive frames. Frames are taken pairwise from a batch of $K$ frames, resulting in $\Comb{K}{2}$ possible combinations. Such a training strategy can extract motion patterns and shape variations in pigs, making the model implicitly learn individual representations even for long-term scenarios.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.43]{starformer.pdf}
\caption{Schematic representation of \textsc{starformer} designed for livestock monitoring. The four losses corresponding to the four heads, namely detection ($\mathcal{L_D}$), segmentation ($\mathcal{L_S}$), action ($\mathcal{L_A}$) (red - active, yellow - inactive) and spatio-temporal contrastive ($\mathcal{L_{STC}}$) losses, are shown.}
\label{main_fig}
\end{figure}
\subsection{Multi-objective formulation for embedding enrichment}
We discuss here briefly the loss functions associated with the different heads of our \textsc{starformer} network.
\textbf{Detection loss --} following the DETR strategy, we employ the Hungarian loss $\mathcal{L}_D$ \cite{carion2020end}, but with only one class (pigs). This loss primarily combines the classification loss (cross-entropy loss training the model to classify as pig or background) and the bounding box loss (linear combination of L1 loss, and generalised IoU loss).
\textbf{Segmentation loss --} we pass the feature embeddings to the instance segmentation head, and simply use an \emph{argmax} over the mask scores at each pixel, and assign the corresponding categories to the resulting masks.
The final resolution of the masks has stride of four and each mask is supervised independently using the DICE/F-1 loss \cite{milletari2016fully} and Focal loss \cite{lin2017focal}.
\textbf{Spatio-Temporal Contrastive Loss --} to ensure that our tracking model works well against the strong visual similarity among the pigs, we introduce a customized contrastive loss term that trains the model to better differentiate between the multiple pigs within the same frame, as well as to improve the motion flow across subsequent frames for any individual pig. To compute the spatio-temporal contrastive loss $\mathcal{L}_{STC}$, we use the embeddings $\psi_t^{(i)}$ obtained from the last decoder layer for every individual pig from two closely spaced frames of the video. Here, $i \in \{1, 2, \hdots, N\}$ denotes the index of the pig, and $N$ denotes the total number of pigs as well as the number of embeddings per frame. We define $\mathcal{L}_{STC}$ as
\begin{equation}
\mathcal{L}_{STC} = \lambda_s\mathcal{L}_s + \lambda_{ds}\mathcal{L}_{ds},
\end{equation}
where, $\mathcal{L}_s$ and $\mathcal{L}_{ds}$ denote similarity and dissimilarity loss terms, and $\lambda_s$ and $\lambda_{ds}$ are the respective weighting terms.
To compute the measures of similarity and dissimilarity, we employ the cosine distance metric. Further, the similarity loss $\mathcal{L}_{s}$ is computed for each frame individually, and for the $t^{\text{th}}$ frame, it can be stated as
\begin{align}
\mathcal{L}_{s} = \sum_{i, j}\frac{\psi_t^{(i)} \cdot \psi_t^{(j)}}{\|\psi_t^{(i)}\|\|\psi_t^{(j)}\|} \enskip \forall \enskip {i, j}\in \{1, 2, \hdots, N\} \text{ and }i \neq j.
\end{align}
To compute $\mathcal{L}_{ds}$, we choose $\tau$ subsequent frames of the video and compute the loss for each frame pair for all $N$ objects or animals.
Based on this, we define
\begin{equation}
\mathcal{L}_{ds} = \sum_{i=1}^N \sum_{t_1, t_2} \left(1 - \frac{\psi_{t_1}^{(i)} \cdot \psi_{t_2}^{(i)}}{\|\psi_{t_1}^{(i)}\|\|\psi_{t_2}^{(i)}\|}\right) \enskip \forall \enskip t_1, t_2 \in \tau \text{ and } t_1 \neq t_2.
\end{equation}
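A minimal sketch of these loss terms on plain Python lists follows (function names are ours; an actual implementation would operate on batched decoder tensors, and we read $\mathcal{L}_s$ as summed over the frames in the batch):

```python
import math

def _cos(u, v):
    """Cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_loss(frame):
    """L_s for one frame: pairwise cosine similarity of distinct pigs."""
    n = len(frame)
    return sum(_cos(frame[i], frame[j])
               for i in range(n) for j in range(n) if i != j)

def dissimilarity_loss(frames):
    """L_ds over tau frames: 1 - cos of the same pig in distinct frames."""
    tau, n = len(frames), len(frames[0])
    return sum(1.0 - _cos(frames[t1][i], frames[t2][i])
               for i in range(n)
               for t1 in range(tau) for t2 in range(tau) if t1 != t2)

def stc_loss(frames, lam_s=1.0, lam_ds=1.0):
    # L_STC = lambda_s * L_s + lambda_ds * L_ds; L_s summed over frames.
    return (lam_s * sum(similarity_loss(f) for f in frames)
            + lam_ds * dissimilarity_loss(frames))
```

With orthogonal embeddings within each frame and identical embeddings per pig across frames, both terms vanish, which is the configuration the loss drives the model towards.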
\textbf{Action loss --} we conjecture that basic activity such as sitting or standing can help augment the learned embeddings $\psi_t^{(i)}$ $\forall i \in \{1, \dots, N\}$ with useful information about a pig's shape and size. This is important, as among the most discerning factors for pigs are their shapes and sizes. We place an action classification head that classifies each pig into two classes, i.e.\ active (standing) or inactive (sitting/lying), using a binary
cross-entropy loss, denoted $\mathcal{L}_A$.
\subsection{Baseline and Evaluation Metrics}
To understand and benchmark how the different heads of \textsc{starformer} contribute to its performance, we introduce multiple baselines and evaluation metrics, which we discuss below with respect to the four tasks.
\textit{Segmentation.} The performance of \textsc{starformer} on instance-level segmentation is compared with the state-of-the-art implementation of Mask R-CNN \cite{he2017mask} as in \cite{wu2019detectron2} and with DETR whose decoder and object queries are fine-tuned on our training dataset. Note that, while evaluating \textsc{starformer} and DETR, we fix the number of predictions to be equal to the number of pigs in the video. This is beneficial in a livestock setting, as the number of animals in a closed environment remains fixed over the course of a video, and it constrains the model so that it cannot over- or under-predict the number of embeddings. We report the mean average precision (mAP) averaged over IoU thresholds from 0.5 to 0.95 (written as `0.5:0.95'), as well as mAP at the 0.5 threshold.
\textit{Tracking.} The segmentation masks obtained from the segmentation models are used to perform multi-object tracking by matching these masks temporally. Pig tracking is constrained such that the number of pigs remains the same throughout the video. We use this constraint and fix the number of predictions to the number of pigs $N$ in the video, which is known \emph{a priori}. For each video, we consider only the top $N$ predictions out of all object queries. Note that we can also obtain an initial estimate of the number of pigs in different ways, e.g.\ by taking the mode of the number of masks estimated over a period of time (a burn-in period before tracking starts), which we have found to be a robust estimator of the number of pigs (pig counting). For the first frame, we form a one-to-one mapping between the ground-truth instances and the predicted instances by greedily matching the pairs with maximum mask IoU at each step. Using this mapping and the mapping between the predicted instances across time frames, we match the ground-truth instance of each frame with its corresponding predicted instance.
There are many methods available for tracking using segmentation masks or embeddings \cite{kuhn1955hungarian, bewley2016simple}, but in livestock monitoring the animals (their number and instances) are fixed, the camera is not moving, and no animal leaves or enters the scene. These restrictions enable us to perform tracking with a rather straightforward matching strategy. For a proper analysis of how the tracking module performs, we use two different matching algorithms and compare the performance of each. Brief descriptions follow below.
\vspace{-1.5em}
\begin{enumerate}[noitemsep]
\item \emph{Matching by mask.} Similarity between pigs is computed as IoU of their segmentation masks. We match the pigs by greedily matching the pair of pigs that exhibit highest IoU among all the pairs.
\item \emph{Matching by embedding.} To compute the similarity between embeddings corresponding to different pigs, the cosine distance is used. For every matched pair, this distance must be less than a threshold $R$.
\end{enumerate}
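The second strategy can be sketched as follows (an illustrative sketch, assuming embeddings are linked greedily by smallest cosine distance and that pairs beyond the threshold $R$ are left unmatched; the names and the greedy order are our assumptions):

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def match_by_embedding(prev_emb, curr_emb, R=0.5):
    """Greedily link pigs across frames by smallest cosine distance.

    Pairs whose distance exceeds the threshold R stay unmatched.
    Returns a dict: index in previous frame -> index in current frame.
    """
    d = np.array([[cosine_distance(p, c) for c in curr_emb] for p in prev_emb])
    mapping = {}
    for _ in range(min(len(prev_emb), len(curr_emb))):
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] > R:
            break                      # no remaining pair is close enough
        mapping[int(i)] = int(j)
        d[i, :] = np.inf
        d[:, j] = np.inf
    return mapping
```

Matching by mask follows the same greedy scheme with mask IoU in place of embedding distance.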
We propose \emph{constrained multi-object tracking and segmentation accuracy} (cMOTSA) as a metric for evaluating tracking problems with such constraints.
Due to the constraint of a fixed livestock count throughout the video, there are no false negatives (FN), since a one-to-one mapping now exists between the ground truths and their corresponding predicted instances.
cMOTSA is defined as the ratio of the number of true positives $|\text{TP}|$ (matched instance pairs with a mask IoU greater than 0.5) to all positive predictions ($|\text{TP}| + |\text{FP}|$), where false positives (FP) are instance pairs with a mask IoU less than or equal to 0.5. We hope that this evaluation metric accurately assesses the capability of \textsc{starformer} to learn unique representations for each pig instance. Further, we also evaluate the tracking performance using scMOTSA, a soft variant of cMOTSA defined as $\text{scMOTSA} = \widetilde{\text{TP}} / (|\text{TP}| + |\text{FP}|)$, where $\widetilde{\text{TP}}$ denotes soft true positives. See details in the supplementary material.
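Given the IoUs of all matched pairs, the metric reduces to a few lines (an illustrative sketch; `cmotsa` is our name, and we assume every ground-truth instance has been matched to exactly one prediction as described above):

```python
def cmotsa(pair_ious, thresh=0.5):
    """cMOTSA from the mask IoUs of all matched (ground truth, prediction) pairs.

    With a fixed animal count every matched pair is either a true positive
    (IoU > thresh) or a false positive, and there are no false negatives,
    so the denominator is simply the number of pairs.
    """
    tp = sum(1 for iou in pair_ious if iou > thresh)
    total = len(pair_ious)
    return tp / total if total else 0.0
```

scMOTSA would replace the true-positive count with a soft per-pair score; we do not sketch it here, since its exact definition is given in the supplementary material.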
The standard MOTS evaluation metrics, as stated in \cite{voigtlaender2019mots}, cannot be used for our study, since they require that there be no overlap between the masks of any two objects in the ground truth or in the predictions; in other words, every pixel may be assigned to at most one object. In our dataset this is not the case, and pigs frequently overlap. This property adds instances of labelled occlusions to our dataset. Occlusion is among the hard challenges of tracking \cite{gupta2021icpr, kuipers2020eccvw}, and we hope that training models on such datasets can also introduce a certain degree of invariance to it.
\textit{Action Classification.} The efficacy of \textsc{starformer} for the action classification task is evaluated through comparison with a ResNet-101 inspired model (Ac-ResNet) \cite{he2016deep} trained specifically to classify each pig into two classes: inactive (sitting) or active (standing). Details of this baseline are provided in the supplementary material. We use the area under the receiver operating characteristic curve (AUC-ROC) as the evaluation metric.
\textit{Pig Re-Identification.}
We use Cumulative Matching Characteristics (CMC) scores \cite{gray2007evaluating} to compare re-identification between \textsc{starformer} and DETR. CMC curves are the most popular evaluation metrics for re-identification methods. CMC-$k$, also referred to as Rank-$k$ matching accuracy, represents the probability that a correct match appears among the top $k$ ranked retrieval results. Ranking, in our case, is done by computing embedding distances between pigs in different frames. The CMC top-$k$ accuracy is 1 if a correct match appears among the top $k$ results, and 0 otherwise.
We plot CMC top-$k$ accuracies for discrete inter-frame intervals, i.e., the time interval between the two frames for which re-identification is performed.
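The Rank-$k$ accuracy described above can be sketched as follows (an illustrative sketch; the Euclidean embedding distance and the function name are our assumptions):

```python
import numpy as np

def cmc_topk(query_emb, query_ids, gallery_emb, gallery_ids, k=5):
    """Rank-k matching accuracy: the fraction of queries whose correct
    identity appears among the k gallery entries closest in embedding space."""
    hits = 0
    for q, qid in zip(query_emb, query_ids):
        dists = np.linalg.norm(gallery_emb - q, axis=1)
        topk = np.argsort(dists)[:k]          # indices of the k nearest entries
        hits += int(any(gallery_ids[i] == qid for i in topk))
    return hits / len(query_ids)
```

For the CMC curves, queries and galleries are taken from frames separated by the chosen inter-frame interval.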
\subsection{Results}
\begin{table}[h]
\centering
\scalebox{0.78}{
\begin{tabular}{lcccccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{4}{c}{\textbf{Loss}} & \multicolumn{2}{c}{\textbf{mAP IoU:}} & \multicolumn{2}{c}{\textbf{Match masks}}
& \multicolumn{2}{c}{\textbf{Match embeddings}} \\
\multicolumn{1}{c}{} & \multicolumn{1}{l}{$\mathcal{L}_D$} & \multicolumn{1}{l}{$\mathcal{L}_S$} & \multicolumn{1}{l}{$\mathcal{L}_A$} & \multicolumn{1}{l}{$\mathcal{L}_{STC}$} & \multicolumn{1}{l}{0.5:0.95} & \multicolumn{1}{l}{0.5} & \multicolumn{1}{c}{cMOTSA} & \multicolumn{1}{c}{scMOTSA}
& \multicolumn{1}{c}{cMOTSA} & \multicolumn{1}{c}{scMOTSA}\\ \midrule
Mask R-CNN & - & - & - & - & 0.598 & 0.860 & 0.617 & - & - & - \\
DETR & \checkmark & \checkmark & - & - & 0.600 & 0.866 & 0.621 & 0.534 & 0.604 & 0.522 \\
\midrule
\textsc{starformer} & \checkmark & \checkmark & - & - & 0.663 & \textbf{0.920} & 0.743 & 0.642 & 0.714 & 0.611 \\
\textsc{starformer} & \textbf{\checkmark} & \checkmark & \checkmark & - & 0.666 & \textbf{0.920} & 0.792 & 0.691 & 0.785 & 0.676
\\
\textsc{starformer} & \textbf{\checkmark} & \checkmark & \checkmark & \checkmark & \textbf{0.668} & \textbf{0.920} & \textbf{0.805} & \textbf{0.704} & \textbf{0.793} & \textbf{0.686}\\ \bottomrule
\end{tabular}}
\caption{Performance scores for \textsc{starformer} and other baseline models for the tasks of segmentation and tracking obtained on validation set of pig livestock.}
\label{seg_track_result}
\end{table}
\begin{table}[h]
\scalebox{0.9}{
\begin{tabular}{lcccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{1}{c}{\textbf{Seg.} } & \multicolumn{1}{c}{\textbf{Track(M)}} &
\multicolumn{1}{c}{\textbf{Track(E)}} &
\multicolumn{1}{c}{\textbf{Action}} & \multicolumn{3}{c}{\textbf{Re-Identify (CMC)}} \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{mAP} & \multicolumn{1}{c}{cMOTSA} & \multicolumn{1}{c}{cMOTSA} & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{R1} & \multicolumn{1}{c}{R5} & \multicolumn{1}{c}{R10} \\ \midrule
Ac-ResNet & - & - & - & 0.768 & - & - & - \\
Mask R-CNN & 0.627 & 0.550 & - & - & - & - & - \\
DETR & 0.639 & 0.600 & 0.569 & - & 0.678 & 0.846 & 0.904 \\
\textsc{starformer} & \textbf{0.690} & \textbf{0.778 } & \textbf{0.756} & \textbf{0.985} & \textbf{0.771} & \textbf{0.895} & \textbf{0.939} \\
\bottomrule
\end{tabular}}
\caption{Performance scores obtained for \textsc{starformer} and the baseline models on the 4 STAR tasks. Here mAP is computed for 0.5:0.95. Further, Track(M) and Track(E) correspond to matching by masks and matching by embeddings, respectively, and Action refers to action recognition.}
\label{pigtrace}
\end{table}
We discuss here briefly the results of our experiments and present the important insights.
Table \ref{seg_track_result} presents the results for segmentation and tracking of pigs on a validation set obtained with \textsc{starformer} as well as our baseline models. We observe that \textsc{starformer} consistently outperforms the two baseline models on all evaluation metrics for segmentation and tracking. While the improvements for segmentation are approximately 6\%, absolute improvements of up to 20\% are observed for the task of tracking. We also retrained our network with a Swin-Transformer backbone \cite{liu2021swin} and achieved 0.76 mAP on the \textsc{PigTrace} dataset for the segmentation task, which is indeed a significant improvement in segmentation performance.
Further, for action classification, \textsc{starformer} obtains an AUC score of 0.98, compared to 0.742 for the Ac-ResNet baseline. These results clearly demonstrate that training the model simultaneously on multiple tasks yields accurate performance on the individual tasks themselves.
To further understand the effect of having multiple task heads, we also analyze a few cases where one or more task heads are removed from the original \textsc{starformer} model. These cases are also reported in Table \ref{seg_track_result}. As can be seen, removing the head with the spatio-temporal contrastive loss has no adverse impact on segmentation performance but reduces the tracking performance by approximately 1\%. No change in segmentation is expected, since the contrastive loss primarily concerns the temporal flow of information in our case, while segmentation treats the objects in every frame independently of each other. Similarly, removing the action classification loss significantly affects the tracking performance.
We further studied how well \textsc{starformer} performs for the task of re-identification, and the results are presented in Fig. \ref{fig_reidentify}. We see that both DETR and \textsc{starformer} perform equally well for large values of $k$. However, large values of $k$ are not well suited for practical purposes, and performance at lower values of $k$ is more important. For lower values, we see that the performance of DETR drops significantly for all choices of inter-frame interval. In contrast, \textsc{starformer} is more stable, with only small drops at lower values of $k$. This implies that \textsc{starformer} is expected to be more reliable for long-term tracking.
\textsc{PigTrace.} We further analyzed the performance of \textsc{starformer} on \textsc{Pigtrace} dataset and the results are presented in Table \ref{pigtrace}. \textsc{Starformer} provides significant performance gains for all evaluation metrics across all the four STAR tasks.
\begin{figure}[t]
\centering
\subfigure[DETR Embeddings]{\label{fig:a}\includegraphics[width=60mm]{reid_detr.jpeg}}
\subfigure[\textsc{Starformer} Embeddings]{\label{fig:b}\includegraphics[width=60mm]{reid_star.jpeg}}
\vspace{-1em}
\caption{CMC curves for pig re-identification. Here, the inter-frame interval denotes the number of frames skipped when testing re-identification, and rank $k$ denotes the number of top predictions among which the desired target must fall to be deemed correct.}
\label{fig_reidentify}
\end{figure}
\section{Introduction}
\input{1.introduction}
\section{Related Work}
\input{2.related_work}
\section{\textsc{Pigtrace} Dataset}
\input{3.Dataset}
\section{Proposed Method}
\input{4.proposed_method}
\section{Experiments}
\input{5.experiments}
\section{Conclusions and Future Scope}
\input{6.conclusion}
\section{Introduction}\label{sec:intro}
Knotted structures appear in physical fields in a wide range of areas of theoretical physics; in liquid crystals \cite{km:2016topology, ma:2014knotted, ma:2016global}, optical fields \cite{mark}, Bose-Einstein condensates \cite{bec}, fluid flows \cite{daniel1, daniel2}, the Skyrme-Faddeev model \cite{sutcliffe} and several others.
Mathematical constructions of initially knotted configurations in physical fields make experiments and numerical simulations possible. However, the knot typically changes or disappears as the field evolves with time as prescribed by some differential equation or energy functional. There are some results regarding the existence of stationary solutions of the harmonic oscillator and the hydrogen atom \cite{danielberry, daniel:coulomb}, and the existence of solutions to certain Schrödinger equations that describe any prescribed time evolution of a knot \cite{danielschr}. In particular, this implies the existence of solutions that contain a given knot for all time, i.e., the knot is \textit{stable} or \textit{robust}. However, more general (i.e., regarding more general differential equations) explicit analytic constructions of such solutions are not known.
In the case of electromagnetic fields and Maxwell's equations, the first knotted solution was found by Ra\~nada \cite{ranada}. His field contains closed magnetic and electric field lines that form the Hopf link for all time. Using methods from \cite{bode:2016polynomial} and \cite{weaving} we can algorithmically construct for any given link $L$ a vector field $\mathbf{B}:\mathbb{R}^3\to\mathbb{R}^3$ that has a set of closed field lines in the shape of $L$ and that can be taken as an initial configuration of the magnetic part of an electromagnetic field, say at time $t=0$. However, these links cannot be expected to be stable, since they usually undergo reconnection events as time progresses and the field evolves according to Maxwell's equations, or they disappear altogether. Necessary and sufficient conditions for the stability of knotted field lines are known \cite{kedia2}, but so far only the family of torus links has been constructed and thereby been proven to arise as stable knotted field lines in electromagnetism.
In \cite{kedia} Kedia et al. offer a construction of null electromagnetic fields with stable torus links as closed electric and magnetic field lines using an approach developed by Bateman \cite{bateman}. In this article we prove that their construction can be extended to any link type, implying the following result:
\begin{theorem}
\label{thm:main}
For every $n$-component link $L=L_1\cup L_2\cup\cdots\cup L_n$ and every subset $I\subset\{1,2,\ldots,n\}$ there is an electromagnetic field $\mathbf{F}$ that satisfies Maxwell's equations in free space and that has a set of closed field lines (electric or magnetic) ambient isotopic to $L$ for all time, with closed electric field lines that are ambient isotopic to $\bigcup_{i\in I}L_i$ for all time and closed magnetic field lines that are ambient isotopic to $\bigcup_{i\notin I}L_i$ for all time.
\end{theorem}
This shows not only that every pair of links $L_1$ and $L_2$ can arise as a set of robust closed electric and magnetic field lines, respectively, but also that any linking between the components of $L_1$ and $L_2$ can be realised.
We would like to point out that the subset $I$ of the set of components of $L$ does not need to be non-empty or proper for the theorem to hold. As a special case, we may choose $L$ and $I$ such that $\bigcup_{i\in I}L_i$ and $\bigcup_{i\notin I}L_i$ are ambient isotopic, which shows the following generalisation of the results in \cite{kedia}.
\begin{corollary}
For any link $L$ there is an electromagnetic field $\mathbf{F}$ that satisfies Maxwell's equations in free space and whose electric and magnetic field both have a set of closed field lines ambient isotopic to $L$ for all time.
\end{corollary}
The proof of the theorem relies on the existence of certain holomorphic functions, whose explicit construction eludes us at this moment. As a consequence, Theorem \ref{thm:main} guarantees the existence of the knotted fields, but does not allow us to provide any new examples beyond the torus link family.
The closed field lines at time $t=0$ turn out to be projections into $\mathbb{R}^3$ of real analytic Legendrian links with respect to the standard contact structure in $S^3$. This family of links has been studied by Rudolph in the context of holomorphic functions as totally tangential $\mathbb{C}$-links \cite{rudolphtt, rudolphtt2}.
The remainder of the article is structured as follows. In Section \ref{sec:background} we review some key mathematical concepts, in particular Bateman's construction of null electromagnetic fields and knots and their role in contact geometry. Section \ref{sec:leg} summarises some observations that relate the problem of constructing knotted field lines to a problem on holomorphic extendability of certain functions. The proof of Theorem \ref{thm:main} can be found in Section \ref{sec:proof}, where we use results by Rudolph, Burns and Stout to show that the functions in question can in fact be extended to holomorphic functions. In Section \ref{sec:disc} we offer a brief discussion of our result and some properties of the resulting electromagnetic fields.
\ \\
\textbf{Acknowledgements:} The author is grateful to Mark Dennis, Daniel Peralta-Salas and Vera Vertesi for helpful discussions. The author was supported by JSPS KAKENHI Grant Number JP18F18751 and a JSPS Postdoctoral Fellowship as JSPS International Research Fellow.
\section{Mathematical background}
\label{sec:background}
\subsection{Knots and links}
\label{sec:knots}
For $m\in\mathbb{N}$ we write $S^{2m-1}$ for the $(2m-1)$-sphere of unit radius:
\begin{equation}
S^{2m-1}=\{(z_1,z_2,\ldots,z_m)\in\mathbb{C}^m:\sum_{i=1}^m |z_i|^2=1\}.
\end{equation}
Via stereographic projection we have $S^3\cong \mathbb{R}^3\cup\{\infty\}$. A \textit{link} with $n$ components in a 3-manifold $M$ is (the image of) a smooth embedding of $n$ circles $S^1\sqcup S^1\sqcup\ldots\sqcup S^1$ in $M$. A link with only one component is called a \textit{knot}. The only 3-manifolds that are relevant for this article are $M=S^3$ and $M=\mathbb{R}^3$.
Knots and links are studied up to ambient isotopy or, equivalently, smooth isotopy, that is, two links are considered equivalent if one can be smoothly deformed into the other without any cutting or gluing. This defines an equivalence relation on the set of all links and we refer to the equivalence class of a link $L$ as its \textit{link type} or, in the case of a knot, as its \textit{knot type}. It is very common to be somewhat lax with the distinction between the concept of a link and its link type. When there is no risk of confusion we will for example refer to \textit{a link $L$} even though we really mean the link type, i.e., the equivalence class, represented by $L$.
One special family of links/link types is the family of torus links $T_{p,q}$ and the equivalence classes that they represent. It consists of all links that can be drawn on the surface of an unknotted torus $\mathbb{T}=S^1\times S^1$ in $\mathbb{R}^3$ or $S^3$ and they are characterised by two integers $p$ and $q$, the number of times the link winds around each $S^1$. This definition leaves an ambiguity regarding the sign of $p$ and $q$, i.e., which direction is considered as positive wrapping around the meridian and the longitude. This ambiguity is removed by the standard convention to choose
\begin{equation}
(\rho \rme^{\rmi q\varphi}, \sqrt{1-\rho^2}\rme^{\rmi p\varphi})
\end{equation}
as a parametrisation of the $(p,q)$-torus knot in the unit 3-sphere $S^3\subset\mathbb{C}^2$ with $p,q>0$, where the parameter $\varphi$ ranges from 0 to $2\pi$ and $\rho$ is the solution to $\rho^{|p|}=\sqrt{1-\rho^2}^{|q|}$. It follows that for positive $p$ and $q$ the complex curve $z_1^p-z_2^q=0$ intersects $S^3$ in the $(p,q)$-torus knot $T_{p,q}$ \cite{milnor}.
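Both claims are easy to confirm numerically: solving $\rho^{p}=\sqrt{1-\rho^2}^{\,q}$ by bisection, the parametrised curve lies on the unit 3-sphere and on the complex curve $z_1^p-z_2^q=0$ (a verification sketch, not tied to any particular implementation):

```python
import cmath, math

def rho(p, q, tol=1e-12):
    """Solve rho**p == sqrt(1 - rho**2)**q on (0, 1) by bisection.

    The left side increases and the right side decreases in rho,
    so the root is unique.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid**p > (1 - mid*mid)**(q / 2):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p, q = 2, 3                                  # the trefoil knot T_{2,3}
r = rho(p, q)
for k in range(8):
    phi = 2 * math.pi * k / 8
    z1 = r * cmath.exp(1j * q * phi)
    z2 = math.sqrt(1 - r*r) * cmath.exp(1j * p * phi)
    assert abs(abs(z1)**2 + abs(z2)**2 - 1) < 1e-9   # the curve lies on S^3
    assert abs(z1**p - z2**q) < 1e-9                 # and on z1^p - z2^q = 0
```

Indeed $z_1^p - z_2^q = \big(\rho^p - \sqrt{1-\rho^2}^{\,q}\big)\rme^{\rmi pq\varphi}$, which vanishes identically for the chosen $\rho$.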
Knot theory is now a vast and quickly developing area of mathematics with many connections to biology, chemistry and physics. For a more extensive introduction we refer the interested reader to the standard references \cite{adams, rolfsen}. The role that knots play in physics is discussed in more detail in \cite{atiyah, kauffman}.
\subsection{Bateman's construction}
\label{sec:bateman}
Our exposition of Bateman's work follows the relevant sections in \cite{kedia}. In electromagnetic fields that are null for all time the electric and magnetic field lines evolve like unbreakable elastic in an ideal fluid flow. They are dragged in the direction of the Poynting vector field with the speed of light \cite{irvine, kedia2}. This means that the link types of any closed field lines remain unchanged for all time. In the following we represent a time-dependent electromagnetic field by its Riemann-Silberstein vector $\mathbf{F}=\mathbf{E}+\rmi \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ are time-dependent real vector fields on $\mathbb{R}^3$, representing the electric and magnetic part of $\mathbf{F}$, respectively.
It was shown in \cite{kedia2} that the nullness condition
\begin{equation}
\mathbf{E}\cdot\mathbf{B}=0,\qquad \mathbf{E}\cdot\mathbf{E}-\mathbf{B}\cdot\mathbf{B}=0, \qquad\text{for all }t\in\mathbb{R}
\end{equation}
is equivalent to $\mathbf{F}$ being both null and shear-free at $t=0$, that is,
\begin{equation}
(\mathbf{E}\cdot\mathbf{B})|_{t=0}=0,\qquad (\mathbf{E}\cdot\mathbf{E}-\mathbf{B}\cdot\mathbf{B})|_{t=0}=0,
\end{equation}
and
\begin{align}
((E^i E^j-B^i B^j)\partial_j V_i)|_{t=0}&=0,\nonumber\\
((E^i B^j+E^j B^i)\partial_j V_i)|_{t=0}&=0,
\end{align}
where $\mathbf{V}=\mathbf{E}\times\mathbf{B}/|\mathbf{E}\times\mathbf{B}|$ is the normalised Poynting field and the indices $i,j=1,2,3$ enumerate the components of the fields $\mathbf{E}=(E_1,E_2,E_3)$, $\mathbf{B}=(B_1,B_2,B_3)$ and $\mathbf{V}=(V_1,V_2,V_3)$.
It is worth pointing out that the Poynting vector field $\mathbf{V}$ of a null field satisfies the Euler equation for a pressure-less flow:
\begin{equation}
\partial_t \mathbf{V}+(\mathbf{V}\cdot\nabla)\mathbf{V}=0.
\end{equation}
More analogies between null light fields and pressure-less Euler flows are summarised in \cite{kedia2}.
The transport of field lines by the Poynting field of a null electromagnetic field was made precise in \cite{kedia2}. We write $W=\frac{1}{2}(\mathbf{E}\cdot\mathbf{E}+\mathbf{B}\cdot\mathbf{B})$ for the electromagnetic energy density. The normalised Poynting vector field $\mathbf{V}$ transports (where it is defined) $\mathbf{E}/W$ and $\mathbf{B}/W$. In the following construction $\mathbf{V}$ can be defined everywhere, and since $\partial_t W + \nabla\cdot(W\mathbf{V})=0$, the nodal set of $W$ is also transported by $\mathbf{V}$. This implies that if $L_1$ is a link formed by closed electric field lines at time $t=0$ and $L_2$ is a link formed by closed magnetic field lines of such an electromagnetic field at $t=0$ (and in particular $W\neq 0$ on $L_1$ and $L_2$), then their time evolution according to Maxwell's equations preserves not only the link types of $L_1$ and $L_2$, but also the way in which they are linked, i.e., the link type of $L_1\cup L_2$.
Bateman discovered a construction of null electromagnetic fields \cite{bateman}, which guarantees the stability of links and goes as follows. Take two functions $\alpha, \beta:\mathbb{R}\times\mathbb{R}^3\to\mathbb{C}$ that satisfy
\begin{equation}
\label{eq:1}
\nabla \alpha\times\nabla\beta=\rmi (\partial_t\alpha\nabla\beta-\partial_t\beta\nabla\alpha),
\end{equation}
where $\nabla$ denotes the gradient with respect to the three spatial variables.
Then for any pair of holomorphic functions $f,g:\mathbb{C}^2\to\mathbb{C}$ the field defined by
\begin{equation}
\mathbf{F}=\mathbf{E}+\rmi \mathbf{B}=\nabla f(\alpha,\beta)\times\nabla g(\alpha,\beta)
\end{equation}
satisfies Maxwell's equations and is null for all time. The field $\mathbf{F}$ can be rewritten as
\begin{equation}
\mathbf{F}=h(\alpha, \beta)\nabla\alpha\times\nabla\beta,
\end{equation}
where $h=\partial_{z_1} f\partial_{z_2} g-\partial_{z_2} f\partial_{z_1} g$ and $(z_1,z_2)$ are the coordinates in $\mathbb{C}^2$. Since $f$ and $g$ are arbitrary holomorphic functions, we obtain a null field for any holomorphic function $h:\mathbb{C}^2\to\mathbb{C}$.
Kedia et al. used Bateman's construction to find concrete examples of electromagnetic fields with knotted electric and magnetic field lines \cite{kedia}. In their work both the electric and the magnetic field lines take the shape of torus knots and links. They consider
\begin{align}
\label{eq:stereo3}
\alpha&=\frac{x^2+y^2+z^2-t^2-1+2\rmi z}{x^2+y^2+z^2-(t-\rmi)^2},\nonumber\\
\beta&=\frac{2(x-\rmi y)}{x^2+y^2+z^2-(t-\rmi)^2},
\end{align}
where $x$, $y$ and $z$ are the three spatial coordinates and $t$ represents time. It is a straightforward calculation to check that $\alpha$ and $\beta$ satisfy Equation (\ref{eq:1}). Note that for any value of $t=t_*$, the function $(\alpha,\beta)|_{t=t_*}:\mathbb{R}^3\to\mathbb{C}^2$ gives a diffeomorphism from $\mathbb{R}^3\cup\{\infty\}$ to $S^{3}\subset \mathbb{C}^2$.
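That straightforward calculation can also be delegated to a quick numerical check: approximating the time derivative and spatial gradients of $\alpha$ and $\beta$ by central differences at random points and comparing both sides of Equation (\ref{eq:1}) (a verification sketch):

```python
import numpy as np

def ab(t, x, y, z):
    """The Bateman pair (alpha, beta) of Equation (eq:stereo3)."""
    r2 = x*x + y*y + z*z
    den = r2 - (t - 1j)**2
    return (r2 - t*t - 1 + 2j*z) / den, 2*(x - 1j*y) / den

def grad4(f, t, x, y, z, h=1e-6):
    """Central-difference time derivative and spatial gradient of f."""
    pt = np.array([t, x, y, z], dtype=float)
    d = []
    for i in range(4):
        e = np.zeros(4)
        e[i] = h
        d.append((f(*(pt + e)) - f(*(pt - e))) / (2 * h))
    return d[0], np.array(d[1:])          # (d/dt, spatial gradient)

rng = np.random.default_rng(0)
for _ in range(5):
    t, x, y, z = rng.uniform(-1, 1, 4)
    da, ga = grad4(lambda *w: ab(*w)[0], t, x, y, z)
    db, gb = grad4(lambda *w: ab(*w)[1], t, x, y, z)
    lhs = np.cross(ga, gb)                # grad(alpha) x grad(beta)
    rhs = 1j * (da * gb - db * ga)        # i(dt(alpha) grad(beta) - dt(beta) grad(alpha))
    assert np.allclose(lhs, rhs, atol=1e-6)
```

At the origin with $t=0$ one finds, for instance, $\nabla\alpha\times\nabla\beta=(-4,4\rmi,0)=\rmi(\partial_t\alpha\,\nabla\beta-\partial_t\beta\,\nabla\alpha)$, consistent with the check above.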
The construction of stable knots and links in electromagnetic fields therefore comes down to finding holomorphic functions $f$ and $g$, or equivalently one holomorphic function $h$. Since the image of $(\alpha, \beta)$ is $S^3$, it is not necessary for these functions to be holomorphic (or even defined) on all of $\mathbb{C}^2$. It suffices to find functions that are holomorphic on an open neighbourhood of $S^3$ in $\mathbb{C}^2$.
Kedia et al. find that for $f(z_1,z_2)=z_1^p$ and $g(z_1,z_2)=z_2^q$ the resulting electric and magnetic fields both contain field lines that form the $(p,q)$-torus link $T_{p,q}$. Hence there is a construction of flow lines in the shape of torus links that are stable for all time.
\begin{remark}
\label{remark}
It was wrongly stated in \cite{kedia} and \cite{quasi} that for $t=0$ the map $(\alpha,\beta)$ in Equation (\ref{eq:stereo3}) is the inverse of the standard stereographic projection. In fact, the inverse of the standard stereographic projection is given by $(u,v):\mathbb{R}^3\to S^3$,
\begin{align}
\label{eq:stereo4}
u&=\frac{x^2+y^2+z^2-1+2\rmi z}{x^2+y^2+z^2+1},\nonumber\\
v&=\frac{2(x+\rmi y)}{x^2+y^2+z^2+1},
\end{align}
so that $(\alpha,\beta)|_{t=0}$ is actually the inverse of the standard stereographic projection followed by a mirror reflection that sends $\text{Im}(z_2)$ to $-\text{Im}(z_2)$ or equivalently it is a mirror reflection in $\mathbb{R}^3$ along the $y=0$-plane followed by the inverse of the standard stereographic projection.
Kedia et al.'s choice of $f$ and $g$ was (in their own words) `guided' by the hypersurface $z_1^p\pm z_2^q=0$. Complex hypersurfaces like this and their singularities have been extensively studied by Milnor and others \cite{brauner, milnor} and it is well-known that the hypersurface intersects $S^3$ in the $(p,q)$-torus knot $T_{p,q}$. Even though this made the choice of $f$ and $g$ somewhat intuitive (at least for Kedia et al.), there seems to be no obvious relation between the hypersurface and the electromagnetic field that would enable us to generalise their approach. Since their fields contain the links $T_{p,q}$ in $\mathbb{R}^3$, the corresponding curves on $S^3$ are actually the mirror image $T_{p,-q}$. Therefore, it seems more plausible that (if there is a connection to complex hypersurfaces at all) the relevant complex curve is $z_1^pz_2^q-1=0$, which intersects a 3-sphere of an appropriate radius in $T_{p,-q}$ \cite{rudolphtt}. However, in contrast to Milnor's hypersurfaces, this intersection is \textit{totally tangential}, i.e., at every point of intersection the tangent plane of the hypersurface lies in the tangent space of the 3-sphere. This is an interesting property that plays an important role in the generalisation of the construction to arbitrarily complex link types in the following sections.
\end{remark}
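The relation stated in the remark is also easy to confirm numerically: at $t=0$, $(\alpha,\beta)$ agrees with $(u,v)$ after first reflecting along the $y=0$ plane, and its image lies on $S^3$ (a verification sketch):

```python
import numpy as np

def alpha_beta_t0(x, y, z):
    """(alpha, beta) of Equation (eq:stereo3) evaluated at t = 0."""
    r2 = x*x + y*y + z*z
    return (r2 - 1 + 2j*z) / (r2 + 1), 2*(x - 1j*y) / (r2 + 1)

def inv_stereo(x, y, z):
    """Inverse of the standard stereographic projection, Equation (eq:stereo4)."""
    r2 = x*x + y*y + z*z
    return (r2 - 1 + 2j*z) / (r2 + 1), 2*(x + 1j*y) / (r2 + 1)

rng = np.random.default_rng(1)
for _ in range(5):
    x, y, z = rng.normal(size=3)
    a, b = alpha_beta_t0(x, y, z)
    u, v = inv_stereo(x, -y, z)              # reflect along the y = 0 plane first
    assert abs(a - u) < 1e-12 and abs(b - v) < 1e-12
    assert abs(abs(a)**2 + abs(b)**2 - 1) < 1e-12   # the image lies on S^3
```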
\subsection{Contact structures and Legendrian links}
\label{sec:contact}
A \textit{contact structure} on a 3-manifold $M$ is a smooth, completely non-integrable plane distribution $\xi\subset TM$ in the tangent bundle of $M$. It can be given as the kernel of a differential 1-form, a \textit{contact form} $\alpha$, for which the non-integrability condition reads
\begin{equation}
\alpha\wedge\rmd\alpha\neq 0.
\end{equation}
It is a convention to denote contact forms by $\alpha$. This should not be confused with the first component of the map $(\alpha,\beta)$ in Equation (\ref{eq:stereo3}). Within this subsection $\alpha$ refers to a contact form, in all other sections it refers to Equation (\ref{eq:stereo3}). The choice of $\alpha$ for a given $\xi$ is not unique, but the non-integrability property is independent of this choice.
In other words, for every point $p\in M$ we have a plane (a 2-dimensional linear subspace) $\xi_p$ in the tangent space $T_p(M)$ given by $\xi_p=\text{ker}_p\alpha$, which is the kernel of $\alpha$ when $\alpha$ is regarded as a map $T_pM\to\mathbb{R}$. The non-integrability condition ensures that there is a certain twisting of these planes throughout $M$. We call the pair of manifold $M$ and contact structure $\xi$ a \textit{contact manifold} $(M,\xi)$.
The \textit{standard contact structure} $\xi_0$ on $S^3$ is given by the contact form
\begin{equation}
\alpha_0=\sum_{j=1}^2 (x_j\rmd y_j-y_j\rmd x_j),
\end{equation}
where we write the complex coordinates $(z_1,z_2)$ of $\mathbb{C}^2$ in terms of their real and imaginary parts: $z_j=x_j+\rmi y_j$.
There are two interesting geometric interpretations of the standard contact structure $\xi_0$. Firstly, the planes are precisely the normals to the fibers of the Hopf fibration $S^3\to S^2$. Secondly, the planes are precisely the complex tangent lines to $S^3$.
A link $L$ in a contact manifold $(M,\xi)$ is called a \textit{Legendrian link} with respect to the contact structure $\xi$, if it is everywhere tangent to the contact planes, i.e., $T_pL\subset \xi_p$. It is known that every link type in $S^3$ has representatives that are Legendrian. In other words, for every link $L$ in $S^3$ there is a Legendrian link with respect to the standard contact structure on $S^3$ that is ambient isotopic to $L$.
More details on contact geometry and the connection to knot theory can be found in \cite{etnyre, geiges}.
\section{Legendrian field lines}
\label{sec:leg}
In this section we would like to point out some observations on Bateman's construction. Bateman's construction turns the problem of constructing null fields with knotted field lines into a problem of finding appropriate holomorphic functions $h:\mathbb{C}^2\to\mathbb{C}$. Our observations turn this into the question whether for a given Legendrian link $L$ with respect to the standard contact structure on $S^3$ a certain function defined on $L$ admits a holomorphic extension.
\begin{lemma}
\label{revlemma}
Let $h:\mathbb{C}^2\to\mathbb{C}$ be a function that is holomorphic on an open neighbourhood of $S^3$ and let $\mathbf{F}=h(\alpha,\beta)\nabla\alpha\times\nabla\beta$ be the corresponding electromagnetic field with $(\alpha,\beta)$ as in Equation (\ref{eq:stereo3}). Suppose $L$ is a set of closed magnetic field lines or a set of closed electric field lines of $\mathbf{F}$ at time $t=0$. Then $(\alpha,\beta)|_{t=0}(L)$ is a Legendrian link with respect to the standard contact structure on $S^3$.
\end{lemma}
\textit{Proof}: It is known that all fields that are constructed with the same choice of $(\alpha,\beta)$ have the same Poynting field, independent of $h$. For $(\alpha,\beta)$ as in Equation (\ref{eq:stereo3}) with $t=0$ its pushforward by $(\alpha,\beta)|_{t=0}$ is tangent to the fibers of the Hopf fibration. By the definition of the Poynting field, the electric and magnetic field are orthogonal to the Poynting field and it is a simple calculation that their pushforwards by $(\alpha,\beta)|_{t=0}$ are orthogonal as well. Therefore, they must be normal to the fibers of the Hopf fibration. Hence the pushforward of all electric and magnetic field lines by $(\alpha,\beta)$ are tangent to the standard contact structure on $S^3$. In particular, any closed electric or magnetic field line is a Legendrian link with respect to the standard contact structure. \qed
A more general statement of Lemma \ref{revlemma} is proven in \cite{quasi}. It turns out that $(\alpha,\beta)$ define a contact structure for each value of $t$, where time evolution is given by a 1-parameter family of contactomorphisms, and all sets of closed flow lines at a fixed moment in time are (the images in $\mathbb{R}^3$ of) Legendrian links with respect to the corresponding contact structure.
Lemma \ref{revlemma} tells us that (the projection of) closed field lines form Legendrian links. We would like to go in the other direction, starting with a Legendrian link and constructing a corresponding electromagnetic field for it.
We define the map $\varphi=(\alpha,\beta)|_{t=0}:\mathbb{R}^3\cup\{\infty\}\to S^3$. The particular choice of $(\alpha,\beta)$ in Equation (\ref{eq:stereo3}) not only determines a contact structure, but also provides us with an explicit orthonormal basis of the plane $\xi_p$ in $T_pS^3$ for all $p\in S^3\backslash\{(1,0)\}$, given by
\begin{equation}
\xi_p=\text{span}\{v_1,v_2\}
\end{equation}
where $v_1$ and $v_2$ are given by
\begin{align}
\label{eq:v1}
v_1&=\varphi_*\left(\frac{(x^2+y^2+z^2+1)^3}{8} \text{Re}\left(\nabla \alpha\bigr\rvert_{t=0} \times \nabla \beta\bigr\rvert_{t=0}\right)\right)\nonumber\\
&=-x_2 \frac{\partial}{\partial x_1}+y_2\frac{\partial}{\partial y_1}+x_1\frac{\partial}{\partial x_2}-y_1\frac{\partial}{\partial y_2},\nonumber\\
v_2&=\varphi_*\left(\frac{(x^2+y^2+z^2+1)^3}{8}\text{Im}\left(\nabla \alpha\bigr\rvert_{t=0} \times \nabla \beta\bigr\rvert_{t=0}\right)\right)\nonumber\\
&=-y_2 \frac{\partial}{\partial x_1}-x_2\frac{\partial}{\partial y_1}+y_1\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial y_2}.
\end{align}
They are pushforwards of multiples of $\text{Re}(\nabla \alpha \times \nabla \beta)|_{t=0}$ and $\text{Im}(\nabla \alpha \times \nabla \beta)|_{t=0}$ by $\varphi$. It is easy to see from these expressions that $v_1$ and $v_2$ are orthonormal and span the contact plane $\xi_p$ at each point $p\in S^3\backslash\{(1,0)\}$. The point $p=(1,0)$ is excluded, since $(1,0)=\varphi(\infty)$.
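These identities are straightforward to verify symbolically. The following sketch (ours, using the sympy library) writes $v_1$, $v_2$, and the Hopf-fibre direction $R$ in the real coordinates $(x_1, y_1, x_2, y_2)$ of $\mathbb{C}^2$, and checks that $v_1$ and $v_2$ are orthonormal on $S^3$, tangent to $S^3$, and normal to the fibres of the Hopf fibration:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)
p = sp.Matrix([x1, y1, x2, y2])         # a point of R^4; on S^3 we have p . p = 1
R = sp.Matrix([-y1, x1, -y2, x2])       # tangent field to the Hopf fibres at p
v1 = sp.Matrix([-x2, y2, x1, -y1])      # components read off Equation (eq:v1)
v2 = sp.Matrix([-y2, -x2, y1, x1])

r2 = x1**2 + y1**2 + x2**2 + y2**2      # equals 1 on S^3

assert sp.expand(v1.dot(v2)) == 0       # v1 and v2 are orthogonal
assert sp.expand(v1.dot(p)) == 0        # both are tangent to S^3 ...
assert sp.expand(v2.dot(p)) == 0
assert sp.expand(v1.dot(R)) == 0        # ... and normal to the Hopf fibres
assert sp.expand(v2.dot(R)) == 0
assert sp.expand(v1.dot(v1) - r2) == 0  # both have unit length on S^3
assert sp.expand(v2.dot(v2) - r2) == 0
```

Together these computations confirm that $\{v_1, v_2\}$ is an orthonormal basis of the contact plane $\xi_p$ away from $(1,0)$.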
A magnetic field $\mathbf{B}$ constructed using Bateman's method satisfies
\begin{align}
\label{eq:Bv1v2}
\mathbf{B}&=\text{Im}(\mathbf{F})\nonumber\\
&=\text{Re}(h(\alpha,\beta))\text{Im}(\nabla \alpha \times \nabla \beta)+\text{Im}(h(\alpha,\beta))\text{Re}(\nabla \alpha \times \nabla \beta),
\end{align}
while the electric field $\mathbf{E}$ satisfies
\begin{align}
\label{eq:Ev1v2}
\mathbf{E}&=\text{Re}(\mathbf{F})\nonumber\\
&=\text{Re}(h(\alpha,\beta))\text{Re}(\nabla \alpha \times \nabla \beta)-\text{Im}(h(\alpha,\beta))\text{Im}(\nabla \alpha \times \nabla \beta).
\end{align}
In particular, both fields are at every point a linear combination of $\text{Re}(\nabla \alpha \times \nabla \beta)$ and $\text{Im}(\nabla \alpha \times \nabla \beta)$ and their pushforwards by $\varphi$ are linear combinations of $v_1$ and $v_2$. The fact that $v_1$ and $v_2$ are a basis for the contact plane $\xi_p$ for all $p\in S^3\backslash\{(1,0)\}$ implies that Equations (\ref{eq:Bv1v2}) and (\ref{eq:Ev1v2}) provide an alternative proof of Lemma \ref{revlemma}. Hence every closed field line must be a Legendrian knot and the holomorphic function $h$ describes the coordinates of the field with respect to this preferred basis.
Suppose now that we have an $n$-component Legendrian link $L=L_1\cup L_2\cup\ldots\cup L_n$ with respect to the standard contact structure on $S^3$, with $(1,0)\not\in L$, a subset $I\subset\{1,2,\ldots,n\}$, and a non-zero section $X$ of its tangent bundle $TL\subset \xi_0\subset TS^3$. We can define a complex-valued function $H:L\to\mathbb{C}$ given by
\begin{align}
\label{eq:H}
\text{Re}(H(z_1,z_2))&=X_{(z_1,z_2)}\cdot v_1,&&\text{for all }(z_1,z_2)\in L_i, i\in I\nonumber\\
\text{Im}(H(z_1,z_2))&=-X_{(z_1,z_2)} \cdot v_2,&&\text{for all }(z_1,z_2)\in L_i, i\in I,\nonumber\\
\text{Re}(H(z_1,z_2))&=X_{(z_1,z_2)}\cdot v_2,&&\text{for all }(z_1,z_2)\in L_i, i\notin I\nonumber\\
\text{Im}(H(z_1,z_2))&=X_{(z_1,z_2)} \cdot v_1,&&\text{for all }(z_1,z_2)\in L_i, i\notin I,
\end{align}
where $\cdot$ denotes the standard scalar product in $\mathbb{R}^4=T_{(z_1,z_2)}\mathbb{C}^2$.
\begin{proposition}
If there is an open neighbourhood $U$ of $S^3\subset\mathbb{C}^2$ and a holomorphic function $h:U\to\mathbb{C}$ with $h|_L=H$, then the corresponding electromagnetic field $\mathbf{F}=h(\alpha, \beta)\nabla\alpha\times\nabla\beta$ at $t=0$ has closed field lines ambient isotopic to (the mirror image of) $L$, with closed electric field lines in the shape of (the mirror image of) $\bigcup_{i\in I}L_i$ and magnetic field lines in the shape of (the mirror image of) $\bigcup_{i\notin I}L_i$.
\end{proposition}
\textit{Proof}: For every point $q\in \varphi^{-1}(\bigcup_{i\notin I}L_i)$ we have
\begin{align}
\label{eq:tang}
\mathbf{B}|_{t=0}(q)=&\left(\text{Re}(h(\alpha,\beta))\text{Im}(\nabla\alpha\times\nabla\beta)\right.\nonumber\\
&\left.+\text{Im}(h(\alpha,\beta))\text{Re}(\nabla\alpha\times\nabla\beta)\right)\Bigr\rvert_{t=0,(x,y,z)=q}\nonumber\\
=&\frac{8}{(|q|^2+1)^3}\left(\text{Re}(H(\alpha,\beta))(\varphi^{-1})_*(v_2)\right.\nonumber\\
&\left.+\text{Im}(H(\alpha,\beta))(\varphi^{-1})_*(v_1)\right)\Bigr\rvert_{t=0,(x,y,z)=q}\nonumber\\
=&\frac{8}{(|q|^2+1)^3} (\varphi^{-1})_*(X_{(\alpha,\beta)})\Bigr\rvert_{t=0,(x,y,z)=q},
\end{align}
where $|\cdot|$ denotes the Euclidean norm in $\mathbb{R}^3$. The second equality follows from $h|_L=H$ and Equation (\ref{eq:v1}). The last equality follows from the orthonormality of the basis $\{v_1,v_2\}$, the definition of $H$ and the fact that $L$ is Legendrian. Equation (\ref{eq:tang}) states that at $t=0$ the field $\mathbf{B}$ is everywhere tangent to $\varphi^{-1}(\bigcup_{i\notin I}L_i)$. In particular, at $t=0$ the field $\mathbf{B}$ has a set of closed flow lines that is ambient isotopic to the mirror image of $\bigcup_{i\notin I}L_i$ (cf. Remark \ref{remark}).
Similarly, for every $q\in\varphi^{-1}(\bigcup_{i\in I}L_i)$ we have
\begin{align}
\label{eq:tang2}
\mathbf{E}|_{t=0}(q)=&\left(\text{Re}(h(\alpha,\beta))\text{Re}(\nabla\alpha\times\nabla\beta)\right.\nonumber\\
&\left.-\text{Im}(h(\alpha,\beta))\text{Im}(\nabla\alpha\times\nabla\beta)\right)\Bigr\rvert_{t=0,(x,y,z)=q}\nonumber\\
=&\frac{8}{(|q|^2+1)^3}\left(\text{Re}(H(\alpha,\beta))(\varphi^{-1})_*(v_1)\right.\nonumber\\
&\left.-\text{Im}(H(\alpha,\beta))(\varphi^{-1})_*(v_2)\right)\Bigr\rvert_{t=0,(x,y,z)=q}\nonumber\\
=&\frac{8}{(|q|^2+1)^3} (\varphi^{-1})_*(X_{(\alpha,\beta)})\Bigr\rvert_{t=0,(x,y,z)=q}.
\end{align}
The same arguments as above imply that at $t=0$ the field $\mathbf{E}$ is everywhere tangent to $\varphi^{-1}(\bigcup_{i\in I}L_i)$, so that at $t=0$ the field $\mathbf{E}$ has a set of closed flow lines that is ambient isotopic to $\bigcup_{i\in I}L_i$. \qed
Since the constructed fields are null for all time, the topology of the electric and magnetic field lines does not change, and the fields contain (the mirror image of) $L$ for all time. We hence have the following corollary.
\begin{corollary}
Let $L=L_1\cup L_2\cup\ldots\cup L_n$ be an $n$-component Legendrian link with respect to the standard contact structure on $S^3$, with $I\subset \{1,2,\ldots,n\}$ and a non-vanishing section of its tangent bundle such that the corresponding function $H:L\to \mathbb{C}$ admits a holomorphic extension $h:U\to\mathbb{C}$ to an open neighbourhood $U$ of $S^3$. Then $\mathbf{F}=h(\alpha,\beta)\nabla\alpha\times\nabla\beta$ has a set of closed field lines that is ambient isotopic to the mirror image of $L$ for all time, with a set of closed electric field lines that is ambient isotopic to the mirror image of $\bigcup_{i\in I}L_i$ for all time and a set of closed magnetic field lines that is ambient isotopic to the mirror image of $\bigcup_{i\notin I}L_i$ for all time.
\end{corollary}
Therefore, what we have to show in order to prove Theorem \ref{thm:main} is that every link type (with every choice of a subset of its components) has a Legendrian representative as in the corollary.
\section{The proof of the theorem}
\label{sec:proof}
We have seen in the previous section that Theorem \ref{thm:main} can be proven by showing that every link type has a Legendrian representative for which a certain function has a holomorphic extension. Questions like this, regarding the existence of holomorphic extensions of functions defined on a subset of $\mathbb{C}^m$, are important in the study of complex analysis in $m$ variables and are in general much more challenging when $m>1$. In this section, we first prove that every link type has a Legendrian representative with certain properties regarding real analyticity. We then review a result from complex analysis by Burns and Stout that guarantees that for this class of real analytic submanifolds of $\mathbb{C}^2$ contained in $S^3$ the desired holomorphic extension exists, thereby proving Theorem \ref{thm:main}.
\begin{lemma}
\label{lem}
Every link type has a real analytic Legendrian representative $L$ that admits a non-zero section of its tangent bundle, such that for any given subset $I$ of its set of components the corresponding function $H:L\to\mathbb{C}$ as in Equation (\ref{eq:H}) is real analytic.
\end{lemma}
\textit{Proof}: The lemma is essentially proved in \cite{rudolphtt2}, where it is shown that every link has a Legendrian representative $L$ (with respect to the standard contact structure on $S^3$) that is the image of a smooth embedding, given by a Laurent polynomial $\eta_i=(\eta_{i,1},\eta_{i,2}):S^1\to S^3\subset\mathbb{C}^2$ in $\rme^{\rmi \chi}$ for each component $L_i$. The set of functions $\eta_i$ in \cite{rudolphtt2} is obtained by approximating some smooth embedding, whose image is a Legendrian link $L'$ of the same link type as $L$. It is a basic exercise in contact topology to show that we can assume that $(1,0)\not\in L'$ \cite{etnyre}, and hence also $(1,0)\not\in L$.
Since each $\eta_i$ is a real analytic embedding, the inverse $\eta_i^{-1}:L\to S^1$ is real analytic in $x_1$, $y_1$, $x_2$ and $y_2$ for all $i=1,2,\ldots,n$. Likewise $\partial_\chi\eta_i:S^1\to TL_i$ is real analytic in $\chi$ and non-vanishing, since $\eta_i$ is an embedding. It follows that the composition $X\defeq(\partial_\chi \eta_i)\circ\eta_i^{-1}:L_i\to TL_i$ is a real analytic non-vanishing section of the tangent bundle of $L_i$ for all $i=1,2,\ldots,n$. Equations (\ref{eq:H}) and (\ref{eq:v1}) then directly imply that $H$ is also real analytic, no matter which subset $I$ of the components of $L$ is chosen. \qed
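As a concrete illustration (our example, independent of the representatives constructed in \cite{rudolphtt2}), the great circle $\chi \mapsto (\cos\chi, \rmi\sin\chi)$ is a real analytic Legendrian unknot with a non-vanishing tangent section. The following sympy sketch verifies the defining conditions in the real coordinates $(x_1, y_1, x_2, y_2)$:

```python
import sympy as sp

chi = sp.symbols('chi', real=True)
# The great circle (z1, z2) = (cos chi, i sin chi), written in the real
# coordinates (x1, y1, x2, y2) of C^2:
g = sp.Matrix([sp.cos(chi), 0, 0, sp.sin(chi)])
X = g.diff(chi)                            # the candidate section d(eta)/d(chi)
R = sp.Matrix([-g[1], g[0], -g[3], g[2]])  # Hopf-fibre direction at g(chi)

assert sp.simplify(g.dot(g)) == 1          # the curve lies on S^3
assert sp.simplify(X.dot(R)) == 0          # its tangent lies in the contact plane
assert sp.simplify(X.dot(X)) == 1          # the section X is non-vanishing
```

The second assertion is exactly the Legendrian condition, and the third shows that the tangent section never vanishes, as required by Lemma \ref{lem}.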
It was shown in \cite{rudolphtt} that a link $L$ in $S^3$ is a real analytic Legendrian link if and only if it is a \textit{totally tangential} $\mathbb{C}$\textit{-link}, i.e., $L$ arises as the intersection of a complex plane curve and $S^3$ that is tangential at every point. Recall from Remark \ref{remark} that the torus links constructed in \cite{kedia} arise in this way, where the complex plane curve is $z_1^pz_2^q-1=0$ and the radius of the 3-sphere is chosen appropriately. Links that arise as transverse intersections of complex plane curves and the 3-sphere, so-called \textit{transverse $\mathbb{C}$-links} or, equivalently, \textit{quasipositive links}, have been studied as stable vortex knots in null electromagnetic fields in \cite{quasi}.
Following Burns and Stout \cite{burns} we call a real analytic submanifold $\Sigma$ of $\mathbb{C}^2$ that is contained in $S^3$ an \textit{analytic interpolation manifold (relative to the 4-ball $B$)} if every real analytic function $\Sigma\to\mathbb{C}$ is the restriction to $\Sigma$ of a function that is holomorphic on some neighbourhood of $B$. The neighbourhood depends on the function in question.
\begin{theorem}[Burns-Stout \cite{burns}]
\label{thm:burns}
$\Sigma$ is an analytic interpolation manifold if and only if $T_p(\Sigma)\subset T_p^{\mathbb{C}}(S^3)$ for every $p\in\Sigma$, where $T_p^{\mathbb{C}}(S^3)$ denotes the maximal complex subspace of $T_p(S^3)$.
\end{theorem}
The result stated in \cite{burns} holds in fact for more general ambient spaces and their boundaries, namely strictly pseudo-convex domains with smooth boundaries. The open 4-ball $B$ with boundary $\partial B=S^3$ is easily seen to be an example of such a domain.
\textit{Proof of Theorem \ref{thm:main}}: By Lemma \ref{lem} every link type can be represented by a real analytic Legendrian link $L$. It is thus a real analytic submanifold of $\mathbb{C}^2$ that is contained in $S^3$. The condition $T_pL\subset T_p^{\mathbb{C}}(S^3)$ is equivalent to $L$ being a Legendrian link with respect to the standard contact structure on $S^3$. Hence $L$ is an analytic interpolation manifold. Since Lemma \ref{lem} also implies that for every choice of $I$ the function $H:L\to \mathbb{C}$ can be taken to be real analytic, Theorem \ref{thm:burns} implies that $H$ is the restriction of a holomorphic function $h:U\to\mathbb{C}$, where $U$ is some neighbourhood of $S^3$.
The discussion in Section \ref{sec:leg} shows that the electromagnetic field
\begin{equation}
\mathbf{F}=h(\alpha,\beta)\nabla\alpha\times\nabla\beta
\end{equation}
has a set of closed electric field lines in the shape of the mirror image of $\bigcup_{i\in I}L_i$ and a set of closed magnetic field lines in the shape of the mirror image of $\bigcup_{i\notin I}L_i$ at time $t=0$. Since the constructed field is null for all time, $\mathbf{F}$ contains these links for all time. As every link is the mirror image of its own mirror image, this concludes the proof of Theorem \ref{thm:main}. \qed
\section{Discussion}
\label{sec:disc}
We showed that every link type arises as a set of stable electric and magnetic field lines in a null electromagnetic field. Since these fields are obtained via Bateman's construction, they share some properties with the torus link fields in \cite{kedia}. They are for example shear-free and have finite energy.
However, since the proof of Theorem \ref{thm:main} only asserts the existence of such fields, via the existence of a holomorphic function $h$, other desirable properties of the fields in \cite{kedia} are more difficult to investigate. The electric and magnetic field lines in \cite{kedia} lie on the level sets of $\text{Im}(\alpha^p\beta^q)$ and $\text{Re}(\alpha^p\beta^q)$. At this moment, it is not clear (and doubtful) whether the fields in Theorem \ref{thm:main} have a similar integrability property. It is, however, very interesting that the relevant function $z_1^pz_2^q$, whose real/imaginary part is constant on integral curves of the (pushforward of the) magnetic/electric field, is (up to an added constant) exactly the polynomial defining the complex plane curve whose totally tangential intersection with $S^3$ gives the $(p,-q)$-torus link. In light of this observation, we might conjecture the following about the fields in Theorem \ref{thm:main}, which contain $L$: if the electric/magnetic field lines really lie on the level sets of a pair of real functions, then the real and imaginary parts of $F$ would be natural candidates for such functions, where $F=0$ intersects $S^3$ totally tangentially in the mirror image of $L$. So far $z_1^pz_2^q-1=0$ is the only explicit example of such a function (resulting in the $(p,-q)$-torus link) that the author is aware of, even though such a function is known to exist for any link. It is this lack of explicit examples and concrete constructions that makes it difficult to investigate this conjecture and other properties of the fields from Theorem \ref{thm:main}.
Kedia et al. also obtained concrete expressions for the helicity of their fields \cite{kedia}. Again, the lack of concrete examples makes it difficult to obtain analogous results.
Since the fields in Theorem \ref{thm:main} are obtained via Bateman's construction, all their Poynting fields at $t=0$ are tangent to the fibers of the Hopf fibration. It is still an open problem to modify the construction, potentially via a different choice of $\alpha$ and $\beta$, to obtain knotted fields whose underlying Poynting fields give more general Seifert fibrations.
\section{Introduction}
\paragraph{Timed logics} In the context of real-time systems verification, it is natural and desirable to add \emph{timing constraints} to
\emph{Linear Temporal Logic} (\textup{\textmd{\textsf{LTL}}}{})~\cite{Pnueli1977}
to enable reasoning about timing behaviours of such systems.
For instance, one may write $\phi_1 \mathbin{\mathbf{U}}_I \phi_2$ to assert
that $\phi_1$ holds until a `witness' point where $\phi_2$ holds,
and the time difference between now and that point lies within the
\emph{constraining interval} $I$.
The resulting logic, \emph{Metric Temporal Logic} (\textup{\textmd{\textsf{MTL}}}{})~\cite{Koy90},
can be seen as a fragment of \emph{Monadic First-Order Logic of Order and Metric} (\textup{\textmd{\textsf{FO[$<, +1$]}}}{})~\cite{AluHen93}, the timed counterpart of the classical \emph{Monadic First-Order Logic of Order} (\textup{\textmd{\textsf{FO[$<$]}}}{}).
There are, nonetheless, some loose ends in this analogy. For instance, while \textup{\textmd{\textsf{LTL}}}{} is as expressive as \textup{\textmd{\textsf{FO[$<$]}}}{}~\cite{Kamp1968, Gabbay1980}, it was noted early on
that certain `non-local' timing properties in \textup{\textmd{\textsf{FO[$<, +1$]}}}{}, albeit very simple,
cannot be expressed in timed temporal logics like \textup{\textmd{\textsf{MTL}}}{}~\cite{AluHen94}.
As a concrete example, the property `every $p$-event is followed by a $q$-event and, later, an $r$-event within the next 10 time units', written as the \textup{\textmd{\textsf{FO[$<, +1$]}}}{} formula
\begin{equation}\label{eq:pqr}
\forall x \, \Big(p(x) \Rightarrow \exists y \, \big(q(y) \land \exists z \, (r(z) \land x \leq y \leq z \leq x + 10)\big)\Big)
\end{equation}
is not expressible in \textup{\textmd{\textsf{MTL}}}{}---indeed, no `finitary' extension of \textup{\textmd{\textsf{MTL}}}{} can be \emph{expressively complete}
for \textup{\textmd{\textsf{FO[$<, +1$]}}}{}~\cite{Rabinovich2007}.\footnote{(\ref{eq:pqr}) can, however, be expressed in \textup{\textmd{\textsf{MTL}}}{} if the \emph{continuous} semantics of the logic is adopted
or past modalities are allowed; see~\cite{Bouyer2010} for details.}
A more serious practical concern
is that the satisfiability problem for \textup{\textmd{\textsf{MTL}}}{} is undecidable~\cite{AluHen93, Ouaknine2006}. For this reason, research efforts have been
focused on fragments of \textup{\textmd{\textsf{MTL}}}{} with decidable satisfiability, most notably
\emph{Metric Interval Temporal Logic} (\textup{\textmd{\textsf{MITL}}}{}), the fragment of \textup{\textmd{\textsf{MTL}}}{} in which
`punctual' constraining intervals are not allowed~\cite{AluFed96}.
In particular, \textup{\textmd{\textsf{MITL}}}{} formulae can be effectively translated into \emph{timed automata} (\textup{\textmd{\textsf{TA}}}{s})~\cite{AluDil94},
giving practical $\mathrm{EXPSPACE}$ decision procedures for its \emph{satisfiability} and \emph{model-checking} problems~\cite{BriEst14,
BriGee17, BriGee17b}.
\paragraph{Automata modalities}
It is well known that properties that are necessarily \emph{second order} (e.g.,~`$p$ holds at all even positions') cannot be expressed in \textup{\textmd{\textsf{LTL}}}{} or \textup{\textmd{\textsf{MITL}}}{}.
Fortunately, it is possible to add \emph{automata modalities} into
\textup{\textmd{\textsf{LTL}}}{}
at no additional computational cost~\cite{Wolper1994, Sistla1985}.
In timed settings, the logic obtained from \textup{\textmd{\textsf{MITL}}}{} by adding
\emph{time-constrained} automata modalities defined by non-deterministic finite automata (\textup{\textmd{\textsf{NFA}}}{s})
is called \emph{Extended Metric Interval Temporal Logic} (\textup{\textmd{\textsf{EMITL}}}{})~\cite{Wilke1994}.
From a theoretical point of view, \textup{\textmd{\textsf{EMITL}}}{} is a \emph{fully decidable}
formalism (i.e.~constructively closed under all Boolean operations and with decidable satisfiability~\cite{HenRas98})
whose class of timed languages strictly contains
that of \textup{\textmd{\textsf{MITL}}}{} and B\"uchi automata.\footnote{A very recent paper of Krishna, Madnani, and Pandya~\cite{KriMad18} showed
that this class admits some alternative characterisations (namely, a syntactic fragment of \textup{\textmd{\textsf{OCATA}}}{s} and
a timed monadic second-order logic).}
In practice, it can be argued that automata modalities are natural, easy-to-use
extensions of the usual \textup{\textmd{\textsf{MITL}}}{} modalities. %
They also allow properties like (\ref{eq:pqr}), which often
emerge in application domains like healthcare and automotive engineering,
to be written as specifications.
\begin{example}[\cite{Abbas17}]\label{ex:icd}
Discrimination algorithms are implemented in implantable cardioverter defibrillators (ICDs)
to detect potentially dangerous heartbeat patterns. As a simple example, one may want to check
whether \emph{the number of heartbeats in one minute is between $120$ and $150$}. This can be expressed
as the \textup{\textmd{\textsf{CTMITL}}}{}~\cite{KriMad16} formula $\mathop{\mathbf{C}}_{[0, 59]}^{\geq 120} p \wedge \mathop{\mathbf{C}}_{[0, 59]}^{\leq 150} p$
where $p$ denotes a peak in the cardiac signal. The \emph{counting modalities}
$\mathop{\mathbf{C}}^{\sim k}_{I}$ (where $0 \in I$, which is the case here),
as well as~(\ref{eq:pqr}), be expressed straightforwardly in terms of automata.
\end{example}
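For instance, when $0 \in I$, the modality $\mathop{\mathbf{C}}^{\geq k}_{I} p$ can be expressed as $\A_I(p, \neg p)$ for the deterministic automaton with states $0, \dots, k$ (final state $k$) that counts $p$-events. Since the automaton is deterministic, its acceptance check is immediate to simulate; the following sketch (names ours, for illustration only) encodes the letter for `$p$ holds' as $1$ and its complement as $2$:

```python
def count_automaton_accepts(k, word):
    """Run of the deterministic counting automaton over {1, 2}, where the
    letter 1 stands for 'p holds here': states 0..k, final state k, so a
    finite word is accepted iff it contains at least k occurrences of 1."""
    state = 0
    for letter in word:
        if letter == 1:
            state = min(state + 1, k)  # count p-events, saturating at k
    return state == k

assert count_automaton_accepts(2, [1, 2, 1, 2])      # two p-events: accepted
assert not count_automaton_accepts(3, [1, 2, 1, 2])  # only two: rejected
```

The upper-bound modality $\mathop{\mathbf{C}}^{\leq k}_{I} p$ is handled dually, with the complement automaton.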
\begin{example}[adapted from~\cite{OpenScenario16}]\label{ex:overtaking}
In autonomous driving, one may want to specify that
\emph{a car overtaking another from the left must be done in $10$ seconds}.
Suppose the lane on the left is empty and the events are sampled sufficiently frequently (say $5$ms), this can be expressed as
the \textup{\textmd{\textsf{EMITL}}}{} formula $\A_{[0, 10]} (\texttt{TTC > 4}, \dots)$
(see \figurename~\ref{fig:vehicles} and \figurename~\ref{fig:overtaking})
where $\texttt{TTC}$ is the time to collision,
$\texttt{dist}$ is the longitudinal distance between the two vehicles, and
$\texttt{to\_left}$, $\texttt{to\_right}$ are the actions for merging to the left/right lane---these are taken immediately after $\texttt{TTC <= 4}$
and $\texttt{dist >= 5}$, respectively.
\end{example}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (redcar) at (0,0)
{\includegraphics[width=.1\textwidth]{red.png}};
\node[inner sep=0pt] (bluecar) at (3,0)
{\includegraphics[width=.1\textwidth]{blue.png}};
\node[inner sep=0pt] (bluecar) at (8,0) {};
\draw[dashed,->, very thick] (1,0) to[curve through={(1.1,0) (1.5,0.9) (2,1) (2.5,1) (3,1) (3.5,1) (4,1) (4.5, 1) (5,1) (5.5,1) (6,0.9) (6.4,-.1) (6.5, -.1)}] (7,-.1);
\draw[|<->|][dotted] (1,-.2) -- (2,-.2) node[midway,below=0.2cm] {\scriptsize $\texttt{TTC == 4}$};
\draw[|<->|][dotted] (4,.6) -- (6,.6) node[midway,below=0.2cm] {\scriptsize $\texttt{dist == 5}$};
\end{tikzpicture}
\caption{The red car overtakes the blue car from the left.}
\label{fig:vehicles}
\end{figure}
\begin{figure}[h]
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 4cm]
\node[initial left ,state] (0) {};
\node[state, right=2.6cm of 0] (1) {};
\node[state, right=1.3cm of 1] (2) {};
\node[state, right=2.6cm of 2] (3) {};
\node[state, accepting, right=1.3cm of 3] (4) {};
\path
(0) edge[loopabove, ->] node[above, align=center]{\scriptsize $\texttt{TTC > 4}$} (0)
(2) edge[loopabove, ->] node[above, align=center]{\scriptsize $\texttt{dist < 5}$} (2)
(0) edge[->] node[above] {\scriptsize $\texttt{TTC <= 4}$} (1)
(2) edge[->] node[above] {\scriptsize $\texttt{dist >= 5}$} (3)
(3) edge[->] node[above] {\scriptsize $\texttt{to\_right}$} (4)
(1) edge[->] node[above] {\scriptsize $\texttt{to\_left}$} (2);
\end{tikzpicture}}
\caption{$\mathcal{A}$ in Example~\ref{ex:overtaking}.}
\label{fig:overtaking}
\end{figure}
Compared with \textup{\textmd{\textsf{LTL}}}{} and \textup{\textmd{\textsf{MITL}}}{}, however, translating \textup{\textmd{\textsf{EMITL}}}{}
into \textup{\textmd{\textsf{TA}}}{s} is considerably more challenging.
The original translation by Wilke~\cite{Wilke1994}
is non-elementary and thus not suitable for practical purposes.
Krishna, Madnani, and Pandya~\cite{KriMad17}
showed that any \textup{\textmd{\textsf{EMITL}}}{} formula can be encoded into an \textup{\textmd{\textsf{MITL}}}{} formula of doubly exponential size (which can then be translated into a \textup{\textmd{\textsf{TA}}}{}),
but this does not match the $\mathrm{EXPSPACE}$ lower bound inherited from \textup{\textmd{\textsf{MITL}}}{}.
More recently, Ferr\`ere~\cite{Ferrere18} proposed an asymptotically optimal construction from \textup{\textmd{\textsf{MIDL}}}{} (\emph{Metric Interval Dynamic Logic},
which is strictly more expressive and subsumes \textup{\textmd{\textsf{EMITL}}}{}) formulae
to \textup{\textmd{\textsf{TA}}}{s}, but it is very complicated and relies heavily on the use of \emph{diagonal constraints} (i.e.~comparison between clocks) which are, in general, not preferred in practice~\cite{Bouyer03, Bouyer2005, Gastin2018} and not well-supported by existing model checkers.\footnote{It is possible to obtain a diagonal-free \textup{\textmd{\textsf{TA}}}{} from an \textup{\textmd{\textsf{EMITL}}}{} formula
by first applying the construction in~\cite{Ferrere18} and then removing the diagonal constraints~\cite{Bouyer05}.
This, however, is expensive and difficult to implement.}
\paragraph{Contributions}
We consider a simple fragment of \textup{\textmd{\textsf{EMITL}}}{}, which we call $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$, obtained
by allowing only lower- and upper-bound constraining intervals (e.g.,~$[0, a)$ and $(b, \infty)$),
as well as \textup{\textmd{\textsf{EECL}}}{}~\cite{Raskin1999} (obtained by adding automata modalities to \emph{Event Clock Logic} \textup{\textmd{\textsf{ECL}}}{}).
The satisfiability and model-checking problems for $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{EECL}}}{} are much cheaper
than that of \textup{\textmd{\textsf{EMITL}}}{} ($\mathrm{PSPACE}$-complete vs $\mathrm{EXPSPACE}$-complete).
Moreover, we show that they are already as expressive as full \textup{\textmd{\textsf{EMITL}}}{}---this is in sharp contrast with the situation for
`vanilla' $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$/$\textup{\textmd{\textsf{ECL}}}$ and \textup{\textmd{\textsf{MITL}}}{}, where the latter is strictly more expressive
when interpreted over timed words~\cite{HenRas98, Raskin1999}---making them
\emph{expressive} yet \emph{tractable} real-time specification formalisms.
We then show that $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$
admits a much simpler translation into \textup{\textmd{\textsf{TA}}}{s}.
Specifically, by effectively decoupling the timing
and operational aspects of automata modalities,
overlapping obligations imposed by a single automaton subformula
can be handled in a purely FIFO (first-in, first-out) manner
with a set of \emph{sub-components} (each of which is a simple one-clock \textup{\textmd{\textsf{TA}}}{}
with a polynomial-sized symbolic representation),
avoiding the use of diagonal constraints altogether.\footnote{For simplicity
we focus on logics with only future modalities, but our results readily carry over to the versions with both future and past modalities,
thanks to the compositional nature of our construction (cf. e.g.,~\cite{Kesten1998, Nickovic2008}).}
This makes our construction better suited to be implemented
to work with existing highly efficient algorithmic back ends (e.g.,~\textsc{Uppaal}~\cite{Behrmann2006} and
\textsc{LTSmin}~\cite{KanLaa15}).
\paragraph{Related work}
The idea of extending \textup{\textmd{\textsf{LTL}}}{} to capture the full class
of $\omega$-regular languages dates back to the seminal works of Clarke, Sistla, Vardi, and
Wolper~\cite{Wolper1983, Wolper1994, Sistla1985, Sistla1985b} in the early 1980s.
In particular, it is shown that \textup{\textmd{\textsf{LTL}}}{} with \textup{\textmd{\textsf{NFA}}}{} modalities---which
essentially underlies various industrial specification languages
like ForSpec~\cite{Armoni02} and PSL~\cite{Eisner06}---are expressively equivalent to B\"uchi automata,
yet the model-checking and satisfiability problems remain
$\mathrm{PSPACE}$-complete, same as \textup{\textmd{\textsf{LTL}}}{}.\footnote{There are other ways to extend \textup{\textmd{\textsf{LTL}}}{} to achieve
$\omega$-regularity, e.g., adding monadic second-order quantifiers
($\textup{\textmd{\textsf{QPTL}}}$~\cite{Sistla1985}) or
least/greatest fixpoints ($\textup{\textmd{\textsf{$\mu$LTL}}}$~\cite{Banieqbal1987, Vardi1987}).
These formalisms unfortunately suffer from higher complexity or less readable syntax.}
Our approach generalises the construction in~\cite{Wolper1994}
in the case of finite acceptance.
Henzinger, Raskin, and Schobbens~\cite{HenRas98, Raskin1999}
proved a number of analogous results in timed settings; in particular,
they showed that in the \emph{continuous} semantics (i.e.~over finitely variable \emph{signals}),
(i) $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{ECL}}}{} are as expressive as \textup{\textmd{\textsf{MITL}}}{}, and (ii)
the fragment of $\textup{\textmd{\textsf{EMITL}}}$ with \emph{unconstrained} automata modalities
is as expressive as \emph{recursive event-clock automata},
and the verification problems for this fragment can be solved in $\mathrm{EXPSPACE}$.
Our results can be seen as counterparts
in the \emph{pointwise} semantics (i.e.~over \emph{timed words}).
Besides satisfiability and model checking, extending timed logics with
automata or regular expressions is also a topic of great interest in
\emph{runtime verification}.
Basin, Krsti\'{c}, and Traytel~\cite{Basin2017}
showed that \textup{\textmd{\textsf{MTL}}}{} with time-constrained regular-expression modalities admits an efficient runtime monitoring procedure in a pointwise, integer-time setting.
A very recent work of Ni\v{c}kovi\'c, Lebeltel, Maler, Ferr\`ere, and Ulus~\cite{Nickovic2018}
considered a similar extension of \textup{\textmd{\textsf{MITL}}}{} with \emph{timed regular expressions}
(\textup{\textmd{\textsf{TRE}}}{})~\cite{Asarin1997, Asarin2002} in the context of monitoring and analysis of Boolean and real-valued signals.
\section{Timed logics and automata}
\paragraph{Timed languages}
A \emph{timed word} over a finite alphabet $\Sigma$ is an infinite sequence
of \emph{events} $(\sigma_i,\tau_i)_{i \geq 1}$ over
$\Sigma \times \R_{\geq 0}$ with $(\tau_i)_{i\geq 1}$ a non-decreasing
sequence of non-negative real numbers such that for each $r \in \R_{\geq 0}$,
there is some $j \geq 1$ with $\tau_j \geq r$ (i.e.~we require all timed words
to be `\emph{non-Zeno}').
We denote by $T\Sigma^\omega$ the set of all timed words over $\Sigma$. A \emph{timed language} is a
subset of $T\Sigma^\omega$.
\paragraph{Extended timed logics}
A \emph{non-deterministic finite automaton} (\textup{\textmd{\textsf{NFA}}}{}) over $\Sigma$
is a tuple $\A = \langle \Sigma, S, s_0,\transitions, F \rangle$
where $S$ is a finite set of locations, $s_0 \in S$ is the initial location,
$\transitions \subseteq S \times \Sigma \times S$ is the transition relation,
and $F$ is the set of final locations.
We say that $\A$ is \emph{deterministic} (a \textup{\textmd{\textsf{DFA}}}{}) iff for each $s \in S$
and $\sigma \in \Sigma$, $| \{ s' \mid (s, \sigma, s') \in \transitions \} | \leq 1$.
A \emph{run} of $\A$ on $\sigma_1 \dots \sigma_n \in \Sigma^+$
(without loss of generality, we only consider runs of automata modalities over \emph{nonempty} finite words in this paper) is a
sequence of locations $s_0 s_1 \dots s_n$ where
there is a transition $(s_i,\sigma_{i+1},s_{i+1}) \in \transitions$
for each $i$, $0 \leq i < n$. A run of $\A$ is \emph{accepting} iff
it ends in a final location. A finite word is \emph{accepted} by $\A$
iff $\A$ has an accepting run on it.
We denote by $\sem{\A}$ the set of finite words accepted by $\A$.
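The acceptance condition just defined is straightforward to implement by tracking the set of locations reachable after each letter; the sketch below (names ours) represents the transition relation as a set of triples:

```python
def accepts(trans, s0, final, word):
    """Acceptance of a nonempty finite word by the NFA <S, s0, trans, final>,
    with trans given as a set of triples (s, letter, s')."""
    current = {s0}
    for letter in word:
        current = {t for (s, a, t) in trans if s in current and a == letter}
    return bool(current & final)

# A two-state automaton with transitions (s0, 1, s0) and (s0, 2, s1) and
# final state s1: it accepts exactly the words matching 1*2.
trans = {('s0', 1, 's0'), ('s0', 2, 's1')}
assert accepts(trans, 's0', {'s1'}, [1, 1, 2])
assert not accepts(trans, 's0', {'s1'}, [1, 2, 1])
```

This subset-tracking view is also the standard way to determinise an \textup{\textmd{\textsf{NFA}}}{} on the fly.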
\emph{Extended Metric Interval Temporal Logic} (\textup{\textmd{\textsf{EMITL}}}{}) formulae over
a finite set of atomic propositions $\AP$
are generated by
\begin{displaymath}
\phi := \top \mid p \mid \phi_1 \land \phi_2 \mid \neg\phi \mid \A_I(\phi_1, \dots, \phi_n)
\end{displaymath}
where $p\in\AP$, $\A$ is an \textup{\textmd{\textsf{NFA}}}{} over the $n$-ary alphabet $\{ 1, \dots, n \}$,
and $I \subseteq \R_{\geq 0}$ is a non-singular interval with endpoints in $\N_{\geq 0} \cup\{\infty\}$.\footnote{For notational simplicity,
we will occasionally use $\phi_1$,~\dots, $\phi_n$ directly as transition labels (instead of $1$, \dots, $n$).}
As usual, we omit the subscript $I$ when $I = [0, \infty)$ and write pseudo-arithmetic expressions for lower
or upper bounds, e.g.,~`$< 3$' for $[0, 3)$.
We also omit the arguments $\phi_1$,~\dots, $\phi_n$
and simply write $\A_I$, if clear from the context.
Following~\cite{AluHen93, AluHen94, Wilke1994, OuaWor07}, we consider
the pointwise semantics of \textup{\textmd{\textsf{EMITL}}}{} and interpret
formulae over timed words: given an \textup{\textmd{\textsf{EMITL}}}{} formula $\phi$ over $\AP$,
a timed word $\rho=(\sigma_1,\tau_1)(\sigma_2,\tau_2)\dots$ over $\Sigma_\AP = 2^\AP$ and
a \emph{position} $i \geq 1$,
\begin{itemize}
\item $(\rho, i)\models \top$;
\item $(\rho, i)\models p$ iff $p\in \sigma_i$;
\item $(\rho, i)\models \phi_1\land \phi_2$ iff $(\rho,i)\models\phi_1$ and
$(\rho,i)\models\phi_2$;
\item $(\rho,i)\models\neg\phi$ iff $(\rho,i)\not\models\phi$;
\item $(\rho,i)\models \A_I(\phi_1, \dots, \phi_n)$ iff there exists
$j\geq i$ such that (i) $\tau_j-\tau_i\in I$ and (ii) there is an
accepting run of $\A$ on $a_i \dots a_j$ where $a_\ell \in \{1, \dots, n\}$
and $(\rho, \ell) \models \phi_{a_\ell}$ for each $\ell$, $i \leq \ell \leq j$.\footnote{Note that
it is possible for $(\rho,i)\models \A_I(\phi_1, \dots, \phi_n)$ and $(\rho,i) \models \A^c_I(\phi_1, \dots, \phi_n)$,
where $\A^c$ is the complement of $\A$, to hold simultaneously.}
\end{itemize}
The other Boolean operators are defined as usual:
$\bot \equiv \neg\top$,
$\phi_1\lor\phi_2 \equiv \neg(\neg\phi_1\land\neg\phi_2)$,
and $\phi_1\Rightarrow \phi_2 \equiv \lnot\phi_1\lor \phi_2$.
We also define the dual automata modalities
$\tilde{\A}_I(\phi_1, \dots, \phi_n) \equiv \neg \A_I(\neg \phi_1, \dots, \neg \phi_n)$.
With the dual automata modalities, we can transform every
\textup{\textmd{\textsf{EMITL}}}{} formula $\phi$ into \emph{negative normal form}, i.e.~an \textup{\textmd{\textsf{EMITL}}}{} formula using
only atomic propositions, their negations, and the operators $\lor$,
$\land$, $\A_I$, and $\tilde{\A}_I$.
It is easy to see that the standard \textup{\textmd{\textsf{MITL}}}{} `until' $\phi_1 \mathbin{\mathbf{U}}_I \phi_2$
can be defined in terms of automata modalities.
We also use the usual shortcuts like
$\eventually_I\phi \equiv \top\mathbin{\mathbf{U}}_I\phi$,
$\globally_I\phi \equiv \neg\eventually_I\neg\phi$,
and $\phi_1\mathbin{\mathbf{R}}_I \phi_2 \equiv \neg\big((\neg\phi_1)\mathbin{\mathbf{U}}_I(\neg \phi_2)\big)$.
We say that $\rho$ \emph{satisfies} $\phi$ (written $\rho\models\phi$)
iff $(\rho,1)\models\phi$, and we write
$\sem\phi$ for the timed language of $\phi$, i.e.~the set of all timed words satisfying $\phi$.
$\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ is the fragment of \textup{\textmd{\textsf{EMITL}}}{} where all constraining
intervals $I$ must be lower or upper bounds (e.g.,~$< 3$ or $\geq 5$).
\emph{Extended Event Clock Logic} (\textup{\textmd{\textsf{EECL}}}{}) is the fragment of \textup{\textmd{\textsf{EMITL}}}{}
where $\A_I$ is replaced by a more restricted `event-clock' counterpart:
\begin{itemize}
\item $(\rho,i)\models \oset{\triangleright}{\A}_I(\phi_1, \dots, \phi_n)$ iff
(i) there is a \emph{minimal} position $j \geq i$ such that $\A$ has an accepting run on $a_i \dots a_j$ where $a_\ell \in \{1, \dots, n\}$
and $(\rho, \ell) \models \phi_{a_\ell}$ for each $\ell$, $i \leq \ell \leq j$; and
(ii) $j$ satisfies $\tau_j-\tau_i\in I$.
\end{itemize}
\paragraph{Timed automata}
Let $X$ be a finite set of \emph{clocks}
($\R_{\geq 0}$-valued variables).
A \emph{valuation} $v$ for $X$ maps each clock $x \in X$ to a value in $\R_{\geq 0}$.
We denote by $\mathbf{0}$ the valuation that maps every clock to $0$,
and we write the valuation simply as a value in $\R_{\geq 0}$
when $X$ is a singleton.
The set $\Guards(X)$ of \emph{clock constraints} $g$ over $X$ is generated
by $g:= \top\mid g\land g \mid x\bowtie c$ where
${\bowtie}\in \{{\leq},{<},{\geq},{>}\}$, $x\in X$, and $c\in\N_{\geq 0}$.
The satisfaction of a clock constraint $g$ by a valuation $v$ (written $v \models g$) is
defined in the usual way, and we write $\sem{g}$ for the set of valuations $v$ satisfying $g$.
For $t\in\R_{\geq 0}$, we let $v +t$
be the valuation defined by $(v +t)(x) = v (x)+t$ for all $x\in
X$. For $\lambda \subseteq X$, we let $v [\lambda \leftarrow 0]$ be the valuation
defined by $(v[\lambda \leftarrow 0])(x) = 0$ if $x\in \lambda$, and
$(v[\lambda \leftarrow 0])(x) = v (x)$ otherwise.
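These operations on valuations are straightforward to implement; the sketch below (with representations of our own choosing, namely valuations as dictionaries and guards as lists of atomic constraints) mirrors the definitions of $v + t$, $v[\lambda \leftarrow 0]$, and $v \models g$.

```python
import operator

_OPS = {'<=': operator.le, '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def delay(v, t):
    """v + t: advance every clock by t time units."""
    return {x: val + t for x, val in v.items()}

def reset(v, lam):
    """v[lam <- 0]: reset the clocks in lam and keep the others unchanged."""
    return {x: (0.0 if x in lam else val) for x, val in v.items()}

def satisfies(v, guard):
    """v |= g for a conjunction of atomic constraints (clock, op, c)."""
    return all(_OPS[op](v[x], c) for (x, op, c) in guard)
```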
A \emph{timed automaton} (\textup{\textmd{\textsf{TA}}}{}) over $\Sigma$ is a tuple
$\A = \langle \Sigma, S, s_0, X, \transitions, \F \rangle$ where $S$ is a finite set of
locations, $s_0 \in S$ is the initial location, $X$ is a finite set of clocks,
$\transitions \subseteq S \times \Sigma \times \Guards(X) \times
2^X \times S$ is the transition relation,
and $\F=\{F_1,\dots,F_n\}$, with $F_i\subseteq S$ for all $i$, $1\leq i \leq n$,
is the set of sets of final locations.\footnote{We adopt
\emph{generalised B\"uchi} acceptance for technical convenience;
indeed, any \textup{\textmd{\textsf{TA}}}{} with a generalised B\"uchi acceptance condition can be converted into a
classical B\"uchi \textup{\textmd{\textsf{TA}}}{} via a simple standard construction~\cite{Courcoubetis1992}.}
We say that $\A$ is \emph{deterministic} (a \textup{\textmd{\textsf{DTA}}}{}) iff for each $s \in S$ and $\sigma \in \Sigma$ and
every pair of distinct transitions $(s, \sigma, g^1, \lambda^1, s^1) \in \transitions$ and $(s, \sigma, g^2, \lambda^2, s^2) \in \transitions$, $g^1 \land g^2$ is not satisfiable.
A \emph{state} of $\A$ is a pair $(s, v)$
of a location $s \in S$ and a valuation $v$ for $X$.
A \emph{run} of $\A$ on a timed word
$(\sigma_1,\tau_1)(\sigma_2,\tau_2)\cdots\in T\Sigma^\omega$ is a
sequence of states $(s_0,v_0)(s_1,v_1)\dots$ where (i)
$v_0=\mathbf{0}$ and (ii) for each $i\geq 0$,
there is a transition $(s_i,\sigma_{i+1},g,\lambda,s_{i+1})$
such that
$v_i +(\tau_{i+1}-\tau_i)\models g$ (let $\tau_0=0$) and
$v_{i+1} =(v_i +(\tau_{i+1}-\tau_i))[\lambda \leftarrow 0]$.
A run of $\A$ is \emph{accepting} iff
the set of locations it visits infinitely often contains at least
one location from each $F_i$, $1\leq i\leq n$. A timed word
is \emph{accepted} by $\A$ iff $\A$ has an accepting run on it.
We denote by $\sem{\A}$ the timed language accepted by $\A$.
For two \textup{\textmd{\textsf{TA}}}{s} $\A^1 = \langle \Sigma,S^1,s_0^1,X^1,\transitions^1,\F^1 \rangle$ and
$\A^2 = \langle \Sigma,S^2,s_0^2,X^2,\transitions^2,\F^2 \rangle$ over a
common alphabet $\Sigma$, the (synchronous) product
$\A^1 \times \A^2$ is defined as the \textup{\textmd{\textsf{TA}}}{} $\langle \Sigma,S,s_0,X,\transitions,\F \rangle$
where (i) $S=S^1 \times S^2$, $s_0 = (s_0^1, s_0^2)$, and $X = X^1 \cup X^2$;
(ii) $((s^1_1,s^2_1),\sigma,g,\lambda,(s^1_2,s^2_2))\in\transitions$
iff there exists $(s^1_1,\sigma,g^1,\lambda^1,s^1_2)\in\transitions^1$
and $(s^2_1,\sigma,g^2,\lambda^2,s^2_2)\in\transitions^2$ such that
$g=g^1\land g^2$ and $\lambda = \lambda^1 \cup \lambda^2$;
and (iii) let $\F^1=\{F_1^1,\dots,F_n^1\}$, $\F^2=\{F_1^2,\dots,F_m^2\}$, then
$\F = \{F_1^1 \times S^2,\dots,F_n^1\times S^2, S^1\times
F_1^2,\dots,S^1\times F_{m}^2\}$. Note in particular that we have $\sem{\A^1 \times \A^2} = \sem{\A^1} \cap \sem{\A^2}$.
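Clause (ii) of the product construction can be sketched directly (again with an illustrative encoding of our own: guards and reset sets as frozensets, so that conjunction of guards and union of reset sets are plain set operations).

```python
from itertools import product

def product_transitions(trans1, trans2):
    """Synchronous product transitions for two TAs over a common alphabet.

    Each transition is (source, letter, guard, resets, target), with the
    guard a frozenset of atomic constraints (implicitly conjoined) and the
    resets a frozenset of clocks.  The product fires a transition iff both
    components fire on the same letter; guards are conjoined and reset
    sets are unioned, exactly as in clause (ii)."""
    return {
        ((s1, s2), a1, g1 | g2, l1 | l2, (t1, t2))
        for (s1, a1, g1, l1, t1), (s2, a2, g2, l2, t2) in product(trans1, trans2)
        if a1 == a2
    }
```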
\begin{example}
\begin{figure}[h]
\centering \scalebox{.75}{\begin{tikzpicture}[node distance =
2.5cm] \node[initial left,state](0){};
\node[state, right of=0](1){};
\node[state, right of=1, accepting](2){};
\path
(0) edge[loopabove,->] (0)
(0) edge[->] node[above, align=center]{$p \wedge \neg q$ \\ $x := 0$} (1)
(1) edge[loopabove,->] node[above, align=center]{$\neg q$ \\ $x \leq 1$} (1)
(1) edge[->] node[above] {$x > 1$} (2);
\end{tikzpicture}}
\caption{A \textup{\textmd{\textsf{TA}}}{} accepting
$\sem{\neg \globally(p\Rightarrow \eventually_{\leq 1}q)}$.}
\label{fig:TA}
\end{figure}
Consider the \textup{\textmd{\textsf{TA}}}{} over $\Sigma_{\{p,q\}}$ in \figurename~\ref{fig:TA}
(following the usual convention, we omit transition labels when they are $\top$'s and use Boolean formulae over atomic propositions to represent letters,
e.g., here $p \land \neg q$ stands for $\{ \sigma \in \Sigma_{\{p,q\}} \mid p \in \sigma, q \notin \sigma \}$). It non-deterministically picks an event where $p$ holds but $q$ does not (thus
$\eventually_{\leq 1}q$ is not fulfilled immediately) and enforces that $q$ does not hold
in the next time unit. In other words, it accepts $\sem{\eventually\big(p \wedge \globally_{\leq 1}(\neg q)\big)} = \sem{\neg \globally(p\Rightarrow \eventually_{\leq 1}q)}$.
\end{example}
\paragraph{Alternation} One-clock
alternating timed automata (\textup{\textmd{\textsf{OCATA}}}{s}) extend
one-clock timed automata with the power of
\emph{universal choice}. Intuitively, a transition of an \textup{\textmd{\textsf{OCATA}}}{}
may spawn several copies of the automaton that run in parallel from the
targets of the transition; a timed word is accepted iff
\emph{all} copies accept it.
Formally, for a set $S$ of
locations, let $\Gamma(S)$ be the set of formulae defined by
\[
\gamma := \top\mid \bot\mid \gamma_1 \lor \gamma_2\mid \gamma_1 \land \gamma_2
\mid s \mid x\bowtie c\mid x.\gamma
\]
where $x$ is the single clock, $c\in\N_{\geq 0}$, ${\bowtie} \in\{{\leq},{<},{\geq},{>}\}$, and
$s \in S$ (the construct $x.$ means ``reset $x$''). For a formula $\gamma \in \Gamma(S)$, let its dual $\overline{\gamma} \in \Gamma(S)$
be the formula obtained by applying
\begin{itemize}
\item $\overline{\top} = \bot$; $\overline{\bot} = \top$;
\item $\overline{\gamma_1 \lor \gamma_2} = \overline{\gamma_1} \land \overline{\gamma_2}$;
$\overline{\gamma_1 \land \gamma_2} = \overline{\gamma_1} \lor \overline{\gamma_2}$;
\item $\overline{s} = s$; $\overline{x \bowtie c} = \neg (x \bowtie c)$; $\overline{x.\gamma} = x.\overline{\gamma}$.
\end{itemize}
An \textup{\textmd{\textsf{OCATA}}}{} over $\Sigma$ is a tuple $\A=\langle \Sigma, S, s_0, \atransitions, F \rangle$
where $S$ is a finite set of locations,
$s_0 \in S$ is the initial location,
$\atransitions \colon S\times \Sigma \to \Gamma(S)$ is the transition
function, and $F \subseteq S$ is the set of final locations.
A \emph{state} of $\A$ is a pair $(s,v)$ of a location $s \in S$ and
a valuation $v$ for the single clock $x$.
Given a set of states $M$, a formula $\gamma \in \Gamma(S)$
and a clock valuation $v$, we define
\begin{itemize}
\item $M\models_v \top$; $M\models_v s$ iff $(s,v)\in M$;
$M\models_v x\bowtie c$ iff $v\bowtie c$; \mbox{$M\models_v x.\gamma$ iff $M\models_0 \gamma$};
\item $M\models_v \gamma_1\land\gamma_2$ iff $M\models_v \gamma_1$
and $M\models_v \gamma_2$;
\item $M\models_v \gamma_1\lor\gamma_2$ iff $M\models_v \gamma_1$
or $M\models_v \gamma_2$.
\end{itemize}
We say that $M$ is a \emph{model} of $\gamma$ with respect
to $v$ iff $M \models_v \gamma$.\footnote{Note that $\models_v$ is \emph{monotonic}: if $M \subseteq M'$ and $M \models_v \gamma$ then $M' \models_v \gamma$.}
A run of $\A$ on a timed word
$(\sigma_1,\tau_1)(\sigma_2,\tau_2)\dots \in T\Sigma^\omega$ is a
rooted directed acyclic graph (DAG) $G=\langle V, \to \rangle$ with vertices of the
form $(s,v,i)\in S\times \R_{\geq 0} \times \N_{\geq 0}$, $(s_0,0,0)$ as the root,
and edges as follows: for every vertex $(s,v,i)$, there is a
model $M$ of the formula $\atransitions(s,\sigma_{i+1})$
with respect to $v+(\tau_{i+1}-\tau_i)$ (again, $\tau_0=0$) such that
there is an edge $(s,v,i) \to (s',v',i+1)$ for every state $(s',v')$ in $M$.
A run $G$ of $\A$ is \emph{accepting} iff
every infinite path in $G$ visits $F$ infinitely often.
A timed word is \emph{accepted} by $\A$
iff $\A$ has an accepting run on it. We denote by $\sem{\A}$ the timed language accepted by $\A$.
For convenience, in the sequel we will regard \textup{\textmd{\textsf{NFA}}}{s} as
(untimed) \textup{\textmd{\textsf{OCATA}}}{s} with finite acceptance conditions and
whose transition functions are simply disjunctions over locations.
\begin{example}\label{ex:OCATA}
\begin{figure}[h]
\centering \scalebox{.75}{\begin{tikzpicture}[node distance =
2.5cm] \node[initial left,accepting,state](0){$s_0$};
\node[state, right of=0](1){$\wedge$};
\node[state, right of=1](2){$s_1$};
\path
(0) edge[loopabove,->] node[above]{$\neg p$} (0)
(0) edge[-] node[above]{$p \wedge \neg q$} (1)
(0) edge[loopbelow,->] node[below]{$p \wedge q$} (0)
(1) edge[->,bend left] node[below] {} (0)
(1) edge[->] node[above] {$x:=0$} (2)
(2) edge[->,loopabove] node[above]{$\top$} (2)
(2) edge[->] node[above]{$x\leq 1, q$} (7,0);
\end{tikzpicture}}
\caption{An \textup{\textmd{\textsf{OCATA}}}{} accepting
$\sem{\globally(p\Rightarrow \eventually_{\leq 1}q)}$.}
\label{fig:OCATA}
\end{figure}
\begin{figure}[h]
\centering \scalebox{1}{\begin{tikzpicture}
\node[text width=1.2cm] (0) {$(s_0, 0, 0)$};
\node[text width=1.6cm, right = 4mm of 0] (1) {$(s_0, 0.42, 1)$};
\node[text width=1.7cm, above right = 1mm and 4mm of 1] (2) {$(s_0, 0.42, 2)$};
\node[text width=1.5cm, below right = 1mm and 4mm of 1] (3) {$(s_1, 0, 2)$};
\node[text width=1.4cm, right = 4mm of 2] (4) {$(s_0, 0.7, 3)$};
\node[text width=0.5cm, right = 0mm of 4] (6) {$\dots$};
\path
(0) edge[->] (1)
(1.east) edge[->] (2.west)
(1.east) edge[->] (3.west)
(2) edge[->] (4);
\end{tikzpicture}}
\caption{A run of the \textup{\textmd{\textsf{OCATA}}}{} in \figurename~\ref{fig:OCATA} on
the timed word $(\emptyset,0.42)(\{p\},0.42)(\{q\},0.7)\cdots$.}
\label{fig:OCATArun}
\end{figure}
Consider the \textup{\textmd{\textsf{OCATA}}}{} over $\Sigma_{\{p,q\}}$ in \figurename~\ref{fig:OCATA}
which accepts $\sem{\globally(p\Rightarrow \eventually_{\leq 1}q)}$.
A run of it on
$(\emptyset,0.42)(\{p\},0.42)(\{q\},0.7)\cdots$
is depicted in \figurename~\ref{fig:OCATArun} where the root is
$(s_0,0,0)$. This vertex has a
single successor $(s_0,0.42,1)$, which in turn has two successors
$(s_0,0.42,2)$ and $(s_1,0,2)$ (after firing the
transition $\atransitions(s_0, \{p\}) = s_0 \wedge x.s_1$). Then, $(s_1,0,2)$ has no successor
since the empty set is a model of $\atransitions(s_1, \{q\}) = x \leq 1$ with respect to $0.28$.
\end{example}
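The satisfaction relation $M \models_v \gamma$ is a simple recursion over the structure of $\gamma$, and the run construction of Example~\ref{ex:OCATA} can then be checked mechanically. The sketch below uses a tuple encoding of $\Gamma(S)$ of our own devising.

```python
import operator

_OPS = {'<=': operator.le, '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def models(M, v, gamma):
    """Decide M |=_v gamma, with gamma encoded as nested tuples:
    ('top',), ('bot',), ('loc', s), ('cmp', op, c), ('reset', g),
    ('and', g1, g2), ('or', g1, g2).  M is a set of (location, value) pairs."""
    tag = gamma[0]
    if tag == 'top':
        return True
    if tag == 'bot':
        return False
    if tag == 'loc':
        return (gamma[1], v) in M
    if tag == 'cmp':
        return _OPS[gamma[1]](v, gamma[2])
    if tag == 'reset':
        return models(M, 0.0, gamma[1])  # x.g: evaluate g with the clock at 0
    if tag == 'and':
        return models(M, v, gamma[1]) and models(M, v, gamma[2])
    if tag == 'or':
        return models(M, v, gamma[1]) or models(M, v, gamma[2])
    raise ValueError(f'unknown construct {tag!r}')

# From the example above: {(s0, 0.42), (s1, 0.0)} is a model of
# s0 /\ x.s1 with respect to 0.42, and the empty set is a model of
# x <= 1 with respect to 0.28.
```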
\paragraph{Verification problems} In this work we are concerned with the following standard
verification problems. Given an \textup{\textmd{\textsf{EMITL}}}{} formula $\phi$, the \emph{satisfiability} problem
asks whether $\sem{\phi} = \emptyset$. Given a \textup{\textmd{\textsf{TA}}}{} $\A$ and
an \textup{\textmd{\textsf{EMITL}}}{} formula $\phi$, the \emph{model-checking} problem asks whether
$\sem{\A} \subseteq \sem{\phi}$. As \textup{\textmd{\textsf{TA}}}{s} are closed under intersection and
the \emph{emptiness} problem for \textup{\textmd{\textsf{TA}}}{s} is decidable, both problems above
can be solved by first translating $\phi$ into an equivalent \textup{\textmd{\textsf{TA}}}{} $\A_\phi$.
\section{Expressiveness}
In this section we study the expressiveness of
$\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$, \textup{\textmd{\textsf{EECL}}}{}, and a `counting' extension of \textup{\textmd{\textsf{EMITL}}}{}.
It turns out that the class of timed languages captured by
\textup{\textmd{\textsf{EMITL}}}{} is robust in the sense that it remains the same
under all these modifications.
For the purpose of the proofs below, let us assume (without loss of generality) that
the automaton $\A = \langle \Sigma, S, s_0, \atransitions, F \rangle$ in question is a \textup{\textmd{\textsf{DFA}}}{}
and at most one of $\phi_1, \dots, \phi_n$ may hold at any position in a given timed word~\cite{Wolper1994}.
\paragraph{Counting in intervals}
Recall that the constraining intervals $I$ in the
counting modalities in Ex.~\ref{ex:icd}
satisfy $0 \in I$; this non-trivial extension of \textup{\textmd{\textsf{MTL}}}{} (and \textup{\textmd{\textsf{MITL}}}{}) was first considered by
Hirshfeld and Rabinovich~\cite{HirRab99, Rabinovich2007}.
For the case of timed words, it is shown in~\cite{KriMad16} that allowing arbitrary $I$ (e.g.,~$(1, 2)$)
makes the resulting logic even more expressive.
Here we show that, by contrast, adding the ability to count in $I$---regardless of whether $0 \in I$---does not increase the expressive power of \textup{\textmd{\textsf{EMITL}}}{}.\footnote{As \textup{\textmd{\textsf{EMITL}}}{} can easily express the `until with threshold' modalities of \textup{\textmd{\textsf{CTMITL}}}{},
the latter is clearly subsumed by \textup{\textmd{\textsf{EMITL}}}{}.}
We consider an extension of \textup{\textmd{\textsf{EMITL}}}{} (which we call~\textup{\textmd{\textsf{CEMITL}}}{})
that enables specifying the number of positions within a given interval $I$ from now at which final locations can be reached.
More precisely, we have the following semantic clause in \textup{\textmd{\textsf{CEMITL}}}{}:
\begin{itemize}
\item $(\rho,i)\models \A^{\geq k}_I(\phi_1, \dots, \phi_n)$ iff there exist positions
$j_1 < \dots < j_k$ such that for each $\ell$, $1 \leq \ell \leq k$, (i) $j_\ell \geq i$; (ii) $\tau_{j_\ell}-\tau_{i} \in I$; and (iii) there is an
accepting run of $\A$ on some $a_i \dots a_{j_{\ell}}$ where $a_{\ell'} \in \{1, \dots, n\}$
and $(\rho, \ell') \models \phi_{a_{\ell'}}$ for each $\ell'$, $i \leq \ell' \leq j_\ell$.
\end{itemize}
\begin{figure}[h]
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2.5cm]
\node[initial left ,state] (0) {$s_0^1$};
\node[state, right of=0] (1) {$s_1^1$};
\node[state, right of=1] (2) {$s_2^1$};
\path
(0) edge[loopabove, ->] (0)
(1) edge[loopabove, ->] (1)
(0) edge[->] (1)
(1) edge[->] (2)
(2) edge[bend left, ->] (0)
(2) edge[loopabove, ->] (2);
\end{tikzpicture}}
\caption{$\A^1$ in the proof of Theorem~\ref{thm:countinguseless}.}
\label{fig:counting}
\end{figure}
\begin{theorem}\label{thm:countinguseless}
$\textup{\textmd{\textsf{CEMITL}}}$ and \textup{\textmd{\textsf{EMITL}}}{} are equally expressive over timed words.
\end{theorem}
\begin{proof}
We give an \textup{\textmd{\textsf{EMITL}}}{} equivalent of $\A^{\geq k}_I(\phi_1, \dots, \phi_n)$.
Provided that $\phi_1$, \dots, $\phi_n$ are already in \textup{\textmd{\textsf{EMITL}}}{} and $\A$ is deterministic
in the sense above, we can count modulo $k$ the number of positions
where final locations are reached and ensure that $I$ encompasses all possible values of the counter;
in contrast to~\cite{KriMad16}, here the counter can be implemented directly using automata modalities.
We give a concrete example which should illustrate the idea.
Let $k = 3$ and $\A^2$ be the product of $\A$ and $\A^1$ (\figurename~\ref{fig:counting}),
i.e.~each location of $\A^2$ is of the form $\langle s, s^1 \rangle$ where $s \in S$ and $s^1 \in \{s_0^1, s_1^1, s_2^1 \}$,
and it is accepting iff $s$ and $s^1$ are both final.
Then, let $\A^3$ be the automaton obtained from $\A^2$
by:
\begin{itemize}
\item For all the transitions $\langle s, s_0^1 \rangle \rightarrow \langle s', s_1^1 \rangle$,
$\langle s, s_1^1 \rangle \rightarrow \langle s', s_2^1 \rangle$,
and $\langle s, s_2^1 \rangle \rightarrow \langle s', s_0^1 \rangle$,
keeping only those with $s' \in F$;
\item For all the transitions $\langle s, s_0^1 \rangle \rightarrow \langle s', s_0^1 \rangle$,
$\langle s, s_1^1 \rangle \rightarrow \langle s', s_1^1 \rangle$,
and $\langle s, s_2^1 \rangle \rightarrow \langle s', s_2^1 \rangle$,
keeping only those with $s' \notin F$.
\end{itemize}
Now let $\A^{1,\ell}$ ($\ell \in \{0, 1, 2\}$) be the automaton
obtained from $\A^1$ by adding an extra final location $s^1_F$
and the transition $s^1_{\ell - 1\ (\mathrm{mod}\ 3)} \rightarrow s^1_F$,
and let $\A^{3,\ell}$ be the corresponding product with $\A$,
keeping transitions $\langle s, s_{\ell - 1\ (\mathrm{mod}\ 3)}^1 \rangle \rightarrow \langle s', s_F^1 \rangle$
with $s' \in F$. The original formula $\A_I^{\geq 3}$ is equivalent
to $\bigwedge_{\ell \in \{0, 1, 2\}} \A^{3, \ell}_I$.
\end{proof}
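To make the modulo-$k$ counting concrete, the small sketch below (our own illustration, not part of the proof) checks the key invariant behind $\A^3$: after the transition filtering, the $\A^1$-component of the current location always equals the number of prefixes on which $\A$ reaches a final location, modulo $3$.

```python
def step(counter, enters_final):
    """One step of the A^1-component of A^3: the mod-3 counter advances
    exactly when the A-component enters a final location, else stays put."""
    return (counter + 1) % 3 if enters_final else counter

def check_invariant(final_flags):
    """Simulate a run; final_flags[i] records whether A enters a final
    location at step i.  Asserts the invariant
    counter == (number of final visits so far) mod 3 at every step,
    then returns the final counter value."""
    counter, visits = 0, 0
    for f in final_flags:
        counter = step(counter, f)
        visits += int(f)
        assert counter == visits % 3  # the invariant behind A^3
    return counter
```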
\paragraph{Restricting to event clocks}
We show that the equivalence of \textup{\textmd{\textsf{ECL}}}{} and $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$
carries over to the current setting.
More specifically, an \textup{\textmd{\textsf{EECL}}}{} formula can be translated
into an equivalent $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula of polynomial size (in DAG representation).
On the other hand, our translation from $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$
to \textup{\textmd{\textsf{EECL}}}{} induces an exponential blow-up due to the fact that
automata $\A$ have to be determinised.
\begin{theorem}\label{thm:eeclemitl}
\textup{\textmd{\textsf{EECL}}}{} and $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ are equally expressive over timed words.
\end{theorem}
\begin{proof}
Again, we assume that the arguments $\phi_1$, \dots, $\phi_n$ are already in the target logic.
The direction from \textup{\textmd{\textsf{EECL}}}{} to $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ is simple and almost identical
to the translation from \textup{\textmd{\textsf{ECL}}}{} to $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$; for example, $\oset{\triangleright}{\A}_{(3, 5)}$
can be written as $\A_{<5} \wedge \neg \A_{\leq 3}$. For the other direction
consider the following $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formulae:
\begin{itemize}
\item $(\rho, i) \models \A_{\leq c}$: the equivalent formula is simply $\oset{\triangleright}{\A}_{\leq c}$.
\item $(\rho, i) \models \A_{\geq c}$: as in~\cite{Raskin1999}, we consider the subcases where:
\begin{itemize}
\item There is no event in $[\tau_i, \tau_i + c)$ apart from $(\sigma_i, \tau_i)$:
let $\A^2$ be the product of $\A$ and $\A^1$ where $\A^1$ is the automaton depicted in \figurename~\ref{fig:cequals0}.
We have $(\rho, i) \models \neg \oset[0ex]{\triangleright}{\A}^1_{<c} \wedge \oset[0ex]{\triangleright}{\A}^2_{\geq c}$.
\item There are events in $[\tau_i, \tau_i + c)$ other than $(\sigma_i, \tau_i)$:
let the last event in $[\tau_i, \tau_i + c)$ be $(\sigma_j, \tau_j)$
and $k > j > i$ be the minimal position such that there exists
$a_i \dots a_k \in \sem{\A}$ with $(\rho, \ell) \models \phi_{a_\ell}$ for all $\ell$, $i \leq \ell \leq k$.
By assumption, $a_i \dots a_k$ is unique and $\A$ must reach a
specific location $s \in S$ after reading $a_i \dots a_j$.
The idea is to split the unique run of $\A$ on $a_i \dots a_k$
at $s$: we take a disjunction over all possible $s \in S$,
enforce that $\tau_j - \tau_i < c$ and $\A$ reaches
a final location from $s$ by reading $a_{j+1} \dots a_k$.
More specifically, let $\B^{s, \phi}$ be the automaton obtained from $\A$
by adding a new location $s_F$, declaring it as the only final location,
and adding new transitions $s' \xrightarrow{\phi_a \wedge \phi} s_F$
for every $s' \xrightarrow{\phi_a} s$ in $\A$.
Let $\C^s$ be the automaton obtained from $\A$ by adding new non-final locations $s_0'$ and $s_1'$,
adding new transitions $s_0' \rightarrow s_1'$
(i.e.~labelled with $\top$) and $s_1' \xrightarrow{\phi_a} s''$
for every $s \xrightarrow{\phi_a} s''$ in $\A$,
removing outgoing transitions from all the final locations, and finally
setting the initial location to $s_0'$.
We have $(\rho, i) \models \oset[0ex]{\triangleright}{\A}^1_{<c} \wedge \oset{\triangleright}{\A} \wedge \neg \bigvee_{s \in S} \oset[0ex]{\triangleright}{\B}^{s, \phi}_{<c}$
where $\phi = \neg \C^s$.
\end{itemize}
The equivalent formula is the disjunction of these.
\end{itemize}
The other types of constraining intervals, such as $[0, c)$, are handled almost identically.
\end{proof}
\begin{figure}[h]
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2.5cm]
\node[initial left ,state] (0) {$s_0^1$};
\node[state, right of=0] (1) {$s_1^1$};
\node[state, accepting, right of=1] (2) {$s_2^1$};
\path
(0) edge[loopabove, ->] (0)
(0) edge[->] (1)
(1) edge[->] (2)
(2) edge[loopabove, ->] (2);
\end{tikzpicture}}
\caption{$\A^1$ in the proof of Theorem~\ref{thm:eeclemitl}.}
\label{fig:cequals0}
\end{figure}
\paragraph{Restricting to one-sided constraining intervals}
Recall that a fundamental stumbling block in the algorithmic analysis of \textup{\textmd{\textsf{TA}}}{s}
is that the \emph{universality} problem is undecidable~\cite{AluDil94}.
\textup{\textmd{\textsf{DTA}}}{s} with finite acceptance conditions, on the other hand, can be complemented easily and
have a decidable universality problem.
This raises the question of whether one can extend \textup{\textmd{\textsf{MITL}}}{} with \textup{\textmd{\textsf{DTA}}}{} modalities
without losing decidability (both are fully decidable formalisms).
Perhaps surprisingly, the resulting formalism already subsumes \textup{\textmd{\textsf{MTL}}}{}
even when punctual constraints are disallowed.
For example, $\eventually_{[d, d]} \phi$ can be written as
$\neg \A' \land \neg \A'' \land \eventually_{[d, \infty)} \phi$
where $\A'$ and $\A''$ are the one-clock deterministic \textup{\textmd{\textsf{TA}}}{s}
in \figurename~\ref{fig:punc1} and \figurename~\ref{fig:punc2}, respectively
(in particular, note that $\A'$ and $\A''$ only use lower- and upper-bound constraints).
It follows from~\cite{Ouaknine2006} that the satisfiability problem
for this formalism is undecidable.
\begin{figure}[h]
\begin{minipage}[b]{0.35\linewidth}
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2.5cm]
\node[initial left ,state] (0) {};
\node[state, accepting, right of=0] (1) {};
\path
(0) edge[loopabove, ->] node[above] {$x < d$} (0)
(0) edge[->] node[above] {$x > d$} (1);
\end{tikzpicture}}
\caption{$\A'$.}
\label{fig:punc1}
\end{minipage}
\begin{minipage}[b]{0.55\linewidth}
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2.5cm]
\node[initial left ,state] (0) {};
\node[state, right= 2cm of 0] (1) {};
\node[state, accepting, right= 2cm of 1] (2) {};
\path
(0) edge[loopabove, ->] node[above left] {$x < d$} (0)
(1) edge[loopabove, ->] node[above] {$\neg \phi$, $x \leq d$} (1)
(0) edge[->] node[above] {$\neg \phi$, $x \geq d$} (1)
(1) edge[->] node[above] {$x > d$} (2);
\end{tikzpicture}}
\caption{$\A''$.}
\label{fig:punc2}
\end{minipage}
\end{figure}
Based on a similar trick, we obtain the main result of this section:
$\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ already has the full expressive power of \textup{\textmd{\textsf{EMITL}}}{}.
This, together with the fact that the satisfiability and model-checking problems
for $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ are only $\mathrm{PSPACE}$-complete
(Theorem~\ref{thm:pspace}) as compared with $\mathrm{EXPSPACE}$-complete for full $\textup{\textmd{\textsf{EMITL}}}$~\cite{Ferrere18}, makes $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ a
competitive alternative to other real-time specification formalisms---while a translation from $\textup{\textmd{\textsf{EMITL}}}$ to $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$
inevitably induces at least an exponential blow-up, it can be argued that many properties of practical interest
can be written in $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ directly (e.g.,~Ex.~\ref{ex:icd}
and Ex.~\ref{ex:overtaking}).
The idea of the proof below is similar to that of~\cite[Lemma 6.3.11]{Raskin1999}
($\textup{\textmd{\textsf{MITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{MITL}}}{} are equally expressive in the continuous semantics),
but the technical details are more involved
due to automata modalities and the fact that
each event is not necessarily preceded by another one exactly $1$ time unit earlier
in a timed word;
the latter is essentially the reason why
the expressive equivalence of $\textup{\textmd{\textsf{MITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{MITL}}}{} fails to hold in the pointwise semantics.
\begin{theorem}\label{thm:expeq}
$\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{EMITL}}}{} are equally expressive over timed words.
\end{theorem}
\begin{proof}
We explain in detail below how to write the \textup{\textmd{\textsf{EMITL}}}{} formula $\A_{(c, c+1)} (\phi_1, \dots, \phi_n)$ where
$c \geq 0$, and $\phi_1, \dots, \phi_n \in \textup{\textmd{\textsf{EMITL}}}_{0, \infty}$
as an $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula; the other cases, such as $(c, c+1]$
and $[c, c+1]$, are similar.
First consider $c = 0$. If $(\rho, i) \models \A_{(0, 1)}$
for $\rho = (\sigma_1, \tau_1)(\sigma_2,\tau_2)\dots$ and $i \geq 1$, the finite word
$a_i \dots a_k$ accepted by $\A$ must be at least two letters long.
This again is enforced by $\A^1$ in \figurename~\ref{fig:cequals0}:
let $\A^2$ be the product of $\A$ and $\A^1$.
Then, let $\A^3$ be the automaton obtained from $\A^2$
by adding $\neg \nextx_{> 0} \top$ ($\nextx$ is the standard \textup{\textmd{\textsf{MITL}}}{} `next' operator~\cite{OuaWor07}) to all the transitions $\langle s, s_0^1 \rangle \rightarrow \langle s', s_0^1 \rangle$ and $\nextx_{> 0} \top$
to all the transitions $\langle s, s_0^1 \rangle \rightarrow \langle s', s_1^1 \rangle$ as conjuncts
(in doing so, extend the alphabet as necessary).
It is not hard to see that $(\rho, i) \models \A^3_{< 1}$
in the two possible situations:
(i) $\tau_{i+1} - \tau_i > 0$ and
(ii) $\tau_j - \tau_i > 0$ for some $j > i + 1$ and $\tau_\ell - \tau_i = 0$ for all $\ell$, $i < \ell < j$.
The other direction ($(\rho, i) \models \A^3_{< 1} \Rightarrow (\rho, i) \models \A_{(0, 1)}$) is straightforward.
It follows that the equivalent $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula is $\A^3_{< 1}$.
\begin{figure}[h]
\centering
\scalebox{.70}{
\begin{tikzpicture}
\begin{scope}
\draw[-, loosely dashed] (-70pt,0pt) -- (200pt,0pt);
\draw[-, very thick, loosely dotted] (-110pt,0pt) -- (-75pt,0pt);
\draw[loosely dashed] (-120pt,-30pt) -- (-120pt,10pt) node[at start, below=2mm] {$0$};
\draw[loosely dashed] (-10pt,-30pt) -- (-10pt,10pt) node[at start, below=2mm] {$c-1$};
\draw[loosely dashed] (90pt,-30pt) -- (90pt,10pt) node[at start, below=2mm] {$c$};
\draw[loosely dashed] (79pt,0pt) -- (79pt,25pt);
\draw[loosely dashed] (179pt,0pt) -- (179pt,25pt);
\draw[loosely dashed] (190pt,-30pt) -- (190pt,10pt) node[at start, below=2mm] {$c+1$};
\draw[draw=black, fill=white] (-121pt, -4pt) rectangle (-119pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (-120pt, -7pt) {$\sigma_i$};
\draw[draw=black, fill=white] (-61pt, -4pt) rectangle (-59pt, 4pt);
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=black] (53pt, -4pt) rectangle (51pt, 4pt);
\draw[draw=black, fill=white] (105pt, -4pt) rectangle (103pt, 4pt);
\draw[draw=black, fill=white] (157pt, -4pt) rectangle (155pt, 4pt);
\draw[draw=black, fill=black] (-21pt, -4pt) rectangle (-23pt, 4pt);
\draw[draw=black, fill=white] (31pt, -4pt) rectangle (29pt, 4pt);
\draw[draw=black, fill=black] (80pt, -4pt) rectangle (78pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (79pt, -7pt) {$\sigma_j$};
\draw[draw=black, fill=white] (87pt, -4pt) rectangle (85pt, 4pt);
\draw[draw=black, fill=white] (135pt, -4pt) rectangle (133pt, 4pt);
\draw[draw=black, fill=black] (180pt, -4pt) rectangle (178pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (179pt, -7pt) {$\sigma_k$};
\end{scope}
\begin{scope}
\draw[|<->|][dotted] (79pt,25pt) -- (179pt,25pt) node[midway,above] {{\scriptsize $1$}};
\end{scope}
\end{tikzpicture}
}
\caption{Case (i) in the proof of Theorem~\ref{thm:expeq}; solid boxes indicate when $\A$ accepts
the corresponding prefix of $a_i \dots a_k$.}
\label{fig:exactly1before}
\end{figure}
\begin{figure}[h]
\centering
\scalebox{.70}{
\begin{tikzpicture}
\begin{scope}
\draw[-, loosely dashed] (-70pt,0pt) -- (200pt,0pt);
\draw[-, very thick, loosely dotted] (-110pt,0pt) -- (-75pt,0pt);
\draw[loosely dashed] (-120pt,-30pt) -- (-120pt,10pt) node[at start, below=2mm] {$0$};
\draw[loosely dashed] (-10pt,-30pt) -- (-10pt,10pt) node[at start, below=2mm] {$c-1$};
\draw[loosely dashed] (90pt,-30pt) -- (90pt,10pt) node[at start, below=2mm] {$c$};
\draw[loosely dashed] (79pt,0pt) -- (79pt,25pt);
\draw[loosely dashed] (179pt,0pt) -- (179pt,25pt);
\draw[loosely dashed] (190pt,-30pt) -- (190pt,10pt) node[at start, below=2mm] {$c+1$};
\draw[draw=black, fill=white] (-121pt, -4pt) rectangle (-119pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (-120pt, -7pt) {$\sigma_i$};
\draw[draw=black, fill=white] (-61pt, -4pt) rectangle (-59pt, 4pt);
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (53pt, -4pt) rectangle (51pt, 4pt);
\draw[draw=black, fill=white] (105pt, -4pt) rectangle (103pt, 4pt);
\draw[draw=black, fill=white] (157pt, -4pt) rectangle (155pt, 4pt);
\draw[draw=black, fill=black] (-21pt, -4pt) rectangle (-23pt, 4pt);
\draw[draw=black, fill=white] (31pt, -4pt) rectangle (29pt, 4pt);
\draw[draw=black, fill=black] (87pt, -4pt) rectangle (85pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (86pt, -7pt) {$\sigma_j$};
\draw[draw=black, fill=white] (135pt, -4pt) rectangle (133pt, 4pt);
\draw[draw=black, fill=black] (180pt, -4pt) rectangle (178pt, 4pt);
\node[below, fill=white, inner sep=1mm] at (179pt, -7pt) {$\sigma_k$};
\end{scope}
\begin{scope}
\draw[|<->|][dotted] (79pt,25pt) -- (179pt,25pt) node[midway,above] {{\scriptsize $1$}};
\end{scope}
\end{tikzpicture}
}
\caption{Case (ii) in the proof of Theorem~\ref{thm:expeq}.}
\label{fig:blackright}
\end{figure}
Now consider $c > 0$.
Suppose that $(\rho, i) \models \A_{(c, c + 1)}$
for $\rho = (\sigma_1, \tau_1)(\sigma_2,\tau_2)\dots$ and $i \geq 1$,
let $k > i$ be the \emph{minimal} position such that
$\tau_k - \tau_i \in (c, c+1)$ and there exists $a_i \dots a_k \in \sem{\A}$
with $(\rho, \ell) \models \phi_{a_\ell}$ for all $\ell$, $i \leq \ell \leq k$
(since at most one of $\phi_1$, \dots, $\phi_n$ may hold at any position, we fix
$a_i \dots a_k$ below).
Consider the following cases (note that they are not mutually exclusive):
\begin{enumerate}[label=(\roman*)]
\item There exists a \emph{maximal} $j$, $i < j < k$ such that (a) $\tau_k - \tau_j = 1$
and (b) there is no $\ell$, $j < \ell < k $ such that $a_i \dots a_{\ell} \in \sem{\A}$
with $(\rho, \ell') \models \phi_{a_{\ell'}}$ for all $\ell'$, $i \leq \ell' \leq \ell$
(\figurename~\ref{fig:exactly1before}):
we take a disjunction over all possible $s \in S$ such that $\A$
reaches $s$ after reading $a_i \dots a_j$ and
enforce that $\tau_j - \tau_i \in (c - 1, c)$ (which we can, by the IH),
$\A$ reaches a final location from $s$ by reading $a_{j+1} \dots a_k$, and $\tau_k - \tau_j = 1$;
thanks to (b), the last condition, which is otherwise inexpressible in $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$,
can be expressed as a conjunction of two formulae labelled with $\leq 1$ and $\geq 1$.
To this end, we use $\B^{s, \phi}$ and $\C^s$ as defined in the proof
of Theorem~\ref{thm:eeclemitl}: we have
\[
(\rho, i) \models \phi^1 = \bigvee_{s \in S} \B^{s, \phi}_{(c-1, c)}
\]
where $\phi = \C^s_{\leq 1} \land \C^s_{\geq 1}$.
\item There exists $j$, $i < j < k$ such that $\tau_k - \tau_j < 1$,
$\tau_j - \tau_i \in (c-1, c]$, and $a_i \dots a_j \in \sem{\A}$
with $(\rho, \ell) \models \phi_{a_{\ell}}$ for all $\ell$, $i \leq \ell \leq j$ (\figurename~\ref{fig:blackright}): let $\D^s$ be the automaton obtained from $\A$
in the same way as $\C^s$ except that we do not remove outgoing transitions
from the final locations.
Regardless of whether there is an event at $\tau_k - 1$,
it is clear that every position $\ell$ with $\tau_{\ell} - \tau_i \in (c - 1, c]$ must
satisfy $\C_{(0, 1)}^{s}$ where $s$ is
the location of $\A$ after reading $a_i \dots a_{\ell}$.
We have
\[
(\rho, i) \models \phi^2 = \A_{(c-1, c]} \land \neg \bigvee_{s \in S} \B^{s, \phi}_{(c-1, c]}
\]
where $\phi = \neg \D^s_{(0, 1)}$.
\item There exists
$j$, $i < j < k$ such that $\tau_k - \tau_j < 1$,
$\tau_j - \tau_i \in (c-1, c]$, but
there is no $\ell$, $i < \ell < k$ such that (a) $\tau_{\ell} - \tau_i \in (c-1, c]$
and (b) $a_i \dots a_{\ell} \in \sem{\A}$
with $(\rho, \ell') \models \phi_{a_{\ell'}}$ for all $\ell'$, $i \leq \ell' \leq \ell$:
we have
\[
(\rho, i) \models \phi^3 = \neg \A_{(c-1, c]} \wedge \bigvee_{s \in S} \B^{s, \phi}_{(c-1, c]}
\]
where $\phi = \D^s_{(0, 1)}$.
\item There exists a \emph{maximal} $j$, $i < j < k$ such that $\tau_k - \tau_j > 1$, $\tau_j - \tau_i \in (c-1, c]$,
and there is no $\ell$, $j < \ell < k$ such that (a) $\tau_{\ell} - \tau_i \in (c-1, c]$
and (b) $a_i \dots a_{\ell} \in \sem{\A}$
with $(\rho, \ell') \models \phi_{a_{\ell'}}$ for all $\ell'$, $i \leq \ell' \leq \ell$:
observe that (provided that $s$'s are correctly instantiated to the locations $\A$ reaches
as it reads $a_i \dots a_k$)
while $\C^s_{>1}$
may hold arbitrarily often in $[\tau_i, \tau_i + c]$,
the number of positions $\ell$, $i \leq \ell \leq j$ satisfying
\begin{equation}\label{eq:count}
(\rho, \ell) \models \C^s_{>1} \wedge (\rho, \ell + 1) \models \big(\C^s_{\leq 1} \vee (\C^s_{> 1} \wedge s \in F)\big)
\end{equation}
is at most $c$
(since any two of such positions must be separated by more than $1$ time unit).
We define a family of automata modalities $\{\E^m \mid m \geq 1 \}$ such that
each location of $\E^m$ is of the form $\langle s, d \rangle$
with $s \in S$ and $1 \leq d \leq 2m$; see \figurename~\ref{fig:example-b} for an illustration.
Each transition updates the $s$-component as $\A$ would, enforces the formula labelled on the corresponding transition of $\A$
and, additionally, the formula as labelled in \figurename~\ref{fig:example-b} (with $s$ being the \emph{target} location of
the corresponding transition of $\A$).
The formula $\phi^s$ (illustrated in \figurename~\ref{fig:example-b2}), which also follows $\A$
with an $s$-component, checks that the next position either satisfies (a) $s \in F$,
or (b) $\C^s_{\leq 1}$ holds continuously until $s \in F$ eventually holds.
Let $\hat{\E}^m$ be obtained from $\E^m$ by `inlining' $\phi^s$: removing the leftmost locations of
$\phi^s$ and merging the middle locations of $\phi^s$ with the rightmost locations of $\E^m$.
Clearly, $\hat{\E}^m$ and ${\E^m}$ are equivalent if there is no constraining interval---the only
difference between them is which position is `timed'.
Now suppose that the number of positions $\ell$, $i \leq \ell \leq j$ satisfying (\ref{eq:count}) is $m$.
Since $j$ is the last of these positions, we have $(\rho, i) \models \hat{\E}^m_{< c + 1}$.
On the other hand, as there are only $m - 1$ such positions in $[\tau_i, \tau_i + c - 1]$,
we have $(\rho, i) \models \neg {\E^{m}}_{\leq c - 1}$.
By the above, we have
\[
(\rho, i) \models \phi^4 = \bigvee_{1 \leq m \leq c} \big(\hat{\E}^m_{< c + 1} \land \neg {\E^{m}}_{\leq c - 1} \big) \,.
\]
\begin{figure}
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2cm]
\node[initial left ,state] (0) {};
\node[state, right= 1.5cm of 0] (1) {};
\node[state, right= 1.5cm of 1] (2) {};
\node[state, right= 1.5cm of 2] (3) {};
\node[state, right= 1.5cm of 3] (4) {};
\node[state, accepting, right= 1.5cm of 4] (5) {};
\path
(0) edge[loopabove, ->] (0)
(2) edge[loopabove, ->] (2)
(4) edge[loopabove, ->] (4)
(0) edge[->] node[above] {$\C^s_{>1}$} (1)
(1) edge[->] node[above] {$\C^s_{\leq 1}$} (2)
(1) edge[->, bend right=30] node[below] {$\C^s_{>1} \land s \in F$} (3)
(2) edge[->] node[above] {$\C^s_{>1}$} (3)
(3) edge[->] node[above] {$\C^s_{\leq 1}$} (4)
(3) edge[->, bend right=30] node[below] {$\C^s_{>1} \land s \in F \land \phi^s$} (5)
(4) edge[->] node[above] {$\C^s_{>1} \land \phi^s$} (5)
;
\end{tikzpicture}}
\caption{An illustration of $\E^3$ in the proof of Theorem~\ref{thm:expeq}.}
\label{fig:example-b}
\end{figure}
\begin{figure}
\centering
\scalebox{.75}{\begin{tikzpicture}[node distance = 2cm]
\node[initial left ,state] (0) {};
\node[state, right= 1.5cm of 0] (1) {};
\node[state, accepting, right= 1.5cm of 1] (2) {};
\path
(1) edge[loopabove, ->] node[above] {$\C^s_{\leq 1}$} (1)
(0) edge[->] (1)
(1) edge[->] node[above] {$s \in F$} (2)
;
\end{tikzpicture}}
\caption{An illustration of $\phi^s$ in the proof of Theorem~\ref{thm:expeq}.}
\label{fig:example-b2}
\end{figure}
\item There is no event in $(\tau_i + c -1, \tau_i + c]$: We have
\[
(\rho, i) \models \phi^5 = \neg \eventually_{(c-1, c]} \top \land \phi' \,.
\]
If $c = 1$ then $\phi'$ can simply be taken as $\A_{(0, 2)}$,
which is equivalent to $\A^3_{< 2}$ by the same argument as before.
If $c > 1$, then $\phi'$ can be taken as
\[
\bigvee_{1 \leq m \leq c - 1} \big(\hat{\E}^m_{< c + 1} \land \neg {\E^{m}}_{\leq c - 2} \big) \vee \phi''
\]
where $\phi''$ is
\[
\neg \eventually_{(c-2, c-1]} \top \wedge \bigvee_{1 \leq m \leq c - 2} \big(\hat{\E}^m_{< c + 1} \land \neg {\E^{m}}_{\leq c - 3} \big) \vee \phi''' \,.
\]
Intuitively, the former part of $\phi'$ is used to handle the case when
there is (at least) an event in $(\tau_i + c-2, \tau_i + c-1]$, and
the former part of $\phi''$ is for $(\tau_i + c-3, \tau_i + c-2]$,
and so on.
\end{enumerate}
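The counting argument in case (iv)---that at most $c$ positions within a window of length $c$ can be pairwise separated by more than $1$ time unit---can be sanity-checked by brute force. The following is an illustrative sketch only; the helper \texttt{max\_separated} and its inputs are ours and not part of the construction:

```python
from itertools import combinations

def max_separated(timestamps, c, gap=1.0):
    """Size of the largest subset of `timestamps` that fits in a window
    [t0, t0 + c] (t0 = earliest point of the subset) such that consecutive
    points in the subset are separated by strictly more than `gap`."""
    for r in range(len(timestamps), 0, -1):
        for sub in combinations(sorted(timestamps), r):
            in_window = sub[-1] - sub[0] <= c
            separated = all(b - a > gap for a, b in zip(sub, sub[1:]))
            if in_window and separated:
                return r
    return 0

# With c = 3, no choice of timestamps yields more than 3 such positions:
# any k points with pairwise gaps > 1 span more than k - 1 time units,
# so k - 1 < c, i.e. k <= c -- the pigeonhole bound used in the proof.
```

For any concrete input this agrees with the bound: e.g.\ for $c = 3$ the maximum is $3$, never $4$.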
We omit the other direction as it is (more or less) straightforward.
The equivalent $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula is $\phi^1 \vee \phi^2 \vee \phi^3 \vee \phi^4 \vee \phi^5$.
\end{proof}
\section{From $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ to timed automata}\label{sec:emitl2ocata}
\paragraph{Embedding \textup{\textmd{\textsf{EMITL}}}{} formulae into \textup{\textmd{\textsf{OCATA}}}{s}}
We give a translation from a given \textup{\textmd{\textsf{EMITL}}}{} formula $\phi$ over $\AP$ (which we assume to be in negative normal form)
into an \textup{\textmd{\textsf{OCATA}}}{} $\A_\phi = \langle \Sigma_\AP, S, s_0, \atransitions, F \rangle$
such that $\sem{\A_\phi} = \sem{\phi}$.
While this mostly follows the lines of the translation for \textup{\textmd{\textsf{MTL}}}{} (and \textup{\textmd{\textsf{MITL}}}{}) in~\cite{OuaWor07, BriEst14},
it is worth noting that the
resulting \textup{\textmd{\textsf{OCATA}}}{} $\A_\phi$ is \emph{weak}~\cite{MulSao86,KupVar97}
but not necessarily \emph{very-weak}~\cite{Rohde97phd, GasOdd01} due to the presence of automata modalities.
The set of locations $S$ of $\A_\phi$
contains (i) $s^\textit{init}$;
(ii) all the locations of $\A$ for every subformula $\A_I(\phi_1, \dots, \phi_n)$;
(iii) all the locations of $\A$ for every subformula $\tilde{\A}_I(\phi_1, \dots, \phi_n)$.
The initial location $s_0$ is $s^\textit{init}$, and the final locations $F$ are all the locations
of $\A$ for every subformula $\tilde{\A}_I(\phi_1, \dots, \phi_n)$. Finally,
for each $\sigma \in \Sigma_{\AP}$, $\atransitions$ is defined inductively as follows (let $\A = \langle \Sigma^\A, S^\A, s_0^\A, \atransitions^\A, F^\A \rangle$ with $\Sigma^\A = \{1, \dots, n\}$):
\begin{itemize}
\item
$\atransitions(s^\textit{init},\sigma)=x.\atransitions(\phi,\sigma)$,
$\atransitions(\top,\sigma)=\top$, and
$\atransitions(\bot,\sigma)=\bot$;
\item $\atransitions(p,\sigma) = \top$ if $p\in\sigma$,
$\atransitions(p,\sigma)=\bot$ otherwise;
\item $\atransitions(\lnot p,\sigma) = \top$ if $p\notin \sigma$,
$\atransitions(\lnot p,\sigma)=\bot$ otherwise;
\item
$\atransitions(\phi_1\lor\phi_2,\sigma)=\atransitions(\phi_1,\sigma)\lor
\atransitions(\phi_2,\sigma)$, and
$\atransitions(\phi_1\land\phi_2,\sigma)=\atransitions(\phi_1,\sigma)\land \atransitions(\phi_2,\sigma)$;
\item
$\atransitions(\A_I(\phi_1, \dots, \phi_n),\sigma)= x.\atransitions(s_0^\A, \sigma)$;
\item
$\atransitions(s^\A,\sigma)= \bigvee_{a \in \Sigma^\A} \big( \atransitions(\phi_a , \sigma) \land \atransitions^\A[s_F^\A \leftarrow s^\A_F \lor x \in I](s^\A, a) \big)$ where $s^\A \in S^\A$ and $\atransitions^\A[s_F^\A \leftarrow s^\A_F \lor x \in I]$ is obtained from $\atransitions^\A$ by substituting every $s_F^\A \in F^\A$ with
$s_F^\A \lor x \in I$ for some subformula $\A_I(\phi_1, \dots, \phi_n)$;
\item
$\atransitions(\tilde{\A}_I(\phi_1, \dots, \phi_n),\sigma)= x.\atransitions(s_0^\A, \sigma)$;
\item
$\atransitions(s^\A,\sigma)= \bigwedge_{a \in \Sigma^\A} \big( \atransitions(\phi_a , \sigma) \lor \overline{\atransitions^\A}[s_F^\A \leftarrow s^\A_F \land x \notin I](s^\A, a) \big)$ where $s^\A \in S^\A$ and $\overline{\atransitions^\A}[s_F^\A \leftarrow s^\A_F \land x \notin I]$ is obtained from $\overline{\atransitions^\A}$ by substituting every $s_F^\A \in F^\A$ with
$s_F^\A \land x \notin I$ for some subformula $\tilde{\A}_I(\phi_1, \dots, \phi_n)$.
\end{itemize}
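The Boolean and atomic cases of $\atransitions$ above are directly executable. The sketch below implements just those cases over a simple tuple-based formula encoding; the encoding and the helper name \texttt{delta} are ours, and the automata-modality cases, which require spawning clock-carrying copies, are deliberately omitted:

```python
def delta(phi, sigma):
    """Transition function of A_phi on letter `sigma` (a set of atomic
    propositions), restricted to the propositional cases. Formulas are
    tuples: ('top',), ('bot',), ('ap', p), ('nap', p) for a negated
    proposition, ('or', f, g), ('and', f, g)."""
    kind = phi[0]
    if kind == 'top':
        return True
    if kind == 'bot':
        return False
    if kind == 'ap':          # delta(p, sigma) = T iff p in sigma
        return phi[1] in sigma
    if kind == 'nap':         # delta(not p, sigma) = T iff p not in sigma
        return phi[1] not in sigma
    if kind == 'or':          # distributes over the subformulas
        return delta(phi[1], sigma) or delta(phi[2], sigma)
    if kind == 'and':
        return delta(phi[1], sigma) and delta(phi[2], sigma)
    raise ValueError('automata modalities are not handled in this sketch')
```

On the full calculus, the `or'/`and' cases would return positive Boolean combinations over successor locations rather than truth values.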
\begin{proposition}\label{prop:emitl2ocata}
Given an \textup{\textmd{\textsf{EMITL}}}{} formula $\phi$ in negative normal form, \mbox{$\sem{\A_\phi} = \sem{\phi}$}.
\end{proposition}
We now focus on the case where $\phi$ is an $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula
and give a set of component \textup{\textmd{\textsf{TA}}}{s} whose product
`implements' the corresponding \textup{\textmd{\textsf{OCATA}}}{} $\A_\phi$.
As we will need some notions from~\cite{BriGee17},
we briefly recall them here to keep the paper self-contained.
\paragraph{Compositional removal of alternation in $\phi$}
Let $\Phi$ be the set of temporal subformulae (i.e.~whose outermost
operator is $\A_I$ or $\tilde{\A}_I$) of $\phi$. We introduce
a new atomic proposition $p_{\psi}$ for each $\psi\in \Phi$
(the \emph{trigger} for $\psi$)
and let $\AP_\Phi = \{ p_\psi \mid \psi \in \Phi\}$.
For a timed word $\rho'$ over $\Sigma_{\AP \cup \AP_\Phi}$,
we denote by $\proj_\AP(\rho')$ the timed word obtained from $\rho'$
by hiding all $p \notin \AP$ (i.e.~$p \in \AP_\Phi$).
For a timed language $\Lang$ over $\AP \cup \AP_\Phi$
we write $\proj_\AP(\Lang) = \{ \proj_\AP(\rho') \mid \rho' \in \Lang \}$.
Let $\overline{\psi}$ be the formula obtained from an $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula $\psi$ (in negative normal form) by replacing all of its
top-level temporal subformulae by their corresponding triggers,
i.e.~$\overline{\psi}$ is defined
inductively as follows (where $p\in \AP$):
\begin{itemize}
\item $\overline {\psi_1\land \psi_2} =\overline {\psi_1}\land \overline{\psi_2}$;
\item $\overline {\psi_1\lor \psi_2} =\overline {\psi_1}\lor \overline{\psi_2}$;
\item $\overline{\psi}= \psi\text{ when }\psi\text{ is }\top\text{ or } \bot\text{ or } p\text{ or } \neg p$;
\item $\overline{\psi}= p_\psi \text{ when }\psi\text{ is } \A_I(\phi_1, \dots, \phi_n) \text{ or } \tilde{\A}_I(\phi_1, \dots, \phi_n)$.
\end{itemize}
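The inductive definition of $\overline{\psi}$ amounts to a simple top-down substitution. A minimal sketch over a tuple-based formula encoding (ours, for illustration only); the mapping \texttt{triggers} from temporal subformulae to their fresh propositions $p_\psi$ is assumed to be given:

```python
def bar(psi, triggers):
    """Replace every top-level temporal subformula of `psi` by its
    trigger proposition. Formulas are tuples: ('top',), ('bot',),
    ('ap', p), ('nap', p), ('or', f, g), ('and', f, g); any other tag
    is treated as a temporal subformula (an A_I or its dual)."""
    kind = psi[0]
    if kind in ('and', 'or'):
        return (kind, bar(psi[1], triggers), bar(psi[2], triggers))
    if kind in ('top', 'bot', 'ap', 'nap'):
        return psi
    # temporal subformula: do not recurse below it, just substitute
    return ('ap', triggers[psi])
```

The result is, as noted above, a positive Boolean combination of atomic propositions.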
Note that $\overline{\psi}$ is simply a positive Boolean combination of atomic propositions.
In this way, we can turn the given $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula $\phi$ into
an equisatisfiable $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ formula $\phi'$ over $\AP \cup \AP_\Phi$:
the conjunction of $\overline{\phi}$,
\[
\bigwedge_{\{ \psi \in \Phi \mid \psi = \A_I(\phi_1, \dots, \phi_n) \}}\globally \big(p_\psi \Rightarrow \A_I(\overline{\phi_1}, \dots, \overline{\phi_n})\big) \,,
\]
and the counterparts for $\{ \psi \in \Phi \mid \psi = \tilde{\A}_I(\phi_1, \dots, \phi_n) \}$.
Finally, we construct the component \textup{\textmd{\textsf{TA}}}{s} $\C^\textit{init}$ (which accepts $\sem{\overline{\phi}}$)
and $\C^\psi$ (which accepts, say, $\sem{\globally \big(p_\psi \Rightarrow \A_I(\overline{\phi_1}, \dots, \overline{\phi_n})\big)}$) for every $\psi \in \Phi$.
The timed language of $\phi'$ is accepted by the product
$\C^\textit{init}\times \prod_{\psi\in\Phi}\C^\psi$ and, in particular, $\proj_\AP(\sem{\C^{\textit{init}}\times \prod_{\psi\in\Phi}\C^\psi}) = \sem{\A_\phi}$.
Intuitively, $p_\psi$ being $\top$ (the trigger $p_\psi$ is `\emph{pulled}')
at some position means that the \textup{\textmd{\textsf{OCATA}}}{} $\A_\phi$ spawns a copy (several copies)
of $\A$ where $\psi = \A_I$ ($\psi = \tilde{\A}_I$) at this position or,
equivalently, an obligation that $\A_I$ ($\tilde{\A}_I$) must hold
is imposed on this position.
\paragraph{Component \textup{\textmd{\textsf{TA}}}{s} for automata modalities}
As the construction of $\C^\textit{init}$ is trivial,
we only describe the component \textup{\textmd{\textsf{TA}}}{s} $\C^\psi$ for $\psi = \A_I(\phi_1, \dots, \phi_n)$ and $\psi = \tilde{\A}_I(\phi_1, \dots, \phi_n)$
where $I = [0, c]$ or $I = [c, \infty)$ for some $c \in \N_{\geq 0}$;
the other types of constraining intervals are handled similarly.
The crucial observation that allows us to bound the number of clocks needed
in $\C^\psi$ is that two or more obligations, provided that
their corresponding copies
of $\A$ are in the same location(s) at some point(s)
and $I$ is a lower or upper bound,
can be merged into a single one.
Instead of keeping track of the order of the values of its clocks,
$\C^\psi$ non-deterministically guesses how obligations
should be merged and put them into suitable sub-components accordingly.
To ensure that all obligations are satisfied, we use an extra
variable $\ell$ such that $\ell = 0$ when there is no obligation,
$\ell = 1$ when there is at least one pending obligation,
$\ell = 2$ when the pending obligations have just been
satisfied and a new obligation has just arrived, and finally
$\ell = 3$ when we have to wait for the current obligations to be satisfied
(explained below).
In all the cases below we fix $\Sigma = \Sigma_{\AP \cup \AP_\Phi}$ and $|S^\A| = m$.
We write $S^\textit{src} \xrightarrow[\vee]{\sigma} S^\textit{tgt}$,
where $S^\textit{src}$ and $S^\textit{tgt}$ are two subsets of $S^\A$, iff
$S^\textit{tgt}$ is a minimal set such that
for each $s^{\A, 1} \in S^\textit{src}$, there is a transition
$s^{\A, 1} \xrightarrow{\overline{\phi_a}} s^{\A, 2}$ (where $a \in \{1, \dots, n\}$)
of $\A$
with $\sigma \models \overline{\phi_a}$ and $s^{\A, 2} \in S^\textit{tgt}$.
Similarly, we write $S^\textit{src} \xrightarrow[\wedge]{\sigma} S^\textit{tgt}$ iff
$S^\textit{tgt}$ is a minimal set such that
for each $s^{\A, 1} \in S^\textit{src}$ and each transition
$s^{\A, 1} \xrightarrow{\overline{\phi_a}} s^{\A, 2}$ (where $a \in \{1, \dots, n\}$) of $\A$, either $s^{\A, 2} \in S^\textit{tgt}$ or $\sigma \models \overline{\phi_a}$.
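The relation $S^\textit{src} \xrightarrow[\vee]{\sigma} S^\textit{tgt}$ can be computed by choosing, for each source location, one enabled successor and keeping only the inclusion-minimal unions of such choices. A sketch (for readability, guards are simplified to sets of propositions that must all hold, standing in for the positive Boolean formulae $\overline{\phi_a}$):

```python
from itertools import product

def vee_successors(src, trans, sigma):
    """All minimal target sets S_tgt with S_src -->_v^sigma S_tgt.
    `trans` maps a location to a list of (guard, target) pairs; a guard
    (a set of propositions) is satisfied iff it is a subset of sigma."""
    enabled = []
    for s in src:
        succs = {t for guard, t in trans.get(s, []) if guard <= sigma}
        if not succs:
            return []  # some source location has no enabled transition
        enabled.append(succs)
    # every choice of one enabled successor per source gives a candidate
    candidates = {frozenset(choice) for choice in product(*enabled)}
    # keep only the candidates that are minimal w.r.t. set inclusion
    return [c for c in candidates if not any(d < c for d in candidates)]
```

The $\xrightarrow[\wedge]{\sigma}$ variant is analogous, with successors collected over \emph{all} transitions whose guard is violated by $\sigma$.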
\paragraph{$\psi = \A_{\leq c}$}
Let $\C^\psi = \langle \Sigma, S, s_0, X, \transitions, \F \rangle$ be defined as follows
(to simplify the presentation, in this case we assume that $\A$ only accepts words of length $\geq 2$):
\begin{itemize}
\item Each location $s \in S$ is of the form $\langle \ell_1, S_1, \dots, \ell_m, S_m \rangle$ where $\ell_j \in \{0, 1, 2\}$ and $S_j \subseteq S^\A$ for all $j \in \{1, \dots, m\}$; intuitively, $\langle \ell_j, S_j \rangle$ can be seen as a location
of the sub-component $C^\psi_j$;
\item $s_0 = \langle 0, \emptyset, \dots, 0, \emptyset \rangle$;
\item $X = \{x_1, \dots, x_m\}$;
\item $\F = \{F_1, \dots, F_m\}$ where $F_j$ contains all locations with $\ell_j = 0$ or $\ell_j = 2$;
\item $\transitions$ is obtained by synchronising
the transitions
$\langle \ell, S \rangle \xrightarrow{\sigma, g, \lambda} \langle \ell', S' \rangle$
of individual sub-components (we omit the subscripts for brevity):
\begin{itemize}
\item $p_\psi \notin \sigma$; $\ell' = 0$, $\ell = 0$; $S' = \emptyset$, $S = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 1$, $\ell \in \{1, 2\}$; $S \xrightarrow[\vee]{\sigma} S'$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 0$, $\ell \in \{1, 2\}$; $S' = \emptyset$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$; $g = x \leq c$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\ell' = 1$, $\ell = 0$; $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'$, $S = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\ell' = 1$, $\ell \in \{1, 2\}$; $S'$ is the union of
some $S''$ such that $S \xrightarrow[\vee]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'''$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\ell' = 2$, $\ell \in \{1, 2\}$; $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$; $g = x \leq c$; $\lambda = \{x \}$.
\end{itemize}
If $p_\psi \in \sigma$, then exactly
one of the sub-components takes a `$p_\psi \in \sigma$' transition
while the others proceed as if $p_\psi \notin \sigma$.
\end{itemize}
\begin{proposition}\label{prop:firstcase}
$\sem{\C^\psi} = \sem{\globally \big(p_\psi \Rightarrow \A_{\leq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)}$.
\end{proposition}
\begin{proof}[Proof sketch.]
Let $\psi' = \globally \big(p_\psi \Rightarrow \A_{\leq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)$
and $\A_{\psi'}$ be the equivalent \textup{\textmd{\textsf{OCATA}}}{} obtained via Proposition~\ref{prop:emitl2ocata}.
If a timed word $\rho = (\sigma_i, \tau_i)_{i \geq 1}$ satisfies $\psi'$, there must be an accepting run
$G = \langle V, \rightarrow \rangle$ of $\A_{\psi'}$ on $\rho$;
in particular, a copy of $\A$ is spawned whenever $p_\psi$ holds.
Now consider each `level'
$L_i = \{ (s, v) \mid (s, v, i) \in V \}$
of $G$ in the increasing order of $i$.
If $|\{ (s^\A, v) \mid (s^\A, v) \in L_i \}| \leq 1$ for every $s^\A \in S^\A$,
in $\C^\psi$ we simply put each corresponding obligation into an unused sub-component (with $\ell = 0$ and $S = \emptyset$)
when it arrives, i.e.~when $p_\psi$ holds.
If $|\{ (s^\A, v) \mid (s^\A, v) \in L_i \}| > 1$ for some $s^\A \in S^\A$, since
the constraining interval $[0, c]$ is \emph{downward closed}, the DAG
obtained from $G$ by replacing all the subtrees rooted at nodes $(s^\A, v, i)$
with $(s^\A, v^\textit{max}, i)$,
where $v^\textit{max} = \max \{ v \mid (s^\A, v) \in L_i \}$,
is still an accepting run of $\A_{\psi'}$; in $\C^\psi$, this amounts to putting
the obligations that correspond to nodes $(s^\A, v, i)$ into the sub-component
that holds the (oldest) obligation that corresponds to $(s^\A, v^\textit{max}, i)$.
We do the same for all such $s^\A$, obtain $G'$, and start over from $i + 1$.
In this way, we can readily construct an accepting run of $\C^\psi$ on $\rho$.
The other direction obviously holds as each sub-component $\C^\psi_j$
does not reset its associated clock $x_j$ when $p_\psi \in \sigma$ and $\ell_j \in \{1, 2\}$,
unless the (only remaining) obligation in $S_j$ is fulfilled right away.
In other words, $\C^\psi_j$ adds to $S_j$ an obligation that is at least as strong,
without weakening the existing ones in $S_j$.
\end{proof}
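The merging step in the proof---for the downward-closed interval $[0, c]$, keep only the copy with the largest clock value per location---can be illustrated as follows (a sketch; the representation of obligations as (location, clock) pairs is ours):

```python
def merge_downward(obligations):
    """Merge obligations for a downward-closed interval [0, c]: among all
    copies of the automaton in the same location, keep only the oldest
    one, i.e. the one with the largest clock value. `obligations` is a
    list of (location, clock) pairs; returns the merged list, sorted."""
    by_loc = {}
    for loc, clock in obligations:
        by_loc[loc] = max(by_loc.get(loc, clock), clock)
    return sorted(by_loc.items())
```

Soundness rests on the observation that if the copy with the largest clock value still satisfies $x \leq c$ when the obligation is fulfilled, then so do all younger copies in the same location; for upward-closed intervals the dual merge (keeping the minimum) applies.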
\paragraph{$\psi = \A_{\geq c}$}
Let $\C^\psi = \langle \Sigma, S, s_0, X, \transitions, \F \rangle$ be defined as follows:
\begin{itemize}
\item Each location $s \in S$ is of the form $\langle \ell_1, S_1, T_1, \dots, \ell_m, S_m, T_m \rangle$ where $\ell_j \in \{0, 1, 2, 3\}$ and $S_j, T_j \subseteq S^\A$ for all $j \in \{1, \dots, m\}$; intuitively, $\langle \ell_j, S_j, T_j \rangle$ can be seen as a location
of the sub-component $C^\psi_j$;
\item $s_0 = \langle 0, \emptyset, \emptyset, \dots, 0, \emptyset, \emptyset \rangle$;
\item $X = \{x_1, \dots, x_m\}$;
\item $\F = \{F_1, \dots, F_m\}$ where $F_j$ contains all locations with $\ell_j = 0$ or $\ell_j = 2$;
\item $\transitions$ is obtained by synchronising (in the same way as before)
$\langle \ell, S, T \rangle \xrightarrow{\sigma, g, \lambda} \langle \ell', S', T' \rangle$
of individual sub-components:
\begin{itemize}
\item $p_\psi \notin \sigma$; $\ell' = 0$, $\ell = 0$; $S' = \emptyset$, $S = \emptyset$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 1$, $\ell \in \{1, 2\}$; $S \xrightarrow[\vee]{\sigma} S'$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 3$, $\ell = 3$; $S \xrightarrow[\vee]{\sigma} S'$, $T \xrightarrow[\vee]{\sigma} T'$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 0$, $\ell \in \{1, 2\}$; $S' = \emptyset$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$, $T' = \emptyset$, $T = \emptyset$; $g = x \geq c$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $\ell' = 2$, $\ell = 3$; $T \xrightarrow[\vee]{\sigma} S'$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$, $T' = \emptyset$; $g = x \geq c$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\ell' = 1$, $\ell = 0$; $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'$, $S = \emptyset$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\ell' = 1$, $\ell \in \{1, 2\}$; $S'$ is the union of
some $S''$ such that $S \xrightarrow[\vee]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'''$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\ell' = 3$, $\ell = 1$; $S \xrightarrow[\vee]{\sigma} S'$, $\{s_0^\A\} \xrightarrow[\vee]{\sigma} T'$, $T = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\ell' = 3$, $\ell = 3$; $S \xrightarrow[\vee]{\sigma} S'$, $T'$ is the union of some $T''$
such that $T \xrightarrow[\vee]{\sigma} T''$ and $T'''$ with $\{s_0^\A\} \xrightarrow[\vee]{\sigma} T'''$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\ell' = 2$, $\ell \in \{1, 2\}$; $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$, $T' = \emptyset$, $T = \emptyset$; $g = x \geq c$; $\lambda = \{x \}$.
\item $p_\psi \in \sigma$; $\ell' = 2$, $\ell = 3$; $S'$ is the union of some $S''$ such that $T \xrightarrow[\vee]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\vee]{\sigma} S'''$, $S = \{s^\A\}$, $s^\A_F \models \atransitions(s^\A, \sigma)$ for some $s^\A_F \in F^\A$, $T' = \emptyset$; $g = x \geq c$; $\lambda = \{ x \}$.
\end{itemize}
\end{itemize}
\begin{proposition}
$\sem{\C^\psi} = \sem{\globally \big(p_\psi \Rightarrow \A_{\geq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)}$.
\end{proposition}
\begin{proof}[Proof sketch.]
Similar to the proof of Proposition~\ref{prop:firstcase}, but since
$[c, \infty)$ is \emph{upward closed}, we
replace all the subtrees rooted at nodes $(s^\A, v, i)$
with $(s^\A, v^\textit{min}, i)$,
where $v^\textit{min} = \min \{ v \mid (s^\A, v) \in L_i \}$;
in $\C^\psi$, we still put the obligations that correspond to nodes $(s^\A, v, i)$ into the sub-component
that holds the (newest) obligation that corresponds to $(s^\A, v^\textit{min}, i)$.
There is, however, a potential issue: since we reset $x_j$ whenever the trigger $p_\psi$ is pulled
and $\C^\psi_j$ is chosen,
it might be the case that $x_j$ never reaches $c$, i.e.~the satisfaction of the obligations in $S_j$ are delayed indefinitely.
Following~\cite{BriGee17}, we solve this by using locations with $\ell_j = 3$ such that, when entered,
we stop resetting $x_j$ and put the new obligations into $T_j$ instead;
when the obligations in $S_j$ are fulfilled, we move the obligations in $T_j$ to $S_j$
and reset $x_j$.
The other direction obviously holds as each sub-component $\C^\psi_j$
resets $x_j$ when $p_\psi \in \sigma$ and $\ell_j \in \{1, 2\}$,
unless it goes from $\ell_j = 1$ to $\ell_j = 3$.
In other words, $\C^\psi_j$ adds the new obligation to $S_j$
while strengthening the existing ones in $S_j$.
\end{proof}
\paragraph{$\psi = \tilde{\A}_{\leq c}$}
Let $\C^\psi = \langle \Sigma, S, s_0, X, \transitions, \F \rangle$ be defined as follows:
\begin{itemize}
\item Each location $s \in S$ is of the form $\langle S_1, \dots, S_{m+1} \rangle$ where $S_j \subseteq S^\A$
for all $j \in \{1, \dots, m+1\}$; intuitively, $S_j$ can be seen as a location of the sub-component $C^\psi_j$;
\item $s_0 = \langle \emptyset, \dots, \emptyset \rangle$;
\item $X = \{x_1, \dots, x_{m+1}\}$;
\item $\F = \emptyset$, i.e.~any run is accepting;
\item $\transitions$ is obtained by synchronising (in the same way as before)
$S \xrightarrow{\sigma, g, \lambda} S'$
of individual sub-components:
\begin{itemize}
\item $p_\psi \notin \sigma$; $S' = \emptyset$, $S = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S \xrightarrow[\wedge]{\sigma} S'$, $S' \cap F^\A = \emptyset$; $g = x \leq c$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S' = \emptyset$; $g = x > c$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'$, $S' \cap F^\A = \emptyset$, $S = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $S'$ is the union of
some $S''$ such that $S \xrightarrow[\wedge]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'''$, $S' \cap F^\A = \emptyset$; $g = x \leq c$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'$, $S' \cap F^\A = \emptyset$; $g = x > c$; $\lambda = \{x \}$.
\end{itemize}
\end{itemize}
\begin{proposition}\label{prop:thirdcase}
$\sem{\C^\psi} = \sem{\globally \big(p_\psi \Rightarrow \tilde{\A}_{\leq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)}$.
\end{proposition}
\begin{proof}[Proof sketch.]
Let $\psi' = \globally \big(p_\psi \Rightarrow \tilde{\A}_{\leq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)$
and $\A_{\psi'}$ be the equivalent \textup{\textmd{\textsf{OCATA}}}{} obtained via Proposition~\ref{prop:emitl2ocata}.
Consider each level $L_i = \{ (s, v) \mid (s, v, i) \in V \}$
of an accepting run $G = \langle V, \rightarrow \rangle$ of $\A_{\psi'}$ on $\rho = (\sigma_i, \tau_i)_{i \geq 1}$
in the increasing order of $i$. In $\C^\psi$, whenever the trigger $p_\psi$ is pulled,
we attempt to put the corresponding obligation into an unused sub-component (with $S = \emptyset$)
or a sub-component that can be cleared (with $x > c$);
if this is not possible, since for every $s^\A \in S^\A$ all the subtrees rooted at nodes
$(s^\A, v, i)$ can be replaced with the subtree rooted at
$(s^\A, v^\textit{min}, i)$ where $v^\textit{min} = \min \{ v \mid (s^\A, v) \in L_i \}$,
at least one sub-component $C^\psi_j$ becomes redundant, i.e.~all of its obligations are implied by
the other sub-components $C^\psi_k$, $k \neq j$. A consequence
is that the obligations in the sub-component $C^\psi_k$ with the minimal non-negative
value of $x_j - x_k$ can be merged with the obligations in $C^\psi_j$, freeing up a sub-component for
the current incoming obligation.
This can be repeated to construct an accepting run of $\C^\psi$ on $\rho$.
The other direction holds as each $C^\psi_j$ adds the new obligation to $S_j$
while strengthening the existing obligations in $S_j$.
\end{proof}
\paragraph{$\psi = \tilde{\A}_{\geq c}$}
Let $\C^\psi = \langle \Sigma, S, s_0, X, \transitions, \F \rangle$ be defined as follows
(for simplicity, assume that $c > 0$):
\begin{itemize}
\item Each location $s \in S$ is of the form $\langle S_1, T_1, \dots, S_{m+1}, T_{m+1} \rangle$ where $S_j, T_j \subseteq S^\A$ for all $j \in \{1, \dots, m + 1\}$; intuitively, $\langle S_j, T_j \rangle$ can be seen as a location
of the sub-component $C^\psi_j$;
\item $s_0 = \langle \emptyset, \emptyset, \dots, \emptyset, \emptyset \rangle$;
\item $X = \{x_1, \dots, x_{m+1}\}$;
\item $\F = \emptyset$, i.e.~any run is accepting;
\item $\transitions$ is obtained by synchronising (in the same way as before)
$\langle S, T \rangle \xrightarrow{\sigma, g, \lambda} \langle S', T' \rangle$
of individual sub-components:
\begin{itemize}
\item $p_\psi \notin \sigma$; $S' = \emptyset$, $S = \emptyset$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S' = \emptyset$, $S = \emptyset$, $T \xrightarrow[\wedge]{\sigma} T'$, $T' \cap F^\A = \emptyset$; $g = \top$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S \xrightarrow[\wedge]{\sigma} S'$, $T' = \emptyset$, $T = \emptyset$; $g = x < c$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S \xrightarrow[\wedge]{\sigma} S'$, $T \xrightarrow[\wedge]{\sigma} T'$, $T' \cap F^\A = \emptyset$; $g = x < c$; $\lambda = \emptyset$.
\item $p_\psi \notin \sigma$; $S' = \emptyset$, $T'$ is the union of some $T''$ such that $S \xrightarrow[\wedge]{\sigma} T''$
and $T'''$ with $T \xrightarrow[\wedge]{\sigma} T'''$, $T' \cap F^\A = \emptyset$; $g = x \geq c$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'$, $S = \emptyset$, $T' = \emptyset$, $T = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'$, $S = \emptyset$, $T \xrightarrow[\wedge]{\sigma} T'$, $T' \cap F^\A = \emptyset$; $g = \top$; $\lambda = \{ x \}$.
\item $p_\psi \in \sigma$; $S'$ is the union of
some $S''$ such that $S \xrightarrow[\wedge]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'''$, $T' = \emptyset$, $T = \emptyset$; $g = x < c$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $S'$ is the union of
some $S''$ such that $S \xrightarrow[\wedge]{\sigma} S''$ and $S'''$ with $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'''$,
$T \xrightarrow[\wedge]{\sigma} T'$, $T' \cap F^\A = \emptyset$; $g = x < c$; $\lambda = \emptyset$.
\item $p_\psi \in \sigma$; $\{s_0^\A\} \xrightarrow[\wedge]{\sigma} S'$, $T'$ is the union of some $T''$ such that $S \xrightarrow[\wedge]{\sigma} T''$
and $T'''$ with $T \xrightarrow[\wedge]{\sigma} T'''$, $T' \cap F^\A = \emptyset$; $g = x \geq c$; $\lambda = \{x \}$.
\end{itemize}
\end{itemize}
\begin{proposition}
$\sem{\C^\psi} = \sem{\globally \big(p_\psi \Rightarrow \tilde{\A}_{\geq c}(\overline{\phi_1}, \dots, \overline{\phi_n})\big)}$.
\end{proposition}
\begin{proof}[Proof sketch.]
As in the proof of Proposition~\ref{prop:thirdcase}, whenever $p_\psi$ is pulled in $C^\psi$,
we attempt to put the corresponding obligation into an unused sub-component (with $S = \emptyset$)
or a sub-component that can be cleared (if $x > c$, we move the obligations in $S$ to $T$ and let them remain there).
If this is not possible, since for every $s^\A \in S^\A$ all the subtrees rooted at nodes
$(s^\A, v, i)$ can be replaced with the subtree rooted at
$(s^\A, v^\textit{max}, i)$ where $v^\textit{max} = \max \{ v \mid (s^\A, v) \in L_i \}$,
some $C^\psi_j$ becomes redundant, and the obligations in the sub-component $C^\psi_k$ with the minimal non-negative
value of $x_k - x_j$ can be merged with the obligations in $C^\psi_j$, freeing up a sub-component for
the current incoming obligation.
This can be repeated to construct an accepting run of $\C^\psi$ on $\rho$.
The other direction holds as each $C^\psi_j$
adds an obligation that is at least as strong to $S_j$
without weakening the existing obligations in $S_j$.
\end{proof}
Finally, thanks to the fact that each location of $C^\psi$
can be represented using space polynomial in the size of $\A$,
and the product $\C^\textit{init}\times \prod_{\psi\in\Phi}\C^\psi$ need not be constructed
explicitly, we can state the main result of this section.
\begin{theorem}\label{thm:pspace}
The satisfiability and model-checking problems for $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ over timed words
are $\mathrm{PSPACE}$-complete.
\end{theorem}
\begin{corollary}\label{cor:eeclpspace}
The satisfiability and model-checking problems for $\textup{\textmd{\textsf{EECL}}}$ over timed words
are $\mathrm{PSPACE}$-complete.
\end{corollary}
\section{Conclusion}
It is shown that $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ and \textup{\textmd{\textsf{EECL}}}{} are already as expressive as
$\textup{\textmd{\textsf{EMITL}}}$ over timed words, a somewhat unexpected yet very pleasant result.
We also provided a compositional construction
from $\textup{\textmd{\textsf{EMITL}}}_{0, \infty}$ to diagonal-free \textup{\textmd{\textsf{TA}}}{s}
based on one-clock alternating timed automata (\textup{\textmd{\textsf{OCATA}}}{s});
this allows satisfiability and model checking
based on existing algorithmic back ends for \textup{\textmd{\textsf{TA}}}{s}.
The natural next step would be to implement the construction
and evaluate its performance on real-world use cases.
Another possible future direction is to investigate whether
similar techniques can be used to handle full \textup{\textmd{\textsf{EMITL}}}{} or
larger fragments of \textup{\textmd{\textsf{OCATA}}}{s} (like~\cite{Ferrere18}).
\begin{acks}
The author would like to thank
Thomas Brihaye, Chih-Hong Cheng, Thomas Ferr\`{e}re, Gilles Geeraerts, Timothy M. Jones, Arthur Milchior, and Benjamin Monmege
for their help and fruitful discussions.
The author would also like to thank the anonymous reviewers for their comments. This work is supported by the \grantsponsor{gsidepsrc}{Engineering and Physical Sciences Research Council (EPSRC)}{https://epsrc.ukri.org/}
through grant \grantnum{gsidepsrc}{EP/P020011/1} and
(partially) by \grantsponsor{gsidfrsfnrs}{F.R.S.-FNRS}{http://www.fnrs.be/} PDR
grant \grantnum{gsidfrsfnrs}{SyVeRLo}.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
{\bf
Quantum kinetically constrained models have recently attracted significant attention due to their anomalous dynamics and thermalization. In this work, we introduce a hitherto unexplored family of kinetically constrained models featuring a conserved particle number and strong inversion-symmetry breaking due to facilitated hopping. We demonstrate that these models provide a generic example of so-called quantum Hilbert space fragmentation, which is manifested in disconnected sectors of the Hilbert space that are not apparent in the computational basis. Quantum Hilbert space fragmentation leads to a number of eigenstates, exponential in system size, with exactly zero entanglement entropy across several bipartite cuts. These eigenstates can be probed dynamically using quenches from simple initial product states. In addition, we study the particle spreading under unitary dynamics launched from the domain wall state, and find faster than diffusive dynamics at high particle densities, which crosses over into logarithmically slow relaxation at smaller densities. Using a classically simulable cellular automaton, we reproduce the logarithmic dynamics observed in the quantum case. Our work suggests that particle-conserving constrained models with inversion symmetry breaking realize so far unexplored universality classes of dynamics, inviting further theoretical and experimental study.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:Intro}
In recent years, kinetically constrained models, originally introduced to describe classical glasses~\cite{Ritort2003,Garrahan2007,Fisher2004}, have received considerable attention in the context of non-equilibrium quantum dynamics~\cite{Garrahan2014,Garrahan2018a,Aidelsburger2021,Vasseur2021,Marino2022}. In analogy with their classical counterparts, they are characterized by unusual dynamical properties, including slow transport~\cite{Garrahan2018,Knap2020,Morningstar2020,Vasseur2021,Yang2022,Knap2022}, localization~\cite{Pancotti2020,Nandkishore2020,Pollmann2019} and fractonic excitations~\cite{Nandkishore2019a,Nandkishore2019c}. Additionally, in the quantum realm, other interesting phenomena have been observed, such as Hilbert space fragmentation~\cite{Nandkishore2020,Pollmann2020a,Pollmann2020b,Iadecola2020,Zadnik2021a,Zadnik2021b,Sen2021,Pozsgay2021} and quantum many-body scars~\cite{Michailidis2018,Choi2019,Papic2020,Iadecola2020a,Tamura2022}.
Among the many possible types of constraints, one can distinguish models that are inversion symmetric from those that break inversion symmetry. Among the latter models, the so-called quantum East model~\cite{Garrahan2015,Pancotti2020,Lazarides2020,Marino2022,Garrahan2022,Eisinger1991} where spin dynamics of a given site is facilitated by the presence of a particular spin configuration \emph{on the left} represents one of the most studied examples. The quantum East model has been shown to host a localization-delocalization transition in its ground state~\cite{Pancotti2020}, which allows the approximate construction of excited eigenstates in matrix product state form. Transport in particle-conserving analogues of the East model was recently investigated through the analysis of the dynamics of infinite-temperature correlations, revealing subdiffusive behavior. A similar result has also been observed in spin-$1$ projector Hamiltonians~\cite{Pal2022}.
The interplay of particle conservation and kinetic constraints that break inversion symmetry opens several interesting avenues for further research. First, the phenomenon of so-called Hilbert space fragmentation, known to occur in constrained models and characterized by the emergence of exponentially many disconnected subsectors of the Hilbert space, is expected to be modified: the additional $U(1)$ symmetry should influence fragmentation beyond the picture presented in previously studied models~\cite{Garrahan2015,Pancotti2020,Marino2022}. Second, the presence of a conserved charge allows the study of transport~\cite{Vasseur2021,Yang2022,Knap2022}. While transport without restriction to a particular sector of fragmented Hilbert space results in slow subdiffusive dynamics~\cite{Vasseur2021,Yang2022}, a recent work~\cite{Ljubotina2022} demonstrated that a restriction to a particular sector of fragmented Hilbert space can give rise to superdiffusion. This motivates the study of transport in the particle conserving East model restricted to a particular sector of the Hilbert space.
In this work, we investigate a generalized East model, consisting of hard-core bosons with constrained hopping. The constraint prevents hopping in the absence of bosons on a few preceding sites \emph{to the left}. The chiral nature of such facilitated hopping strongly breaks inversion symmetry, akin to the conventional East model, while additionally featuring the conservation of the total number of bosons. Our results show that combining charge conservation and the breaking of inversion symmetry yields new interesting transport phenomena.
Specifically, we characterize the proposed generalized East model using its eigenstate properties and dynamics.
The detailed study of the eigenstates reveals so-called quantum Hilbert space fragmentation, so far reported only in a few other models~\cite{Sen2021,Motrunic2022}. The quantum fragmentation we observe in our model leads to the existence of eigenstates that have zero entanglement along one or several bipartite cuts. The number of these low-entanglement eigenstates increases exponentially with system size. We find that these unusual eigenstates can be constructed recursively, relying on special eigenstates of small chains that are determined analytically. Thus the particle-conserving East model provides an example of \emph{recursive quantum} Hilbert space fragmentation.
The study of the dynamics of the particle-conserving East model reveals that the weakly entangled eigenstates present in the spectrum can be probed by quenches from simple product states. In addition, the dynamics from a generic domain wall initial state reveals two distinct transport regimes. At short times the dynamics is superdiffusive, whereas at longer times the constraint leads to logarithmically slow spreading. We recover the logarithmically slow dynamics within a classically simulable cellular automaton that has the same features as the Hamiltonian model. In contrast, the early-time dynamical exponent differs between the quantum Hamiltonian dynamics and the cellular automaton version. These results suggest that constrained models with inversion symmetry breaking and particle conservation may realize a new universality class of dynamics. This invites the systematic study of such models using large-scale numerical methods and the development of a hydrodynamic description of transport in such systems.
The remainder of the paper is organized as follows. In Section~\ref{Sec:Model} we introduce the Hamiltonian of the particle-conserving East model and explain the effect of the constraint. We then investigate the nature of the Hilbert space fragmentation and of the eigenstates in Section~\ref{Sec:Eigenstates}. In Section~\ref{Sec:Dyn} we investigate the dynamical properties of the system, showing similarities in the long-time behavior among the quantum dynamics and the classical cellular automaton. Finally, in Section~\ref{Sec:Disc}, we conclude by presenting a summary of our work and proposing possible future directions.
\section{Family of particle-conserving East models}\label{Sec:Model}
We introduce a family of particle conserving Hamiltonians inspired by the kinetically constrained East model in one dimension. The East model, studied both in the classical~\cite{Eisinger1991,Ritort2003} and quantum~\cite{Garrahan2015,Pancotti2020,Garrahan2022} cases, features a constraint that strongly violates inversion symmetry: a given spin is able to flip only if its \emph{left} neighbor is in the up ($\uparrow$) state. A natural implementation of such a constrained kinematic term in the particle-conserving case is a hopping process \emph{facilitated} by the presence of other particles on the left. The simplest example of such a model is provided by the following Hamiltonian operating on a chain of hard-core bosons,
\begin{equation}
\label{Eq:Hr1}
\hat{H}_{r=1} = \sum_{i=2}^{L-1} \hat n_{i-1}\bigr(\hat{c}^\dagger_{i}\hat{c}_{i+1}+\hat{c}^\dagger_{i+1}\hat{c}_{i}\bigr),
\end{equation}
where the operator $\hat n_i =\hat{c}^\dagger_{i}\hat{c}_{i}$ is a projector onto the occupied state of site $i$. We assume open boundary conditions here and throughout this work, and typically initialize, without loss of generality, the first site as being occupied by a frozen particle. All sites to the left of the leftmost particle, in fact, cannot be occupied, hence they are not relevant to the behavior of the system.
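To make the constrained kinetic term concrete, the following minimal sketch (illustrative only; the helper name is ours, not from this work) builds the Hamiltonian of Eq.~(\ref{Eq:Hr1}) as a dense matrix in the full basis of hard-core occupation patterns, with site $1$ stored as the first tuple entry:

```python
import numpy as np
from itertools import product

def build_H_r1(L):
    """Dense matrix of the range-1 particle-conserving East model:
    hopping between sites i and i+1 facilitated by a particle on i-1
    (0-indexed here; the total particle number is conserved)."""
    basis = list(product((0, 1), repeat=L))          # hard-core occupations
    index = {cfg: k for k, cfg in enumerate(basis)}
    H = np.zeros((2 ** L, 2 ** L))
    for cfg in basis:
        for i in range(1, L - 1):
            # facilitator on the left, and an occupied/empty pair to swap
            if cfg[i - 1] == 1 and cfg[i] != cfg[i + 1]:
                new = list(cfg)
                new[i], new[i + 1] = new[i + 1], new[i]
                H[index[tuple(new)], index[cfg]] += 1.0
    return H
```

The column of $\ket{\bullet\circ\circ\circ}$ vanishes identically, reflecting the frozen leftmost particle, while $\ket{\bullet\bullet\circ\circ}$ couples to $\ket{\bullet\circ\bullet\circ}$ with unit amplitude.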
The Hamiltonian~(\ref{Eq:Hr1}) implements hopping facilitated by the \emph{nearest neighbor} particle on the left, hence we refer to it as the range-1, $r=1$, particle conserving East model. A natural extension of this model would be hopping facilitated by the nearest \emph{or} next nearest neighbor, which reads:
\begin{equation}
\label{Eq:Hr=2}
\hat{H}_{2} = \sum_{i=2}^{L-1}(\hat{n}_{i-2}+\hat{n}_{i-1}-\hat{n}_{i-2}\hat{n}_{i-1})(\hat{c}^\dagger_{i}\hat{c}_{i+1} + \text{H.c.}),
\end{equation}
where we treat the operator $\hat{n}_{i=0}=0$ as being identically zero. Note, that in this Hamiltonian we use the same hopping strength irrespective if the facilitating particle is located on the nearest neighbor or next nearest neighbor site, however this condition may be relaxed. Examples of range-1, $\hat H_1$, and range-2, $\hat H_2$, particle conserving East models can be further generalized to arbitrary range $r$ as
\begin{eqnarray}
\label{Eq:Hr}
\hat{H}_r &=& \sum_{i=r+1}^{L-1} \hat{\mathcal{K}}_{i,r}\bigr(\hat{c}^\dagger_{i+1}\hat{c}_i+\text{H.c.}\bigr),
\\
\label{Eq:Kr}
\hat{\mathcal{K}}_{i,r} &=& \sum_{\ell=1}^{r}t_\ell\hat{\mathcal{P}}_{i,\ell},
\end{eqnarray}
where the operator $\hat{\mathcal{K}}_{i,r}$ implements a range-$r$ constraint using projectors on the configurations with $\hat{n}_{i-\ell}=1$ and the region $[i-\ell+1,i-1]$ empty, $\hat{\mathcal{P}}_{i,\ell} = \hat{n}_{i-\ell} \prod_{j=i-\ell+1}^{i-1} (1-\hat{n}_j)$. The coefficients $t_\ell$ correspond to the amplitude of the hopping facilitated by the particle located $\ell$-sites on the left. The Hamiltonian~$\hat{H}_2$ in Eq.~(\ref{Eq:Hr=2}) corresponds to the particular case when all $t_\ell=1$.
Models with similar facilitated hopping terms were considered in the literature earlier.
In particular a pair hopping $\bullet\bullet\circ\leftrightarrow\circ\bullet\bullet$ was introduced in~\cite{DeRoeck2016} and later used in~\cite{Brighi2020} to probe many-body mobility edges, and shown to be integrable in Ref.~\cite{Pozsgay2021}.
In~\cite{Bahovadinov2022} a similar constrained hopping term was shown to arise from the Jordan-Wigner transformation of a next nearest neighbor XY spin chain.
Another constrained model recently studied is the so-called \textit{folded} XXZ~\cite{Pollmann2019,Iadecola2020}, where the $\Delta\to \infty$ limit of the XXZ chain is considered, leading to integrable dynamics~\cite{Zadnik2021a,Zadnik2021b}.
The key difference in our work, compared to the previous literature, consists of having a chiral kinetic term, whereas in the mentioned works symmetric constraints are considered.
\begin{figure}[t]
\centering
\includegraphics[width=.65\columnwidth]{fig1.pdf}
\caption{\label{Fig:cartoon}
Illustration of constrained hopping in the range-2 particle conserving East model. }
\end{figure}
Hamiltonians $\hat H_r$ for all values of $r$ feature $U(1)$ symmetry related to the conservation of total boson number, justifying the name of particle-conserving East models. In this work we mostly focus on the case of $r=2$ with homogeneous hopping parameters $t_\ell=1$, as written in Eq.~(\ref{Eq:Hr=2}).
We discuss the generality of our results with respect to the choice of hopping strengths and range of constraint in Appendices~\ref{App:generic r} and~\ref{App:generic dyn}.
A major feature of this family of models is Hilbert space fragmentation, which is known to affect spectral and dynamical properties.
As such we begin our investigation by looking into the nature of Hilbert space fragmentation in these models in Section~\ref{Sec:Eigenstates}, where we highlight the generality of our results, by formulating them for a general range $r$ and show examples for $r=2$.
\section{Hilbert space fragmentation and eigenstates}\label{Sec:Eigenstates}%
In this Section we focus on the phenomenon of Hilbert space fragmentation in the particle-conserving East models introduced above. First, we discuss the block structure of the Hamiltonian in the product state basis --- known as a classical Hilbert space fragmentation --- and define the largest connected component of the Hilbert space. Next, in Sec.~\ref{Sec:Quantum Fragmentation} we discuss the emerging disconnected components of the Hilbert space that are not manifest in the product state basis, leading to quantum Hilbert space fragmentation.
\subsection{Classical Hilbert space fragmentation} \label{Sec:Classical fragmentation}
Due to the $U(1)$ symmetry of the Hamiltonian~(\ref{Eq:Hr}), the global Hilbert space is divided in blocks labeled by the different number of bosons $N_p$ with dimension given by the binomial coefficient $\mathcal{C}^L_{N_p}$. Within each given sector of total particle number $N_p$, the constrained hopping causes further fragmentation of the Hilbert space in extensively many subspaces. First, the leftmost boson in the system is always frozen. Hence, as we discussed in Section~\ref{Sec:Model}, we choose the first site to be always occupied, which may be viewed as a boundary condition. In addition, a boson may also be frozen if the number of particles to its left is too small. An example configuration is given by the product state $\ket{\bullet\circ\circ\circ\bullet\bullet\circ\circ}$ for the $r=2$ model, where $\circ$ corresponds to an empty site and $\bullet$ is a site occupied by one boson. Here the second boson cannot move since the previous two sites are empty and cannot be occupied.
In view of this additional fragmentation, we focus on the largest classically connected sector of the Hilbert space with a fixed number of particles, $N_p$. This sector can be constructed starting from a particular initial state $\ket{\text{DW}}$, where all particles are located at the left boundary,
\begin{equation}
\label{Eq:psi0}
\ket{\text{DW}} = |\underbrace{\bullet\bullet\bullet\dots\bullet}_{N_p}\underbrace{\circ\circ\circ\dots\circ}_{L-N_p}\rangle.
\end{equation}
Starting from this initial state the constraint will limit the spreading of particles, that can reach at most
\begin{equation}\label{Eq:Lstar}
L^*_r(N_p)=(r+1)N_p-r
\end{equation}
sites, corresponding to the most diluted state, $\ket{\bullet\circ\circ\bullet\circ\circ\bullet\circ\circ\bullet\ldots}$ for $r=2$. Thus, in what follows we use the system size $L=L^*_r$ uniquely defined by the number of particles and the range of the constraint in Eq.~(\ref{Eq:Lstar}).
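The largest sector can be generated explicitly by a breadth-first search over configurations. The sketch below (illustrative helper names; facilitators beyond the left edge are treated as absent) verifies that, starting from $\ket{\text{DW}}$, the most diluted state saturating Eq.~(\ref{Eq:Lstar}) is reached:

```python
from collections import deque

def facilitated(cfg, i, r):
    """True if hopping between sites i and i+1 is allowed: the nearest
    occupied site to the left of i lies within distance r."""
    return any(cfg[i - l] == 1 and all(cfg[j] == 0 for j in range(i - l + 1, i))
               for l in range(1, min(r, i) + 1))

def largest_sector(Np, r=2):
    """All configurations reachable from the domain-wall state on
    L* = (r+1)Np - r sites under facilitated hops."""
    L = (r + 1) * Np - r
    dw = (1,) * Np + (0,) * (L - Np)
    seen, queue = {dw}, deque([dw])
    while queue:
        cfg = queue.popleft()
        for i in range(1, L - 1):
            if cfg[i] != cfg[i + 1] and facilitated(cfg, i, r):
                new = list(cfg)
                new[i], new[i + 1] = new[i + 1], new[i]
                new = tuple(new)
                if new not in seen:
                    seen.add(new)
                    queue.append(new)
    return seen
```

For $N_p=3$ and $r=2$ this reaches the most diluted state $\ket{\bullet\circ\circ\bullet\circ\circ\bullet}$ on $L^*=7$ sites, while every visited configuration conserves the particle number.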
The fragmentation of the Hilbert space discussed above may be attributed to a set of emergent conserved quantities in the model in addition to the total particle number, $\hat N_\text{tot} = \sum_i \hat n_i$. The first class of conserved operators responsible for the freezing of the leftmost particle is written as
\begin{equation}
\label{Eq:Op fisrt site}
\hat{N}_{\ell_0} = \ell_0 \bigr[\prod_{i<\ell_0} (1-\hat{n}_i)\bigr] \hat{n}_{\ell_0}.
\end{equation}
Since the projectors in this operator are complementary to the projectors in the Hamiltonian, it satisfies $\hat{N}_{\ell_0} \hat H_r = \hat H_r \hat{N}_{\ell_0} = 0$ and hence trivially commutes with the Hamiltonian. This conservation law induces further fragmentation of the Hilbert space into $L-N_p$ sectors labeled by the position of the leftmost boson.
The second class of operators yields a further fragmentation within each sector with fixed position of the leftmost particle. One can check that a region of $N_\text{left}$ particles with length $L_{r}^{\text{left}} \geq L^*_r(N_\text{left}) +r+1$ is dynamically isolated from any configuration on its right. By construction, the particles in the left part can never facilitate hopping of particles on the right, as they always have a distance $d>r$, hence different sectors can be labeled by the position and width of the frozen regions. The simplest example of such configuration is given by $\ket{\bullet\circ\circ\bullet\circ\circ\circ\bullet\dots}$ for $r=2$. Formally, these conserved quantities are represented by the following operator
\begin{equation}
\label{Eq:Op intermediate dark states}
\hat{O}^{L_{r}^{\text{left}}}_{N_\text{left}} =\hat{\mathcal{P}}_{N_\text{left}} \Bigr[\prod_{k=L_r^*(N_\text{left})+1}^{L_{r}^{\text{left}} - L^*(N_\text{left})} (1-\hat{n}_k)\Bigr] \hat{n}_{L_{r}^{\text{left}}+1},
\end{equation}
where $\hat{\mathcal{P}}_{N_\text{left}}$ is the projector on the states with $N_\text{left}$ particles in the first $L^*_r(N_\text{left})$ sites. The freedom in the choice of $L_{r}^{\text{left}}$ yields $r(N_p-N_\text{left}-1)$ different sectors for a fixed $N_\text{left}$. Hence, the number of the fragmented sectors is given by
\begin{equation}
\label{Eq:fragm frozen regions}
\sum_{N_\text{left}=1}^{N_p-1} r(N_p-N_\text{left}-1) = r\bigl[\frac{1}{2}(N_p^2-3N_p)+1\bigr] \propto N_p^2.
\end{equation}
We notice that additional levels of fragmentation can emerge whenever the right part can be further decomposed in a similar way to the one discussed above. Every time that happens, additional subsectors appear for some of the sectors identified by the operator $\hat{O}^{L_{r}^{\text{left}}}_{N_\text{left}}$. As the number of additional levels of fragmentation increases proportionally to $N_p$, each adding subsectors to the previous level, one finally obtains that the asymptotic behavior of the global number of classically fragmented subsectors has to be $O(\exp(N_p))$. The exponential increase of the number of disconnected subsectors was verified numerically, thus properly identifying a case of Hilbert space fragmentation.
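The exponential proliferation of classically disconnected subsectors can be checked by brute force on small systems by counting the connected components of the hopping graph. A sketch (illustrative helper names; feasible only for small $N_p$, with facilitators beyond the left edge treated as absent):

```python
from itertools import combinations

def moves(cfg, r=2):
    """Configurations reachable from cfg by one facilitated hop."""
    L = len(cfg)
    for i in range(1, L - 1):
        if cfg[i] == cfg[i + 1]:
            continue
        if any(cfg[i - l] == 1 and all(cfg[j] == 0 for j in range(i - l + 1, i))
               for l in range(1, min(r, i) + 1)):
            new = list(cfg)
            new[i], new[i + 1] = new[i + 1], new[i]
            yield tuple(new)

def count_sectors(Np, r=2):
    """Number of connected components among all states with Np particles
    and site 1 occupied, on L = (r+1)Np - r sites."""
    L = (r + 1) * Np - r
    todo = set()
    for occ in combinations(range(1, L), Np - 1):
        cfg = [0] * L
        cfg[0] = 1
        for p in occ:
            cfg[p] = 1
        todo.add(tuple(cfg))
    sectors = 0
    while todo:
        stack = [todo.pop()]
        while stack:
            for nxt in moves(stack.pop(), r):
                if nxt in todo:
                    todo.remove(nxt)
                    stack.append(nxt)
        sectors += 1
    return sectors
```

The count grows rapidly with $N_p$, consistent with the asymptotically exponential scaling argued above.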
\begin{figure}[t]
\centering
\includegraphics[width=.75\columnwidth]{fig2.pdf}
\caption{\label{Fig:S evecs}
Entanglement entropy of eigenstates along the bipartite cut at the site $8$ for $N_p=8$ and $L=22$. The color intensity corresponds to the density of dots, revealing that the majority of the eigenstates have nearly thermal entanglement. However, a large number of eigenstates has entanglement much lower than the thermal value. Among these, the red dots correspond to entanglement being zero up to numerical precision (inset).
}
\end{figure}
\subsection{Recursive quantum Hilbert space fragmentation}\label{Sec:Quantum Fragmentation}%
Due to the fragmentation of the Hilbert space in the computational basis discussed above, we focus on the largest sector of the Hilbert space as defined in the previous section. In Appendix~\ref{App:therm} we show that the statistics of the level spacings for the Hamiltonian $\hat H_2$ within this block follow the Wigner-Dyson surmise, confirming that we have resolved all symmetries of this model and na\"ively suggesting an overall thermalizing (chaotic) character of the eigenstates~\cite{D'Alessio2016}.
To further check the character of eigenstates, we consider their entanglement entropy. We divide the system into two parts, $A$ containing sites $1,\ldots, i$, $A=[1,i]$ and its complement denoted as $B=[i+1,L]$. The entanglement entropy of the eigenstate $\ket{E_\alpha}$ for such bipartition is obtained as the von Neumann entropy of the reduced density matrix $\rho_i = \tr_B \ket{E_\alpha}\bra{E_\alpha}$
\begin{equation}
\label{Eq:Entanglement}
S_i = -\tr\bigr[\rho_i\ln\rho_i\bigr].
\end{equation}
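For a pure state given as a vector in the full product basis, $S_i$ follows from the Schmidt (singular value) decomposition across the cut; a minimal sketch (illustrative helper name; in practice one would work within the constrained sector):

```python
import numpy as np

def entanglement_entropy(psi, L, i):
    """Von Neumann entropy S_i of subregion A = [1, i] for a normalized
    state psi in the full 2^L basis (site 1 = leftmost tensor factor)."""
    schmidt = np.linalg.svd(psi.reshape(2 ** i, 2 ** (L - i)),
                            compute_uv=False)
    p = schmidt[schmidt > 1e-12] ** 2      # Schmidt spectrum of rho_i
    return float(-np.sum(p * np.log(p)))
```

A product state across the cut gives $S_i = 0$, while a Bell pair straddling it gives $S_i = \ln 2$.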
In thermal systems entanglement of highly excited eigenstates is expected to follow volume law scaling, increasing linearly with $i$ for $i\ll L$, and reaching maximal value for $i=L/2$. However, our numerical study of the entanglement entropy shows strong deviations from these expectations, in particular revealing a significant number of eigenstates with extremely low, and even exactly zero, entanglement, a feature typical of quantum many-body scars~\cite{Turner2018,Michailidis2018,Bernevig2018,Choi2019,Iadecola2019,Motrunich2019,Papic2020,Knolle2020,Serbyn2021,Regnault2022}.
Figure~\ref{Fig:S evecs} illustrates such anomalous behavior of eigenstate entanglement for a chain of $L=22$ sites. For the bipartite cut shown, $A=[1,8]$, most of the eigenstates have increasing entanglement as their energy approaches zero, where the density of states is maximal, in agreement with thermalization. Nevertheless, a significant number of eigenstates features much lower values of entanglement, and the red box and inset in Fig.~\ref{Fig:S evecs} highlight the presence of eigenstates with zero entanglement (up to numerical precision). We explain this as a result of an additional fragmentation of the Hilbert space caused by the interplay of the constraint and boson number conservation.
Eigenstates with zero entanglement, denoted as $\ket{E_{S=0}}$, are separable and can be written as a product state of the wave function in the region $A$ and in its complement $B$. To this end, we choose the wave function $\ket{\psi^\ell_m}$ of the separable state $\ket{E_{S=0}}$ in the region $A$ as an eigenstate of the Hamiltonian $\hat H_r$ restricted to the Hilbert space of $m$ particles in $\ell$ sites. The state $\ket{\psi^\ell_m}$ has to satisfy the additional condition $\bra{\psi_m^\ell}\hat{n}_{\ell}\ket{\psi_m^\ell}=0$, i.e.\ that the last site of the region is empty. Provided such a state exists, we construct the separable eigenstate $\ket{E_{S=0}}$ as
\begin{equation}
\label{Eq:zero S evec}
\ket{E_{S=0}} = \ket{\psi_m^\ell}\otimes \underbrace{\ket{\circ\circ\dots \circ}}_{q}\otimes\ket{\psi_R},
\end{equation}
where $\ket{\psi_R}$ is an eigenstate of the Hamiltonian restricted to $L-\ell-q$ sites and $N_p-m$ particles. Inserting an empty region of $q\geq r$ sites separating the support of $\ket{\psi_m^\ell}$ and $\ket{\psi_R}$ ensures that the two states are disconnected.
Note that $q$ is upper bounded by the requirement that the resulting state belongs to the largest classically fragmented sector.
It is easy to check that the state~$\ket{E_{S=0}}$ is an eigenstate of the full Hamiltonian. Indeed, thanks to the empty region $q$ the particles in $A$ cannot influence those in $B$ and the two eigenstates of the restricted Hamiltonian combine into an eigenstate of the full system.
The construction of $\ket{E_{S=0}}$ relies on the existence of eigenstates $\ket{\psi_m^\ell}$ with vanishing density on the last site. This is a non-trivial requirement that \emph{a priori} is not expected to be satisfied. However, we observe that such eigenstates can be found within the degenerate subspace of eigenstates with zero energy, see Appendix~\ref{App:PsiL}. If $\ket{\psi_m^\ell}$ is an eigenstate with zero energy, the energy of the eigenstate $\ket{E_{S=0}}$ is determined solely by the energy of $\ket{\psi_R}$.
The existence of $ \ket{\psi_m^\ell}$ relies on two conditions which have to hold simultaneously: $\ell>m+r$ and $(r+1)m-r\ge\ell$.
These are satisfied only for $m\geq 3$ particles, thus resulting in a minimal size of the left region $\ell_\text{min}=6$ for $r=2$.
While there is no guarantee that states $\ket{\psi_m^\ell}$ exist for generic $(m,\ell)$, we have an explicit analytic construction for the smallest state $\ket{\psi_3^6}$ for $(m,\ell)=(3,6)$
\begin{equation}\label{Eq:min-left}
\ket{\psi_3^6} = \frac{1}{\sqrt{2}}\bigr[\ket{\bullet\bullet\circ\circ\bullet\circ}-\ket{\bullet\circ\bullet\bullet\circ\circ}\bigr],
\end{equation}
similarly we report solutions up to $(m,\ell)=(7,18)$ in Appendix~\ref{App:PsiL}.
Furthermore, for each $(m,\ell)$ satisfying the condition, one can easily verify that stacking multiple $|\psi_m^\ell\rangle$ separated by at least $r$ empty sites generates another state fulfilling the same condition.
This recursive construction of the left states in Eq.~(\ref{Eq:zero S evec}), together with the explicit example Eq.~(\ref{Eq:min-left}), guarantees the existence of an infinite number of $\ket{\psi_m^\ell}$ in the thermodynamic limit. We further notice that a similar decomposition can be applied to the right eigenstates, $\ket{\psi_R}$, in a recursive fashion. The observed \textit{recursive} quantum Hilbert space fragmentation is a novel feature of this family of Hamiltonians~(\ref{Eq:Hr}), relying on both particle conservation and the chiral constraint.
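The claimed properties of Eq.~(\ref{Eq:min-left}), zero energy under $\hat H_2$ and vanishing density on the last site, can be verified directly; a sketch in the full $2^6$ basis (illustrative helper name, not code from this work):

```python
import numpy as np
from itertools import product

def H2_dense(L):
    """Dense H_2: hop between sites i and i+1 if site i-1 or i-2 is
    occupied (the OR of the two projectors, as in the r=2 model)."""
    basis = list(product((0, 1), repeat=L))
    idx = {c: k for k, c in enumerate(basis)}
    H = np.zeros((2 ** L, 2 ** L))
    for cfg in basis:
        for i in range(1, L - 1):
            if cfg[i] == cfg[i + 1]:
                continue
            if cfg[i - 1] == 1 or (i >= 2 and cfg[i - 2] == 1):
                new = list(cfg)
                new[i], new[i + 1] = new[i + 1], new[i]
                H[idx[tuple(new)], idx[cfg]] += 1.0
    return H

basis6 = list(product((0, 1), repeat=6))
idx6 = {c: k for k, c in enumerate(basis6)}
psi = np.zeros(2 ** 6)
psi[idx6[(1, 1, 0, 0, 1, 0)]] = 2 ** -0.5      #  |110010>
psi[idx6[(1, 0, 1, 1, 0, 0)]] = -(2 ** -0.5)   # -|101100>
```

Both basis configurations map under $\hat H_2$ to the same pair of states, $\ket{101010}$ and $\ket{110100}$, so their difference is annihilated.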
\begin{figure}[t]
\centering
\includegraphics[width=1.01\columnwidth]{fig3.pdf}
\caption{\label{Fig:Special evecs}
(a): The density profile of the zero-entanglement eigenstates for $L=13$ shows a common pattern, due to their special structure~(\ref{Eq:zero S evec}). The first $6$ sites correspond to the zero mode of the Hamiltonian restricted to $3$ particles in $6$ sites, $\ket{\psi_3^6}$, followed by $2$ empty sites. The right subregion can then be any of the $6$ eigenstates of $\hat H$ for $2$ particles in $4$ sites, with energies $\pm\sqrt{2},\;0$. We notice that eigenstates with the same $\ket{\psi_R}$ but a different number of empty sites separating it from $\ket{\psi_m^\ell}$ are degenerate and can be mixed by the numerical eigensolver, as is the case in the density profiles shown here.
(b): The number of zero entanglement entropy eigenstates $\mathcal{N}_S(i)$ depends on the boundary of the subregion $A=[1,i]$.
In particular, in the interval $i\in[5,9]$ the number of zero-entanglement eigenstates is exponentially larger compared to more extended left subregions. At larger $i$ recursively fragmented eigenstates contribute to $\mathcal{N}_S(i)$ for $L\geq13$. The total number of zero-entanglement eigenstates, $\mathcal{N}_S$, grows exponentially in $L$, as shown in the inset. Note that $\mathcal{N}_S\neq\sum_i\mathcal{N}_S(i)$, as some eigenstates have zero entanglement across multiple bipartite cuts.
}
\end{figure}
Let us explore the consequence of the existence of the special eigenstates defined in Eq.~(\ref{Eq:zero S evec}). Given the special character of the wave function $\ket{\psi_m^\ell}$, we expect that states $\ket{E_{S=0}} $ have a similar pattern of local observables in the first $\ell$ sites. An example of such behavior is shown in Figure~\ref{Fig:Special evecs}(a), which reveals that all four states $\ket{E_{S=0}}$ that have zero entanglement across at least one bipartite cut in the $L=13$ chain for $r=2$ feature the same density expectation values, $\langle \hat{n}_i\rangle_\alpha=\bra{E_\alpha}\hat{n}_i\ket{E_\alpha}$, in the first $\ell=6$ sites. Starting from the site number $i=9$, the density profile has different values on different eigenstates, corresponding to different wave functions $\ket{\psi_R}$ in Eq.~(\ref{Eq:zero S evec}).
The number of eigenstates with zero entanglement grows exponentially with system size. Even for the case of a fixed $\ket{\psi_m^\ell}$, the right restricted eigenstate $\ket{\psi_R}$ is not subject to any additional constraints, hence the number of possible choices of $\ket{\psi_R}$ grows as the dimension of the Hilbert space of $N_p-m$ particles on $L-\ell-r$ sites, that is, at fixed $m$, asymptotically exponential in $N_p$. In the general case where $(m,\ell)$ are allowed to change, new $\ket{E_{S=0}}$ states will appear, with zero entanglement entropy at different bipartite cuts, according to the size of the left region. Finally, the recursive nature of the fragmentation discussed above is expected to give eigenstates with zero entropy across two or more distinct cuts which are separated by a non-vanishing entanglement region. These states are observed in numerical simulations starting from $N_p=7$ and $L=19$.
To illustrate the counting of eigenstates with zero entropy at a cut separating subregion $A=[1,i]$ from the rest of the system, we denote their number by $\mathcal{N}_S(i)$. For $i<5$ this number vanishes, $\mathcal{N}_S(i)=0$, as explained in the construction of these states.
For $i\geq 5$ we observe a large $\mathcal{N}_S(i)$, exponentially increasing with system size.
However, at larger $i$, the available configurations that can support states of the form of Eq.~(\ref{Eq:zero S evec}) decrease, and $\mathcal{N}_S(i)$ drops and eventually vanishes. As $N_p$ and the system size increase, left states $\ket{\psi_m^\ell}$ with a larger support $\ell$ are allowed, thus increasing the range of sites where $\mathcal{N}_S(i)>0$. This is also due to recursive fragmentation, which can appear starting from $N_p=5$ and $L=13$. Carefully counting all \textit{distinct} eigenstates $\ket{E_{S=0}}$, we confirm that their total number $\mathcal{N}_S$ grows exponentially with system size, as shown in the inset of Fig.~\ref{Fig:Special evecs}(b).
\section{Dynamics}\label{Sec:Dyn}
After discussing recursive quantum Hilbert space fragmentation in the particle-conserving East model, we proceed with the study of the dynamics. First, in Section~\ref{Sec:QDyn} we consider the dynamical signatures of Hilbert space fragmentation. Afterwards, in Section~\ref{Sec:Qdyn DW} we discuss the phenomenology of particle spreading starting from a domain wall state and illustrate how this can be connected to the structure of the Hilbert space. Finally, we compare the quantum dynamics to that of a classical cellular automaton in Section~\ref{Sec:ClDyn}.
\subsection{Dynamical signatures of quantum Hilbert space fragmentation}\label{Sec:QDyn}
The zero-entanglement eigenstates $\ket{E_{S=0}}$ identified in Eq.~(\ref{Eq:zero S evec}) span a subsector of the Hilbert space which is dynamically disconnected from the rest. In this subspace the Hamiltonian has non-trivial action only in the right component of the state, and eigenstates can be written as product states across the particular cut. Below we discuss signatures of such fragmentation in dynamics launched from weakly entangled initial states.
As an illustrative example, we show in Figure~\ref{Fig:revival} the time evolution of a state of the form defined in Eq.~(\ref{Eq:zero S evec}) for $L=13$. To obtain non-trivial dynamics, we replace the eigenstate $\ket{\psi_R}$ with a product state. In particular, we choose the initial state as
\begin{equation}
\label{Eq:psi0 revivals}
\ket{\psi_0} = \frac{\ket{\bullet\bullet\circ\circ\bullet\circ}-\ket{\bullet\circ\bullet\bullet\circ\circ}}{\sqrt{2}}\otimes\ket{\circ\circ}\otimes\ket{\bullet\circ\bullet\circ\circ},
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.75\linewidth]{fig4.pdf}
\caption{\label{Fig:revival}
The signatures of quantum Hilbert space fragmentation can be observed for initial states that have a large overlap with zero-entanglement eigenstates $\ket{E_{S=0}}$.
The fidelity $F(t)=|\bra{\psi_0}\psi(t)\rangle|^2$ shows periodic revivals for all three initial states; choosing an eigenstate on the left portion of the chain results in perfect revivals (blue curve). Entanglement entropy across the cut $i=11$ in the middle of the right region $R$ and density on the same site show oscillations with identical frequency.
}
\end{figure}
and consider the time-evolved state $\ket{\psi(t)} =e^{-\imath t \hat H_2} \ket{\psi_0}$.
The action of the full Hamiltonian does not affect the left part of the state, and its restriction to the last five sites of the chain, $R=[9,13]$, is a simple $3\times 3$ matrix
\begin{equation}
\label{Eq:HR}
\hat{H}_R = \begin{pmatrix}
0 &1 &0\\
1 &0 &1\\
0 &1 &0
\end{pmatrix}
\end{equation}
in the $\{\ket{\bullet\bullet\circ\circ\circ},\ket{\bullet\circ\bullet\circ\circ},\ket{\bullet\circ\circ\bullet\circ}\}$ basis. Diagonalizing this matrix, we write the time-evolved state $\ket{\psi(t)}$ as
\begin{equation}
\label{Eq:psi t Rev}
\begin{split}
\ket{\psi(t)} =& \ket{\psi_m^\ell}\otimes\ket{\circ\circ}\otimes\Big[\cos(\sqrt{2}t)\ket{\bullet\circ\bullet\circ\circ}\\
&
- \imath\sin(\sqrt{2}t)\frac{\ket{\bullet\bullet\circ\circ\circ}+\ket{\bullet\circ\circ\bullet\circ}}{\sqrt{2}}\Big],
\end{split}
\end{equation}
hence the fidelity reads $F(t) = |\bra{\psi_0}\psi(t)\rangle|^2 = \cos^2(\sqrt{2}t)$. As the time-evolution in Eq.~(\ref{Eq:psi t Rev}) involves only three different product states, it produces perfect revivals with period $T=\pi/\sqrt{2}$. This periodicity also affects observables, such as the density in the region $R$, and the entanglement entropy.
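This two-level reduction is easy to check numerically. The sketch below (assuming NumPy and SciPy are available; the basis ordering follows Eq.~(\ref{Eq:HR})) diagonalizes $\hat{H}_R$ and verifies $F(t)=\cos^2(\sqrt{2}t)$:

```python
import numpy as np
from scipy.linalg import expm

# H_R in the ordered basis {|**...>, |*.*..>, |*..*.>}, Eq. (HR)
H_R = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])

# spectrum: 0 and +/- sqrt(2)
evals = np.sort(np.linalg.eigvalsh(H_R))

# the initial right state |psi_R> = |*.*..> is the middle basis vector
psi_R = np.array([0., 1., 0.])
for t in (0.3, 1.0, np.pi / np.sqrt(2)):
    psi_t = expm(-1j * t * H_R) @ psi_R
    fidelity = abs(np.vdot(psi_R, psi_t)) ** 2
    # matches F(t) = cos^2(sqrt(2) t), with a perfect revival at T = pi/sqrt(2)
    assert abs(fidelity - np.cos(np.sqrt(2) * t) ** 2) < 1e-12
```

The last iteration, $t=\pi/\sqrt{2}$, returns the state to itself up to a phase, which is the revival period $T$ quoted below.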
The same periodic dynamics also appears for the two product states $\ket{\psi_+}=\ket{\bullet\bullet\circ\circ\bullet\circ\circ\circ\bullet\circ\bullet\circ\circ}$ and $\ket{\psi_-}=\ket{\bullet\circ\bullet\bullet\circ\circ\circ\circ\bullet\circ\bullet\circ\circ}$ that form the superposition in Eq.~(\ref{Eq:psi0 revivals}). These states indeed show fidelity revivals with the same period $T$, although the peaks are suppressed. This is not surprising, as these states have only part of their weight in the disconnected subspace.
In Figure~\ref{Fig:revival} we show the results of the dynamics of the state $\ket{\psi_0}$, Eq.~(\ref{Eq:psi0 revivals}), together with the two product states generating the superposition, $\ket{\psi_\pm}$. In addition to fidelity, we also show the density and entanglement dynamics of sites $i$ within the right region $R$.
As expected, the fidelity shows revivals with period $T=\pi/\sqrt{2}$, and similar oscillations are also observed in local operators and entanglement.
\begin{figure}[b!]
\centering
\includegraphics[width=.65\columnwidth]{fig5.pdf}
\caption{\label{Fig:n saturation}
The constrained character of the model leads to a non-uniform stationary density profile for the domain wall initial state. This coincides with the infinite-temperature prediction on large systems, as highlighted by the dashed line corresponding to $\tr[\hat{n}_i]/\tr[\mathbb{1}]$ for $L=25$, where $\tr[\hat{O}]=\sum_j \hat{O}_{jj}$. Rescaling the $x$-axis by the number of particles $N_p$, we obtain a good collapse of the data, as shown in the inset. The particle density follows a linear decrease $\overline{\langle \hat{n}_i\rangle}\approx \overline{\langle \hat{n}_2\rangle}-c(i-2)/N_p$, with $c\approx0.15$.
}
\end{figure}While the initial state $\ket{\psi_0}$ defined in Eq.~(\ref{Eq:psi0 revivals}) presents perfect revivals with $F(T)=1$, the product states $\ket{\psi_\pm}$ do not display perfect fidelity revivals and develop larger entanglement. We note that, since the two product states $\ket{\psi_\pm}$ together form the state $\ket{\psi_m^\ell}$, their dynamics in the region $R$ is not affected by the choice of the left configuration, and all considered quantities for these two initial states have identical dynamics.
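These statements can be checked by exact evolution in the classically connected sector containing $\ket{\psi_\pm}$. The sketch below assumes NumPy and uses the range-2 facilitated-hopping rule implied by the gates of Eqs.~(\ref{Eq:U1})--(\ref{Eq:U2}) with unit amplitudes, so that $\hat H_2$ is our reconstruction acting as the adjacency matrix of the hopping graph:

```python
import numpy as np

def neighbors(s):
    """One-hop neighbors under the range-2 East constraint: the bond
    (k, k+1) is active if site k-1 is occupied, or if site k-2 is
    occupied with site k-1 empty (cf. the gates U_1 and U_2)."""
    out = []
    for k in range(len(s) - 1):
        if s[k] == s[k + 1]:
            continue
        if (k >= 1 and s[k - 1]) or (k >= 2 and s[k - 2] and not s[k - 1]):
            t = list(s)
            t[k], t[k + 1] = t[k + 1], t[k]
            out.append(tuple(t))
    return out

# the two product states entering Eq. (psi0 revivals), L = 13, N_p = 5
s_plus  = (1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0)
s_minus = (1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0)

# enumerate the connected sector by stack-based graph search
basis, stack = {s_plus: 0}, [s_plus]
while stack:
    for t in neighbors(stack.pop()):
        if t not in basis:
            basis[t] = len(basis)
            stack.append(t)

# Hamiltonian = adjacency matrix of the hopping graph
D = len(basis)
H = np.zeros((D, D))
for s, a in basis.items():
    for t in neighbors(s):
        H[a, basis[t]] = 1.0
E, V = np.linalg.eigh(H)

def fidelity(psi, t):
    psi_t = V @ (np.exp(-1j * E * t) * (V.T @ psi))
    return abs(np.vdot(psi, psi_t)) ** 2

T = np.pi / np.sqrt(2)
psi0 = np.zeros(D)
psi0[basis[s_plus]], psi0[basis[s_minus]] = 1 / np.sqrt(2), -1 / np.sqrt(2)
psi_p = np.zeros(D)
psi_p[basis[s_plus]] = 1.0

print(fidelity(psi0, T), fidelity(psi_p, T))  # perfect vs suppressed revival
```

Under these assumptions, the superposition $\ket{\psi_0}$ revives perfectly at $t=T$, while the bare product state $\ket{\psi_+}$ shows a suppressed peak, reproducing the behavior of Fig.~\ref{Fig:revival}.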
\subsection{Phenomenology of dynamics from the $\ket{\text{DW}}$ initial state}\label{Sec:Qdyn DW}
\begin{figure}[b!]
\includegraphics[width=1.01\textwidth]{fig6.pdf}
\caption{\label{Fig:n dynamics L28}
The approach to saturation in the density dynamics is very different depending on the region within the chain. (a) In the first $2N_p$ sites of the chain a fast relaxation takes place due to the weak role of the constraint in dense regions. (b) For the right part of the chain, $i>2N_p$, anomalously slow logarithmic dynamics arises. The inset shows the data collapse upon rescaling the density axis by the long-time average and the time axis by the number of states within each \textit{leg} of the graph, $\mathcal{N}_i$, to the power $\alpha\approx 1.15$, as discussed in more detail at the end of this section. The data shown here are for a system of $L=28$ sites with $N_p=10$ bosons.
}
\end{figure}
After exploring the dynamics resulting from quantum Hilbert space fragmentation, we now turn to the dynamics in the remainder of the constrained Hilbert space, focusing on the domain wall state~(\ref{Eq:psi0}). The domain wall state does not have any overlap with zero-entanglement eigenstates, except possibly for states with zero entanglement on the last cut. It is also characterized by a vanishing expectation value of the Hamiltonian, corresponding to zero energy density, where the density of states is maximal. Hence, thermalization implies that time evolution from the domain wall leads to a steady state where all observables agree with their infinite-temperature expectation values. To check this property we focus on the expectation value of the particle density operators throughout the chain.
Figure~\ref{Fig:n saturation} shows the infinite time average of the particle density, $\overline{\langle \hat{n}_i\rangle}$ obtained through the diagonal ensemble
\begin{equation}
\overline{\langle \hat{n}_{i}\rangle}=\sum_\alpha |\bra{\text{DW}}E_\alpha\rangle |^2 \bra{E_\alpha}\hat{n}_i\ket{E_\alpha},
\end{equation}
where the sum runs over all eigenstates $\alpha$. This calculation is performed for $L\leq 22$, where the full set of eigenstates can be obtained through exact diagonalization. For larger systems, the infinite-time average $\overline{\langle \hat{n}_i\rangle}$ is approximated by the average of the density over the time-window $t\in [6.9\times10^3,10^4]$. We observe that the density profile agrees well with the infinite-temperature prediction. See Appendix~\ref{App:therm} for details of the calculation.
The infinite-temperature prediction for the density profile does not result in a homogeneous density due to the constraint. The number of allowed configurations with non-zero density in the last sites is indeed limited by the constraint, and results in a lower density in the rightmost parts of the chain.
In addition, the profile has a step-like shape that is related to the range-2 constraint in the model. In the inset of Fig.~\ref{Fig:n saturation} we show that the density profiles collapse onto each other when plotted as a function of $i/N_p$. This suggests the heuristic expression for the density profile $\overline{\langle \hat{n}_i\rangle}\approx \overline{\langle \hat{n}_2\rangle}-c (i-2)/{N_p}$ where $c\approx0.15$ is a positive constant.
Although the saturation profile of the density is consistent with thermalization, below we demonstrate that \emph{relaxation} to the steady state density profile is anomalous. The time-evolution of the density $\langle \hat{n}_i(t)\rangle = \bra{\psi(t)}\hat{n}_i\ket{\psi(t)}$ is shown in Figure~\ref{Fig:n dynamics L28} for $L=28$ sites up to times $t\approx 10^4$. The data demonstrates that the relaxation of density qualitatively depends on the location within the chain. In the left part of the chain with $i\leq 2N_p$, the spreading of the density front is fast, and saturation is reached quickly on timescales of $O(10)$, as shown in Fig.~\ref{Fig:n dynamics L28}(a). This can be attributed to the fact that the constraint is not effective at large densities. In contrast, in the rightmost part of the chain, $i>2N_p$ the constraint dramatically affects the spreading of particles resulting in the logarithmically slow dynamics in Fig.~\ref{Fig:n dynamics L28}(b).
To further characterize the anomalous dynamics, we study the transport of the particle density on short time-scales for larger systems up to $L=37$ sites. For the systems with $L>28$ we use a fourth-order Runge-Kutta algorithm with a time-step as small as $\delta t=10^{-3}$. This allows us to reliably study the short-time behavior with sufficient accuracy down to $\delta t^4=10^{-12}$. We consider the dynamics of the root-mean-squared displacement, which measures the spreading of the \textit{center of mass}
\begin{equation}
\label{Eq:R tilde}
{R}(t) = \sqrt{\sum_{i>N_p} \langle\hat{n}_i(t)\rangle |i-N_p|^2}.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=1.01\columnwidth]{fig7.pdf}
\caption{\label{Fig:Rtilde and z}
(a) The behavior of the root-mean-square displacement shows an initial power-law growth $R(t)\sim t^{1/z_{{R}}(t)}$ followed by a slow-down to logarithmic behavior at later times, in agreement with the density dynamics. (b) The analysis of the dynamical exponent $z_{{R}}(t)$ shows the presence of a super-diffusive plateau $1/{z_{{R}}}\approx 0.74$ at intermediate times, whose duration grows with system size. At later times, the onset of logarithmic dynamics is signalled by the decay of $1/z_{{R}}(t)$. Data are for $13\leq L \leq 37$ from more to less transparent.
}
\end{figure}
The dynamics of $R(t)$ in Figure~\ref{Fig:Rtilde and z}(a) shows a clear initial power-law behavior drifting to much slower logarithmic growth at later times, in agreement with the dynamics of $\langle\hat{n}_i(t)\rangle$ in the right part of the chain. At even longer times $R(t)$ saturates to a value proportional to the system size $L$. Figure~\ref{Fig:Rtilde and z}(b) shows the instantaneous dynamical exponent
\begin{equation}\label{Eq:z-define}
z_{{R}}(t) =\left( \frac{d\ln R(t)}{d\ln t}\right)^{-1}.
\end{equation}
The early time dynamics is characterized by ballistic behavior, $z_R(t)\approx 1$, due to the large density in the vicinity of $i=N_p$. On intermediate time-scales, $t\approx 10$, a superdiffusive plateau with $1/z_{{R}}(t)\approx 0.74$ is visible. Finally, at longer times the dynamics slows down and becomes logarithmic, consistent with a vanishing $1/z_R(t)$. Zooming into the time-window $t\leq30$, we notice that the extent of the superdiffusive plateau increases roughly linearly with system size, suggesting the persistence of the super-diffusive regime in the thermodynamic limit.
The exact scaling, obtained by collapsing the curves in Fig.~\ref{Fig:Rtilde and z}(b), yields a power-law behavior of the plateau extent, $t\sim L^{1.1}$; however, due to the relatively small number of system sizes we cannot exclude a more natural $t\sim L$ dependence.
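The quantities in Eqs.~(\ref{Eq:R tilde}) and (\ref{Eq:z-define}) are straightforward to evaluate from a density profile. A minimal sketch (NumPy assumed; the synthetic diffusive data is for illustration only) extracts the instantaneous exponent through a discrete log-log derivative:

```python
import numpy as np

def rms_displacement(n, N_p):
    """R = sqrt(sum_{i > N_p} <n_i> (i - N_p)^2), Eq. (R tilde); sites 1-indexed."""
    i = np.arange(1, len(n) + 1)
    m = i > N_p
    return np.sqrt(np.sum(n[m] * (i[m] - N_p) ** 2))

def inverse_dynamical_exponent(R, t):
    """Instantaneous 1/z_R(t) = d ln R / d ln t, Eq. (z-define), by finite differences."""
    return np.gradient(np.log(R), np.log(t))

# a single particle three sites past N_p gives R = 3
N_p = 5
n = np.zeros(20)
n[N_p + 3 - 1] = 1.0              # site i = N_p + 3 (1-indexed)
print(rms_displacement(n, N_p))   # 3.0

# synthetic diffusive growth R(t) ~ t^(1/2) yields a constant 1/z = 0.5
t = np.logspace(0, 3, 200)
invz = inverse_dynamical_exponent(2.0 * np.sqrt(t), t)
print(invz[100])                  # ~0.5
```

On real data the plateau of `invz` plays the role of the superdiffusive exponent $1/z_R\approx 0.74$ discussed above.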
\begin{figure}[tb]
\centering
\includegraphics[width=.85\textwidth]{fig8.pdf}
\caption{\label{Fig:G}
The representation of $\hat H_2$ as the adjacency graph $\mathcal{G}_r$ for a system with $N_p=5$ particles and $L=13$ lattice sites. The dense central part -- \textit{backbone} -- has a gradually decreasing number of vertices and connectivity as the position of the rightmost particle increases above $\imax>2N_p=10$ (dashed line). The \textit{legs} of the graph emanate from the backbone and correspond to regions where $\imax$ is conserved. The legs end with the product states (an example is labeled as $\mathcal{L}_{\imax}$), where a particular particle is frozen near the end of the chain. Red vertices show product states corresponding to zero-entanglement eigenstates $\ket{E_{S=0}}$, which in this case have weight on $12$ out of $\mathcal{D}_{N_p}=273$ product states contained in the constrained Hilbert space.
}
\end{figure}
We now focus on capturing the phenomenology of the dynamics observed above using the structure of the Hamiltonian. In order to do so we interpret the Hamiltonian as a graph where the vertices of the graph enumerate the product states contained in a given connected sector of the Hilbert space. The edges of the graph connect product states that are related by any particle hopping process allowed by the constraint. A particular example of such a graph for the system with $N_p=5$ particles and $L=13$ sites is shown in Fig.~\ref{Fig:G}.
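A minimal sketch of this construction is given below (plain Python; the facilitated-hopping rule is inferred from the gates $U_{1,2}$ and is our assumption for $\hat H_2$). It enumerates the sector connected to the domain wall and groups the vertices by the position of the rightmost particle:

```python
from collections import Counter

def neighbors(s):
    # range-2 East rule: bond (k, k+1) is active if site k-1 is occupied,
    # or if site k-2 is occupied with site k-1 empty
    out = []
    for k in range(len(s) - 1):
        if s[k] == s[k + 1]:
            continue
        if (k >= 1 and s[k - 1]) or (k >= 2 and s[k - 2] and not s[k - 1]):
            t = list(s)
            t[k], t[k + 1] = t[k + 1], t[k]
            out.append(tuple(t))
    return out

N_p, L = 5, 13
dw = tuple([1] * N_p + [0] * (L - N_p))

# enumerate the sector connected to the domain wall (stack-based search)
sector, stack = {dw}, [dw]
while stack:
    for t in neighbors(stack.pop()):
        if t not in sector:
            sector.add(t)
            stack.append(t)
print(len(sector))  # 273 product states, the D_{N_p} quoted in Fig. (G)

# group vertices by the position of the rightmost particle (1-indexed)
legs = Counter(max(i for i, n in enumerate(s, start=1) if n) for s in sector)
print(sorted(legs.items()))
```

The per-$i_\text{max}$ counts produced here are the leg populations $\mathcal{N}_{\imax}$ used below to collapse the density dynamics.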
The vertices of the graph in Fig.~\ref{Fig:G} are approximately ordered by the position of the rightmost occupied site $i_\text{max}\geq N_p$, revealing the particular structure emergent due to the constraint. The dense region that follows the domain wall product state has high connectivity, and we refer to it as the \textit{backbone}. In addition to the backbone, the graph has prominent \emph{legs} emanating perpendicularly. The legs are characterized by the conserved position of the rightmost particle, which is effectively frozen due to the particles on the left retracting away, as pictorially shown in Fig.~\ref{Fig:G}. Since such legs are in one-to-one correspondence with the position of the rightmost particle, $i_\text{max}$, their number grows linearly with system size. The number of product-state configurations contained within each leg strongly depends on $i_\text{max}$. Since the position of the rightmost particle is frozen within a leg, the legs have a strong effect on the dynamics of the model.
In particular, the spreading of particles towards the right probed by $R(t)$ can be related to the presence of an increasing number of configurations within legs at large $\imax$, $\mathcal{N}_{\imax}$. These are characterized by long empty regions, such as the one depicted in Figure~\ref{Fig:G}, which require the collective motion of many particles to allow the hopping of the rightmost boson sitting at $\imax$. The observed slow dynamics can then be qualitatively understood as the combined effect of the many states that do not contribute to the spreading and of the increasingly long empty regions that must be crossed to activate hopping further to the right.
Looking back at the dynamics shown in Figure~\ref{Fig:n dynamics L28}, we highlight this effect by rescaling the time-axis by the number of configurations belonging to each leg, $\mathcal{N}_i$. The resulting collapse is shown in the inset of Figure~\ref{Fig:n dynamics L28}(b).
\subsection{Dynamics in constrained classical cellular automata}\label{Sec:ClDyn}
The anomalous relaxation of the quantum model from the domain wall state reported in Section~\ref{Sec:Qdyn DW} invites natural questions about the universality of dynamics in the presence of inversion-breaking constraints. To shed light on this question, we introduce a classical cellular automaton model that replaces the unitary time-evolution of the quantum model, $\hat{U}(t) = \exp(-\imath\hat{H}t)$, with a circuit of local unitary gates preserving the same symmetries and constraints as the Hamiltonian~\cite{Nandkishore2019b,Pozsgay2021a}.
To reproduce correlated hopping in the Hamiltonian~(\ref{Eq:Hr=2}), we introduce two sets of local gates $U_1$ and $U_2$ schematically shown in Fig.~\ref{Fig:Circuit}(a). The first gate, $U_1$, acts on $4$ sites and implements the hopping facilitated by the next nearest neighbor,
\begin{equation}
\label{Eq:U1}
U_1 = \exp\Bigg\{ - \imath \theta\bigg[ \hat{n}_{j}(1-\hat{n}_{j+1})\bigl(c^\dagger_{j+3}c_{j+2} + \text{H.c.}\bigr)\bigg]\Bigg\}.
\end{equation}
The second gate, $U_2$, acts on three sites, and implements the hopping facilitated by the nearest neighbor site:
\begin{equation}
\label{Eq:U2}
U_2 = \exp\Bigg\{ - \imath \theta\bigg[ \hat{n}_{j}\bigl(c^\dagger_{j+2}c_{j+1} + \text{H.c.}\bigr)\bigg]\Bigg\}.
\end{equation}
For a generic choice of the rotation angle $\theta$, the resulting circuit cannot be efficiently simulated classically. However, in what follows we fix $\theta$ to the special value $\theta=\pi/2$, so that the gates $U_{1,2}$ map any product state to another product state.
This corresponds to a classical cellular automaton, which allows for efficient classical simulation.
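The product-state-to-product-state property at $\theta=\pi/2$ can be verified directly on the three-site gate: on the two-dimensional subspace coupled by the generator, $U_2$ reduces to $\exp(-\imath\frac{\pi}{2}\sigma_x)=-\imath\sigma_x$, i.e. a controlled swap up to a phase. A sketch (NumPy/SciPy assumed, hard-core boson occupation basis):

```python
import numpy as np
from scipy.linalg import expm
from itertools import product

# occupation basis |n_j, n_{j+1}, n_{j+2}> for the three sites of U_2
states = list(product((0, 1), repeat=3))
idx = {s: a for a, s in enumerate(states)}

# generator n_j (c^dag_{j+2} c_{j+1} + h.c.) for hard-core bosons
G = np.zeros((8, 8))
for s in states:
    if s[0] == 1 and s[1] != s[2]:
        t = (s[0], s[2], s[1])
        G[idx[t], idx[s]] = 1.0

U2 = expm(-1j * (np.pi / 2) * G)

# |U_2|^2 is a permutation matrix: every product state maps onto a single
# product state, with a phase -i on the swapped pair
P = np.abs(U2) ** 2
assert np.allclose(P @ P.T, np.eye(8))                     # permutation matrix
assert np.isclose(P[idx[(1, 0, 1)], idx[(1, 1, 0)]], 1.0)  # controlled swap
```

The same computation for the four-site gate $U_1$ (with the extra $1-\hat n_{j+1}$ projector) proceeds identically.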
\begin{figure}[h]
\centering
\includegraphics[width=.65\columnwidth]{fig9.pdf}
\caption{\label{Fig:Circuit_sch}
Schematic representation of the circuit used to describe the classical dynamics. The continuous time-evolution $\hat{U}(t)$ is decomposed into a series of $4$-site gates $U_1$ and $3$-site gates $U_2$, whose action is shown in the right part of the figure.
}
\end{figure}
As each local gate is particle conserving, in order to allow for non-trivial transport we shift the gate positions by one site after each layer, as shown in Fig.~\ref{Fig:Circuit}(a). Consequently, the circuit has a $7$-layer unit cell in the time direction. The order of gate application is also important, as the gates $U_{1,2}$ generally do not commute with each other. Alternating the layers of $U_1$ and $U_2$ gates proves to be the best choice, as it implements all allowed particle hopping processes, leading to the circuit shown in Fig.~\ref{Fig:Circuit}(a).
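The exact gate layout of Fig.~\ref{Fig:Circuit_sch} is not reproduced here; the sketch below (plain Python) implements one plausible brickwork of controlled swaps, with alternating $U_1$/$U_2$ layers drifting by one site per layer, and illustrates particle conservation and the frozen configurations enforced by the constraint:

```python
def u2_layer(s, off):
    # 3-site gates at theta = pi/2: if n_j = 1, swap n_{j+1} and n_{j+2}
    s = list(s)
    for j in range(off, len(s) - 2, 3):
        if s[j] == 1:
            s[j + 1], s[j + 2] = s[j + 2], s[j + 1]
    return tuple(s)

def u1_layer(s, off):
    # 4-site gates: if n_j = 1 and n_{j+1} = 0, swap n_{j+2} and n_{j+3}
    s = list(s)
    for j in range(off, len(s) - 3, 4):
        if s[j] == 1 and s[j + 1] == 0:
            s[j + 2], s[j + 3] = s[j + 3], s[j + 2]
    return tuple(s)

def step(s, layer):
    off = layer % 7  # gates drift by one site per layer (7-layer unit cell)
    return u2_layer(s, off) if layer % 2 == 0 else u1_layer(s, off)

# domain wall initial state: the front advances, particle number is conserved
L, N_p = 60, 20
s = tuple([1] * N_p + [0] * (L - N_p))
reach = N_p - 1  # rightmost site visited by a particle (0-indexed)
for layer in range(400):
    s = step(s, layer)
    reach = max(reach, max(i for i, n in enumerate(s) if n))
print(sum(s), reach)  # 20 particles; the front has advanced past site 19

# a configuration with no facilitator within range 2 never moves
frozen = (1, 0, 0, 0, 0, 1, 0, 0)
f = frozen
for layer in range(100):
    f = step(f, layer)
assert f == frozen
```

The gate positions, strides, and the precise 7-layer schedule are an assumption here; only the local update rules follow directly from Eqs.~(\ref{Eq:U1})--(\ref{Eq:U2}) at $\theta=\pi/2$.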
\begin{figure}[tb]
\centering
\includegraphics[width=1.\columnwidth]{fig10.pdf}
\caption{\label{Fig:Circuit}
(a)-(b): Density evolution of the classical cellular automaton starting from the domain wall initial state for a system with $L=298$ sites and $N_p=100$ (black and white dots correspond to occupied and empty sites). (a) At short times particles spread ballistically into the empty region. Scattering events appear at regular time intervals at the boundaries of the red dashed triangle, which delimits the region of ballistic behavior. (b) At later times, when the particle density is lower, the constraint becomes more effective, leading to the logarithmic spreading of particles into the empty region. The inset shows the time dependence of the displacement of the center of mass, with a clear ballistic regime of linear increase followed by slow logarithmic growth at later times.
}
\end{figure}
Using this cellular automaton we are able to simulate the time-evolution of very large systems to extremely long times. As the setup implements the same constraint as the Hamiltonian dynamics, we conjecture that it should present similar features. For instance, initializing the system in a dense-empty configuration similar to the $\ket{\text{DW}}$ state, we expect the dense region to spread quickly into the empty one, until eventually it stretches too much and its propagation slows down due to the constraint.
We study the evolution from the domain-wall initial state for a system of $L=298$ sites and $N_p=100$ particles. Since the model is deterministic, the density as a function of circuit depth is a binary function, $n_i(t)\in\{0,1\}$. Figure~\ref{Fig:Circuit}(a) shows the short-time density dynamics ($t<1000$). We observe ballistic particle transport in the dense regime. On the one hand, the position of the rightmost particle moves to the right. On the other hand, defects (holes) propagate within the dense domain wall state. The simulation reveals a notable difference between the velocity of the holes and that of the spreading rightmost particle, which is expected in view of the broken inversion symmetry of the model.
The ballistic expansion of the particles is followed by a logarithmic slowdown at later times, as shown in Fig.~\ref{Fig:Circuit}(b). Much like in the Hamiltonian dynamics, this slowdown is due to the lower density reached at later times, as the front moves to the right and more particles become temporarily frozen due to the constraint.
To further probe the two distinct behaviors observed in the cellular automaton, in the inset of Fig.~\ref{Fig:Circuit}(b) we show the time-evolution of the displacement of the center of mass $R(t)$ as in Eq.~(\ref{Eq:R tilde}). From the initial linear behavior, $R(t)$ abruptly enters a logarithmic regime as it exceeds the extent of the ballistic region, corresponding to $i\approx180$.
The study of the circuit evolution for the domain-wall initial state thus shows characteristic inhomogeneous dynamics similar to those of the quantum system. At early times, and close to the initial domain wall $i=N_p$, the transport of particles and holes is ballistic, as for $t\leq1$ in the quantum case (see Fig.~\ref{Fig:Rtilde and z}). However, as the density spreads and the particle density lowers, ballistic spreading is replaced by logarithmically slow dynamics. We notice, however, that the automaton lacks the super-diffusive plateau observed in the Hamiltonian dynamics.
\section{Discussion}\label{Sec:Disc}
In this work, we introduced a family of models characterized by a conserved $U(1)$ charge and strong inversion symmetry breaking. We demonstrated that such models feature recursive quantum Hilbert space fragmentation~\cite{Motrunic2022}, which gives rise to weakly entangled eigenstates coexisting with volume-law entangled eigenstates in the spectrum. In addition, we investigated the dynamics of the system in a quantum quench launched from the domain wall initial state. Although the long-time saturation value of the particle density is consistent with thermalization, we observed two distinct regimes in particle spreading. An initial superdiffusive particle spreading at high density is dramatically slowed down at lower densities, leading to a logarithmically slow approach of the density to its saturation value. We suggested the particular structure of the constrained Hilbert space as a possible explanation for such slow propagation. Finally, we reproduced the logarithmic dynamics in a classical cellular automaton that features the same symmetries, although at early times the cellular automaton features ballistic dynamics, in contrast to the slower but still superdiffusive spreading of particles in the Hamiltonian model.
Our work suggests that the interplay of constraints and broken inversion or other spatial symmetries may lead to new universality classes of weak thermalization breakdown and quantum dynamics. In particular, the quantum Hilbert space fragmentation in the considered model gives rise to a number of weakly entangled eigenstates that can be interpreted as quantum many-body scars~\cite{Serbyn2021,Regnault2022}. The number of these eigenstates scales exponentially with system size. Moreover these eigenstates may be constructed in a recursive fashion, by reusing eigenstates of a smaller number of particles. This is in contrast to the PXP model, where the number of scarred eigenstates is believed to scale polynomially with system size~\cite{Turner2018,Iadecola2019}, though existence of a larger number of special eigenstates was also conjectured~\cite{Ljubotina2022}.
Although we presented an analytic construction for certain weakly entangled eigenstates and demonstrated their robustness to certain deformations of the Hamiltonian, the complete understanding of quantum recursive Hilbert space fragmentation requires further work. The complete enumeration and understanding of weakly entangled eigenstates may give further insights into their structure and requirements for their existence. In addition, a systematic study of the emergence of quantum Hilbert space fragmentation in the largest sector of a classically connected Hilbert space in other constrained systems, like the XNOR or the Fredkin models is desirable~\cite{Vasseur2021,Yang2022}.
From the perspective of particle transport, the numerical data for dynamical exponent that controls particle spreading suggests that constrained models may provide stable generic examples of superdiffusive dynamics~\cite{Ljubotina2017,DeNardis2021,Ilievski2021,Bulchandani2021,Ljubotina2022}.
This observation differs from results of Ref.~\cite{Vasseur2021} that reported typically slower than diffusive dynamics. This difference may be partially attributed to the fact that Ref.~\cite{Vasseur2021} probed dynamics via the time-evolution of an infinite temperature density matrix that was not projected to the largest connected sector of the Hilbert space. Similarly to quantum Hilbert space fragmentation, our understanding of transport properties also remains limited. This invites large-scale numerical studies of transport in the particle conserving east models using operator evolution with tensor network methods~\cite{Ljubotina2022}. Such studies would enable accessing much larger system sizes and are likely to provide valuable insights needed for constructing an analytic theory of transport. In particular, it is interesting to study the dependence of the superdiffusive exponent on the constraint range in the family of models introduced in this work.
Finally, the models considered in our work may be implemented using quantum simulator platforms. In particular, the Floquet model consists of control-swap gates of various ranges. Thus, an experimental study of such models may reveal novel valuable insights into their physics and the universality of their transport phenomena.
\section*{Acknowledgments}
We would like to thank Raimel A. Medina, Hansveer Singh, and Dmitry Abanin for useful discussions.
The authors acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No.~850899). We acknowledge support by the Erwin Schr\"odinger International Institute for Mathematics and Physics (ESI).
\begin{appendix}
\section{Thermalization within the largest subsector of the Hilbert space}\label{App:therm}
In order to show the ergodic behavior of the eigenstates of the Hamiltonian, we study the distribution $P(s)$ of the differences between consecutive sorted eigenvalues in units of the mean level spacing $\Delta$, $s_i=(\epsilon_i-\epsilon_{i-1})/\Delta$. It is known that thermal systems satisfying the eigenstate thermalization hypothesis are characterized by level statistics in agreement with the prediction of the Gaussian orthogonal ensemble (GOE), $P_{\text{GOE}}(s) = \frac{\pi}{2} se^{-\frac{\pi}{4}s^2}.$
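As a self-contained illustration of this diagnostic, the sketch below (NumPy assumed) compares a random GOE matrix with uncorrelated (Poisson) levels using the mean consecutive-gap ratio, a standard proxy for $P(s)$ that requires no unfolding; the reference values $\langle r\rangle_{\text{GOE}}\approx0.5307$ and $\langle r\rangle_{\text{Poisson}}\approx0.3863$ come from the random-matrix literature, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_gap_ratio(E):
    # r_n = min(d_n, d_{n+1}) / max(d_n, d_{n+1}), with d_n = E_{n+1} - E_n;
    # the ratio is insensitive to the smooth density of states,
    # so no unfolding is needed
    d = np.diff(np.sort(E))
    return np.mean(np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:]))

# GOE sample: real symmetric Gaussian matrix
N = 2000
A = rng.normal(size=(N, N))
r_goe = mean_gap_ratio(np.linalg.eigvalsh((A + A.T) / 2))

# Poisson reference: independent levels, as for an integrable system
r_poi = mean_gap_ratio(rng.uniform(size=N))

print(r_goe, r_poi)  # ~0.53 vs ~0.39
```

Applied to the unfolded spectrum of the largest sector of $\hat H_2$, the same ratio test distinguishes the GOE statistics reported below from Poisson behavior.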
\begin{figure}[b!]
\centering
\includegraphics[width=1.01\columnwidth]{fig11.pdf}
\caption{\label{Fig:DOS}
(a): As shown in the left sub-panel, the spectrum is symmetric with respect to $E_n=0$, such that for any eigenstate with eigenvalue $E_n$ there is a second state with energy $-E_n$.
Additionally, the model has a large number of zero energy eigenstates, as highlighted by the peak of the density of states $\rho(E_n)$ in the right sub-panel. We show data for $N_p=7$ and $L=19$.
(b): The level spacing distribution $P(s)$ shows good agreement with the GOE prediction, shown as a black dashed line, thus confirming the presence of level repulsion within the largest subsector.
}
\end{figure}
However, before discussing the level statistics, a discussion of the density of states is in order. The Hamiltonian $\hat H_2$ has a spectral reflection symmetry with respect to $E=0$ and features a number of zero modes that grows exponentially with system size, as highlighted by the peak in the density of states $\rho(0)$ shown in Figure~\ref{Fig:DOS}(a). The large number of zero energy eigenstates is explained by the bipartite nature of the adjacency graph that describes the Hamiltonian, see Figure~\ref{Fig:G} for an example.
In a bipartite graph there exist two sets of nodes $\mathcal{P}_{1,2}$ labeled by different product states, such that the action of the Hamiltonian on states belonging to the set $\mathcal{P}_1$ yields a state in the set $\mathcal{P}_2$ and vice versa.
These two partitions are identified by the eigenvalue of the parity operator $\hat{\mathcal{P}} = \prod_j (1-2\hat{n}_j)^j=\prod_j (-\sigma^z_j)^j$, where $\sigma^z_j=2\hat{n}_j-1$ is the corresponding Pauli matrix. It is known that a bipartite graph has a number of zero modes bounded from below by the difference in size of the two sets $\mathcal{P}_{1}$ and $\mathcal{P}_2$~\cite{Abrahams1994}.
In fact, when the two partitions contain different numbers of states, $n_1<n_2$, a zero energy solution of the Schr\"odinger equation can be expressed as a system of $n_1$ linear equations for $n_2$ variables, which admits at least $n_2-n_1$ linearly independent solutions. In this case, in spite of the bound not being tight, both the number of zero modes and the lower bound from the bipartite structure of the graph describing the Hamiltonian increase exponentially with system size, albeit with different prefactors in the exponent. This suggests that the present understanding of the zero mode subspace is incomplete, inviting further research. In particular, using the disentangling algorithm~\cite{Karle} may give valuable insights. This may also help to develop a more complete understanding of the recursive Hilbert space fragmentation, since its mechanism relies on the zero energy eigenstates with vanishing particle density on the last sites of the system, see Section~\ref{Sec:Quantum Fragmentation}.
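The bound can be checked explicitly on the $L=13$, $N_p=5$ sector. The sketch below (NumPy assumed; the facilitated-hopping rule is our reconstruction of $\hat H_2$ from the gates $U_{1,2}$) partitions the sector by the parity $\hat{\mathcal P}$ and compares $|n_1-n_2|$ with the numerically obtained zero-mode count:

```python
import numpy as np

def neighbors(s):
    # range-2 East rule: bond (k, k+1) is active if site k-1 is occupied,
    # or if site k-2 is occupied with site k-1 empty
    out = []
    for k in range(len(s) - 1):
        if s[k] == s[k + 1]:
            continue
        if (k >= 1 and s[k - 1]) or (k >= 2 and s[k - 2] and not s[k - 1]):
            t = list(s)
            t[k], t[k + 1] = t[k + 1], t[k]
            out.append(tuple(t))
    return out

N_p, L = 5, 13
dw = tuple([1] * N_p + [0] * (L - N_p))
basis, stack = {dw: 0}, [dw]
while stack:
    for t in neighbors(stack.pop()):
        if t not in basis:
            basis[t] = len(basis)
            stack.append(t)

# parity P = prod_j (1 - 2 n_j)^j = (-1)^(sum of occupied positions, 1-indexed)
def parity(s):
    return (-1) ** sum(i for i, n in enumerate(s, start=1) if n)

# every allowed hop moves one particle by one site, flipping the parity:
# the hopping graph is bipartite
for s in basis:
    assert all(parity(t) == -parity(s) for t in neighbors(s))

n1 = sum(parity(s) == +1 for s in basis)
n2 = len(basis) - n1

D = len(basis)
H = np.zeros((D, D))
for s, a in basis.items():
    for t in neighbors(s):
        H[a, basis[t]] = 1.0
E = np.linalg.eigvalsh(H)
zero_modes = int(np.sum(np.abs(E) < 1e-8))
print(zero_modes, abs(n1 - n2))  # the count satisfies zero_modes >= |n1 - n2|
```

As noted above, the bound is not tight: the actual zero-mode count generically exceeds $|n_1-n_2|$.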
In Figure~\ref{Fig:DOS}(b) we show the level spacing distribution for $L\in[16,22]$ in the interval $[E_\text{GS},-0.1]$, where $E_\text{GS}$ corresponds to the ground state energy.
Note that due to the spectral reflection property of the Hamiltonian, taking into account only negative energies yields the same results as considering the whole spectrum.
To obtain $P(s)$, we unfold the spectrum in the given interval through polynomial interpolation of the integrated density of states.
The agreement with the GOE prediction suggests that despite the presence of a constraint, the levels develop repulsion within the largest connected sector of the Hilbert space and the model is not integrable.
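A minimal version of the unfolding procedure can be sketched as follows (illustrative only: a GOE random matrix serves as a stand-in spectrum, and the polynomial degree is an arbitrary choice, not the one used for the figure).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in spectrum: eigenvalues of a 400x400 GOE matrix.
n = 400
A = rng.standard_normal((n, n))
E = np.linalg.eigvalsh((A + A.T) / 2)

# Unfold by fitting the integrated density of states N(E) = #{E_i <= E}
# with a low-order polynomial and mapping E_i -> N_fit(E_i); the unfolded
# levels then have unit mean spacing, so P(s) is directly comparable to GOE.
x = E / np.max(np.abs(E))                    # rescale for a stable fit
coeffs = np.polyfit(x, np.arange(1, n + 1), deg=7)
unfolded = np.polyval(coeffs, x)

s = np.diff(unfolded)
s = s[s > 0]         # drop rare non-monotonicities of the fit at the edges
```

By construction the unfolded spacings have mean close to one; their histogram can then be compared directly with the Wigner surmise.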
\begin{figure}[b!]
\centering
\includegraphics[width=1.01\columnwidth]{fig12.pdf}
\caption{\label{Fig:GS}
(a): The density profile of the ground state $\langle \hat{n}_i\rangle_{\text{GS}}$ shows large particle occupation up to $i=2N_p$. Outside this region, the density starts decaying exponentially, as shown in the inset.
(b): The finite size scaling of the energy gap $\Delta E$ shows that it vanishes as $1/L$, thus indicating that the model is gapless in the thermodynamic limit.
(c): Entanglement entropy across the central cut grows logarithmically with strong finite size corrections (dashed orange and green lines show logarithmic fits), providing additional evidence that the ground state is critical.
}
\end{figure}
\section{Ground state characterization}\label{App:GS}%
In this Appendix we characterize the ground state, studying the scaling of the energy gap and of the entanglement entropy.
As the Hamiltonian~(\ref{Eq:Hr}) only has hopping terms, the low-lying eigenstates need to have a large overlap with product states that maximize the number of configurations to which hopping is allowed. In graph language, see Figure~\ref{Fig:G} for an example, these product states correspond to vertices with the largest possible connectivity.
For $r=2$, the state with highest connectivity is $|\underbrace{\bullet\circ\bullet\circ\bullet\dots\bullet\circ}_{2N_p}\underbrace{\circ\circ\circ\dots\circ}_{L-2N_p}\rangle$, with connectivity $2N_p-1$, hence we expect the ground state to have a large weight on the initial $2N_p$ sites. In Figure~\ref{Fig:GS}(a) we plot the density profile of the ground state of the Hamiltonian~(\ref{Eq:Hr=2}) for different system sizes from $L=4$ to $L=25$ against a rescaled $x$-axis $i/2N_p$. The figure confirms the prediction: the ground state is confined within the first $2N_p$ sites, with an exponentially decaying density outside of this region, as shown in the inset. This behavior is different from the one observed in the quantum East model in the absence of particle conservation~\cite{Pancotti2020,Marino2022}, where occupation immediately decays exponentially.
We further study the scaling of the energy gap and of the entanglement entropy. As clearly shown in Figure~\ref{Fig:GS}(b), the energy gap $\Delta E$ vanishes as the inverse system size, suggesting that the model is in a gapless phase in the thermodynamic limit. Additionally, the entanglement entropy of the ground state across the central cut in the chain presents a slow logarithmic growth. These results suggest that the ground state is critical.
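The entanglement entropy across the central cut is obtained from the Schmidt decomposition of the state. A self-contained sketch (not the code used for the figure) for a chain of two-level sites:

```python
import numpy as np

def half_chain_entropy(psi, L):
    """Von Neumann entropy across the central cut of an L-site chain of
    two-level sites; psi is given in the full 2**L product basis."""
    lA = L // 2
    M = psi.reshape(2**lA, 2**(L - lA))      # bipartition A|B
    svals = np.linalg.svd(M, compute_uv=False)
    p = svals**2                             # Schmidt weights
    p = p[p > 1e-14]
    return -np.sum(p * np.log(p))

# Sanity checks: a product state gives S = 0, a Bell-like pair shared
# across the cut gives S = ln 2.
L = 4
prod = np.zeros(2**L); prod[0b1010] = 1.0
bell = np.zeros(2**L)
bell[0b0110] = bell[0b1001] = 1 / np.sqrt(2)
```

Applied to the exact ground state vector, this routine yields the logarithmic growth shown in Figure~\ref{Fig:GS}(c).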
\section{Construction of left parts of separable eigenstates}\label{App:PsiL}
In this section we report the left-restricted eigenvectors entering Eq.~(\ref{Eq:zero S evec}) for all sub-system sizes we were able to investigate numerically for $r=2$. These were used in the main text to correctly count the global number of zero entanglement eigenstates $\mathcal{N}_S$ shown in Figure~\ref{Fig:Special evecs}(b). We remind here that these eigenstates have to fulfill two conditions
\begin{itemize}
\item[(i)] They have to be an eigenstate of the problem restricted to $m$ particles in $\ell$ sites, with $\ell\leq 3m-2$.
\item[(ii)] They must have zero density on the boundary site $\ell$: $\bra{\psi_m^\ell}\hat{n}_\ell\ket{\psi_m^\ell} = 0$.
\end{itemize}
Additionally we observe that these left-restricted eigenvectors always correspond to zero energy.
To obtain these states, we take advantage of the large number of zero modes of the Hamiltonian~(\ref{Eq:Hr=2}). Within the degenerate sub-space, one can perform unitary transformations and obtain a new set of zero energy eigenstates where at least one satisfies the condition (ii) above. To find the correct states in an efficient way, we build the matrix $N_{\alpha,\beta} = \bra{E^{m,\ell}_\alpha}\hat{n}_\ell\ket{E^{m,\ell}_\beta}$ of the expectation values of the density on the last site on eigenstates of the Hamiltonian reduced to $(m,\ell)$. We then diagonalize $N_{\alpha,\beta}$ and check whether it has zero eigenvalues. If so, the corresponding eigenvector is still an eigenstate of the reduced Hamiltonian, and, by construction, it satisfies condition (ii). We notice that this method implements a sufficient condition, which implies that there could be other states that fulfill the same set of restrictions. However, our goal here is merely to provide evidence of the existence of these states in several different system sizes.
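The rotation within the degenerate subspace can be sketched as follows (a toy example with a generic degenerate subspace and a diagonal density operator, not the production code). Since $\hat{n}_\ell$ is a projector, a zero eigenvalue of $N_{\alpha,\beta}$ yields a state annihilated by $\hat{n}_\ell$, i.e., with exactly zero density on the last site.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy degenerate subspace: 3 orthonormal "zero modes" in a 6-dim space.
V, _ = np.linalg.qr(rng.standard_normal((6, 3)))

# Diagonal density operator n_l: projector onto the basis states with the
# last site occupied (here, the last two basis states of the toy space).
n_l = np.diag([0., 0., 0., 0., 1., 1.])

# N_{ab} = <E_a| n_l |E_b>, diagonalized within the degenerate subspace.
N = V.T @ n_l @ V
w, U = np.linalg.eigh(N)
states = V @ U                       # rotated orthonormal basis

# n_l is positive semi-definite, so <psi|n_l|psi> = 0 forces n_l|psi> = 0.
zero_density = states[:, np.abs(w) < 1e-10]
```

In this toy example a zero eigenvalue is guaranteed, because $\operatorname{rank} N \le \operatorname{rank} \hat{n}_\ell = 2 < 3$; in the actual model its occurrence is checked numerically.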
In the following, we list the states for $m=3,4,5$ and $\ell=6,9,11$ respectively.
\begin{equation}
\begin{split}
\ket{\psi_3^6} &= \frac{1}{\sqrt{2}}\bigr(\ket{\bullet\bullet\circ\circ\bullet\circ}-\ket{\bullet\circ\bullet\bullet\circ\circ}\bigr) \\
\ket{\psi_4^9} &= \frac{1}{2}\bigr(\ket{\bullet\bullet\circ\circ\bullet\circ\circ\bullet\circ} - \ket{\bullet\bullet\bullet\circ\circ\circ\circ\bullet\circ}\bigr) +\frac{1}{4}\bigr(\ket{\bullet\circ\circ\bullet\bullet\bullet\circ\circ\circ}+\ket{\bullet\circ\bullet\bullet\circ\bullet\circ\circ\circ} \\
&+\ket{\bullet\circ\circ\bullet\circ\bullet\bullet\circ\circ} + \ket{\bullet\bullet\bullet\circ\circ\bullet\circ\circ\circ} -\ket{\bullet\bullet\circ\bullet\bullet\circ\circ\circ\circ}-\ket{\bullet\bullet\circ\circ\bullet\bullet\circ\circ\circ} \\
&-\ket{\bullet\circ\circ\bullet\bullet\circ\circ\bullet\circ} -\ket{\bullet\circ\bullet\circ\bullet\circ\bullet\circ\circ} \bigr) \\
\ket{\psi_5^{11}} & = \frac{1}{\sqrt{6}}\bigr(\ket{\bullet\circ\circ\bullet\bullet\bullet\bullet\circ\circ\circ\circ} + \ket{\bullet\circ\bullet\bullet\circ\circ\bullet\bullet\circ\circ\circ} + \ket{\bullet\bullet\circ\circ\bullet\bullet\circ\circ\bullet\circ\circ} + \ket{\bullet\bullet\bullet\circ\circ\circ\bullet\circ\circ\bullet\circ} \\
&- \ket{\bullet\circ\bullet\circ\bullet\bullet\circ\bullet\circ\circ\circ} - \ket{\bullet\bullet\circ\bullet\circ\circ\bullet\circ\bullet\circ\circ}\bigr)
\end{split}
\end{equation}
Additional states are present, which we do not write down for the sake of brevity. However, we point out the existence of recursively stacked eigenstates, as mentioned in the main text, and of states where the right part corresponds to a single isolated particle.
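The state $\ket{\psi_3^6}$ can be checked directly by exact diagonalization. The sketch below is our own verification, under the assumptions $t_1=t_2=1$, hard-core bosons, and the convention $\hat{n}_0 = 0$ for the constraint on the first bond; it confirms that the state is annihilated by the Hamiltonian and has zero density on site $\ell = 6$.

```python
import numpy as np

L = 6

def occ(state, i):                   # sites are labelled 1..L; n_0 := 0
    return 0 if i < 1 else (state >> (i - 1)) & 1

def build_H(L):
    """Hard-core-boson Hamiltonian with r = 2 and t1 = t2 = 1: hopping on
    bond (i, i+1) is facilitated iff site i-1 or site i-2 is occupied."""
    dim = 2**L
    H = np.zeros((dim, dim))
    for s in range(dim):
        for i in range(2, L):                      # bond (i, i+1)
            if occ(s, i - 1) or occ(s, i - 2):     # kinetic constraint
                if occ(s, i) != occ(s, i + 1):     # one particle can hop
                    t = s ^ (1 << (i - 1)) ^ (1 << i)
                    H[t, s] += 1.0
    return H

def ket(bits):                       # '1' = occupied, leftmost char = site 1
    v = np.zeros(2**L)
    v[int(bits[::-1], 2)] = 1.0
    return v

psi = (ket('110010') - ket('101100')) / np.sqrt(2)   # |psi_3^6>
H = build_H(L)
```

One checks that $\|H\ket{\psi_3^6}\| = 0$ and $\bra{\psi_3^6}\hat{n}_6\ket{\psi_3^6} = 0$, i.e., conditions (i) and (ii) above.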
\section{Quantum Hilbert space fragmentation for generic Hamiltonian parameters}\label{App:generic r}
Throughout the main text, we often mentioned that the results regarding quantum fragmentation hold irrespective of the range of the constraint $r$ and of the values of the hopping amplitudes $t_\ell$. In the following, we provide evidence in support of the generality of recursive fragmentation.
\begin{figure}[h]
\includegraphics[width=.99\textwidth]{fig13.pdf}
\caption{\label{Fig:S0 generic}
(a): Entanglement entropy of the eigenstates of the Hamiltonian for range $r=2$, random hopping parameters $t_{1}=0.84$, $t_{2}=0.49$, and system size $L=16$. The presence of zero entanglement eigenstates, highlighted by the red crosses, confirms that quantum fragmentation is insensitive to the value of the hopping amplitudes. (b)-(c): A similar result is obtained for different values of the range $r$. The central panels refer to $r=1$, $N_p=8$ and $L=15$, while the right ones show $r=3$, $N_p=5$ and $L=17$.
}
\end{figure}
In Figure~\ref{Fig:S0 generic}, we first show the entanglement entropy of eigenstates for $r=2$ and Hamiltonian
\begin{equation}
\label{Eq:Hr=2,t1t2}
\hat{H} = \sum_{i=2}^{L-1}(t_1\hat{n}_{i-1}+t_2\hat{n}_{i-2} - t_2\hat{n}_{i-1}\hat{n}_{i-2})(\hat{c}^\dagger_{i+1}\hat{c}_i+\text{H.c.}),
\end{equation}
with generic, although homogeneous, hopping amplitudes $t_1,t_2$. In the leftmost panel, we highlight the presence of zero entanglement eigenstates in the half-chain cut for a random choice of the hopping parameters. The density profile of these special eigenstates is similar to the one shown in Figure~\ref{Fig:Special evecs}(a), although the density profile in the left region has a more complicated pattern due to the different values of $t_{1,2}$.
Next, we show the presence of recursive fragmentation in the generic Hamiltonian~(\ref{Eq:Hr}). In the central and right panels of Figure~\ref{Fig:S0 generic} zero entanglement eigenstates (red crosses) appear across the central cut for both $r=1$ and $r=3$. As for the random $t_{1,2}$ case, the structure of these eigenstates is akin to the one obtained in Eq.~(\ref{Eq:zero S evec}), featuring an empty region of $r+1$ sites disconnecting the left region from the right one.
Thus we provide numerical evidence in support of the generic form of the zero entropy eigenstates $\ket{E_{S=0}}$ proposed in the main text.
\begin{figure}[b!]
\centering
\includegraphics[width=1.01\columnwidth]{fig14.pdf}
\caption{\label{Fig:n dynamics scaling-r}
(a): The dynamics of the density on the last site $\langle \hat{n}_L(t)\rangle$ for several different system sizes. The slow logarithmic growth is evident for all $L\geq 16$. At larger system sizes $L\geq19$ the slope becomes independent of system size, as well as the saturation value, thus suggesting a universal behavior.
(b): The density dynamics for different values of the range $r$ always shows a logarithmic behavior. While the quantitative details change between different values of $r$, the qualitative feature of the logarithmic growth persists, thus confirming our claim of generality of the results. The data are obtained on a chain of $N_p=9$ and $L=17$ for $r=1$, and $N_p=6$ and $L=21$ for $r=3$.
}
\end{figure}
\section{Additional evidence of slow dynamics}\label{App:generic dyn}
\begin{figure}[tb]
\centering
\includegraphics[width=1.01\columnwidth]{fig15.png}
\caption{\label{Fig:short n dynamics}
(a) Spreading of the density in a system with $L=37$ sites and $N_p=13$ bosons. Lines of constant value $\varepsilon$ highlight the very different behavior observed in the two regions $i \lessgtr 2N_p$.
(b) The inverse dynamical exponent $1/z_r(t)$ is always super-diffusive. While for a large threshold it decays to $0$ indicating the onset of logarithmic growth, for small values of $\varepsilon$ the dynamical exponent seems to saturate approaching the asymptotic value (weakly dependent on the threshold value), before the onset of boundary effects. As shown in the right panel, the asymptotic super-diffusive behavior of $1/z_r$ is generic irrespective of the choice of the range of the constraint. The data shown in this panel correspond to $N_p=11$ and $L=21,31,41$ for $r=1,2,3$ respectively.}
\end{figure}
In the main text we provided evidence of slow dynamics from the time-evolution of the density operator in large systems and from the behavior of the root-mean-square displacement. Here, we present some additional data regarding system size scaling of the density dynamics as well as the observation of slow dynamics for generic $r$. Finally, we present an additional measure for the logarithmic behavior of the particles spreading.
In Figure~\ref{Fig:n dynamics scaling-r}(a) we show the system size scaling of the dynamics of the density on the last site of the chain, $\langle \hat{n}_L(t)\rangle$. All the curves present logarithmic growth, and for larger system sizes $L\geq19$ the slope becomes roughly constant. The absence of logarithmic behavior for smaller system sizes $L<16$ is in agreement with the data shown in the main text, where $R(t)$ quickly saturates for $L=13$.
Similar slow dynamics are observed in the time-evolution generated by Hamiltonians with generic constraint range $r$. In Figure~\ref{Fig:n dynamics scaling-r}(b) we present the growth of the density in the last three sites of two chains of length $L=17$ and $L=21$ for $r=1$ and $r=3$ respectively. As the data suggest, the dynamics in the rightmost part of the chain always presents logarithmic behavior, irrespective of the range of the constraint. However, the quantitative details are affected by $r$.
To analyze the spreading of the density, in the main text we presented the behavior of the root-mean-square displacement $R(t)$ together with the respective dynamical exponent $z_R(t)$. Here, we approach the same question using a different measure, namely the time-dependence of the expansion of the density profile. This spreading distance $\delta r$ is defined as the distance from the domain wall boundary, $i=N_p$, at which density becomes larger than a certain threshold $\varepsilon \ll 1$. The spreading distance $\delta r$ is expected to asymptotically behave as a power-law in time, defining a dynamical exponent $z_r$ such that $\delta r \approx t^{1/z_r}$. However, the limited system sizes available to our numerical study do not allow us to reach the asymptotic regime, and we are forced to study the time-dependent analogue $z_r(t)$, obtained through the logarithmic derivative of the spreading distance with respect to time, $(z_r(t))^{-1}=d\ln\delta r /d\ln t$.
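The estimator for the time-dependent exponent can be sketched as follows (synthetic power-law data, not the actual $\delta r(t)$; the discrete logarithmic derivative is taken with `numpy.gradient`).

```python
import numpy as np

# Synthetic spreading distance delta_r(t) ~ t^(1/z) with z = 2, used to
# validate the estimator 1/z_r(t) = d ln(delta_r) / d ln(t).
t = np.logspace(0, 4, 200)
z_true = 2.0
delta_r = t**(1 / z_true)

inv_z = np.gradient(np.log(delta_r), np.log(t))   # discrete log-derivative
```

On an exact power law the estimator returns $1/z_r(t) = 1/2$ at every time; on the real data it interpolates between the transient and the asymptotic regime.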
In panel~(a) of Figure~\ref{Fig:short n dynamics} we show a heat-map of the density dynamics for $L=37$ sites, superimposed with curves of constant $\langle \hat{n}_i(t)\rangle=\varepsilon$, for values of $\varepsilon \in [0.1,10^{-10}]$, above the accuracy limit $O(10^{-12})$ of the $4$-th order Runge-Kutta algorithm. For each threshold, we show in panel~(b) the time-dependent dynamical exponent. For the largest values of $\varepsilon$ the dynamical exponent has a super-diffusive plateau at $1/z_r(t)\approx 0.7$ before quickly vanishing as expected from the logarithmic dynamics of the density. On the other hand, at smaller thresholds the dynamical exponent seems to saturate to a finite value, before it eventually starts decreasing due to boundary effects.
The saturation value of the time-dependent dynamical exponent for small thresholds has a weak dependence on the value of $\varepsilon$. As $\varepsilon\to0$, $1/z_r$ approaches a $r$-dependent saturation value, monotonically increasing as the range of the constraint becomes larger, as shown in the right panel of Figure~\ref{Fig:short n dynamics}(b). This behavior is in agreement with the expectation that at $r\to \infty$ the system should approach ballistic dynamics.
\end{appendix}
\bibliographystyle{SciPost_bibstyle}
\section{Introduction}
Let $K$ be a field. Fix positive integers $k$ and $n_i$, $1\le i\le k$. Set $Y:= \prod _{i=1}^{k} \enm{\mathbb{P}}^{n_i}$ (the
multiprojective space with $k$ non-trivial factors of dimension $n_1,\dots ,n_k$). Set $r:= -1+ \prod _{i=1}^{k} (n_i+1)$.
Let $\nu : Y\to \enm{\mathbb{P}}^r$ denote the Segre embedding of the multiprojective space $Y$. Thus $X:= \nu (Y)$ is a Segre variety of
dimension $n_1+\cdots +n_k$. It was introduced by Corrado Segre in 1891 (\cite{seg}). See \cite[Ch. 25]{ht} for its geometry over a
finite field. In the last 30 years this variety has had a prominent role in the applied sciences, because it is strongly related
to tensors and it was realized that tensors may be used in Engineering and other sciences (\cite{l}).
Let $S\subset Y$ be a finite subset. Set
$e(S):= h^1(\enm{\cal{I}} _S(1,\dots ,1)) = \# (S) -1 -\dim \langle \nu (S)\rangle$, where $\langle \ \ \rangle$ denotes the linear
span. The minimal multiprojective subspace
$Y'$ of
$Y$ containing
$S$ is the multiprojective space $\prod _{i=1}^{k} \langle \pi _i(S)\rangle \subseteq Y$, where $\langle \pi _i(S)\rangle$
denotes the linear span of the finite set $\pi _i(S)$ in the projective space $\enm{\mathbb{P}}^{n_i}$. We say that $Y'$ is the multiprojective subspace generated by $S$ and that $S$ is \emph{nondegenerate} if $Y'=Y$. We say that
$S$ is
\emph{linearly independent} if
$\nu (S)\subset \enm{\mathbb{P}}^r$ is linearly independent. By the definitions of Segre embedding and of the integer $e(S)$ we have $\dim
\langle
\nu (S)\rangle =\# (S)-1-e(S)$. In particular $S$ is linearly dependent if and only if $e(S)>0$. We say that
$S$ is a
\emph{circuit} if
$S$ is linearly dependent, but every proper subset of $S$ is linearly independent.
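Over $K=\mathbb{R}$ (or $\mathbb{Q}$) the integer $e(S)$ can be computed directly from matrix ranks, since the Segre embedding is the Kronecker product of the coordinate vectors. A small sketch (our own illustration; points are represented as tuples of coordinate vectors):

```python
import numpy as np
from functools import reduce

def segre(p):
    """Segre embedding: a point of P^{n_1} x ... x P^{n_k}, given as a
    tuple of coordinate vectors, maps to their Kronecker product."""
    return reduce(np.kron, p)

def e_defect(S):
    """e(S) = #S - 1 - dim <nu(S)> (projective dimension of the span)."""
    M = np.array([segre(p) for p in S])
    return len(S) - np.linalg.matrix_rank(M)

rng = np.random.default_rng(3)
# Five generic points of P^1 x P^1: since r = 3, they are always linearly
# dependent, and generically e(S) = 1.
S = [(rng.standard_normal(2), rng.standard_normal(2)) for _ in range(5)]
```

In particular, $S$ is a circuit exactly when `e_defect(S) == 1` and every proper subset has defect $0$.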
Everything said up to now uses only the
linear structure of the ambient $\enm{\mathbb{P}}^r$. Now we describe the new feature coming from the structure of $Y$ as a
multiprojective space, in particular the structure of linear subspaces contained in the Segre variety $\nu (Y)$. We say that a
finite set $S\subset Y$ is \emph{minimal} if there is no line $L\subset \nu (Y)$ such that $\# (\nu (S)\cap L) \ge 2$. Of
course, if $\# (\nu (S)\cap L) \ge 3$, then $S$ is not linearly independent. However, a non-minimal finite set $S$ may be
linearly independent (take as $S$ two points such that the line $\langle \nu (S)\rangle$ is contained in $\nu (Y)$). When $S$
is linearly independent there is no $A\subset Y$ such that $\# (A)<\# (S)$ and $\langle \nu (A)\rangle \supseteq
\langle
\nu (S)\rangle$, but if $S$ is not minimal, say there is $L\subset Y$ such that $\nu (L)$ is a line and $L\cap S\supseteq
\{a,b\}$ with $a\ne b$, then for each $q\in \langle \nu (S)\rangle$ there is $o\in L$ such that $q\in \langle \nu (\{o\}\cup (S\setminus \{a,b\}))\rangle$. Note that $\# (\{o\}\cup (S\setminus \{a,b\})) <\# (S)$. We say that $S$ is \emph{$i$-minimal} if there is no curve $J\subset Y$ such that $\nu (J)$ is a line, $\# (J\cap S)\ge 2$ and $J$ is mapped isomorphically into the $i$-th factor of $Y$, while
it is contracted to a point by the projections onto the other factors of $Y$. The finite set $S$ is minimal if and only if it
is $i$-minimal for all $i$. The minimality condition is in general quite weaker/different from the assumptions needed to apply
the famous Kruskal's criterion to two subsets $A, B\subset S$ with $A\cup B=S$ and $\# (A) = \# (B)$
(\cite{co,cov,cov2,ddl1,ddl2,ddl3,kr}).
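Minimality is straightforward to test in coordinates: $S$ fails to be $i$-minimal exactly when two of its points agree, projectively, in every factor except the $i$-th, i.e., when $\eta_i$ is not injective on $S$. A sketch (0-based factor indices and points as tuples of coordinate vectors are our own conventions):

```python
import numpy as np

def proportional(u, v):
    """Projective equality of two nonzero coordinate vectors."""
    return np.linalg.matrix_rank(np.vstack([u, v])) < 2

def is_i_minimal(S, i):
    """S (a list of distinct points) is i-minimal iff eta_i, the map
    forgetting the i-th coordinate, is injective on S."""
    for a in range(len(S)):
        for b in range(a + 1, len(S)):
            if all(proportional(S[a][j], S[b][j])
                   for j in range(len(S[a])) if j != i):
                return False
    return True

def is_minimal(S):
    return all(is_i_minimal(S, i) for i in range(len(S[0])))
```

Two points sharing all coordinates but the $i$-th lie on a curve $J$ as above, since any two points of $\enm{\mathbb{P}}^{n_i}$ are contained in a line.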
We classify circuits with cardinality $4$ (Proposition \ref{e2}) and give the following classification of circuits formed by $5$ points.
\begin{theorem}\label{e3}
Let $\Sigma$ denote the set of all nondegenerate circuits $S\subset Y$ such that $\# (S) =5$. Then one of
the following cases occurs:
\begin{enumerate}
\item $k=1$, $n_1=3$;
\item $k=2$, $n_1=n_2=1$;
\item $k=2$ and $n_1+n_2 =3$; all $S\in \Sigma$ are described in Example \ref{p2p1};
\item $k=3$, $n_1=n_2=n_3=1$; all $S\in \Sigma$ are described in Lemma \ref{p1p1p1}; in this case $\Sigma$ is an irreducible
variety of dimension $11$.
\end{enumerate}
\end{theorem}
All $S\in \Sigma$ in the first two cases listed in Theorem \ref{e3} are obvious and we describe them in Remark
\ref{e3.0}. In the classification of case $k=3$ we use the rational normal curves contained in a Segre variety.
We also
classify the nondegenerate sets
$S$ with
$\# (S)=5$ and
$e(S)\ge 2$ (Proposition
\ref{e3.01}).
The study of
linearly dependent subsets of Segre varieties with low cardinality was started in
\cite{sac}.
The author has no conflict of interest.
\section{Preliminaries}
Take $Y =\enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$, $k\ge 1$, $n_i>0$, $1\le i\le k$. Set $r:= -1 + \prod _{i=1}^{k} (n_i+1)$. Let $\nu: Y \to \enm{\mathbb{P}}^r$
denote the Segre embedding of $Y$. We will use the same name, $\nu$, for the Segre embedding of any
multiprojective subspace $Y'\subseteq Y$. Set
$X:=
\nu (Y)\subset \enm{\mathbb{P}}^r$. For any $q\in \enm{\mathbb{P}}^r$ the $X$-rank $r_X(q)$ of $q$ is the minimal cardinality of a finite subset $S\subset X$
such that $q\in \langle S\rangle$, where $\langle \ \ \rangle$ denote the linear span. For any $q\in \enm{\mathbb{P}}^r$ let $\enm{\cal{S}} (Y,q)$
denote the set
of all $A\subset Y$ such that $\# (A) =r_X(q)$ and $q\in \langle \nu (A)\rangle$. In the introduction we observed that $S\notin \enm{\cal{S}} (Y,q)$ for any $q\in \enm{\mathbb{P}}^r$ if $S$ is not minimal.
For any $i\in \{1,\dots ,k\}$ set
$Y_i:= \prod_{h\ne i} \enm{\mathbb{P}}^{n_h}$ with the convention that $Y_1$ is a single point if $k=1$.
Let $\eta _i: Y\to Y_i$ denote
the projection (it is the map forgetting the $i$-th coordinate of the points of $Y$). We have $h^0(\enm{\cal{O}} _Y(1,\dots ,1)) =r+1$. For any $i\in \{1,\dots ,k\}$ let $\enm{\cal{O}} _Y(\epsilon _i)$ (resp. $\enm{\cal{O}}
_Y(\hat{\epsilon}_i)$) be the line bundle $\enm{\cal{O}} _Y(a_1,\dots ,a_k)$ on $Y$ with multidegree $(a_1,\dots ,a_k)$ with $a_i=1$ and
$a_j =0$ for all $j\ne i$ (resp. $a_i=0$ and $a_j=1$ for all $j\ne i$). We have $h^0(\enm{\cal{O}} _Y(\epsilon _i)) =n_i+1$ and $h^0(\enm{\cal{O}}
_Y(\hat{\epsilon}_i)) =(r+1)/(n_i+1)$.
\begin{definition}
Take $S\subset Y$ such that $e(S)>0$. We say that $S$ is \emph{strongly essential} if $e(S')=0$ for all $S'\subset S$ such that
$\# (S') =\# (S)-e(S)$.
\end{definition}
Take a strongly essential set $S\subset Y$ and any $S'\subset S$. We have $e(S') =\max \{0,e(S)-\# (S)+\# (S')\}$.
A set $S\subset Y$ with $e(S)=1$ is strongly essential if and only if it is a circuit.
We recall the following lemma (\cite[Lemma 2.4]{bbcg1}), whose proof works over any algebraically closed field, although it was only claimed over $\enm{\mathbb{C}}$, or at least over an algebraically closed base field
with characteristic $0$.
\begin{lemma}\label{ee0}
Take $q\in \enm{\mathbb{P}}^r$ and finite sets $A, B\subset Y$ irredundantly spanning $q$. Fix an effective divisor $D\subset Y$.
Assume $A\ne B$ and $h^1(\enm{\cal{I}} _{A\cup B\setminus D\cap (A\cup B)}(1,\dots ,1)(-D))=0$. Then $A\setminus A\cap D =B\setminus
B\cap D$.
\end{lemma}
\begin{proof}
In \cite{bbcg1} there is the default assumption that the base field is $\enm{\mathbb{C}}$ (or at least an algebraically closed field with
characteristic $0$). The proof of \cite[Lemma 2.5]{bbcg1} (whose statement implies \cite[Lemma 2.4]{bbcg1}) never uses any
assumption on the characteristic of the base field. Now we explain why the statement of Lemma \ref{ee0} over an algebraic
closure $\overline{K}$ of $K$ implies the statement over $K$. By assumption all points of $A$ and $B$ are defined over $K$.
The dimension of a linear span of a subset of $\nu(A\cup B)$ is the same over $K$ or over $\overline{K}$. Since the
statement of the lemma also uses cohomology groups of coherent sheaves we also need to use that the dimension of the cohomology
groups of coherent sheaves on projective varieties defined over $K$ is preserved when we extend the base field $K\subseteq
\overline{K}$, because $\overline{K}$ is flat over $K$ (\cite[Proposition III.9.3]{h}).
\end{proof}
\begin{remark}Fix an integer $e>0$ and an integral and non-degenerate variety $W\subset \enm{\mathbb{P}}^r$ defined over $\overline{K}$. Since
$W(\overline{K})$ is Zariski dense, $r+1+e$ is the maximal cardinality of a finite set $S\subset W(\overline{K})$ such that
$\dim \langle S\rangle =\# (S)-1-e$. The same is true if $W$ is defined over $K$ and we require that $S\subset W(K)$ and
that $W(K)$ is Zariski dense in $W(\overline{K})$. The minimal such cardinality of any such set $S\subset W(\overline{K})$
is
$e+2$ if and only if $W$ contains a line; otherwise it is larger. If $K$ is finite, to get the same we need $\# (K) \ge e+1$
and that the line $L\subset W$ is defined over $K$. The Segre variety has plenty of lines defined over the base field $K$.
\end{remark}
\section{Linear algebra inside the Segre varieties}\label{Sl}
Take $Y:= \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$, $n_i>0$ for all $i$.
\begin{remark}Fix a finite nondegenerate set $S\subset Y$.
We assume $h^1(\enm{\cal{I}} _S(1,\dots ,1)) >0$ (i.e., that $\nu (S)$ is not linearly independent, i.e., (with our terminology) that $S$ is linearly dependent) and $h^1(\enm{\cal{I}} _{S'}(1,\dots ,1)) =0$ for all $S'\subsetneq S$ (i.e., that each proper subset of $S$ is linearly independent). Equivalently, let $S\subset Y$ be a nondegenerate circuit. In particular we have $h^1(\enm{\cal{I}} _S(1,\dots ,1)) =1$, i.e., $\dim \langle \nu (S)\rangle = \# (S)-2$. Since $\enm{\cal{O}} _Y(1,\dots ,1)$ is very ample, we have $\# (S)\ge 3$.
\end{remark}
\begin{remark}\label{eo0}
Take $Y = \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$ and set $m:= \max \{n_1,\dots ,n_k\}$. It is easy to check that $m+1$
is the minimal cardinality of a subset of $Y$ generating $Y$, i.e., not contained in a proper multiprojective subspace of $Y$ (just take
$S$ such
that $\dim \langle \pi _i(S)\rangle =\min \{n_i,\# (S)-1\}$ for all $i$).
\end{remark}
\begin{example}\label{ee1}
Let $S\subset Y$ be a finite linearly independent subset, $S\ne \emptyset$, and set $s:= \# (S)$. Fix
$i\in
\{1,\dots ,k\}$. We construct another subset $S_i\subset Y$ such that $\# (S_i) =s+1$ and $\#
(S_i\cap S) =s-1$ in the following way and discuss when (assuming $s\le r$) $S_i$ is linearly independent. Fix $o\in S$. Take a line $L\subseteq \enm{\mathbb{P}}^{n_i}$ containing
$o_i$. Fix two points $o'_i,o''_i\in L\setminus L\cap \pi _i(S)$. Let $o'$ (resp. $o''$) be the only point of $Y$ with $\pi
_i(o') = o'_i$ (resp. $\pi _i(o'') = o''_i$) and $\pi _j(o')=\pi _j(o'') = \pi _j(o)$ for all $j\ne i$. Set $S_i:= (S\setminus
\{o\})\cup \{o',o''\}$. We have $\# (S_i) =\# (S)+1$. By assumption we have $\dim \langle \nu (S)\rangle =s-1$. Let
$D\subseteq D'\subseteq Y$ be the multiprojective subspace of $Y$ with $\pi _j(o)$ as their projection for all $j\ne i$, $\pi
_i(D)=L$ and $\pi _i(D') =\enm{\mathbb{P}}^{n_i}$. The set $S_i$ is linearly independent for general $o'_i,o''_i\in L$ (resp. for general
$o'_i,o''_i\in L$ and for a general line $L\subseteq \enm{\mathbb{P}}^{n_i}$) if and only if $\dim \langle \nu (S\cup D)\rangle \ge s$
(resp. $\dim \langle \nu (S\cup D')\rangle \ge s$). Now assume $s\ne r+1$ and set $E:= S\setminus \{o\}$. Take a general $(a_1,\dots ,a_k)\in Y$, and a general
line $J\subset \enm{\mathbb{P}}^{n_i}$ such that $a_i\in J$. Let $T\subset Y$ be the irreducible curve with $\pi _h(T)=a_h$ for all $h\ne i$, $\pi _i(T)=J$ and $\pi _{i|T}: T\to J$ an isomorphism.
Then $L:= \nu (T)$ is a line and $\dim \langle L\cup \nu (E)\rangle =s$. We describe in Lemma \ref{ee1.1} and Remark
\ref{ee1.12} some cases in which we may take $\nu (o)\in L$. Of course, for any linearly independent set $A\subset Y$ with
$\# (A) \le r$, there is $o\in Y\setminus A$ such that $A\cup \{o\}$ is linearly independent, because $\nu (Y\setminus A)$
spans
$\enm{\mathbb{P}}^r$.
\end{example}
The set $S_i$ constructed in Example \ref{ee1} will be called an \emph{elementary increasing} of $S$ in the $i$-th factor or an \emph{$i$-elementary increasing} of $S$. We say that the set $S$ is obtained from $S_i$ by an \emph{elementary decreasing} of the non-minimal set $S_i$ in the $i$-th direction or by an \emph{$i$-elementary decreasing}.
Note that $S_i\subset S$ is obtained from some $i$ making an elementary increasing along the $i$-th component if and only if
$\eta _{i|S_i}$ is not injective.
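An elementary increasing is easy to realize in coordinates: $\nu (o)$ lies on the line spanned by $\nu (o')$ and $\nu (o'')$, so $\langle \nu (S)\rangle \subseteq \langle \nu (S_i)\rangle$, and for a generic choice the dimension of the span grows by exactly one. A sketch in $\enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^2$ over $\mathbb{R}$ (our own toy example):

```python
import numpy as np
from functools import reduce

def segre(p):
    return reduce(np.kron, p)

def span_dim(S):                 # rank = dim of the linear span + 1
    return np.linalg.matrix_rank(np.array([segre(p) for p in S]))

rng = np.random.default_rng(4)

# Linearly independent pair {o, p} in P^2 x P^2.
o = (rng.standard_normal(3), rng.standard_normal(3))
p = (rng.standard_normal(3), rng.standard_normal(3))

# 1-elementary increasing: replace o by two points o1, o2 whose first
# coordinates lie on a line through pi_1(o) (generic direction d).
d = rng.standard_normal(3)
o1 = (o[0] + 1.0 * d, o[1])
o2 = (o[0] + 2.0 * d, o[1])
S, S1 = [o, p], [p, o1, o2]
```

In coordinates $2\,\nu (o_1)-\nu (o_2)=\nu (o)$, so $\nu (o)\in \langle \nu (o_1),\nu (o_2)\rangle$, and generically $S_1$ is linearly independent.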
\begin{remark}\label{ee1.0}
Take $S, S_i, o, o', o'' $ as in Example \ref{ee1}. Since $\nu (o)\in \langle \nu (\{o',o''\})\rangle$, we have $\langle \nu (S)\rangle\subseteq \langle \nu (S_i)\rangle$, with strict inclusion if and only if $S_i$ is linearly independent.
\end{remark}
\begin{lemma}\label{ee1.1}
Let $S\subset Y$ be a linearly independent finite subset, $S\ne \emptyset$, such that there is $i\in \{1,\dots ,k\}$ such that $\langle \pi _i(S)\rangle \subsetneq \enm{\mathbb{P}}^{n_i}$. Then $S$ has a linearly independent elementary increasing in the $i$-th direction.
\end{lemma}
\begin{proof}
Set $s:= \# (S)$. Let $Y'\subset Y$ be the multiprojective space with $\enm{\mathbb{P}}^{n_j}$ as its factor for all $j\ne i$ and $\langle \pi _i(S)\rangle$ as its $i$-th factor. By assumption we have $Y'\subsetneq Y$. Fix $o\in S$. A general line $L\subset \enm{\mathbb{P}}^{n_i}$ containing $\pi _i(o)$ is not contained in $\langle \pi _i(S)\rangle$. Call $Y''\subseteq Y$
the multiprojective space with $\enm{\mathbb{P}}^{n_j}$ as its factor for all $j\ne i$ and $\langle \pi _i(S)\cup L\rangle$ as its $i$-th factor. We have $S\subset Y'\subsetneq
Y''\subseteq Y$. Hence
$\nu (L)\nsubseteq \langle \nu (Y')\rangle$. Fix any $o'_i,o''_i\in \pi _i(L)\setminus \{\pi _i(o)\}$ such that $o'_i\ne
o''_i$. Take $o', o''\in Y$ with $\pi _i(o')=o'_i$, $\pi _i(o'')=o''_i$ and $\pi _j(o')=\pi _j(o'')=o_j$ for all $j\ne i$.
\end{proof}
\begin{remark}\label{ee1.12}
For each $a
=(a_1,\dots ,a_k)\in Y$ set $a[i]:= \eta _i^{-1}(\eta _i(a))$. Note that $a[i] \cong \enm{\mathbb{P}}^{n_i}$ and that $\nu (a[i])$ is an
$n_i$-dimensional linear space containing $\nu (a)$. Set $\{\{a\}\}:= \cup _{i=1}^{k} a[i]$. Note that $\dim
\langle\nu(\{\{a\}\})\rangle = n_1+\cdots +n_k$. For any finite set $A\subset Y$, $A\ne\emptyset$, set $A[i]:= \cup _{a\in A}
a[i]$ and $\{\{A\}\}:= \cup _{a\in A} \{\{a\}\}$. Now assume that $A$ is linearly independent and $\# (A)\le r$. Since any
two points of a projective space are contained in a line, $A$ has a linearly independent $i$-increasing (resp. a linearly
independent increasing) if and only if $\langle \nu (A)\rangle \subsetneq \langle \nu (A[i])\rangle$ (resp. $\langle \nu
(A)\rangle
\subsetneq \langle \nu (\{\{A\}\})\rangle$). By Lemma \ref{ee1.1} to prove that $A$ has a linearly independent elementary
increasing we may assume that $\langle \pi _i(A)\rangle =\enm{\mathbb{P}}^{n_i}$ for all $i$. If $k=2$ and $\langle \pi _i(A)\rangle
=\enm{\mathbb{P}}^{n_i}$ for at least one $i$, then it is easy to see that $\langle \nu (A)\rangle =\enm{\mathbb{P}}^r$.
\end{remark}
\begin{remark}\label{p2}
Fix a linearly independent $S\subset Y$ and take $i\in \{1,\dots ,k\}$ and $a\in Y_i$. Since $3$ collinear points
are not
linearly independent, we have $\# (S\cap \eta _i^{-1}(a)) \le 2$. Take a circuit $A\subset Y$. Either $\# (A\cap \eta
_i^{-1}(a)) \le 2$ or $A$ is formed by $3$ collinear points (and so $\enm{\mathbb{P}}^1$ is the minimal multiprojective space containing
$A$).
\end{remark}
Let $S\subset Y$ be a finite set such that $e(S)>0$. Note that $\# (S)\ge e(S)+2$. A point $o\in S$ is said to be
\emph{essential for $S$} if $e(S\setminus \{o\}) = e(S)-1$. If $o$ is not essential for $S$ we will often say that
$o$ is \emph{inessential for $S$}. Since $e(S)-e(S') \le \# (S)-\# (S')$ for all $S'\subset S$, $o$ is inessential for
$S$ if and only if $e(S\setminus \{o\}) = e(S)$.
Let $S\subset Y$ be a finite set such that $e(S)>0$. A \emph{kernel} of $S$ is a minimal subset $S'\subseteq S$ such that
$e(S')=e(S)$.
\begin{lemma}\label{p3}
Any finite linearly dependent subset of $Y$ has a unique kernel.
\end{lemma}
\begin{proof}
Take a finite set $S\subset Y$ such that $e(S)>0$. Let $S'$ and $S''$ be kernels of $S$. Assume $S'\ne S''$. By the definition of
kernels we have $S''\nsubseteq S'$. We order the points of $S''$, say $ S'' =\{o_1,\dots ,o_b\}$ so that $o_b\notin S'$. By the
definition of kernel we have
\begin{equation}\label{eqp1}
\dim
\langle
\nu (S)\rangle =\dim \langle \nu (S')\rangle+\# (S\setminus S').
\end{equation}
We first add $\{o_1,\dots ,o_{b-1}\}$ to $S'$ and get a set
$S_1\supseteq S'$. By (\ref{eqp1}) we get $\dim \langle \nu (S_1)\rangle =\dim \langle \nu (S')\rangle +b-1 -\# (S'\cap
S'')$. Then we add $o_b$. Since $e(S'')>0$, we have $\nu (o_b)\in \langle \nu (\{o_1,\dots ,o_{b-1}\}\rangle$ and hence
$\nu (o_b)\in \langle \nu (S_1)\rangle $, contradicting (\ref{eqp1}).
\end{proof}
\begin{lemma}\label{p4}
Let $S\subset Y$ be a finite subset such that $e(S)>0$. The kernel of $S$ is the set of all its essential points, i.e., the
tail of $S$ is the set of all its inessential points.
\end{lemma}
\begin{proof}
Fix an inessential point $o\in S$ (if any). Let $S'\subset S\setminus \{o\}$ be a minimal subset of $S\setminus \{o\}$ with
$e(S')=e(S)$. By Lemma \ref{p3} $S'$ is the unique kernel of $S$. Hence the tail of $S$ contains $o$. Thus the tail of $S$
contains all inessential points of $S$. No essential point of $S$ may belong to the tail.
\end{proof}
Let $S\subset Y$ be a finite subsets with $e(S)>0$. The \emph{tail} of $S$ is $S\setminus S'$, where $S'$ is the kernel of
$S$. The tail of $S$ is the set of all inessential points of
$S$, while the \emph{kernel} of $S$ is the set of all its essential points.
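The following elementary example, included only for illustration, shows the difference between the kernel and the tail.
\begin{example}
Take $k=1$ and $Y=\enm{\mathbb{P}}^2$. Let $S=\{p_1,p_2,p_3,q\}$ with $p_1,p_2,p_3$ distinct points of a line $L$ and $q\notin L$. Then $\dim \langle S\rangle =2$ and $e(S)=4-1-2=1$. Removing $q$ leaves three collinear points with $e=3-1-1=1=e(S)$, so $q$ is inessential for $S$, while removing any $p_i$ leaves a linearly independent set, so each $p_i$ is essential for $S$. Thus the kernel of $S$ is $\{p_1,p_2,p_3\}$ and the tail of $S$ is $\{q\}$.
\end{example}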
\begin{remark}\label{p3.00}
Let $A\subset \enm{\mathbb{P}}^m$ be a linearly dependent finite subset, i.e., a finite subset such that $e(A):= \# (A)-1-\dim \langle A\rangle >0$. We now sketch two different methods
to get a subset $B\subseteq A$ with $e(B) =e(A)$. Easy examples (even when $m=1$) show that neither of these methods
gives a unique $B$. The first method increases the number of points in a subset of $A$: we start with $o, o'\in A$
such that $o\ne o'$, for which obviously $e(\{o,o'\})=0$, and add points of $A$ one at a time.
\end{remark}
Consider the following construction: linear projections of a multiprojective space from proper linear subspaces of
one of its factors. Fix integers $0\le v < n$ and a $v$-dimensional linear subspace $V\subset \enm{\mathbb{P}}^n$. Let $\ell _V:
\enm{\mathbb{P}}^n\setminus V\to \enm{\mathbb{P}}^{n-v-1}$ denote the linear projection from $V$. Take $Y = \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$.
Fix an integer
$i\in \{1,\dots ,k\}$, an integer $v$ such that $0\le v <n_i$ and a $v$-dimensional linear subspace $V\subset \enm{\mathbb{P}}^{n_i}$. Let
$Y'$ (resp. $Y''$) be the multiprojective space with $k$ factors, with $\enm{\mathbb{P}}^{n_h}$ as its $h$-th factor for all $h\ne i$ and
with $V$ (resp. $\enm{\mathbb{P}}^{n_i-v-1}$) as its $i$-th factor. The multiprojective space $Y'$ (resp. $Y''$) has $k$ non-trivial
factors if and only if $v>0$ (resp. $v\le n_i-2$). Let $\ell _{V,i}: Y\setminus Y' \to Y''$ denote the morphism defined by the
formula $\ell _{V,i}(a_1,\dots ,a_k) = (b_1,\dots ,b_k)$ with $b_h=a_h$ for all $h\ne i$ and $b_i =\ell _V(a_i)$. When $K$ is
infinite we use the Zariski topology on $\enm{\mathbb{P}}^n(K)$ and the $K$-points of the Grassmannians.
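As a concrete instance of this construction (given here only for illustration), take $Y =\enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^1$, $i=1$, $v=0$ and $V =\{o\}$ a point of $\enm{\mathbb{P}}^2$. Then $Y' =\{o\}\times \enm{\mathbb{P}}^1$, $Y'' =\enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$ and $\ell _{V,1}: Y\setminus Y'\to Y''$ sends $(a_1,a_2)$ to $(\ell _V(a_1),a_2)$, i.e., it projects the first coordinate from $o$ and leaves the second coordinate unchanged.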
\begin{remark}\label{f3}
Fix a finite set $S\subset Y$, an integer $i\in \{1,\dots ,k\}$ and an integer $v$ such that $0\le v\le n_i-2$. Assume for the
moment that
$K$ is infinite. Let $V$ be a general (for the Zariski topology) $v$-dimensional linear subspace. Then $\ell _{V,i|S}$ is
injective. For fixed
$k, n_1,\dots ,n_k$ there is an integer $q_0$ such that for all $q\ge q_0$ there is a $v$-dimensional linear subspace $V\subset
\enm{\mathbb{P}}^{n_i}(\enm{\mathbb{F}} _q)$ such that $\ell _{V,i|S}$ is injective. When $\ell _{V,i|S}$ is injective, we obviously have $h^1(Y,\enm{\cal{I}}
_S(1,\dots ,1)) \le h^1(Y'',\enm{\cal{I}} _{\ell _{V,i}(S)}(1,\dots ,1))$. If $Y$ is the minimal multiprojective subspace containing
$S$, then $Y''$ is the minimal multiprojective space containing $\ell _{V,i}(S)$.
\end{remark}
\section{Rational normal curves inside a Segre variety}
Fix positive integers $k$ and $n_i$, $1\le i\le k$. Set $Y:= \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$. Let $\enm{\cal{B}} (n_1,\dots
,n_k)$
denote the set of all integral curves $D\subset Y$ such that $D = h(\enm{\mathbb{P}}^1)$ with $h =(h_1,\dots ,h_k): \enm{\mathbb{P}}^1\to Y$ with
$h_i: \enm{\mathbb{P}}^1\to \enm{\mathbb{P}}^{n_i}$ an embedding with $h_i(\enm{\mathbb{P}}^1)$ a rational normal curve of $\enm{\mathbb{P}}^{n_i}$. The set $\enm{\cal{B}} (n)$ of all
rational normal curves of $\enm{\mathbb{P}}^n$ is a rational variety of dimension $(n-1)(n+3)$. Thus $\enm{\cal{B}} (n_1,\dots ,n_k)$ is parametrized
by an irreducible variety. For any $D\in \enm{\cal{B}} (n_1,\dots ,n_k)$ we have $\dim \langle \nu (D)\rangle = n_1+\cdots +n_k$
and $\nu (D)$ is a degree $n_1+\cdots +n_k$ rational normal curve of $\langle \nu (D)\rangle$. Let
$D\subset Y$ be a curve. Obviously
$D\in
\enm{\cal{B}} (n_1,\dots ,n_k)$ if and only if the following conditions are satisfied:
\begin{itemize}
\item[(a)] $D$ is an integral curve;
\item[(b)] $\pi _{i|D}$ is birational onto its image for all $i=1,\dots ,k$;
\item[({c})] $\pi _i(D)$ is a rational normal curve of $\enm{\mathbb{P}}^{n_i}$ for all $i=1,\dots ,k$.
\end{itemize}
Obviously $D\in \enm{\cal{B}} (n_1,\dots ,n_k)$ if and only if the following
conditions are satisfied:
\begin{itemize}
\item[($a_1$)] $D$ is an integral curve;
\item[($b_1$)] $\deg (\nu (D)) =n_1+\cdots +n_k$;
\item[($c_1$)] $Y$ is the minimal multiprojective subspace of $Y$ containing $D$.
\end{itemize}
\begin{remark}
The integer $n_1+\cdots +n_k$ is the minimal degree of a connected and reduced curve $\nu (D)$, $D\subset Y$.
\end{remark}
\begin{remark}\label{n1}
Fix $D\in \enm{\cal{B}} (n_1,\dots ,n_k)$ and any finite subset $S\subset D$. Since $\dim \langle \nu (D)\rangle = n_1+\cdots +n_k$
and $\nu (D)$ is a rational normal curve of $\langle \nu (D)\rangle$, we have
$$e(S) = \max \{0,\# (S)-n_1-\cdots -n_k-1\}.$$
Since $\# (L\cap \nu (D))\le 1$ for each line $L\subset \nu (Y)$, $S$ is minimal.
\end{remark}
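For instance (an illustration of the formula above), take $Y =(\enm{\mathbb{P}}^1)^3$, $D\in \enm{\cal{B}} (1,1,1)$ and $S\subset D$ with $\# (S)=5$. Then $e(S) =5-3-1 =1$, while every proper subset of $S$ is linearly independent, since any $4$ points of a rational normal curve of $\enm{\mathbb{P}}^3$ are linearly independent; these are exactly the circuits studied in Lemma \ref{p1p1p1}.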
\begin{remark}\label{up1}
Take $Y:= (\enm{\mathbb{P}}^1)^k$. Let $T\subset Y$ be an integral curve. The \emph{multidegree} $(a_1,\dots ,a_k)$ of $T$ is defined in the
following way. If $\pi _i(T)$ is a point, then set $a_i:= 0$. If $\pi _i(T)=\enm{\mathbb{P}}^1$ let $a_i$ be the degree of the morphism
$\pi _{i|T}: T\to \enm{\mathbb{P}}^1$. If $k=3$, we say \emph{tridegree} instead of multidegree. Note that if $a_i>0$ for all $i$, then $T$
is not contained in a proper multiprojective subspace of $Y$. If $a_i=1$ for some $i$, then $\pi _{i|T}: T \to \enm{\mathbb{P}}^1$ is a
degree $1$ morphism between integral curves with the target smooth. By Zariski's Main Theorem (\cite{h}) $\pi _{i|T}$ is an
isomorphism and in particular $T\cong \enm{\mathbb{P}}^1$. Let $\enm{\cal{B}} _k$ denote the set of all $T\subset Y$ with multidegree $(1,\dots ,1)$.
We just saw that for any $T\in \enm{\cal{B}} _k$, $T\cong \enm{\mathbb{P}}^1$ and each $\pi _{i|T}: T\to \enm{\mathbb{P}}^1$ is an isomorphism. Thus, identifying $T$ with the $(k-1)$-tuple $(\pi _{2|T}\circ \pi _{1|T}^{-1},\dots ,\pi _{k|T}\circ \pi _{1|T}^{-1})$, $\enm{\cal{B}} _k$ (as
algebraic set) is isomorphic to $\mathrm{Aut}(\enm{\mathbb{P}}^1)^{k-1}$. We have $\enm{\cal{B}} _k =\enm{\cal{B}} (1,\dots ,1)$.
\end{remark}
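For instance, the small diagonal $T =\{(x,x,x): x\in \enm{\mathbb{P}}^1\}\subset (\enm{\mathbb{P}}^1)^3$ has tridegree $(1,1,1)$, i.e., $T\in \enm{\cal{B}} _3$, and $\nu (T)$ is a twisted cubic inside the Segre variety of $(\enm{\mathbb{P}}^1)^3$.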
Obviously $D\in \enm{\cal{B}} _k$ if and only if the following
conditions are satisfied:
\begin{itemize}
\item[($a_2$)] $D$ is an integral curve;
\item[($b_2$)] $D$ has multidegree $(1,\dots ,1)$.
\end{itemize}
\begin{example}\label{n2}
Fix positive integers $e$, $k$ and $n_i$, $1\le i\le k$, such that $k\ge 2$. Set $Y = \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$
and $m:= \max \{n_1-1,n_2,\dots ,n_k\}$. Fix
a line $L\subseteq \enm{\mathbb{P}}^{n_1}$ and $e+2$ points $a_1,\dots ,a_{e+2}\in L$. Fix $o_i\in \enm{\mathbb{P}}^{n_i}$, $2\le i\le k$.
Let $b_i\in Y$, $1\le i\le e+2$, be the point of $Y$ with $\pi _1(b_i)=a_i$ and $\pi _h(b_i) =o_h$ for all $h\in \{2,\dots
,k\}$.
Fix any $A\subset Y$ such that $\# (A)=m$, $\pi _1(A)\cup L$ spans $\enm{\mathbb{P}}^{n_1}$ and $\pi _h(A)\cup \{o_h\}$ spans $\enm{\mathbb{P}}^{n_h}$ for all $h\in \{2,\dots
,k\}$. Set $S:= A\cup \{b_1,\dots ,b_{e+2}\}$. We have $\# (S) =e+2+m$, $S$ is not contained in a proper multiprojective
subspace of $Y$ and $e(S)=e$.
\end{example}
\begin{proposition}\label{n3}
Fix positive integers $k$ and $n_i$, $1\le i\le k$, such that $k\ge 2$ and $n_i\le n_1$ for all $i$.
Let $S\subset Y$ be a finite subset of $Y$ such that $e(S) >0$ and $S$ is not contained in a proper multiprojective subspace of
$Y$. Then
$$\# (S) \ge e(S)+n_1+1.$$
\end{proposition}
\begin{proof}
Since the proposition is trivial if $k=1$ we may assume $k\ge 2$ and that the proposition is true for multiprojective spaces of smaller dimension. We first consider sets $S$ with $e(S)=1$. Set $s:= \# (S)$. Fix $E\subseteq S$ such that $\# (E)=\min \{n_1,s\}$. Since $h^0(\enm{\cal{O}} _Y(\epsilon _1))=n_1+1$,
we may take $H\in |\enm{\cal{O}} _{Y}(\epsilon _1)|$ containing $E$. Since $S\nsubseteq H$, Lemma \ref{ee0} gives
$h^1(\enm{\cal{I}} _{S\setminus S\cap H}(\hat{\epsilon}_1)) >0$. Thus $\# (S\setminus S\cap H)\ge 2$. Hence $\# (S)\ge n_1+2$.
Now assume $e(S)\ge 2$. We use induction on the integer $e(S)$. Fix $p\in S$ and set $S':= S\setminus \{p\}$. We have $e(S)-1 \le e(S')\le e(S)$. Assume
$e(S') =e(S)-1$. In this case we have $\langle \nu (S')\rangle =\langle \nu (S)\rangle$ and hence $S'$ generates $Y$. The inductive assumption gives $\# (S')\ge e(S')+n_1+1$, and hence $\# (S)\ge e(S)+n_1+1$. If $e(S)=e(S')$ use that the maximal dimension of a factor of the multiprojective space spanned by $S'$ is at least
$n_1-1$.\end{proof}
\begin{proposition}\label{n4}
Let $S\subset Y =(\enm{\mathbb{P}}^1)^k$ be a minimal and nondegenerate set with $e(S)>0$. Then:
\quad (a) $\# (S)\ge k+e(S)+1$.
\quad (b) There is $D\in \enm{\cal{B}} _k$ containing $S$ if and only if the $k$ ordered sets $\pi _i(S)$, $1\le i\le k$, are (with the chosen order) projectively equivalent, i.e., there is an ordering $q_1,\dots ,q_s$
of the points of $S$ and for all $i\ne j$ isomorphisms $h_{ij}: \enm{\mathbb{P}}^1\to \enm{\mathbb{P}}^1$ such that $h_{ij}(\pi _i(q_h)) =\pi _j(q_h)$ for all $h\in \{1,\dots ,s\}$.
\end{proposition}
\begin{proof}
Set $s:= \# (S)$. Since $S$ is minimal, each $\pi _{i|S}$ is injective. Assume for the moment $e(S)\ge 2$ (hence $\# (S)\ge e(S)+2$) and take any $A\subset S$ such that $\# (A)=\# (S)-e(S)+1$. Since $S$ is minimal and $\# (A)\ge 2$, $A$ is minimal and spans $Y$. Thus to prove part (a) it is sufficient to prove that $s\ge k+2$ if $e(S)>0$.
We use induction on $k$, using in the case $k=1$ the stronger (obvious) observation that any two points of a line are linearly independent. Assume $k\ge 2$. Fix $o\in S$
and set $\{H\}:= |\enm{\cal{I}} _o(\epsilon _k)|$. Apply Lemma \ref{ee0} to any partition of $S$ into two proper subsets. Since $S\nsubseteq H$, we have $h^1(\enm{\cal{I}} _{S\setminus S\cap H}(\hat{\epsilon} _k)) >0$. Hence $\# (S\setminus S\cap H) \ge 2$. Since $S$ is minimal, $\eta _{k|S}$ is injective, $\eta _k(S)$ is minimal in $Y_k$ and $h^1(\enm{\cal{I}} _{S\setminus S\cap H}(\hat{\epsilon} _k))=h^1(Y_k,\enm{\cal{I}} _{\eta _k(S\setminus S\cap H)}(1,\dots ,1))$.
Thus $\eta _k(S\setminus S\cap H)$ is minimal. Since $\# (\pi _i(S)) =s$ for all $i$ and $\# (S\setminus S\cap H) \ge 2$, $\eta _k(S\setminus S\cap H)$ generates $Y_k$. The inductive assumption gives $\# (S\setminus S\cap H) \ge k+1$, i.e., $s\ge k+2$.
Now we prove part (b). First assume the existence of $D\in \enm{\cal{B}} _k$ containing $S$. Write $D = h(\enm{\mathbb{P}}^1)$ with $h =(h_1,\dots ,h_k)$. Each $\pi
_{i|D}: D \to \enm{\mathbb{P}}^1$ is an isomorphism. Hence $\pi _{i|D}\circ \pi _{j|D}^{-1}: \enm{\mathbb{P}}^1\to \enm{\mathbb{P}}^1$ is an isomorphism sending $\pi
_j(S)$ onto $\pi _i(S)$. Now we assume that all sets $\pi _i(S)$, $1\le i\le k$, are projectively equivalent. We order the
points $q_1,\dots ,q_s$ of $S$. For any $i\ge 2$ let $f_i: \enm{\mathbb{P}}^1\to \enm{\mathbb{P}}^1$ be the isomorphism sending $\pi _1(q_x)$ to $\pi
_i(q_x)$ for all $x\in \{1,\dots ,s\}$. We take the target of $\pi _1$ as the domain, $\enm{\mathbb{P}}^1$, of the morphism $h =(h_1,\dots
,h_k):
\enm{\mathbb{P}}^1\to Y$ we want to construct. With this condition $h_1$ is the identity map. Set $h_i:= f_i$, $i=2,\dots ,k$. By
construction and the choice of $h_1$ as the identity map, we have $h(\pi _1(q_x)) =q_x$ for all $x\in \{1,\dots ,s\}$. We have $D:=
h(\enm{\mathbb{P}}^1)\in \enm{\cal{B}} _k$ and $S\subset D$.
\end{proof}
\begin{proposition}\label{n4.00}
Set $Y:= \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$, $n_i>0$ for all $i$, such that $k\ge 2$. Set $m:= \max \{n_1,\dots
,n_k\}$. Fix a positive integer $e$. The integer $m+k+e$ is the minimal cardinality of a minimal and nondegenerate set
$S\subset Y$ such that
$e(S)=e$.
\end{proposition}
\begin{proof}
With no loss of generality we may assume $n_1=m$. Fix $Y'\subset Y$ with $Y'=(\enm{\mathbb{P}}^1)^k$ and take
$S'\subset Y'$ with
$\# (S')=k+e+1$, $S'$ nondegenerate and minimal (we may take as $S'$ $k+e+1$ points on any $C \in \enm{\cal{B}} _k$). Then add
$m-1$ sufficiently general points of $Y$.
To prove the minimality of the integer $m+k+e$ one can use induction on the integer $m+k$ using linear projections in the
single factors $\ell _{\{o\},i}$ from points of $S$ (Section \ref{Sl}). The starting point of the induction is Proposition
\ref{n4}.
\end{proof}
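For instance, for $Y=\enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^1$ and $e=1$ Proposition \ref{n4.00} gives the minimal cardinality $2+2+1=5$; compare the nondegenerate circuits $S\subset \enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^1$ with $\# (S)=5$ and $e(S)=1$ described in Example \ref{p2p1}.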
\begin{question}
Which are the best lower bounds for $\# (S)$ in Proposition \ref{n4.00} if we also impose that $S$ has no inessential
points (resp. it is strongly essential)?
\end{question}
\section{Linearly dependent subsets with low cardinality}\label{Sc}
Unless otherwise stated $Y = \enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$, $n_i>0$ for all $i$, and $S\subset Y$ is nondegenerate
(i.e., $S\nsubseteq Y'$ for any multiprojective space $Y'\subsetneq Y$) and linearly dependent. Since $\nu $ is an embedding,
we have $\# (S)\ge 3$.
\begin{remark}\label{e1}
Assume $\# (S) =3$, i.e., assume that $\nu (S)$ spans a line. Since $\nu (Y)$ is cut out by quadrics, we have $\langle \nu
(S)\rangle \subseteq \nu (Y)$. Since $S$ is nondegenerate, we have $k=1$ and $n_1=1$.
\end{remark}
\begin{proposition}\label{e2}
Let $S\subset Y = \prod _{i=1}^{k}\enm{\mathbb{P}}^{n_i}$, $n_i>0$ for all $i$, be a nondegenerate circuit. Assume $\# (S) =4$. Then either
$k=1$ and
$n_1=2$ or
$k=2$ and $n_1=n_2=1$.
Assume $k=2$. In this case $S$ is contained in a unique $D\in |\enm{\cal{O}} _Y(1,1)|$.
\quad (a) Assume that $D$ is reducible, say $D=D_1\cup D_2$
with $D_1\in |\enm{\cal{O}} _Y(1,0)|$ and $D_2\in |\enm{\cal{O}} _Y(0,1)|$, then $\# (S\cap D_1) =\# (S\cap D_2) =2$ and $D_1\cap D_2\cap S=\emptyset$. For any choice
of $A\subset S$ with $\# (A\cap D_1) = \# (A\cap D_2)=1$, the set $\langle \nu (A)\rangle \cap \langle \nu (S\setminus A)\rangle$ is a single point, $q$. We have $r_X(q) =2$ and $A, S\setminus A\in \enm{\cal{S}} (Y,q)$.
\quad (b) Assume that $D$ is irreducible. For any choice of a set $A\subset S$ such that $\# (A)=2$ the set $\langle \nu (A)\rangle \cap \langle \nu (S\setminus A)\rangle$ is a single point, $q$. We have $r_X(q) =2$ and $A, S\setminus A\in \enm{\cal{S}} (Y,q)$.
\end{proposition}
\begin{proof}
If $k=1$, then $S$ is formed by $4$ coplanar points, no $3$ of them collinear. The minimality assumption of $Y$ gives $n_1=2$.
From now on we assume $k\ge 2$. Take $A\subset S$ such that $\# (A) =2$ and set $B:= S\setminus A$. Since $h^1(\enm{\cal{I}} _S(1,\dots ,1)) >0$ and $h^1(\enm{\cal{I}} _{S'}(1,\dots ,1)) =0$ for all $S'\subsetneq S$, the set $\langle \nu (A)\rangle \cap \langle \nu (B)\rangle$ is a single point, $q$. Since $\nu (S)$ is a circuit, we have $q\notin \nu (S)$. If $q\notin \nu (Y)$, then $A$ shows that $r_X(q) =2$.
\quad (a) Assume $q\in \nu(Y)$, say $q =\nu (o)$. Since $S$ is a circuit, we saw that $o\notin S$. Since $\# (\langle \nu (A)\rangle \cap \nu (Y))\ge 3$, $\# (\langle \nu (B)\rangle \cap \nu (Y))\ge 3$ and $\nu (Y)$ is cut out by quadrics, $\langle \nu (A)\rangle \cup \langle \nu (B)\rangle \subset \nu (Y)$, i.e., there are integers $i, j\in \{1,\dots ,k\}$
such that $\# (\pi _h(A)) = \# (\pi _m(B)) =1$ for all $h\ne i$ and all $m\ne j$. Since $\nu (S)\subset \langle \nu (A)\rangle \cup \langle \nu (B)\rangle$ and $\nu (o)\in \langle \nu (A)\rangle \cap \langle \nu (B)\rangle$, the minimality of $Y$ gives $k=2$, $n_1=n_2=1$ and that we are in case (a).
\quad (b) From now on we assume $q\notin \nu (Y)$.
\quad \emph{Claim 1:} Each $\pi _{i|S}: S\to \enm{\mathbb{P}}^{n_i}$ is injective.
\quad \emph{Proof of Claim 1:} Assume that some $\pi _{i|S}$ is not injective, say $\pi _{1|S}$ is not injective. Thus there
is $A'\subset S$ such that $\# (A') =2$ and $\# (\pi _1(A'))=1$. Set $B':= S\setminus A'$. We see that $A', B'$ are as
in case (a) and in particular $k=2$ and $n_1=n_2=1$ and $S=A'\cup B'\subset D_1\cup D_2$.
\quad \emph{Claim 2:} We have $n_i=1$ for all $i$.
\quad \emph{Proof of Claim 2:} Assume the existence of $i\in \{1,\dots ,k\}$ such that $n_i\ge 2$. Since $h^0(\enm{\cal{O}} _Y(\epsilon
_i)) \ge 3$, there is $H\in |\enm{\cal{O}} _Y(\epsilon _i)|$ containing $A$. The minimality of $Y$ and the inclusion $A\subset H$
implies $B\nsubseteq H$, i.e., $B\setminus B\cap H \ne \emptyset$. Lemma \ref{ee0} implies $h^1(\enm{\cal{I}} _{B\setminus B\cap
H}(\hat{\epsilon}_i)) >0$. Since $\enm{\cal{O}} _Y(\hat{\epsilon}_i)$ is a globally generated line bundle, we get $\# (B\setminus
B\cap H)>1$. Thus $B\cap H=\emptyset$. Since any Segre embedding is an embedding, we get $\# (\pi _h(B)) =1$ for all $h\ne
i$, contradicting Claim 1 because $k\ge 2$.
\quad \emph{Claim 3:} We have $k=2$ and $n_1=n_2=1$.
\quad \emph{Proof of Claim 3:} By Claim 2 it is sufficient to prove that $k=2$. Assume
$k\ge 3$. Since $h^0(\enm{\cal{O}} _Y(\epsilon _1)) =2$, there are $H_i\in |\enm{\cal{O}} _Y(\epsilon _i)|$, $i=1,2$, such that $A\subset H_1\cup
H_2$. Since $k\ge 3$, as in the proof of Claim 2 we get that either $B\subset H_1\cup H_2$ or $B\cap (H_1\cup H_2) =\emptyset$ and
$\# (\pi _i(B)) =1$ for all $i\ge 3$. The latter possibility is excluded by Claim 1. Thus $S\subset H_1\cup H_2$. Hence
there
is $i\in \{1,2\}$ such that $\# (H_i\cap S) \ge 2$, i.e., $\pi _{i|S}$ is not injective, contradicting Claim 1.
\end{proof}
\begin{remark}
In case (b) of Proposition \ref{e2} $S$ is minimal, while $S$ in case (a) is not minimal.
\end{remark}
Case (a) of Proposition \ref{e2} may be generalized to the following examples of circuits in $\enm{\mathbb{P}}^{n_1}\times \enm{\mathbb{P}}^{n_2}$.
\begin{example}
Take $k=2$, i.e., $Y =\enm{\mathbb{P}}^{n_1}\times \enm{\mathbb{P}}^{n_2}$. Fix $o=(o_1,o_2)\in Y$ and set $D_i:= \pi _i^{-1}(o_i)$, $i=1,2$. We have
$D_i\in |\enm{\cal{O}} _Y(\epsilon _i)|$. Fix $S_i\subset D_i\setminus \{o\}$ such that $\# (S_i)=n_{3-i}+1$ and $\nu (S_i)$ spans the
linear space $\langle \nu (D_i)\rangle$. Set $S:= S_1\cup S_2$. Since $o\notin S_1\cup S_2$, we have $\# (S) = n_1+n_2+2$. It is easy to
check that
$S$ is a circuit. We get in this way an irreducible and rational $(n_1^2+n_2^2+2n_1+2n_2)$-dimensional family of circuits.
\end{example}
\begin{remark}\label{e3.0}
Let $S\subset Y =\enm{\mathbb{P}}^{n_1}\times \cdots \times \enm{\mathbb{P}}^{n_k}$ be a nondegenerate circuit with cardinality $5$.
\quad (a) Assume $k=1$ and $n_1=3$. In this case we may take as $S$ any set with cardinality $5$ such that all its proper
subsets are linearly independent.
\quad (b) Assume $k=2$ and $n_1=n_2=1$. Since $r=3$, in this case we may take as $S$ any set with cardinality $5$ such that all
its proper subsets are linearly independent.
\end{remark}
\begin{proposition}\label{e3.01}
Take $S\subset Y$ with $\# (S)=5$ and $e(S)\ge 2$. Let $A$ be the kernel of $S$ and let $Y' = \enm{\mathbb{P}}^{m_1}\times \cdots \times
\enm{\mathbb{P}}^{m_s}$, $s\ge 1$, be the minimal multiprojective space containing $A$. Then $A$, $e(S)$ and $Y'$ are in the following list
and all the numerical values in the list are realized by some $S$:
\begin{enumerate}
\item $e(S)=3$, $A=S$, $s=1$ and $m_s=1$;
\item $e(S)=2$, $A=S$, $s=1$ and $m_s=2$;
\item $e(S)=2$, $A=S$, $s=2$ and $m_1=m_2=1$;
\item $e(S)=2$, $\# (A)=4$, $s=1$ and $m_s=1$.
\end{enumerate}
\end{proposition}
\begin{proof}
We have $e(A) =e(S)$. Since $\nu (Y)$ is cut out by quadrics, each line $L\subset \enm{\mathbb{P}}^r$ containing at least $3$ points of $\nu (Y)$ is contained in
$\nu (Y)$ and hence $L =\nu (J)$ for some line $J$ in one of the factors of $Y$.
First assume $\# (A)\le 4$. Since $e(A)\ge 2$ and $\nu $ is an embedding,
we see that $\# (A)=4$, $e(S)=2$, $s=1$ and $m_1=1$.
From now on we assume $\# (A) =5$, i.e., $A=S$.
Since
$\nu
$ is an embedding, we see that $e(A)\le 3$ and that $e(A)=3$ if and only if $s=1$
and $m_1=1$. Now assume $e(A) =2$. In this case $\langle \nu (A)\rangle$ is a plane $\Pi$ containing at least $5$ distinct
points of $\nu (A)$. The plane $\Pi$ is contained in $\nu (Y)$ if and only if $s=1$ and $m_1=2$. Now assume that $\Pi$ is not
contained in $\nu (Y)$. Thus $s\ge 2$. Since $\nu (Y)$ is cut out by quadrics, $\nu(A)\subset \Pi$, and $\#
(A)=5$,
$\Pi \cap \nu (Y)$ is a conic, $T$. Since $e(A)=2$ and $A$ is a finite set, $T$ is not a double line. Taking suitable $E\subset
T$ with $\# (E)=4$ and applying Proposition \ref{e2} we get $s=2$ and $m_1=m_2=1$.
\end{proof}
\begin{example}\label{p2p1}
Take $Y = \enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^1$. Let $S\subset Y$ be a nondegenerate circuit with $\# (S)=5$. Here we describe the elements of
$D\in |\enm{\cal{O}} _Y(1,1)|$ and the intersections of two of them containing $S$. We have $\deg (\nu (Y))=3$ (e.g., use that by the
distributive law the intersection number $\enm{\cal{O}} _Y(1,1)\cdot
\enm{\cal{O}} _Y(1,1)\cdot \enm{\cal{O}} _Y(1,1)$ is $3$ times the integer $\enm{\cal{O}} _Y(1,0)\cdot \enm{\cal{O}} _Y(1,0)\cdot \enm{\cal{O}} _Y(0,1) =1$). Since $S$ is a
circuit, $\# (S)=5$ and $r=5$, we have
$h^0(\enm{\cal{I}} _S(1,1)) =2$.
\quad \emph{Claim 1:} The base locus $B$ of $|\enm{\cal{I}} _S(1,1)|$ contains no effective divisor.
\quad \emph{Proof of Claim 1:} Assume that $D$ is an effective divisor contained in $B$. Since $h^0(\enm{\cal{I}} _S(1,1))>1$, we have $D\in
|\enm{\cal{O}} _Y(\epsilon _i)|$ for some $i=1,2$. By assumption we have $h^0(\enm{\cal{I}} _{S\setminus S\cap D}(\hat{\epsilon}_i)) = h^0(\enm{\cal{I}}
_S(1,1)) =2$. Since
$D$ is a multiprojective space, we have $i=2$ and $\# (S\setminus S\cap D) =1$. Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap D}(1,0)) >0$, a contradiction.
By Claim 1 $S$ is contained in a unique complete intersection of two elements of $|\enm{\cal{O}} _Y(1,1)|$.
Suppose $C$ is the complete intersection
of two elements of $|\enm{\cal{O}} _Y(1,1)|$ (we allow the case in which $C$ is reducible or with multiple components). We know that $\deg ({C}) =\deg (\nu (Y)) =3$. Since $h^1(\enm{\cal{O}} _Y(-2,-2))=0$ (K\"{u}nneth), a standard exact sequence gives $h^0(\enm{\cal{O}} _C)=1$. Thus $C$ is connected. Since $C$ is the complete intersection of $2$ ample divisors, we have $\deg (\enm{\cal{O}} _C(1,0)) =\enm{\cal{O}} _Y(1,0)\cdot \enm{\cal{O}} _Y(1,1)\cdot \enm{\cal{O}} _Y(1,1) = 2$ (the right-hand side being an intersection product)
and $\deg (\enm{\cal{O}} _C(0,1)) = 1$. Since $\omega _Y\cong \enm{\cal{O}} _Y(-3,-2)$, the adjunction formula gives $\omega _C \cong \enm{\cal{O}} _C(-1,0)$. Thus each irreducible component $T$
of $C$ with $\pi _1(T)$ not a point is a smooth rational curve of degree $\le 2$, while an irreducible component $T$ of $C$ with $\pi _1(T)$ a point would have arithmetic genus $1$; hence no such $T$ exists.
Now we impose that $S\subset C$. Since $h^0(\enm{\cal{I}} _S(1,1)) =2 =h^0(\enm{\cal{I}} _C(1,1))$, we have $\langle \nu (S)\rangle = \langle \nu ({C})\rangle \cong \enm{\mathbb{P}}^3$. If $C$ is irreducible, then $\nu ({C})$ is a smooth rational normal curve of $\enm{\mathbb{P}}^3$. In this case any $5$ points of $C$ form a circuit. Of course, the general complete intersection of two elements of $|\enm{\cal{O}} _Y(1,1)|$ is irreducible. In this
way we get an irreducible family of dimension $13$ of circuits. Now assume that $C$ is not irreducible. Since it is connected
and $S\subset C_{\red}$, we get that
$C$ has only multiplicity one components, so it is either a connected union of $3$ lines with arithmetic genus $0$ or a union of a smooth conic and a line meeting exactly at one point and quasi-transversally. Since $\nu(S)$ is a circuit, each line contained in $\nu({C})$ contains at most $2$ points of $\nu(S)$ and each conic contained in $\nu({C})$ contains at most $3$ points of $\nu (S)$. Thus if $C =T_1\cup L_1$ with $\nu(T_1)$ a smooth conic, we have $S\cap T_1\cap L_1 =\emptyset$, $\# (S\cap T_1)=3$ and $\# (S\cap L_1)=2$. Conversely any $S\subset T_1\cup L_1$ with these properties gives a circuit. The smooth conics, $T$, contained in $Y$ are of two types: either $\pi _2(T)$ is a point
(equivalently, $\pi _1(T)$ is a conic) or $\pi _2(T) =\enm{\mathbb{P}}^1$ (equivalently, $\pi _1(T)$ is a line; equivalently the minimal
Segre containing $T$ is isomorphic to $\enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$). Now take
$C = L_1\cup L_2\cup L_3$ with
$\nu (L_i)$ a line for all
$i$. We have
$\# (S\cap L_i) \le 2$ for all $i$. For any $1\le i< j \le 3$ such that $L_i\cap L_j\ne \emptyset$ we have $\# (S\cap
(L_i\cup L_j))\le 3$. Conversely any $S$ satisfying all these inequalities gives a circuit.
\end{example}
\begin{lemma}\label{p1p1p1}
Let $\Sigma$ be the set of all nondegenerate circuits $S\subset Y:= \enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$ such that $\# (S)=5$. Let $\enm{\cal{B}}$ be the set of all integral curves $C\subset Y$ of
tridegree $(1,1,1)$.
\quad (a) $\Sigma$ is non-empty, irreducible, $\dim \Sigma = 11$ and (if $K$ is an algebraically closed field with
characteristic $0$) $\Sigma$ is rationally connected.
\quad (b) $\enm{\cal{B}}$ is irreducible and rational, $\dim \enm{\cal{B}} = 6$. For any $C\in \enm{\cal{B}}$ we have $\dim \langle \nu ({C})\rangle=3$ and
the curve
$\nu ({C})$ is a rational normal curve of $\langle \nu ({C})\rangle$.
\quad ({c}) Each $S\in \Sigma$ is contained in a unique $C\in \enm{\cal{B}}$.
\quad (d) For any $C\in \enm{\cal{B}}$ any $S\subset C$ with $\# (S)=5$ is an element of $\Sigma$.
\end{lemma}
\begin{proof}
By Remark \ref{up1} $\enm{\cal{B}}$ is parametrized by the set of all triples $(h_1,h_2,h_3)\in \mathrm{Aut}(\enm{\mathbb{P}}^1)^3$, up to a
reparametrization, i.e., we may take as $h_1$ the identity map $\enm{\mathbb{P}}^1\to \enm{\mathbb{P}}^1$. Thus
$\enm{\cal{B}}$ is irreducible, rational and of dimension $6$. Hence part (b) is true.
\quad (a) Fix $C\in \enm{\cal{B}}$. Since $C$ is irreducible and with
tridegree
$(1,1,1)$, it is smooth and rational, $\deg (\nu ({C}))=3$ and $C$ is not contained in a proper multiprojective subspace of
$Y$. Since
$\nu({C})$ is smooth and rational and $\deg (\nu ({C}))=3$, we have $\dim \langle \nu ({C})\rangle =3$. Thus $\nu ({C})$ is a
rational normal curve of $\langle \nu ({C})\rangle$. Hence any $A\subset Y$ such that $\# (A)=5$ and
$\nu (A) \subset \nu ({C})$ is a circuit. To conclude the proof of part (d) it is sufficient to prove that $A$ is not
contained in a proper multiprojective subspace of $Y$. We prove that no $B\subset C$ with $\# (B)= 2$ is contained
in a proper multiprojective subspace of $Y$. Take $B\subset C$ such that $\# (B)=2$. Since $C$ has multidegree
$(1,1,1)$
each $\pi _{i|C}$ is injective. Thus $\# (\pi _i(B)) =2$ for all $i$, i.e., there is no $D\in |\enm{\cal{O}} _Y(\epsilon _i)|$
containing $B$. If we prove part ({c}) of the lemma, then part (a) would follow, except for the rational connectedness of
$\Sigma$. The rational connectedness over an algebraically closed field of characteristic $0$ follows immediately from
\cite[Corollary 1.3]{ghsd}.
\quad (b) Fix $S\in \Sigma$. We have $\dim \langle \nu (S)\rangle =3$. Obviously $\# (L\cap \nu (S))\le 2$ for any
line
$L\subset \enm{\mathbb{P}}^7$ and $\# (C\cap \nu (S))\le 3$ for any plane curve
$C\subset \enm{\mathbb{P}}^7$. Take any $E\subset S$ such that $\# (E) =2$ and set $F:= S\setminus E$. Since $S$ is a circuit,
$\langle \nu (E)\rangle \cap \langle \nu (F)\rangle$ is a single point, $q$, and $q\notin \langle \nu (G)\rangle$ for any
$G\subsetneq E$ and any $G\subsetneq F$. Fix
$i\in
\{1,2,3\}$. Since $\dim |\enm{\cal{O}} _Y(\hat{\epsilon}_i)|=3$, there is $D\in |\enm{\cal{I}} _F(\hat{\epsilon}_i)|$. Lemma \ref{ee0} gives that
either $S\subset D$ or $h^1(\enm{\cal{I}} _{S\setminus S\cap D}(\epsilon _i)) >0$. If $h^1(\enm{\cal{I}} _{S\setminus S\cap D}(\epsilon _i)) >0$
we have $S\setminus S\cap D = E$ and $\# (\pi _h(E)) =1$ for all $h\in \{1,2,3\}\setminus \{i\}$, i.e., $\# (\eta
_i(E)) =1$.
Fix $i\in \{1,2,3\}$ and $A\subset S$ such that $\# (A)=\# (\eta _i(A)) =2$. Set $B:= S\setminus A$. We just
proved that any $D_i\in |\enm{\cal{O}} _Y(\hat{\epsilon}_i)|$ containing $B$ contains $S$. Since $\# (L\cap \nu (S))\le 2$
for any line
$L\subset \enm{\mathbb{P}}^7$, we get $\# (\eta _i(S))\ge 2$ for all $i=1,2,3$. Thus there are $D_i\in |\enm{\cal{O}} _Y(\hat{\epsilon}_i)|$,
$i=1,2,3$ such that $S\subseteq D_1\cap D_2\cap D_3$.
Since $h^0(\enm{\cal{O}} _Y(\epsilon _i))=2$, a standard exact sequence gives $\dim \langle \nu (D_i)\rangle =5$. Since $\langle
\nu(D_i)\cup
\nu (D_j)\rangle =\enm{\mathbb{P}}^7$ for all $i\ne j$, Grassmann's formula gives $\dim (\langle \nu (D_i)\rangle
\cap \langle \nu (D_j)\rangle) =3$. Thus for any $G\in \Sigma$ contained in $D_i\cap D_j$ we have $\langle \nu (G)\rangle =\langle \nu (D_i)\rangle
\cap \langle \nu (D_j)\rangle$. Assume for the moment that $D_1$ is reducible, say $D_1 =D'\cup D''$ with $D'\in |\enm{\cal{O}}
_Y(\epsilon _2)|$ and $D''\in |\enm{\cal{O}} _Y(\epsilon _3)|$. Take any $M\in |\enm{\cal{O}} _Y(\epsilon _2)|$ containing no irreducible
component of $D_1$.
\quad \emph{Claim 1:} There is no line $L\subset \nu (Y)$ such that $\# (L\cap \nu (S) )=2$.
\quad \emph{Proof of Claim 1:} Assume that $L$ exists and take $A\subset S$ such that $\# (A)=2$ and $\nu (A)\subset L$.
By the structure of lines of $\nu (Y)$ there is $i\in \{1,2,3\}$ such that $\# (\pi _h(A)) =1$ for all $h\in
\{1,2,3\}\setminus \{i\}$, i.e., $\# (\eta _i(A))=1$. Set $\{M\}:= |\enm{\cal{I}} _A(\epsilon _i)|$. Since $M$ is a multiprojective
space, we have $S\nsubseteq M$. Thus Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap M}(\hat{\epsilon}_i)) >0$. Thus one of
the following cases occur:
\begin{enumerate}
\item there are $u, v \in S\setminus S\cap M$ such that $u\ne v$ and $\eta _i(u)=\eta _i(v)$;
\item we have $\# (S\setminus S\cap M) = \# (\eta _i(S\setminus S\cap M)) =3$ and $\nu _i(\eta _i(S\setminus S\cap
M))$ is contained in a line.
\end{enumerate}
First assume the existence of $u, v$. We get the existence of a line $R\subset \nu (Y)$ such that $\{\nu (u),\nu (v)\}\subset R$ and
$R\cap L=\emptyset$. Thus $\dim \langle L\cup R\rangle =3$. Since $L$ and $R$ are generated by the points of $\nu (S)$
contained in them, we get $\nu (S)\subset \langle L\cup R\rangle$. Set
$\{o\}:= S\setminus (A\cup \{u,v\})$. Since any line of $\enm{\mathbb{P}}^7$ contains at most $2$ points of $\nu (S)$, we get $\nu (o)\in
\langle L\cup R\rangle \setminus (L\cup R)$. Thus there is a unique line $J\subset \langle L\cup R\rangle$ such that $J\cap
L\ne \emptyset$, $R\cap J\ne \emptyset$ and $\nu (o)\in J$. Since $\nu (Y)$ is cut out by quadrics and $\{\nu (o)\}\cup L\cup
R\subset \nu (Y)$, we get $J\subset \nu (Y)$. Since $\langle L\cup R\rangle \cap \nu (Y)$ is cut out by quadrics and $\nu (Y)$
contains no plane, we get that $\langle L\cup R\rangle \cap \nu (Y)$ is a quadric surface $Q$.
\quad \emph{Subclaim:} $Q = \nu (Y')$ for some multiprojective subspace $Y'\subsetneq Y$.
\quad \emph{Proof of the subclaim:} Since the irreducible quadric surface $Q$ contains the lines $R$ and $L$ such that $R\cap
L=\emptyset$, we have
$Q\cong
\enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$ and all elements of the two rulings of $Q$ are embedded as lines. The structure of linear spaces contained
in Segre varieties gives
$Q =\nu (Y')$ for some
$2$-dimensional multiprojective subspace $Y'\subset Y$.
Since $\nu (S)\subset \langle L\cup R\rangle \cap \nu (Y) =Q$, the subclaim gives a contradiction.
Now assume $\# (S\setminus S\cap M) = \# (\eta _i(S\setminus S\cap M)) =3$ and that
$\nu _i(\eta _i(S\setminus S\cap M))$ is contained in a line. Thus there is $j\in \{1,2,3\}\setminus \{i\}$ such that $\#
(\pi _j(\eta _i(S\setminus S\cap M)))=1$. Since $j\ne i$, we have $\pi _j((\eta _i(S\setminus S\cap
M))) = \pi _j(S\setminus S\cap M)$. Thus there is $W\in |\enm{\cal{O}} _Y(\epsilon _j)|$ containing $S\setminus S\cap M$.
Since $W$ is a multiprojective space, we have $S\nsubseteq W$. Thus Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap
W}(\hat{\epsilon}_j)) >0$. Since $\enm{\cal{O}} _Y(\hat{\epsilon}_j)$ is globally generated and $S\setminus S\cap W \subseteq A$, we
get $S\setminus S\cap W =A$. Since $j\ne i$ we have $\# (\eta _j(A)) =2$. Thus $h^1(\enm{\cal{I}} _A(\hat{\epsilon}_j)) =0$, a
contradiction.
\quad ({c}) In step (b) we saw that $S\subseteq D_1\cap D_2\cap D_3$ for some $D_i\in |\enm{\cal{O}} _Y(\hat{\epsilon}_i)|$ and that
$\langle \nu (S)\rangle = \langle \nu (D_i)\rangle \cap \langle \nu (D_j)\rangle$. Now we assume that both $D_1$ and $D_2$
are reducible and that they have a common irreducible component. Write $D_1 = D'\cup D''$ with $D'\in |\enm{\cal{O}} _Y(\epsilon _3)|$
and $D''\in |\enm{\cal{O}} _Y(\epsilon _2)|$. We see that $D_1$ and $D_2$ have $D'$ as their common component, say $D_2 = D'\cup M$
with $M\in |\enm{\cal{O}} _Y(\epsilon _1)|$. The curve $\nu (D''\cap M)$ is a line. Since $\dim \langle \nu (D')\rangle =3$,
we have $\langle \nu (D_1)\rangle \cap \langle \nu (D_2)\rangle = \langle \nu (D')\rangle$. Since $\nu (Y)$ is cut out by
quadrics
and contains no $\enm{\mathbb{P}}^3$, we have $M\cap D''\subset D'$. Thus $S\subset D'$, a contradiction.
In the same way we exclude the existence of a surface contained in $D_i\cap D_j$ for any $i\ne j$.
Now assume that $D_1 =D'\cup D''$ is reducible, $D_2 = M'\cup M''$ is reducible, but that $D_1$ and $D_2$ have no common
irreducible component. We get that $\nu (D_1\cap D_2)$ is a union of $4$ lines. Since $S\subset D_1\cap D_2$, Claim 1 gives a
contradiction. Thus at most one among $D_1$, $D_2$ and $D_3$ is reducible. Assume that $D_1$ is reducible, say $D_1 = D'\cup D''$ with $D'\in |\enm{\cal{O}} _Y(\epsilon _3)|$
and $D''\in |\enm{\cal{O}} _Y(\epsilon _2)|$. The curve $\nu (D_2\cap D'')$ is a conic (maybe reducible) and hence it contains at most
$3$ points of $\nu (S)$. The curve
$\nu (D_2\cap D')$ is a line. Since $S\subset D_1\cap D_2$, Claim 1 gives a contradiction.
Thus each $D_i$, $1\le i\le 3$, is irreducible.
\quad (d) By part ({c}) $S\subseteq D_1\cap D_2\cap D_3$ with $D_i\in |\enm{\cal{O}} _Y(\hat{\epsilon}_i)|$ and each $D_i$ irreducible.
Thus $T:= D_1\cap D_2$ has pure dimension $1$. We have $\enm{\cal{O}} _Y(1,1,1)\cdot \enm{\cal{O}} _Y(0,1,1)\cdot \enm{\cal{O}} _Y(1,0,1)=3$ (intersection
number), i.e., the curve $\nu (T)$ has degree $3$. Since $S\subset T$ and no line contains two points of $\nu (S)$ (Claim 1),
$T$ must be irreducible. Since $S\subset T$, no proper multiprojective subspace of $Y$ contains $T$. Thus $T$ has multidegree
$(a_1,a_2,a_3)$ with $a_i>0$ for all $i$. Since $\deg (T) =a_1+a_2+a_3$, we get $a_i=1$ for all $i$, i.e., $T\in
\enm{\cal{B}}$. Fix
$C, C'\in
\enm{\cal{B}}$ such that
$C\ne C'$ and assume
$S\subseteq C\cap C'$. We have $\langle \nu({C})\rangle =\langle \nu (S)\rangle = \langle \nu (C')\rangle$. Hence $\langle \nu (S)\rangle \cap \nu (Y)$
contains two different rational normal curves of $\enm{\mathbb{P}}^3$ with $5$ common points. Such a reducible curve $\nu ({C})\cup \nu (C')$ is contained in a unique quadric surface $Q'$
and $Q'$ is integral. Since $\nu (Y)$ is cut out by quadrics, there is an integral surface $G\subset Y$ such that $\nu (G)=Q'$. Since $\deg (G)=2$, $G\in |\enm{\cal{O}} _Y(\epsilon _i)|$
for some $i$. Since $G$ is a multiprojective space and $S\subset G$, we get a contradiction.
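For the reader's convenience, the intersection number invoked in step (d) can be checked by multilinearity; writing $\epsilon _i$ also for the class of $\enm{\cal{O}} _Y(\epsilon _i)$ on $Y=\enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$, we have $\epsilon _i^2=0$ and $\epsilon _1\epsilon _2\epsilon _3=1$, so only the square--free monomials survive:

```latex
\[
(\epsilon _1+\epsilon _2+\epsilon _3)\cdot (\epsilon _2+\epsilon _3)\cdot (\epsilon _1+\epsilon _3)
\;=\; 3\,\epsilon _1\epsilon _2\epsilon _3 \;=\; 3 .
\]
```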
\end{proof}
\begin{proof}[Proof of Theorem \ref{e3}:]
The cases $k=1$ and $k=2$, $n_1=n_2=1$ are obvious (the latter because it has $r=3$). Thus we may assume $k\ge 2$ and
$n_1+\cdots +n_k\ge 3$.
\quad (a) Assume $k=2$, $n_1=2$ and $n_2=1$. Thus $r=5$. All $S\in \Sigma$ are described in Example \ref{p2p1}. The same proof
works if $k=2$,
$n_1=1$ and
$n_2=2$.
\quad (b) Assume $k=2$ and $n_1=n_2=2$. Fix $a, a'\in S$, $a\ne a'$ and take
$H\in |\enm{\cal{O}} _Y(1,0)|$ containing $\{a,a'\}$. Since $S\nsubseteq H$, Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus
S\cap H}(0,1)) >0$. Thus either there are $b, b'\in S\setminus S\cap H$ such that $b\ne b'$ and $\pi _2(b)=\pi _2(b')$
or $\# (S\setminus S\cap H) =3$ and $\pi _2(S\setminus S\cap H)$ is contained in a line.
First assume the existence of $b, b'$. Write $S =\{b,b',u,v,w\}$. We get the existence of $D\in |\enm{\cal{O}} _Y(0,1)|$ containing
$\{b,b',u\}$. Since $S\nsubseteq D$, Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap D}(1,0)) >0$. Thus $S\setminus S\cap D
= \{v,w\}$ and $\pi _2(v)=\pi _2(w)$. Taking $w$ instead of $u$ we get $\pi _2(u)=\pi _2(v)=\pi _2(w)$. Hence we get
$U\in |\enm{\cal{O}} _Y(0,1)|$ containing at least $4$ points of $S$. Since $U$ is a multiprojective space, we have $S\nsubseteq U$.
Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap U}(1,0))>0$. Since $\# (S\setminus S\cap U)=1$, we get a contradiction.
Now assume that $\pi _2(S\setminus \{a,a'\})$ is contained in a line. Thus there is $M\in
|\enm{\cal{O}} _Y(0,1)|$ containing $S\setminus \{a,a'\}$. Since $S\nsubseteq M$, Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap
M}(1,0))>0$. Since $S\setminus S\cap M\subseteq \{a,a'\}$, we get $\pi _1(a)=\pi _1(a')$. We conclude as we did with
$\{b,b'\}$ using the other factor of $\enm{\mathbb{P}}^2\times \enm{\mathbb{P}}^2$.
\quad ({c}) Assume $k=2$, $n_1=3$ and $n_2=1$. Take $H\in |\enm{\cal{O}}
_Y(1,0)|$ containing $B$. Since $H$ is a multiprojective space, we have $S\nsubseteq H$. Thus Lemma \ref{ee0} gives
$h^1(\enm{\cal{I}} _{A\setminus A\cap H}(0,1)) >0$. Since $\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(1)$ is very ample and $\# (A\setminus A\cap H)\le 2$, we get
$A\cap H =\emptyset$ and $\# (\pi _2(A)) =1$. Set $\{M\}:= |\enm{\cal{I}}_A(0,1)|$. Since $S\nsubseteq M$, Lemma \ref{ee0}
gives $h^1(\enm{\cal{I}} _{B\setminus B\cap M}(1,0)) >0$. Thus either there are $b, b'\in B\setminus B\cap M$ such that $b\ne b'$
and $\pi _1(b)=\pi _1(b')$ or $B\cap M =\emptyset$ and $\pi _1(B)$ is contained in a line.
Assume the existence of $b, b'$. Since $h^0(\enm{\cal{O}} _Y(1,0))=4$, we get the existence of $D\in |\enm{\cal{O}} _Y(1,0)|$ containing $B$ and a
point of $A$. Take any $D'\in |\enm{\cal{O}} _Y(0,1)|$ such that $D'\cap S=\emptyset$. We have $\# ((D\cup D')\cap S)=\# (D\cap
S) \ge 4$. Since $S$ is a circuit and $\# (D\cap
S) \ge 4$, we get $D\cup D'\supset S$ and hence $D\supset S$. Since $D$ is a multiprojective space, we get a contradiction.
Now assume that $\pi _1(B)$ is contained in a line. Since $h^0(\enm{\cal{O}} _Y(1,0)) =4$, we get the existence of $D''\in |\enm{\cal{O}}
_Y(1,0)|$ containing $S$. Since $D''$ is a multiprojective space, we get a contradiction.
\quad (d) As in step ({c}) we exclude all other cases with $k=2$ and $n_1+n_2\ge 4$. Thus from now on we assume $k\ge 3$.
\quad (e) See Lemma \ref{p1p1p1} for the description of the case $k=3$ and $n_1=n_2=n_3=1$.
\quad (f) Assume $k=3$, $n_1=2$ and $n_2=n_3=1$. Fix $a, a'\in S$ such that $a\ne a'$. Take $H\in |\enm{\cal{O}} _Y(1,0,0)|$ containing
$\{a,a'\}$. Since $S\nsubseteq H$, Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap H}(0,1,1)) >0$. Thus either there are $b,
b'\in S\setminus S\cap H$ such that $b\ne b'$ and $\eta _1(b)=\eta _1(b')$ or $\# (S\setminus S\cap H) =3$ and there is
$i\in \{2,3\}$ such that $\# (\pi _i(S\setminus S\cap H)) =1$. In the latter case there is $M\in |\enm{\cal{O}} _Y(\epsilon _i)|$
containing $S\setminus S\cap H$. Since $S\nsubseteq M$, Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap
M}(\hat{\epsilon}_i)) >0$. Thus $\# (S\setminus S\cap M) =2$, i.e., $S\setminus S\cap M = \{a,a'\}$ and $\eta _i(a) =\eta
_i(a')$. In particular we have $\pi _1(a) =\pi _1(a')$. Thus there is $H'\in |\enm{\cal{O}} _Y(1,0,0)|$ containing $\{a,a'\}$ and at
least another point of $H$. Using $H'$ instead of $H$ we exclude this case (but not the existence of $b, b'$ for $S\setminus
H'\cap S$), because $\# (S\setminus S\cap H') \le 2$.
Now assume the existence of $b, b'\in S\setminus S\cap H$ such that $b \ne b'$ and $\eta _1(b) =\eta _1(b')$. Thus there is $D\in |\enm{\cal{O}} _Y(0,0,1)|$ containing $\{b,b'\}$. Using Lemma \ref{ee0} we get $h^1(\enm{\cal{I}} _{S\setminus S\cap D}(1,1,0)) >0$. Thus one of the following cases occurs:
\begin{enumerate}
\item $S\cap D =\{b,b'\}$, $\pi _1(S\setminus S\cap D)$ is contained in a line and $\# (\pi _2(S\setminus S\cap D)) =1$;
\item there are $x, y\in S\setminus S\cap D$ such that $x\ne y$ and $\eta _3(x) =\eta _3(y)$.
\end{enumerate}
First assume $S\cap D =\{b,b'\}$, $\pi _1(S\setminus S\cap D)$ is contained in a line and $\# (\pi _2(S\setminus S\cap D)) =1$. There is $T\in |\enm{\cal{O}} _Y(\epsilon _2)|$
containing $S\setminus S\cap D$. Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{\{b,b'\}}(1,0,1)) >0$. Thus $\eta _2(b)=\eta _2(b')$. Since
$\eta _1(b)=\eta _1(b')$, we get $b=b'$, a contradiction.
Assume the existence of $x, y$ such that $x\ne y$ and $\eta _3(x)=\eta _3(y)$. Write $S = \{b,b',x,y,v\}$. There is $W\in |\enm{\cal{O}} _Y(1,0,0)|$ containing $b$ and $x$ and hence containing $y$. By Lemma \ref{ee0} we get $h^1(\enm{\cal{I}} _{S\setminus W}(0,1,1)) >0$.
Since $\# (S\setminus S\cap W)\le 2$, we get $S\setminus S\cap W =\{b',v\}$ and $\eta _1(b') =\eta _1(v)$. Using $D'\in |\enm{\cal{O}} _Y(0,1,0)|$ containing $\{b,b'\}$ instead of $D$
we get the existence of $x', y' \in \{x,y,v\}$ such that $\eta _2(x') =\eta _2(y')$ and $x'\ne y'$. We have $\{x',y'\}\cap \{x,y\} \ne \emptyset$. With no loss of generality we may assume $x=x'$.
Either $y' =y$ or $y'=v$. If $y'=y$, we get $x=y$, a contradiction. Thus $\eta _2(v) =\eta _2(y)$. Hence $\pi _1(x)=\pi _1(y) =\pi _1(v)$. Thus there is $W'\in |\enm{\cal{O}} _Y(1,0,0)|$ containing at least $4$ points of $S$. Take a general $W_1\in |\enm{\cal{O}} _Y(0,1,1)|$. Thus $S\cap W_1 =\emptyset$. Since $h^1(\enm{\cal{I}} _S(1,1,1)) >0$ and $W'\cup W_1$ contains at least $4$
points of $S$, we get $S\subset W'\cup W_1$. Thus $S\subset W'$. Since $W'$ is a proper multiprojective space of $Y$, we get a
contradiction.
\quad (g) As in step (f) we exclude all cases with $k\ge 3$ and $n_i\ge 2$ for at least one $i$.
\quad (h) Assume $k=4$ and $n_i=1$ for all $i$. Fix $o\in S$ and let $H$ be the only element of $|\enm{\cal{O}} _Y(\epsilon _4)|$
containing $o$.
Since $H$ is a multiprojective space, we have $S\nsubseteq H$. Thus Lemma \ref{ee0} gives $h^1(\enm{\cal{I}} _{S\setminus S\cap
H}(\hat{\epsilon}_4))>0$. Thus one of the following two cases occurs:
\begin{enumerate}
\item $\# (\eta _4(S)) =4$ and $h^1(Y_4,\enm{\cal{I}} _{\eta _4(S\setminus S\cap H)}(1,1,1)) >0$;
\item there are $u, v\in S\setminus S\cap H$ such that $u\ne v$ and $\eta _4(u) =\eta _4(v)$.
\end{enumerate}
\quad (h1) Assume that case (1) occurs. In this case $S\setminus S\cap H =S\setminus \{o\}$. By Proposition \ref{e2} there is
an integer
$i\in \{1,2,3\}$ such that $\# (\pi _i(S\setminus \{o\})) =1$. Thus there is $M\in |\enm{\cal{O}} _Y(\epsilon _i)|$ containing
$S\setminus \{o\}$. Take $W\in |\enm{\cal{O}} _Y(\hat{\epsilon}_4)|$ such that $W\cap S =\emptyset$. Since $H\cup W$ is an element
of $|\enm{\cal{O}} _Y(1,1,1,1)|$ containing at least $4$ points of $S$ and $h^1(\enm{\cal{I}} _S(1,1,1,1)) >0$, we have $S\subset H\cup W$. Since
$S\cap W=\emptyset$, we get $S\subset H$, a contradiction.
\quad (h2) By step (h1) there are $u, v\in S\setminus S\cap H$ such that $\eta _4(u)=\eta _4(v)$. For any $a\in S$ let
$H_a$ be the only element of $|\enm{\cal{O}} _Y(\epsilon _4)|$ containing $a$. Thus $H_o =H$. By step (h1) applied to $a$ instead of $o$
there are
$u_a,v_a\in S\setminus S\cap H_a$ such that $u_a\ne v_a$ and $\eta _4(u_a) = \eta _4(v_a)$. Set $E:= \cup _{a\in S} \{u_a,v_a\}$.
\quad \emph{Claim 1:} We have $\# (E) >2$ and for all $a, b\in S$ either $\{u_a,v_a\} =\{u_b,v_b\}$ or $\{u_a,v_a\}\cap \{u_b,v_b\} =\emptyset$.
\quad \emph{Proof of Claim 1:} Assume $\# (E) \le 2$, i.e., $\{u_a,v_a \} = \{u_b,v_b\}$ for all $a, b\in S$. Since $x\notin \{u_x,v_x\}$, taking $b = u_{u_a}$ we get a contradiction. Fix $a, b\in S$ such that $a\ne b$ and assume $\# (\{u_a,v_a\}\cap \{u_b,v_b\} )=1$, say $\{u_a,v_a\}\cap \{u_b,v_b\} =\{u_a\}$. Taking $x:= u_a$, $y:=v_a$ and $z:= v_b$, we find $x, y, z\in S$
such that $\# (\{x,y,z\})=3$ and $\nu (\{x,y,z\})$ is contained in a line of $\nu (Y)$. Thus $S$ is not a circuit, a contradiction.
The first assertion of Claim 1 gives $\# (E) >2$. Then the second assertion of Claim 1 gives $\# (E)\ge 4$.
There is $M\in |\enm{\cal{O}} _Y(\epsilon _1)|$ containing $E$. Since $S\nsubseteq M$, we first get $\# (E) =4$ and then (by Lemma \ref{ee0}) $h^1(\enm{\cal{I}} _{S\setminus S\cap M}(0,1,1,1)) >0$, contradicting the global spannedness of $\enm{\cal{O}} _Y(0,1,1,1)$.
\quad (i) Steps (g) and (h) exclude all cases with $k\ge 4$.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Event detection is an important application for which a wireless sensor
network ($\mathsf{WSN}$) is deployed. A number of sensor nodes (or ``motes'')
that can sense, compute, and communicate are deployed in a region of
interest ($\mathsf{ROI}$) in which the occurrence of an event (e.g., crack in a
structure) has to be detected. In our work, we view {\em an event as
being associated with a change in the distribution (or cumulative
distribution function) of a physical quantity that is sensed by the
sensor nodes}. Thus, the work we present in this paper is in the
framework of quickest detection of change in a random process. In the
case of small extent networks, where the coverage of every sensor spans
the whole $\mathsf{ROI}$, and where we assume that an event affects all the
sensor nodes in a statistically equivalent manner, we obtain the
classical change detection problem whose solution is well known
(see, for example, \cite{shiryayev63},
\cite{stat-sig-proc.page54continuous-inspection-schemes},
\cite{stat-sig-proc.tartakovsky-veeravalli03quickest-change-detection}).
In \cite{premkumar_etal10det_over_mac} and
\cite{premkumar_kumar08infocom}, we have studied variations of the
classical problem in the $\mathsf{WSN}$ context, where there is a wireless
communication network between the sensors and the fusion
centre~\cite{premkumar_etal10det_over_mac}, and where there is a cost
for taking sensor measurements~\cite{premkumar_kumar08infocom}.
However, in the case of large extent networks, where the $\mathsf{ROI}$ is large
compared to the coverage region of a sensor, an event (e.g., a crack in
a huge structure, gas leakage from a joint in a storage tank) affects
sensors that are in its proximity; further the effect depends on the
distances of the sensor nodes from the event. Since the location of the
event is unknown, {\em the post--change distribution of the observations
of the sensor nodes are not known}. In this paper, we are interested in
obtaining procedures for detecting and locating an event in a large
extent network. This problem is also referred to as {\em change
detection and isolation}
(see \cite{nikiforov95change_isolation},
\cite{nikiforov03lower-bound-for-det-isolation},
\cite{tartakovsky08multi-decision},
\cite{stat-sig-proc.mei05information-bounds},
\cite{lai00multi-hypothesis-testing}).
Since the $\mathsf{ROI}$ is large, a large number of sensors are deployed to
cover the $\mathsf{ROI}$, making a centralised solution infeasible. In our work,
{\em we seek distributed algorithms for detecting and locating an event,
with small detection delay, subject to constraints on false alarm and
false isolation}. The distributed algorithms require only local
information from the neighborhood of each node.
\subsection{Discussion of Related Literature}
The problem of sequential change detection/isolation with a finite set
of post--change hypotheses was introduced by Nikiforov
\cite{nikiforov95change_isolation}, where he
studied the change
detection/isolation problem with the observations being conditionally
independent, and proposed a non--Bayesian procedure which is shown to be
maximum mean detection/isolation delay optimal, as the average run
lengths to false alarm and false isolation go to $\infty$. Lai
\cite{lai00multi-hypothesis-testing} considered the multi--hypothesis
change detection/isolation problem with stationary pre--change and
post--change observations, and obtained asymptotic lower bounds for the
maximum mean detection/isolation delay.
Nikiforov also studied a change detection/isolation problem under the
average run length to false alarm ($\mathsf{ARL2FA}$) and the probability of false isolation
($\mathsf{PFI}$) constraints \cite{nikiforov03lower-bound-for-det-isolation}, in which he
showed that a ${\sf CUSUM}$--like {\em recursive} procedure is asymptotically
maximum mean detection/isolation delay optimal among the procedures that
satisfy $\mathsf{ARL2FA}\geq\gamma$ and $\mathsf{PFI}\leq\alpha$ asymptotically, as
$\min\{\gamma,1/\alpha\}\to\infty$. Tartakovsky in
\cite{tartakovsky08multi-decision} also studied the change
detection/isolation problem where he proposed recursive matrix ${\sf
CUSUM}$ and recursive matrix Shiryayev--Roberts tests, and showed that
they are asymptotically maximum mean delay optimal
subject to the constraints $\mathsf{ARL2FA}\geq\gamma$ and $\mathsf{PFI}\leq\alpha$ asymptotically, as
$\min\{\gamma,1/\alpha\}\to\infty$.
Malladi and Speyer \cite{malladi_speyer99shiryayev_isolation}
studied a Bayesian change detection/isolation problem and obtained a
mean delay optimal centralised procedure which is a threshold based rule
on the a posteriori probability of change corresponding to each
post--change hypothesis.
Centralised procedures incur high communication costs and distributed
procedures would be desirable. In this paper, we study distributed
procedures based on $\mathsf{CUSUM}$ detectors at the sensor nodes where the
$\mathsf{CUSUM}$ detector at sensor node $s$ is driven only by the observations
made at node $s$. Also, in the case of large extent networks, the
post--change distribution of the observations of a sensor node, in
general, depends on the distance between the event and the sensor node
which is unknown.
\subsection{Summary of Contributions}
\begin{enumerate}
\item As the ${\sf WSN}$ considered is of large extent, the post--change
distribution is unknown, and could belong to a set of
alternate hypotheses.
In Section~\ref{sec:problem_formulation}, we formulate the event
detection/isolation problem in a large extent network in the
framework of
\cite{nikiforov03lower-bound-for-det-isolation},
\cite{tartakovsky08multi-decision}
as a maximum mean detection/isolation delay minimisation problem subject to an
average run length to false alarm ($\mathsf{ARL2FA}$) and probability of
false isolation ($\mathsf{PFI}$) constraints.
\item We propose distributed detection/isolation procedures ${\sf
MAX}$, ${\sf ALL}$, and ${\sf HALL}$ ({\bf H}ysteresis modified
{\bf ALL}) for large extent networks in
Section~\ref{sec:distributed_change_detection_isolation_procedures}.
The procedures
${\sf MAX}$ and ${\sf ALL}$ are extensions of the
decentralised procedures $\sf{MAX}$
\cite{stat-sig-proc.tartakovsky-veeravalli03quickest-change-detection}
and $\sf{ALL}$ \cite{stat-sig-proc.mei05information-bounds},
\cite{agt-vvv08}, which were developed for small extent networks.
The distributed procedures
are energy--efficient compared to the centralised procedures.
Also, the known centralised procedures are applicable only for the
Boolean sensing model.
\item In
Section~\ref{sec:distributed_change_detection_isolation_procedures},
we first obtain bounds on $\mathsf{ARL2FA}$, $\mathsf{PFI}$, and maximum mean
detection/isolation delay ($\mathsf{SADD}$) for the distributed procedures
$\sf{MAX}$, $\sf{ALL}$, and $\sf{HALL}$. These bounds are then
applied to get an upper bound on the $\mathsf{SADD}$ for the procedures
when $\mathsf{ARL2FA} \geq \gamma$, and $\mathsf{PFI} \leq \alpha$, where $\gamma$
and $\alpha$ are some performance requirements. For the case of
the Boolean sensing model, we compare the $\mathsf{SADD}$
of the distributed procedures with
that of Nikiforov's procedure
\cite{nikiforov03lower-bound-for-det-isolation}
(a centralised asymptotically optimal procedure)
and show that
an asymptotic upper bound on the maximum mean
detection/isolation delay of our distributed procedure scales with
$\gamma$ and $\alpha$ in the same way as that of
\cite{nikiforov03lower-bound-for-det-isolation}.
\end{enumerate}
\section{System Model}
\label{sec:system_model}
Let $\mathcal{A} \subset \mathbb{R}^2$ be the region of interest
($\mathsf{ROI}$) in which $n$ sensor nodes are deployed. All nodes are equipped
with the same type of sensor (e.g., acoustic).
Let
$\ell^{(s)}\in\mathcal{A}$ be the location of sensor node $s$, and
define ${\bm \ell} :=[\ell^{(1)},\ell^{(2)},\cdots,\ell^{(n)}]$. We
consider a discrete--time system, with the basic unit of time being one
slot, indexed by $k=0,1,2,\cdots,$ the slot $k$ being the time interval
$[k,k+1)$. The sensor nodes are assumed to be time--synchronised (see,
for example, \cite{solis-etal06time-synch}), and at the beginning of
every slot $k \geqslant 1$, each sensor node $s$ samples its environment
and obtains the observation $X_k^{(s)}\in\mathbb{R}$.
\subsection{Change/Event Model}
An event (or change) occurs at an unknown time $T \in \{1,2,\cdots\}$
and at an unknown location $\ell_e \in \mathcal{A}$. We consider only
stationary (and permanent or persistent) point events, i.e., an event
occurs at a point in the region of interest, and {\em having occurred,
stays there forever}. Examples that would motivate such a model are 1)
gas leakage in the wall of a large storage tank, 2) excessive strain at
a point in a large 2--dimensional structure. In
\cite{tartakovsky-report} and \cite{premkumar_etal10iwap}, the authors
study change detection problems in which the event stays only for a
finite random amount of time.
An event is viewed as a source of some physical signal that can be
sensed by the sensor nodes. Let $h_e$ be the signal strength of the
event\footnote{In case the signal strength of the event is not known,
but is known to lie in an interval $[\underline{h}, \overline{h}]$, we
work with $h_e = \underline{h}$ as this corresponds to the least
Kullback--Leibler divergence between the ``{\em event not occurred}''
hypothesis and the ``{\em event occurred}'' hypothesis. See
\cite{stat-sig-proc.tartakovsky-polunchenko08change-det-with-unknown-param}
for change detection with unknown parameters for a collocated network.}.
A sensor at a distance $d$ from the event senses a signal $h_e \rho(d)
+ W$, where $W$ is a random zero mean noise, and $\rho(d)$ is
the distance dependent loss in signal strength which is a
decreasing function of the distance $d$, with $\rho(0)=1$. We
assume an isotropic distance dependent loss model, whereby the
signal received by all sensors at a distance $d$ (from the event)
is the same.
\begin{example} {\bf The Boolean model} (see \cite{sensing-models}): In
this model, the signal strength that a sensor receives is the same
(which is given by $h_e$) when the event occurs within a distance of
$r_d$ from the sensor and is 0 otherwise. Thus, for a Boolean
sensing model,
\begin{eqnarray*}
\rho(d) & = & \left\{
\begin{array}{ll}
1, & \text{if} \ d \leqslant r_d\\
0, & \text{otherwise}.
\end{array}
\right.
\end{eqnarray*}
\end{example}
\begin{example} {\bf The power law path--loss model} (see
\cite{sensing-models}) is given by
\begin{eqnarray*}
\rho(d) & = & d^{-\eta},
\end{eqnarray*}
for some path loss exponent $\eta > 0$. For free space, $\eta = 2$.
\end{example}
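As an illustrative sketch (not part of the paper), the two sensing models above can be coded as follows; the numerical values of $h_e$, $r_d$ and $\eta$ are arbitrary:

```python
def rho_boolean(d, r_d):
    """Boolean sensing model: full signal within the radius r_d, none beyond."""
    return 1.0 if d <= r_d else 0.0

def rho_power_law(d, eta=2.0):
    """Power law path-loss model: rho(d) = d**(-eta); eta = 2 for free space.
    (Evaluated only for d > 0 here.)"""
    return d ** (-eta)

# Mean received signal h_e * rho(d) at distance d from an event of strength h_e:
h_e = 4.0
print(h_e * rho_boolean(0.5, r_d=1.0))   # -> 4.0 (inside the detection radius)
print(h_e * rho_boolean(1.5, r_d=1.0))   # -> 0.0 (outside)
print(h_e * rho_power_law(2.0))          # -> 1.0 (4.0 * 2**-2)
```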
\subsection{Detection Region and Detection Partition}
\label{sec:detection-range}
In Example 2, we see that the signal from an event varies continuously
over the region. Hence, unlike the Boolean model, there is no clear
demarcation between the sensors that observe the event and those that do
not. Thus, in order to facilitate the design of a distributed detection
scheme with some performance guarantees, in the remainder of this
section, we will define certain regions around each sensor.
\begin{definition}
Given $0 < \mu_1 \leqslant h_e$, the {\bf Detection Range} $r_d$
of a sensor is defined as
the distance from the sensor within which the occurrence of an event
induces a signal level of at least $\mu_1$, i.e.,
\begin{align*}
r_d &:= \sup\left\{d : h_e \rho(d) \ge \mu_1\right\}.
\end{align*}
\hfill\hfill \rule{2.5mm}{2.5mm}
\end{definition}
\vspace{-4mm}
In the above definition, $\mu_1$ is a design parameter that defines
the acceptable detection delay. For a given signal strength $h_e$, a
large value of $\mu_1$ results in a small detection range $r_d$ (as
$\rho(d)$ is non--increasing in $d$). We will see in
Section~\ref{sec:average_detection_delay}
(Eqn.~\eqref{eqn:sadd_max_all_hall}) that the $\mathsf{SADD}$
of the distributed change detection/isolation procedures we propose,
depends on the detection range $r_d$, and that a small $r_d$ (i.e., a
large $\mu_1$) results in a small $\mathsf{SADD}$, while
requiring more sensors to be deployed in order to achieve coverage of
the ${\sf ROI}$.
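For instance, under the power law path--loss model of Example 2, the detection range follows directly from the definition:

```latex
\[
h_e d^{-\eta} \geqslant \mu_1
\quad\Longleftrightarrow\quad
d \leqslant \left(\frac{h_e}{\mu_1}\right)^{1/\eta},
\qquad\text{so that}\qquad
r_d = \left(\frac{h_e}{\mu_1}\right)^{1/\eta}.
\]
```

For example, with $\eta=2$ (free space), doubling $\mu_1$ shrinks $r_d$ by a factor of $\sqrt{2}$, which illustrates the trade--off between detection delay and the number of sensors needed to cover the $\mathsf{ROI}$.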
\begin{figure}[t]
\centering
\includegraphics[width = 55mm, height = 50mm]{coverage_new}
\caption{{\bf Partitioning of $\mathcal{A}$ in a large $\mathsf{WSN}$ by
detection regions}: (a simple example) The coloured solid
circles
around each sensor node denote their detection regions. The
four sensor nodes divide the $\mathsf{ROI}$, indicated
by the square region, into regions $\mathcal{A}_1, \cdots,
\mathcal{A}_6$ such that region $\mathcal{A}_i$ is
detection--covered by a unique set of sensors $\mathcal{N}_i$.
For example, ${\cal A}_1$ is detection covered by the set of
sensors ${\cal N}_1 = \{1,2,4\}$, etc.
}
\label{fig:coverage}
\end{figure}
We say that a location $x \in \mathsf{ROI}$ is {\em detection--covered} by
sensor node $s$, if $\|\ell^{(s)}-x\| \leqslant r_d$. For any
sensor node $s$, $\mathcal{D}^{(s)} := \{x \in \mathcal{A} :
\|\ell^{(s)}-x\| \leqslant r_d \}$ is called its {\em
detection--coverage region} (see Fig.~\ref{fig:coverage}). {\em We
assume that the sensor deployment is such that every $x \in \mathcal{A}$
is detection--covered by at least one sensor} (Fig.~\ref{fig:coverage}). For each $x \in \mathcal{A}$, define
$\mathcal{N}(x)$ to be the largest set of sensors by which $x$ is
detection--covered, i.e., $\mathcal{N}(x) := \{s : x \in {\cal
D}^{(s)}\}$. Let $\mathcal{C}(\mathcal{N}) = \{\mathcal{N}(x) : x \in
{\cal A} \}$. $\mathcal{C}(\mathcal{N})$ is a finite set and
can have at most $2^n-1$ elements. Let $N =
|\mathcal{C}(\mathcal{N})|$. For each $\mathcal{N}_i \in
\mathcal{C}(\mathcal{N})$, we denote the corresponding
detection--covered region by $\mathcal{A}_i = \mathcal{A}(\mathcal{N}_i)
:= \{x \in \mathsf{ROI} : \mathcal{N}(x) = \mathcal{N}_i \}$. Evidently, the
${\cal A}_i, 1 \leqslant i \leqslant N$, partition the $\mathsf{ROI}$. We say
that the $\mathsf{ROI}$ is {\em detection--partitioned} into a {\em minimum
number of subregions}, $\mathcal{A}_1, \mathcal{A}_2, \cdots,
\mathcal{A}_N$, such that the subregion $\mathcal{A}_i$ is
detection--covered by a unique set of sensors $\mathcal{N}_i$, and
$\mathcal{A}_i$ is the maximal detection--covered region of
$\mathcal{N}_i$, i.e., $\forall i \neq i'$, $\mathcal{N}_i \neq
\mathcal{N}_{i'}$ and $\mathcal{A}_i\cap \mathcal{A}_{i'}=\emptyset$.
See Fig.~\ref{fig:coverage} for an example.
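As an illustrative sketch (not part of the paper), the detection partition of Fig.~\ref{fig:coverage} can be approximated numerically by discretising the $\mathsf{ROI}$; the sensor locations, detection range and grid below are arbitrary:

```python
def detection_partition(sensors, r_d, grid):
    """Map each grid point x to N(x) = {s : ||l_s - x|| <= r_d}, then group
    the points by their covering set: each distinct set N_i yields the
    subregion A_i (here represented by its grid points)."""
    partition = {}  # frozenset N_i -> list of grid points forming A_i
    for x in grid:
        n_x = frozenset(
            s for s, l in enumerate(sensors)
            if (l[0] - x[0]) ** 2 + (l[1] - x[1]) ** 2 <= r_d ** 2
        )
        if n_x:  # assumes every point of the ROI is detection-covered
            partition.setdefault(n_x, []).append(x)
    return partition

# Two sensors on a line with overlapping detection discs of radius 1.5;
# the ROI segment splits into A({0}), A({0,1}) and A({1}):
sensors = [(0.0, 0.0), (2.0, 0.0)]
grid = [(x / 2, 0.0) for x in range(-2, 7)]  # -1.0, -0.5, ..., 3.0
for n_i, a_i in sorted(detection_partition(sensors, 1.5, grid).items(),
                       key=lambda kv: sorted(kv[0])):
    print(sorted(n_i), a_i)
```

The number of distinct keys of the returned dictionary plays the role of $N$, the number of subregions $\mathcal{A}_i$ in the detection partition.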
\subsection{Sensor Measurement Model}
\label{subsec:measurement_model}
Before change, i.e., for $k < T$, the observation $X_k^{(s)}$ at the
sensor $s$ is just the zero mean sensor noise $W_k^{(s)}$, the
probability density function (pdf) of which is denoted by $f_0(\cdot)$
({\em pre--change pdf}). After change, i.e., for $k \geqslant T$ with the
location of the event being $\ell_e$, the observation of sensor $s$ is
given by $X_k^{(s)} = h_e\rho(d_{e,s}) + W_k^{(s)}$ where $d_{e,s} :=
\|\ell^{(s)} - \ell_e\|$, the pdf of which is denoted by
$f_1(\cdot;d_{e,s})$ ({\em post--change pdf}). The noise processes
$\{W_k^{(s)}\}$ are independent and identically distributed (iid) across
time and across sensor nodes. In the rest of the paper, {\em we consider
$f_0(\cdot)$ to be Gaussian with mean 0 and variance
$\sigma^2$.}
We denote the probability measure when the change happens at time $T$
and at location $\ell_e$ by ${\sf P}^{({\bf
d}(\ell_e))}_{T}\left\{\cdot\right\}$, where ${\bf d}(\ell_e) =
[d_{e,1},d_{e,2},\cdots,d_{e,n}]$, and the corresponding expectation
operator by ${\sf E}^{({\bf d}(\ell_e))}_{T}\left[\cdot\right]$. In the
case of Boolean sensing model, the post--change pdfs depend only on the
detection subregion where the event occurs, and hence, we denote the
probability measure when the event occurs at $\ell_e \in {\cal A}_i$ and
at time $T$ by ${\sf P}^{(i)}_{T}\left\{\cdot\right\}$, and the
corresponding expectation operator by ${\sf
E}^{(i)}_{T}\left[\cdot\right]$.
\subsection{Local Change Detectors}
We compute a $\mathsf{CUSUM}$ statistic $C_k^{(s)}, k\geq 1$ at each sensor $s$
based only on its own observations.
The $\mathsf{CUSUM}$ procedure was proposed by Page
\cite{stat-sig-proc.page54continuous-inspection-schemes} as a solution
to the classical change detection problem (${\sf CDP}$, in which there
is one pre--change hypothesis and only one post--change hypothesis).
The optimality of $\mathsf{CUSUM}$ was shown for conditionally iid observations
by Moustakides in \cite{moustakides86optimal-stopping-times} for a
maximum mean delay metric introduced by Pollak \cite{pollak85} which is
$\mathsf{SADD}(\tau) :=$ $\underset{T \geqslant 1}{\sup} \ \
{\mathsf E}_T\left[\tau-T|\tau \geq T\right]$.
The driving term of $\mathsf{CUSUM}$ should
be the log likelihood--ratio (LLR) of $X_k^{(s)}$ defined as
$Z^{(s)}_k(d_{e,s}) :=
\ln\left(\frac{f_1(X_k^{(s)};d_{e,s})}{f_0(X_k^{(s)})}\right)$. As the
location of the event $\ell_e$ is unknown, the distance $d_{e,s}$ is
also unknown. Hence, one cannot work with the pdfs $f_1(\cdot;d_{e,s})$.
We propose to drive the $\mathsf{CUSUM}$ at each node $s$ with
$Z^{(s)}_k(r_d)$, where we recall that $r_d$ is the
detection range of a sensor. Based on the $\mathsf{CUSUM}$ statistic
$C_k^{(s)}, k\geq 1$, sensor $s$ computes a sequence of local decisions
$D_k^{(s)} \in \{0,1\}, k \geq 1$, where 0 represents no--change and 1
represents change. For each set of sensor nodes ${\cal N}_i$ that
detection partitions the $\mathsf{ROI}$, we define $\tau^{({\cal N}_i)}$, the
stopping time (based on the sequence of local decisions $D_k^{(s)}$s for
all $s \in {\cal N}_i$) at which the set of sensors ${\cal N}_i$ detects
the event. The way we obtain the local decisions $D_k^{(s)}$ from the
$\mathsf{CUSUM}$ statistic $C_k^{(s)}, k \geq 1$, and the way these local
decisions determine the stopping times $\tau^{({\cal N}_i)}$, varies
from rule to rule. Specific rules for local decision and the fusion of
local decisions will be described in
Section~\ref{sec:distributed_change_detection_isolation_procedures}
(also see \cite{mandar-etalfusum}).
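To make the local detector concrete, the following minimal sketch (illustrative; as noted above, the paper's local decision rules vary from procedure to procedure) implements Page's recursion $C_k = \max\{0,\, C_{k-1} + Z_k(r_d)\}$ with the Gaussian log likelihood--ratio, together with a simple threshold rule $D_k = 1$ whenever $C_k \geq c$; all numerical values are arbitrary:

```python
def local_cusum_decisions(xs, mu1, sigma, c):
    """Local CUSUM at one sensor, driven by the LLR Z_k(r_d) of a Gaussian
    shift in mean from N(0, sigma^2) to N(mu1, sigma^2), mu1 = h_e * rho(r_d).
    Returns the sequence of local decisions D_k (1 when C_k >= c)."""
    C, decisions = 0.0, []
    for x in xs:
        z = (mu1 / sigma ** 2) * x - mu1 ** 2 / (2 * sigma ** 2)  # LLR of x
        C = max(0.0, C + z)          # Page's CUSUM recursion
        decisions.append(1 if C >= c else 0)
    return decisions

# Noise-like samples keep C pinned near 0; a sustained shift drives it over c:
xs = [0.1, -0.2, 0.0] + [2.0] * 5        # illustrative change at k = 4
print(local_cusum_decisions(xs, mu1=2.0, sigma=1.0, c=4.0))
# -> [0, 0, 0, 0, 1, 1, 1, 1]
```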
An implementation strategy for our distributed event detection/isolation
procedure can be the following. We assume that the sensors know to
which detection sensor sets ${\cal N}_i$s they belong. This could be
done by initial configuration or by self--organisation. When the local
decision of sensor $s$ is 1, it broadcasts this fact to
all sensors in its detection neighbourhood. In practice, the broadcast
range of these radios is substantially larger than the detection range.
Hence, the local decision of $s$ is learnt by all sensors $s'$ that
belong to ${\cal N}_i$ to which $s$ belongs. When any node learns that
all the sensors in ${\cal N}_i$ have reached the local decision 1, it
transmits an alarm message to the base station \cite{thuli09thesis}. A
distributed leader election algorithm can be implemented so that only
one, or a controlled number of alarms is sent.
This alarm message is
carried by geographical forwarding \cite{naveen-kumar10geo-forwarding}.
A system that utilises such local fusion (but with a different sensing
and detection model) was developed by us and is reported in
\cite{wsn10smart-detect}.
\subsection{Influence Region}
\label{subsec:influence_region}
After a set of nodes ${\cal N}_i$ declares an event, the
event is {\em isolated} to a region associated with ${\cal N}_i$
called the influence region.
In the Boolean sensing model, if an event occurs in ${\cal A}_i$, then
only the sensors $s \in {\cal N}_i$ observe the event, while the other
sensors ${s' \notin {\cal N}_i}$ only observe noise. On the other hand,
in the power law path--loss model, sensors ${s' \notin {\cal
N}_i}$ can also observe the event, and the driving term of the $\mathsf{CUSUM}$s of sensors
$s'$ may be affected by the event. The mean of the driving term of
$\mathsf{CUSUM}$ of any sensor $s$ is given by
\begin{eqnarray}\label{eqn:proof_of_lemma_1}
{\mathsf E}_{f_1(\cdot;d_{e,s})} [Z_k^{(s)}(r_d)]
& = & \frac{(h_e\rho(r_d))^2}{2\sigma^2}
\left(\frac{2\rho(d_{e,s})}{\rho(r_d)} - 1\right).
\end{eqnarray}
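A brief sketch of how \eqref{eqn:proof_of_lemma_1} is obtained, assuming (this is our reading of the formula, the model being defined in an earlier section) the Gaussian observation model $f_0 = \mathcal{N}(0,\sigma^2)$ and $f_1(\cdot\,;d) = \mathcal{N}(h_e\rho(d),\sigma^2)$:

```latex
% Sketch, assuming f_0 = N(0, sigma^2) and f_1(x; d) = N(h_e rho(d), sigma^2):
Z_k^{(s)}(r_d)
  \;=\; \ln\frac{f_1\big(X_k^{(s)}; r_d\big)}{f_0\big(X_k^{(s)}\big)}
  \;=\; \frac{h_e\rho(r_d)}{\sigma^2}\,X_k^{(s)}
        \;-\; \frac{(h_e\rho(r_d))^2}{2\sigma^2},
```

and, under $f_1(\cdot\,;d_{e,s})$, ${\sf E}[X_k^{(s)}] = h_e\rho(d_{e,s})$, which yields \eqref{eqn:proof_of_lemma_1} on substitution.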
Thus, the mean of the increment that drives the $\mathsf{CUSUM}$ of node
$s$ decreases with $d_{e,s}$ and becomes negative when $2\rho(d_{e,s}) <
\rho(r_d)$. In this regime, we are interested in finding $T_E$, the
expected time for the $\mathsf{CUSUM}$ statistic $C_k^{(s)}$ to cross the
threshold $c$. Define $\tau^{(s)} := \inf \left\{k: C_k^{(s)} \geqslant
c\right\}$, and hence, $T_E = {\sf E}_{1}^{({\bf
d}(\ell_e))}\left[\tau^{(s)}\right]$.
\begin{lemma}
\label{lem:mean-time-tfi}
If the distance between sensor node $s$ and the event, $d_{e,s}$ is such
that $2\rho(d_{e,s}) < \rho(r_d)$, then
\begin{eqnarray*}
T_E & \geqslant &\exp(\omega_0 c)
\end{eqnarray*}
where
$\omega_0 = 1 - \frac{2\rho(d_{e,s})}{\rho(r_d)}$.
\end{lemma}
\begin{proof}
From Eqn.~5.2.79, pg.~177, of \cite{basseville-nikiforov93detection}, we
can show that ${\sf E}_{1}^{({\bf d}(\ell_e))}\left[\tau^{(s)}\right]
\geqslant \exp(\omega_0 c)$ where $\omega_0$ is the nonzero solution to the
equation
\[
{\sf E}_{1}^{({\bf d}(\ell_e))}\left[ e^{\omega_0 Z_k^{(s)}(r_d)}\right] = 1,
\]
which is given by $\omega_0 = 1 - \frac{2\rho(d_{e,s})}{\rho(r_d)}$
(see Eqn.~\eqref{eqn:proof_of_lemma_1}).
\end{proof}
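The root $\omega_0$ can be checked numerically. The sketch below (our own illustration, with arbitrary parameter values) evaluates ${\sf E}[e^{\omega Z}]$ in closed form for a Gaussian increment $Z \sim \mathcal{N}(\mu, v)$, with $\mu$ as in \eqref{eqn:proof_of_lemma_1} and $v = (h_e\rho(r_d))^2/\sigma^2$, and verifies that $\omega_0 = 1 - 2\rho(d_{e,s})/\rho(r_d)$ is the nonzero root of the moment generating function condition ${\sf E}[e^{\omega Z}] = 1$.

```python
import math

# Illustrative numeric check of the root in Lemma 1; parameter values
# are arbitrary and chosen so that 2*rho(d_{e,s}) < rho(r_d).
h_e, sigma = 1.0, 1.0
rho_rd = 1.0          # rho(r_d)
rho_d = 0.3           # rho(d_{e,s})

# Mean and variance of the Gaussian CUSUM increment Z under the event.
mu = (h_e * rho_rd) ** 2 / (2 * sigma ** 2) * (2 * rho_d / rho_rd - 1)
v = (h_e * rho_rd) ** 2 / sigma ** 2

def mgf(omega):
    """E[exp(omega * Z)] for Z ~ N(mu, v), in closed form."""
    return math.exp(omega * mu + omega ** 2 * v / 2)

omega_0 = 1 - 2 * rho_d / rho_rd   # claimed nonzero root: here 0.4
# mgf(omega_0) equals 1 up to floating point
```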
We would be interested in $T_E \geqslant
\exp(\underline{\omega}_0\cdot c)$ for some $0 < \underline{\omega}_0 < 1$.
We now define the {\em influence
range} of a sensor as follows.
\begin{definition}{\bf Influence Range} of a sensor, $r_i$, is
defined as the distance from the sensor beyond which an event drives
the local $\mathsf{CUSUM}$ statistic of the sensor across the threshold only
after a mean delay of at least $\exp{(\underline{\omega}_0 c)}$, where
$\underline{\omega}_0$ is a design parameter and $c$ is the threshold of
the local $\mathsf{CUSUM}$ detector.
Using Lemma~\ref{lem:mean-time-tfi}, we see that
$r_i =\min\{d' : 2\rho(d') \leqslant (1 - \underline{\omega}_0)
\rho(r_d)\}$.
\hfill \rule{2.5mm}{2.5mm}
\end{definition}
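As a concrete illustration (our own, not from the text): for a power law path--loss of the form $\rho(d) = d^{-\eta}$, the condition $2\rho(d') \leqslant (1-\underline{\omega}_0)\rho(r_d)$ solves in closed form to $r_i = r_d\,(2/(1-\underline{\omega}_0))^{1/\eta}$, so that $r_i > r_d$, and $r_i$ grows as $\underline{\omega}_0 \to 1$.

```python
# Illustrative computation of the influence range r_i for the power law
# path-loss rho(d) = d**(-eta); all parameter values are arbitrary.

def influence_range(r_d, omega_lb, eta):
    """Smallest d' with 2*rho(d') <= (1 - omega_lb)*rho(r_d),
    for rho(d) = d**(-eta); omega_lb plays the role of omega-underbar_0."""
    return r_d * (2.0 / (1.0 - omega_lb)) ** (1.0 / eta)

r_i = influence_range(r_d=1.0, omega_lb=0.5, eta=2.0)
# (2 / 0.5)**(1/2) = 2.0 > r_d, and r_i increases with omega_lb
```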
A location $x \in {\cal A}$ is influence covered by a sensor
$s$ if $\|\ell^{(s)}-x\| \leq r_i$, and a set of sensors ${\cal
N}_j$ is said to influence cover $x$ if each sensor $s \in {\cal N}_j$
influence covers $x$.
From Lemma~\ref{lem:mean-time-tfi}, we see that by having a large value
of $\underline{\omega}_0$, i.e., $\underline{\omega}_0$ close to 1, the
sensors that are beyond a distance of $r_i$ from the event take a
long time to cross the threshold. However, we see from the definition
of influence range that a large value of $\underline{\omega}_0$ gives a
large influence range $r_i$. We will see from the discussion in
Section~\ref{subsec:discussion} that a large influence range results in
the isolation of the event to a large subregion of ${\cal A}$. On the
other hand, from Section~\ref{sec:average_time_to_false_isolation}, we
will see that a large $\underline{\omega}_0$ decreases the probability
of false isolation, a performance metric of change detection/isolation
procedure, which we define in Section~\ref{sec:problem_formulation}.
We define the {\em influence--region} of sensor $s$ as
$\mathcal{T}^{(s)} \ := \ \{x \in \mathcal{A} : \|\ell^{(s)}-x\|
\leqslant r_i \}$. For the Boolean sensing model, $r_i
= r_d$, and hence, ${\cal D}^{(s)} = {\cal T}^{(s)}$ for all $1
\leqslant s \leqslant n$, and for the power law path--loss sensing
model, $r_i > r_d$, and hence, ${\cal D}^{(s)} \subset {\cal
T}^{(s)}$ for all $1 \leqslant s \leqslant n$
(see Fig.~\ref{fig:tfi_coverage}).
\begin{figure}[t]
\centering
\subfigure[Detection and influence regions of the Boolean model]
{
\includegraphics[width = 45mm, height = 45mm]{coverage_new}
\label{fig:coverage_new_left}
}
\hspace{10mm}
\subfigure[Detection and influence regions of the power law path loss model]
{
\includegraphics[width = 45mm, height = 45mm]{tfi_coverage_new}
}
\caption{{\bf Influence and detection regions}:
A simple example of partitioning of $\mathcal{A}$ in a large
$\mathsf{WSN}$. The coloured solid circles around each sensor node denote
their detection regions. The four sensor nodes, in the figure,
divide the $\mathsf{ROI}$, indicated by the square region, into regions
$\mathcal{A}_1, \cdots, \mathcal{A}_6$ such that region
$\mathcal{A}_i$ is detection--covered by a unique set of
sensors $\mathcal{N}_i$. The
dashed circles represent the influence regions. In the Boolean
model, the influence region of a sensor coincides with
its detection region.
}
\label{fig:tfi_coverage}
\end{figure}
Recalling the sets of sensors ${\cal N}_i$, $1 \leqslant i \leqslant N$,
defined in Section~\ref{sec:detection-range}, we define the {\em
influence region of the set of sensors} $\mathcal{N}_i$ as the region
$\mathcal{B}_i$ such that each $x \in \mathcal{B}_i$ is within the
influence range of all the sensors in $\mathcal{N}_i$, i.e.,
$\mathcal{B}_i \ := \ {\cal B}({\cal N}_i) \
:= \bigcap_{s \in \mathcal{N}_i } \mathcal{T}^{(s)}$.
Note that $\mathcal{A}(\mathcal{N}_i) = \left(\underset{s \in
\mathcal{N}_i }{\bigcap} \mathcal{D}^{(s)}\right) \bigcap
\left(\underset{s' \notin \mathcal{N}_i }{\bigcap}
\overline{\mathcal{D}^{(s')}}\right)$, where $\overline{{\cal D}}$ is
the complement of the set ${\cal D}$, and ${\cal D}^{(s)} \subseteq
{\cal T}^{(s)}$. Hence, $\mathcal{A}(\mathcal{N}_i) \subseteq
\mathcal{B}(\mathcal{N}_i)$. For the power law path--loss sensing model,
${\cal D}^{(s)} \subset {\cal T}^{(s)}$ for all $1 \leqslant s \leqslant
n$, and hence, $\mathcal{A}(\mathcal{N}_i) \subset
\mathcal{B}(\mathcal{N}_i)$ for all $1 \leqslant i \leqslant N$. For the
Boolean sensing model,
$\mathcal{A}(\mathcal{N}_i) =
\mathcal{B}(\mathcal{N}_i)
\bigcap
\left(\underset{s' \notin \mathcal{N}_i }{\bigcap}
\overline{\mathcal{D}^{(s')}}\right)$, and hence
$\mathcal{A}(\mathcal{N}_i) =
\mathcal{B}(\mathcal{N}_i)$ only when ${\cal N}_i = \{1,2,\cdots,n\}$.
Thus, for a general sensing model, $\mathcal{A}(\mathcal{N}_i) \subseteq
\mathcal{B}(\mathcal{N}_i)$. We note here that,
in the Boolean and the power law path--loss models,
an event which does not lie in the detection subregion of ${\cal N}_i$,
but lies in its influence subregion (i.e., $\ell_e \in
\mathcal{B}(\mathcal{N}_i)\setminus\mathcal{A}(\mathcal{N}_i)$), can
be detected due to ${\cal N}_i$ because of the stochastic nature
of the observations; in the power law path--loss sensing model, this is
also because of the difference in the losses $\rho(d_{e,s})$ across
sensors.
\noindent
{\bf Remark:} The definitions of the
detection and influence ranges involve two design
parameters, $\mu_1$ and $\underline{\omega}_0$, which can be used to
``tune'' the performance of the distributed detection schemes
that we develop.
\hfill \rule{2.5mm}{2.5mm}
\vspace{-4mm}
\subsection{Isolating the Event}
\label{subsec:discussion}
In Section II D, we provided an outline of a class of distributed
detection procedures that will yield a stopping rule. On stopping, a
decision for the location of the event is made, which is called {\em
isolation.} In
Section~\ref{sec:distributed_change_detection_isolation_procedures}, we
will provide specific distributed detection/isolation procedures in
which stopping will be due to one of the sensor sets ${\cal N}_i$.
An event occurring at location $\ell_e \in {\cal A}_i$ can influence
sensors $s'$ which influence cover $\ell_e$, and hence, the detection
can be due to a set of sensors ${\cal N}_j \neq {\cal N}_i$ which influence
covers $\ell_e$. Thus, we isolate the event to the influence region of the
sensors that detect the event.
Because of noise,
detection can be due to a sensor set ${\cal N}_{h}$ which does not
influence cover the event. Such an error event is called false
isolation.
An event occurring at $\ell_e \in {\cal
A}_i$ is influence covered by sensors $s' \in {\cal N}(\ell_e) : = \{s:
\|\ell^{(s)}-\ell_e\| \leq r_i\}$. Hence, the detection due to
any ${\cal N}_j \subseteq {\cal N}(\ell_e)$ corresponds to the isolation of the event, and that due to
${\cal N}_j \not\subseteq {\cal N}(\ell_e)$ corresponds
to false isolation. Note that, in the case of the Boolean sensing model,
${\cal N}(\ell_e) = {\cal N}_i$.
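The isolation-correctness criterion above amounts to a subset test, which can be sketched as follows (the function and variable names are our own, for illustration only):

```python
# Sketch of the isolation-correctness test described above: a detection
# by N_j correctly isolates an event at ell_e iff N_j is a subset of
# N(ell_e), the set of sensors that influence-cover ell_e.

def influence_cover(sensor_locations, ell_e, r_i):
    """N(ell_e): sensors within influence range r_i of the event."""
    return {s for s, (x, y) in sensor_locations.items()
            if ((x - ell_e[0]) ** 2 + (y - ell_e[1]) ** 2) ** 0.5 <= r_i}

def is_false_isolation(detecting_set, sensor_locations, ell_e, r_i):
    cover = influence_cover(sensor_locations, ell_e, r_i)
    return not detecting_set <= cover   # subset test on Python sets

# Example: three sensors, event within influence range of 1 and 2 only.
locs = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (5.0, 0.0)}
event = (0.5, 0.0)
# Detection by {1, 2} isolates correctly; detection by {2, 3} is false.
```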
In Section~\ref{sec:problem_formulation}, we formulate the problem of
quickest detection of an event and {\em isolating the event to one of
the influence subregions $\mathcal{B}_1,
\mathcal{B}_2,\cdots,\mathcal{B}_N$} under a false alarm and false
isolation constraint.
\section{Problem Formulation}
\label{sec:problem_formulation}
We are interested in studying the {\em problem of distributed event
detection/isolation} in the setting developed in
Section~\ref{sec:system_model}. Given a sample node deployment (i.e.,
given ${\bm\ell}$), and {\em having chosen a value of the detection
range, $r_d$}, we partition the $\mathsf{ROI}$, $\mathcal{A}$ into the
detection--subregions, $\mathcal{A}_1, \mathcal{A}_2, \cdots,
\mathcal{A}_N$. Let $\mathcal{N}_i$ be the set of sensors that
detection--cover the region $\mathcal{A}_i$. Having chosen the influence
range $r_i$, the influence region $\mathcal{B}_i$ of the set of
sensor nodes $\mathcal{N}_i$ can be obtained. We define the following
set of hypotheses
\begin{eqnarray*}
{\bf H}_0 & : & \text{event not occurred}, \\
{\bf H}_{T,i} & : & \text{event occurred at time $T$ in subregion}
\ \mathcal{A}_i, \ \ T=1,2,\cdots, \ i=1,2,\cdots,N.
\end{eqnarray*}
The event occurs in one of the detection subregions ${\cal A}_i$, but we
will only be able to isolate it to one of the influence subregions
${\cal B}_i$ that is consistent with the ${\cal A}_i$ (see
Section~\ref{subsec:discussion}). We study
distributed procedures described by a stopping time $\tau$, and an
isolation decision $L(\tau) \in \{1,2,\cdots,N\}$ (i.e., the tuple
$(\tau, L)$) that detect an event at time $\tau$ and locate it to
$L(\tau)$ (i.e., to the influence region $\mathcal{B}_{L(\tau)}$) subject to a false alarm
and false isolation constraint. The {\em false alarm constraint}
considered is the average run length to false alarm $\mathsf{ARL2FA}$, and the
{\em false isolation constraint} considered is the probability of false
isolation $\mathsf{PFI}$, each of which we define as follows.
\begin{definition}
\label{def:overall_tfa}
The {\bf Average Run Length to False Alarm} $\mathsf{ARL2FA}$ of a change
detection/isolation procedure $\tau$ is defined as the
expected number of samples taken under null hypothesis ${\bf H}_0$
to raise an alarm, i.e.,
\begin{eqnarray*}
\mathsf{ARL2FA}(\tau) & := & {\sf E}_\infty\left[\tau\right],
\end{eqnarray*}
where ${\sf E}_\infty[\cdot]$ is the expectation operator (with the
corresponding probability measure being ${\sf P}_\infty\{\cdot\}$) when the
change occurs at infinity.
\hfill \rule{2.5mm}{2.5mm}
\end{definition}
\begin{definition}
\label{def:overall_pfi}
The {\bf Probability of False Isolation} $\mathsf{PFI}$ of a change
detection/isolation procedure $\tau$ is defined as the
supremum of the probabilities of making an incorrect isolation decision,
i.e.,
\begin{eqnarray*}
\mathsf{PFI}(\tau) \ := \
\max_{1 \leq i \leq N} \
\sup_{\ell_e \in {\cal A}_i} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq{\cal N}(\ell_e)} \
{\mathsf P}_1^{({\bf d}(\ell_{e}))}\left\{L(\tau) = j \right\}
\end{eqnarray*}
where we recall that ${\cal N}(\ell_e) = \{s:\|\ell^{(s)}-\ell_e\|\leq
r_i\}$ is the set of sensors that influence cover $\ell_e
\in {\cal A}_i$.
\hfill \rule{2.5mm}{2.5mm}
\end{definition}
In the case of the Boolean sensing model, the post--change pdfs depend only
on the index $i$ of the detection subregion where the event occurs, and
hence, the $\mathsf{PFI}$ is given by
\begin{eqnarray*}
\mathsf{PFI}(\tau) \ := \
\max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq{\cal N}_i} \
{\mathsf P}_1^{(i)}\left\{L(\tau) = j \right\}.
\end{eqnarray*}
In
\cite{nikiforov03lower-bound-for-det-isolation},
Nikiforov defined the probability of false isolation, also, over the set
of all possible change times, as
${\sf SPFI}(\tau)
\ := \
\sup_{1 \leq i \leq N} \ \
\sup_{1 \leq j\neq i \leq N} \ \
\sup_{T \geq 1} \ {\mathsf P}_T^{(i)}\left\{L(\tau) = j \mid \tau \geq
T\right\}$.
Define the following classes of change detection/isolation procedures,
\begin{eqnarray*}
{\Delta}(\gamma,\alpha) &:=& \left\{(\tau,L): \mathsf{ARL2FA}(\tau) \geq \gamma,
{\sf SPFI}(\tau) \leqslant \alpha \right\}, \\
\widetilde\Delta(\gamma,\alpha) &:=& \left\{(\tau,L): \mathsf{ARL2FA}(\tau) \geq \gamma,
\mathsf{PFI}(\tau) \leqslant \alpha \right\}.
\end{eqnarray*}
We define the supremum average
detection delay ($\mathsf{SADD}$) performance of a procedure $\tau$, in the same
sense as Pollak \cite{pollak85} (also see
\cite{nikiforov03lower-bound-for-det-isolation}), as the maximum, over the
hypotheses ${\bf H}_{T,i}$, $T \geq 1$, $i=1,2,\cdots,N$, of the mean
delay to raise an alarm, i.e.,
\begin{eqnarray*}
\mathsf{SADD}(\tau) :=
\underset{\ell_e \in \mathcal{A}}{\sup} \
\underset{T \geqslant 1}{\sup} \ {\mathsf E}^{({\bf
d}({\ell_e}))}_T\left[\tau-T|\tau \geq T\right].
\end{eqnarray*}
We are interested in obtaining an
optimal procedure $\tau$ that minimises the $\mathsf{SADD}$ subject to the
average run length to false alarm and the probability of false isolation
constraints,
\begin{eqnarray*}
\inf_{(\tau,L)}
& &
\underset{\ell_e \in \mathcal{A}}{\sup} \
\underset{T \geqslant 1}{\sup} \
{\mathsf E}^{({\bf d}(\ell_e))}_T\left[\tau-T|\tau \geq T \right]\\
\text{subject to} & &
\begin{array}{rcl}
\mathsf{ARL2FA}(\tau) & \geqslant & \gamma\\
\mathsf{PFI}(\tau) & \leqslant & \alpha.
\end{array}
\end{eqnarray*}
The change detection/isolation problem that we pose here is motivated
by the framework of
\cite{nikiforov95change_isolation},
\cite{nikiforov03lower-bound-for-det-isolation},
\cite{tartakovsky08multi-decision},
which we discuss in the next subsection.
\subsection{Centralised Recursive Solution for the Boolean Sensing Model}
In \cite{nikiforov03lower-bound-for-det-isolation}, Nikiforov, and in
\cite{tartakovsky08multi-decision}, Tartakovsky, studied a change
detection/isolation problem that involves $N > 1$ post--change
hypotheses (and one pre--change hypothesis). Thus, their formulation can
be applied to our problem. But, in their model, the pdf $g_i$ of
${\bf X}_k$ for $k \geq T$, under hypothesis ${\bf H}_{T,i}$,
is completely known. It should be
noted that in our problem, in the case of power law path--loss sensing
model, the pdf of the observations under any post--change hypothesis is
unknown as the location of the event is unknown. The problem posed by
Nikiforov
\cite{nikiforov03lower-bound-for-det-isolation} is
\begin{eqnarray}
\label{eqn:problem1}
\inf_{(\tau,L) \in {\Delta}(\gamma,\alpha)}
& &
\underset{1 \leqslant i \leqslant N}{\sup} \
\underset{T \geqslant 1}{\sup} \ {\mathsf E}^{(i)}_T\left[\tau-T|\tau \geq T\right],
\end{eqnarray}
and that by Tartakovsky \cite{tartakovsky08multi-decision} is
\begin{eqnarray}
\label{eqn:problem2}
\inf_{(\tau,L)\in\widetilde{\Delta}(\gamma,\alpha)}
& &
\underset{1 \leqslant i \leqslant N}{\sup} \
\underset{T \geqslant 1}{\sup} \ {\mathsf E}^{(i)}_T\left[\tau-T|\tau \geq T\right].
\end{eqnarray}
Nikiforov
\cite{nikiforov03lower-bound-for-det-isolation} and Tartakovsky
\cite{tartakovsky08multi-decision} obtained
asymptotically optimal {\em centralised change detection/isolation}
procedures as $\min\{\gamma,\frac{1}{\alpha}\} \to \infty$, the $\mathsf{SADD}$
of which is given by the following theorem.
\begin{theorem}[Nikiforov 03]
\label{thm:niki}
For the $N$--hypotheses change detection/isolation problem (for the Boolean
sensing model) defined in
Eqn.~\eqref{eqn:problem1}, the asymptotically maximum mean delay optimal
detection/isolation procedure
$\tau^{\sf *}$ has the property,
\begin{eqnarray*}
\label{eqn:niki-add}
\mathsf{SADD}(\tau^{\sf *}) & \stackrel{\leq}{\sim} & \max\left\{\frac{\ln \gamma}{
\underset{1\leqslant i \leqslant N}{\min} \ \ \ \mbox{KL}(g_i,g_0)
}, \frac{-\ln(\alpha)}{
\underset{
1 \leqslant i \leqslant N,
1 \leqslant j \neq i
\leqslant N}{\min} \ \ \mbox{KL}(g_i,g_j)
}\right\}, \ \ \text{as} \ \min\left\{\gamma,\frac{1}{\alpha}\right\} \to \infty,
\end{eqnarray*}
where $\mbox{KL}(\cdot,\cdot)$ is the Kullback--Leibler divergence
function, and $g_i$ is the pdf of the
observation ${\bf X}_k$ for $k \geq T$ under hypothesis ${\bf H}_{T,i}$.
\hfill \rule{2.5mm}{2.5mm}
\end{theorem}
\noindent
{\bf Remark:}
Since
${\Delta}(\gamma,\alpha) \subseteq
\widetilde{\Delta}(\gamma,\alpha)$, the
asymptotic upper bound on the $\mathsf{SADD}$ of $\tau^*$ is also an asymptotic
upper bound on the minimum $\mathsf{SADD}$ achievable over the class
$\widetilde{\Delta}(\gamma,\alpha)$.
In the case of the Boolean sensing model, for any post--change hypothesis
${\bf H}_{T,i}$, only the set of sensor nodes that
detection cover (which is the same as influence cover) the subregion
${\cal A}_i$ switch to a post--change pdf $f_1$ (and the observations of
the other sensor nodes continue to have pdf $f_0$). Since the sensor
observations are conditionally i.i.d., the pdf of the observation
vector, in the Boolean sensing model, corresponds to the post--change
pdf $g_i$ of the centralised problem studied by Nikiforov
\cite{nikiforov03lower-bound-for-det-isolation} and by Tartakovsky
\cite{tartakovsky08multi-decision}.
Thus, their problem directly applies to our setting with the Boolean
sensing model. In our work, however, we propose algorithms for the
change detection/isolation problem for the power law sensing model as
well. Also, the procedures proposed by Nikiforov and by Tartakovsky are
(while being recursive) centralised, whereas we propose distributed procedures which are
computationally simple.
In Section~\ref{sec:distributed_change_detection_isolation_procedures},
we propose distributed detection/isolation procedures $\sf{MAX}$,
$\sf{HALL}$ and $\sf{ALL}$ and analyse their false alarm ($\mathsf{ARL2FA}$), false
isolation ($\mathsf{PFI}$) and the detection delay ($\mathsf{SADD}$) properties.
\section{Distributed Change Detection/Isolation Procedures}
\label{sec:distributed_change_detection_isolation_procedures}
In this section, we study the procedures $\sf{MAX}$ and $\sf{ALL}$ for
change detection/isolation in a distributed setting. Also, we propose a
distributed detection procedure ``${\sf HALL}$,'' and
analyse the $\mathsf{SADD}$, the $\mathsf{ARL2FA}$, and the $\mathsf{PFI}$ performance.
\subsection{The $\sf{MAX}$ Procedure}
Tartakovsky and Veeravalli proposed a decentralised procedure $\sf{MAX}$
for a collocated scenario in
\cite{stat-sig-proc.tartakovsky-veeravalli03quickest-change-detection}.
We extend the $\sf{MAX}$ procedure to a large $\mathsf{WSN}$ under the $\mathsf{ARL2FA}$
and $\mathsf{PFI}$ constraints. Recalling Section~\ref{sec:system_model}, each
sensor node $i$ employs $\mathsf{CUSUM}$ for local change detection between pdfs
$f_0$ and $f_1(\cdot;r_d)$. Let $\tau^{(i)}$ be the random time at
which the $\mathsf{CUSUM}$ statistic of sensor node $i$ crosses the threshold
$c$ for the first time. At each time $k$, the local decision of sensor
node $i$, $D_k^{(i)}$ is defined as
\begin{eqnarray*}
D_k^{(i)}
& := & \left\{
\begin{array}{ll}
0, & \text{for} \ k < \tau^{(i)}\\
1, & \text{for} \ k \geqslant \tau^{(i)}.
\end{array}
\right.
\end{eqnarray*}
The global decision rule $\tau^{\sf{MAX}}$ declares an alarm at the
earliest time slot $k$ at which all sensor nodes $j \in \mathcal{N}_i$
for some $i=1,2,\cdots,N$ have crossed the threshold $c$. Thus,
\begin{eqnarray*}
\tau^{\mathsf{MAX},(\mathcal{N}_i)}
& := & \inf\left\{k: D_k^{(j)} = 1, \ \forall j \in \mathcal{N}_i\right\}
\ = \ \max \left\{\tau^{(j)} : j \in \mathcal{N}_i \right\}\\
\tau^{\mathsf{MAX}} & := & \min\left\{\tau^{\mathsf{MAX},(\mathcal{N}_i)}: 1 \leqslant i \leqslant N \right\}.
\end{eqnarray*}
i.e., the {\sf MAX} procedure declares an alarm at the earliest time
instant when the $\mathsf{CUSUM}$ statistic of all the sensor nodes ${\cal
N}_i$ corresponding to hypothesis ${\bf H}_{T,i}$ of some $i$ have crossed the
threshold at least once.
The isolation rule is
$L(\tau) = \arg\min_{1\leq i\leq N} \{\tau^{\sf MAX, ({\cal N}_i)}\}$,
i.e., to declare that the event has occurred in the
influence region ${\cal B}_{L(\tau)} = \mathcal{B}(\mathcal{N}_{L(\tau)})$
corresponding to the set of sensors $\mathcal{N}_{L(\tau)}$ that raised the alarm.
\subsection{$\sf{ALL}$ Procedure}
Mei \cite{stat-sig-proc.mei05information-bounds}, and
Tartakovsky and Kim \cite{stat-sig-proc.tartakovsky-kim06decentralized},
proposed a decentralised procedure $\sf{ALL}$, again for a collocated
network. We extend the $\sf{ALL}$ procedure to a large extent network
under the $\mathsf{ARL2FA}$ and the $\mathsf{PFI}$ constraints. Here, each sensor node $i$
employs $\mathsf{CUSUM}$ for local change detection between pdfs $f_0$ and
$f_1(\cdot;r_d)$. Let $C_k^{(i)}$ be the $\mathsf{CUSUM}$ statistic of sensor
node $i$ at time $k$. {\em The $\mathsf{CUSUM}$ in the sensor nodes is allowed
to run freely even after crossing the threshold $c$}. Here, the local
decision of sensor node $i$ is
\begin{eqnarray*}
D_k^{(i)} & := & \left\{
\begin{array}{ll}
0, & \text{if} \ C_k^{(i)} < c\\
1, & \text{if} \ C_k^{(i)} \geqslant c.
\end{array}
\right.
\end{eqnarray*}
The global decision rule $\tau^{\sf{ALL}}$ declares an alarm at
the earliest time slot $k$ at which the local decision of all the
sensor nodes corresponding to a set $\mathcal{N}_i$, for some
$i=1,2,\cdots,N$, are 1, i.e.,
\begin{eqnarray*}
\tau^{\mathsf{ALL},(\mathcal{N}_i)}
& := & \inf\left\{k: D_k^{(j)} = 1, \ \forall j \in \mathcal{N}_i\right\}
\ = \ \inf \left\{ k: C_k^{(j)} \geqslant c, \forall j \in \mathcal{N}_i \right\}\\
\tau^{\mathsf{ALL}} & := & \min\left\{\tau^{\mathsf{ALL},(\mathcal{N}_i)}: 1 \leqslant i \leqslant N \right\}.
\end{eqnarray*}
The isolation rule is
$L(\tau) \ = \ \arg\min_{1\leq i\leq N} \{\tau^{\sf ALL, ({\cal
N}_i)}\}$,
i.e., to declare that the event has occurred in the
influence region ${\cal B}_{L(\tau)} = \mathcal{B}(\mathcal{N}_{L(\tau)})$
corresponding to the set of sensors $\mathcal{N}_{L(\tau)}$ that raised the alarm.
\subsection{$\sf{HALL}$ Procedure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5in]{single_cusum}
\caption{{\sf ALL} and {\sf HALL}: Evolution of $\mathsf{CUSUM}$ statistic
$C_k^{(i)}$ of node $i$ plotted vs.\ $k$.
Note that at time $k = V_j^{(i)}$, $R_j^{(i)}$ is the excess above the threshold.
}
\label{fig:single_cusum}
\end{center}
\end{figure}
Motivated by ${\sf ALL}$, and by the fact that sensor noise can make the
$\mathsf{CUSUM}$ statistic fluctuate around the threshold, we propose a local
decision rule which is 0 when the $\mathsf{CUSUM}$ statistic has returned to zero
and has not yet crossed the threshold again, and is 1 otherwise. We
explain the $\sf{HALL}$ procedure below.
\noindent
The following discussion is illustrated in Fig.~\ref{fig:single_cusum}.
Each sensor node $i$ computes a $\mathsf{CUSUM}$ statistic $C_k^{(i)}$ based on
the LLR of its own observations between the pdfs $f_1(\cdot;r_d)$ and
$f_0$. Define $U_0^{(i)} := 0$. Define $V_1^{(i)}$ as the time at which
$C_k^{(i)}$ crosses the threshold $c$ (for the first time) as:
\begin{eqnarray*}
V_1^{(i)} := \inf \left\{k: C_k^{(i)} \geqslant c\right\}
\end{eqnarray*}
(see Fig.~\ref{fig:single_cusum}, where the ``overshoots'' $R_j^{(i)}$, at
$V_j^{(i)}$, are also shown). Note that $\inf\emptyset := \infty$.
Next define
\begin{eqnarray*}
U_1^{(i)} := \inf \left\{k > V_1^{(i)}: C_k^{(i)} = 0\right\}.
\end{eqnarray*}
Now starting with $U_1^{(i)}$, we can recursively define $V_2^{(i)},U_2^{(i)}$
etc. in the obvious manner (see Fig.~\ref{fig:single_cusum}).
Each node $i$ computes the local decision $D_k^{(i)}$ based on the
$\mathsf{CUSUM}$
statistic $C_k^{(i)}$ as follows:
\begin{eqnarray}
\label{eqn:ld}
D_k^{(i)} & = & \left\{
\begin{array}{lll}
1, & \text{if} \ V^{(i)}_j \leqslant k < U^{(i)}_j \ \text{ for some } j\\
0, & \text{otherwise.}
\end{array}
\right.
\end{eqnarray}
\vspace{0mm}
The global decision rule
is a stopping time $\tau^{\sf{HALL}}$ defined
as the earliest time slot $k$ at which all the sensor nodes in a region have a
local decision $1$, i.e.,
\begin{eqnarray*}
\tau^{\mathsf{HALL},(\mathcal{N}_i)} & := & \inf\left\{k: D_k^{(j)} = 1, \ \forall j \in \mathcal{N}_i\right\},\\
\tau^{\mathsf{HALL}} & := & \min\left\{
\tau^{\mathsf{HALL},(\mathcal{N}_i)} : 1 \leqslant i \leqslant N\right\}.
\end{eqnarray*}
The isolation rule is
$L(\tau) = \arg\min_{1\leq i\leq N} \{\tau^{\sf HALL, ({\cal N}_i)}\}$,
i.e., to declare that the event has occurred in the
influence region ${\cal B}_{L(\tau)} = \mathcal{B}(\mathcal{N}_{L(\tau)})$
corresponding to the set of sensors $\mathcal{N}_{L(\tau)}$ that raised the alarm.
\noindent
{\bf Remark:}
The procedures $\sf{HALL}, \sf{MAX}$ and $\sf{ALL}$ differ only in their
local decision rule; the global decision rule as a function of
$\{D_k^{(i)}\}$ is the same for $\sf{HALL}, \sf{MAX}$ and $\sf{ALL}$.
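A compact sketch of the three local decision rules acting on a common $\mathsf{CUSUM}$ path, together with the common global rule (illustrative only; the function and variable names are ours):

```python
# Illustrative sketch (names ours) of the CUSUM recursion and the three
# local decision rules; the global rule, common to all three, stops at
# the first slot at which every sensor of some set N_i decides 1.

def cusum_path(llrs):
    """C_k = max(0, C_{k-1} + Z_k), with C_0 = 0, run freely."""
    stat, path = 0.0, []
    for z in llrs:
        stat = max(0.0, stat + z)
        path.append(stat)
    return path

def local_decisions(path, c, rule):
    decisions, crossed, active = [], False, False
    for stat in path:
        if rule == "MAX":                 # latched: 1 for all k >= tau^(i)
            crossed = crossed or stat >= c
            decisions.append(int(crossed))
        elif rule == "ALL":               # memoryless: 1 iff C_k >= c
            decisions.append(int(stat >= c))
        elif rule == "HALL":              # 1 between a crossing (V_j) and
            if stat >= c:                 # the next return to zero (U_j)
                active = True
            elif stat == 0.0:
                active = False
            decisions.append(int(active))
    return decisions

def global_stop(decision_rows, detection_sets):
    """Earliest slot k (1-indexed) at which all sensors of some N_i
    decide 1, with the index of that set; None if no alarm occurs."""
    horizon = len(next(iter(decision_rows.values())))
    for k in range(horizon):
        for i, sensors in enumerate(detection_sets):
            if all(decision_rows[s][k] == 1 for s in sensors):
                return k + 1, i
    return None

# A path that crosses c = 3 and then returns to zero separates the rules:
path = cusum_path([4.0, -5.0, 1.0, 1.0])   # -> [4.0, 0.0, 1.0, 2.0]
# MAX:  [1, 1, 1, 1]   ALL: [1, 0, 0, 0]   HALL: [1, 0, 0, 0]
```

The convention $\inf\emptyset := \infty$ corresponds to \texttt{global\_stop} returning \texttt{None} here.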
For the distributed procedures $\sf{MAX}$, $\sf{ALL}$, and $\sf{HALL}$,
we analyse the $\mathsf{ARL2FA}$ in Section \ref{sec:false_alarm_analysis}, the
$\mathsf{PFI}$ in Section \ref{sec:average_time_to_false_isolation}, and the
$\mathsf{SADD}$ performance in Section \ref{sec:average_detection_delay}.
\subsection{Average Run Length to False Alarm ($\mathsf{ARL2FA}$)}
\label{sec:false_alarm_analysis}
From the previous sections, we see that the stopping time of any
${\sf procedure}$ ($\mathsf{MAX}$, $\mathsf{ALL}$, or ${\sf HALL}$) is the minimum of the stopping times corresponding to each
${\cal N}_i$, i.e.,
\begin{eqnarray*}
\tau^{\mathsf{procedure}} & := &
\min\left\{\tau^{\mathsf{procedure}, (\mathcal{N}_i)} : 1 \leqslant i \leqslant N\right\}.
\end{eqnarray*}
Under the null hypothesis ${\bf H}_0$, the $\mathsf{CUSUM}$ statistics $C_k^{(s)}$s
of sensors $s \in {\cal N}_i$ are driven by independent noise processes,
and hence, $C_k^{(s)}$s are independent. But, there can be a sensor
that is common to two different ${\cal N}_i$s, and hence,
$\tau^{\mathsf{procedure}, (\mathcal{N}_i)}$s, in general, are not independent.
We provide asymptotic lower bounds for the $\mathsf{ARL2FA}$ for
$\sf{MAX}$,
$\sf{HALL}$, and $\sf{ALL}$, in the following theorem.
\begin{theorem}
\label{thm:tfa}
For local $\mathsf{CUSUM}$ threshold $c$,
\begin{eqnarray}
\mathsf{ARL2FA}(\tau^{\sf MAX}) & \geqslant & \exp\left(a_{\sf MAX} c\right) \cdot (1+o(1)) \\
\mathsf{ARL2FA}(\tau^{\sf HALL}) & \geqslant & \exp\left(a_{\sf HALL} c\right) \cdot (1+o(1)) \\
\mathsf{ARL2FA}(\tau^{\sf ALL}) & \geqslant & \exp\left(a_{\sf ALL} c\right) \cdot (1+o(1))
\end{eqnarray}
($o(1) \to 0$ as $c \to \infty$), where,
for any arbitrarily small $\delta > 0$,
$a_{\sf MAX} = a_{\sf HALL} = 1- \delta$ and $a_{\sf ALL} = m - \delta$,
with $m = \min\left\{\left|{\cal N}_i \setminus \bigcup_{j \in {\cal I},\, j \neq
i} {\cal N}_j\right| : i \in {\cal I}\right\}$,
where ${\cal I}$ is the set of indices of the detection sets that are
minimal in the partial order of set inclusion among the detection
sets.
\end{theorem}
\begin{proof}
See Appendix~\ref{app:tfa}.
\end{proof}
Thus, for each of the procedures, for a given $\mathsf{ARL2FA}$ requirement of $\gamma$,
it is sufficient to choose the threshold $c$ as
\begin{eqnarray}
\label{eqn:c_for_gamma}
c & = & \frac{\ln\gamma}{a_{\sf procedure}} (1+o(1)), \ \ \text{as $\gamma
\to \infty$}.
\end{eqnarray}
\vspace{-3mm}
\subsection{Probability of False Isolation ($\mathsf{PFI}$)}
\label{sec:average_time_to_false_isolation}
A false isolation occurs when the hypothesis ${\bf H}_{T,i}$ is true for
some $i$ and the hypothesis ${\bf H}_{T,j} \neq {\bf H}_{T,i}$ is declared to be
true at the time of alarm, {\em and} the event does not lie in the region
$\mathcal{B}(\mathcal{N}_j)$. The following theorem provides asymptotic
upper bounds for the $\mathsf{PFI}$ for each of the procedures $\mathsf{MAX}$,
$\mathsf{ALL}$, and $\mathsf{HALL}$.
\begin{theorem}
\label{thm:pfi}
For local $\mathsf{CUSUM}$ threshold $c$,
\begin{eqnarray}
{\sf PFI}(\tau^{\mathsf{MAX}}) &\leq&
\frac{\exp\left(-b_{\sf MAX} c\right)}{B_{\sf MAX}} \cdot (1+o(1))\\
{\sf PFI}(\tau^{\mathsf{HALL}}) &\leq&
\frac{\exp\left(-b_{\sf HALL} c\right)}{B_{\sf HALL}} \cdot (1+o(1))\\
{\sf PFI}(\tau^{\mathsf{ALL}}) &\leq&
\frac{\exp\left(-b_{\sf ALL} c\right)}{B_{\sf ALL}} \cdot (1+o(1)),
\end{eqnarray}
where $o(1) \to 0$ as $c \to \infty$, and $
b_{\sf MAX} = b_{\sf HALL} =
\frac{m \xi \underline{\omega}_0}{2}-\frac{1+\bar{m}}{n}$,
$b_{\sf ALL}= \frac{m \xi \underline{\omega}_0}{2}-\frac{1}{n}$,
$\underline{\omega}_0 = 1$ for the Boolean sensing model,
$\xi$ is 2 for the Boolean sensing model and is 1 for the path--loss sensing
model,
$m = \min\left\{
|{\cal N}_j \setminus {\cal N}(\ell_e)| :
{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)} \right\}$
and
\noindent
$\bar{m} = \max\left\{
|{\cal N}_j \setminus {\cal N}(\ell_e)| :
{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)} \right\}$,
and $B_{\sf MAX}$,
$B_{\sf HALL}$, and $B_{\sf ALL}$ are positive constants.
\end{theorem}
\begin{proof}
See Appendix~\ref{app:pfi}.
\end{proof}
Thus, for a given $\mathsf{PFI}$ requirement of $\alpha$, the threshold $c$
should satisfy
\begin{eqnarray}
\label{eqn:c_for_alpha}
c \ = \ \frac{-\ln B_{\sf procedure} -\ln\alpha}{b_{\sf procedure}} (1+o(1))
\ = \ \frac{-\ln\alpha}{b_{\sf procedure}} (1+o(1)), \ \ \ \text{as $\alpha \to 0$}.
\end{eqnarray}
\subsection{Supremum Average Detection Delay (\textsf{SADD})}
\label{sec:average_detection_delay}
In this section, we analyse the $\mathsf{SADD}$ performance of the distributed
detection/isolation procedures. We observe that for any sample path of
the observation process, for the same threshold $c$, the ${\mathsf{MAX}}$
rule raises an alarm first, followed by the ${\mathsf{HALL}}$ rule, and then
by the ${\mathsf{ALL}}$ rule. This ordering is due to the following reason.
For each sensor node $s$, let $\tau^{(s)}$
be the {\em first time instant} at
which the $\mathsf{CUSUM}$ statistic $C_k^{(s)}$ crosses the threshold $c$
(denoted by $V_1^{(i)}$ in Figure~\ref{fig:single_cusum}).
Before time $\tau^{(s)}$, the local decision is 0 for all
the procedures, ${\sf MAX}$, ${\sf ALL}$, and ${\sf HALL}$. For
${\mathsf{MAX}}$, for all $k \geq \tau^{(s)}$, the local decision
$D_{k}^{(s)} = 1$. Thus, the stopping time of $\mathsf{MAX}$ is at least as
early as that of ${\mathsf{HALL}}$ and $\mathsf{ALL}$. The local decision of
$\mathsf{ALL}$ is 1 ($D_k^{(s)} = 1$) only at those times $k$ for which $C_k^{(s)}
\geq c$. However, even when $C_k^{(s)} < c$, the local decision of
$\mathsf{HALL}$ is 1 if $V_j^{(s)} \leq k < U_j^{(s)}$
(see Figure~\ref{fig:single_cusum}) for some $j$. Thus, the local decisions of ${\sf
MAX}$, ${\sf HALL}$, and ${\sf ALL}$ are ordered as, for all $k \geq 1$,
$ D_k^{(s)}(\mathsf{MAX}) \geqslant
D_k^{(s)}(\mathsf{HALL}) \geqslant
D_k^{(s)}(\mathsf{ALL})$,
and hence,
$\tau^{\mathsf{MAX},({\cal N}_i)} \leqslant
\tau^{\mathsf{HALL},({\cal N}_i)} \leqslant
\tau^{\mathsf{ALL},({\cal N}_i)}$.
Each of the stopping times
$\tau^{\sf MAX}$, $\tau^{\sf HALL}$, and $\tau^{\sf ALL}$ is the minimum of the stopping times
corresponding to the sets of sensors $\{\mathcal{N}_i : i =
1,2,\cdots,N\}$, i.e.,
\begin{eqnarray*}
\tau^{\mathsf{procedure}} & = &
\min\{\tau^{\mathsf{procedure},(\mathcal{N}_i)}:i=1,2,\cdots,N\}
\end{eqnarray*}
where ``$\mathsf{procedure}$'' can be $\mathsf{MAX}$ or $\mathsf{HALL}$ or $\mathsf{ALL}$.
Hence, we have
\begin{eqnarray}
\label{o6_eqn:ordered_stopping}
\tau^{\mathsf{MAX}} \ \leqslant \ \tau^{\mathsf{HALL}} \ \leqslant \ \tau^{\mathsf{ALL}}.
\end{eqnarray}
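The pointwise ordering of the local decisions can be verified on a toy sample path; the LLR increments and the threshold $c = 2$ below are illustrative choices, and the HALL reset-at-zero behaviour follows the description of $V_j^{(s)}$ and $U_j^{(s)}$ above.

```python
def local_decisions(llrs, c):
    """Local decisions of MAX, HALL and ALL along one CUSUM path.

    MAX:  latches to 1 at the first up-crossing of c and stays 1.
    HALL: 1 from an up-crossing of c until the statistic next hits 0.
    ALL:  1 exactly when the current statistic is >= c.
    """
    C, crossed_ever, in_alarm = 0.0, False, False
    d_max, d_hall, d_all = [], [], []
    for z in llrs:
        C = max(0.0, C + z)
        if C >= c:
            crossed_ever, in_alarm = True, True
        if C == 0.0:
            in_alarm = False          # HALL resets when C_k returns to 0
        d_max.append(int(crossed_ever))
        d_hall.append(int(in_alarm))
        d_all.append(int(C >= c))
    return d_max, d_hall, d_all

d_max, d_hall, d_all = local_decisions([1.5, 1.0, -1.5, -1.5, 1.0], c=2.0)
print(d_max, d_hall, d_all)   # [0, 1, 1, 1, 1] [0, 1, 1, 0, 0] [0, 1, 0, 0, 0]
```

At every $k$, `d_max[k] >= d_hall[k] >= d_all[k]`, which is the ordering used to conclude $\tau^{\mathsf{MAX}} \leqslant \tau^{\mathsf{HALL}} \leqslant \tau^{\mathsf{ALL}}$.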
From \cite{stat-sig-proc.mei05information-bounds}, we see that
\begin{eqnarray}
\sup_{T \geqslant 1} \
{\mathsf E}_T^{(i)}\left[\tau^{\mathsf{ALL},({\cal N}_i)}-T\mid
\tau^{\sf ALL, ({\cal N}_i)} \geq T \right]
&=& \frac{c}{I} \left(1+o(1)\right)
\end{eqnarray}
where $I$ is the
Kullback--Leibler divergence between the post--change and the
pre--change pdfs.
For $\ell_e \in \mathcal{A}_i$, we have
$\forall s \in \mathcal{N}_i, \ d_{e,s} \leqslant r_d$. Also, since
$\tau^{\mathsf{ALL}} \leqslant \tau^{\mathsf{ALL},({\cal N}_i)}$, we have
\begin{eqnarray}
\label{eqn:sadd_b}
\sup_{\ell_e \in {\cal A}_i} \
\sup_{T \geqslant 1} \
{\mathsf E}_T^{({\bf d}({\ell_e}))}\left[\tau^{\mathsf{ALL}}-T\mid \tau^{\sf ALL} \geq T \right]
& \leqslant &
\sup_{\ell_e \in {\cal A}_i} \
\sup_{T \geqslant 1} \
{\mathsf E}_T^{({\bf d}({\ell_e}))}\left[\tau^{\mathsf{ALL},({\cal N}_i)}-T
\mid \tau^{\sf ALL} \geq T \right]
\end{eqnarray}
From Appendix~\ref{app:sadd}, Eqn.~\eqref{eqn:sadd_b} becomes,
\begin{eqnarray}
\sup_{\ell_e \in {\cal A}_i} \
\sup_{T \geqslant 1} \
{\mathsf E}_T^{({\bf d}({\ell_e}))}\left[\tau^{\mathsf{ALL}}-T\mid \tau^{\sf ALL} \geq T \right]
& \leqslant &
\sup_{\ell_e \in {\cal A}_i} \
\sup_{T \geqslant 1} \
{\mathsf E}_T^{({\bf d}({\ell_e}))}\left[\tau^{\mathsf{ALL},({\cal N}_i)}-T
\mid \tau^{\sf ALL, ({\cal N}_i)} \geq T \right]\nonumber \\
&=& \frac{c}{\mathsf{KL}(f_1(\cdot;r_d),f_0)}(1+o(1))
\end{eqnarray}
From the above equation, and from Eqn.~\eqref{o6_eqn:ordered_stopping}, we have
\begin{eqnarray}
\label{eqn:sadd_max_all_hall}
\mathsf{SADD}(\tau^{\mathsf{MAX}}) \ \leqslant \ \mathsf{SADD}(\tau^{\mathsf{HALL}}) \ \leqslant \ \mathsf{SADD}(\tau^{\mathsf{ALL}}) \
\leqslant \ \frac{c}{\mathsf{KL}(f_1(\cdot;r_d),f_0)}(1+o(1)), \ \text{as} \
c\to\infty.
\end{eqnarray}
\noindent
{\bf Remark:}
Recall from Section~\ref{sec:detection-range} that $\mu_1 =
h_e \rho(r_d)$. We now see that $\mu_1$ governs the detection
delay performance, and $\mu_1$ can be chosen such that a requirement on $\mathsf{SADD}$
is met. Thus, to achieve a requirement on $\mathsf{SADD}$, we need to choose
$r_d$ appropriately. A small value of $r_d$ gives a large
$\mu_1$, and hence a smaller detection delay,
compared to a large value of $r_d$. But, a small $r_d$ requires more
sensors to detection--cover the $\mathsf{ROI}$.
In the next subsection, we discuss the asymptotic minimax delay
optimality of the distributed procedures in relation to
Theorem~\ref{thm:niki}.
\subsection{Asymptotic Upper Bound on $\mathsf{SADD}$}
\label{sec:asymp_order_optimality}
For any change detection/isolation procedure to achieve an
$\mathsf{ARL2FA}$ requirement of $\gamma$ and a $\mathsf{PFI}$ requirement of
$\alpha$, a threshold $c$ is chosen such that it satisfies
Eqns.~\eqref{eqn:c_for_gamma} and \eqref{eqn:c_for_alpha}, i.e.,
\begin{eqnarray}
c &=& \max\left\{\frac{\ln\gamma}{a_{\sf procedure}},
\frac{-\ln\alpha}{b_{\sf procedure}}
\right\} (1+o(1)).
\end{eqnarray}
Therefore, from Eqn.~\eqref{eqn:sadd_max_all_hall}, the $\mathsf{SADD}$ is given
by
\begin{eqnarray}
\label{eqn:order-optimal}
\mathsf{SADD}(\tau^{\mathsf{procedure}}) & \leqslant &
\frac{1}{\mathsf{KL}(f_1(\cdot;r_d),f_0)}
\cdot \max\left\{\frac{\ln\gamma}{a_{\sf procedure}},
\frac{-\ln\alpha}{b_{\sf procedure}}
\right\} (1+o(1)),
\end{eqnarray}
where $o(1) \to 0$ as $\min\{\gamma, \frac{1}{\alpha}\} \to \infty$.
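For illustration, the leading term of this bound can be evaluated for hypothetical requirement values $\gamma = 10^5$ and $\alpha = 0.05$, with $a_{\sf procedure} = b_{\sf procedure} = 1$ and $\mathsf{KL}(f_1(\cdot;r_d),f_0) = 1/2$ (the Gaussian case with $\mu_1 = \sigma = 1$); the $(1+o(1))$ factor is ignored.

```python
import math

def sadd_upper_bound(gamma, alpha, a_proc, b_proc, kl):
    """Leading term of the SADD upper bound:
    max(ln(gamma)/a, -ln(alpha)/b) / KL, ignoring the (1+o(1)) factor."""
    c = max(math.log(gamma) / a_proc, -math.log(alpha) / b_proc)
    return c, c / kl

c, bound = sadd_upper_bound(gamma=1e5, alpha=0.05, a_proc=1.0, b_proc=1.0, kl=0.5)
print(round(c, 3), round(bound, 3))   # 11.513 23.026
```

With these numbers the $\mathsf{ARL2FA}$ requirement dominates the $\mathsf{PFI}$ requirement, since $\ln\gamma > -\ln\alpha$.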
Note that as $r_d$ decreases,
$\mathsf{KL}(f_1(\cdot;r_d),f_0) =
\frac{h_e^2\rho(r_d)^2}{2\sigma^2}$ increases.
Thus, to achieve a smaller detection delay, the detection range
$r_d$ can be decreased, and the number of sensors $n$ can be
increased to cover the $\mathsf{ROI}$.
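As a numerical sanity check of the Gaussian KL expression (a sketch with hypothetical parameters $h_e = \rho(r_d) = \sigma = 1$, so $\mu_1 = 1$), the closed form $\mathsf{KL} = \mu_1^2/2\sigma^2$ can be compared against direct numerical integration of the divergence.

```python
import math

def kl_gaussian_closed_form(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, sigma^2) ) = mu^2 / (2 sigma^2)
    return mu**2 / (2 * sigma**2)

def kl_gaussian_numeric(mu, sigma, lo=-12.0, hi=14.0, n=200_000):
    # Trapezoidal integration of f1(x) * log(f1(x)/f0(x)).
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        f1 = math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        llr = (x * mu - mu**2 / 2) / sigma**2   # log f1/f0 for equal variances
        w = 0.5 if i in (0, n) else 1.0
        total += w * f1 * llr
    return total * h

mu1 = 1.0   # mu1 = h_e * rho(r_d), hypothetical values h_e = rho(r_d) = 1
print(kl_gaussian_closed_form(mu1, 1.0))                  # 0.5
print(abs(kl_gaussian_numeric(mu1, 1.0) - 0.5) < 1e-6)    # True
```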
We can compare the asymptotic $\mathsf{SADD}$ performance of the distributed
procedures $\sf{HALL}$, $\sf{MAX}$ and $\sf{ALL}$ against
Theorem~\ref{thm:niki} for the Boolean sensing model.
For Gaussian pdfs $f_0$ and $f_1$, the KL divergence between the
hypotheses ${\bf H}_{T,i}$ and ${\bf H}_{T,j}$ is given by
\begin{eqnarray*}
\mathsf{KL}(g_i,g_j)
& = & \int \ln\left(\frac{
\prod_{s \in {\cal N}_i} f_1(x^{(s)})
\prod_{s' \notin {\cal N}_i} f_0(x^{(s')}) }{
{ \prod_{s \in {\cal N}_j} f_1(x^{(s)})
\prod_{s' \notin {\cal N}_j} f_0(x^{(s')}) }
}\right) \ \prod_{s \in {\cal N}_i} f_1(x^{(s)})
\prod_{s' \notin {\cal N}_i} f_0(x^{(s')}) \ d{\bf x}\nonumber \\
& = & \int \left(\ln\left(
\prod_{s \in {\cal N}_i}
\frac{f_1(x^{(s)})}{f_0(x^{(s)})}\right)
-
\ln\left(
{ \prod_{s \in {\cal N}_j } \frac{f_1(x^{(s)})}{f_0(x^{(s)})} }
\right) \right) \ \prod_{s \in {\cal N}_i} f_1(x^{(s)}) \prod_{s' \notin {\cal N}_i} f_0(x^{(s')}) \ d{\bf x} \nonumber \\
& = & \sum_{s \in {\cal N}_i} \mathsf{KL}(f_1,f_0)
- \sum_{s \in {\cal N}_j \cap {\cal N}_i} \mathsf{KL}(f_1,f_0)
+ \sum_{s \in {\cal N}_j\setminus {\cal N}_i} \mathsf{KL}(f_1,f_0)\nonumber \\
&=& |{\cal N}_i~\Delta~{\cal N}_j| \ \mathsf{KL}(f_1,f_0)
\end{eqnarray*}
where the operator $\Delta$ represents the symmetric difference between
the sets. Thus, from Theorem~\ref{thm:niki} for Gaussian $f_0$ and
$f_1$, we have
\begin{eqnarray*}
\mathsf{SADD}(\tau^{*}) &\leq&
\frac{1}{\mathsf{KL}(f_1,f_0)}
\cdot \max\left\{\frac{\ln\gamma}{a^*},
\frac{-\ln\alpha}{b^*}
\right\} (1+o(1)), \nonumber \\
\text{where} \ a^* &=& \min_{1 \leqslant i \leqslant N} |{\cal N}_i|, \nonumber \\
\text{and} \ b^* &=& \min_{\substack{1 \leqslant i \leqslant N\\
1 \leqslant j \leqslant N, \ {\cal N}_j \not\subseteq {\cal N}_i}} |{\cal
N}_i\Delta{\cal N}_j|.
\end{eqnarray*}
The $\mathsf{SADD}$ performance of the distributed ${\sf
procedure}$ with the Boolean sensing model is
\begin{eqnarray}
\label{eqn:order-optimal-1}
\mathsf{SADD}(\tau^{\mathsf{procedure}}) & \leqslant &
\frac{1}{\mathsf{KL}(f_1,f_0)}
\cdot \max\left\{\frac{\ln\gamma}{a_{\sf procedure}},
\frac{-\ln\alpha}{b_{\sf procedure}}
\right\} (1+o(1)),
\end{eqnarray}
where $o(1) \to 0$ as $\min\{\gamma, \frac{1}{\alpha}\} \to \infty$.
Thus, the asymptotically optimal upper bound on $\mathsf{SADD}$
(which corresponds to the optimum centralised procedure $\tau^*$) and that of the distributed
procedures $\mathsf{ALL}$, ${\sf{HALL}}$, and ${\sf{MAX}}$ scale in the
same way as
$\ln\gamma/\mathsf{KL}(f_1,f_0)$ and $-\ln\alpha/\mathsf{KL}(f_1,f_0)$.
\section{Numerical Results}
\label{sec:numerical_results}
We consider a deployment of 7 nodes with the detection range $r_d
= 1$, in a hexagonal $\mathsf{ROI}$ (see Fig.~\ref{fig:sensor_placement_7nodes})
such that we get $N = 12$ detection subregions, and ${\cal N}_1 =
\{1,3,4,6\}$, ${\cal N}_2 = \{1,3,4\}$, ${\cal N}_3 = \{1,2,3,4\}$,
${\cal N}_4 = \{1,2,4\}$, ${\cal N}_5 = \{1,2,4,5\}$, ${\cal N}_6 =
\{2,4,5\}$, ${\cal N}_7 = \{2,4,5,7\}$, ${\cal N}_8 = \{4,5,7\}$, ${\cal
N}_9 = \{4,5,6,7\}$, ${\cal N}_{10} = \{4,6,7\}$, ${\cal N}_{11} =
\{3,4,6,7\}$, and ${\cal N}_{12} = \{3,4,6\}$. The pre--change pdf
considered is $f_0 \sim \mathcal{N}(0,1)$, and the
influence range is $r_i=1.5$.
We compute the $\mathsf{SADD}$, the $\mathsf{ARL2FA}$ and the $\mathsf{PFI}$ performance of
$\sf{MAX}$, $\sf{HALL}$, $\sf{ALL}$, and Nikiforov's procedure
(\cite{nikiforov03lower-bound-for-det-isolation}) for the Boolean
sensing model with $f_1 \sim \mathcal{N}(1,1)$, and plot the $\mathsf{SADD}$ vs
$\log(\mathsf{ARL2FA})$ performance of the change
detection/isolation procedures in Fig.~\ref{fig:bool} for $\mathsf{PFI} \leq 5 \times 10^{-2}$. The
local $\mathsf{CUSUM}$ threshold $c$ that yields the target $\mathsf{ARL2FA}$ and other
simulation parameters and results are tabulated in
Table~\ref{tab:bool}. To obtain the $\mathsf{SADD}$, the event is assumed to
occur at time 1, which corresponds to the
maximum mean delay (see \cite{pollak85},
\cite{stat-sig-proc.lorden71procedures-change-distribution}).
We observe from Fig.~\ref{fig:bool} that the $\mathsf{SADD}$ performance of
$\mathsf{MAX}$ is the worst and that of Nikiforov's procedure is the best.
Also, we note that the performance of the distributed procedures,
${\sf ALL}$ and ${\sf HALL}$, are very close to that of the optimal
centralised procedure. For example, for a
requirement of $\mathsf{ARL2FA} = 10^5$ (and $\mathsf{PFI} \leq 5 \times 10^{-2}$), we
observe from Fig.~\ref{fig:bool} that $\mathsf{SADD}(\tau^{\mathsf{MAX}}) = 26.43$,
$\mathsf{SADD}(\tau^{\mathsf{HALL}}) = 13.78$,
$\mathsf{SADD}(\tau^{\mathsf{ALL}}) = 12.20$, and
$\mathsf{SADD}(\tau^{*}) = 11.28$. Since $\mathsf{MAX}$ does not make use of
the dynamics of $C_k^{(s)}$ beyond $\tau^{(s)}$, its $\mathsf{SADD}$ vs $\mathsf{ARL2FA}$
performance is poor. On the other hand, $\mathsf{ALL}$ and $\mathsf{HALL}$
make use of $C_k^{(s)}$ for all $k$ and hence, give a better
performance.
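As a check on the isolation structure of this deployment, the constants $a^* = \min_i |{\cal N}_i|$ and $b^* = \min\{|{\cal N}_i\,\Delta\,{\cal N}_j| : {\cal N}_j \not\subseteq {\cal N}_i\}$ from the previous section can be computed directly from the listed sensor sets (a small sketch, not part of the reported simulation code).

```python
# The 12 detection sensor sets of the 7-node deployment listed above.
N = [
    {1, 3, 4, 6}, {1, 3, 4}, {1, 2, 3, 4}, {1, 2, 4},
    {1, 2, 4, 5}, {2, 4, 5}, {2, 4, 5, 7}, {4, 5, 7},
    {4, 5, 6, 7}, {4, 6, 7}, {3, 4, 6, 7}, {3, 4, 6},
]

a_star = min(len(s) for s in N)

# b* = min |N_i symmetric-difference N_j| over pairs with N_j not a subset of N_i.
b_star = min(
    len(Ni ^ Nj)
    for Ni in N for Nj in N
    if not Nj <= Ni
)

print(a_star, b_star)   # 3 1
```

Here $b^* = 1$ because several size-3 sets differ from their size-4 supersets by a single sensor (e.g., ${\cal N}_2$ and ${\cal N}_1$ differ only in sensor 6).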
\begin{figure}
\centerline{\includegraphics[width = 55mm, height =50mm]{large_extent_wsn}}
\caption{{\bf Sensor nodes placement}: 7 sensor nodes (which are
numbered 1,2,$\cdots$,7) represented by small filled circles are placed
in the hexagonal $\mathsf{ROI}$ ${\cal A}$.
The sensor nodes partition the $\mathsf{ROI}$ into the detection subregions
${\cal A}_1, {\cal A}_2, \cdots, {\cal A}_{12}$ (for both the Boolean
and the power law path loss sensing models). }
\label{fig:sensor_placement_7nodes}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$\mathsf{SADD}$ vs $\mathsf{ARL2FA}$ for the Boolean model]
{
\includegraphics[width=73mm,height=50mm]{bool_sadd_vs_arl2fa}
\label{fig:bool}
}
\hspace{10mm}
\subfigure[$\mathsf{SADD}$ vs $\mathsf{ARL2FA}$ for the square law path loss model]
{
\includegraphics[width=73mm,height=50mm]{path_sadd_vs_arl2fa}
\label{fig:path}
}
\caption{
$\mathsf{SADD}$ versus $\mathsf{ARL2FA}$ (for $\mathsf{PFI} \leq 5 \times 10^{-2}$)
for $\sf{MAX}$, $\sf{HALL}$, $\sf{ALL}$ and
Nikiforov's procedure for the Boolean and the square law path
loss sensing models. In the Boolean sensing model, the
system parameters are $f_0 \sim \mathcal{N}(0,1)$, $f_1 \sim
\mathcal{N}(1,1)$, and in the case of the path loss sensing model, the
parameters are
$f_0 \sim \mathcal{N}(0,1)$, $h_e = 1$, $r_d = 1.0$, $r_i=1.5$.}
\label{fig:sadd_vs_tfa}
\end{figure}
\begin{table} \footnotesize
\begin{center}
\caption{Simulation parameters and results
for the Boolean sensing model for $\mathsf{PFI} \leq 5 \times 10^{-2}$}
\begin{tabular} {|c|l|l|c|r|r|r|r|r|}
\hline
Detection/ & No. of & Threshold & & \multicolumn{2}{|c|}{99\% Confidence interval}& & \multicolumn{2}{|c|}{99\% Confidence interval} \\\cline{5-6}\cline{8-9}
Isolation & MC & $c$ & $\mathsf{ARL2FA}$ & $\mathsf{ARL2FA}_{\text{lower}}$ & $\mathsf{ARL2FA}_{\text{upper}}$& $\mathsf{SADD}$ & $\mathsf{SADD}_{\text{lower}}$ & $\mathsf{SADD}_{\text{upper}}$ \\
procedure & runs & & & & & & & \\ \hline
\multirow{5}{*}{{\sf MAX}}
& $10^4$ & 2.71 & $10^2$ & 93.69 & 106.61 & 8.77 & 8.45 & 9.09\\ \cline{2-9}
& $10^4$ & 4.93 & $10^3$ & 942.10 & 1065.81 & 14.89 & 14.41 & 15.37\\ \cline{2-9}
& $10^4$ & 7.24 & $10^4$ & 9398.61 & 10640.99 & 21.01 & 20.42 & 21.61\\ \cline{2-9}
& $10^4$ & 9.52 & $10^5$ & 95696.90 & 108008.89& 26.43 & 25.76 & 27.11\\ \hline
\multirow{4}{*}{{\sf HALL}}
& $10^4$ & 1.67 & $10^2$ & 92.67 & 107.58 & 5.96 & 5.72 & 6.20\\ \cline{2-9}
& $10^4$ & 2.69 & $10^3$ & 927.17 & 1085.48 & 8.81 & 8.48 & 9.14\\ \cline{2-9}
& $10^4$ & 3.66 & $10^4$ & 9239.97 & 10826.71 & 11.58 & 11.17 & 11.99\\ \cline{2-9}
& $10^4$ & 4.52 & $10^5$ & 92492.85 & 108389.15& 13.78 & 13.32 & 14.23\\ \hline
\multirow{4}{*}{{\sf ALL}}
& $10^4$ & 2.16 & $10^3$ & 915.94 & 1089.33 & 7.82 & 7.53 & 8.11\\ \cline{2-9}
& $10^4$ & 2.96 & $10^4$ & 9197.23 & 10811.90 & 10.07 & 9.70 & 10.44\\ \cline{2-9}
& $10^4$ & 3.71 & $10^5$ & 92205.45 & 107952.43 & 12.20 & 11.76 & 12.63\\ \hline
\multirow{4}{*}{{\sf Nikiforov}}
& $10^4$ & 2.75 & $10^2$ & 98.30 & 116.32 & 4.75 & 4.52 & 4.98\\ \cline{2-9}
& $10^4$ & 4.50 & $10^3$ & 986.48 & 1048.23 & 7.08 & 6.79 & 7.38\\ \cline{2-9}
& $10^4$ & 6.32 & $10^4$ & 9727.19 & 10261.94 & 9.14 & 9.00 & 9.68\\ \cline{2-9}
& $10^4$ & 8.32 & $10^5$ & 98961.41 & 110415.50 & 11.28 & 11.00 & 12.25\\
\hline
\end{tabular}
\label{tab:bool}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Simulation parameters and results
for the square law path loss sensing model for $\mathsf{PFI} \leq 5 \times
10^{-2}$}
\begin{tabular} {|c|l|l|c|r|r|r|r|r|}
\hline
Detection/ & No. of & Threshold & & \multicolumn{2}{|c|}{99\% Confidence interval}& & \multicolumn{2}{|c|}{99\% Confidence interval} \\\cline{5-6}\cline{8-9}
Isolation & MC & $c$ & $\mathsf{ARL2FA}$ & $\mathsf{ARL2FA}_{\text{lower}}$ & $\mathsf{ARL2FA}_{\text{upper}}$& $\mathsf{SADD}$ & $\mathsf{SADD}_{\text{lower}}$ & $\mathsf{SADD}_{\text{upper}}$ \\
procedure & runs & & & & & & & \\ \hline
\multirow{5}{*}{{\sf MAX}}
& $10^4$ & 2.71 & $10^2$ & 93.69 & 106.61 & 30.74 & 29.31 & 32.17\\ \cline{2-9}
& $10^4$ & 4.93 & $10^3$ & 942.10 & 1065.81 & 79.60 & 75.86 & 83.34\\ \cline{2-9}
& $10^4$ & 7.23 & $10^4$ & 9398.61 & 10640.99 & 169.63 & 161.61 & 177.65\\ \cline{2-9}
& $10^4$ & 9.52 & $10^5$ & 95696.90 & 108008.89 & 301.77 & 286.88 & 316.66\\ \hline
\multirow{4}{*}{{\sf HALL}}
& $10^4$ & 1.67 & $10^2$ & 92.67 & 107.58 & 20.58 & 19.43 & 21.74\\ \cline{2-9}
& $10^4$ & 2.69 & $10^3$ & 927.17 & 1085.48 & 40.56 & 38.24 & 42.88\\ \cline{2-9}
& $10^4$ & 3.66 & $10^4$ & 9239.97 & 10826.71 & 66.45 & 62.57 & 70.33\\ \cline{2-9}
& $10^4$ & 4.52 & $10^5$ & 92492.85 & 108389.15 & 96.93 & 91.03 & 102.82\\ \hline
\multirow{4}{*}{{\sf ALL}}
& $10^4$ & 1.33 & $10^2$ & 92.24 & 107.79 & 20.19 & 19.06&21.32\\ \cline{2-9}
& $10^4$ & 2.16 & $10^3$ & 915.94 & 1089.33 & 39.90 & 37.59&42.21\\ \cline{2-9}
& $10^4$ & 2.96 & $10^4$ & 9197.23 & 10811.90 & 63.34 & 59.43&67.24\\ \cline{2-9}
& $10^4$ & 3.71 & $10^5$ & 92205.45 & 107952.43 & 98.96 & 93.01&104.92\\ \hline
\end{tabular}
\label{tab:path}
\end{center}
\end{table}
\normalsize
For the same sensor deployment in
Fig.~\ref{fig:sensor_placement_7nodes}, we compute the $\mathsf{SADD}$ and the
$\mathsf{ARL2FA}$ for the square law path loss ($\eta=2$) sensing model given in
Section~\ref{sec:system_model}. Also,
the signal strength $h_e$ is taken to be unity. Thus, the sensor sets
(${\cal N}_i$s) and the detection subregions (${\cal A}_i$s) are the same
as those in the Boolean model described above.
Since $r_d$
is taken as 1, $f_1(\cdot;r_d) \sim {\cal N}(1,1)$.
Thus, the LLR of observation $X_k^{(s)}$ is given by
$\ln\left(\frac{f_1(X_k^{(s)};r_d)}{f_0(X_k^{(s)})}\right) =
X_k^{(s)}-\frac{1}{2}$, which is the same
as that in the Boolean sensing model. Hence, under the hypothesis that
the event has not occurred, the $\mathsf{ARL2FA}$ under the path
loss sensing model is the same as that of the Boolean sensing model.
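The claimed equality of the LLRs can be checked directly (a sketch; the test points are arbitrary).

```python
import math

def log_pdf_normal(x, mu, sigma):
    return -((x - mu) ** 2) / (2 * sigma**2) - math.log(sigma * math.sqrt(2 * math.pi))

def llr(x):
    # log( f1(x) / f0(x) ) with f1 = N(1,1) and f0 = N(0,1)
    return log_pdf_normal(x, 1.0, 1.0) - log_pdf_normal(x, 0.0, 1.0)

# The normalising constants cancel, leaving exactly x - 1/2.
for x in (-2.0, 0.0, 0.7, 3.14):
    assert abs(llr(x) - (x - 0.5)) < 1e-12
print("LLR equals x - 1/2")
```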
The $\mathsf{CUSUM}$ threshold $c$ that yields the target $\mathsf{ARL2FA}$s and
other parameters and results
are tabulated in Table~\ref{tab:path}.
To obtain the $\mathsf{SADD}$, the event is assumed to
occur at time 1, and at a distance of
$r_i$ from all the nodes of ${\cal N}_i$ that influence--cover the
event (which corresponds to the maximum detection delay).
We plot the $\mathsf{SADD}$ vs $\log(\mathsf{ARL2FA})$ in
Fig.~\ref{fig:path}. The ordering on $\mathsf{SADD}$ for any $\mathsf{ARL2FA}$
across the procedures is the same as that
in the Boolean model, and can be explained in the same manner. The
ambiguity in $\ell_e$ affects $f_1(\cdot;d_{e,s})$ and shows up as
large $\mathsf{SADD}$ values.
\vspace{-4mm}
\section{Conclusion}
\label{sec:conclusions}
We consider the quickest distributed event detection/isolation problem
in a large extent $\mathsf{WSN}$ with a practical sensing model which
incorporates the reduction in signal strength with distance. We formulate the change
detection/isolation problem in the optimality framework of
\cite{nikiforov03lower-bound-for-det-isolation}
and \cite{tartakovsky08multi-decision}.
We propose distributed
detection/isolation procedures, $\sf{MAX}$, $\sf{ALL}$ and $\sf{HALL}$
and show that, as $\min\{\mathsf{ARL2FA},1/\mathsf{PFI}\} \to \infty$, the $\mathsf{SADD}$ performance
of the distributed procedures scales in the same way as that of the
optimal centralised procedure of Tartakovsky
\cite{tartakovsky08multi-decision} and Nikiforov
\cite{nikiforov03lower-bound-for-det-isolation}.
\vspace{-4mm}
\appendices
\label{sec:appendix}
\begin{comment}
\section{}
\begin{lemma}
\label{lem:wald}
For any $k \geq 1$,
\begin{align}
\probnull{C^{(s)}_k \geq c }
&\leq e^{-c}
\end{align}
\end{lemma}
\begin{proof}
Let $S_k^{(s)}$ be the partial sum of the LLR of the observations up to time $k$.
Note that the ${\sf CUSUM}$ statistic $C^{(s)}_k = \max_{0 \leq t < k}
S^{(s)}_t$. Since the LLR $Z^{(s)}_k$s are i.i.d., the distribution of
$(S^{(s)}_k, S^{(s)}_k-S^{(s)}_1, \cdots, S^{(s)}_k-S^{(s)}_{k-1})$ is the same as that of
$(S^{(s)}_k, S^{(s)}_{k-1}, \cdots, S^{(s)}_1)$. Hence,
\begin{align}
\probnull{C^{(s)}_k \geq c }
&= \probnull{\max_{1 \leq t \leq k} S^{(s)}_t \geq c}
\ = \ \probnull{\tau^{{\sf SPRT},(s)} \leq k }\nonumber \\
&\leq \probnull{\tau^{{\sf SPRT},(s)} < \infty }
\ \leq \ e^{-c} \ \ \text{(Wald's inequality)}.
\end{align}
where $\tau^{{\sf SPRT},(s)}$ is the stopping time of the one--sided sequential
probability ratio test (${\sf SPRT}$) with the threshold being $c$
(\cite{basseville-nikiforov93detection}).
\end{proof}
\end{comment}
\section{Proof of Theorem~\ref{thm:tfa}}
\label{app:tfa}
From detection sensor sets ${\cal N}_i, i=1,2,\cdots,N$, we choose the
collection of indices ${\cal I} \subseteq \{1,2,\cdots,N\}$ such that
any two sensor sets ${\cal N}_i$, ${\cal N}_j$, $i,j
\in {\cal I}$, are not partially ordered by set inclusion. For each $i
\in {\cal I}$, define the set of sensors that are unique to the
sensor set ${\cal N}_i$,
${\cal M}_i \ := \ {\cal N}_i\setminus \underset{j \neq i, j \in {\cal
I}}{\bigcup}{\cal N}_j \
\subseteq {\cal N}_i$.
The sets ${\cal M}_1, {\cal M}_2, \cdots,
{\cal M}_{|{\cal I}|}$ are disjoint. Under the null hypothesis, ${\bf H}_0$, the
observations of sensors in the sensor sets ${\cal M}_1, {\cal M}_2,
\cdots, {\cal M}_{|{\cal I}|}$ are iid, with
the pdf
$f_0 \sim {\cal N}(0,\sigma^2)$.
For every ${\cal N}_i$, there exists ${\cal M}_j$ such that
${\cal M}_j \subseteq {\cal N}_i$, so that
$\tau^{{\sf rule},({\cal N}_i)} \geq \tau^{{\sf rule},({\cal M}_j)}$.
Hence,
$\tau^{\sf rule} =\min\{\tau^{{\sf rule},({\cal N}_i)}:i=1,2,\cdots,N\}
\geq \min\{\tau^{{\sf rule},({\cal M}_i)}:i\in {\cal I}\}
=: \widehat{\tau}^{\ {\sf rule}}$.
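The construction of ${\cal I}$ and of the unique-sensor sets ${\cal M}_i$ can be illustrated on a small hypothetical collection of detection sets (not the deployment used in the numerical section).

```python
# Hypothetical detection sets, indexed from 1.
N = {1: {1, 2}, 2: {2, 3}, 3: {1, 2, 3}}

# I: indices of sets that are minimal in the partial order of set inclusion.
I = [
    i for i in N
    if not any(j != i and N[j] < N[i] for j in N)   # strict-subset test
]

# M_i: sensors unique to N_i among the minimal sets.
M = {
    i: N[i] - set().union(*(N[j] for j in I if j != i))
    for i in I
}

m = min(len(M[i]) for i in I)
print(I, M, m)   # [1, 2] {1: {1}, 2: {3}} 1
```

Set 3 is excluded from ${\cal I}$ because it strictly contains set 1; the remaining minimal sets each retain one unique sensor, so $m = 1$ here.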
Hence,
\begin{align}
\myexpnull{\tau^{{\sf rule}}}
&\geq \myexpnull{\widehat{\tau}^{\ {\sf rule}}}
\ \geq e^{mc} \cdot \probnull{\widehat{\tau}^{\ {\sf rule}} > e^{mc}}
\ \text{(by the Markov inequality)}\nonumber \\
\text{or,} \
\frac{\myexpnull{\tau^{{\sf rule}}}}{e^{mc}}
&\geq \probnull{\widehat{\tau}^{\ {\sf rule}} > e^{mc}}
\label{eqn:markov}
\ = \ \prod_{i \in {\cal I}}\probnull{\tau^{\ {\sf rule}, ({\cal M}_i)} > e^{mc}}.
\end{align}
We analyse
$\probnull{\tau^{\ {\sf rule}, ({\cal M}_i)} > e^{mc}}$ as $c \to
\infty$, for
$\mathsf{ALL}$, $\mathsf{MAX}$, and $\mathsf{HALL}$. Let $m_i := |{\cal M}_i|$. For ${\sf ALL}$,
\begin{align}
\probnull{\tau^{\ {\sf ALL}, ({\cal M}_i)} = k}
& \leq \ \probnull{C_k^{(s)} \geq c, \forall s \in {\cal M}_i }\
\ = \ \prod_{s \in {\cal M}_i } \probnull{C_k^{(s)} \geq c }\nonumber \\
& {\leq} \ e^{-cm_i} \ \ \ \ \text{(using Wald's inequality)}\nonumber \\
\text{Therefore}, \hspace{10mm}
\probnull{\tau^{\ {\sf ALL}, ({\cal M}_i)} \leq k}
&\leq k \cdot e^{-cm_i}\nonumber \\
\probnull{\tau^{\ {\sf ALL}, ({\cal M}_i)} > e^{mc}} &\geq 1 -
e^{-c(m_i-m)}. \nonumber
\end{align}
Hence, for any $m < m_i$, we have
$\liminf_{c \to \infty} \
\probnull{\tau^{\ {\sf ALL}, ({\cal M}_i)} > e^{mc}} = 1$.
A large $m$ (which is smaller than all $m_i$s) is desirable. Thus, a
good choice for $m$ is
$a_{\sf ALL} = \min\{m_i : i \in {\cal I}\} - \delta$
for some arbitrarily small $\delta > 0$.
Hence, from Eqn.~\eqref{eqn:markov},
\begin{align}
\myexpnull{\tau^{\sf ALL}}& \geqslant \exp\left( a_{\sf ALL}c \right)(1+o(1))
\end{align}
\noindent
For ${\sf MAX}$,
at the stopping time of $\mathsf{MAX}$, at least one of the ${\sf CUSUM}$
statistics is above the threshold $c$,
\begin{align}
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} = k}
&\leq \probnull{C_k^{(s)} \geq c, \ \text{for some} \ s \in {\cal M}_i }\nonumber \\
&\leq \sum_{s \in {\cal M}_i} \probnull{C_k^{(s)} \geq c }\nonumber \\
&{\leq} m_i e^{-c} \ \ \ \ \text{(using Wald's inequality)}.
\end{align}
\begin{align}
\text{Therefore, for any arbitrarily small $\delta > 0$}, \
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} > e^{(1-\delta)c}} &\geq 1- m_ie^{-\delta c} \nonumber \\
\liminf_{c \to \infty}
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} > e^{(1-\delta)c}} &=1.
\end{align}
Let $a_{\sf MAX} = 1-\delta$.
For any arbitrarily small $\delta >0$, we see from Eqn.~\eqref{eqn:markov},
\begin{align}
\label{eqn:max_arlfa}
\myexpnull{\tau^{{\sf MAX}}}
\geqslant \exp\left((1-\delta) c\right) (1+o(1)) \
\ =: \ \exp\left(a_{\sf MAX} c\right) (1+o(1)).
\end{align}
\begin{comment}
{\bf An alternative bound for $\myexpnull{\tau^{\mathsf{MAX}}}$:}
At the stopping time of $\mathsf{MAX}$, the ${\sf CUSUM}$ statistic of all the
nodes have crossed the threshold $c$ (at least once). Hence,
\begin{align}
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} = k}
&\leq \probnull{
\tau^{(s)} \leq k, \ \forall \ s \in {\cal M}_i
}\nonumber \\
&=
\prod_{s \in {\cal M}_i} \ \
\probnull{ \tau^{(s)} \leq k}
\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
\sum_{t = 1}^k \
\probnull{ \tau^{(s)} = t}
\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
\sum_{t = 1}^k \
\probnull{ C^{(s)}_t \geq c}
\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
k e^{-c}
\ \
\text{(from Lemma~\ref{lem:wald})} \nonumber\\
&= \frac{k^{m_i}}{\exp\left({m_i}c\right)}
\nonumber\\
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} \leq e^{mc}}
&\leq
\frac{ \sum_{k=1}^{e^{mc}} k^{m_i}}
{\exp\left({m_i}c\right)} \nonumber \\
&\leq
\exp(mc) \exp(mm_i c) \cdot \exp\left(-{m_i}c\right)
\nonumber\\
&= \exp\left(-\left[m_i-m-mm_i\right]c\right)
\end{align}
Hence, if ${m_i}-m(1+m_i) > 0$,
$\liminf_{c \to \infty} \probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} >
e^{mc}} =1$. The largest $m$ for which ${m_i}-m(1+m_i) > 0$ for all
$m_i$s is given by $m = \frac{\underline{m}}{1+\underline{m}} -\delta$
for some arbitrarily small $\delta > 0$. We note that
$\frac{1}{2} \leq \frac{\underline{m}}{1+\underline{m}} \leq \frac{n}{1+n} < 1$. Hence, this does not yield a
a better bound for
$\mathsf{ARL2FA}$
compared to that given by Eqn.~\eqref{eqn:max_arlfa}.
Hence, for any arbitrarily small $\delta >0$,
\begin{align*}
\label{eqn:max_arlfa1}
\myexpnull{\tau^{{\sf MAX}}}& \geqslant e^{(1-\delta)c} (1+o(1))
\end{align*}
\end{comment}
\noindent
For {\sf HALL}, for the same threshold $c$,
the stopping time of $\mathsf{HALL}$ is after
that of $\mathsf{MAX}$, i.e.,
$\tau^{\ {\sf HALL}} \ \geq \ \tau^{\ {\sf MAX}}$. Hence,
$\myexpnull{\tau^{\ {\sf HALL}}} \ \geq \
\myexpnull{\tau^{\ {\sf MAX}}} \ \geq \
\exp\left((1-\delta)c\right)(1+o(1))$ (from Eqn.~\eqref{eqn:max_arlfa}).
Thus, for $a_{\sf HALL} := 1-\delta$, for any arbitrarily small $\delta >
0$,
\begin{align}
\myexpnull{\tau^{\ {\sf HALL}}} & \geq \exp\left(a_{\sf HALL} c\right) (1+o(1))
\end{align}
\begin{comment}
{\bf An alternative bound for $\myexpnull{\tau^{\mathsf{HALL}}}$:} At the
stopping time of $\mathsf{HALL}$, the ${\sf CUSUM}$ statistic of all the nodes
have crossed the threshold $c$ and remain in the ${\sf local alarm}$
state.
Let the random variable $H \geq 0$ represent the excess of the ${\sf
CUSUM}$ statistic above the threshold.
Hence,
\begin{align}
\probnull{\tau^{\ {\sf HALL}, ({\cal M}_i)} = k}
&\leq \probnull{
C_{t_s}^{(s)} \geq c, \ \text{for some} \ \ t_s \leq k,
C_{t}^{(s)} > 0, \ \forall \ t = t_s+1,t_s+2,\cdots,k,
\forall s \in {\cal M}_i
}\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
\left[
\sum_{t_s=1}^{k-1}
\probnull{
C_{t_s}^{(s)} = c + H, \
C_{t}^{(s)} > 0, \ \forall \ t = t_s+1,t_s+2,\cdots,k}
+
\probnull{
C_{k}^{(s)} \geq c}
\right]
\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
\left[
\sum_{t_s=1}^{k-1}
\probnull{C_{t_s}^{(s)} \geq c} \
\probnull{C_{t}^{(s)} > 0, \ \forall \ t = t_s+1,t_s+2,\cdots,k \mid
C_{t_s}^{(s)} = c+H}
+ e^{-c}
\right]
\nonumber \\
&\leq
\prod_{s \in {\cal M}_i} \ \
\left[
\sum_{t_s=1}^{k-1}
e^{-c} \
\probnull{\min\{C_{t}^{(s)}:t_s+1\leq t \leq k\} > 0 \mid C_{t_s}^{(s)} = c+H}
+ e^{-c}
\right]
\nonumber \\
&=
\prod_{s \in {\cal M}_i} \ \
\exp\left(-c\right)
\left[1+
\sum_{t_s=1}^{k-1}
\probnull{\min\left\{\sum_{n=t_s+1}^{t} Z_{k}^{(s)}: t_s+1 \leq t
\leq k \right\} > -(c+H)}
\right]
\ \
\nonumber\\
&=
\prod_{s \in {\cal M}_i} \ \
\exp\left(-c\right)
\left[1+
\sum_{t_s=1}^{k-1}
\probnull{\min\left\{\sum_{n=1}^{t} Z_{k}^{(s)}: 1 \leq t
\leq k-t_s \right\} > -(c+H)}
\right]
\ \
\nonumber\\
&=
k^{m_i} \cdot \exp\left(-\frac{m_i}{2}c\right)
\left[
\frac{\exp\left(-\frac{\alpha}{4}\right)}{1-\exp\left(-\frac{\alpha}{4}\right)}\right]^{m_i}
\nonumber\\
\probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} \leq e^{mc}}
&\leq
\left[ \sum_{k=1}^{e^{mc}} k^{m_i} \right]\cdot
\exp\left(-\frac{m_i}{2}c\right)
\left[
\frac{\exp\left(-\frac{\alpha}{4}\right)}{1-\exp\left(-\frac{\alpha}{4}\right)}\right]^{m_i}
\nonumber\\
&\leq
\exp(mc) \exp(mm_i c) \cdot \exp\left(-\frac{m_i}{2}c\right)
\left[
\frac{\exp\left(-\frac{\alpha}{4}\right)}{1-\exp\left(-\frac{\alpha}{4}\right)}\right]^{m_i}
\nonumber\\
&= \exp\left(-\left[\frac{m_i}{2}-m-mm_i\right]c\right) \left[ \frac{\exp\left(-\frac{\alpha}{4}\right)}{1-\exp\left(-\frac{\alpha}{4}\right)}\right]^{m_i}
\end{align}
Hence, the value of $m$ which achieves
$\liminf_{c \to \infty} \probnull{\tau^{\ {\sf MAX}, ({\cal M}_i)} >
e^{mc}} =1$ is given by $\frac{m_i}{2}-m(1+m_i) > 0$ or $m =
\frac{\underline{m}}{2(1+\underline{m})} -\delta$ for some arbitrarily
small $\delta > 0$.
We note that
$\frac{m_i}{2(1+m_i)} \leq \frac{1}{2}$. Hence, this does not yield a
a better bound for
$\mathsf{ARL2FA}$
compared to that given by Eqn.~\eqref{eqn:max_arlfa}.
Hence, for any arbitrarily small $\delta >0$,
\begin{align*}
\label{eqn:max_arlfa1}
\myexpnull{\tau^{{\sf MAX}}}& \geqslant e^{(1-\delta)c} (1+o(1))
\end{align*}
\end{comment}
\section{Proof of Theorem~\ref{thm:pfi}}
\label{app:pfi}
Consider $\ell_e \in {\cal A}_i$. The probability of false isolation
when the detection is due to ${\cal N}_j \not\subseteq{\cal N}(\ell_e)$
is
\begin{align*}
\pmeasure{
\tau^{{\sf rule}} = \tau^{{\sf rule},({\cal N}_j)}
}
&= \pmeasure{
\tau^{{\sf rule},({\cal N}_j)} \leq
\tau^{{\sf rule},({\cal N}_h)}, \forall h=1,2,\cdots,N
}\\
&\leq \pmeasure{
\tau^{{\sf rule},({\cal N}_j)} \leq \tau^{{\sf rule},({\cal N}_i)}
} \\
&= \sum_{k=1}^\infty
\pmeasure
{ \tau^{{\sf rule},({\cal N}_i)} = k
}
\pmeasure{ \tau^{{\sf rule},({\cal N}_j)} \leq k
\mid \tau^{{\sf rule},({\cal N}_i)} = k
}\nonumber \\
&= \sum_{k=1}^\infty
\pmeasure
{ \tau^{{\sf rule},({\cal N}_i)} = k
}
\left[
\sum_{t=1}^k
\pmeasure{ \tau^{{\sf rule},({\cal N}_j)} = t
\mid \tau^{{\sf rule},({\cal N}_i)} = k
}
\right]
\end{align*}
\subsection{${\sf PFI}(\tau^{\sf ALL})$ -- Boolean Sensing Model}
\vspace{-9mm}
\begin{align}
\pbmeasure{
\tau^{\mathsf{ALL},({\cal N}_j)}=t
\mid \tau^{\mathsf{ALL},({\cal N}_i)}=k
}
&\leq \pbmeasure{ C_{t}^{(s)} \geq c, \ \forall s \in {\cal N}_j
\mid \tau^{\mathsf{ALL},({\cal N}_i)}=k
}\nonumber \\
&\leq {\mathsf P}_\infty\left\{ C_{t}^{(s)} \geq c, \
\forall s \in {\cal N}_j \setminus {\cal N}_i
\right\}\nonumber \\
&\leq
\exp\left(-|{\cal N}_j\setminus{\cal N}_i|c\right) \ \ \
\text{(using Wald's inequality)}.\nonumber \\
\text{Therefore,} \
\pbmeasure{
\tau^{{\mathsf{ALL}},({\cal N}_j)} \leq
\tau^{\mathsf{ALL},({\cal N}_i)}
}
&\leq \exp\left(-|{\cal N}_j\setminus{\cal N}_i|c\right) \cdot
{\mathsf E}_1^{(i)}\left[\tau^{\mathsf{ALL}, ({\cal N}_i)} \right] \nonumber \\
&\leq \exp\left(-(|{\cal N}_j\setminus{\cal N}_i|c-\ln(c))\right) \cdot
\frac{1}{\alpha|{\cal N}_i|}(1+o(1)). \nonumber \\
\text{Hence}, \ {\sf PFI}(\tau^{\mathsf{ALL}})
&\leq \ \max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq {\cal N}_i }
\
{\mathsf P}_1^{(i)}\left\{
\tau^{{\mathsf{ALL}},({\cal N}_j)} \leq \tau^{\mathsf{ALL},({\cal N}_i)}
\right\}\nonumber \\
&\leq \frac{\exp\left(-(m c-\ln(c))\right)}{\underline{n}\alpha
}(1+o(1))
\end{align}
where
$\underline{n} = \min\{|{\cal N}_i| : i=1,2,\cdots,N\}$,
$m = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\min} \{ |{\cal
N}_j\setminus{\cal N}_i| \}$.
For any $n$, there exists $c_0(n)$ such that $c < e^{c/n}$ for all
$c > c_0(n)$. Using this inequality, for sufficiently large $c$,
\begin{align*}
{\sf PFI}(\tau^{\mathsf{ALL}})
&\leq \frac{\exp\left(-\left(\left(m-\frac{1}{n}\right) c\right)\right)}{\underline{n}\alpha
}(1+o(1)) \ =
\frac{\exp(- b_{\mathsf{ALL}}\cdot c)}
{B_{\sf \mathsf{ALL}}}
(1+o(1)),
\end{align*}
where
$b_{\sf \mathsf{ALL}} = m - 1/n$ and $B_{\mathsf{ALL}} = \underline{n}\alpha$.
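The step above relied on the elementary fact that $c < e^{c/n}$ for all sufficiently large $c$ (equivalently, $n\ln c < c$). A quick numerical check, with an arbitrary illustrative $n$, shows both that the inequality can fail for moderate $c$ and that it holds past a finite threshold $c_0(n)$:

```python
import math

n = 100                              # illustrative value, not from the proof
f = lambda c: n * math.log(c) - c    # c < e^{c/n}  <=>  f(c) < 0

assert f(50) > 0      # fails at moderate c: 100 * ln(50) ~ 391 > 50
assert f(2000) < 0    # holds at large c:   100 * ln(2000) ~ 760 < 2000

# f'(c) = n/c - 1 < 0 for c > n, so once f turns negative it stays
# negative; any point past the larger root of f is a valid c_0(n).
```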
\subsection{${\sf PFI}(\tau^{\sf MAX})$ -- Boolean Sensing Model}
\vspace{-9mm}
\begin{align*}
{\mathsf P}_1^{(i)}\left\{ \tau^{{\sf MAX},({\cal N}_j)} =t
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
\right\}
&\leq
{\mathsf P}_1^{(i)}\left\{ \tau^{(s)} \leq t, \forall s \in {\cal
N}_j
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
\right\}\\
&\leq
{\mathsf P}_\infty\left\{ \tau^{(s)} \leq t, \forall s \in {\cal
N}_j\setminus{\cal N}_i
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
\right\}\\
&=
\prod_{s \in {\cal N}_j\setminus {\cal N}_i} \
\sum_{n=1}^t \
{\mathsf P}_\infty\left\{ \tau^{(s)} =n \mid \tau^{\mathsf{MAX}, ({\cal
N}_i)}=k
\right\}\\
&=
\prod_{s \in {\cal N}_j\setminus {\cal N}_i} \
\sum_{n=1}^t \
{\mathsf P}_\infty\left\{ C_n^{(s)} \geq c
\right\}\\
&\leq \exp\left(-m_{ji}c\right)t^{m_{ji}}, \ \ \text{where} \ m_{ji} := |{\cal N}_j\setminus{\cal N}_i|.\\
\text{Hence,} \ {\mathsf P}_1^{(i)}\left\{
\tau^{\sf MAX,({\cal N}_j)} \leq \tau^{\mathsf{MAX}, ({\cal N}_i)}
\right\}
&\leq
\exp\left(- m_{ji} c\right)
{\mathsf E}_1^{(i)}\left[(\tau^{\mathsf{MAX}, ({\cal N}_i)})^{1+m_{ji}}
\right] \nonumber\\
&\leq
\exp\left(- m_{ji} c\right)
\frac{c^{1+m_{ji}}}{\alpha^{1+m_{ji}}}(1+o(1))\\
& =
\frac{\exp\left(-\left(m_{ji}c-(1+m_{ji})\ln(c)\right)\right)}{\alpha^{1+m_{ji}}}
(1+o(1))
\end{align*}
\begin{comment}
where in step $(a)$, we used
Theorem 6 in
\cite{tartakovsky-veeravalli05general-asymptotic-quickest-change}.
\end{comment}
Let $m = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\min} m_{ji}$,
$\bar{m} = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\max} m_{ji}$,
and $\alpha^* = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\min} \alpha^{1+m_{ji}}$.
\begin{comment}
and $\bar{m} = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\max} m_{ji}$.
Define $K$ as follows.
\begin{align*}
K &:=\left\{
\begin{array}{ll}
\frac{\exp\left(-\frac{\alpha}{4} \right)}
{\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)},
& \text{if} \ \frac{\exp\left(-\frac{\alpha}{4} \right)}
{\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)} \leq 1 \\
\left(\frac{\exp\left(-\frac{\alpha}{4} \right)}
{\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)}
\right)^{\bar{m}}, & \text{otherwise}
\end{array}
\right.
\end{align*}
\end{comment}
\begin{align*}
\text{Therefore,} \
{\sf PFI}(\tau^{\mathsf{MAX}})
&\leq \max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq {\cal N}_i }
\
{\mathsf P}_1^{(i)}\left\{
\tau^{\sf MAX,({\cal N}_j)} \leq \tau^{\mathsf{MAX}, ({\cal N}_i)}
\right\}\\
&\leq
\frac{\exp\left(-\left(m c-(1+\bar{m})\ln(c)\right)\right)}{\alpha^*}
(1+o(1)).
\end{align*}
For any $n$, there exists $c_0(n)$ such that $c < e^{c/n}$ for all
$c > c_0(n)$. Hence, for sufficiently large $c$,
\begin{align*}
{\sf PFI}(\tau^{\mathsf{MAX}})
&\leq \max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq {\cal N}_i }
\
{\mathsf P}_1^{(i)}\left\{
\tau^{\sf MAX,({\cal N}_j)} \leq \tau^{\mathsf{MAX}, ({\cal N}_i)}
\right\}\\
&\leq
\frac{\exp\left(-\left(m -\frac{1+\bar{m}}{n}\right)c\right)}{\alpha^*}
(1+o(1)) \ = \
\frac{\exp(- b_{\mathsf{MAX}} \cdot c)}
{B_{\sf \mathsf{MAX}}}
(1+o(1)),
\end{align*}
where $b_{\sf \mathsf{MAX}} = m - ((1+\bar{m})/n)$ and $B_{\mathsf{MAX}} = \alpha^*$.
\subsection{${\sf PFI}(\tau^{\sf HALL})$ -- Boolean Sensing Model}
\vspace{-9mm}
\begin{align*}
{\mathsf P}_1^{(i)}\left\{ \tau^{{\sf HALL},({\cal N}_j)} =t
\mid \tau^{\mathsf{HALL}, ({\cal N}_i)}=k
\right\}
&\leq
{\mathsf P}_1^{(i)}\left\{ \tau^{(s)} \leq t, \forall s \in {\cal
N}_j
\mid \tau^{\mathsf{HALL}, ({\cal N}_i)}=k
\right\}
\end{align*}
which has the same form as that of $\mathsf{MAX}$. Hence, from the analysis of
$\mathsf{MAX}$, it follows that
\begin{align*}
{\mathsf P}_1^{(i)}\left\{
\tau^{\sf HALL,({\cal N}_j)} \leq \tau^{\mathsf{HALL}, ({\cal N}_i)}
\right\}
&\leq
\exp\left(-m_{ji}c\right)
{\mathsf E}_1^{(i)}\left[(\tau^{\mathsf{HALL}, ({\cal N}_i)})^{1+m_{ji}}
\right]\\
&\leq
\exp\left(-m_{ji}c\right)
\frac{c^{1+m_{ji}}}{|{\cal N}_i|^{1+m_{ji}} \alpha^{1+m_{ji}}}(1+o(1))\\
& =
\exp\left(-\left(m_{ji}c-(1+m_{ji})\ln(c)\right)\right)
\left[\frac{1}{\alpha|{\cal N}_i| }\right]^{1+m_{ji}}
(1+o(1))
\end{align*}
\begin{align*}
{\sf PFI}(\tau^{\mathsf{HALL}})
& \leq \ \max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq {\cal N}_i }
\
{\mathsf P}_1^{(i)}\left\{
\tau^{\sf HALL,({\cal N}_j)} \leq \tau^{\mathsf{HALL}, ({\cal N}_i)}
\right\} \\
&\leq
\frac{ \exp\left(-
\left(mc-(1+\bar{m})\ln(c)\right)\right)}{\alpha^*}(1+o(1)),
\end{align*}
where
$\alpha^* = \underset{1 \leq i \leq N, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}_i}{\min} \left(\alpha \cdot|{\cal
N}_i|\right)^{1+m_{ji}}$.
For any $n$, there exists $c_0(n)$ such that $c < e^{c/n}$ for all
$c > c_0(n)$. Hence, for sufficiently large $c$,
\begin{align*}
{\sf PFI}(\tau^{\mathsf{HALL}})
& \leq \ \max_{1 \leq i \leq N} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq {\cal N}_i }
\
{\mathsf P}_1^{(i)}\left\{
\tau^{\sf HALL,({\cal N}_j)} \leq \tau^{\mathsf{HALL}, ({\cal N}_i)}
\right\} \\
&\leq
\frac{ \exp\left(-
\left(m-\frac{1+\bar{m}}{n}\right)c\right)}{\alpha^*}(1+o(1))
\ = \
\frac{\exp(- b_{\mathsf{HALL}} \cdot c)}
{B_{\sf \mathsf{HALL}}}
(1+o(1)),
\end{align*}
where $b_{\sf \mathsf{HALL}} = m - ((1+\bar{m})/n)$ and $B_{\mathsf{HALL}} = \alpha^*$.
\begin{comment}
\begin{align*}
K &:=\left\{
\begin{array}{ll}
\frac{\exp\left(-\frac{\alpha}{4} \right)}
{\underline{n}\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)},
& \text{if} \ \frac{\exp\left(-\frac{\alpha}{4} \right)}
{\underline{n}\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)} \leq 1 \\
\left(\frac{\exp\left(-\frac{\alpha}{4} \right)}
{\underline{n}\alpha\left(1-\exp\left(-\frac{\alpha}{4}\right)\right)}
\right)^{\bar{m}}, & \text{otherwise}
\end{array}
\right.
\end{align*}
and $\underline{n} = \min\{|{\cal N}_i| : i = 1,2,\cdots,N\}$.
\end{comment}
\section*{$\mathsf{PFI}$ -- Path--Loss Sensing Model}
\begin{comment}
Let us consider an event occurring at location $\ell_e \in {\cal A}_i$,
i.e., the set of sensors ${\cal N}_i$ detection--covers (and hence,
influence covers) $\ell_e$, and no sensor $s \in \overline{{\cal N}}_i :=
\{1,2,\cdots,n\}\setminus{\cal N}_i$ detection covers $\ell_e$. But
there may exist sensors $s' \in \overline{{\cal N}}_i$ (in addition
to the sensors in ${\cal N}_i$) that can influence cover $\ell_e$. Let
the set of sensors that influence covers $\ell_e$ be denoted by ${\cal M}_e$.
If an alarm occurs due to ${\cal N}_{h} \subseteq {\cal M}_e$, then the isolation
region ${\cal B}({\cal N}_h)$ contains
$\ell_e$, and hence, is a correct isolation. On the other hand,
if the alarm is due to ${\cal N}_{h'} \not\subseteq {\cal M}_e$, then the isolation
region ${\cal B}({\cal N}_{h'})$ does not contain $\ell_e$, and hence, is
a false isolation. In this section, we compute the probability of false
isolation when the event occurs at ${\cal A}_i$ and the
alarm is due to ${\cal N}_j \not\subseteq {\cal M}_e$, i.e., ${\cal
N}_j$ contains at least one sensor $s \notin {\cal M}_e$.
Let $T \in \{1,2,\cdots\}$ be an unknown change point, and let
$\ell_{e,i}$ denote $\ell_e:\ell_e\in{\cal A}_i$. The probability
measure ${\sf P}_T^{(\ell_{e,i})}(\cdot)$ is with respect to the change
happening at time $T$ and at location $\ell_{e,i}$, and the probability
measure ${\sf P}_\infty(\cdot)$ is with respect to the change happening
at $\infty$ (i.e., it corresponds to the pre--change hypothesis ${\bf
H}_0$). Let $\tau \in \{1,2,\cdots\}$ be the stopping time.
We compute the probability of a ${\sf CUSUM}$ statistic crossing a
threshold $c$ of a sensor node $s \in {\cal N}_j\setminus{\cal M}_e$. We
will be using this bound to compute the bounds for
$\mathsf{PFI}\left(\tau^{\mathsf{ALL}}\right)$ and $\mathsf{PFI}\left(\tau^{\mathsf{MAX}}\right)$.
\end{comment}
\begin{lemma}
\label{lem:path-bound}
For $s \in {\cal N}_j\setminus {\cal M}_e$ and for $t \geq T$, (with the
pre--change pdf
$f_0 \sim {\cal N}(0,\sigma^2)$ and the post--change pdf $f_1 \sim {\cal
N}(h_e\rho(r_s),\sigma^2)$)
\begin{align*}
\pmeasure{ C_{t}^{(s)} \geq c }
&\leq
\exp\left( -\frac{\underline{\omega}_0}{2}c\right)
\cdot
\frac{\exp\left( -
\frac{\alpha\underline{\omega}_0^2}{4}\right)}{1-\exp\left(-\frac{\alpha
\underline{\omega}_0^2}{4}\right)},
\end{align*}
where we recall that the parameter $\underline{\omega}_0$ defines the
influence range, and $\alpha = $ KL$(f_1,f_0)$.
\end{lemma}
\noindent
{\em Proof:}
For $s \in {\cal N}_j\setminus {\cal M}_e$ and for $t \geq T$,
\footnotesize
\begin{align*}
&\pmeasure{ C_{t}^{(s)} \geq c }\\
&= \pmeasure{ \max_{1\leq n \leq t}
\sum_{k=1}^{n}
\ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c }\\
&\leq \sum_{n=1}^{\infty} \pmeasure{ \sum_{k=1}^n \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c }\\
&= \sum_{n=1}^{T-1} {\mathsf P}_{\infty}\left\{ \sum_{k=1}^n \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c \right\}
+ \sum_{n=T}^{\infty} \pmeasure{ \sum_{k=1}^n \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c }\\
&= \sum_{n=1}^{T-1} {\mathsf P}_{\infty}\left\{ \sum_{k=1}^n \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c \right\}
+
\sum_{n=T}^{\infty} \pmeasure{
\sum_{k=1}^{T-1} \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right)
+\sum_{k=T}^{n} \ln\left(\frac{f_1(X^{(s)}_k;r_s)}{f_0(X^{(s)}_k)}\right) \geq c }\\
&= \sum_{n=1}^{T-1} {\mathsf P}_{\infty}\left\{
\sum_{k=1}^{n}
X_k^{(s)}
\geq \frac{\sigma^2}{h_e\rho(r_s)}(c + n \alpha)
\right\}
+ \sum_{n=T}^{\infty} {\mathsf P}_{\infty}\left\{
\sum_{k=1}^{n}
X_k^{(s)}
\geq \frac{\sigma^2}{h_e\rho(r_s)}c+nh_e\left(\frac{\rho(r_s)}{2}
- \rho(d_{e,s}) \right)
+ \left( T-1\right) h_e\rho(d_{e,s})
\right\}\\
&\leq \sum_{n=1}^{T-1} {\mathsf P}_{\infty}\left\{
\sum_{k=1}^{n}
X_k^{(s)}
\geq \frac{\sigma^2}{h_e\rho(r_s)}(c + n \alpha)
\right\} + \sum_{n=T}^{\infty} {\mathsf P}_{\infty}\left\{
\sum_{k=1}^{n} X_k^{(s)} \geq
n \cdot {h_e \frac{\rho(r_s)}{2} \underline{\omega}_0}
+ {c \cdot \frac{\sigma^2}{h_e\rho(r_s)}}
\right\}\\
&\leq \sum_{n=1}^{\infty} {\mathsf P}_{\infty}\left\{
\exp\left(\theta\sum_{k=1}^{n} X_k^{(s)}\right) \geq
\exp\left(
\frac{\theta\sigma^2}{h_e\rho(r_s)}(c+ n\alpha\underline{\omega}_0)
\right)
\right\} \ \ \text{for any} \ \theta > 0.\\
\text{Hence}, \ \pmeasure{ C_{t}^{(s)} \geq c }
&\leq
\sum_{n=1}^{\infty}
\exp\left(-\frac{\theta\sigma^2}{h_e\rho(r_s)}
(c+n\alpha\underline{\omega}_0) \right) \left({\mathsf
E}_\infty\left[e^{\theta X_1^{(s)}}\right]\right)^n \\
&=
\sum_{n=1}^{\infty}
\exp\left(-\frac{\theta\sigma^2}{h_e\rho(r_s)}
(c+n\alpha\underline{\omega}_0) +
\frac{n\sigma^2\theta^2}{2}\right)
\end{align*}
\normalsize
Since the above inequality holds for any $\theta > 0$, we have
\begin{align*}
\pmeasure{ C_{t}^{(s)} \geq c }
&\leq
\sum_{n=1}^{\infty}
\min_{\theta>0}
\exp\left(-\frac{\theta\sigma^2}{h_e\rho(r_s)}
(c+n\alpha\underline{\omega}_0) +
\frac{n\sigma^2\theta^2}{2}\right)
\end{align*}
The minimising $\theta$ is
$\frac{c+n\alpha\underline{\omega}_0}{nh_e\rho(r_s)}$.
Therefore, for
$\theta = \frac{c+n\alpha\underline{\omega}_0}{nh_e\rho(r_s)}$,
\begin{align*}
\pmeasure{ C_{t}^{(s)} \geq c }
&\leq
\sum_{n=1}^{\infty}
\exp\left( \frac{-(c+n\alpha\underline{\omega}_0)^2}{4\alpha n}
\right).
\end{align*}
\begin{align*}
\text{Note that} \
-\frac{(c+ \alpha \underline{\omega}_0 n)^2}{4\alpha n}
+\frac{(c+ \alpha \underline{\omega}_0 (n-1))^2}{4\alpha (n-1)}
&=
-\frac{\alpha\underline{\omega}_0^2}{4} + \frac{c^2}{4\alpha(n-1)n}
\end{align*}
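This identity can be confirmed numerically for arbitrary illustrative parameter values (the specific numbers below are not from the model):

```python
# Verify the telescoping identity
#   -(c + a n)^2/(4 alpha n) + (c + a (n-1))^2/(4 alpha (n-1))
#       = -alpha w0^2 / 4 + c^2 / (4 alpha (n-1) n),   with a = alpha * w0,
# over a grid of n, at illustrative parameter values.
alpha, w0, c = 0.7, 0.4, 12.0
a = alpha * w0

for n in range(2, 50):
    lhs = (-(c + a * n) ** 2 / (4 * alpha * n)
           + (c + a * (n - 1)) ** 2 / (4 * alpha * (n - 1)))
    rhs = -alpha * w0 ** 2 / 4 + c ** 2 / (4 * alpha * (n - 1) * n)
    assert abs(lhs - rhs) < 1e-9
print("identity holds for n = 2..49")
```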
Therefore, by iteratively computing the exponent, we have
\begin{align*}
\exp\left(-\frac{(c+\alpha\underline{\omega}_0n)^2}{4\alpha n}\right)
&=
\exp\left(-\frac{(c+\alpha\underline{\omega}_0)^2}{4\alpha}\right)\cdot
\exp\left(-\frac{\alpha \underline{\omega}_0^2}{4}(n-1)\right) \exp\left(
\frac{c^2}{4\alpha}\left(1 - \frac{1}{n}\right)\right) \\
&\leq
\exp\left(-\frac{(c+\alpha\underline{\omega}_0)^2}{4\alpha}\right)\cdot
\exp\left(-\frac{\alpha \underline{\omega}_0^2}{4}(n-1)\right) \exp\left(
\frac{c^2}{4\alpha}\right) \\
\text{or} \
\sum_{n=1}^{\infty}
\exp\left(-\frac{(c+\alpha\underline{\omega}_0n)^2}{4\alpha
n}\right) &\leq
\exp\left( -\frac{\underline{\omega}_0}{2}c\right)
\cdot
\frac{\exp\left( - \frac{\alpha\underline{\omega}_0^2}{4}\right)}{1-\exp\left(-\frac{\alpha \underline{\omega}_0^2}{4}\right)}\\
&=: \beta
\end{align*}
\normalsize
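As a numerical sanity check on the lemma's bound, the series $\sum_{n\geq 1}\exp\left(-\frac{(c+\alpha\underline{\omega}_0 n)^2}{4\alpha n}\right)$ can be compared against $\beta$ for illustrative parameter values (the numbers below are arbitrary choices, not model data):

```python
import math

# Compare the exact series against the geometric-series bound beta
# from the lemma.
alpha, w0, c = 0.5, 0.5, 10.0
a = alpha * w0

series = sum(math.exp(-(c + a * n) ** 2 / (4 * alpha * n))
             for n in range(1, 5000))
beta = (math.exp(-w0 * c / 2)
        * math.exp(-alpha * w0 ** 2 / 4)
        / (1 - math.exp(-alpha * w0 ** 2 / 4)))

assert 0 < series <= beta
print(f"series = {series:.4f} <= beta = {beta:.4f}")
```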
\subsection{${\sf PFI}(\tau^{\sf ALL})$ -- Path Loss Sensing Model}
\vspace{-9mm}
\begin{align*}
\pmeasure{
\tau^{\sf ALL,({\cal N}_j)} = t
\mid \tau^{\mathsf{ALL}, ({\cal N}_i)}=k
}
&\leq
\pmeasure{C_t^{(s)} \geq c, \forall s
\in {\cal N}_j
\mid \tau^{\mathsf{ALL}, ({\cal N}_i)}=k
} \nonumber\nonumber \\
&\leq
\pmeasure{C_t^{(s)} \geq c, \forall s
\in {\cal N}_j\setminus {\cal N}(\ell_e)
\mid \tau^{\mathsf{ALL}, ({\cal N}_i)}=k
} \nonumber \nonumber \\
&= \prod_{s \in {\cal N}_j\setminus{\cal N}(\ell_e)}
\pmeasure{C_t^{(s)} \geq c
} \nonumber \nonumber \\
&\leq \beta^{|{\cal N}_j\setminus{\cal N}(\ell_e)|}
\ \ \ (\text{from Lemma~\ref{lem:path-bound}})\nonumber \\
\text{Therefore}, \
\pmeasure{
\tau^{\sf ALL,({\cal N}_j)} \leq \tau^{\mathsf{ALL}, ({\cal N}_i)}
}
&\leq \beta^{{|{\cal N}_j\setminus{\cal N}(\ell_e)|}}
{\mathsf E}_1^{({\bf d}(\ell_{e}))}\left[\tau^{\mathsf{ALL}, ({\cal N}_i)}
\right]
\\
&\leq \beta^{{|{\cal N}_j\setminus{\cal N}(\ell_e)|}}\frac{c}{\alpha|{\cal
N}_i|}(1+o(1))
\end{align*}
Let $m = \underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\min} |{\cal N}_j \setminus {\cal
N}(\ell_e)|$
and
$\underline{n}=\min\{|{\cal N}_i|:i=1,2,\cdots,N\}$.
Define $K =
\underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\max}
\left[\frac{\exp\left(-\frac{\alpha\underline{\omega}_0^2}{4} \right)}
{1-\exp\left(-\frac{\alpha\underline{\omega}_0^2}{4}\right)}\right]^
{ |{\cal N}_j \setminus {\cal N}(\ell_e)|} $.
Therefore,
\begin{align*}
{\mathsf{PFI}}\left(\tau^{\sf ALL}\right)
&\leq \ \max_{1 \leq i \leq N} \
\sup_{\ell_e \in {\cal A}_i} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq{\cal N}(\ell_e)} \
\pmeasure{
\tau^{\sf ALL,({\cal N}_j)} \leq \tau^{\mathsf{ALL}, ({\cal N}_i)}
} \\
&\leq \frac{K \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}c-\ln(c)\right) \right)}{\alpha
\underline{n}} (1+o(1)).
\end{align*}
For any $n$, there exists $c_0(n)$ such that $c < e^{c/n}$ for all
$c > c_0(n)$. Hence, for sufficiently large $c$,
\begin{align*}
{\mathsf{PFI}}\left(\tau^{\sf ALL}\right)
&\leq \frac{K \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}-\frac{1}{n}\right)c \right)}{\alpha
\underline{n}} (1+o(1)) \ = \
\frac{\exp(-b_{\mathsf{ALL},d} \cdot c)}{B_{\sf \mathsf{ALL},d}} (1+o(1))
\end{align*}
where $b_{\mathsf{ALL},d}=(m\underline{\omega}_0/2) -(1/n)$ and
$B_{\mathsf{ALL},d}= \alpha\underline{n}/K$.
\subsection{${\sf PFI}(\tau^{\sf MAX})$ -- Path Loss Sensing Model}
\vspace{-9mm}
\begin{align*}
\pmeasure{\tau^{\sf MAX,({\cal N}_j)} = t
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
}
&\leq
\pmeasure{\tau^{(s)} \leq t, \forall \ s \in {\cal N}_j \setminus{\cal N}(\ell_e)
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
} \nonumber \\
&=
\prod_{s \in {\cal N}_j\setminus{\cal N}(\ell_e)} \
\pmeasure{\tau^{(s)} \leq t
\mid \tau^{\mathsf{MAX}, ({\cal N}_i)}=k
} \nonumber \\
&\leq
\prod_{s \in {\cal N}_j\setminus{\cal N}(\ell_e)} \
\sum_{n=1}^t \
\pmeasure{C_n^{(s)} \geq c} \nonumber \\
&\leq
{\beta}^{|{\cal N}_j\setminus{\cal N}(\ell_e)|}\cdot
t^{|{\cal N}_j\setminus{\cal N}(\ell_e)|}
\ \ \ (\text{from Lemma~\ref{lem:path-bound}})\nonumber \\
\pmeasure{\tau^{\sf MAX,({\cal N}_j)} \leq \tau^{\mathsf{MAX}, ({\cal N}_i)}}
&\leq
{\beta}^{|{\cal N}_j\setminus{\cal N}(\ell_e)|}\cdot
{\mathsf E}_1^{({\bf d}(\ell_{e}))}\left[(\tau^{\mathsf{MAX},({\cal N}_i)})^
{1+|{\cal N}_j\setminus{\cal N}(\ell_e)|}
\right] \nonumber \\
&\leq
{\beta}^{|{\cal N}_j\setminus{\cal N}(\ell_e)|}\cdot
\frac{c^{1+
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}
}{\alpha^{1+
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}}(1+o(1))
\end{align*}
\begin{comment}
where in step $(a)$, we used
Theorem 6 in
\cite{tartakovsky-veeravalli05general-asymptotic-quickest-change}.
\end{comment}
Let $m = \underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\min}
|{\cal N}_j\setminus{\cal N}(\ell_e)|
$,
$\bar{m} = \underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\max}
|{\cal N}_j \setminus {\cal N}(\ell_e)|
$, and
define $K =
\underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\max}
\left[\frac{\exp\left(-\frac{\alpha\underline{\omega}_0^2}{4} \right)}
{1-\exp\left(-\frac{\alpha\underline{\omega}_0^2}{4}\right)}\right]^
{ |{\cal N}_j \setminus {\cal N}(\ell_e)|} $.
Therefore,
\begin{align*}
{\sf PFI}(\tau^{\mathsf{MAX}})
&\leq \ \max_{1 \leq i \leq N} \
\sup_{\ell_e \in {\cal A}_i} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq{\cal N}(\ell_e)} \
\pmeasure{
\tau^{\sf MAX,({\cal N}_j)} \leq \tau^{\mathsf{MAX}, ({\cal N}_i)}
} \nonumber\\
&\leq \frac{K}{\alpha^*} \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}c-(1+\bar{m})\ln(c)\right)
\right)(1+o(1)),
\end{align*}
where $\alpha^* = \underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\min} \alpha^{1+
|{\cal N}_j \setminus {\cal N}(\ell_e)|}
$.
For any $n$, there exists $c_0(n)$ such that $c < e^{c/n}$ for all
$c > c_0(n)$. Hence, for sufficiently large $c$,
\begin{align*}
{\sf PFI}(\tau^{\mathsf{MAX}})
&\leq \frac{K}{\alpha^*} \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}-\frac{1+\bar{m}}{n}\right)c
\right)(1+o(1)) \ = \
\frac{\exp(-b_{\mathsf{MAX},d} \cdot c)}{
B_{\sf \mathsf{MAX},d}
} (1+o(1)) ,
\end{align*}
where
$b_{\sf \mathsf{MAX},d}=
(\frac{m\underline{\omega}_0}{2})-(\frac{1+\bar{m}}{n})$ and
$B_{\mathsf{MAX},d} = \frac{\alpha^*}{K}$.
\subsection{${\sf PFI}(\tau^{\sf HALL})$ -- Path Loss Sensing Model}
\vspace{-9mm}
\begin{align*}
\pmeasure{\tau^{\sf HALL,({\cal N}_j)} = t
\mid \tau^{\mathsf{HALL}, ({\cal N}_i)}=k
}
&\leq
\pmeasure{\tau^{(s)} \leq t, \forall \ s \in {\cal N}_j \setminus {\cal
N}(\ell_e)
\mid \tau^{\mathsf{HALL}, ({\cal N}_i)}=k
}
\end{align*}
which has the same form as that of ${\sf MAX}$. Hence, from the analysis
of ${\sf MAX}$, it follows that
\begin{align*}
\pmeasure{
\tau^{\sf HALL,({\cal N}_j)} \leq \tau^{\mathsf{HALL}, ({\cal N}_i)}
}
&\leq
\beta^{
|{\cal N}_j\setminus{\cal N}(\ell_e)|}
{\mathsf E}_1^{({\bf d}(\ell_e))}\left[(\tau^{\mathsf{HALL},({\cal N}_i)})^{
1+ |{\cal N}_j\setminus{\cal N}(\ell_e)|
}
\right] \nonumber \\
&\leq
\beta^{
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}
\frac{c^{1+
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}}{
(\alpha|{\cal N}_i|)^{1+
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}}(1+o(1)) \nonumber \\
\text{Therefore,} \
{\sf PFI}(\tau^{\mathsf{HALL}})
&\leq \ \max_{1 \leq i \leq N} \
\sup_{\ell_e \in {\cal A}_i} \
\max_{1 \leq j \leq N, {\cal N}_j \not\subseteq{\cal N}(\ell_e)} \
\pmeasure{
\tau^{\sf HALL,({\cal N}_j)} \leq \tau^{\mathsf{HALL}, ({\cal N}_i)}
} \nonumber\\
&\leq \frac{K}{\alpha^*} \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}c-(1+\bar{m})\ln(c)\right)
\right)(1+o(1)).
\end{align*}
\begin{align*}
\text{Therefore, for large $c$,} \ \mathsf{PFI}(\tau^{\mathsf{HALL}})
&\leq \frac{K}{\alpha^*} \exp\left(-
\left(\frac{m\underline{\omega}_0}{2}-\frac{1+\bar{m}}{n}\right)c
\right)(1+o(1)) \ = \
\frac{ \exp(-b_{\mathsf{HALL},d} \cdot c)}
{B_{\sf \mathsf{HALL},d}}
(1+o(1)),
\end{align*}
where $\alpha^* = \underset{1 \leq i \leq N, \ell_e \in {\cal A}_i, 1 \leq j \leq N, {\cal N}_j
\not\subseteq {\cal N}(\ell_e)}{\min} \left(\alpha \cdot|{\cal
N}_i|\right)^{1+
|{\cal N}_j\setminus{\cal N}(\ell_e)|
}$,
$b_{\sf \mathsf{HALL},d} =
(m\underline{\omega}_0/2)-(1+\bar{m})/n$, and
$B_{\mathsf{HALL},d} = \alpha^*/K$.
\vspace{-5mm}
\section{$\mathsf{SADD}$ for the Boolean and the Path loss Models}
\label{app:sadd}
Fix $i, 1 \leq i \leq N$. For each change time $T \geq 1$, define
$\mathcal{F}_T = \sigma(X^{(s)}_k, s \in \mathcal{N}, 1 \leq k \leq
T),$ and for $\ell_e \in {\cal A}_i$, $\mathcal{F}^{(i)}_T = \sigma(X^{(s)}_k, s \in
\mathcal{N}_i, 1 \leq k \leq T)$. From
\cite{stat-sig-proc.mei05information-bounds} (Theorem 3, Eqn.\ (24)),
\begin{eqnarray} \label{eqn:mei05-thm3-24}
\mathrm{ess} \sup \mathsf{E}^{({\bf d}(\ell_e))}_T
\left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \mathcal{F}^{(i)}_{(T-1)} \right) \leq
\frac{c}{I} (1 + o(1)), \
\text{as $c \to \infty$}.
\end{eqnarray}
Define
$\mathcal{F}_{\{\tau^{{\sf rule},({\cal N}_i)} \geq T\}}$ as the $\sigma$-field generated
by the event $\{\tau^{{\sf rule},({\cal N}_i)} \geq T\}$, and similarly define the
$\sigma$-field $\mathcal{F}_{\{\tau^{\sf rule} \geq T\}}.$ Evidently
$ \mathcal{F}_{\{\tau^{{\sf rule},({\cal N}_i)} \geq T\}} \subset \mathcal{F}^{(i)}_{(T-1)}
\ \ \mathrm{and} \ \
\mathcal{F}_{\{\tau^{\sf rule} \geq T\}} \subset \mathcal{F}_{(T-1)}$.
By iterated conditional expectation,
\begin{eqnarray}
\label{eqn:bound_by_esssup}
\mathsf{E}^{({\bf d}(\ell_e))}_T \left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ |
\mathcal{F}_{\{\tau^{\sf rule} \geq T\}} \right)
&\leq&
\mathrm{ess} \sup \mathsf{E}^{({\bf d}(\ell_e))}_T
\left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \mathcal{F}_{(T-1)} \right)
\end{eqnarray}
We can further assert that
\begin{eqnarray*}
\mathsf{E}^{({\bf d}(\ell_e))}_T
\left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \mathcal{F}_{(T-1)} \right)
\stackrel{\mathrm{a.s.}}{=}
\mathsf{E}^{({\bf d}(\ell_e))}_T
\left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \mathcal{F}^{(i)}_{(T-1)} \right)
\end{eqnarray*}
Using this observation with Eqn.~\ref{eqn:bound_by_esssup} and
Eqn.~\ref{eqn:mei05-thm3-24}, we can write, as $c \to \infty$,
\begin{eqnarray} \label{eqn:bound_derived_from_mei05}
\mathsf{E}^{({\bf d}(\ell_e))}_T \left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ |
\mathcal{F}_{\{\tau^{\sf rule} \geq T\}} \right)
\leq \frac{c}{I} (1 + o(1))
\end{eqnarray}
Finally,
$ \mathsf{E}^{({\bf d}(\ell_e))}_T \left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \tau^{\sf
rule} \geq T \right) I_{\{ \tau^{\sf rule} \geq T \}}
\stackrel{\mathrm{a.s.}}{=}
\mathsf{E}^{({\bf d}(\ell_e))}_T \left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ |
\mathcal{F}_{\{\tau^{\sf rule} \geq T\}} \right)
I_{\{
\tau^{\sf
rule}
\geq
T
\}}$.
We conclude, from Eqn.~\ref{eqn:bound_derived_from_mei05}, that, as $c \to
\infty$,
$ \mathsf{E}^{({\bf d}(\ell_e))}_T \left( (\tau^{{\sf rule},({\cal N}_i)} - T)^+ | \tau^{\sf
rule} \geq T \right)
\leq \frac{c}{I} (1 + o(1))$.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
The Message-Passing Interface (MPI) is the most commonly used model
for programming large-scale parallel systems today. The traditional
model for using MPI has hitherto been the ``MPI everywhere''
model, in which the application launches an MPI process on each core
of the supercomputer and executes without distinguishing between MPI
processes that reside on different cores of the same node and those
that execute on different nodes. The MPI implementation then
internally optimizes communication within the node by using shared
memory or other techniques.
\begin{figure}[htbp]
\vspace{-1em}
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/sota_endpoints.pdf}
\end{center}
\vspace{-1em}
\caption{Endpoint configuration of MPI-everywhere and MPI+threads
models.}
\label{fig:sotaeps}
\vspace{-1em}
\end{figure}
While the MPI-everywhere model of parallelism has served applications
well for several decades,
scaling applications in this model is becoming increasingly
difficult. The biggest reason is that not all on-node resources scale
at the same rate. Specifically, the number of cores available on a
node is increasing rapidly. Other on-node resources such as memory,
cache, TLB space, and network resources, however, scale much more
slowly. Since the MPI-everywhere model uses a separate MPI process
for each core, it inadvertently leads to a static split of all on-node
resources, resulting in underutilization and wastage of resources. While
optimizations such as \emph{MPI shared memory}~\cite{hoefler2013mpi+}
address sharing a subset of resources (in particular, memory), these
optimizations are not a generic solution for all on-node resources.
Consequently, researchers have been increasingly looking at hybrid
MPI+threads programming (e.g., MPI+OpenMP) as an alternative to the
traditional MPI-everywhere model~\cite{thakur2010mpi}.
Current implementations of these two models---MPI everywhere and
MPI+threads---represent the two extreme cases of communication
resource sharing in modern MPI implementations. \figref{fig:sotaeps}
contrasts these two models in state-of-the-art MPI implementations,
such as MPICH~\cite{mpich}, that use one communication endpoint per
MPI process~\cite{thakur2010mpi}. A communication endpoint is a set
of communication resources that allows the software to interface with
the network hardware to send messages over the network.
In the MPI-everywhere model, multiple communication endpoints exist
per node where each MPI process communicates using its own endpoint.
This allows each MPI process to communicate completely independently
of other processes, thus providing a direct and contention-free path
to the network hardware and leading to the best-achievable
communication performance (assuming that the MPI implementation is
sufficiently optimized). In the MPI+threads model, on the other hand,
all threads within an MPI process communicate using a single endpoint, which
causes the MPI implementation to use locks on the endpoint for serialization.
This model hurts communication throughput; more importantly, the available
network-level parallelism remains underutilized. This model, however,
uses the least possible amount of communication resources.
A straightforward way to achieve maximum communication-path
independence between threads in the MPI+threads model is to dedicate
to each thread a separate context, each containing an endpoint with
its own set of resources. This emulates the endpoint configuration
in the MPI-everywhere model, where each MPI process has its own
context. Although such a na\"\i ve\ approach can achieve the maximum
throughput for a given number of threads, it wastes the hardware's
limited resources. \figref{fig:tputvswastage}(a) shows how this
na\"\i ve\ approach translates to 93.75\% hardware resource wastage on a
modern Mellanox mlx5 InfiniBand device. In order to achieve maximum
resource efficiency, multiple threads can share just one endpoint,
which is the case for the MPI+threads model in state-of-the-art MPI
implementations. Doing so, however, drastically impacts communication
throughput. \figref{fig:tputvswastage}(b) shows the tradeoff between
throughput and hardware resource wastage in a multithreaded
environment that emulates state-of-the-art endpoints in the
MPI-everywhere and MPI+threads models.
\begin{figure*}[htbp]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ctx_wastage.pdf}
\vspace{-1.5em}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/tput_vs_wastage.pdf}
\vspace{-1.5em}
\end{minipage}
\caption{(a) Demonstration of 93.75\% hardware resource wastage
per context in the na\"\i ve\ approach. (b) (i) Throughput (higher
is better) and (ii) number of wasted hardware resources (lower
is better) with state-of-the-art endpoints on Mellanox's
ConnectX-4 adapter.}
\label{fig:tputvswastage}
\vspace{-1em}
\end{figure*}
Note that the MPI+threads model itself does not force the extreme of
using a single endpoint for communication by all threads. That is simply
how the state-of-the-art MPI libraries implement it. Unlike
MPI everywhere, the MPI+threads environment allows for any arbitrary level
of sharing of communication resources between the different threads.
The question that we really need to answer is, \textit{what level of
resource sharing is ideal?} As is the case with any computer
science question that is worth its salt, the answer is, \textit{it depends.}
If one is looking for the least amount of resources to
use without losing any performance compared with the MPI-everywhere
model, a certain set of resources can be shared while others cannot.
If a small percentage of performance loss is acceptable, a
different division of shared vs. dedicated resources would be ideal.
If resource efficiency is the most important criterion and additional
performance loss is acceptable, yet another division of shared
vs. dedicated resources would be ideal. Understanding this tradeoff
space between performance and resource usage is the primary goal of
this paper.
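The spectrum between these extremes can be pictured with a toy accounting model (purely illustrative, not part of the paper's design): threads are mapped round-robin onto a configurable number of endpoints, and since an endpoint serializes the threads that share it, the number of independent communication paths is bounded by the number of endpoints while resource usage grows with it.

```python
# Toy model of the endpoint-sharing spectrum: with T threads mapped
# round-robin onto E endpoints, an endpoint serializes its threads, so
# the number of independent communication paths is min(T, E), while
# resource usage grows linearly in E.
def endpoint_model(threads, endpoints):
    mapping = {t: t % endpoints for t in range(threads)}
    independent_paths = len(set(mapping.values()))
    return independent_paths, endpoints  # (parallelism, resource usage)

T = 16
print(endpoint_model(T, 1))   # fully shared (today's MPI+threads): (1, 1)
print(endpoint_model(T, T))   # fully independent (MPI-everywhere-like): (16, 16)
print(endpoint_model(T, 4))   # an intermediate sharing level: (4, 4)
```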
To that end, this paper makes the following contributions.
\begin{enumerate}
\item We demonstrate the two extreme cases---one where all
threads share a single communication endpoint and another
where each thread gets its own dedicated endpoint. We showcase
the inefficiencies in both these cases.
\item We explore the tradeoff space between performance
(communication throughput) and communication resource
usage in a multithreaded environment.
In \secref{sec:resources},
we first discuss
the communication resources of an endpoint.
In \secref{sec:analysis}, we
thoroughly analyze the different levels of resource sharing
in MPI+threads environments in the context of Mellanox
InfiniBand, the most popular high-speed interconnect on
the TOP500 and also the preferred interconnect for both artificial intelligence and high-performance computing (HPC)~\cite{ibtop500}.
\item Using the lessons learned from our analysis, we design
efficient resource-sharing models in \secref{sec:scalableepdesign}
to provide \emph{scalable communication endpoints}. Scalable
endpoints provide a wide range of resource-sharing models,
ranging from fully independent to fully shared communication paths.
Our evaluation on scalable
endpoints in \secref{sec:scalableepeval} shows that
fully independent communication paths can achieve performance
as high as that of MPI-everywhere endpoints while using 3.2x fewer resources.
\end{enumerate}
\section{Communication Resources}
\label{sec:resources}
To send messages across the network, the software (CPU)
coordinates with the hardware (NIC) to \emph{initiate} a transfer
and confirm its \emph{completion}. This coordination occurs
through three communication resources: a software transmit
queue, a software completion structure, and a NIC's hardware
resource. The three interact using the mechnasims described in
~\cite{scalable} and features described in \secref{sec:ibfeatures}.
In IB, the transmit queue is the QP,
the completion structure is the CQ, and the hardware resource
is the uUAR contained within a UAR page. The QP, UAR, and
uUAR make up the \emph{initiation} interface; the CQ is
the \emph{completion} interface.
The threads of an MPI+threads application eventually map to QPs,
and the QPs eventually map to a uUAR on a UAR of the NIC.
As seen in \secref{sec:ibresources}, the interconnect's
driver dictates the mapping between the transmit queues and the
hardware resources while the user decides the
mapping between the transmit queues and completion structures.
Multiple QPs could share the same CQ, or each could have its own.
The QP and CQ are associated with circular buffers that contain
their WQEs and CQEs, respectively. The CPU writes to the QP's buffer,
and the NIC DMA-reads it when \emph{Inlining}\ is not used. The NIC
DMA-writes the CQ's buffer and the CPU reads it when polling for progress.
Both buffers are pinned by the operating system during resource creation.
\begin{table}
\centering
\caption{Bytes used by mlx5 Verbs resources}
\label{tab:resmem}
\begin{tabular}{ c | c | c | c | c | c }
\hline
\textbf{CTXs} & \textbf{PDs} & \textbf{MRs} & \textbf{QPs} & \textbf{CQs} & \textbf{Total} \\ \hline
256K & 144 & 144 & 80K & 9K & 345K \\
\hline
\end{tabular}
\vspace{-1.25em}
\end{table}
The QP and CQ occupy memory with their circular buffers. So,
every time we create a QP or a CQ, we impact memory
consumption. \tabref{tab:resmem} shows the memory used by
each type of a Verbs resource (for mlx5) that is required to
open a QP. Creating one endpoint requires at least 354 KB
of memory, with the CTX occupying 74.2\% of it.
However, the memory usage of the QP and the CQ is on the
order of kilobytes, whereas the memory on the nodes of
clusters and supercomputers is typically on the order of hundreds of
gigabytes. Hence, memory consumption becomes a concern only
when the number of Verbs resources is on the order of thousands.
The impact of creating a QP or a CQ on memory is thus not of
immediate concern.
On the other hand, the limit on the hardware resource is much smaller:
8K UAR pages on the ConnectX-4 NIC with only two uUARs
per UAR. The situation is similar for other interconnects such as
Intel Omni-Path, where the maximum number of hardware contexts
on its NIC is 160~\cite{hfi_guide}. The 8K UARs on
ConnectX-4 translate to a maximum of 907 CTXs, considering that the
user creates a TD-assigned QP contained within its own CTX for each
thread. Each CTX contains a total of 18 uUARs---the 16 static ones plus
the two from the TD's dynamically allocated UAR (see \secref{sec:ibresources}).
The resource wastage of this approach is a
staggering 94\% since it uses only one uUAR out of 18. Arguably, we will not run out of
hardware resources even if we create one endpoint per core on
existing processors with this
approach, but eliminating this huge
wastage would enable vendors to significantly reduce the power and cost
of their NICs. Such high wastage translates to requiring a
second NIC on the node after only marginally utilizing the resources on the
first.
\section*{Acknowledgment}
We thank Pavel Shamis from Arm Research, the members of
the [email protected] mailing list for their prompt responses,
Benjamin Allen and Kumar Kalyan from JLSE for their support, and the
members of the PMRS group
at ANL for their continuous feedback. This material is based upon work
supported by the U.S. Department of Energy, Office of Science, under
contract DE-AC02-06CH11357.
\section{Background}
\label{sec:background}
InfiniBand (IB) is the popular choice among high-speed interconnects.
Mellanox
Technologies is the most renowned IB vendor, powering 216 systems
(both IB and Ethernet) on the TOP500~\cite{ibtop500}. Hence, we study
the mlx5 provider of Verbs, the IB software stack. Mellanox's
Connect-IB adapter and its ConnectX series,
starting from ConnectX-4, are mlx5 devices.
\subsection{InfiniBand Resources}
\label{sec:ibresources}
The software bidirectional communication portal in IB is the queue pair (QP):
a pair of send and receive FIFO queues, to which work queue entries (WQEs), IB's
message descriptors, are posted. Each QP is associated with a completion queue
(CQ) that contains completion queue entries (CQEs) corresponding to the completion of signaled WQEs.
To create a QP, we need at least one memory buffer (BUF),
device context (CTX), protection domain (PD), and CQ. A memory region (MR)
is required if the NIC needs direct access to memory. Chapter 10 of
the IB specification details the IB resources~\cite{ibspec}. Additionally,
we can assign QPs to thread domains (TDs) to provide single-threaded access hints
to the QPs in a TD.
The CTX is the container of all IB resources and is also a slice of the network hardware,
containing a subset of the NIC's hardware resources. In mlx5 devices, the hardware
resources are part of the user access region (UAR) of the NIC's address space. Each
UAR page consists of two micro UARs (uUARs). By default, a CTX contains eight UARs (UAR pages) and,
hence, 16 uUARs. The user's QPs are mapped to one of the \emph{statically allocated}
uUARs unless a QP is part of a TD in which case the QP is mapped to a uUAR in a UAR
that was \emph{dynamically allocated} during TD creation. \cite{scalable}
details these resources and describes mlx5's uUAR-to-QP assignment policy.
\subsection{InfiniBand Operational Features}
\label{sec:ibfeatures}
To send a message on InfiniBand, the application calls \texttt{ibv\_post\_send}.
What follows is a series of coordinated operations between the CPU and the
NIC to fetch the WQE (DMA read), read its payload (DMA read), and signal its completion (DMA write).
\cite{scalable} portrays the operations involved.
The NIC is typically a PCIe device; hence, the overhead of these
operations is multiple PCIe round-trip latencies. Naturally, reducing
the number of round-trip latencies for small messages impacts
throughput significantly. \emph{Inlining}, \emph{Postlist}, \emph{Unsignaled Completions}, and \emph{BlueFlame}\
are IB's operational features that help reduce this overhead. We describe them
below considering the depth of the QP to be $n$.
\noindent \emph{\textbf{Postlist.}} Instead of posting only one WQE per
\texttt{ibv\_post\_send}, IB allows the application to post a
linked list of WQEs with just one call to \texttt{ibv\_post\_send}. It
can reduce the number of \emph{DoorBell}\ rings from $n$ to 1.
\noindent \emph{\textbf{Inlining.}} Here, the CPU copies the data
into the WQE. Hence, with its first DMA read for the WQE, the NIC gets
the payload as well, eliminating the second DMA read for the payload.
\noindent \emph{\textbf{Unsignaled Completions.}} Instead of signaling
a completion for each WQE, IB allows the application to turn off
completions for WQEs provided that at least one out of every \emph{n}
WQEs is signaled. Turning off completions reduces the DMA writes of CQEs
by the NIC. Additionally, the application polls fewer CQEs, reducing the
overhead of making progress.
\noindent \emph{\textbf{BlueFlame.}} \emph{BlueFlame}\
is Mellanox's terminology for programmed
I/O---it writes the WQE along with the \emph{DoorBell}, cutting off the first DMA read.
With \emph{BlueFlame}, the UAR pages are mapped as
write-combining (WC) memory. Hence, the WQEs sent using \emph{BlueFlame}\ are buffered
through the CPU's WC buffers. Note that \emph{BlueFlame}\ is not used with
\emph{Postlist}; the NIC will DMA-read the WQEs in
the linked list.
Using both \emph{Inlining}\ and \emph{BlueFlame}\ for small messages eliminates two
PCIe round-trip latencies. While the use of \emph{Inlining}\ and \emph{BlueFlame}\ is
dependent on message size, the use of \emph{Postlist}\ and \emph{Unsignaled Completions}\ is
reliant primarily on the user's design choices and application semantics.
\section{Experimental Evaluation and Analysis}
\label{sec:eval}
In this section, we experimentally evaluate the impact of IB resource
sharing on performance (communication throughput) and the usage
of communication resources (see \secref{sec:resources})
between independent threads across the different IB features using
the setup described in \secref{sec:setup}. For each IB resource, we
evaluate sharing across 16 threads. In the figures below, x-way
sharing means the resource of interest is being shared x ways. In other
words, two-way sharing means the resource is shared between two
threads (eight instances of the shared resource), 16-way sharing means
the resource is shared between 16 threads (one instance of the shared
resource), and so on. Similarly, we define the values of
\emph{Postlist}\ and \emph{Unsignaled Completions}\ with respect to the threads. In the following graphs,
we are interested in the change in throughput with
increasing sharing rather than the absolute throughput obtained by
using certain features. We start with the na\"\i ve\ endpoint
configuration (MPI everywhere) in a multithreading environment and
move down each level of resource sharing.
\subsection{Memory Buffer sharing}
\label{sec:bufsharingeval}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
BUF sharing across 16 threads.}
\label{fig:bufsharing}
\end{figure}
\noindent \emph{\textbf{Performance.}} \figref{fig:bufsharing}(a) shows
the performance-impact of sharing the memory buffer across the
independent CTXs for each thread. The throughput decreases with
increasing BUF sharing. In other words, increasing the number of concurrent
DMA reads to the same memory address demonstrably hurts throughput.
To pinpoint the cause of this problem, we experiment with using
independent buffers that are on the same page with and without 64-byte
cache alignment. Without cache alignment, the 16 independent buffers
would be on the same cache line since each buffer is only 2
bytes. \figref{fig:bufsharingexp}(a) shows that concurrent DMA reads
to different addresses on the same cache line are detrimental. Also, the
total number of PCIe reads (measured using PMU tools~\cite{pmu_tools})
with and without cache alignment is the same. However,
\figref{fig:bufsharingexp}(b) shows that the rate of these PCIe reads
is much slower when the buffers are on the same cache line. This points us
to how the NIC translates virtual addresses (contained in the WQE)
to physical addresses needed for a DMA read of the payload, bolstering the analysis in
\secref{sec:bufsharing}. From our evaluation we can infer that the
hash function of the NIC's parallel TLB design is based on the cache
line, causing all translations to hit the same TLB with 16-way BUF
sharing.
\noindent \emph{\textbf{Resource usage.}} \figref{fig:bufsharing}(b) shows
that sharing the buffer has no impact on the usage of communication
resources.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing_exp.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pcie_rds.pdf}
\par(b)
\end{minipage}
\caption{Effects on (a) message rate and (b) PCIe reads with and
without cache-aligned buffers.}
\label{fig:bufsharingexp}
\end{figure}
\subsection{Device Context Sharing}
\label{sec:ctxsharingeval}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ctx_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ctx_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
CTX sharing across 16 threads.}
\label{fig:ctxsharing}
\end{figure}
With CTX sharing, the QPs of multiple threads are contained within
the same CTX.
\noindent \emph{\textbf{Performance.}} \figref{fig:ctxsharing}(a)
shows that sharing the CTX does not hurt performance for any
configuration except when we do not use \emph{Postlist}, {\it i.e.}, when
we use \emph{BlueFlame}\ writes. For example, we notice a 1.15x drop
in performance going from 8-way to 16-way CTX with maximally
independent TDs. While the engineers at Mellanox
Technologies are able to reproduce this drop even on the newer
ConnectX-5 card, the cause for the drop is unknown. We discovered
that creating twice the number of maximally independent TDs but
using only half of them (even or odd ones) can eliminate this drop, as
seen in the ``All w/o Postlist 2xQPs" line. Additionally, without our
patch, the Verbs user cannot request maximally independent paths;
from the ``All w/o Postlist Sharing 2" line, we can see the harmful effects of
sharing a CTX when the mlx5 provider uses the second sharing level
(its default) for assigning TDs to uUARs. Again, while this evaluation
validates the need for maximally independent paths, it does not explain
the decline in throughput when there are concurrent \emph{BlueFlame}\ writes
to distinct uUARs sharing the same UAR page. Finding the precise reason
for this behavior is hard since the hardware-software interaction is
dependent on two proprietary technologies -- (1) the CPU architecture's
implementation of flushing write combining memory, which is used in
\emph{BlueFlame}\ writes, and (2) the NIC's ASIC logic that distinguishes between
a \emph{BlueFlame}\ write and a \emph{DoorBell}.
\noindent \emph{\textbf{Resource usage.}} \figref{fig:ctxsharing}(b)
shows that sharing the CTX does not impact QP and CQ usage but it
greatly impacts uUAR and UAR usage. This is because a
maximally independent TD within a shared CTX adds only one UAR
as opposed to nine UARs when it is created within its own independent
CTX. Creating twice as many TDs to address the 8-way to 16-way
CTX sharing drop increases the number of QPs and UARs by 16,
and uUARs by 32 (``2xQPs" in \figref{fig:ctxsharing}(b))
since each of the extra 16 maximally independent TDs allocates its
own QP and UAR. On the other hand, the second level of sharing
that mlx5 is hardcoded to use consumes half as many UARs as
the default maximally independent TDs within a shared
CTX, resulting in eight fewer UARs and 16 fewer uUARs, respectively.
\subsection{Protection Domain Sharing}
\label{sec:pdsharingeval}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pd_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pd_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
PD sharing across 16 threads.}
\label{fig:pdsharing}
\end{figure}
To study the impact of sharing protection domains, we fix the number
of CTXs to one because a PD cannot be shared across CTXs.
\noindent \emph{\textbf{Performance.}} From
\figref{fig:pdsharing}(a), we observe that sharing PDs across
the QPs of the threads has no effect on performance, supporting
our analysis in \secref{sec:pdsharing}.
\noindent \emph{\textbf{Resource usage.}} \figref{fig:pdsharing}(b)
shows that PD sharing has no effect on communication resource
usage either. The uUAR and UAR values reflect those of one CTX
containing 16 maximally independent TDs.
\subsection{Memory Region Sharing}
\label{sec:mrsharingeval}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/mr_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/mr_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
MR sharing across 16 threads.}
\label{fig:mrsharing}
\end{figure}
Since the memory region can be shared only under a PD, we set the
number of PDs and consequently the number of CTXs to one.
Keeping the BUFs independent for each thread, sharing
the MR implies that the MR spans multiple buffers, a situation that is
possible only if the BUFs of the threads are contiguous. Building on
our observations in \secref{sec:bufsharingeval}, we ensure that the
BUFs of each thread are cache aligned even though they lie in the same
contiguous memory area.
\noindent \emph{\textbf{Performance.}} \figref{fig:mrsharing}(a) shows
that sharing the MR between threads does not hurt throughput as long
as the memory buffers of each thread are cache-aligned, supporting our
analysis in \secref{sec:mrsharing}. If the buffers are not cache-aligned,
sharing the MR means sharing the buffer as well and we would observe
the same effect of BUF sharing as seen in \figref{fig:bufsharing}(a).
\noindent \emph{\textbf{Resource usage.}} \figref{fig:mrsharing}(b)
shows that sharing the MR does not impact communication resource
usage.
\subsection{Completion Queue Sharing}
\label{sec:cqSharing}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/cq_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/cq_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
CQ sharing across 16 threads.}
\label{fig:cqsharing}
\end{figure}
A completion queue is associated only with the CTX and can be shared
only within a CTX. Therefore, we set the number of CTXs to one and keep
all the other resources independent between the 16 threads. For this
evaluation, we use the standard CQ instead of the extended CQ since
the lock on the CQ is needed for multithreaded access. With CQ
sharing, each CQ will contain completions for multiple QPs. Hence, we
multiply the CQ depth by x-way CQ sharing since the depth of the
CQ must be able to accommodate the maximum number of outstanding
completions.
\noindent \emph{\textbf{Performance.}}
\figref{fig:cqsharing}(a) demonstrates the hurtful effects of CQ
sharing, corresponding to the analysis in \secref{sec:cqsharing}.
In the ``All" and ``All w/o \emph{BlueFlame}" lines, we see a drop only
after 8-way sharing because there exists a tradeoff space between
the benefits of \emph{Unsignaled Completions}\ and the overheads of CQ sharing.
\figref{fig:varyingunsig}(a) portrays this tradeoff space. Lower values
of \emph{Unsignaled Completions}\ correspond to higher overheads, which are most visible in
``All w/o \emph{Unsignaled Completions}" where every WQE generates a completion. For
different \emph{Unsignaled Completions}\ values, we see a drop in throughput
only after a certain level of CQ sharing because the benefits of
\emph{Postlist}\ outweigh the impact of contention. When we remove the
benefits from \emph{Postlist}, \figref{fig:varyingunsig}(b) shows a linear
decrease in throughput with increasing contention.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/varying_unsig_x_postlist_cqsharing.pdf}
\end{center}
\caption{Message rate with varying \emph{Unsignaled Completions}\ values and increasing CQ sharing: (a) \emph{Postlist}\ size of 32; (b) \emph{Postlist}\ size of 1.}
\label{fig:varyingunsig}
\end{figure}
\noindent \emph{\textbf{Resource usage.}} \figref{fig:cqsharing}(b)
shows that sharing the CQ does not impact hardware resource usage.
However, it improves the total memory consumption of the software
communication resources by 1.1x with 16-way sharing.
\subsection{Queue Pair Sharing}
\label{sec:qpSharing}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/qp_sharing.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/qp_sharing_resusage.pdf}
\par(b)
\end{minipage}
\caption{(a) Message rate and (b) communication resource usage with increasing
QP sharing across 16 threads.}
\label{fig:qpsharing}
\end{figure}
The queue pair can be shared only within a PD. Hence, we set the
number of PDs and consequently the number of CTXs to one. For
this evaluation, we do not use TDs since the QP has to support
multithreaded access. Additionally, we use the standard CQ since
sharing a QP with its own CQ means sharing the CQ as well,
hence requiring a lock on the CQ.
\noindent \emph{\textbf{Performance.}}
\figref{fig:qpsharing}(a) reports throughput with increasing QP
sharing between 16 threads and supports the expected decline analyzed
in \secref{sec:qpsharing}. Similar to CQ sharing, the threads can
read each other's completions and hence require atomic updates
for their completion counters. Removing \emph{Postlist}\ is more detrimental
than removing \emph{Unsignaled Completion}\ because we naturally expect more contention on
the QP's lock without \emph{Postlist}.
\noindent \emph{\textbf{Resource usage.}}
\figref{fig:qpsharing}(b) shows that sharing the QP does not
impact hardware resource usage. The absolute hardware resource
usage without QP sharing is lower than that without CQ sharing,
even though both use one CTX, because the CQ sharing evaluation
uses dynamically allocated uUARs and UARs with maximally
independent TDs, whereas this QP sharing evaluation uses the
statically allocated uUARs and UARs to allow for multithreaded access.
the number of QPs and CQs, hence reducing the total memory
consumption of the software communication resources by 16x with
16 threads.
\section{Resource Space and Limits}
\label{sec:memandlimits}
In \secref{sec:memusage} we detail the memory usage of the various
Verbs resources and the impact of resource sharing on memory usage. We
also study the limits of Verbs resources---the maximum number of
entities we can create for each Verbs resource---in
\secref{sec:reslimits}.
\subsection{Memory Usage}
\label{sec:memusage}
\tabref{tab:resmem} shows the memory used by each type of a Verbs
resource. We measure this memory usage manually using \texttt{gdb} on
MOFED4.1 by hitting breakpoints on \texttt{malloc}, \texttt{calloc},
\texttt{posix\_memalign}, and \texttt{mmap} during the corresponding
resource creation functions. Creating one endpoint requires at least 354
KB of memory, with the CTX occupying 74.2\% of it. Since the CTX is
the major consumer of an endpoint's memory and can be shared across
QPs, we next measure the total memory used for multiple endpoints with
and without CTX sharing. \figref{fig:memsharing}(a) shows an increase
of over 14x in memory usage going from 1 to 16 endpoints with distinct
resources for each endpoint. If we share CTXs between the endpoints,
we can reduce the memory footprint by up to 9x in the case of 16
endpoints according to \figref{fig:memsharing}(b). Note, however, that
any amount of CTX sharing, according to \secref{sec:ctxsharingeval}, will
impact performance. More important, both \tabref{tab:resmem} and
\figref{fig:memsharing} show that the absolute memory consumption of
the Verbs resources is minimal---on the order of megabytes---and thus
has a barely noticeable impact on memory consumption. The memory on the
nodes of clusters and supercomputers is typically on the order of
hundreds of gigabytes. Hence, from a memory perspective, creating
distinct resources for each endpoint is viable.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}\centering\includegraphics[width=\textwidth]{pics/memusage_nosharing.png}\par(a) 1 QP-per-CTX \end{minipage}
\begin{minipage}[t]{0.24\textwidth}\centering\includegraphics[width=\textwidth]{pics/memusage_ctxsharing.png}\par(b) CTX-sharing \end{minipage}
\caption{Memory usage (in MB) of 16 endpoints.}
\label{fig:memsharing}
\end{figure}
\subsection{Resource Limits}
\label{sec:reslimits}
We primarily study the limits on the ConnectX-4 NIC, which is an mlx5
device. We programmatically attempt to hit the maximum number of
entities we can create for each resource. Our experimentally found
limits match those reported by MOFED-4.3's\footnote{The calculation of
\texttt{max\_device\_ctx} is wrong in versions earlier than
MOFED-4.3} \texttt{ibv\_devinfo} but with certain caveats. The
maximum number of CTXs (\texttt{max\_device\_ctx} field in
\texttt{ibv\_devinfo}) we can create is 1,021. The calculation of
1,021 assumes, rightly so, that each CTX contains 8 UAR pages since
MOFED's libmlx5 driver maps 8 UAR pages during
\texttt{ibv\_open\_device}. In OFED, on the other hand, we can control
the number of UAR pages mapped in a CTX through the MLX5\_TOTAL\_UUARS
environment variable. Setting it to 512 (2 times MLX5\_MAX\_UARS), we
can create a maximum of only 31 CTXs. Hence, the limit on CTXs is
imposed by the number of hardware registers (uUARs in Mellanox
terminology) available on the NIC. From the varying limit on CTXs, we
derive that the ConnectX-4 adapter hosts 16,336 hardware registers
(two uUARs on each of 8,168 UARs).
The upper limit of the other resources is on the order of hundreds of
thousands and millions. For example, we can create 16 million CQs and
262,000 QPs across processes on a node with one ConnectX-4 HCA. Hence,
we are ultimately bottlenecked by the 16,000 hardware registers. This
take-away also holds for mlx4 devices such as the ConnectX-3 adapter
where we are limited by 8,088 Blue Flame registers and 1,011 regular
Doorbell registers; the maximum number of entities for the other
resources is at least an order of magnitude higher.
Thus, to create multiple endpoints on a node, we conclude that we will
run into hardware resource limits before running out of memory. These
limits hold beyond Mellanox adapters as well. Intel Omni-Path's HFI
adapter, for example, features 160 independent hardware
registers~\cite{opaUG}.
\section{Resource Sharing Analysis}
\label{sec:analysis}
\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.26\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ib_res_dep.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.70\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/mlx5_sharing.pdf}
\par(b)
\end{minipage}
\caption{(a) Hierarchical relation between the various Verbs
resources (the arrow points to the parent); each resource can have
multiple children but only one parent. (b) Four levels of sharing
in mlx5 between two independent threads.}
\label{fig:sharinganalysis}
\end{figure*}
In Mellanox InfiniBand, a thread can map to the hardware resource (uUAR) in four possible ways, each of which represents a level of
sharing with another thread. \figref{fig:sharinganalysis}(b)
demonstrates the four ways described below.
\begin{enumerate}
\item \emph{Maximum independence} -- No sharing of any hardware resource between the threads; each thread is assigned to its own UAR page (used in MPI everywhere).
\item \emph{Shared UAR} -- The threads are assigned to distinct uUARs sharing the same UAR page (mlx5 default for multiple TDs described in \secref{sec:ctxsharing}).
\item \emph{Shared uUAR} -- Although the threads have their own QPs, the distinct QPs share the same uUAR (regular uUARs in \secref{sec:ibresources}). A lock is needed on the shared uUAR for concurrent \emph{BlueFlame}\ writes.
\item \emph{Shared QP} -- The threads share the same QP (used in state-of-the-art MPI+threads), in which case a lock on the QP is needed for concurrent device WQE preparation. The lock on the QP also protects concurrent \emph{BlueFlame}\ writes on the uUAR since the lock is released only after a \emph{BlueFlame}\ write.
\end{enumerate}
While sharing software and hardware communication resources at
different levels can improve resource efficiency, it can also hurt
throughput. Below, we explore the tradeoff space between resource
efficiency and communication throughput from the perspective of
a Mellanox InfiniBand user while considering the various IB features
described in \secref{sec:ibfeatures}. The user allocates the
interconnect's resources and interacts with them through the IB
resources shown in \figref{fig:sharinganalysis}(a). Each of those
objects represents a level of sharing between independent threads. We
start with the na\"\i ve\ approach---each thread driving its own set of
resources---and then move down each level of IB resource sharing
according to the hierarchical relation in
\figref{fig:sharinganalysis}(a).
\subsection{Memory Buffer Sharing}
\label{sec:bufsharing}
The highest level of sharing is the non-IB resource: memory
buffer. The BUF refers to the memory location that contains the
payload of the message. We define the BUF to be the pointer to the
payload. If the payload size is small enough, it can be inlined within
the message descriptor. By default, the maximum message size that can
be inlined on ConnectX-4 exposed through Verbs is 60 bytes. Therefore,
for a message size larger than 60 bytes, the NIC must DMA-read the
payload through a memory region.
\noindent \emph{\textbf{Performance.}} When the payload cannot be
inlined, sharing this BUF between the threads' memory regions seems
harmless since the NIC will only ever read from it on the sender
node. However, the NIC's TLB design is important to consider since a
virtual-to-physical address translation is imperative for a DMA
read. The NIC typically has a multirail TLB design that handles
multiple transactions in parallel in order to sustain the high speed
of the NIC's ASIC. The load is distributed across the TLBs by using a
hash function. If this hash function is based on the cache line,
concurrent DMA reads to the same cache line will hit the same
translation engine, serializing the reads. With a shared BUF, the WQEs
of multiple threads would point to the same cache line and serialize
the DMA reads. When the payload can be inlined, however, sharing the
BUF is harmless since the CPU reads the payload instead of the
NIC. Concurrent reads to the same memory location in a CPU are
harmless.
\noindent \emph{\textbf{Resource usage.}} Sharing the BUF reduces
memory usage. However, its impact on memory consumption is
proportional to the size of the payload of the message. It does not
affect the usage of any of the communication resources.
\subsection{Device Context Sharing}
\label{sec:ctxsharing}
The highest level of sharing among the IB resources is the device
context. This is essentially a container of hardware and software
resources that the other IB resources within the CTX will use. These
hardware resources are either statically allocated during CTX creation
or dynamically allocated when the user uses hints such as thread
domains (see \secref{sec:ibresources}).
Sharing a CTX between threads means that the CTX contains multiple
QPs. These multiple QPs may or may not share the hardware resources
contained within the context. Even if multiple QPs are assigned to
distinct uUARs, the uUARs could share the same UAR page. Without CTX
sharing, this is never the case since the QPs naturally get assigned
to uUARs on different UAR pages.
More important, the Verbs user has no way to explicitly request
maximally independent paths. When the user creates multiple TDs, the
mlx5 provider can assign the threads to a uUAR using either the first
or the second level of sharing, as shown in
\figref{fig:sharinganalysis}(b); the third and fourth levels of
sharing do not apply to TDs since they imply multithreaded access to one
hardware resource. Currently, the mlx5 provider is hardcoded to use
the second level of sharing for multiple TDs, restricting the user
from creating maximally independent QPs within a CTX. More abstractly,
the Verbs users today have no way to request a sharing level for
the QPs/TDs they create. The number of levels of sharing is provider
specific, and one can imagine multiple levels of sharing to exist for
TDs.
To overcome this Verbs design limitation, we propose a variable,
\texttt{sharing}, in the TD initialization attributes (\texttt{struct
ibv\_td\_init\_attr}) that are passed during TD creation. The higher
the value of \texttt{sharing}, the higher is the amount of hardware
resource sharing between multiple TDs. A \texttt{sharing} value of $1$
refers to maximally independent paths. In mlx5, only two levels of
sharing exist for TDs, corresponding to (1) and (2) in
\figref{fig:sharinganalysis}(b).
Note that the second uUAR of the UAR dedicated to a maximally
independent TD is wasted. Since the number of hardware resources is
limited, the user can request only a certain maximum number of
independent hardware resources within a CTX. This would be half of the
maximum number of UARs the user can dynamically allocate using TDs. In
mlx5, the maximum number of maximally independent paths is 256.
\noindent \emph{\textbf{Performance.}} For maximally independent
threads, sharing the CTX should not affect performance since we
emulate the QP-to-uUAR mapping of endpoints in the MPI everywhere
model, which represents the maximum possible throughput for a given
number of threads. Sharing a CTX with the second level of sharing
between independent threads could hurt performance. The NIC could
implement its mechanism of reading concurrent \emph{DoorBells}\ and
\emph{BlueFlame}\ writes at the granularity of UAR pages, negatively
impacting throughput when multiple QPs are assigned to uUARs on the
same UAR page. The CPU architecture's implementation of flushing write
combining memory can also impact performance in the second level of
sharing since the memory attribute of the uUARs is set at the page
level granularity using the Page Attribute Table (PAT)~\cite{pat}.
\noindent \emph{\textbf{Resource usage.}} Sharing the CTX is critically
useful for both memory consumption and uUAR usage. For example, when
shared between 16 threads, it can reduce the memory consumption by 9x
(reducing from 5.15 MB to 0.35 MB). More important, sharing the CTX
means that the 16 static uUARs allocated by the mlx5 provider during
CTX creation (see \secref{sec:ibresources}) are wasted only
once. Nonetheless, maximally independent threads will waste one uUAR
per thread. On the other hand, sharing the CTX with the second level
of sharing will not waste any uUAR after the 16 static ones.
\subsection{Protection Domain Sharing}
\label{sec:pdsharing}
The next level of sharing between threads is the protection
domain. The PD is a means of isolating a collection of IB
resources. Resources such as QPs and MRs contained under different PDs
cannot interact with each other. For example, a QP on the sender node
cannot access a remote MR on the receiver node owned by a PD that does
not contain the receiver QP connected to the sender QP.
\noindent \emph{\textbf{Performance.}} The software PD object is not
accessed on the critical data-path; the protection checks occur in the
NIC. Hence, from a performance perspective, sharing a PD between
multiple threads would be harmless.
\noindent \emph{\textbf{Resource usage.}} Sharing the PD between
threads will reduce memory consumption but only marginally so since
each PD occupies less than 200 bytes. The PD doesn't impact the usage
of any of the communication resources.
\subsection{Memory Region Sharing}
\label{sec:mrsharing}
Within a protection domain, the memory region can be shared between
multiple threads. The MR is an object that registers a region in the
virtual address space of the user with the operating
system. Registering a memory region means pinning the memory and
preparing that region for DMA accesses from the NIC. The virtual
address region typically corresponds to that of the memory buffer. The
MR is a precondition for when the NIC needs to access a node's (local
or remote) memory.
\noindent \emph{\textbf{Performance.}} Sharing the MR itself between
threads will have no impact on performance since the MR is just an
object that points to a registered memory region. The MR may span
multiple BUFs given that they are contiguous. Sharing an MR that
contains only one BUF means the threads are sharing the BUF as well,
which would imply the same effects of BUF sharing when the
payload cannot be inlined for an operation that sends data over the
wire such as an RDMA write.
\noindent \emph{\textbf{Resource usage.}} Sharing the MR between
threads will reduce memory consumption but only marginally so since,
like the PD, each MR occupies less than 200 bytes. The MR doesn't
impact the usage of any of the communication resources.
\subsection{Completion Queue Sharing}
\label{sec:cqsharing}
The completion queue is a work queue that contains completion queue
entries corresponding to the successful or unsuccessful completion of
the WQEs posted in the QP. The user actively polls the CQ on the
critical data-path to confirm progress in communication. The IB
specification guarantees ordering between the completions; that is, a
successful poll of the $n$th WQE guarantees the completion of all the
WQEs prior to it. The Verbs user can map multiple QPs to the same CQ.
\noindent \emph{\textbf{Performance.}} The CQ has a lock that a thread
will acquire before polling it. Hence, the threads sharing a CQ will
contend on its lock. Additionally, if QP $i$ and QP $j$ share a CQ,
then thread $i$ driving QP $i$ could read QP $j$'s completions. Hence,
the completion counter for any thread $i$ requires atomic updates.
The atomics and locks are obvious sources of contention when sharing
CQs between threads. Additionally, lower values of \emph{Unsignaled Completions}\ imply that
the thread reads more completions from the CQ than for higher values.
This translates to the thread currently polling the shared CQ holding
the lock longer than it needs to, in order to read fewer
completions. Thus, the impact of lock contention on the shared CQ is
higher without \emph{Unsignaled Completions}\ than with. The user may use \emph{Unsignaled Completions}\ if the
semantics of the user's application allows that.
Even if the Verbs user can guarantee single-thread access to a CQ, the
standard CQ does not allow the user to disable the lock on the CQ. The
extended CQ, on the other hand, allows the user to do so during CQ
creation (\texttt{ibv\_create\_cq\_ex}) with the
\texttt{IBV\_CREATE\_CQ\_ATTR\_SINGLE\_THREADED} flag.
\noindent \emph{\textbf{Resource usage.}} Sharing the CQ translates to
lesser circular buffers and hence it reduces memory consumption of the
completion communication resource.
\subsection{Queue Pair Sharing}
\label{sec:qpsharing}
Ultimately, the user can choose to share the queue pair between
threads to achieve maximum resource efficiency. This is the case in
state-of-the-art MPI implementations.
\noindent \emph{\textbf{Performance.}} The QP has a lock that a thread
needs to acquire before posting on it. Hence, threads will contend on a
shared QP's lock. Additionally, the threads need to coordinate to post
on the finite QP-depth of the shared QP since the values of
\emph{Postlist}\ and \emph{Unsignaled Completion}\ are with respect to each thread. This
coordination requires atomic fetch and subtracts on the remaining
QP-depth value. The locks and atomics are obvious sources of
contention when sharing QPs between threads. Most important, the NIC's
parallel capabilities are not utilized with shared QPs since each QP
is assigned to only one hardware resource through which the messages
of multiple threads are serialized.
When the user assigns a QP to a thread domain, however, the lock on
the QP is still obtained. The mlx5 provider currently optimizes the
critical data-path by removing the lock only on the uUAR that the TD
is assigned to. Since the user guarantees no concurrent access from
multiple threads to a QP assigned to a TD, the lock on the QP itself
can be disabled. We optimize the mlx5 provider for this case by first
implementing the infrastructure to control locking on each individual
mlx5 Verbs object and then using it to disable the lock on a QP
assigned to a TD~\cite{mlx5lockcontrol}.
\noindent \emph{\textbf{Resource usage.}} Sharing the QP means lesser
circular buffers for the WQEs and hence, a lower memory
consumption. In the extreme case of one shared QP, we have only one of
each of the IB resources, resulting in minimum memory
usage. Furthermore, a shared QP between threads means only 15, as
opposed to 16 with a TD-assigned-QP per thread, static uUARs are
wasted. The 16th, low-latency uUAR is assigned to the shared QP.
\section{Conclusions}
\label{sec:conclusion}
For a given number of hardware threads, state-of-the-art MPI
implementations either achieve maximum communication
throughput and waste 93.75\% of hardware resources using multiple
processes or achieve maximum resource efficiency and perform up to 7x
worse with multiple threads. In this work, we study the tradeoff space
between performance and resource usage that lies in between the
two extremes. We do so by first analyzing and evaluating in depth the
consequences of sharing network resources between independent
threads. In the process, we extend the existing Verbs design to allow
for maximally independent paths, for which case we also optimize the
mlx5 stack. As a result of our analysis, we describe \emph{scalable
communication endpoints}, an efficient resource sharing model for
multithreading scenarios at the lowest software level of interconnects.
Each category of the model reflects a performance level and its
corresponding resource usage that users, such as MPICH, can use to
guide their creation of endpoints. The model's 2xDynamic endpoints, for
example, can achieve 108\% of the performance of the endpoints
in MPI-everywhere while using only 31.25\% as many resources.
\section{Resource-Sharing Analysis}
\label{sec:analysis}
\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.26\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ib_res_dep.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.70\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/mlx5_sharing.pdf}
\par(b)
\end{minipage}
\caption{(a) Hierarchical relation between the various Verbs
resources (the arrow points to the parent); each resource can have multiple children but only one parent. (b) Four levels of thread-to-uUAR
mapping in mlx5 between independent threads.}
\label{fig:sharinganalysis}
\vspace{-1.75em}
\end{figure*}
From an analytical perspective, a thread can map to
the hardware resources in four possible ways.
\figref{fig:sharinganalysis}(b)
demonstrates the four ways described below.
\begin{enumerate}
\item \emph{Maximum independence} -- There is no sharing of any
hardware resource between the threads; each is
assigned to its own UAR page (used in MPI everywhere).
\item \emph{Shared UAR} -- The threads are assigned to
distinct uUARs sharing the same UAR page (mlx5 default
for multiple TDs described in \secref{sec:ctxsharing}).
\item \emph{Shared uUAR} -- Although the threads have
their own QPs, the distinct QPs share the same uUAR
(medium-latency uUARs in ~\cite{scalable}). A lock is
needed on the shared uUAR for concurrent \emph{BlueFlame}\ writes.
\item \emph{Shared QP} -- The threads share the same QP
(used in state-of-the-art MPI+threads), in which case a
lock on the QP is needed for concurrent device WQE
preparation. The lock on the QP also protects concurrent
\emph{BlueFlame}\ writes on the uUAR since the lock is released
only after a \emph{BlueFlame}\ write.
\end{enumerate}
Sharing software and hardware communication resources at
different levels improves resource efficiency but can hurt
throughput. Below, we explore the tradeoff space between resource
efficiency and communication throughput from the perspective of
the Mellanox IB user while considering the various IB
features described in \secref{sec:ibfeatures}. The user allocates
and interacts with the communication resources
through the IB
resources shown in \figref{fig:sharinganalysis}(a). Each of those
objects represents a level of sharing between threads.
Hence, we analyze the impact of sharing each IB resource on
performance and resource usage. We verify our analyses for 16
threads using the setup described in \secref{sec:setup}.
In the figures below, x-way sharing means the resource of interest
is being shared x ways. For example, 8-way sharing means
the resource is shared between 8 threads (two instances of
the shared resource). Moreover, we are interested in the change in
throughput with increasing sharing rather than the absolute
throughput obtained by using certain features.
Starting with na\"\i ve\ endpoints---each thread driving its own
set of resources using a TD-assigned QP---we move down each level of IB resource
sharing according to the hierarchical relation shown in
\figref{fig:sharinganalysis}(a). \figref{fig:scalability} shows the
performance and resource usage of this approach for 16 threads.
\subsection{Memory Buffer Sharing}
\label{sec:bufsharing}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Message rate (left) and communication resource usage (right) with increasing
BUF sharing across 16 threads.}
\label{fig:bufsharing}
\vspace{-1.5em}
\end{figure}
The highest level of sharing is the non-IB resource: memory
buffer. We define the BUF to be the pointer to the
payload of the message. If the payload size is small enough, it can be inlined within
the WQE; that is, the CPU will read it. By default, the maximum message size that can
be inlined on ConnectX-4 exposed through Verbs is 60 bytes. Therefore,
for any larger message, the NIC must DMA-read the
payload.
\noindent \emph{\textbf{Performance.}} When the CPU reads
the payload, sharing this BUF between the threads is
safe since concurrent reads to the same memory
location in a CPU are harmless.
When the NIC reads the payload, however, its TLB design is important since a
virtual-to-physical address translation is imperative for the DMA
read. The NIC typically has a multirail TLB design that handles
multiple transactions in parallel in order to sustain the high speed
of the NIC's ASIC. The load is distributed across the TLBs by using a
hash function. If this hash function is based on the cache line,
concurrent DMA reads to the same cache line will hit the same
translation engine, serializing the reads. With a shared BUF, the WQEs
of multiple threads would point to the same cache line, serializing
the DMA reads.
\figref{fig:bufsharing} indeed shows that the throughput
decreases with increasing BUF sharing without \emph{Inlining}, that is, when the NIC reads the payload.
To further validate our analysis, \figref{fig:bufsharingexp}(a) shows that independent 2-byte buffers without
64-byte cache alignment also hurt performance since
all 16 buffers are on the same cache line.
While the total number of PCIe reads
(measured using PMU tools~\cite{pmu_tools}) with and without
cache alignment is equal,
\figref{fig:bufsharingexp}(b) shows that the rate of these PCIe
reads is much slower when the buffers are on the same cache
line.
\noindent \emph{\textbf{Resource usage.}} The BUF is a non-IB
resource. Hence, it does not affect the usage of any of the
communication resources, as we can see in
\figref{fig:bufsharing}.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/buf_sharing_exp.pdf}
\par(a)
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pcie_rds.pdf}
\par(b)
\end{minipage}
\caption{Effects on (a) message rate and (b) PCIe reads with and
without cache-aligned buffers.}
\label{fig:bufsharingexp}
\vspace{-1.5em}
\end{figure}
\subsection{Device Context Sharing}
\label{sec:ctxsharing}
We note that the Verbs user gets maximally independent (level 1 in \figref{fig:sharinganalysis}(b)) paths
without CTX sharing since the QPs naturally get assigned to uUARs
on different UARs. Within a shared CTX, however, the user has no way to explicitly request
maximally independent paths for multiple QPs. When the user creates multiple TDs, the
mlx5 provider can assign the threads to a uUAR using either the first
or the second level of sharing, as shown in
\figref{fig:sharinganalysis}(b). Currently, the mlx5 provider is hardcoded to use
the second level of sharing for multiple TDs, restricting the user
from creating maximally independent QPs within a CTX. More abstractly,
the Verbs users today have no way to request a sharing level for
the QPs/TDs they create. The number of levels of sharing is provider
specific.
To overcome this Verbs design limitation, we propose a variable,
\texttt{sharing}, in the TD initialization attributes (\texttt{struct
ibv\_td\_init\_attr}) that are passed during TD creation. The higher
the value of \texttt{sharing}, the higher is the amount of hardware
resource sharing between multiple TDs. A \texttt{sharing} value of $1$
refers to maximally independent paths. In mlx5, only two levels of
sharing exist for TDs, corresponding to (1) and (2) in
\figref{fig:sharinganalysis}(b).
Note that the second uUAR of the UAR dedicated to a maximally
independent TD is wasted. Since the number of hardware resources is
limited, the user can request only a certain maximum number of
independent hardware resources within a CTX. This would be half of the
maximum number of UARs the user can dynamically allocate using TDs. In
mlx5, the maximum number of maximally independent paths is 256.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ctx_sharing.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ctx_sharing_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Message rate (left) and communication resource usage (right) with increasing
CTX sharing across 16 threads.}
\label{fig:ctxsharing}
\vspace{-1.5em}
\end{figure}
Furthermore, we note that when the user assigns a QP to a TD, the
lock on the QP is still obtained. The mlx5 provider currently removes
only the lock on the uUAR that the TD
is assigned to. Since the user guarantees no concurrent access from
multiple threads to a QP assigned to a TD, the lock on the QP itself
can be disabled. We optimize the mlx5 provider for this case~\cite{mlx5lockcontrol}.
\noindent \emph{\textbf{Performance.}} For maximally independent
threads, sharing the CTX should not affect performance since we
emulate the thread-to-uUAR mapping in the MPI-everywhere
model. Sharing a CTX with the second level of sharing
between threads could hurt performance---the uUARs on the same
UAR could be sharing the same set of the NIC's registers, negatively
impacting throughput. Additionally, the CPU architecture's implementation of flushing write
combining memory can impact performance in the second level of
sharing since the memory attribute of the uUARs is set at the
page-level granularity by using the Page Attribute Table (PAT)~\cite{pat}.
\figref{fig:ctxsharing} shows that sharing the CTX
does not hurt performance except when we
do not use \emph{Postlist}, that is, when we use \emph{BlueFlame}\ writes.
For example, we notice a 1.15x drop in performance going
from 8-way to 16-way CTX sharing even with maximally independent TDs.
While the engineers at Mellanox are able to
reproduce this drop even on the newer ConnectX-5,
the cause for the drop is unknown. We discovered that creating
twice the number of maximally independent TDs but using only
half of them (even or odd ones) can eliminate this drop, as
seen in the ``All w/o Postlist 2xQPs'' line. Additionally,
from the ``All w/o Postlist Sharing 2'' line,
we can see the harmful effects of sharing a UAR when the
mlx5 provider is hardcoded to use the second sharing level for
assigning TDs within a shared CTX to uUARs.
While this evaluation validates the
need for maximally independent paths, it does not explain
the decline in throughput when there are concurrent \emph{BlueFlame}\
writes to distinct uUARs sharing the same UAR page. Finding
the precise reason for this behavior is hard since the
hardware-software interaction is dependent on the aforementioned proprietary
technologies.
\noindent \emph{\textbf{Resource usage.}} Sharing the CTX is critical
for hardware resource usage, as seen in \figref{fig:ctxsharing}. The reason is
that a maximally independent TD within a shared CTX adds only
1 UAR as opposed to 9 UARs when it is created within its own
independent CTX. Also, the 16 uUARs
and 8 UARs statically allocated by the mlx5 provider during CTX
creation (see \secref{sec:ibresources}) are wasted only once.
Nonetheless, maximally independent TDs will waste one uUAR
per thread. While sharing the CTX does not impact QP and CQ usage, it does reduce
the overall memory consumption. For example, when shared between 16 threads, it can reduce the
overall memory consumption by 9x (from 5.15 MB to 0.35 MB).
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pdnmr_sharing.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/pdnmr_sharing_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Message rate (left) and communication resource usage (right) with increasing
PD or MR sharing across 16 threads.}
\label{fig:pdnmrsharing}
\vspace{-1.5em}
\end{figure}
Creating twice as many TDs (``2xQPs'' in \figref{fig:ctxsharing})
increases resource usage since each of the extra 16 maximally
independent TDs allocates its own QP and UAR. The second level of
sharing that mlx5 is hardcoded to use consumes 2x fewer
UARs than do maximally independent TDs.
\subsection{Protection Domain Sharing}
\label{sec:pdsharing}
The protection domain is just a means of isolating a collection of IB
resources. Resources contained under different PDs
cannot interact with each other.
\noindent \emph{\textbf{Performance.}} The software PD object is not
accessed on the critical data-path; the protection checks occur in the
NIC. Hence, from a performance perspective, sharing a PD between
multiple threads would be harmless, as observed in
\figref{fig:pdnmrsharing}.
\noindent \emph{\textbf{Resource usage.}} The PD does not impact
the usage of any of the communication resources, as we can see
in \figref{fig:pdnmrsharing}. The uUAR and UAR values reflect
those of one CTX since
the PD can be shared only within a CTX.
\subsection{Memory Region Sharing}
\label{sec:mrsharing}
The MR is an object that pins memory in the
virtual address space of the user with the OS
and prepares it for DMA accesses from the NIC.
\noindent \emph{\textbf{Performance.}} Sharing the MR between
threads will have no impact on performance since the MR is just an
object that points to a registered memory region. The MR may span
multiple contiguous BUFs. Sharing an MR
containing only one BUF means that the threads are sharing the BUF as well,
which implies the same effects of BUF sharing. \figref{fig:pdnmrsharing} confirms
that sharing the MR does not affect performance as long as the
threads have independent cache-aligned buffers.
\noindent \emph{\textbf{Resource usage.}} The MR does not
control the allocation of any of the communication resources. Hence,
sharing it will have no impact, as we can see in \figref{fig:pdnmrsharing}.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/cq_sharing.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/cq_sharing_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Message rate (left) and communication resource usage (right) with increasing
CQ sharing across 16 threads.}
\label{fig:cqsharing}
\vspace{-1em}
\end{figure}
\subsection{Completion Queue Sharing}
\label{sec:cqsharing}
The Verbs user can map multiple QPs to the same CQ, allowing
for CQ-sharing between threads. In a latency-bound application, the user actively polls the CQ on the
critical data-path to confirm progress in communication.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/varying_unsig_x_postlist_cqsharing.pdf}
\end{center}
\vspace{-1.5em}
\caption{(a) Postlist size of 32, (b) Postlist size of 1.}
\label{fig:varyingunsig}
\vspace{-1.5em}
\end{figure}
\noindent \emph{\textbf{Performance.}} The CQ has a lock that a thread
will acquire before polling it. Hence, the threads sharing a CQ will
contend on its lock. Additionally, if QP $i$ and QP $j$ share a CQ,
then thread $i$ driving QP $i$ can read QP $j$'s completions. Hence,
the completion counter for any thread $i$ requires atomic updates.
Atomics and locks are obvious sources of contention when sharing
CQs between threads. \figref{fig:cqsharing} demonstrates
these harmful effects of CQ sharing. The effects are most noticeable in
16-way sharing because there exists a tradeoff space between the
benefits of \emph{Unsignaled Completions}\ and the overheads of CQ sharing.
\figref{fig:varyingunsig}(a) portrays this tradeoff space.
Lower values of \emph{Unsignaled Completions}\ imply that the thread reads more completions
from the CQ than for higher values, translating to longer hold-time
of the shared CQ's lock. Thus, the impact of lock contention is
most visible in ``All w/o Unsignaled.'' For higher \emph{Unsignaled Completion}-values, we
see a drop only after a certain level of CQ sharing because the benefits of
\emph{Postlist}\ outweigh the impact of contention. Removing
\emph{Postlist}\ shows a linear decrease in throughput with increasing
contention in \figref{fig:varyingunsig}(b).
We note that even if the Verbs user can guarantee single-thread
access to a CQ, the standard CQ does not allow the user to disable
the lock on the CQ. The extended CQ, on the other hand, allows
the user to do so during CQ creation (\texttt{ibv\_create\_cq\_ex})
with the \texttt{IBV\_CREATE\_CQ\_ATTR\_SINGLE\_THREADED} flag.
\noindent \emph{\textbf{Resource usage.}} Sharing the CQ translates to
fewer circular buffers, and hence it reduces the memory consumption of the
completion communication resource. But it does not affect hardware
resource usage, as we can see in \figref{fig:cqsharing}. The uUAR and
UAR usage shown corresponds to that of one CTX since a CQ can be shared
only within a CTX.
\subsection{Queue Pair Sharing}
\label{sec:qpsharing}
Ultimately, the user can choose to share the queue pair between
threads to achieve maximum resource efficiency. This is the case in
state-of-the-art MPI implementations.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/qp_sharing.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/qp_sharing_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Message rate (left) and communication resource usage (right) with increasing
QP sharing across 16 threads.}
\label{fig:qpsharing}
\vspace{-1.5em}
\end{figure}
\noindent \emph{\textbf{Performance.}} The QP has a lock that a thread
needs to acquire before posting on it. Hence, threads will contend on a
shared QP's lock. Additionally, the threads need to coordinate to post
on the finite QP-depth of the shared QP, using atomic fetch-and-subtract
operations on the remaining QP-depth value. These locks and atomics are
sources of contention when sharing QPs. Most important, the NIC's
parallel capabilities are not utilized with shared QPs since each QP
is assigned to only one hardware resource through which the messages
of multiple threads are serialized. \figref{fig:qpsharing}(a) shows the
expected decline in throughput with increasing QP-sharing. Removing
\emph{Postlist}\ is more detrimental than removing \emph{Unsignaled Completion}\ because the
contention on the QP's lock without \emph{Postlist}\ is higher.
\noindent \emph{\textbf{Resource usage.}} Sharing the QP means
fewer circular buffers for the WQEs and hence lower memory
consumption. It does not affect hardware resource usage, as we
can see in \figref{fig:qpsharing}(b). QP sharing reduces
the number of both QPs and CQs, reducing the total memory
consumption of the software communication resources by 16x with
16-way sharing.
\noindent\emph{\textbf{Summary.}} Below are the lessons learned
from our analysis.
\begin{compactitem}
\item Each thread must have its own cache-aligned buffer to prevent
a performance drop.
\item CTX-sharing is the most critical for the usage of
hardware resources. With 16-way sharing, ``2xQPs'' can achieve the same
performance as independent CTXs using 80 uUARs instead of 288.
If 20\% less performance is acceptable, we can use maximally
independent TDs that use 6x fewer resources. If 50\% less performance
is acceptable, we can use ``Sharing 2'' that uses 9x fewer resources.
\item Sharing the PD or the MR will not hurt performance, while keeping them
independent will not utilize any communication resource.
\item Only QP- and CQ-sharing affects the memory consumption of the software resources.
However, the reduction in memory usage by sharing them is not as critical as
the consequent drop in performance. For example, 16-way sharing of the CQ
improves memory usage by 1.1x but can result in an 18x drop in performance.
\end{compactitem}
\section{Evaluating Scalable Endpoints}
\label{sec:scalableepeval}
We evaluate the performance
and resource usage of scalable endpoints
described in \secref{sec:scalableepdesign} on two benchmarks, namely, global array and 5-point stencil
on our two-node evaluation setup\footnote{
Thread domains are supported only from kernel 4.16
onward; at the time of our evaluation, the latest stable kernel
was 4.17.2. We use only two nodes because machines combining an
mlx5 device with this kernel were rare.}.
We limit our evaluation
to conservative application semantics---those
that do not allow \emph{Postlist}\ and \emph{Unsignaled Completions}\ and
focus on \emph{BlueFlame}\ writes instead of \emph{DoorBells}\
since they are latency oriented.
\noindent\emph{\textbf{Global array benchmark.}}
The pattern of fetching and writing tiles from
and to a global array is at the core of many scientific
applications such as NWChem~\cite{valiev2010nwchem},
whose core computation is a multidimensional
double-precision matrix multiply (DGEMM).
We implement a DGEMM benchmark ($A \times B = C$),
where the global matrices $A$, $B$, and $C$ reside
on a server node and a client node performs
the DGEMM using Verbs for internode communication.
We design the benchmark such that all the QPs share
the same PD but each has three BUFs and three MRs---one
for each of the three tiles from A, B, and C.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ga_BlueFlame.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/ga_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Scalable endpoints for the global array kernel with 16 threads.
Left: Communication throughput. Right: Communication resource
usage.}
\label{fig:gaeval}
\vspace{-1.5em}
\end{figure}
\figref{fig:gaeval}
shows the performance and resource
usage of scalable communication endpoints
for 16 threads.
Performance
decreases with lower resource usage. For RDMA
writes, for example, we observe that using maximally
independent TDs with twice the number of QPs (2xDynamic)
gives us 108\% of the performance
of dedicated endpoints (MPI everywhere) while
using only 31.25\% as many hardware resources.
Maximally independent paths with as many QPs as threads (Dynamic) gives us
94\% of the performance of
MPI everywhere while using 18.75\% as many hardware
resources.
Sharing the UAR (Shared Dynamic)
gives us 65\% of the performance using 12.5\% of
the hardware resources.
Sharing the uUAR (Static) gives us
64\% of the performance using 6.25\% as many hardware
resources. We observe only a minimal drop in performance
in Static since only two threads share a
uUAR; the rest share the UAR (see
\secref{sec:scalableepdesign}), and hence
we observe performance similar to Shared Dynamic.
Finally, sharing the QP results in only 3\% of the
performance while still using 6.25\%
as many hardware resources.
The memory consumption of QPs and CQs is the
same for all categories except 2xDynamic and
MPI+threads. While the number of QPs and CQs
in 2xDynamic is twice that of MPI everywhere,
the overall memory usage in the former is 3.27x
lower (1.64 MB vs 5.39 MB; see
\secref{sec:resources}) since MPI everywhere
has 16 CTXs while 2xDynamic has only one.
The memory consumption is the lowest in
MPI+threads with only one QP and one CQ.
\noindent\emph{\textbf{Stencil benchmark.}}
Stencil codes are at the heart of various application domains
such as computational fluid dynamics, image processing, and partial
differential equation solvers. We evaluate scalable endpoints on a 5-pt stencil
benchmark with a 1D partitioning of the grid.
\figref{fig:stencildesign} shows the design of
our benchmark. We vary the number
of ranks per node and threads per rank such that
the total number of hardware threads engaged
is 16, the number of cores in a socket. Each
rank gets its tile from the grid, and each
thread gets a corresponding subtile. Each
thread requires two QPs, one for each of its
neighbors. We map the two QPs to
one CQ. Hence the number of QPs is twice the number
of CQs for all cases. \figref{fig:stencileval} shows
the performance,\footnote{The message rates exceed
150 million messages per second, the maximum reported for
ConnectX-4~\cite{x4record}, since a majority of
the halo exchanges are intranode. Intranode
communication in InfiniBand still involves the NIC.} and resource usage of scalable
endpoints for the different hybrid scenarios.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/stencil_design.pdf}
\end{center}
\vspace{-1em}
\caption{1-D partitioning of a grid for the 5-pt stencil between two nodes, four ranks
(P0..P3), and four threads per rank showing the QP-CQ connection
for each thread using one sample. The shaded regions are the halo regions.}
\label{fig:stencildesign}
\vspace{-1.5em}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.99\textwidth]{pics/stencil_eval.pdf}
\end{center}
\vspace{-2em}
\caption{(a) Message rate and (b) resource
usage of (i) QP, (ii) CQ, (iii) UAR, and (iv) uUAR
of 16 threads with scalable endpoints for a 5-pt stencil kernel.}
\label{fig:stencileval}
\end{figure*}
For each category, a higher number of processes
performs better than a lower one. For MPI-everywhere
endpoints, for example, the fully hybrid approach
(1.16) performs 1.4x worse than the
processes-only approach (16.1). The
reason for this behavior is that the number of messages
with processes only is 16x higher, while 16 threads
per rank can exchange the halo only 7.67x faster than
with one thread per rank.
In the processes-only case, there is no resource
sharing since each
process has only one thread. 2xDynamic, Dynamic,
and Shared Dynamic achieve 106\% of MPI-everywhere's
performance because of the absence of the lock
on the QP. Static achieves 100\% of the performance since the
lock on its QP is still taken. MPI+threads achieves
87\% of the performance even though there is no
contention between threads, because of the overhead
of atomics and additional branches associated with QP-sharing. In 16.1, the
number of QPs and CQs is the same for all cases
except for 2xDynamic, where they are 2x
higher. The hardware resource usage is higher in
2xDynamic, Dynamic, and Shared Dynamic since
they waste the statically allocated resources in each process,
unlike other categories.
For the hybrid cases, we observe a performance trend similar
to the global array kernel. 2xDynamic achieves 103\%
of the performance of MPI everywhere; and with increasing
resource-sharing, we improve resource usage but lose
performance. In the case of 4.4, of the eight QPs per CTX,
the fifth QP uses the first level of sharing in Static, resulting in eight
such QPs in total; hence, it performs better than Shared Dynamic
wherein all the QPs use only the second level of sharing. Similarly,
in 1.16, of the 32 QPs per CTX, 28 use the third level of sharing
in Static, hence performing worse than Shared Dynamic. For a given category, the
hardware resource usage is lower when the number of processes
is smaller since fewer processes mean fewer CTXs, and hence,
the total number of statically allocated resources is smaller.
Similarly, the number of QPs and CQs is the same for all hybrid
cases in all categories except in MPI+threads, where it is a function of the number of processes.
\section{Designing Scalable Endpoints}
\label{sec:scalableepdesign}
Building on our
analysis, we define
the \emph{scalable endpoints} resource
sharing model that concretely categorizes
the design space of multiple communication
endpoints into six categories. Below we describe
the design of the initiation interface
in each category, state how the user can create it, discuss what
occurs internally in the IB stack, and
discuss its implications on performance and
resource usage.
For simplicity, we maintain
a separate CQ for each QP.
\noindent \textbf{\emph{MPI everywhere.}} This
category emulates the endpoint configuration
when multiple ranks run on a node. It
represents level 1 in
\figref{fig:sharinganalysis}(b). The user creates
this by creating a separate CTX for each thread,
each containing its own QP and CQ.
Within each CTX, the mlx5 driver assigns the QP
to a low-latency uUAR. Since each CTX contains
8 UARs, consecutive QPs naturally get assigned
to distinct UAR pages. The
performance of this category is the closest
to the best possible since there is no
sharing of resources. It is not the best since
the lock on the QP is still taken even though no other thread
contends for it. The resource usage of
this category is high: every CTX allocates 8 UARs.
Additionally, it is wasteful since only 1
of the 16 allocated uUARs is used
per thread. The
memory consumption increases linearly
with the number of threads since the
number of QPs and CQs is an identity
function of the number of threads.
\noindent \textbf{\emph{2xDynamic.}} This category also
represents
a 1-to-1 mapping between a
uUAR and a thread. Unlike MPI everywhere,
however, the user creates only one CTX for all the
threads and creates twice as many TD-assigned-QPs
as threads. The threads use only the even or odd QPs.
The mlx5 provider dynamically
allocates a new UAR page for each TD and
assigns the first uUAR to
the TD, enabling a 1-to-1 mapping.
This category delivers the best
performance. Since the number of QPs is twice
the number of threads, however, each thread wastes 1
dynamically allocated UAR, 3 uUARs, and 1
QP. The memory consumption
of QPs and CQs is twice that of MPI everywhere. The
statically allocated hardware resources are
wasted regardless of the number of threads.
\noindent \textbf{\emph{Dynamic.}} This category also
represents a 1-to-1 mapping between a uUAR
and a thread, but the number of QPs equals the
number of threads. The user creates this
configuration similar to ``2xDynamic'' by creating only as many QPs
as threads.
According to
\secref{sec:ctxsharing}, this configuration
hurts communication throughput. In terms
of resource usage, however, only one uUAR is wasted
per thread. The 8 statically allocated UARs
are naturally wasted; none of the dynamically
allocated UARs are wasted. The memory consumption
of QPs and CQs is half of that in ``2xDynamic''
and the same as in MPI everywhere.
\noindent \textbf{\emph{Shared Dynamic.}} This category
represents level 2 in \figref{fig:sharinganalysis}(b). The user
creates this configuration using a shared CTX,
similar to the way in ``Dynamic,'' but assigns
each QP to a TD with the second level of sharing.
The mlx5 driver will dynamically allocate UARs
only for the even TDs and map the even TDs
to the first uUAR and the odd TDs to the second
uUAR of the allocated UAR.
According to
\secref{sec:ctxsharing}, sharing the UAR will hurt
performance. The hardware resource usage is less
than with ``Dynamic'' since only half as many UARs and
uUARs as threads are allocated. Apart from the 8
statically allocated UARs and uUARs, none of the
dynamically allocated resources are wasted. The
memory consumption of QPs and CQs is
equivalent to that of ``Dynamic."
\noindent\textbf{\emph{Static.}} The user uses the statically allocated resources
within a CTX, resulting in a many-to-one mapping
between the threads and uUARs (and UARs). To do so, the user
simply creates a QP for each thread within a
shared CTX without any TDs. The final
state of the mapping for a given number of QPs
is dependent on the driver's assignment policy. In mlx5, with
16 QPs, we end up with a
combination of the second and third level of sharing
in \figref{fig:sharinganalysis}(b)---the $5$th and
$16$th QP are mapped to the same uUAR (third
level), while the others are mapped to the rest of
the uUARs using the second level of sharing.
The hardware resource usage is the number of statically
allocated resources. Resources are
wasted only when the number of threads
is less than 16. The memory consumption
is equivalent to that of ``Dynamic.''
\noindent \textbf{\emph{MPI+threads.}} This
category represents level
4 in \figref{fig:sharinganalysis}(b). The
user creates this by creating only 1 CTX,
1 QP, and 1 CQ. The mlx5 driver assigns
the one QP to a low-latency uUAR. The
performance of this category is the worst
possible since the communication of all the
threads is bottlenecked through one QP.
The resource usage of this category is not
a function of the number of threads and
hence is the best possible. All threads
together allocate only 8 UARs, 16 uUARs, 1
QP, and 1 CQ.
Note that the CQ can be shared in any manner in the above
categories and its impact is orthogonal to
the effects of the initiation interface.
\section{Evaluation Setup}
\label{sec:setup}
To evaluate the impact of resource sharing
on performance, we write a
multithreaded ``sender-receiver,'' RDMA-write message rate
benchmark. We choose RDMA writes to
eliminate any receiver-side processing on the
critical path.
We conduct our study on the Joint Laboratory for
System Evaluation's Gomez cluster
(each node has quad-socket Intel Haswell processors with 16 cores/socket and one hardware thread/core) using a patched
rdma-core~\cite{rdmacore} library that adds the infrastructure
for maximally independent paths and disables the
mlx5 locks as described in
\secref{sec:ctxsharing}. The two nodes are
connected via a switch, and each
node hosts a single-port Mellanox ConnectX-4 NIC.
We ensure that each
thread is bound to its own core. For repeatable and
reliable measurements, we disable the processor's
turbo boost and set the CPU frequency to 2.5 GHz.
The design of our message-rate benchmark is
adopted from \texttt{perftest}~\cite{perftest}.
The loop of a thread iterates until all its messages are
completed. In each iteration, the thread posts WQEs on a QP of
depth $d$, in multiples of
\emph{Postlist}\ $p$, requesting one signaled completion
every $q$ WQEs, where $q$ is the value of \emph{Unsignaled Completions}. In
each poll on the CQ, the thread requests $c = d/q$
completions, namely, all possible completions in an
iteration. The depth of the
CQ is $c$.
\emph{Postlist}\ and \emph{Unsignaled Completions}\ control the rate and amount of
interaction between the CPU and NIC.
Empirically, we find that setting $p=32$ and
$q=64$ achieves the maximum throughput for 16 threads; hence, we
use them as our default values. Note that we define the
values of \emph{Postlist}\ and \emph{Unsignaled Completions}\ with respect to the
threads, not to their associated QPs.
To study the effect of an IB feature, we
remove that feature while using others,
referring to this case as ``All w/o $f$,'' where $f$ is the
feature of interest. To disable \emph{BlueFlame}, we set the MLX5\_SHUT\_UP\_BF
environment variable. To enable \emph{Inlining}, we set
the IBV\_SEND\_INLINE flag on the send-WQE. We use ``w/o Postlist'' to mean
$p=1$, and similarly ``w/o Unsignaled'' to mean $q=1$.
\figref{fig:scalability} shows the scalability\footnote{The
NIC is attached only to the first socket; cross-socket
NIC behavior is out of the scope of this work.} of communication
throughput across features and communication resource usage of endpoints created with
one TD-assigned QP per context per thread for 2-byte RDMA writes.
We observe that the number of QPs and CQs is an identity function of the
number of threads and increases their memory consumption
from 89 KB with one thread to 1.39 MB with 16 threads. The
usage of UARs and uUARs also grows linearly, by 9 UARs and 18 uUARs
per thread. The reason is that each CTX containing one
TD allocates 9 UARs and each
UAR consists of two data-path uUARs.
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/scalability_x_features.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{pics/naive_eps_resusage.pdf}
\end{minipage}
\vspace{-2em}
\caption{Scalability using a TD-assigned-QP per CTX per thread. Left: Throughput
across features. Right: Resource usage.}
\label{fig:scalability}
\vspace{-1.5em}
\end{figure}
\section{Related Work}
\label{sec:related}
To the best of our knowledge, the resource-sharing analysis in this
paper, used to design a resource-sharing model at the low level of
interconnects, is the first of its kind.
multinode programming models such as MPI and Unified Parallel C (UPC),
however, is not new. The research in this domain is motivated by the same
problem: loss in communication throughput in hybrid environments.
\noindent \textbf{MPI endpoints}. Dinan et al.~\cite{dinan2014enabling}
enable multiple communication endpoints by creating additional MPI ranks
that serve as the ``MPI endpoints." The threads within the MPI ranks then
map to the MPI endpoints, achieving the same configuration as MPI-everywhere
endpoints since each MPI endpoint has its own CTX. However, they do not
consider the resource usage of their approach. Consequently, the 93.75\%
wastage of resources still holds with MPI endpoints. Our work explores the
tradeoff space between performance and resource usage instead of
providing one solution, allowing users to choose the best endpoint for their
needs.
\noindent \textbf{PAMI endpoints}. Tanase et al.~\cite{tanase2012network} implement
multiple endpoints for the IBM xlUPC runtime by assigning contexts
to UPC threads with a one-to-one mapping. While their work is a complete
solution, it does not demonstrate the indirect impact on resource
usage. We show a holistic picture of the different mappings
between threads and hardware resources and discuss the
tradeoff between performance and resource usage for each
mapping.
\noindent \textbf{UPC endpoints}. Luo et al.~\cite{luo2011multi}
implement network endpoints for the UPC runtime. However, their work
does not consider the mapping between the runtime's network endpoints
and the interconnect's network resources. Consequently, their work
does not evaluate the hardware-resource utilization of their
implementation, which is an essential factor for understanding the
scalability of multiple communication endpoints.
\section{User Access Region}
\label{app:uar}
The User Access Region (UAR) is part of an mlx5 NIC's address space and
consists of UAR pages. Different pages allow the multiple processes and
threads to get isolated, protected, and independent direct access to the
NIC. The UAR pages are mapped into the application's userspace during
CTX creation, allowing the user to bypass the kernel and directly write to the NIC.
An mlx5 UAR page is 4 KB, and each UAR consists of four uUARs
(micro UARs). Only the first two are used for user operations; we refer
to them as data-path uUARs. The last two are used by the hardware for
executing priority control tasks~\cite{fastpathposting}. Each uUAR consists
of two equally sized buffers that are written to alternately~\cite{mlxRPM}.
The first eight bytes of a buffer constitute the \emph{DoorBell}\ register~\cite{mlxRPM}.
Atomically writing eight bytes to this register rings the \emph{DoorBell}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/uar_page.png}
\end{center}
\caption{4 KB mlx5 UAR page. The last two uUARs are used by the NIC.}
\label{fig:mlx5uar}
\end{figure}
\section{mlx5's uUAR-to-QP Assignment Policy}
\label{app:uuartoqp}
When the Verbs user creates a CTX, the mlx5 driver statically allocates
a discrete number of UARs. By default, it allocates 8 UARs and 16
data-path uUARs. When the user creates QPs, the \texttt{mlx5\_ib} kernel
module assigns a uUAR to each QP. To guide this assignment, mlx5 categorizes
the statically allocated uUARs into different categories: the zeroth uUAR as
\emph{high latency}, a subset as \emph{low latency}, and the remaining as
\emph{medium latency}. By default, mlx5 categorizes four uUARs (uUAR12-15)
as low latency. Users can change this default using environment variables
that allow them to control the total number of statically allocated uUARs
(MLX5\_TOTAL\_UUARS) and categorize a subset of them (up to a maximum
of all but one) to be low-latency uUARs (MLX5\_NUM\_LOW\_LAT\_UUARS).
Low-latency uUARs are called so because only one QP is assigned to such a
uUAR; thus the lock on the uUAR is disabled. The medium-latency uUARs
may be assigned to multiple QPs, and locks are needed to write to them. The
high-latency uUAR can also be assigned to multiple QPs but it allows only
atomic \emph{DoorBells}\ and no \emph{BlueFlame}\ writes. Hence, it is not protected by a lock.
\figref{fig:uuaralloc} portrays mlx5's uUAR-to-QP assignment
policy for an example CTX containing six static uUARs of which
two are low latency (uUAR4-5). Within a CTX, the QPs are first assigned
to the low-latency uUARs (QP0 and QP1). Once all the low-latency uUARs
are exhausted, the driver maps the next QPs to the
medium-latency uUARs in a round-robin fashion (QP2--QP6). The high-latency
uUAR is assigned to QPs only when the user declares the maximum allowed
number of uUARs to be low latency, in which case (not shown) all the QPs after
those assigned to the low-latency uUARs will map to the zeroth uUAR.
The mlx5 driver will \emph{dynamically allocate} a new UAR page if the user
creates a thread domain (TD). Every even TD will allocate a new UAR page;
every even-odd pair of TDs will map to separate uUARs on the same UAR
page, as we can see for the three TDs in \figref{fig:uuaralloc}. All the QPs in a TD
will map to the uUAR associated with the TD; and since the user guarantees that
all the QPs assigned to a TD will be accessed only from one thread, mlx5 disables the
lock on the TD's uUAR. The maximum number of dynamically allocated UARs allowed
per CTX in mlx5 is 512.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/uuar_alloc.pdf}
\end{center}
\caption{Assigning seven QPs and three TDs to uUARs of a CTX
containing six static uUARs, of which two are low-latency uUARs
(blue). The high-latency uUAR is in grey, the medium-latency
ones are in yellow, and the dynamically allocated ones are in red.}
\label{fig:uuaralloc}
\end{figure}
\section{InfiniBand Mechanism}
\label{app:ibmech}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/ib_mech.pdf}
\end{center}
\caption{IB mechanisms on the sender node to send data over the wire. Refer
to~\ref{sec:ibfeatures}, which describes each step in detail.}
\label{fig:ibmech}
\end{figure}
To send the message over the InfiniBand network, the user posts a work queue
element (WQE) to a queue pair (QP) using an \texttt{ibv\_post\_send}. \figref{fig:ibmech}
portrays the series of coordinated operations between the CPU and the NIC that follow
to transmit a message and signal its completion. We describe them below.
\begin{enumerate}
\item Using an 8-byte atomic write (memory-mapped I/O) on the buffer of the uUAR
associated with the QP, the CPU first notifies the NIC that a WQE is ready to be read.
This is called \emph{ringing the} \emph{DoorBell}.
\item After the \emph{DoorBell}\ ring, the NIC will fetch the WQE
using a DMA read. The WQE contains the virtual address
of the payload (stored in the WQE’s scatter/gather list).
\item The NIC will then fetch the WQE’s payload from a
registered memory region using another DMA read. Note
that the virtual address has to be translated to its physical
address before the NIC can DMA-read the data. The NIC
will then transmit the read data over the network.
\item Once the host NIC receives a hardware acknowledgment
from the receiver NIC, it will generate a CQE and DMA-write
it to the buffer (residing in host’s memory) of the
CQ associated with the QP. Latency-oriented users will
then poll this CQ to dequeue the CQE to ``make progress.''
\end{enumerate}
In summary, the critical data path of each \texttt{ibv\_post\_send}
entails one MMIO write, two DMA reads, and one DMA write.
\section{Introduction}
\label{sec:intro}
Software developers constantly face implementation choices that affect
performance, such as choices of data structures, algorithms, and parameter
values. Unfortunately, traditional programming languages lack support for
expressing such alternatives directly in the code, forcing the programmer to commit to
certain design choices up front.
To improve the performance of a program, a developer can
profile the code and manually tune the program by explicitly executing
the program repeatedly, testing and changing different algorithm
choices and parameter settings. However, such manual tuning is both tedious and
error prone because the number of program alternatives grows
exponentially with the number of choices. Besides choices of program
parameters, hardware properties, such as cache layout and core
configuration, make manual optimization even more challenging.
An attractive alternative to manual tuning of programs is to perform
the tuning automatically. Conceptually, a tuning problem is an
optimization problem where the search space consists of potential
program alternatives and the goal is to minimize an objective, such
as execution time, memory usage, or code size. The search space is
explored using some search technique, which can be performed either
offline (at compile-time) or online (at runtime).
In this work, we focus on offline tuning.
Several offline auto-tuning tools have been developed to target
problems in specific domains. For instance, ATLAS~\cite{atlas} for
linear algebra, FFTW~\cite{fftw} and SPIRAL~\cite{spiral-overview-18}
for digital signal processing, PetaBricks~\cite{petabricks-09} for
algorithmic choice, as well as tools for choosing sorting algorithms
automatically~\cite{sorting-04}. Although these domain-specific tuning
approaches have been shown to work well in their specific areas,
they are inherently targeting a specific domain and cannot be used
in general for other kinds of applications.
In contrast to domain-specific tuning, \emph{generic} auto-tuners offer
solutions that work across domains and may be applicable to arbitrary software
systems. For instance, approaches such as ATF~\cite{atf},
OpenTuner~\cite{open-tuner}, CLTune~\cite{cltune},
HyperMapper~\cite{hypermapper}, and PbO~\cite{pbo} make it possible for
developers to specify unknown variables and possible values. The goal of the
tuner is then to assign values to these variables optimally, to minimize for
instance execution time. In particular, existing generic auto-tuners assign
values to variables \emph{globally}, i.e., each unknown variable is assigned the
same value throughout the program.
In this work, we focus on two problems with state-of-the-art
methods. Firstly, only using global decision variables does not scale in
larger software projects and it violates fundamental software
engineering principles. Specifically, a global decision variable does
not take into consideration \emph{the context} from where a function
may be called. Global decision variables can of course be manually
added in all calling contexts, but such manual refactoring of code
makes code updates brittle and harder to maintain.
Secondly, auto tuning is typically computationally expensive, since the search
space consists of all combinations of decision variable values. This is because
decision variables are dependent on each other in general. However, if a subset
of the decision variables is \emph{independent} and the tuner is unaware of
this, it wastes precious time exploring unnecessary configurations.
In this paper, we introduce a methodology where a software developer postpones
design decisions during code development by explicitly stating \emph{holes} in
the program. A hole is a decision variable for the auto tuner, with a given
default value and domain, such as integer range or Boolean type. In contrast to
existing work, we introduce \emph{context-sensitive holes}. This means that a
hole that is specified in a program (called a base hole) can be expanded into a
set of decision variables (called context holes) that take the calling context
into consideration.
The compiler statically analyses the call graph of the program and transforms
the program so that the calling context is maintained during runtime. Only paths
through the call graph up to a certain length are considered, to avoid a
combinatorial explosion in the number of variables.
In our approach, context-sensitive holes can be automatically tuned using input
data, without manual involvement.
We develop and apply a novel static \emph{dependency analysis} to build a
bipartite dependency graph, which encodes the dependencies among the
context-sensitive holes. In contrast to existing approaches, the dependency
analysis is automatic, though optional annotations may be added to increase the
accuracy. The dependency graph is used to reduce the search space in a method we
call \emph{dependency-aware tuning}. Using this method, the auto tuner
concentrates its computation time on exploring only the necessary configurations
of hole values.
Specifically, we make the following contributions:
\begin{itemize}
\item We propose a general methodology for programming with
context-sensitive holes that enables the software developer to
postpone decisions to a later stage of automatic tuning. In
particular, we discuss how the methodology can be used for algorithm
and parallelization decisions
(Section~\ref{sec:holes}).
\item To enable the proposed methodology, we propose a number of
algorithms for performing program transformations for efficiently
maintaining calling context during runtime
(Section~\ref{sec:transformations}).
\item We design and develop a novel static dependency analysis, and use the
result during tuning to reduce the search space of the problem
(Sections~\ref{sec:dependency-analysis}--\ref{sec:dep-tuning}).
\item We design and develop a complete solution for the proposed idea
within the Miking framework~\cite{Broman:2019}, a software framework for developing
general purpose and domain-specific languages and compilers
(Section~\ref{sec:implementation}).
\end{itemize}
\noindent To evaluate the approach, we perform experiments on several
non-trivial case studies that are not originally designed for the approach
(Section~\ref{sec:eval}).
\section{Programming with Context-Sensitive Holes}\label{sec:holes}
In this section, we first introduce the main idea behind the methodology of
programming with holes. We then discuss various programming examples, using
global and context-sensitive holes.
\subsection{Main Idea}\label{sec:main-idea}
Traditionally, an idea is implemented into a program by making a number of
implementation choices, and gradually reducing the design space.
By contrast, when programming with holes, we delay the decision of
implementation choices. Design choices---such as what parts of the
program to parallelize or which algorithms to execute in different
circumstances---are instead left unspecified by specifying holes, that
is, decision variables stated directly in the program code. Instead of
taking the design decision up-front, an auto-tuner framework makes use
of input data and runtime profiling information, to make decisions of
filling in the holes with the best available alternatives. Thus, the
postponed decisions can be automated and based on more information,
resulting in less ad hoc and more informed decisions.
\subsection{Global Holes}
\label{sec:global-holes}
The key concept in our methodology is the notion of \emph{holes}.
A hole is an unknown variable whose optimal value is to be decided by the
auto-tuner, defined in a program using the keyword \mcoreinline{hole}. The
\mcoreinline{hole} takes as arguments the type of the hole (either
\mcoreinline{Boolean} or \mcoreinline{IntRange}) and its default value.
Additionally, the \mcoreinline{IntRange} type expects a minimum and maximum
value.
\begin{example}\label{ex:global}
To illustrate the idea of a hole, we first give a small example illustrating
how to choose sorting algorithms based on input data length. Consider the
following program, implemented in the Miking core
language\footnote{We use the Miking core language in the rest of the paper
because the experimental evaluation is implemented in Miking. The concepts
and ideas presented in the paper are, however, not bound to any specific
language or runtime system.}: %
\lstinputlisting[language=MCore,firstline=6,lastline=10]{examples/sort.mc}
%
\sloppypar{%
The example defines a function \mcoreinline{sort} using the
\mcoreinline{let} construction. The function has one parameter
\mcoreinline{seq}, defined using an anonymous lambda function
\mcoreinline{lam}.
%
Lines~\ref{l:sort1}--\ref{l:sort2} define a hole with possible values in
the range $\RangeInclusive{0}{10000}$ and default value~$10$. The program
then chooses to use \verb|insertionSort| if the length of the sequence is
less than the threshold, and \verb|mergeSort| otherwise. That is, an
auto-tuner can use offline profiling to find the optimal threshold value
that gives the overall best result. Note also that the default value can be
used if the program is compiled without the tuning stage.} \qed{}
\end{example}
The example above illustrates the use of a \emph{global hole}, i.e., a
hole whose value is chosen globally in the program. Although global
holes are useful, they do not take into consideration the calling
context.
\subsection{Context-Sensitive Holes}
\label{sec:context-sensitive-holes}
All holes that are explicitly stated in the program code using the
\mcoreinline{hole} syntax represent \emph{base holes}. As illustrated
in the previous example, a base hole that does not take into
consideration the calling context is the same thing as a global
hole. One of the novel ideas in this paper is the concept of
\emph{context-sensitive} holes. In contrast to a global hole, the
value of a context-sensitive hole varies depending on which
call path is taken to the place in the program where the hole is
defined. They are useful in programs where we believe the optimal
value of the hole varies depending on the context. Context-sensitive
holes are implicitly defined from a base hole, taking into
consideration the different possible call paths reaching the base
hole. The idea is illustrated with the following example.
\begin{example}\label{example-map1}
Consider the higher-order function \mcoreinline{map}, which applies a
function~\mcoreinline{f} to all elements in a sequence~\mcoreinline{s}. The function can
be applied either sequentially or in parallel (given that~\mcoreinline{f} is
side-effect free).
%
The optimal choice between the sequential or parallel version likely depends
partly on the length of the sequence, but also on the nature of the
function~\mcoreinline{f}.
In a large program, a common function such as \mcoreinline{map} is probably called
from many different places in the program, with varying functions \mcoreinline{f}
and sequences. Therefore, globally deciding whether to use the sequential or
parallel version may result in suboptimal performance. \qed
\end{example}
To define a context-sensitive hole, we provide an additional (optional)
field~\mcoreinline{depth}, which represents the length of the most recent function
call history that should influence the choice of the value of the hole.
\begin{example}\label{example-map2}
The following is an implementation of \mcoreinline{map} that chooses between the
parallel and sequential versions (\mcoreinline{pmap} and \mcoreinline{smap},
respectively).
%
\lstinputlisting[language=MCore,firstline=5,lastline=7]{examples/map-example.mc}
%
Line~\ref{l:bool-hole2} defines a Boolean hole \mcoreinline{par} with default
value \mcoreinline{false} (no parallelization). The \mcoreinline{depth = 1}
tells the tuner that the value of \mcoreinline{par} should consider the call
path one step backward. That is, two calls to \mcoreinline{map} from different
call locations can result in different values of \mcoreinline{par}.\qed{}
\end{example}
\begin{example}\label{example-map3}
Given that we choose the parallel map \mcoreinline{pmap} in
Example~\ref{example-map2}, another choice is how many parts to split the
sequence into, before mapping~\mcoreinline{f} to each part in parallel. We can
implement this choice with an integer range hole that represents the
\mcoreinline{chunkSize}, i.e., the size of each individual part.
\lstinputlisting[language=MCore,firstline=25,lastline=31]{examples/pmap.mc}
%
The function \mcoreinline{split} splits the sequence \mcoreinline{s} into chunks of
size (at most) \mcoreinline{chunkSize}, \mcoreinline{async} sends the tasks to a
thread pool, and \mcoreinline{await} waits for the tasks to finish.
Note the choice of \mcoreinline{depth = 2} in this example. We expect the function
\mcoreinline{pmap} to be called from \mcoreinline{map}, which has \mcoreinline{depth = 1}.
Since we want to capture the context from \mcoreinline{map} we need to increment
the depth parameter by one.\qed
\end{example}
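For readers who want to experiment outside MCore, the same split/async/await structure can be sketched in Python. This is our own illustration, not the Miking implementation; the name \mcoreinline{pmap} and the parameter \mcoreinline{chunk_size} (playing the role of the \mcoreinline{chunkSize} hole) are ours:

```python
from concurrent.futures import ThreadPoolExecutor

def pmap(f, s, chunk_size):
    """Map f over s by splitting s into chunks of size (at most)
    chunk_size, mapping each chunk in a worker thread (async),
    and awaiting the results in order before concatenating."""
    chunks = [s[i:i + chunk_size] for i in range(0, len(s), chunk_size)]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(lambda c: [f(x) for x in c], c)
                   for c in chunks]
        return [y for fut in futures for y in fut.result()]
```

An auto-tuner would then search over \mcoreinline{chunk_size}, exactly as for the integer range hole in the example.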
The choice of the \mcoreinline{depth} parameter for a hole should intuitively be
based on the number of function calls backward that might influence the choice
of the value of the hole. A larger depth might give a better end result, but it
also gives a larger search space for the automatic tuner, which influences the
tuning time.
Note that in order to use global holes to encode the same semantics as the
context-sensitive holes, we need to modify the function signatures.
For example, in Example~\ref{example-map2}, which has a hole of depth $1$, we
can let \mcoreinline{map} take \mcoreinline{par} as a parameter, and pass a
global hole as argument each time we call \mcoreinline{map}.
For holes of depth $d > 1$, this strategy becomes even more cumbersome, as each
function along a call path needs to pass on global holes as arguments.
By instead letting the compiler handle the analysis of contexts, we do not need
to modify function signatures, and we can easily experiment with the depth
parameter. Additionally, holes can be ``hidden'' inside libraries, so that a
user does not need to be aware of where the holes are defined, but still benefits
from context-sensitive tuning.
Another advantage of context-sensitive holes compared to global holes is that
the compiler may use the knowledge that two context holes originate from the
same base hole to speed up the tuning stage.
\section{Program Transformations}\label{sec:transformations}
This section covers the program transformations necessary for maintaining the
context of each hole during runtime of the program.
The aim of the program transformations is that the resulting program maintains
contexts with minimal runtime overhead.
Sections~\ref{sec:context-intuition} and~\ref{sec:graph-coloring} provide
definitions and a conceptual illustration of contexts, while
Section~\ref{sec:impl-transformations} covers the implementation in more detail.
\subsection{Definitions}
\label{sec:context-intuition}
\newcommand{\Depth}[0]{\delta}
\newcommand{\Home}[0]{\eta}
\newcommand{\SinglePaths}[0]{\Sigma}
\newcommand{\CallStrings}[0]{\mathit{CS}}
\newcommand{\Program}[0]{p}
Central in our discussion of contexts are call graphs. Given a program
$\Program{}$ with holes, its \emph{call graph} is a quintuple $G = (V, E, L, S,
H)$, where:
\begin{itemize}
\item the set of vertices $V$ represents the set of functions in $\Program{}$,
\item each edge $e\in E$ is a triple $e = (v_1, v_2, l)$ that represents a
function call in $\Program{}$ from $v_1\in V$ to $v_2\in V$, labeled with
$l\in L$,
\item $L$ is a set of labels uniquely identifying each call site in $\Program$;
that is: $|E| = |L|$,
\item $S \subseteq V$ is the set of entry points in the program,
\item the triple $H = (n, \Depth, \Home)$ contains the number $n$ of base holes;
a function $\Depth : \RangeInclusive{1}{n} \rightarrow \mathbb{N}$, which maps
each base hole to its depth parameter; and a function $\Home :
\RangeInclusive{1}{n} \rightarrow V$, which maps each base hole to the vertex
$v\in V$ in which the hole is defined.
\end{itemize}
Furthermore, a \emph{call string} in a call graph is a string from the alphabet
$L$, describing a path in the graph from a start vertex $v_s\in S$ to some end
vertex $v_e\in V$. Let $\CallStrings_i$ denote the set of call strings of the
$i$th hole, that is, the set of call strings starting in some vertex $v_s\in S$
and ending in $\Home(i) \in V$.
\begin{example}\label{ex:call-graph-1}
The call graph in Figure~\ref{fig:call-graph} is the quintuple:
\begin{alignat*}{2}
(&\{A,B,C,D\},\\
&\{(A,B,b), (A,C,a), (B,C,f), (C,C,e), (C,D,c), (C,D,d)\},\\
&\{a,b,c,d,e,f\},\\
&\{A\},\\
&(1, \{1 \mapsto 3\}, \{1 \mapsto D\}))
\end{alignat*}
%
In Figure~\ref{fig:call-graph}, note that we mark a base hole with its depth as
a smaller circle within the vertex where it is defined, that we mark each entry
point of the graph with an incoming arrow, and that there are two function
calls from $C$ to $D$, hence two edges with different labels. The set
$\CallStrings_1$ of call strings from $A$ to $D$ in Figure~\ref{fig:call-graph}
is: $$ \mathit{ae^*c}, \mathit{ae^*d}, \mathit{bfe^*c}, \mathit{bfe^*d} $$ where
the Kleene-star ($^*$) denotes zero or more repetitions of the previous label.
\qed
\end{example}
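The quintuple can also be written down as plain data. The following Python encoding of the call graph above is our own illustrative notation, not part of the implementation:

```python
V = {"A", "B", "C", "D"}                      # functions
E = {("A", "B", "b"), ("A", "C", "a"), ("B", "C", "f"),
     ("C", "C", "e"), ("C", "D", "c"), ("C", "D", "d")}  # labeled calls
L = {"a", "b", "c", "d", "e", "f"}            # one label per call site
S = {"A"}                                     # entry points
# H = (n, delta, eta): one base hole of depth 3, defined in vertex D
H = (1, {1: 3}, {1: "D"})

assert len(E) == len(L)   # labels uniquely identify call sites
```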
\begin{wrapfigure}{r}{0.5\linewidth}
\centering
\includegraphics[width=0.5\textwidth,trim={1cm 0cm 0cm 0cm},clip,scale=0.6]{callgraph-normal.png}
\caption{Call graph representing a program with four functions: $A$, $B$,
$C$, and $D$. Each edge represents a function call from a certain call
site in the program, and is marked with a unique
label.}\label{fig:call-graph}
\end{wrapfigure}
Call strings provide the context that is relevant for context-sensitive holes.
Ideally, the tuning should find the optimal value for each call string in
$\CallStrings_i$ leading to the $i$th hole. However, in
Example~\ref{ex:call-graph-1} we see that there can be infinitely many call
strings, which means that we would have infinitely many decision variables to
tune. Therefore, we wish to partition the call strings into equivalence classes,
and tune each equivalence class separately.
This leads to the question of how to choose the equivalence relation.
One possible choice of equivalence relation is to consider two call strings that
have an equal suffix (that is, an equal ending) as being equal. We can let the
length of the suffix be $d$, where $d$ is the context depth of the hole:
\begin{example}\label{ex:call-strings-1}
Let $\sim_i$ be an equivalence relation defined on each set $\CallStrings_i$,
such that:
$$
s_1 \sim_i s_2 \ \text{iff} \ \text{suffix}_{\Depth(i)}(s_1) =
\text{suffix}_{\Depth(i)}(s_2)\text{, } s_1, s_2 \in \CallStrings_i
$$
where $\text{suffix}_d(s)$ returns the last $d$ labels of a call string $s$,
or all of the labels if the length of the string is less than $d$.
%
We choose a canonical representation from each equivalence class as the result
from the $\text{suffix}_d$ function.
%
The call strings from Example~\ref{ex:call-graph-1} have the following
canonical representations (that is, unique results after applying
$\text{suffix}_3$ to the call strings):
$$
\mathit{ac}
, \mathit{aec}
, \mathit{eec}
, \mathit{ad}
, \mathit{aed}
, \mathit{eed}
, \mathit{bfc}
, \mathit{fec}
, \mathit{bfd}
, \mathit{fed}
$$
Note that the canonical representations are \emph{suffixes} of call strings
but not always call strings by strict definition, as they do not always start
in a start vertex $v_s\in S$. We call these canonical representations
\emph{context strings}.
%
\qed
\end{example}
While the equivalence relation in Example~\ref{ex:call-strings-1} at least gives
an upper bound on the number of decision variables to tune, it may still result
in a large number of equivalence classes. Limiting the number of recursive calls
that are considered results in a more coarse-grained partitioning:
\begin{example}\label{ex:call-strings-2}
Consider the equivalence relation $\sim_i^r$, which is like $\sim_i$ from
Example~\ref{ex:call-strings-1}, but where we consider at most $r$ repetitions
of any label, for some parameter $r$. That is, if a string contains more than
$r$ repetitions of a label, then we keep the $r$ rightmost occurrences.
%
For example, using $r = 1$ in the call strings of
Example~\ref{ex:call-graph-1} yields the $8$ context strings:
$$
\mathit{ac}
, \mathit{aec}
, \mathit{ad}
, \mathit{aed}
, \mathit{bfc}
, \mathit{fec}
, \mathit{bfd}
, \mathit{fed}
$$
Note that compared to the context strings in Example~\ref{ex:call-strings-1},
we have filtered out $\mathit{eec}$ and $\mathit{eed}$ as they include more
than $1$ repetition of the label $e$.
%
For example, this means that the two call strings $\mathit{aec}$ and
$\mathit{aeec}$ belong to the same equivalence class, namely the class
represented by the context string $\mathit{aec}$.
%
\qed
\end{example}
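Both relations are easy to prototype. The following Python sketch (ours; it unrolls the Kleene star $e^*$ up to two repetitions, with one character per label) reproduces the two sets of context strings from the examples above:

```python
def suffix(s, d):
    """The last d labels of call string s (all of s if it is shorter)."""
    return s[-d:] if d > 0 else ""

def cap_reps(s, r):
    """Keep only the r rightmost occurrences of each label in s."""
    seen, kept = {}, []
    for l in reversed(s):
        if seen.get(l, 0) < r:
            kept.append(l)
            seen[l] = seen.get(l, 0) + 1
    return "".join(reversed(kept))

# Call strings a e^k c, a e^k d, b f e^k c, b f e^k d for k = 0, 1, 2
calls = [p + "e" * k + q
         for p in ("a", "bf") for q in ("c", "d") for k in range(3)]

ctx_plain = {suffix(s, 3) for s in calls}                # relation ~_i
ctx_capped = {suffix(cap_reps(s, 1), 3) for s in calls}  # relation ~_i^1
```

Running this yields the $10$ and $8$ context strings listed in the two examples, respectively.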
Using the definition of context strings, we can finally define what we mean by
context-sensitive holes. Given some equivalence relation $\sim$, a base hole is
expanded (by the compiler) into $c$ number of \emph{context} holes, where $c$ is
the number of equivalence classes (i.e., the number of context strings) under
the relation $\sim$.
If $\Depth(i) = 0$, then the hole is \emph{global}, and $c = 1$.
In Example~\ref{ex:call-strings-1}, there are $10$~context holes, and in
Example~\ref{ex:call-strings-2} there are $8$.
Thus, the choice of the equivalence relation influences the number of decision
variables to tune: few equivalence classes give fewer variables to tune and
potentially a shorter tuning time, while more classes might increase the tuning
time, but give better performance of the resulting program.
\subsection{Graph Coloring for Tracking Contexts}
\label{sec:graph-coloring}
Each time a context-sensitive hole is used, we need to decide which equivalence
class the current call string belongs to, in order to know which value of the
hole to use.
The challenge is to introduce tracking and categorization of call strings in the
program with minimum runtime overhead.
A naive approach is to explicitly maintain the call string during runtime.
However, this requires bookkeeping of auxiliary data structures and potentially
makes it expensive to decide the current context string.
This section describes an efficient graph coloring scheme that leaves a color
trail in the call graph, thereby implicitly maintaining the call history during
runtime of the program.
We first discuss the underlying equivalence relation that the method implements
(Section~\ref{sec:eq-rel}), and then divide our discussion of the graph coloring
into two parts: complete programs (Section~\ref{sec:complete-program}) and
separately compiled library code (Section~\ref{sec:library}).
\subsubsection{Equivalence Relation}
\label{sec:eq-rel}
\newcommand{\Concat}[0]{\oplus}
The equivalence relation that the graph coloring method implements is an
approximation of the $\sim_i^1$ relation of Example~\ref{ex:call-strings-2}. The
difference is that we do not track the call string beyond a recursive (including
mutually recursive) call.
This is because a recursive call overwrites the call history in graph coloring,
as we will see in Section~\ref{sec:complete-program}.
The context strings for the call strings from the previous section are:
$$
\mathit{ac}
, \mathit{ec}
, \mathit{ad}
, \mathit{ed}
, \mathit{bfc}
, \mathit{bfd}
$$
Compared to the context strings in Example~\ref{ex:call-strings-2}, the strings
$\mathit{aed}$ and $\mathit{fed}$ are merged into $\mathit{ed}$, and the
strings $\mathit{aec}$ and $\mathit{fec}$ are merged into $\mathit{ec}$.
A consequence is that, for instance, the call strings $\mathit{bfed}$ and
$\mathit{aed}$ belong to the same equivalence class, namely the class
represented by $\mathit{ed}$.
Algorithm~\ref{algo:context-strings} describes how to explicitly compute the
context strings for the $i$th hole. The recursive sub-procedure
$\textsc{ContextStringsDFS}$ traverses the graph in a backwards depth-first
search manner. It maintains the current vertex $v$ (initially $\Home(i)$), the
current string $s$ (initially the empty string $\epsilon$), the set of visited
vertices $U$ (initially $\varnothing$) and the remaining depth $d$ (initially
$\Depth(i)$).
\begin{algorithm}
\caption{Algorithm for computing the context strings of a
hole.}\label{algo:context-strings}
\textbf{Input} Call graph $G = (V,E,L,S,H)$, index $i$ of the base hole. \\
\noindent\textbf{Output} Set of context strings of the hole. \\
\begin{algorithmic}[1]
\Procedure{ContextStrings}{$G$,$i$}
\Procedure{ContextStringsDFS}{$v$, $s$, $U$, $d$}
\If{$d=0 \lor \text{inc}(G,v) = \varnothing \lor v\in U$}
\Return $\{s\}$\label{l:cs-return}
\Else
\State
$\mathit{CS} \gets
\bigcup\limits_{(v_p,v,l) \in \text{inc}(G,v)}
\textsc{ContextStringsDFS}(v_p, s \Concat l, U \Union \{v\}, d-1)$\label{l:cs-rec}
\If{$v\in S$}
\Return $\mathit{CS} \Union \{s\}$\label{l:cs-ret-1}
\Else \textbf{ return} $\mathit{CS}$\label{l:cs-ret-2}
\EndIf
\EndIf
\EndProcedure
\State \Return \textsc{ContextStringsDFS}($\Home(i)$, $\epsilon$, $\varnothing$, $\Depth(i)$)
\EndProcedure
\end{algorithmic}
\end{algorithm}
Line~\ref{l:cs-return} returns a singleton set if the depth is exhausted, if the
set of incoming edges to $v$ is empty, or if $v$ is visited. The function
$\text{inc}(G,v)$ returns the set of incoming edges to vertex $v$ in $G$.
Line~\ref{l:cs-rec} recursively computes the context strings of the preceding
vertices of $v$, and takes the union of the results. The $\Concat$ operator adds
a label to a string.
Lines~\ref{l:cs-ret-1}--\ref{l:cs-ret-2} return the final result. If the current
vertex $v$ is a start vertex, then the current string $s$ is a context string
starting in $v$, and is therefore added to the result. Otherwise, we return the
result of the recursive calls.
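A direct transcription of the procedure into Python (ours, for illustration; the operator $\Concat$ becomes label prepending, since the traversal walks call edges backwards) computes the context strings for the graph in Figure~\ref{fig:call-graph}:

```python
E = {("A", "B", "b"), ("A", "C", "a"), ("B", "C", "f"),
     ("C", "C", "e"), ("C", "D", "c"), ("C", "D", "d")}
S = {"A"}

def inc(v):
    """Incoming edges of vertex v."""
    return {e for e in E if e[1] == v}

def context_strings(home, depth):
    def dfs(v, s, visited, d):
        # depth exhausted, no incoming edges, or v already visited
        if d == 0 or not inc(v) or v in visited:
            return {s}
        cs = set()
        for (vp, _, l) in inc(v):
            # prepend l to s: one function call further backwards
            cs |= dfs(vp, l + s, visited | {v}, d - 1)
        return cs | {s} if v in S else cs
    return dfs(home, "", set(), depth)
```

For the hole in $D$ with depth $3$, this returns the six context strings listed in Section~\ref{sec:eq-rel}.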
\subsubsection{Coloring a Complete Program}
\label{sec:complete-program}
\newcommand{\Setup}[0]{Setup}
\newcommand{\PruneGraph}[0]{Setup}
\newcommand{\TraverseEdge}[0]{TraverseEdge}
\newcommand{\Categorize}[0]{Categorize}
\newcommand{\ColorV}[0]{c_V}
\newcommand{\ColorE}[0]{c_L}
In the case of a complete program, the program has a single entry point, for
instance, a main function where the execution starts. We will now walk through a number of examples, showing how graph coloring works conceptually.
\begin{figure}[b!]
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{callgraph-init.png}
\caption{Initial state of the call graph when performing graph coloring:
all vertices are white, and each edge is colored so that the colors of the
incoming edges for each node are different.
For readability on a black and white rendering of the figure, we mark each
edge with an integer representing the color in addition to coloring the
edge ($1$ for blue, $2$ for magenta, and $3$ for
green).}\label{fig:color-init}
\end{subfigure}\hfill
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{callgraph-1.png}
\caption{State of the call graph after the call string $\mathit{ad}$ is
taken. For readability, for each non-white node we write the integer
representing the color in the vertex in addition to coloring
it.}\label{fig:color-step-1}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{callgraph-2.png}
\caption{State of the call graph after the call string $\mathit{bfed}$ is
taken. Because of the recursive call in $C$, we cannot trace the calls
further backward from $B$. The current context string is therefore
$ed$.}\label{fig:color-step-2}
\end{subfigure}\hfill
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{library-2.png}
\caption{A call graph with several entry points. Node $C'$ is a sentinel
vertex that forwards external calls to the internal vertex $C$. The call
string $\mathit{ac}$ has been taken, immediately followed by call string
$c$. The current context string is $c$.}\label{fig:library-2}
\end{subfigure}
\caption{Conceptual illustration of how the call history is maintained by
coloring the call graph during runtime.}\label{fig:color}
\end{figure}
\begin{example}\label{ex:coloring-setup}
Figure~\ref{fig:color-init} shows the initial coloring state of the call graph
in Figure~\ref{fig:call-graph}.
For instance, the vertex~$C$ has three incoming edges (with colors
blue~($1$), magenta~($2$), and green~($3$)), while~$D$ has two (blue and
magenta).
%
Note that we can reuse a given color for several edges, as long as no vertex
has two incoming edges with the same color. For instance, the edges
labeled $a$ and $b$ are both blue. \qed
\end{example}
Algorithm~\ref{algo:gc-traverse} describes the update to the coloring when an
edge is traversed in the call graph.
\begin{algorithm}
\caption{Traversing an edge.}\label{algo:gc-traverse}
\textbf{Input} A call graph $G = (V,E,L,S,H)$, coloring functions $\ColorV$
and $\ColorE$, and traversed edge $(v_1, v_2, l) \in E$.
\\
\textbf{Output} Modified coloring function $\ColorV'$. \\
\begin{algorithmic}[1]
\Procedure{TraverseEdge}{$G, \ColorV, \ColorE$, $(v_1,v_2,l)$}
\State $\ColorV \gets \left( \ColorV \setminus \{v_2 \mapsto \ColorV(v_2)\} \right) \Union \{v_2 \mapsto \ColorE(l)\}$
\Comment{Overwrite previous mapping of $v_2$ in
$\ColorV$}\label{l:traverse-1}
\State \Return $\ColorV$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Line~\ref{l:traverse-1} updates the color of the destination vertex to the color
of the label of the edge being traversed.
\begin{example}\label{ex:coloring-traverse}
Figure~\ref{fig:color-step-1} shows the state of the call graph after the call
string $\mathit{ad}$ is taken, and Figure~\ref{fig:color-step-2} shows the
state after the call string $\mathit{bfed}$ is taken. \qed
\end{example}
When the value of a hole is used during runtime of the program, we check the
current context by following the colors of the vertices and edges backwards in
the graph. The tracing stops when we reach the depth of the hole, when we reach
a white vertex, or when we detect a cycle.
\begin{example}\label{ex:coloring2}
To determine the current context string in Figure~\ref{fig:color-step-1}, we
first inspect the color of $D$ (magenta), which means that $d$ is the last
label in the string. Next, we see that $C$ is blue, so $C$ was called by $A$,
thus $a$ is the second to last label. The color of $A$ is white, so we stop
the tracing. Thus, the context string is $ad$. \qed
\end{example}
\begin{example}\label{ex:coloring3}
Similarly, in Figure~\ref{fig:color-step-2} we determine that the last two
labels are $d$ and $e$. Since $C$ called itself via $e$, we have detected a
cycle. Thus, the context string is $ed$. \qed
\end{example}
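The two traces can be replayed programmatically. The following Python sketch (ours, purely illustrative) encodes the edge coloring of Figure~\ref{fig:color-init} with integers ($0$ = white) and implements $\textsc{TraverseEdge}$ together with the backward trace:

```python
edges = {   # label -> (source, target, edge color)
    "b": ("A", "B", 1), "a": ("A", "C", 1), "f": ("B", "C", 2),
    "e": ("C", "C", 3), "c": ("C", "D", 1), "d": ("C", "D", 2),
}
color = {v: 0 for v in "ABCD"}   # vertex colors, all white initially

def traverse(label):
    """TraverseEdge: the destination takes the color of the edge."""
    src, dst, col = edges[label]
    color[dst] = col

def context(v, depth):
    """Follow the color trail backwards to recover the context string."""
    s, visited = "", {v}
    while len(s) < depth and color[v] != 0:
        # the unique incoming edge of v whose color matches v's color
        lbl, (src, _, _) = next((l, e) for l, e in edges.items()
                                if e[1] == v and e[2] == color[v])
        s = lbl + s
        if src in visited:   # cycle: a recursive call overwrote history
            break
        visited.add(src)
        v = src
    return s
```

Replaying the call string $\mathit{ad}$ yields the context string $\mathit{ad}$; resetting the colors and replaying $\mathit{bfed}$ yields $\mathit{ed}$, matching the two examples.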
\subsubsection{Code Libraries}
\label{sec:library}
\sloppypar{%
In contrast to a complete program, a separately compiled code library may
have several entry points, namely, the publicly exposed functions. Further,
each entry point may have incoming edges from internal calls within the
library. This poses a problem for the coloring scheme described thus far,
illustrated in the following example.
%
\begin{example}\label{ex:library}
Assume that vertex $C$ in Figure~\ref{fig:color-init} also is an entry
point, along with vertex $A$. Then the set of context strings listed in
Section~\ref{sec:eq-rel} is extended with $\mathit{c}$ and $\mathit{d}$. The reason for this is that a call directly to $C$ can result in the path $c$ or path $d$, without visiting any other edges.
%
Further, assume that the call string $\mathit{ac}$ has been taken
immediately before a call to $C$ is made and the call string $c$ is taken. Then the coloring state of
the graph would be like in Figure~\ref{fig:color-step-1}, if we follow the
coloring scheme described thus far. From this state, we cannot determine
whether the current context string is $\mathit{ac}$ or $c$.
%
\qed
\end{example}
A possible solution to this problem is shown in
Figure~\ref{fig:library-2}. For each library node ($C$ in this
example), we add a sentinel vertex (denoted with a prime) which
directly connects to the original entry point of the library. If a
call goes via a sentinel vertex, the next vertex is colored
white. Hence, it is possible to distinguish whether the call
comes from the library entry point or from another vertex in the
graph.
\subsection{Implementation of Graph Coloring}
\label{sec:impl-transformations}
\newcommand{\TransP}[0]{\Program_t}
The input to the program transformations is a program $\Program$ with holes, as
well as the path to the tune file. The output is a transformed program $\TransP$
that performs graph coloring, and where each base hole in $\Program{}$ has been
replaced by code that statically looks up a value depending on the current
context.
We discuss the program transformations in Section~\ref{sec:trans-static}, and
analyze the runtime overhead of the transformed program in
Section~\ref{sec:overhead}.
\subsubsection{Program Transformations}
\label{sec:trans-static}
During compile-time, we build a call graph $G$ as defined in
Section~\ref{sec:context-intuition}. A key idea is that we do \emph{not} need to
maintain the call graph during runtime of the program; it is only used for
analysis during the program transformations.
In the transformed program, we introduce for
each vertex $v\in V'$, an integer reference $r_v$ whose initial value is $0$
(white).
The $\textsc{TraverseEdge}$ procedure in Algorithm~\ref{algo:gc-traverse} is
then implemented as a transformation. Immediately before a function call from $v_1$
to a function $v_2$, where $(v_1, v_2, l)\in E'$, we introduce an update of the
reference $r_{v_2}$ to the color (i.e., integer value) assigned to the edge
$(v_1, v_2, l)$.
Finally, determining the current context string, as informally described in
Examples~\ref{ex:coloring2} and~\ref{ex:coloring3} also requires a program
transformation, which we call \emph{context expansion}. In context expansion, we
replace each base hole in the program with code that first looks up the current
context string, and thereafter looks up the value of the associated context
hole. Determining the current context string for a hole of depth $d$ requires
checking the values of at most $d$ integer references. For example, if the
following declaration of a base hole exists in vertex $D$ in
Figure~\ref{fig:call-graph}:
\begin{mcore-lines}
hole (Boolean {default = true, depth = 3})
\end{mcore-lines}
then we replace it by the following program code:
\begin{mcore-lines}
switch deref $r_D$
case 1 then
switch deref $r_C$
case 1 then <lookup $\mathit{ac}$>
case 2 then <lookup $\mathit{bfc}$>
case 3 then <lookup $\mathit{ec}$>
end
case 2 then
switch deref $r_C$
case 1 then <lookup $\mathit{ad}$>
case 2 then <lookup $\mathit{bfd}$>
case 3 then <lookup $\mathit{ed}$>
end
end
\end{mcore-lines}
where \mcoreinline{deref} reads the value of a reference, each $r_v$ is the
reference storing the color of function $v$, and each \mcoreinline{<lookup
$\phantom{s}s$>} is code that looks up the value for the context hole
associated with the context string $s$.
In the final tuned program, each \mcoreinline{<lookup $\phantom{s}s$>} is simply
a static value: the value that has been tuned for context $s$ (tuned
compilation, see Section~\ref{sec:implementation}).
During tuning of the program, each \mcoreinline{<lookup $\phantom{s}s$>} is an
access into an array that stores the values of the context holes contiguously.
This array is read as input to the program via the tune file, so that the
program does not have to be re-compiled during tuning (see
Section~\ref{sec:implementation} for more details).
Note that a global hole (depth $0$) can be seen as having one context string,
namely the empty string, and thus does not need any \mcoreinline{switch}
statement.
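Operationally, the nested \mcoreinline{switch} is just a two-level lookup keyed on the color references. A Python analogue (ours; the tuned values shown are hypothetical placeholders, not output of the tuner):

```python
# Context-string table for the depth-3 hole in D, keyed on (r_D, r_C),
# following the edge colors of the running example
table = {
    (1, 1): "ac", (1, 2): "bfc", (1, 3): "ec",
    (2, 1): "ad", (2, 2): "bfd", (2, 3): "ed",
}
# Hypothetical tuned values, one per context hole
tuned = {"ac": True, "bfc": True, "ec": False,
         "ad": False, "bfd": True, "ed": False}

def hole_value(r_D, r_C):
    """Read the two color references and look up the tuned value."""
    return tuned[table[(r_D, r_C)]]
```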
\subsubsection{Adaptation to Parallel Execution}
\label{sec:parallel}
In a parallel execution setting, there might be more than one active call string
at any given time in the program. We make an adaptation to the program
transformation in order to handle a fixed number of threads $T$.
Instead of introducing \emph{one} reference per (relevant) function, we
introduce an array with $T$ number of references per (relevant) function. Each
thread $t$ is assigned an array index $t_i$. Each thread uses the references at
index $t_i$ only. In this way, we maintain up to $T$ active context strings
simultaneously.
If a thread pool is used, then the size of the thread pool needs to be
known at compile-time.
Otherwise, the transformation works as described in
Section~\ref{sec:trans-static}.
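As a sketch of the adaptation (ours; $T$ is assumed known at compile time), the single reference per function becomes an array indexed by thread id:

```python
T = 4                      # size of the thread pool, known at compile time
color_D = [0] * T          # one color slot per thread for function D

def traverse_edge_to_D(tid, edge_color):
    """Thread tid records its own incoming call to D."""
    color_D[tid] = edge_color

# Two threads taking different call paths do not interfere:
traverse_edge_to_D(0, 1)   # thread 0 reached D via a blue edge
traverse_edge_to_D(1, 2)   # thread 1 reached D via a magenta edge
```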
\subsubsection{Runtime Overhead in the Resulting Program}
\label{sec:overhead}
\newcommand{\ProgramDef}[0]{\Program_{\mathit{def}}}
As a baseline for runtime overhead, consider the original program $\Program{}$
where each base hole is replaced by its default value (default compilation, see
Section~\ref{sec:implementation}): we call this program $\ProgramDef$.
The overhead of the transformed program $\TransP$, compared to $\ProgramDef{}$
when the number of threads $T = 1$, includes initializing at most one integer
reference per function.
The program $\TransP$ performs at most one reference update per function call.
Moreover, $\TransP$ performs at most $d$ number of matches on references in
\mcoreinline{switch} statements each time it uses the value of a hole, where $d$
is the depth of the hole.
The underlying compiler can transform the \mcoreinline{switch} statements into
an indexed lookup table. This lookup table is compact by construction, as we
use contiguous integers as values representing colors.
When $T > 1$, $\TransP$ introduces at most one array of references per
function, and each reference update and reference read includes an indexing into
an array.
\section{Static Dependency Analysis}\label{sec:dependency-analysis}
This section discusses static dependency analysis. The goal is to detect holes
that can be tuned independently of each other. This information is later used
during tuning in order to reduce the search space.
Section~\ref{sec:dep-motivation} motivates the need for dependency analysis and
provides intuition, Section~\ref{sec:dep-definitions} gives the necessary
definitions used in Section~\ref{sec:dep-analysis}, which describes the
details of the dependency analysis. Finally,
Section~\ref{sec:dep-instrumentation} describes how the program is instrumented
given the result of the dependency analysis.
\subsection{Motivation and Running Example}\label{sec:dep-motivation}
Consider the following $k$-nearest neighbor ($k$-NN) classifier, which will be
our running example in this section:
\lstinputlisting[language=MCore,firstline=38,lastline=51]{examples/knn.mc}
The classifier takes three arguments:
the parameter \mcoreinline{k} of the algorithm;
the \mcoreinline{data} set, which is a sequence of tuples $(d,l)$, where $d$ is
a data point (representing an integer vector) and $l$ is the class label; and
the \mcoreinline{query} data point, whose class label we want to decide.
The algorithm has three steps.
In the first step, we compute the pairwise distances between the query and each
point in the data set, in this example using Euclidean distance.
In step two, we sort the pairwise distances. The first argument to the function
\mcoreinline{sort} is the comparison function, which in this case computes the
difference between two distances.
In the last step, we extract the $k$ nearest neighbors by taking the $k$ first
elements in the sorted sequence \mcoreinline{sortedDists}. Finally, we assume
that the \mcoreinline{mostCommonLabel} function returns the most frequent label
in a sequence of labels, so that the query point is classified to the most
common class among its neighbors.
\newcommand{\Hseq}[0]{h_{\text{seq}}}
\newcommand{\Hmap}[0]{h_{\text{map}}}
\newcommand{\Hsort}[0]{h_{\text{sort}}}
Now, assume that the $k$-NN classifier implicitly uses three holes.
The first hole, $\Hseq$, is for deciding the underlying data structure for the
sequences. In the Miking core language, a sequence can either be represented by
a cons list, or a Rope~\cite{ropes}. We can use a Boolean hole to choose the
representation when creating the sequence, by either calling the function
\mcoreinline{createList} or \mcoreinline{createRope}.
The second hole, $\Hmap$, chooses between sequential or parallel code in the
\mcoreinline{map} function, see Example~\ref{example-map2}.
Finally, the third hole, $\Hsort$, chooses between two sorting algorithms
depending on an unknown threshold value, see Example~\ref{ex:global}.
When tuning the classifier, these three choices need to be taken into
consideration. If the program is seen as a black box, then an auto-tuner needs
to consider the \emph{combination} of each of these choices.
In this small example, we quickly see that while some choices are indeed
necessary to explore in combination, others can be explored in isolation.
Choices that must be explored together, called \emph{dependent} choices, are for
instance:
(i) the underlying sequence representation and the \mcoreinline{map} function
($\Hseq$ and $\Hmap$); and (ii) the underlying sequence representation and the
\mcoreinline{sort} function ($\Hseq$ and $\Hsort$).
In both cases, this is because the sequence representation affects the execution
time of the operations performed on the sequence in the respective function.
On the other hand, the \mcoreinline{sort} function and the \mcoreinline{map}
function do \emph{not} need to be explored in combination with each other: the
holes $\Hmap$ and $\Hsort$ are \emph{independent} of each other. Regardless of
what choice is made in the \mcoreinline{map} function (sequential or parallel),
the result of the function is the same, which means that the \mcoreinline{sort}
function should be unaffected.\footnote{We say \emph{should} here as cache
effects from \mcoreinline{map} may still affect the execution time of
\mcoreinline{sort}.}
With knowledge about independencies, an auto-tuner can use the tuning time in a
more intelligent way, as it does not need to waste time exploring unnecessary
combinations of holes.
The remainder of this section describes how we can automatically detect
(in-)dependencies such as the examples discussed here, using static analysis.
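The independence criterion itself is simple set disjointness. As an illustration (ours; the affected sets shown are what we expect such an analysis to produce for the classifier, not its actual output):

```python
# Hypothetical hole -> affected-subexpression sets for the k-NN classifier
affects = {
    "h_seq":  {"dists", "sortedDists"},   # sequence representation
    "h_map":  {"dists"},                  # parallel or sequential map
    "h_sort": {"sortedDists"},            # sorting threshold
}

def independent(h1, h2):
    """Two holes are independent iff their affected sets are disjoint."""
    return affects[h1].isdisjoint(affects[h2])
```

Here $\Hmap$ and $\Hsort$ come out independent, while $\Hseq$ depends on both, matching the discussion above.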
\subsection{Definitions}\label{sec:dep-definitions}
Before discussing the details of the dependency analysis, we need to define the
entities that constitute dependency: measuring points and dependency graphs.
\subsubsection{Measuring Points}\label{sec:dep-measuring-points}
Intuitively, two holes are independent if they affect the execution time of
disjoint parts of the program. That is, we want to find the set of
subexpressions of the program whose execution times are affected by a given
hole. There are often many such subexpressions. For instance, the complete
\mcoreinline{knnClassify} in Section~\ref{sec:dep-motivation} is a subexpression
whose execution time depends on three holes: $\Hseq$, $\Hmap$, and $\Hsort$.
Moreover, the subexpression on
Lines~\ref{l:knn-dists-start}--\ref{l:knn-dists-end} (the computation of
\mcoreinline{dists}) in \mcoreinline{knnClassify} depends on $\Hseq$ and
$\Hmap$.
So how do we choose which subexpressions are relevant in the dependency
analysis?
Clearly, it is not useful to consider overly large subexpressions of the program.
This is because two holes $h_1$ and $h_2$ may affect a large subexpression $e$,
even though in reality they only affect smaller, disjoint subexpressions $e_1$
and $e_2$, respectively, where $e_1$ and $e_2$ are subexpressions of $e$.
Therefore, we want to find small subexpressions whose execution time depends on
a given hole. We exemplify the type of expressions we are interested in for
\mcoreinline{knnClassify} in Example~\ref{ex:measuring-points}, before going
into details.
\begin{example}\label{ex:measuring-points}
%
Assume that the \mcoreinline{map} function is given by
Example~\ref{example-map2}, and the \mcoreinline{sort} function is given by
Example~\ref{ex:global}.
%
A small subexpression affected by~$\Hmap$ is Line~\ref{l:par-ite} in
\mcoreinline{map} (the if-then-else expression), because which branch is taken
is decided by the hole \mcoreinline{par}.
%
Similarly, the if-then-else expression on
Lines~\ref{l:sort-ite-1}--\ref{l:sort-ite-2} in \mcoreinline{sort} is a
small subexpression affected by~$\Hsort$.
%
These two subexpressions are also affected by $\Hseq$, because the execution
times of the branches depend on the underlying sequence representation.
%
Furthermore, Line~\ref{l:knn-subsequence} in \mcoreinline{knnClassify} is a
minimal subexpression affected by $\Hseq$, because the execution time of
\mcoreinline{subsequence} also depends on the underlying sequence
representation.
%
\qed{}
\end{example}
We call such a small subexpression, whose execution time depends on at least one
hole, a \emph{measuring point}. The rationale behind the name is that we measure the
execution time of these subexpressions by using instrumentation (see
Section~\ref{sec:dep-instrumentation}).
The Miking language, being a core language, consists of relatively few language
constructs; any higher-order language implemented on top of Miking will compile
down to this set of expressions. The expressions that can constitute
measuring points are either:
\begin{enumerate}
\item a match statement (including if-then-else expressions); or
\item a call to a function \mcoreinline{f x}, where \mcoreinline{f} is either a
built-in function (such as \mcoreinline{subsequence}), or user-defined.
\end{enumerate}
Section~\ref{sec:dep-analysis} clarifies under which circumstances these
expressions are measuring points. Other types of expressions in Miking, such as
lambda expressions, constants, records, and sequence literals, are not relevant
for measuring execution time.
\subsubsection{Dependency Graph}\label{sec:dep-dependency-graph}
\begin{wrapfigure}{r}{0.4\linewidth}
\centering
\includegraphics[trim={0 2cm 0
0},clip,width=0.16\textwidth]{knn-dep-graph.pdf}
\caption{Dependency graph for the \mcoreinline{knnClassify} function in
Section~\ref{sec:dep-motivation}. The set $H$ is the set of holes (the
choice of sequence representation, choice of sequential or parallel map,
and the choice of sorting function, respectively). The set $M$ is the
set of measuring points, where $m_1$ is Line~\ref{l:knn-subsequence} of
\mcoreinline{knnClassify}, point $m_2$ is Line~\ref{l:par-ite} of
\mcoreinline{map}, and $m_3$ is
Lines~\ref{l:sort-ite-1}--\ref{l:sort-ite-2} of
\mcoreinline{sort}.}\label{fig:knn-dep-graph}
\end{wrapfigure}
We now define dependency graphs.
Given a program $\Program{}$ with a set of context holes $\HolesSet{}$, and a
set of measuring points $\MeasSet{}$, its \emph{dependency graph} is a bipartite
graph $(\HolesSet{},\MeasSet{},E)$. There is an edge $(h,m) \in E$, $h\in H$,
$m\in M$, iff the hole $h$ affects the execution time of $m$.
\begin{example}
A (partial) dependency graph of \mcoreinline{knnClassify} is given by
Figure~\ref{fig:knn-dep-graph}. It is partial because there are more measuring
points in the functions \mcoreinline{mostCommonLabel} and \mcoreinline{pmap}
(because they contain sequence operations, whose execution times depend on
$\Hseq$), but these functions are omitted for brevity.
%
%
\qed{}
\end{example}
The dependency graph encodes the dependencies among the program holes. For
instance, in Figure~\ref{fig:knn-dep-graph}, we see that $\Hseq$ and $\Hmap$ are
dependent, because they both affect $m_2$, while $\Hmap$ and $\Hsort$ are
independent, because they have no measuring point in common.
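To make this check concrete, the following Python sketch (our own encoding, not
part of the Miking tool) represents the dependency graph in
Figure~\ref{fig:knn-dep-graph} as a set of edges and derives (in-)dependence
between holes from shared measuring points:

```python
# Hypothetical encoding of the dependency graph in Figure knn-dep-graph:
# Hseq affects m1, m2, m3; Hmap affects m2; Hsort affects m3.
edges = {
    ("Hseq", "m1"), ("Hseq", "m2"), ("Hseq", "m3"),
    ("Hmap", "m2"), ("Hsort", "m3"),
}

def measuring_points(hole):
    """The measuring points whose execution time is affected by `hole`."""
    return {m for (h, m) in edges if h == hole}

def dependent(h1, h2):
    """Two holes are dependent iff they affect a common measuring point."""
    return bool(measuring_points(h1) & measuring_points(h2))
```

For this graph, \mcoreinline{dependent} reports that $\Hseq$ is dependent on
both other holes, while $\Hmap$ and $\Hsort$ are independent.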
\subsection{Dependency Analysis}\label{sec:dep-analysis}
The goal of the dependency analysis is to compute the dependency graph of a
given program.
\subsubsection{0-CFA analysis}\label{sec:cfa-analysis}
The backbone of the dependency analysis is $0$-CFA
analysis~\cite{principles-of-program-analysis}. We extend standard $0$-CFA
analysis, which tracks data-flow of functions, to additionally track data-flow
of holes.
The result is that we compute for each subexpression in the program the set of
holes that affect the \emph{value} of the subexpression.
Standard data-flow rules apply.
The first two columns of Table~\ref{tab:dep-data} show the result of the
data-flow analysis for a few example subexpressions from
\mcoreinline{knnClassify}.
We denote the data dependency of a subexpression by the set of holes whose value
the subexpression depends on.
In the first row, the variable \mcoreinline{query} depends on $\Hseq$ because
the variable refers to a sequence whose representation is decided by $\Hseq$.
Second, the if-then-else expression from the \mcoreinline{map} function depends
on both $\Hseq$ and $\Hmap$, because the condition of the if-then-else depends
on $\Hmap$, and the result of the subexpression is again a sequence dependent on
$\Hseq$.
In the third row, the variable \mcoreinline{dists} also depends on both $\Hseq$
and $\Hmap$, because the variable refers to the result of the \mcoreinline{map}
function.
In the fourth row, the expression on
Lines~\ref{l:sort-ite-1}--\ref{l:sort-ite-2} in
Example~\ref{ex:global} depends on all three holes. It depends on $\Hsort$
because the condition of the if-then-else depends on $\Hsort$. It depends on
$\Hseq$ and $\Hmap$ because the branches of the if-then-else manipulate the
sequence referred to by \mcoreinline{dists}.
Finally, the call to \mcoreinline{subsequence} also depends on all holes,
because the built-in function \mcoreinline{subsequence} returns a sequence that
will have the same data dependency as its input sequence,
\mcoreinline{sortedDists}.
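These data dependencies can be understood as a fixed-point computation: the
holes affecting an expression flow to every expression its value flows into. A
minimal Python sketch of this propagation, over a hand-written set of flow
constraints (our own simplification of the $0$-CFA constraints, not the actual
analysis):

```python
def propagate(initial, flows):
    """Propagate hole sets along data-flow constraints (src, dst) to a fixed point."""
    deps = {v: set(hs) for v, hs in initial.items()}
    changed = True
    while changed:
        changed = False
        for src, dst in flows:
            target = deps.setdefault(dst, set())
            new = deps.get(src, set()) - target
            if new:
                target |= new
                changed = True
    return deps

# `query` refers to a sequence whose representation is decided by Hseq;
# `par` is bound to the hole Hmap; the if-then-else ("ite") result flows
# into the variable `dists`.
initial = {"query": {"Hseq"}, "par": {"Hmap"}}
flows = [("query", "ite"), ("par", "ite"), ("ite", "dists")]
deps = propagate(initial, flows)
```

The fixed point assigns both \mcoreinline{dists} and the if-then-else the set
$\{\Hseq,\Hmap\}$, matching rows 2 and 3 of Table~\ref{tab:dep-data}.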
\begin{table}
\centering
\caption{Result of the dependency analysis for a subset of the subexpressions
in \mcoreinline{knnClassify}, using Examples~\ref{ex:global} and
\ref{example-map2} as implementations of \mcoreinline{sort} and
\mcoreinline{map}, respectively.
%
Columns~$2$--$3$ and $4$--$5$ show the data dependency and the execution
time dependency of each subexpression, for a program without and with
annotations, respectively.
%
We see that without the annotation, the analysis concludes that all three
holes are execution time dependent, while the version with annotations
results in the dependency graph in Figure~\ref{fig:knn-dep-graph}.
}\label{tab:dep-data}
\begin{tabular}{ccccc}
& \multicolumn{2}{c}{Without annotations} & \multicolumn{2}{c}{With annotations} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}
Subexpression & Data dep. & Exe. dep. & Data dep. & Exe. dep. \\
\midrule
\mcoreinline{query} & $\{\Hseq\}$ & $\varnothing$ & $\{\Hseq\}$ & $\varnothing$ \\
\mcoreinline{if par then ... else ...} & $\{\Hseq,\Hmap\}$ & $\{\Hseq,\Hmap\}$ & $\{\Hseq\}$ & $\{\Hseq,\Hmap\}$ \\
\mcoreinline{dists} & $\{\Hseq,\Hmap\}$ & $\varnothing$ & $\{\Hseq\}$ & $\varnothing$ \\
Lines~\ref{l:sort-ite-1}--\ref{l:sort-ite-2} in Example~\ref{ex:global} & $\{\Hseq,\Hmap,\Hsort\}$ & $\{\Hseq,\Hmap,\Hsort\}$ & $\{\Hseq\}$ & $\{\Hseq,\Hsort\}$ \\
\mcoreinline{subsequence sortedDists 0 k} & $\{\Hseq,\Hmap,\Hsort\}$ & $\{\Hseq,\Hmap,\Hsort\}$ & $\{\Hseq\}$ & $\{\Hseq\}$ \\
\end{tabular}
\end{table}
Recall that we are interested in subexpressions whose \emph{execution time}
(\emph{not} value) depends on holes: these are the measuring points of the
program. Luckily, we can incorporate the analysis of measuring points into the
$0$-CFA, by using the data-flow information of the holes.
Besides data dependency, we introduce another kind of dependency:
\emph{execution time dependency}.
A subexpression with a non-empty execution time dependency is a measuring point.
There are two kinds of expressions that may give rise to execution time
dependency, i.e., measuring points: match expressions, and calls to functions.
\paragraph{Match Expressions.}
Given a match expression \mcoreinline{e} of the form \mcoreinline{match e1 with
pat then e2 else e3}, the following two rules apply:
(1) If \mcoreinline{e1} is \emph{data}-dependent on a hole $h$, then
\mcoreinline{e} is \emph{execution time}-dependent on $h$; and
(2) If \mcoreinline{e2} executes (directly or via a function call) another
subexpression that is \emph{execution time}-dependent on a hole $h$, then
\mcoreinline{e} is also execution time-dependent on $h$, and the same applies
for \mcoreinline{e3}.
The justification of rule 1 is that if the decision of which branch to take
depends on a hole, then the execution time of the match expression depends on
the hole. The justification of rule 2 is that the execution time of the whole
subexpression \mcoreinline{e} should include any execution time dependencies of
the individual branches.
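The two rules can be written down directly. The sketch below is a simplified
Python instance (not the actual analysis code) computing the execution time
dependency of a match expression from the data dependency of its condition
(rule 1) and the execution time dependencies of the subexpressions its branches
run (rule 2):

```python
def match_exe_dep(cond_data_dep, branch_exe_deps):
    """Rule 1: include holes that the condition is data-dependent on.
    Rule 2: include execution-time dependencies of expressions that the
    branches execute."""
    deps = set(cond_data_dep)
    for branch in branch_exe_deps:
        deps |= branch
    return deps

# Simplified instance of row 4 in Table tab:dep-data: the condition is
# data-dependent on Hsort, and both branches run sequence operations that
# are execution-time dependent on Hseq.
deps = match_exe_dep({"Hsort"}, [{"Hseq"}, {"Hseq"}])
```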
The third column of Table~\ref{tab:dep-data} shows execution time dependencies
of some subexpressions in \mcoreinline{knnClassify}. Rows 2 and 4 are match
expressions.
Note that in the Miking core language, an if-then-else expression is syntactic
sugar for a match expression where \mcoreinline{pat} is \mcoreinline{true}.
The conditions of the match expressions depend on $\Hmap$ and $\Hsort$,
respectively, thus these holes are included in the execution time dependencies.
The branches of each subexpression perform sequence operations, which will be
measuring points dependent on $\Hseq$. Thus, the execution time of the match
expressions also depends on $\Hseq$.
The dependency on row 4 also includes $\Hmap$, because the input sequence,
\mcoreinline{dists}, has a data dependency on $\Hmap$.
\paragraph{Function calls.}
If the expression \mcoreinline{e} is the result of an application of a
\emph{built-in} function, then custom rules apply for each built-in. For
instance, for the expression \mcoreinline{subsequence s i j}, if the sequence
\mcoreinline{s} is \emph{data-dependent} on a hole $h$, then \mcoreinline{e} is
\emph{execution time}-dependent on $h$.
The \mcoreinline{subsequence} expression in the last row of
Table~\ref{tab:dep-data} has the same execution time dependency as its data
dependency, by following this rule.
In addition, for all function calls \mcoreinline{e} of the form \mcoreinline{e1
e2}, if \mcoreinline{e1} is \emph{data}-dependent on a hole $h$, then
\mcoreinline{e} is \emph{execution time}-dependent on $h$. As a simple example,
the function call \mcoreinline{(if h then f else g) x} is a measuring point,
given that \mcoreinline{h} is data-dependent on some hole. In other words, since
the left hand side of the application is determined by a hole, the execution
time of the function call depends on a hole.
Note that the function call on
Lines~\ref{l:knn-dists-start}--\ref{l:knn-dists-end} in
\mcoreinline{knnClassify} does \emph{not} constitute a measuring point, even
though its execution time depends on $\Hseq$ and $\Hmap$. The reason is that the
function \mcoreinline{map} itself is not data-dependent on a hole. The relevant
execution times of the \mcoreinline{map} function are already captured by
measuring points within the \mcoreinline{map} function.
\subsubsection{Call Graph Analysis}\label{sec:call-graph-after-cfa}
The $0$-CFA analysis finds the set of measuring points of the program and
attaches an initial set of dependencies to each measuring point. Some
dependencies, however, are not captured in the $0$-CFA analysis. Namely, if a
measuring point $m_1$ executes \emph{another} measuring point $m_2$, then the
holes that affect $m_2$ also affect $m_1$.
For instance, the expression \mcoreinline{if par then pmap f s else smap f s}
executes any measuring points within the \mcoreinline{pmap} and
\mcoreinline{smap} function.
We perform another analysis step that analyzes the call graph of the program, in
order to find the set of measuring points that each measuring point executes.
This analysis step does not introduce any new measuring points, but it adds
more dependencies (edges in the dependency graph).
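This step amounts to a closure over the edge set: whenever a measuring point
executes another one, every hole affecting the inner point also affects the
outer point. A Python sketch with hypothetical hole and measuring point names
(our own illustration of the call graph step):

```python
def close(edges, executes):
    """Add an edge (h, outer) whenever (h, inner) exists and the measuring
    point `outer` may execute the measuring point `inner`."""
    result = set(edges)
    changed = True
    while changed:
        changed = False
        for outer, inners in executes.items():
            for h, m in list(result):
                if m in inners and (h, outer) not in result:
                    result.add((h, outer))
                    changed = True
    return result

# Hypothetical graph: h1 affects mB directly, h2 affects mA directly,
# and measuring point mA executes mB (found via the call graph).
edges = {("h1", "mB"), ("h2", "mA")}
executes = {"mA": {"mB"}}
closed = close(edges, executes)
```

The closure adds the edge \mcoreinline{(h1, mA)} without introducing any new
measuring points, exactly as described above.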
\subsubsection{False Positives and Annotations}\label{sec:annotations}
As we have seen so far, the dependencies (both for data and execution time) on
some subexpressions in Table~\ref{tab:dep-data} are unnecessarily large. For
instance, the \emph{value} of the subexpression on row 2 should intuitively
\emph{not} depend on $\Hmap$. After all, whether the map is performed in
parallel or sequentially does not affect the final result.
In other words, the data dependency on $\Hmap$ on row 2 is a false positive.
The result of false positives on data dependencies is that some execution time
dependencies may also be unnecessarily large. As we see in
Table~\ref{tab:dep-data}, the false positive on $\Hmap$ on row 2 propagates to
the data dependency of row 3 (\mcoreinline{dists}), which in turn affects the
\emph{execution time} dependencies of rows 4 and 5.
While it is in general hard for a compiler to detect, for instance, that
parallel and sequential code gives the same end result, or that two sorting
functions are equivalent, this information is typically obvious for a
programmer.
Therefore, we introduce the option to add \emph{annotations} to a program to
reduce the number of false-positive dependencies. The annotation states the set
of variables that a match expression is independent of, and is added directly
after a match expression using the keyword \mcoreinline{independent}.
For instance, replacing Line~\ref{l:par-ite} in Example~\ref{example-map2} with
\mcoreinline{independent (if par then pmap f s else smap f s) par}
states that the value of the match expression is independent of the variable
\mcoreinline{par}. More variables can be included in the set by nesting several
\mcoreinline{independent} annotations, e.g. \mcoreinline{independent (independent
<e> x) y}.
By incorporating this information in the analysis, the data dependency on the
independent set is ignored for the match expression.
Columns $4$--$5$ in Table~\ref{tab:dep-data} show the result of the analysis
given that the match expressions on rows 2 and 4 have been annotated to be
independent of the variables \mcoreinline{par} and \mcoreinline{threshold},
respectively. We see that the execution time dependencies now match the
dependency graph in Figure~\ref{fig:knn-dep-graph}. Row 2 in
Table~\ref{tab:dep-data} corresponds to $m_2$, row 4 corresponds to $m_3$, and
row 5 corresponds to $m_1$.
\subsubsection{Context-Sensitive Measuring Points}\label{sec:dep-context}
A property of $0$-CFA is that it does not include context information for the
data-flow, unlike $k$-CFA for $k>0$. While we are limited to $0$-CFA for
efficiency reasons, it is necessary to consider the contexts of
context-sensitive holes.
Therefore, we consider the context strings (see
Section~\ref{sec:transformations}) during the dependency analysis.
As an example, consider the \mcoreinline{map} function in
Example~\ref{example-map2}. Assume that it is called from two locations, so that
there are two possible call strings for the hole \mcoreinline{par}: $s_1$ and
$s_2$. During analysis of the measuring point on Line~\ref{l:par-ite}, we
conclude that the execution time depends \emph{either} on the context hole
associated with $s_1$, \emph{or} the one associated with $s_2$, but not both.
This is taken into account during instrumentation of the program, see
Section~\ref{sec:dep-instrumentation}.
\subsection{Instrumentation}\label{sec:dep-instrumentation}
The instrumentation is a program transformation step, where the input is the
program $p$ and the dependency graph $G=(\HolesSet{},\MeasSet{},E)$. The output
is an instrumented program $p_I$ that collects execution time information for
each measuring point.
Section~\ref{sec:instrumentation-challenges} introduces three challenges when
designing the instrumentation. In Section~\ref{sec:instrumentation-overview}, we
present the proposed design and clarify how the design addresses the identified
challenges.
\subsubsection{Challenges}\label{sec:instrumentation-challenges}
Assume that we wish to instrument the measuring point in row 2 in
Table~\ref{tab:dep-data} on page~\pageref{tab:dep-data}: \mcoreinline{if par
then pmap f s else smap f s}. A naive approach is to save the current time
before and after the expression has been executed, and then record the elapsed
time after the expression has been executed.
However, there are a number of problems with this simple solution.
First, the measuring point can execute another measuring point. In this specific
case, it executes any measuring points within the \mcoreinline{pmap} or
\mcoreinline{smap} functions. If we do not keep track of whether a measuring
point executes within another one, we will count some execution times more than
once, which gives an inaccurate total execution time of the program.
Second, this simple instrumentation approach does not allow for tail-call
optimizations. The reason is that after the transformation of the program, some
operations are performed \emph{after} the execution of the measuring point. The
result is that a recursive call within the measuring point will no longer be in
tail position.
The third challenge has to do with context-sensitivity. For instance, assume
that \mcoreinline{map} in Example~\ref{example-map2} is called from two
locations. The instrumented code must then consider these two calling contexts
when recording the execution of the measuring point.
\subsubsection{Solution}\label{sec:instrumentation-overview}
The instrumentation introduces a number of global variables and functions in the
program, maintaining the current execution state via a lock mechanism. Moreover,
every measuring point is uniquely identified by an integer. In particular:
\begin{itemize}
\item
The variable \mcoreinline{lock} stores the identifier of the measuring
point that is currently running, where the initial value $0$ means that no
measuring point is running.
%
Note that we do not mean a lock for parallel execution; we can still execute
and measure parallel execution of code.
\item
The array \mcoreinline{s} of length $|M|$, where \mcoreinline{s}$[i]$
stores the latest seen start time of the $i$th measuring point.
\item
The array \mcoreinline{log} of length $|M|$, where \mcoreinline{log}$[i]$
stores a tuple $(T,n)$ where $T$ is the accumulated execution time, and $n$ is
the number of runs, of the $i$th measuring point.
\item
The function \mcoreinline{acquireLock} takes an integer $i>0$ (a unique
identifier of a measuring point) as argument, and is called upon entry of a
measuring point. If the \mcoreinline{lock} equals $0$, then the function sets
\mcoreinline{lock} to $i$, and writes the current time to
$\mcoreinline{s}[i]$. Otherwise, that is, if the \mcoreinline{lock} is already
taken, the function does nothing.
\item
The function \mcoreinline{releaseLock} also takes an integer identifier
$i>0$ as argument, and is called when a measuring point exits. If the
\mcoreinline{lock} equals $i$, then the function sets \mcoreinline{lock} to
$0$, and adds the elapsed execution time to the global \mcoreinline{log}. If
the \mcoreinline{lock} is taken by some other measuring point, the function
does nothing.
\end{itemize}
After the instrumented program is executed, the array \mcoreinline{log} stores
the accumulated execution time and the number of runs for each measuring point.
As a result, the measuring point in row 2 in
Table~\ref{tab:dep-data} is replaced by the following lines of code:
\begin{mcore-lines}
acquireLock $\mathit{i}$; --*\label{l:instr-1}*--
let v = if par then pmap f s else smap f s in --*\label{l:instr-2}*--
releaseLock $\mathit{i}$; --*\label{l:instr-3}*--
v
\end{mcore-lines}
The \mcoreinline{lock} design addresses the first of the identified challenges:
keeping track of whether a measuring point is executed within another one.
Because only one measuring point can possess the lock at any given moment, we do
not record the execution time of a measuring point that runs within another one.
Thus, the sum of the execution times in the \mcoreinline{log} array
never exceeds the total execution time of the program.
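This bookkeeping can be mimicked in a few lines of Python. The sketch below is
our own illustration of the \mcoreinline{acquireLock}/\mcoreinline{releaseLock}
behavior, not the code emitted by the instrumentation:

```python
import time

NUM_POINTS = 3
lock = 0                              # 0 means no measuring point is running
s = [0.0] * (NUM_POINTS + 1)          # latest start time, ids 1..NUM_POINTS
log = [(0.0, 0)] * (NUM_POINTS + 1)   # (accumulated time, number of runs)

def acquire_lock(i):
    global lock
    if lock == 0:                     # record only if no other point is running
        lock = i
        s[i] = time.perf_counter()

def release_lock(i):
    global lock
    if lock == i:                     # ignore exits of nested measuring points
        lock = 0
        elapsed = time.perf_counter() - s[i]
        total, runs = log[i]
        log[i] = (total + elapsed, runs + 1)

def measured(i, thunk):
    """The acquireLock/releaseLock wrapping of measuring point i."""
    acquire_lock(i)
    v = thunk()
    release_lock(i)
    return v
```

Running \mcoreinline{measured(1, lambda: measured(2, lambda: 42))} records one
run for point $1$ only; the nested point $2$ is ignored, so the sum of the
recorded times cannot exceed the total running time.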
We now consider the second challenge: allowing for tail-call optimization.
For a measuring point with a recursive call \mcoreinline{f x} in tail position,
for instance \mcoreinline{if <$\mathit{cond}$> then <$\mathit{base case}$> else f x},
the call to \mcoreinline{releaseLock} is placed in the base case only, so that
the recursive call remains in tail position:
\begin{mcore-lines}
acquireLock $\mathit{i}$;
if <$\mathit{cond}$> then
let v = <$\mathit{basecase}$> in
releaseLock $\mathit{i}$;
v
else f x
\end{mcore-lines}
There can be more than one call to \mcoreinline{releaseLock} in each base case,
because of measuring points in mutually recursive functions.
The instrumentation analyzes the (mutually) recursive functions within the
program and inserts the necessary calls to \mcoreinline{releaseLock} in the base
cases of these functions.
The third challenge, dependency on context-sensitive holes, is addressed
similarly as in Section~\ref{sec:trans-static}. Consider again the measuring
point \mcoreinline{if par then pmap f s else smap f s} within the
\mcoreinline{map} function, and assume that there are two possible calls to
\mcoreinline{map}. The measuring point is assigned a different identifier
depending on which of these contexts is active. The identifier is found by a
\mcoreinline{switch} expression of depth 1, reading the current color of the
\mcoreinline{map} function.
If there is only one call to \mcoreinline{map}, then the identifier is simply an
integer, statically inserted into the program.
\section{Dependency-Aware Tuning}\label{sec:dep-tuning}
In contrast to standard program tuning, dependency-aware tuning takes the
dependency graph into account to reduce the search space of the problem.
This section describes how to explore this reduced search space, and how to find
the expected best configuration given a set of observations.
\subsection{Reducing the Search Space Size}\label{sec:reduce-search-space}
\begin{figure}
\begin{subfigure}[t]{.30\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{dep-graph-dependent.pdf}
\caption{Dependency graph of a program with fully dependent
holes. All $2^n$ possible combinations need to be taken into
consideration when tuning.}\label{fig:bipartite-dependent}
\end{subfigure}\hfill
\begin{subfigure}[t]{.30\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{dep-graph-independent.pdf}
\caption{Dependency graph of a program with fully \emph{in}dependent
holes. Two program runs suffice to exhaust the search space when
fine-grained instrumentation of each measuring point is used.
}\label{fig:bipartite-independent}
\end{subfigure}\hfill
\begin{subfigure}[t]{.30\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{dep-graph-partial.pdf}
\caption{Dependency graph that is neither fully dependent nor fully
independent. The number of required program runs with instrumentation is
$2^r$, where $r$ is the maximum number of holes affecting any given
measuring point. }\label{fig:bipartite-partial}
\end{subfigure}
\caption{Possible dependency graphs for a program with $n$ holes of Boolean
type. In all cases, there are $2^n$ possible configurations.
However, the more independence, the fewer of these configurations need to
be considered when tuning.}\label{fig:bipartite}
\end{figure}
In standard program tuning (without dependency analysis), each hole needs to be
tuned in combination with every other hole, which means that the number of
configurations to consider grows exponentially with the number of holes.
\begin{example}\label{ex:dep-fully-dep}
Consider a program with $n$ Boolean holes, which has a search space of size
$2^n$.
%
With no dependency analysis, we may view the program as consisting of one
measuring point, affected by all the holes in the program. This corresponds to
the dependency graph in Figure~\ref{fig:bipartite-dependent}. If exhaustive
search is used, then $2^n$ program runs are required to find the optimal
configuration.
%
\qed{}
\end{example}
Dependency analysis finds the fraction of the total number of configurations
that are relevant to evaluate during tuning, as illustrated in
Examples~\ref{ex:dep-fully-indep} and~\ref{ex:dep-partial}.
\begin{example}\label{ex:dep-fully-indep}
Consider the program from Example~\ref{ex:dep-fully-dep}.
%
Figure~\ref{fig:bipartite-independent} shows a dependency graph where all the
holes are completely independent, so that each measuring point is affected by
exactly one hole.
%
%
If instrumentation is used, we collect the execution time for each measuring
point in isolation. In this case, it is enough to run $2$ configurations to
exhaust the search space. For example, we can run one configuration where all
holes are set to \mcoreinline{true}, and one where they are set to
\mcoreinline{false}. After this, the optimal configuration is found by
considering the results for each hole in isolation and determining whether its
value should be \mcoreinline{true} or \mcoreinline{false}.
If end-to-end time measurement is used for the dependency graph in
Figure~\ref{fig:bipartite-independent}, then $n+1$ program runs are required.
For example, one run where all holes are set to \mcoreinline{false}, followed
by $n$ runs where each hole at a time is set to \mcoreinline{true}, while
keeping the remaining holes fixed.
\qed{}
\end{example}
\begin{example}\label{ex:dep-partial}
Again considering the program from Example~\ref{ex:dep-fully-dep},
Figure~\ref{fig:bipartite-partial} shows a scenario where the holes are
neither fully dependent nor fully independent. Assume that $n=4$ and $m=3$, so
that the dependency graph contains only the holes and measuring points that
are visible (without the ``$\ldots$'' parts). There are at most $2$ holes that
affect any given measuring point, and each hole has $2$ possible values.
Therefore it is enough to consider $2\cdot 2=4$ configurations. For example,
we may consider the ones listed in Table~\ref{tab:configurations}, though this
table is not unique.
%
Note that the table contains all combinations of values for $\{h_1, h_2\}$, for
$\{h_2, h_3\}$, and for $\{h_4\}$, respectively. However, some combinations of
$\{h_1,h_3\}$ are missing, because $h_1$ and $h_3$ do not have any measuring point
in common.
%
\begin{table}
\centering
\begin{tabular}{lllllccc}
& $h_1$ & $h_2$ & $h_3$ & $h_4$ & $m_1$ & $m_2$ & $m_3$ \\
$1$ & \mcoreinline{false} & \mcoreinline{false} & \mcoreinline{false} & \mcoreinline{false}
& 7 & 5 & 1 \\
$2$ & \mcoreinline{false} & \mcoreinline{true} & \mcoreinline{false} & \mcoreinline{true}
& 2 & 4 & 2 \\
$3$ & \mcoreinline{true} & \mcoreinline{false} & \mcoreinline{true} & \mcoreinline{?}
& 3 & 6 & \mcoreinline{?} \\
$4$ & \mcoreinline{true} & \mcoreinline{true} & \mcoreinline{true} & \mcoreinline{?}
& 6 & 3 & \mcoreinline{?} \\
\end{tabular}
\caption{Columns $h_1$--$h_4$ show the four possible combinations of values
for the four holes in the dependency graph in
Figure~\ref{fig:bipartite-partial}, for $n=4$ and $m=3$. %
%
Columns $m_1$--$m_3$ show possible costs for the three measuring points.
The cost of a measuring point could for instance be the sum (in seconds)
of the execution times over all invocations of the measuring point.
%
The \mcoreinline{?}s indicate that we may choose any value for $h_4$
in iterations $3$ and $4$, since we have already exhausted the
sub-graph consisting of $h_4$ and $m_3$.
Note that the four combinations listed in the table are not unique. For
instance, we may shift the values in the $h_1$ column one step to produce
another table. However, there are never more than four necessary
combinations to evaluate.}\label{tab:configurations}
\end{table}
\qed{}
\end{example}
We define the \emph{reduced search space size} given a dependency graph
$(H,M,E)$ as: $ \max\limits_{m\in M}\prod\limits_{(h,m)\in E} |\text{dom}(h)| $,
where $\text{dom}(h)$ denotes the domain, that is, the set of possible values,
for a hole $h$.
Applying this formula to Example~\ref{ex:dep-partial} gives $2^2=4$
configurations, as expected.
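The formula translates directly into code. The following Python sketch (function
and variable names are our own) evaluates it for the dependency graph in
Figure~\ref{fig:bipartite-partial}, restricted to $n=4$ Boolean holes and $m=3$
measuring points as in Example~\ref{ex:dep-partial}:

```python
from math import prod

def reduced_search_space(holes, points, edges, dom_size):
    """max over measuring points m of the product of |dom(h)| over all
    holes h with an edge (h, m) in the dependency graph."""
    return max(
        prod(dom_size[h] for h in holes if (h, m) in edges)
        for m in points
    )

# Dependency graph of Figure bipartite-partial with n = 4, m = 3:
# h1, h2 affect m1; h2, h3 affect m2; h4 affects m3.
holes = ["h1", "h2", "h3", "h4"]
points = ["m1", "m2", "m3"]
edges = {("h1", "m1"), ("h2", "m1"), ("h2", "m2"), ("h3", "m2"), ("h4", "m3")}
dom_size = {h: 2 for h in holes}      # Boolean holes
size = reduced_search_space(holes, points, edges, dom_size)
```

Here \mcoreinline{size} evaluates to $4$, compared to the $2^4=16$
configurations of the full search space.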
\subsection{Choosing the Optimal Configuration}\label{sec:choosing-optimal}
\newcommand{\ResultTable}[0]{T}
\newcommand{\ConfigMatrix}[0]{C}
\newcommand{\ObservationMatrix}[0]{O}
\newcommand{\Measures}[0]{K}
This section considers how to choose the optimal configuration, given an
objective value to be minimized and the observed results of a set of hole value
combinations.
Specifically, we assume that we have:
a dependency graph $G=(\HolesSet{},\MeasSet{},E)$;
a configuration matrix $\ConfigMatrix{}$ of dimension $r \times
|\HolesSet{}|$, where $\ConfigMatrix[i,j]$ gives the value of the $j$th hole
in the $i$th iteration (compare columns $h_1$--$h_4$ in
Table~\ref{tab:configurations});
%
a number of observation matrices $\ObservationMatrix_k{}$, each with
dimension $r \times |\MeasSet{}|$, for $k$ in some set of measures
$\Measures{}$. For the rest of this section, we assume that there is only one
measure, namely accumulated execution time. Thus, we denote the only
observation matrix by $O$. That is, $O[i,j]$ gives the accumulated execution
time for the $j$th measuring point in the $i$th iteration (compare columns
$m_1$--$m_3$ in Table~\ref{tab:configurations}).
%
The problem is to assign each hole in $\HolesSet{}$ a value in its domain,
such that the objective function is minimized, where the objective function is
built from the observation matrices. In this section, we assume that the
objective is to minimize the sum of the accumulated execution times for the
measuring points. However, the approaches discussed here are general enough to
handle any number of observation matrices, with some other custom objective
function.
Before presenting two general approaches for solving this problem, we consider
how to solve it for the example in Table~\ref{tab:configurations}:
\begin{example}
Consider the results for the measuring points $m_1$, $m_2$, and $m_3$ in
Table~\ref{tab:configurations}.
%
At first glance, the optimal configuration seems to be configuration 2, since
it has the lowest total execution time, namely $8$~s, out of the four options
(regardless of the value of $h_4$ in iterations $3$ and $4$, the total value
will exceed $8$).
%
However, the first improvement to this is that we can choose the value of
$h_4$ independently of the values of the other holes, since $h_4$ is disjoint
from the other holes in the dependency graph. We see that the best value for
$h_4$ is $\mcoreinline{false}$, giving $m_3$ the execution time $1$~s.
%
The second improvement is that we can choose the value of $h_1$ independently
of the value of $h_3$. With this in mind, the best values for $h_1$, $h_2$,
and $h_3$ are $\mcoreinline{false}$, $\mcoreinline{true}$, and
$\mcoreinline{true}$, respectively. This gives $m_1$ and $m_2$ the execution
times $2$~s and $3$~s, respectively.
%
Thus, the optimal configuration is one that is not explicitly listed in the
table, and has the estimated cost of $6$~s.
%
\qed{}
\end{example}
\newcommand{\TableConstraint}[0]{\texttt{table}}
\newcommand{\ListConcat}[0]{\mathit{++}}
The first approach is to consider each possible combination explicitly and pick
the combination giving the lowest total execution time.
As an optimization, we can consider each disjoint part (that is, each connected
component) of the dependency graph separately. In the example in
Table~\ref{tab:configurations}, this means that we create one explicit matrix
for the connected component consisting of the vertices $\{h_1,h_2,h_3,m_1,m_2\}$
and one for the connected component with vertices $\{h_4,m_3\}$.
That is, we would infer the matrix in Table~\ref{tab:m1m2} for the measuring
points $m_1$ and $m_2$, and similarly a matrix with two rows for $m_3$. From
these explicit matrices, we can directly find that the minimum expected cost is
$6$, when $h_1=\mcoreinline{false}$, $h_2=\mcoreinline{true}$,
$h_3=\mcoreinline{true}$, and $h_4=\mcoreinline{false}$.
\begin{table}
\centering
\begin{tabular}{llllcc}
& $h_1$ & $h_2$ & $h_3$ & $m_1$ & $m_2$ \\
$1$ & \mcoreinline{false} & \mcoreinline{false} & \mcoreinline{false}
& 7 & 5 \\
$2$ & \mcoreinline{false} & \mcoreinline{false} & \mcoreinline{true}
& 7 & 6 \\
$3$ & \mcoreinline{false} & \mcoreinline{true} & \mcoreinline{false}
& 2 & 4 \\
$4$ & \mcoreinline{false} & \mcoreinline{true} & \mcoreinline{true}
& 2 & 3 \\
$5$ & \mcoreinline{true} & \mcoreinline{false} & \mcoreinline{false}
& 3 & 5 \\
$6$ & \mcoreinline{true} & \mcoreinline{false} & \mcoreinline{true}
& 3 & 6 \\
$7$ & \mcoreinline{true} & \mcoreinline{true} & \mcoreinline{false}
& 6 & 4 \\
$8$ & \mcoreinline{true} & \mcoreinline{true} & \mcoreinline{true}
& 6 & 3 \\
\end{tabular}
\caption{Explicit listing of expected results for $m_1$ and $m_2$. The
combinations in rows $2$, $4$, $5$, and $7$ have never been run; they are
inferred from the observations in Table~\ref{tab:configurations}, taking
the dependency graph into account.
}\label{tab:m1m2}
\end{table}
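The per-component minimization over such explicit tables can be sketched in plain Python. This is our own illustrative sketch, not the Miking implementation; the time for $m_3$ when $h_4$ is \mcoreinline{true} is an assumed stand-in value, since only the $h_4=$ \mcoreinline{false} case is fixed above.

```python
# Explicit per-component tables inferred from the dependency graph.
# Component {h1,h2,h3,m1,m2}: data from Table "m1m2"; each key is a
# (h1,h2,h3) assignment, each value the expected times (m1,m2) in seconds.
component1 = {
    (False, False, False): (7, 5),
    (False, False, True):  (7, 6),
    (False, True,  False): (2, 4),
    (False, True,  True):  (2, 3),
    (True,  False, False): (3, 5),
    (True,  False, True):  (3, 6),
    (True,  True,  False): (6, 4),
    (True,  True,  True):  (6, 3),
}
# Component {h4,m3}: h4=False gives m3 = 1 s. The h4=True time (4 s) is
# a made-up stand-in; the text only fixes the h4=False case.
component2 = {(False,): (1,), (True,): (4,)}

def best_assignment(component):
    """Return the hole assignment minimizing the summed execution times."""
    return min(component.items(), key=lambda kv: sum(kv[1]))

# Solve each connected component independently and add up the costs.
(a1, t1) = best_assignment(component1)
(a2, t2) = best_assignment(component2)
total = sum(t1) + sum(t2)  # minimum expected cost over the whole program
```

Solving the components independently explores $8+2$ rows instead of the $16$ rows of the combined table; the gap widens quickly as more independent holes are added.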
%
Although the complexity of the explicit approach scales exponentially with the
number of holes, we observe in our practical evaluation that this step is not
a bottleneck for the performance of the tuning.
%
However, should this become a practical problem in the future, it can be
solved more efficiently by formulating it as a constraint optimization
problem (COP)~\cite{DBLP:reference/fai/2}. There exist specialized constraint
programming solvers (CP solvers) that are highly optimized for solving general
COPs, for instance Gecode~\cite{gecode} and OR-Tools~\cite{ortools}. The
problem of choosing the optimal configuration given a set of observations can
be expressed and solved using one of these solvers.
\subsection{Exploring the Reduced Search Space}\label{sec:heuristics}
In the experimental evaluation of this paper, we have implemented exhaustive
search of the \emph{reduced search space}. As an additional step in the search
space reduction, the tuner can optionally focus on the measuring points having
the highest execution times. These measuring points are found by executing the
program with random configurations of the hole values a number of times, and
finding the measuring points with the highest mean execution times. This
optional step reduces the search space further in practical experiments.
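This optional filtering step can be sketched as follows. The sketch is our own illustration in plain Python; the measure function is a made-up stand-in for actually executing the instrumented program.

```python
import random
from statistics import mean

def hottest_measuring_points(measure, hole_domains, n_samples, top_k, seed=0):
    """Run `n_samples` random configurations and return the `top_k`
    measuring points with the highest mean execution times."""
    rng = random.Random(seed)
    observations = []
    for _ in range(n_samples):
        # Pick a random value for each hole from its domain.
        config = {h: rng.choice(dom) for h, dom in hole_domains.items()}
        observations.append(measure(config))  # {measuring point: time}
    means = {p: mean(o[p] for o in observations) for p in observations[0]}
    return sorted(means, key=means.get, reverse=True)[:top_k]

# Hypothetical measure standing in for a program run: m1 always dominates.
demo_measure = lambda config: {"m1": 10.0, "m2": 1.0 + config["h"]}
top = hottest_measuring_points(demo_measure, {"h": [0, 1]},
                               n_samples=4, top_k=1)
```

The tuner then restricts the search to the holes that the returned measuring points depend on, which shrinks the reduced search space further.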
Of course, there exist other heuristic approaches for exploring the search
space, such as tabu search or simulated annealing. Evaluating these approaches
is outside the scope of this paper, but they can be implemented in our modular
tuning framework. In Section~\ref{sec:implementation}, we see that modularity
is a key concept of the Miking language.
\section{Design and Implementation}\label{sec:implementation}
We implement the methodology of programming with holes into the Miking compiler
toolchain~\cite{Broman:2019}. Figure~\ref{fig:design} shows the design of the
implementation. In this section, we first discuss the overall design of the
toolchain, and then go through the three possible flows through it: default
compilation; tuned compilation; and tuning.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth,scale=1.0,trim={0cm 0cm 0cm
0cm},clip]{design.pdf}
\caption{There are three possible flows through the Miking compiler
toolchain: %
default compilation (green solid arrows); %
tuned compilation (dotted magenta-colored arrows); %
and tuning (dashed blue arrows). %
Sections~\ref{sec:default-compilation}--\ref{sec:tuning} give a detailed
explanation of each flow. %
In the figure, artifacts are marked grey and have sharp corners, while
transformations are white and rounded. }\label{fig:design}
\end{figure}
\subsection{The Miking Compiler Toolchain}
\label{sec:tool-chain}
Miking is a general language system for developing domain-specific and
general-purpose languages.
The Miking compiler is self-hosting (bootstrapped with OCaml).
The core language of the Miking system is called MCore (Miking Core) and is a
small functional language.
A key language feature of MCore is language fragments. A language fragment
defines the abstract syntax and semantics of a fragment of a programming
language. By composing several language fragments, new languages are built in a
modular way.
To extend the Miking compiler toolchain with holes, we create a new language
fragment defining the abstract syntax and semantics of holes, and compose this
fragment with the main MCore language.
The holes are transformed away before the compilation of the program.
The motivation for implementing our methodology in Miking is partly because the
system is well-designed for implementing language extensions and program
transformations, and partly because the methodology can be incorporated in any
language developed in Miking.
\newcommand{\MikingNbrFiles}[0]{$300$}
\newcommand{\MikingNbrLines}[0]{$55,000$}
\newcommand{\MikingBlankPercentage}[0]{$30$}
\newcommand{\MikingTuningNbrFiles}[0]{$18$}
\newcommand{\MikingTuningNbrLines}[0]{$5,000$}
\newcommand{\MikingTuningBlankPercentage}[0]{$30$}
The Miking toolchain consists of approximately \MikingNbrFiles~files and
\MikingNbrLines~lines of MCore code (out of which approx.
\MikingBlankPercentage$\%$ is either blank lines or comments). The contribution
of this paper is the part implementing holes (including language extensions,
program transformations, tuning, tuned compilation and dependency analysis).
This part consists of \MikingTuningNbrFiles~files and approx.
\MikingTuningNbrLines~lines of code (approx. \MikingTuningBlankPercentage$\%$
blank lines or comments).
\subsection{Default Compilation}
\label{sec:default-compilation}
The green solid path in Figure~\ref{fig:design} shows default compilation. In
this path, each hole in the program is statically replaced by its default value.
The resulting program is compiled into an executable.
Default compilation is useful during development of a program, as tuning can
take considerably longer than default compilation.
\subsection{Tuned Compilation}
\label{sec:tuned-compilation}
The dotted magenta-colored path in Figure~\ref{fig:design} shows tuned
compilation. In this scenario, the program and the tune file are given as input
to the graph coloring, followed by context expansion
(Section~\ref{sec:transformations}).
The context expansion statically inserts the tuned values for each context into
the program.
Finally, the program is compiled into an executable.
Tuned compilation is done \emph{after} tuning has been performed, in order to
create an executable where the holes are assigned to the tuned values.
Optionally, tuned compilation can be performed automatically after tuning.
\subsection{Tuning}
\label{sec:tuning}
The blue dashed flow in Figure~\ref{fig:design} shows the offline tuning.
The program and the tune file (optional) are given as input to the graph
coloring. If the tune file is provided, then its values are used as defaults,
instead of the values provided via the \mcoreinline{default} keyword.
The graph coloring outputs (i) \emph{context information} about the holes, which
is used in the offline tuning, and in later transformation stages, and (ii) a
transformed program.
Next, the dependency analysis and instrumentation
(Section~\ref{sec:dependency-analysis}) computes a \emph{dependency graph},
which is also used in the offline tuning, and an instrumented program.
The last transformation stage, context expansion, replaces each hole with code
that looks up its current calling context.
The context expansion sends the context information and the dependency graph to
the offline tuning, and the transformed program to the Miking compiler.
The Miking compiler creates an executable to be used during tuning (the
\emph{tuning executable}).
The \emph{offline tuning} takes the context information, dependency graph,
tuning executable, and a set of input data as input. The tuner maintains a
temporary tune file, which contains the current values of the holes. In each
search iteration, the tuning executable reads these values from the file, and
the tuner measures the runtime of the program on the set of input data.
When the tuning finishes, the tuner writes the best-found values to a final tune
file.
The tuner first reduces the search space using the dependency graph and then
applies dependency-aware tuning (Section~\ref{sec:dep-tuning}).
The stopping condition for the tuning is configurable by the user and is either
a maximum number of search iterations, or a timeout value.
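The core of this search loop can be sketched as follows. This is a deliberately simplified illustration: the real tuner uses its own tune-file format and configurable search strategies and stopping conditions, whereas the JSON format and plain exhaustive iteration below are our own assumptions.

```python
import json
import os
import tempfile

def tune(run_program, configurations, tune_path):
    """Minimal offline tuning loop: write each candidate configuration of
    hole values to the temporary tune file, let the tuning executable read
    it, and keep the configuration with the lowest measured runtime."""
    best, best_time = None, float("inf")
    for config in configurations:
        with open(tune_path, "w") as f:
            json.dump(config, f)          # current hole values
        elapsed = run_program(tune_path)  # measured runtime in seconds
        if elapsed < best_time:
            best, best_time = config, elapsed
    with open(tune_path, "w") as f:
        json.dump(best, f)                # final tune file: best-found values
    return best, best_time

# Hypothetical stand-in for the tuning executable: it reads the hole values
# from the tune file and "runs" faster when h1 is true.
def fake_program(path):
    with open(path) as f:
        holes = json.load(f)
    return 1.0 if holes["h1"] else 2.0

path = os.path.join(tempfile.mkdtemp(), "tune.json")
result = tune(fake_program, [{"h1": False}, {"h1": True}], path)
```

Keeping the hole values in a file, rather than recompiling per configuration, means one tuning executable is compiled once and then re-run for every search iteration.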
\section{Empirical Evaluation}\label{sec:eval}
\newcommand{\Expressibility}[0]{1}
\newcommand{\Reduced}[0]{2}
This section evaluates the implementation. The purpose is to demonstrate that
the approach scales to real-world use cases, and to show that context-sensitive
holes are useful in these settings.
Specifically, we evaluate the following claims:
\begin{description}
\item[Claim~\Expressibility:] We can express implementation choices in
real-world and non-trivial programs using context-sensitive holes.
\item[Claim~\Reduced:] Dependency analysis reduces the search space of
real-world and non-trivial tuning problems.
\end{description}
The evaluation consists of three case studies of varying sizes and from
different domains. Two of the case studies, probabilistic programming and Miking
compiler, are real-world applications not originally written for the purpose of
this evaluation. The third case study, $k$-nearest neighbor classification, is
of smaller size, yet is a non-trivial program.
The experiments are run under Linux Ubuntu~18.04 ($64$~bits) on an Intel Xeon
Gold 6148 of $2.40$~GHz, with $20$~cores, hyperthreading enabled ($2$~threads
per core). The computer has $64$~GB RAM and a $1$~MB L2 cache.
As backend compiler for the Miking compiler, we use the OCaml compiler available
as an OPAM switch \texttt{4.12.0+domains}. At the time of writing, this is the
latest OCaml compiler with multicore support.
\subsection{$k$-Nearest Neighbor Classification}\label{sec:eval-knn}
This case study consists of a variant of the running example in
Section~\ref{sec:dependency-analysis}, $k$-NN classification.
Again, we consider the three implementation choices of the underlying
representation of sequence ($\Hseq$), parallelization of the \mcoreinline{map}
function ($\Hmap$), and choice of sorting algorithm ($\Hsort$).
We assume that the performance of these choices depends on the size of the input
data, and that we are interested in tuning the classifier for a range of
different sizes of the data set.
We believe that for small data sets, the sequence representation cons list is
more efficient than Rope, the \mcoreinline{map} function is more efficient when
run sequentially than in parallel, and that insertion sort is more efficient
than merge sort, respectively. However, we do not know the threshold values for
these choices. Therefore, we let the three base holes $\Hseq$, $\Hmap$ and
$\Hsort$ be of type \mcoreinline{IntRange}, representing the unknown threshold
values. For instance, assuming the $\Hmap$ hole is called
\mcoreinline{parThreshold}, the expression \mcoreinline{if lti (length seq)
parThreshold then smap f seq else pmap f seq}
encodes the choice for the \mcoreinline{map} function.
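A small Python sketch shows how such threshold holes partition the input sizes. This is our own illustration: only \mcoreinline{parThreshold} is named above, so the other two hole names are hypothetical, and the threshold values are the best-found values reported later in this section.

```python
# Best-found threshold values from the tuning results below; the names
# sortThreshold and seqThreshold are invented for this illustration.
TUNED = {"parThreshold": 21_000, "sortThreshold": 1_000, "seqThreshold": 101_000}

def choices(n, t=TUNED):
    """Decisions made by the three IntRange holes for input size n, mirroring
    `if lti (length seq) parThreshold then smap f seq else pmap f seq`."""
    return {
        "map":  "smap"      if n < t["parThreshold"]  else "pmap",
        "sort": "insertion" if n < t["sortThreshold"] else "merge",
        "seq":  "list"      if n < t["seqThreshold"]  else "rope",
    }
```

For example, with these thresholds a data set of $4\cdot 10^4$ points gets parallel \mcoreinline{map}, merge sort, and a cons-list representation.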
We assume we are interested in data sets of sizes $10^3$--$10^5$ points. We set
the minimum and maximum values of the holes accordingly to
$\mcoreinline{min}=10^3$ and \mcoreinline{max} slightly higher than $10^5$, say
$\mcoreinline{max}=101,000$, respectively. That is, the \mcoreinline{min} value
corresponds to making the first choice (e.g., sequential \mcoreinline{map}) for
all input sizes, while the \mcoreinline{max} value corresponds to making the
second choice (e.g., parallel \mcoreinline{map}) for all input sizes.
We generate $6$~random sets of data points with dimension $3$, in sizes in the
range of interest: $10^3, 2\cdot 10^4, 4\cdot 10^4, 6\cdot 10^4, 8\cdot 10^4,
10^5$. We use a step size of $2\cdot 10^4$ when tuning, so that hole values with
this interval are considered.
The dependency analysis reduces the search space size by approx.~$83\%$ (from
$216$ to $36$ configurations). The best-found threshold values were $21000$,
$1000$, and $101000$ for $\Hmap$, $\Hsort$, and $\Hseq$, respectively. That
is, all input sizes except the smallest run \mcoreinline{map} in parallel, all
input sizes use merge sort, and all input sizes use cons lists as the sequence
representation.
Table~\ref{tab:knn} presents the execution time results of the tuned program
compared to the worst configuration. We see that the tuning gives
a speedup of between $3$ and $1500$ times.
We note that for this case study, allowing for tail-call optimization in the
instrumented program (see Section~\ref{sec:dep-instrumentation}) is of utmost
importance. The sorting functions are tail recursive, so an instrumented
program without tail-call optimizations gives non-representative execution
times, or even stack overflow for large enough input sizes.
The total time for the tuning is approx. $3.5$ hours, and the static analysis
takes less than $100$ ms.
\begin{figure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\begin{tabular}{lll}
Input size & Execution time & Speedup \\
\midrule
\input{experiments/knn/knn.tex}
\end{tabular}
\caption{Tuning results for the $k$-NN classifier. The measurements are done
on different data sets than were used during tuning.
%
The 'Input size' column shows the size of each data set.
%
The 'Execution time' column shows the execution time in seconds for the
tuned program for a given input size.
%
The 'Speedup' column presents the speedup of the tuned program compared to
the worst configuration. The worst configuration for each input size is
found by measuring the execution time of the program when setting the
threshold so that it is \emph{below} respectively \emph{above} the given
input size, for each of the three holes. That is, in total there are $8$
($=2^3$) candidate configurations for each input size.
%
}\label{tab:knn}
\end{subfigure}\hfill
\begin{subfigure}[t]{.45\textwidth}
\centering
\begin{tabular}{l|ll}
& Sequential & Parallel (worst) \\
\midrule
Rope & 16.8 & 1.96 \\
List & 18.4 & 1.94 \\
\end{tabular}
\caption{Speedup of the tuned probabilistic program. The
numbers show the speedup of the
best found configuration (Rope and
parallel \mcoreinline{map} using a chunk size of 610 elements), compared to
the other configurations.
%
The 'Rope' and 'List' rows are using the respective sequence representation.
The 'Sequential' column uses sequential \mcoreinline{map}, and the
'Parallel (worst)' column uses parallel
\mcoreinline{map}
with the chunk size giving the worst execution time. For both Rope and List, the worst
choice for the chunk size is $10$ elements.
%
The execution time of each program is measured $10$ times, and the speedup
is calculated as the mean execution time of the program divided by the mean
execution time of the baseline. The execution time of the baseline (the tuned
program) is $7.1 \pm 0.06$ seconds (mean and standard deviation over $10$
runs). }\label{tab:prob}
\end{subfigure}
\caption{Results for the $k$-NN and the probabilistic programming case studies.}
\end{figure}
\subsection{Probabilistic Programming}\label{sec:eval-prob}
This case study considers a probabilistic programming framework consisting of approx. $150$ files and $1,500$ lines of
MCore code. Note that the majority of this code is the standard MCore code of
the general-purpose program, and that the probabilistic programming parts
consist of a minimal extension.
We focus on the inference backend of the framework, using the
importance sampling inference method. The inference is a core part of the
framework, and is used when solving any probabilistic programming model.
We tune the \emph{underlying sequence representations} and the \emph{map
function} within the inference backend. The sequence representation is either
cons list or Rope. The map function chooses between a sequential or parallel
implementation.
In addition, we tune the \emph{chunk size} of the parallel implementation (see
Example~\ref{example-map3}).
We use a simple probabilistic model representing the outcome of tossing a fair
coin. The model makes $10,000$ observations of a coin flip from a Bernoulli
distribution, and infers the posterior distribution, given a Beta prior
distribution.
We expect that the choices the tuner makes are valid for a given model and
number of particles used in the inference algorithm, because these two factors
are likely to influence the execution time of the \mcoreinline{map} function.
Once a given model is tuned, however, it does not need to be re-tuned for other
sets of observed data, as long as the number of observations is the same.
We tune the model using $30,000$ particles for the inference algorithm. The
tuner chooses to use Rope as sequence representation in combination with
parallel map with a chunk size of $610$ elements.
Table~\ref{tab:prob} shows the speedup of the best found configuration compared
to the others. For instance, we see that we get a speedup of $1.96$ when using a
chunk size of $610$ for Rope compared to using the worst chunk size ($10$).
The total tuning time for the program is approx. $6$~minutes.
\subsection{Miking Compiler}\label{sec:eval-miking}
This case study considers the bootstrapping compiler, a subset of the Miking
compiler toolchain. The purpose is to test the dependency analysis on a
problem of larger scale.
For each sequence used within the compiler, we express the choice of which
underlying representation to use (Rope or list) using a context-sensitive hole.
By default, the compiler uses Rope. Because the main use of sequences within the
compiler consists of string manipulation, which is very efficient using Rope, we
do not believe there is much to gain from using lists. However, the purpose of
this experiment is not to improve the execution time of the compiler, but rather
to show search space reduction.
After the context expansion, there are in total $2,309$ holes. That is, the size
of the original search space is $2^{2309}$.
After applying dependency analysis, the search space is reduced to $2^{924}$.
By filtering out all measuring points that have a mean execution time of less
than $10$~ms, the search space is further reduced to $2^{816}$.
The total time of the static analysis is approx.~$16$ minutes, which is
considerably higher than for the previous case studies, due to the size of this
program.
When performing this case study, we choose to disable the feature of the
instrumentation that allows for tail-call optimization, because of an identified
problem with this feature.
We only observe this problem for this large-scale program; for the other case
studies the correctness of the instrumentation is validated manually and by
assertions within the instrumented code.
\subsection{Discussion}
This section relates the claims with the results from the case studies, and
discusses correctness of possible hole values.
This evaluation considers two claims and three case studies.
Claim~\Expressibility, expressibility of implementation choices in real-world
and non-trivial programs, is shown in all three case studies. Using holes, we
can encode the automatic selection of algorithms and data structures, as well as
parallelization choices. We can also encode dependencies on data size in the
program, using threshold values.
We address Claim~\Reduced{} in the $k$-NN classification and Miking compiler
case studies. In both these cases, the search spaces are considerably reduced.
We observe, especially from the Miking compiler case study, that a possible area
for improvement in the dependency analysis is the call graph analysis step
(Section~\ref{sec:call-graph-after-cfa}). The reason is that for a large
program, dependencies from \emph{potential} executions of nested measuring
points within branches of match expressions quickly accumulate, giving a
rapidly growing search space. By taking into account that the execution of the
nested points is only \emph{conditionally} dependent on the condition of the
match expressions, we can reduce the search space further.
An essential and challenging aspect when programming in general is the
functional correctness of the program.
When programming with holes, this aspect can become even more challenging, as
combinations of hole values form a (sometimes complicated) set of possible
programs.
The typical software engineering approach for increasing confidence of
correctness is to use testing.
As it turns out, testing can also aid us in the case of programming with holes.
The MCore language has built-in support for tests (via the language construct
\mcoreinline{utest}), and these are stripped away unless we provide the
\texttt{--test} flag. By providing the \texttt{--test} flag when invoking the
tuning stage, the tuner will run the tests during tuning, using the currently
evaluated hole values. The result is a slight degradation in tuning time but no
overhead in the final tuned binary.
As a practical example, we use \mcoreinline{utest}s in the $k$-NN case study in
this evaluation in order to ensure that the classifier indeed chooses the
correct class for some test data sets.
\section{Related Work}\label{sec:related}
This section discusses related work within auto-tuners, by partitioning them
into domain-specific and generic tuners. We also discuss work using static
analysis within auto tuning.
Many successful auto-tuners target domain-specific problems.
SPIRAL~\cite{spiral-overview-18} is a tuning framework within the digital signal
processing domain, ATLAS~\cite{atlas} tunes libraries for linear algebra,
FFTW~\cite{fftw} targets fast Fourier transforms,
PetaBricks~\cite{petabricks-09} focuses on algorithmic choice and composition,
and the work by~\cite{sorting-04} automatically chooses the best sorting
algorithm for a given input list.
Moreover, within the area of compiler optimizations, a popular research field is
auto-tuning the selection and phase-ordering of compiler optimization
passes~\cite{survey-compiler-autotunings-using-ml}.
On the one hand, a natural drawback with a domain-specific tuner is that it is
not applicable outside of its problem scope, while generic tuners (such as our
framework) can be applied to a wider range of tuning problems.
On the other hand, the main strength of domain-specific tuners is that they can
use knowledge about the problem in order to reduce the search space. For
instance, SPIRAL applies a dynamic programming approach that incrementally
builds solutions from smaller sub-problems, exploiting the recursive structure
of transform algorithms.
Similarly, PetaBricks also applies dynamic programming as a bottom-up approach
for algorithmic composition, and the authors of \cite{sorting-04} include the
properties of the lists being sorted (lengths and data distribution) in the
tuning.
Such domain-dependent approaches are not currently applied in our framework,
because the tuner has no deep knowledge about the underlying problem.
We are therefore limited to generic search strategies.
An interesting research problem is investigating how problem-specific
information can be incorporated into our methodology, either from the user, or
from compiler analyses, or both.
Potentially, such information can speed up the tuning when targeting particular
problems, while not limiting the generalizability of our approach.
Among the generic tuners, CLTune~\cite{cltune} is designed for tuning the
execution time of OpenCL kernels, and supports both offline and online tuning.
OpenTuner~\cite{open-tuner} allows user-defined search-strategies and objective
functions (such as execution time, accuracy, or code size).
ATF~\cite{atf} also supports user-defined search strategies and objectives and
additionally supports pair-wise constraints, such as expressing that the value
of a variable must be divisible by another variable.
The HyperMapper~\cite{hypermapper} framework
has built-in support for multi-objective optimizations so that trade-off curves
of e.g. execution time and accuracy can be explored.
Our approach is similar to these approaches as we have a similar programming
model: defining unknown variables (holes) with a given set of values.
The difference is that we support context-sensitive holes, while previous works
perform global tuning.
There are a few previous approaches within the field of program tuning that use
static analysis to speed up the tuning stage: for instance, collecting metrics
from CUDA kernels in order to suggest promising parameter settings to the auto
tuner~\cite{DBLP:conf/icpp/LimNM17}, or combining static analysis with
empirical experiments for auto-tuning tensor
transposition~\cite{DBLP:conf/ipps/WeiM14}.
To the best of our knowledge, there is no prior work in using static analysis
for analyzing dependency among decision variables.
One prior work does exploit dependency to reduce the search
space~\cite{DBLP:conf/europar/SchaeferPT09}, but their approach relies entirely
on user annotations specifying the measuring points, the independent code blocks
of the program, permutation regions, and conditional dependencies.
By contrast, our approach is completely automatic, where the user can refine the
automatic dependency analysis with certain annotations, if needed.
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose a methodology for programming with holes that enables
developers to postpone design decisions and instead let an automatic tuner find
suitable solutions. Specifically, we propose two new concepts: (i)
context-sensitive holes, and (ii) dependency-aware tuning using static analysis.
The whole approach is implemented in the Miking system and evaluated on
non-trivial examples and a larger code base consisting of a bootstrapped
compiler. We contend that the proposed methodology may be useful in various
software domains, and that the developed framework can be used for further
developments of more efficient tuning approaches and heuristics.
\begin{acks}
This project is financially supported by the Swedish Foundation for Strategic Research (FFL15-0032 and RIT15-0012). The research has also been carried out as part of the Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications (TECoSA) at the KTH Royal Institute of Technology. We would like to thank Joakim Jald\'{e}n, Gizem \c{C}aylak, Oscar Eriksson,
and Lars Hummelgren for valuable comments on draft versions of this paper. We
also thank the anonymous reviewers for their detailed and
constructive feedback.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Bound states in the continuum (BICs) have attracted researchers owing to their ultra-high quality factors (Q-factors) \cite{zhenbo2014,yuri2021,joseph2021}. A BIC is a vortex center in the polarization directions of the far-field radiation and can be characterized by a topological charge \cite{zhenbo2014}. BICs can be used to construct lasers \cite{huangc2020}, high-quality sensors \cite{wang2022}, opto-mechanical crystals \cite{liusy2022}, and chiral-emission metasurfaces \cite{zhangxd2022}. By splitting a BIC up, unidirectional guided resonances can be achieved \cite{yinxf2020,lee2020,zhangzi2021,zyx2021}. Utilizing the concept of BICs, doubly resonant photonic crystal (PhC) cavities can be designed, and they show improved nonlinear frequency conversion \cite{minkov2019,minkov2020,minkov2021}. In a doubly resonant cavity, a BIC mode is mode-matched with a band-edge mode outside the light cone, and the generated second-harmonic mode can be collected within a small angle. However, the product of the Q-factors of the band-edge mode and the BIC mode still leaves room for improvement. \par
Recently, researchers have started to notice a special kind of BIC called ``merged BICs'' \cite{jinji2019}, also referred to as the ``super-BIC'' \cite{Hwang2021}. Merged BICs are formed by merging multiple BIC modes into one point. Traditional BIC-based devices usually suffer from scattering loss caused by coupling with nearby radiative modes. The strategy of merging BICs into one point can enhance the Q-factors of nearby resonances in the same band, which makes the BIC robust against inevitable fabrication imperfections \cite{jinji2019,kangmeng2019,kangmeng2022,zhaochen2022}. In addition, researchers have subsequently found other merits of merged BICs resulting from this trait. For example, the radiative Q-factor at the merged-BICs point is generally much larger than that at a pre-merging point or an isolated BIC point for a device of finite size \cite{Hwang2021}. Merged BICs have already been widely applied to ultra-low-threshold lasers \cite{Hwang2021}, chiral resonators \cite{wan2022}, and acoustic resonators \cite{huang2022}. Moreover, these devices show improved quality in comparison to devices that use an isolated BIC mode. \par
Owing to the large Q-factor of merged BICs in a PhC of finite size, a doubly resonant PhC cavity based on them may show improved nonlinear conversion efficiency. However, this requires that the resonant mode at the second-harmonic frequency be the merged-BICs mode, and that a band-edge mode at the fundamental frequency be mode-matched with it. Simultaneously meeting these conditions is difficult. Meanwhile, the nonlinear conversion efficiency also depends on the Q-factor at the fundamental frequency and on the nonlinear overlapping factor, which should also be considered. In this paper, we take a lithium niobate (LN) PhC as an example to demonstrate that utilizing a supercell constructed from large and small air holes can easily achieve these goals. BIC-based LN photonic devices have already been demonstrated theoretically and experimentally \cite{kanglei2021,yefan2022,zhang2022,huangzhi2022,zhengze2022} and are proven to be an ideal platform for nonlinear processes. In previous works, our group has proposed beam splitters \cite{duan2016}, nonlinear cavities \cite{jiang2018}, logic gates \cite{lu2019}, and valley waveguides \cite{ge2021} based on lithium niobate PhCs. First, the band properties of the proposed PhC supercell are analyzed, and the approach for matching a band-edge mode outside the light cone with the merged BICs is introduced. Then a heterostructure PhC cavity is considered. After the geometry parameters are determined, the device meets the mode-matching condition. Finally, we estimate the SHG efficiency of the device at different lattice constants.
\section{Model and theory}
Before discussing our simulations, we first review the four requirements for designing a doubly resonant PhC cavity using an isolated BIC \cite{minkov2019}: (1) The fundamental mode must be a band-edge mode outside the light cone. (2) The second-harmonic mode must be a BIC mode at the $\Gamma$ point. (3) Each of these modes must be either a maximum or a minimum of its band. (4) The periodic electric fields of these modes must have a nonzero nonlinear overlapping factor. In our case an additional requirement must be satisfied: the BIC mode must be the merged-BICs mode. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\linewidth]{FIG1.jpg}
\caption{(a) Schematic of the simulated unit cell. (b) Band diagram at the fundamental frequency. (c) Field profiles of the modes inside the circles at fundamental and second-harmonic frequencies. (d) Band diagram at second-harmonic frequency together with the Q-factor variations of the band at different thicknesses. (e) Variations of the thickness where the merged BICs locates versus the radii of holes. (f) Dependence of radius of the holes on the doubled frequency of the band-edge mode outside the light cone, the frequency of the merged BICs, and their difference, respectively. (g) Band diagram at the fundamental frequency with $r$=0.338$a$ and $r$=0.369$a$.}
\label{fig1}
\end{figure}
Returning to our design, the first step is to find an accidental BIC mode that satisfies $f_{acc-BIC}$$\approx2$$f_{band-edge}$, where $f_{acc-BIC}$ and $f_{band-edge}$ represent the frequencies of the accidental BIC and the band-edge mode. It is generally simple to obtain an accidental BIC \cite{zhenbo2014}, while the latter requirement is, to some extent, difficult to meet. Here we choose the structure in Fig. \ref{fig1}(a), which is infinitely extended. The air holes are etched in a suspended slab and arranged in a hexagonal lattice. The slab material is LN, which has outstanding nonlinear properties \cite{huangzhi2022}. Current etching techniques for LN have demonstrated hole sidewall angles of 85 degrees \cite{liang2017,limx2019,lim2019}. The dispersion of the material should be considered \cite{minkov2019}. The optical axis of LN is set along the $z$ direction, and consequently the nonlinear tensor element $d_{31}$ is responsible for the second-harmonic generation process. We mainly focus on the transverse electric (TE) band at the fundamental frequency and the transverse magnetic (TM) band at the second-harmonic frequency. Intuitively, the nonlinear conversion efficiency would be higher if the $d_{33}$ tensor element were used; however, a merged-BICs mode that satisfies the four requirements is then hard to obtain. Detailed simulation results are discussed in the Appendix. The refractive indices are $n_{x}$=2.2111, $n_{y}$=2.2111, and $n_{z}$=2.1376 near 1550 nm and $n_{x}$=2.2587, $n_{y}$=2.2587, and $n_{z}$=2.1784 near 775 nm \cite{saravi2015}. After numerous simulations with parameter sweeps, the lattice constant is set to $a$=650 nm, the radius of the holes to $r$=0.338$a$=220 nm, and the thickness of the slab to $t$=0.461$a$=300 nm. The dashed box in Fig. \ref{fig1}(a) indicates the simulation area. All simulations in this work are performed with the three-dimensional finite-difference time-domain (3D-FDTD) method. \par
The TE band diagram of the structure at the fundamental frequency is plotted in Fig. \ref{fig1}(b). The blue dots indicate the bulk bands and the red dots the light line. The band-gap region is marked by a blue rectangle, and the Brillouin zone used for the sweep is also plotted. According to Ref. \cite{minkov2019}, the band-edge mode of the upper bulk band near the band gap can be selected for the frequency conversion, as shown in Fig. \ref{fig1}(b). The $|H_{z}|$ field of the chosen band-edge mode outside the light cone (brown circle) is shown in the left half of Fig. \ref{fig1}(c); the absolute value is shown in order to locate the position of maximum energy. The TM band diagram at the second-harmonic frequency is plotted in the top part of Fig. \ref{fig1}(d), and the $|E_{z}|$ field of the chosen BIC mode (black circle) is shown in the right half of Fig. \ref{fig1}(c). From the field profiles it can be inferred that the energy of the chosen TE mode is mainly located in two side lobes around the hole, while the energy of the chosen TM mode is mainly located in six clusters around the hole. The TM band hosts a symmetry-protected BIC and an accidental BIC, which is verified by calculating the Q-factors along the band. The bottom part of Fig. \ref{fig1}(d) shows that at $k_{y}$=0 and $k_{y}$=0.075 ($2\pi/a$) the Q-factors become infinite when $t=0.461a$. Previous work has demonstrated that the FDTD method can resolve Q-factors above $\sim$10$^{6}$ \cite{minkov2019}. As for the TE band, the Q-factor at the $\Gamma$ point is infinite because the mode lies outside the light cone. According to the conservation law of topological charge, the accidental BIC mode moves in $k$ space as the thickness or the lattice constant of the PhC is gradually changed, while the symmetry-protected BIC stays fixed \cite{zhenbo2014,kangmeng2022}.
Here the thickness of the slab is gradually increased; the accidental BIC mode gradually merges toward the $\Gamma$ point, and the merged-BICs mode is obtained at $t=0.467a$. The blue arrow in Fig. \ref{fig1}(d) indicates the moving direction. At the merged-BICs point the Q-factor decreases extremely slowly as $k_{y}$ increases, which is the signature of the merged BICs. As the thickness is increased further, the topological charges cancel each other and the mode at the $\Gamma$ point becomes a symmetry-protected BIC again instead of the merged BICs. Although the structure in Fig. \ref{fig1}(a) does not yet operate near 1550 nm, by the scaling rule of PhCs the device can be brought to operate near 1550 nm by changing the absolute value of the lattice constant. \par
Next, the mode-matching condition $f_{acc-BIC}$=2$f_{band-edge}$ must be satisfied. With the current geometry parameters, the mode mismatch $\Delta$$f$=$f_{acc-BIC}$$-$2$f_{band-edge}$ does not vanish. According to Ref. \cite{minkov2019}, mode-matching can be achieved by adjusting a single geometry parameter such as $r$ or $t$, since $f_{acc-BIC}$ and 2$f_{band-edge}$ change at different rates with $r$ or $t$. However, once $r$ or $t$ is adjusted, the BIC is no longer in the merged state. Intuitively, both $r$ and $t$ can be varied for the mode-matching: as $r$ is gradually varied, the conservation law of topological charge guarantees that for each $r$ there is a certain $t$ at which the BICs merge. In Fig. \ref{fig1}(e) we show the slab thickness at which the merged BICs occur versus the radius of the holes. Interestingly, as the radius decreases the thickness approaches a saturation value. Variations of $f_{acc-BIC}$, 2$f_{band-edge}$ and $\Delta$$f$ versus $r$ are plotted in Fig. \ref{fig1}(f); note that for each $r$ on the abscissa the corresponding $t$ is different, so that every point is a merged-BICs state. As $r$ decreases, $\Delta$$f$ also approaches a saturation value and cannot reach zero. The simulated results indicate that adjusting $r$ and $t$ alone is not sufficient to realize the mode-matching condition. \par
In Fig. \ref{fig1}(g) the band diagrams at the fundamental frequency for $r$=0.338$a$ and $r$=0.369$a$ are plotted. It can be seen that the whole band structure moves to higher frequencies when $r$=0.369$a$. That is to say, the selected band for $r$=0.338$a$ lies in the band-gap region of the diagram for $r$=0.369$a$. Consequently, the PhC with the larger $r$ can be used to confine light at the fundamental frequency in our device. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\linewidth]{FIG2.jpg}
\caption{(a) Schematic of the simulated unit cell. (b) $E_{x}$, (c) $E_{y}$, and (d) $H_{z}$ field profiles of the modes at the fundamental frequency, and (e) $E_{z}$ field profile at the second-harmonic frequency. (f)-(h) Q-factor variations of the band at lattice constants of 653 nm, 649 nm and 640 nm, respectively. (i) Variation of the lattice constant at which the merged BICs occur versus the radius of the small holes. (j) Dependence of the doubled frequency of the band-edge mode outside the light cone, the frequency of the merged-BICs mode, and their difference on the radius of the small holes.}
\label{fig2}
\end{figure}
To mode-match the band-edge mode outside the light cone with the merged-BICs mode, six small holes are added around each large hole, as shown in Fig. \ref{fig2}(a). This implementation does not break the C$_{6v}$ symmetry of the system, so the merged topological charge at the $\Gamma$ point does not suddenly disappear \cite{Yoda2020}. Constructing a supercell with large and small holes has already been used to achieve BIC-based negative refraction \cite{lari2021}. Here the parameters are $a$=650 nm, $t$=286 nm, and $r_{c1}$=210 nm, where $r_{c1}$ and $r_{c2}$ denote the radii of the large and the small holes, respectively. The mode profiles of the band-edge mode outside the light cone are shown in Figs. \ref{fig2}(b)-\ref{fig2}(d), and the mode profile of the merged BICs at the $\Gamma$ point is shown in Fig. \ref{fig2}(e). These profiles are plotted to confirm that the fundamental and the second-harmonic modes have a nonzero nonlinear overlapping factor. The principle of this design lies mainly in the different field distributions of the fundamental and the second-harmonic modes: we expect that adding the small holes mostly shifts the wavelength of the BIC mode, because its energy is concentrated in the region where the small holes are placed. As $a$ is gradually adjusted, the accidental BICs merge at the $\Gamma$ point and the merged BICs are obtained. When $r_{c2}$=50 nm, the Q-factor variations of the band versus $k$ at lattice constants of 653 nm, 649 nm, and 640 nm are plotted in Figs. \ref{fig2}(f)-\ref{fig2}(h), respectively. The blue arrow indicates the moving direction of the accidental BIC. Compared with the results in Fig. \ref{fig1}(d), the simulated maximum Q-factor is lower, because the added small features make the meshing of the structure more difficult. For each lattice constant we then select a suitable $r_{c2}$ to achieve the mode-matching condition. Fig. \ref{fig2}(i) shows the lattice constant at which the merged BICs occur for various radii of the small holes. For each merged-BICs point, $f_{acc-BIC}$, 2$f_{band-edge}$ and $\Delta$$f$ versus $r_{c2}$ are shown in Fig. \ref{fig2}(j); similarly, for each $r_{c2}$ on the abscissa the corresponding lattice constant is different, so that every point is a merged-BICs point. As predicted, 2$f_{band-edge}$ stays nearly steady as $r_{c2}$ varies, while $f_{acc-BIC}$ changes strongly. $\Delta$$f$ eventually crosses zero, so the matching condition can be realized; the method of adding small holes can tune $\Delta$$f$ over a large range.
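In practice, the last step of this sweep, locating the small-hole radius at which $\Delta$$f$ crosses zero, is a one-dimensional root search over tabulated sweep data. A minimal sketch of that bookkeeping is given below; the sample values are hypothetical placeholders, not our simulated data, and each sample is understood to be taken at its own merged-BICs lattice constant.

```python
# Sketch: locate the mode-matching radius where Delta_f crosses zero.
# The sweep values below are hypothetical placeholders, not simulated data.
import numpy as np

r_c2 = np.array([30.0, 40.0, 50.0, 60.0, 70.0])   # small-hole radius (nm)
delta_f = np.array([-4.1, -1.6, 0.9, 3.2, 5.8])   # f_SH - 2 f_FH (THz)

k = np.flatnonzero(np.diff(np.sign(delta_f)))[0]  # bracketing interval
# linear interpolation of the zero crossing
r_match = r_c2[k] - delta_f[k] * (r_c2[k + 1] - r_c2[k]) / (delta_f[k + 1] - delta_f[k])
print(f"mode-matching radius ~ {r_match:.1f} nm")
```

With real sweep data one would simply substitute the simulated $\Delta$$f$ samples and, if needed, refine the bracketed root with a further FDTD run.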
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\linewidth]{FIG3.jpg}
\caption{(a) Model of simulated heterostructure PhC cavities. (b) Three dimensional and (c) two dimensional enlarged views of the proposed device. (d) Dependence of radiative Q-factors of the PhC cavity versus the lattice constants.}
\label{fig3}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\linewidth]{FIG4.jpg}
\caption{Field profiles of the modes at (a) 768.6 nm, (b) 763.6 nm, (c) 1541.6 nm, and (d) 1533.6 nm, respectively. Polar far-field emission profiles of the modes at (e) 768.6 nm, (f) 763.6 nm, (g) 1541.6 nm, and (h) 1533.6 nm, respectively.}
\label{fig4}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{FIG5.jpg}
\caption{(a) Q-factors at the fundamental frequencies versus lattice constants. (b) The square of nonlinear overlapping factors versus lattice constants. (c) Second harmonic generation efficiency versus lattice constants.}
\label{fig5}
\end{figure}
\section{Results and discussions}
In this section the heterostructure PhC cavity shown in Fig. \ref{fig3}(a) is considered. The lattice constant of the device is enlarged compared with that in Fig. \ref{fig1} so that the cavity operates near 1550 nm. The small and the large holes share the same lattice constant, and the optical axis of the material points along the $z$ direction. As in Ref. \cite{minkov2019}, the heterostructure PhC can be divided into a core region, a transition region, and an outer region, all with the same lattice constant. The outer region confines the photons near 1550 nm, and the transition region serves to improve the Q-factor \cite{minkov2019}. The radii of the holes in the core, transition, and outer regions are $r_{c1}$=254 nm, $r_{t}$=260 nm, and $r_{o}$=270 nm, respectively. By carefully adjusting the geometry parameters, the mode-matching condition can be satisfied. The side lengths of the three regions are $l_{c}$=10$a$, $l_{t}$=14$a$, and $l_{o}$=24$a$, respectively. Small holes, with $r_{c2}$=40 nm, are present only in the core region, and the thickness is $t$=345 nm. When the lattice constant is $a$=769 nm, the device supports the merged-BICs mode at 768.6 nm and a band-edge mode at 1541.1 nm. \par
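The region bookkeeping above is easy to mechanize. The sketch below generates hole centers on a triangular lattice and assigns the three radii by a hexagonal ring index; the hexagonal shape of the regions and the ring half-widths (taken as $l/2a$) are our illustrative assumptions, and the six small holes per core cell are omitted since their in-cell positions follow Fig. \ref{fig2}(a).

```python
# Sketch: generate the hole layout of the heterostructure cavity.
# Assumptions (ours, for illustration): the three regions are concentric
# hexagons measured in lattice "rings", with half-widths l_c/2a, l_t/2a, l_o/2a.
import numpy as np

a = 769e-9                               # lattice constant (m)
r_c1, r_t, r_o = 254e-9, 260e-9, 270e-9  # hole radii per region (m)
n_c, n_t, n_o = 5, 7, 12                 # ring half-widths: l/(2a)

holes = []                               # entries: (x, y, radius)
for i in range(-n_o, n_o + 1):
    for j in range(-n_o, n_o + 1):
        ring = max(abs(i), abs(j), abs(i + j))   # hexagonal lattice distance
        if ring > n_o:
            continue
        px = a * (i + 0.5 * j)                   # triangular-lattice site
        py = a * (np.sqrt(3) / 2) * j
        if ring <= n_c:
            r = r_c1      # core region (surrounding small holes omitted here)
        elif ring <= n_t:
            r = r_t       # transition region
        else:
            r = r_o       # outer region
        holes.append((px, py, r))

n_core = sum(1 for _, _, r in holes if r == r_c1)
print(f"{len(holes)} holes total, {n_core} in the core region")
```

Such a generated list of centers and radii can then be fed directly to the geometry builder of an FDTD solver.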
In Fig. \ref{fig3}(d) the dependence of the radiative Q-factor of the BIC mode on the lattice constant of the structure is plotted. When $a$=769 nm, the Q-factor reaches its maximum value of 1979, while for the isolated BIC it remains below 1000. This is evidence of the merged-BICs mode in a finite-size PhC \cite{Hwang2021}. It should be noted that, according to Ref. \cite{Hwang2021}, the $a$ giving the maximum Q-factor usually differs from the $a$ of the merged-BICs point, as shown in Fig. \ref{fig3}(d): in the corresponding infinite system the merged BICs occur at $a$=773 nm. \par
To characterize the performance of the proposed device, the field profiles and the far-field emission profiles of the merged-BICs mode and the band-edge mode outside the light cone are plotted. The $|E_{z}|$ field profile of the merged-BICs mode at 768.6 nm is shown in Fig. \ref{fig4}(a), and the field profile of the higher-order mode at 763.6 nm in Fig. \ref{fig4}(b). In Figs. \ref{fig4}(c)-\ref{fig4}(d) the $|H_{z}|$ fields of the fundamental and the higher-order band-edge modes are plotted; they are located at 1541.1 nm and 1533.6 nm. In Figs. \ref{fig4}(e)-\ref{fig4}(f) the polar far-field emission profiles of the merged-BICs modes in the upper half space are shown. As predicted, the far-field emission is a hollow beam, which is determined by the vortex nature of the BIC \cite{minkov2019}. In Figs. \ref{fig4}(g)-\ref{fig4}(h) the far-field emission profiles of the band-edge mode and the higher-order band-edge mode in the upper half space are shown. The results indicate that the far-field emission at 768.6 nm and 1541.1 nm is highly collimated around normal incidence, which enables efficient excitation and collection of the nonlinear signal \cite{minkov2019}. \par
Finally, we estimate the performance of the proposed device as a nonlinear cavity. The second-harmonic generation efficiency can be calculated using the formula \cite{minkov2019,minkov2021}:
\begin{equation}
\frac{P_{o}}{P_{i}^{2}}=\frac{8}{\omega_{1}}\left(\frac{\chi^{(2)}}{\sqrt{\epsilon_{0}\lambda_{FH}^{3}}}\right)^{2}|\overline{\beta}|^{2}Q_{FH}^{2}Q_{SH},
\label{eq:1}
\end{equation}
where $\epsilon_{0}$ is the vacuum permittivity. The nonlinear overlapping factor $\overline{\beta}$ can be determined by \cite{minkov2019,minkov2021,chenya2021}:
\begin{equation}
\overline{\beta}=\frac{\int d\textbf{r}\sum_{ijk}\overline{\chi}_{ijk}E_{2\omega i}^{*}E_{\omega j}E_{\omega k}}{(\int d\textbf{r}\epsilon_{\omega}(\textbf{r})|\textbf{E}_{\omega}|^{2})(\int d\textbf{r}\epsilon_{2\omega}(\textbf{r})|\textbf{E}_{2\omega}|^{2})^{1/2}}\lambda_{FH}^{3/2}
\label{eq:2}
\end{equation}
where $\lambda_{FH}$ is the wavelength of the band-edge mode and $\overline{\chi}_{ijk}$ are the dimensionless nonlinear tensor elements. Here we assume that the extrinsic Q-factor is infinite and that perfect pumping and collection conditions are satisfied. We note that most of the energy is concentrated in the center of the cavity, and the electric field data at the boundary of the cavity are not included, for convenience of calculation. Consequently, $|\overline{\beta}|$ is slightly larger than its actual value, but this does not affect its relative value at different lattice constants. \par
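Once the Q-factors and $|\overline{\beta}|$ are known, Eq. \eqref{eq:1} is straightforward to evaluate numerically, as the sketch below shows. The values of $|\overline{\beta}|$ and $Q_{FH}$ used here are placeholders of plausible magnitude (they are not quoted in the text), and $\chi^{(2)}=2d_{31}$ with $d_{31}\approx 4.3$ pm/V is a commonly cited literature value for LN.

```python
# Sketch: evaluate Eq. (1) numerically. beta_abs and Q_FH are assumed
# placeholder values, not the simulated ones; Q_SH is taken from Fig. 3(d).
import math

c = 2.99792458e8          # speed of light (m/s)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def shg_efficiency(chi2, lam_FH, beta_abs, Q_FH, Q_SH):
    """P_o / P_i^2 in 1/W, direct evaluation of Eq. (1)."""
    omega1 = 2 * math.pi * c / lam_FH
    return (8 / omega1) * (chi2 / math.sqrt(eps0 * lam_FH**3))**2 \
           * beta_abs**2 * Q_FH**2 * Q_SH

eta = shg_efficiency(chi2=2 * 4.3e-12, lam_FH=1541.1e-9,
                     beta_abs=0.05, Q_FH=3.0e4, Q_SH=1979)
print(f"P_o/P_i^2 ~ {eta:.1f} 1/W")
```

The quadratic dependence on $Q_{FH}$ and linear dependence on $Q_{SH}$ are explicit in the function, which makes it easy to map the efficiency over a sweep of lattice constants.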
It can be inferred that the theoretical second-harmonic generation efficiency is mainly determined by the Q-factors at the fundamental and the second-harmonic frequencies and by the nonlinear overlapping factor. In our work the PhC slab is made of LN, and the optical axis points along a fixed direction. The resonant frequencies of the modes at the different lattice constants near $a$=769 nm are similar. In Figs. \ref{fig5}(a)-\ref{fig5}(b) we show the Q-factors of the band-edge mode outside the light cone and the square of the nonlinear overlapping factor versus the lattice constant; they also peak near the merged-BICs point. Consequently, the second-harmonic generation efficiency reaches its maximum near the merged-BICs point, as shown in Fig. \ref{fig5}(c). The maximum value is $\sim$60 W$^{-1}$ (6000\% W$^{-1}$), far larger than that of the isolated BIC.
\section{Conclusion}
In conclusion, we have proposed a doubly resonant LN photonic crystal cavity that uses merged BICs to achieve a higher SHG efficiency. A unit cell with large and small holes is established, in which a band-edge mode at the fundamental frequency is mode-matched with the merged-BICs mode. The SHG efficiency at the merged BICs, $\sim$60 W$^{-1}$ (6000\% W$^{-1}$), is higher than that of the isolated BIC. Our design recipe is not limited to LN and can be extended to other nonlinear materials such as GaN or AlGaAs. In addition, besides second-harmonic generation, our design can be applied to the parametric down-conversion process. Beyond the higher Q-factor at the merged-BICs point for a finite-size PhC, a further merit of the merged BICs is the robustness of the Q-factor against random fluctuations of the radii or lattice constants of the holes. Consequently, the nonlinear conversion efficiency may degrade less than that of an isolated BIC in the presence of disorder; however, this is at present a hypothesis that requires future simulations to verify. This work is expected to broaden the applications of merged BICs in nonlinear photonics.
\par
This work was supported by the National Natural Science Foundation of China (Grant Nos. 91950107, and 12134009), the National Key R\&D Program of China (Grant Nos. 2019YFB2203501), Shanghai Municipal Science and Technology Major Project (2019SHZDZX01-ZX06), and SJTU No. 21X010200828.
\section{Introduction}
\label{s:intro}
Due to promising results obtained on invariants of algebraic
functions, and to the similarities in properties between
differential equations and algebraic equations, the study of
invariants of differential equations began in the middle of the
nineteenth century. One of the most important studies of these
functions was carried out by Forsyth in his very valuable memoir
~\cite{for-inv}, in which he considers the linear ordinary
differential equation of general order $n$ in various canonical
forms, the first of which is given by
\begin{equation}\label{eq:stdlin}
y^{(n)} + a_{n-1} y^{(n-1)} + a_{n-2} y^{(n-2)}+\dots + a_0 y=0,
\end{equation}
where the coefficients $a_{n-1}, a_{n-2}, \dots, a_0$ are arbitrary
functions of the independent variable $x.$ Earlier writers on the
subject, cited here in an almost chronological order, include
Laplace ~\cite{laplace}, Laguerre ~\cite{lag}, Brioschi
~\cite{brioch}, and more importantly Halphen, who made
ground-breaking contributions in his celebrated memoir
~\cite{halphen}.\par
Methods used up to the middle of the nineteenth century for studying
invariant functions were very intuitive, and based on a direct
analysis, in which most results were obtained by comparing the
coefficients of the equation before and after it was subjected to
the allowed transformations. However, the application of these
techniques was restricted to linear equations.\par
Based on ideas outlined by Lie ~\cite{lie1}, the development of
infinitesimal methods for the investigation of invariants of
differential equations started probably in ~\cite{ovsy1}, and a
formal method has been suggested ~\cite{ibra-not}, which is now
commonly used ~\cite{ndog08a, ibra-nl, ibra-par, waf,
faz}. However, the latter method requires knowledge of the
equivalence transformations of the equation, and thus a new method
that provides these transformations and at the same time the
infinitesimal generators for the invariant functions has recently
been suggested ~\cite{ndogftc}.\par
In this paper, we use the method of ~\cite{ndogftc} to derive
explicit expressions for the invariants of various canonical forms
of the general linear ordinary differential equations of order up to
5. We also use the same method to derive the structure invariance
group for a certain canonical form of the equation, and for a
general order. Relationships between the invariants found as well as
some of their other properties, including their exact number, are
also investigated. We start our discussion in Section
\ref{s:2methods} by an illustrative example comparing the former
infinitesimal method of
~\cite{ibra-not, ibra-nl} with that of ~\cite{ndogftc}.
\section{Two methods of determination}
\label{s:2methods} %
We begin this section with some generalities about equivalence transformations.
Suppose that $\mathcal{F}$ represents a
family of differential equations of the form
\begin{equation}\label{eq:delta}
\Delta(x, y_{(n)}; C)=0,
\end{equation}
where $x=(x^1, \dots, x^p)$ is the set of independent variables, $y=
(y_1, \dots, y_q)$ is the set of dependent variables and $y_{(n)}$
denotes the set of all derivatives of $y$ up to the order $n,$ and
where $C$ denotes collectively the set of all parameters specifying
the family element in $\mathcal{F}.$ These parameters might be
either arbitrary functions of $x,\, y$ and the derivatives of $y$ up
to the order $n,$ or arbitrary constants. Denote by $G$ a connected
Lie group of point transformations of the form
\begin{equation}\label{eq:eqvgp}
x= \phi (\bar{x}, \bar{y}; \sigma), \qquad y = \psi (\bar{x},
\bar{y}; \sigma),
\end{equation}
where $\sigma$ denotes collectively the parameters specifying the
group element in $G.$ We shall say that $G$ is the {\em equivalence
group} of ~\eqref{eq:delta} if it is the largest group of
transformations that maps every element of $\mathcal{F}$ into
$\mathcal{F}.$ In this case ~\eqref{eq:eqvgp} is called the {\em
structure invariance group} of ~\eqref{eq:delta} and the transformed
equation takes the same form
\begin{subequations}\label{eq:delta2}
\begin{align}
&\Delta (\bar{x}, \bar{y}_{(n)}; \bar{C})=0,\\[-5mm]
\intertext{where \vspace{-4mm}}
&\bar{C}_j = \bar{C}_j(C, C_{(s)}; \sigma), \label{eq:coef}
\end{align}
\end{subequations}
and where $C_{(s)}$ represents the set of all derivatives of $C$ up
to a certain order $s.$ In fact, the latter equality
~\eqref{eq:coef} defines another group of transformations $G_c$ on
the set of all parameters $C$ of the differential equation
~\cite{liegp}, and we shall be interested in this paper in the
invariants of $G_c.$ These are functions of the coefficients of the
original equation which have exactly the same expression when they
are also formed for the transformed equation. We also note that $G$
and $G_c$ represent the same set, and we shall use the notation
$G_c$ only when there is need to specify the group action on
coefficients of the equation.\par
The terminology used for invariants of differential equations and
their variants in the current literature ~\cite{ndog08a, ibra-par,
waf, faz, melesh} is slightly different from that of Forsyth
~\cite{for-inv} and earlier writers on the subject. What is now
commonly called semi-invariants are functions of the form $\Phi(C,
C_{(r)})$ that satisfy a relation of the form $\Phi(C, C_{(r)}) =
\mathbf{w}(\sigma)\cdot \Phi (\bar{C}, \bar{C}_{(r)}).$ When the
weight $\mathbf{w}(\sigma)$ is equal to one, $\Phi(C, C_{(r)})$ is
called an invariant, or an absolute invariant. Semi-invariants
usually correspond to partial structure invariance groups obtained
by leaving either the dependent variable or the independent variables
unchanged.\par
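A classical example, standard in the literature and not specific to the works cited here, illustrates the notion: for $y''+a_1y'+a_0y=0,$ the substitution $y=u\exp(-\tfrac{1}{2}\int a_1\,dx)$ removes the first-derivative term and yields $u''+Iu=0$ with $I=a_0-\tfrac{1}{4}a_1^2-\tfrac{1}{2}a_1',$ so that $I$ is a semi-invariant associated with transformations of the dependent variable alone. The following SymPy sketch verifies this identity mechanically.

```python
# Verify the classical reduction y'' + a1 y' + a0 y = 0  ->  u'' + I u = 0,
# with I = a0 - a1**2/4 - a1'/2 (a standard textbook semi-invariant).
import sympy as sp

x = sp.symbols('x')
a0 = sp.Function('a0')(x)
a1 = sp.Function('a1')(x)
u = sp.Function('u')(x)

E = sp.exp(-sp.Integral(a1, x) / 2)      # gauge factor removing y'
y = E * u
lhs = sp.diff(y, x, 2) + a1 * sp.diff(y, x) + a0 * y

I_semi = a0 - a1**2 / 4 - sp.diff(a1, x) / 2
residual = sp.simplify(lhs / E - (sp.diff(u, x, 2) + I_semi * u))
print("residual:", residual)             # 0 confirms the identity
```

The same style of mechanical check applies to the higher-order semi-invariants discussed below.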
Because the infinitesimal method proposed in ~\cite{ndogftc} is
still entirely new and has so far been applied only to a
couple of examples, we wish to illustrate a comparison of this
method with the well-known method of ~\cite{ibra-not}. For this
purpose we consider Eq. ~\eqref{eq:stdlin} with $n=3,$ which is the
lowest order for which a linear ODE may have nontrivial invariants.
The structure invariance group of ~\eqref{eq:stdlin} is given by the
change of variables $x= f(\bar{x}), \; y= T(\bar{x}) \bar{y},$ where
$f$ and $T$ are arbitrary functions, and for $n=3,$ this equation
takes the form
\begin{equation}\label{eq:d3gn}
y^{(3)} + a_2 y'' + a_1 y' + a_0 y=0,
\end{equation}
and it is easy to see that the corresponding infinitesimal
transformations associated with the structure invariance group may
be written in the form
\begin{equation}\label{eq:infid3}
\bar{x} \approx x+ \varepsilon f(x), \qquad \bar{y} \approx y+
\varepsilon (g(x)\, y),
\end{equation}
where $f$ and $g$ are some arbitrary functions. If we now let
$$V_a^{(3)}= f \,\partial_x + y\, g \,\partial_y + \zeta_1 \,\partial_{y'} + \zeta_2 \,\partial_{y''}
+\zeta_3 \,\partial_{y'''} $$
denote the third prolongation of $ V_a= f \,\partial_x + y\, g \,\partial_y,$
then we may write
\begin{equation}
\label{eq:yppp} \bar{y}\,'\approx y' + \varepsilon\, \zeta_1 (x, y,
y'), \quad \bar{y}\,'' \approx y''+\varepsilon\, \zeta_2 (x, y,
y', y''), \quad \bar{y}\,^{(3)}\approx y^{(3)} + \varepsilon
\,\zeta_3(x, y_{(3)}).
\end{equation}
According to the former method ~\cite{ibra-not}, in order to find
the corresponding infinitesimal transformations of the coefficients
$a_2, a_1,$ and $a_0$ of ~\eqref{eq:d3gn}, and hence to obtain the
infinitesimal generator for the associated induced group $G_c,$ the
original dependent variable $y$ and its derivatives are to be
expressed in terms of $\bar{y}$ and its derivatives according to
approximations of the form
\begin{equation}
\label{eq:yppp2} y' \approx \bar{y}\,' - \varepsilon\, \zeta_1(x,
y, \bar{y}\,'), \quad y''\approx \bar{y}\,'' - \varepsilon\,\zeta_2
(x,y, \bar{y}\,', \bar{y}\,''), \quad y^{(3)}\approx \bar{y}\,^{(3)}
- \varepsilon \,\zeta_3(x,y, \bar{y}_{(3)}).
\end{equation}
These expressions are then used to substitute $\bar{y}$ and its
derivatives for $y$ and its derivatives in the original differential
equation. In certain cases such as that of simple infinitesimal
transformations of the form ~\eqref{eq:infid3}, the required
approximations ~\eqref{eq:yppp2} can be obtained from
~\eqref{eq:yppp}. More precisely, an explicit calculation of
$V_a^{(3)}$ shows that
\begin{subequations}\label{eq:zetad3}
\begin{align}
\zeta_1 &= y g' + (g-f')y' \\
\zeta_2 &= y'(2 g'-f'') + y g'' + (g- 2f')y''\\
\zeta_3 &= 3(g'-f'')y''+ y'(3 g'' - f''')+ y g'''+ (g-3f')y'''
\end{align}
\end{subequations}
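The coefficients in \eqref{eq:zetad3} follow from the standard prolongation recursion $\zeta_k = D_x(\zeta_{k-1}) - y^{(k)} D_x(f),$ with $\zeta_0 = g\,y$ and $D_x$ the total derivative; since all quantities here depend on $x$ alone, the recursion can be checked mechanically. The following SymPy sketch (a verification aid, not part of the derivation) confirms the three formulas.

```python
# Verify zeta_1, zeta_2, zeta_3 of Eq. (zetad3) from the recursion
# zeta_k = D_x(zeta_{k-1}) - y^(k) * D_x(f), with zeta_0 = g*y.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)
y = sp.Function('y')(x)

zetas = [g * y]                          # zeta_0 = eta
for k in range(1, 4):
    zetas.append(sp.diff(zetas[-1], x) - sp.diff(y, x, k) * sp.diff(f, x))

yp, ypp, yppp = (sp.diff(y, x, k) for k in (1, 2, 3))
fp, fpp, fppp = (sp.diff(f, x, k) for k in (1, 2, 3))
gp, gpp, gppp = (sp.diff(g, x, k) for k in (1, 2, 3))

paper = [  # expressions (zetad3a)-(zetad3c) as printed in the text
    y * gp + (g - fp) * yp,
    yp * (2 * gp - fpp) + y * gpp + (g - 2 * fp) * ypp,
    3 * (gp - fpp) * ypp + yp * (3 * gpp - fppp) + y * gppp + (g - 3 * fp) * yppp,
]
ok = all(sp.simplify(c - s) == 0 for c, s in zip(zetas[1:], paper))
print("zeta_1..zeta_3 match Eq. (zetad3):", ok)
```

The same recursion extends without change to the higher-order prolongations needed for equations of order four and five.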
The first explicit approximation is readily obtained from the second
equation of ~\eqref{eq:infid3} which shows that
\begin{equation}\label{eq:rly}
y= \bar{y} \frac{1}{1+ \varepsilon g}= \bar{y} (1- \varepsilon g),
\end{equation}
by neglecting terms of order two or higher in $\varepsilon.$ In a
similar way, using the equations ~\eqref{eq:yppp2} and
~\eqref{eq:zetad3} we obtain after simplification the following
approximations
\begin{subequations}\label{eq:rlyppp}
\begin{align}
y' &= (\bar{y}\,' - \varepsilon y g')(1- \varepsilon (g-f'))\\
y'' &= \bigl[ \bar{y}\,'' - \varepsilon \left( y' (2 g'-f'')+ y g'' \right) \bigr] \left(1- \varepsilon (g- 2 f') \right)\\
y''' &= \left[ \bar{y}\,^{(3)} - \varepsilon \left( 3 (g'-f'')y''+
y' (3 g'' - f''') + y g^{(3)} \right) \right] \left( 1- \varepsilon
(g- 3 f') \right)
\end{align}
\end{subequations}
Substituting equations ~\eqref{eq:rly} and ~\eqref{eq:rlyppp} into
~\eqref{eq:d3gn} and rearranging shows that the corresponding
infinitesimal transformations for the coefficients of the equation
are given by
\begin{subequations}\label{eq:ifcofd3}
\begin{align}
\bar{a}_0 & = a_0 + \varepsilon \bigl[ - 3 a_0 f'- a_1 g' - a_2 g'' - g^{(3)} \bigr] \\
\bar{a}_1 &= a_1 + \varepsilon \bigl[ - 2 a_1 f' - 2 a_2 g'+ a_2 f'' - 3 g'' + f^{(3)}\bigr]\\
\bar{a}_2 &= a_2 + \varepsilon \bigl[ - a_2 f' - 3g' + 3 f'' \bigr].
\end{align}
\end{subequations}
In other words, the infinitesimal generator of the group $G_c$
corresponding to ~\eqref{eq:d3gn} is given by
\begin{equation}\label{eq:generic3}
\begin{split} X^0=&f \,\partial_x \\
&+ ( - a_2 f' - 3g' + 3 f'' ) \,\partial_{a_2} \\
&+ (- 2 a_1 f' - 2 a_2 g'+ a_2 f'' - 3 g'' + f^{(3)}) \,\partial_{a_1}\\
&+ (- 3 a_0 f'- a_1 g' - a_2 g'' - g^{(3)} ) \,\partial_{a_0}
\end{split}
\end{equation}
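As a consistency check on \eqref{eq:generic3}, one can verify that the prolonged action of the full generator on $\Delta = y^{(3)} + a_2 y'' + a_1 y' + a_0 y$ vanishes on solutions; in fact the identity $X^{(3)}(\Delta) = (g - 3 f')\,\Delta$ holds. The SymPy sketch below (again an aid, not part of the text's derivation) performs this check.

```python
# Consistency check: the prolonged generator applied to
# Delta = y''' + a2 y'' + a1 y' + a0 y equals (g - 3 f') * Delta.
import sympy as sp

x = sp.symbols('x')
a0, a1, a2 = sp.symbols('a0 a1 a2')      # coefficients as dependent variables
f = sp.Function('f')(x)
g = sp.Function('g')(x)
y = sp.Function('y')(x)

yp, ypp, yppp = (sp.diff(y, x, k) for k in (1, 2, 3))
fp, fpp, fppp = (sp.diff(f, x, k) for k in (1, 2, 3))
gp, gpp, gppp = (sp.diff(g, x, k) for k in (1, 2, 3))

# prolongation coefficients, Eq. (zetad3)
z1 = y * gp + (g - fp) * yp
z2 = yp * (2 * gp - fpp) + y * gpp + (g - 2 * fp) * ypp
z3 = 3 * (gp - fpp) * ypp + yp * (3 * gpp - fppp) + y * gppp + (g - 3 * fp) * yppp

# coefficient components of X^0, Eq. (generic3)
phi2 = -a2 * fp - 3 * gp + 3 * fpp
phi1 = -2 * a1 * fp - 2 * a2 * gp + a2 * fpp - 3 * gpp + fppp
phi0 = -3 * a0 * fp - a1 * gp - a2 * gpp - gppp

Delta = yppp + a2 * ypp + a1 * yp + a0 * y
action = (a0 * g * y                     # eta = g*y acting through d/dy
          + a1 * z1 + a2 * z2 + z3
          + phi0 * y + phi1 * yp + phi2 * ypp)
residual = sp.simplify(action - (g - 3 * fp) * Delta)
print("X(Delta) - (g - 3 f') Delta =", residual)
```

A zero residual confirms that the components of $X^0$ are mutually consistent with the prolongation formulas used to derive them.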
The other method that has just been proposed in ~\cite{ndogftc} for
finding the infinitesimal generators $X^0$ of the group $G_c$ for a
given differential equation simultaneously determines the
equivalence group of the equation, in infinitesimal form. The first
step
in the determination of $X^0$ is, according to that method, to look
for the infinitesimal generator $X$ of ~\eqref{eq:d3gn}, in which
the arbitrary coefficients are also considered as dependent
variables. For a general equation of the form ~\eqref{eq:delta}, $X$
takes the form
$$
X= \xi_1 \,\partial_{x^1} +\dots + \xi_p \,\partial_{x^p} + \eta_1 \,\partial_{y_1} +
\dots + \eta_q \,\partial_{y_q} + \phi_1 \,\partial_{C^1} + \phi_2 \,\partial_{C^2} +
\dots ,
$$
where $(C^1, C^2, \dots)=C$ denotes the set of parameters
specifying the family element in $\mathcal{F}.$ This expression for
$X$ may be written in the shorter form $X= \set{\xi, \eta, \phi},$
where $\xi, \eta$ and $\phi$ represent collectively the functions
$\xi_j, \eta_j$ and $\phi_j$, respectively. The next step according
to the method is to look at minimum conditions that reduce $V=
\set{\xi, \eta}$ to a generator $V^0 = \set{\xi^0, \eta^0}$ of the
group $G$ of equivalence transformations. These conditions are also
imposed on $\phi$ to obtain the resulting function $\phi^0,$ and
hence the generator $X^0= \set{\xi^0, \phi^0}$ of $G_c.$\par
In the actual case of equation ~\eqref{eq:d3gn} we readily find that
the generator
\begin{equation}\label{eq:symd3gn}
X= \xi \,\partial_x + \eta \,\partial_y + \phi_2 \,\partial_{a_2} + \phi_1 \,
\,\partial_{a_1} + \phi_0 \,\partial_{a_0}
\end{equation}
is given by
\begin{align*}
\xi &= f \\
\eta &= g y+ h\\
\phi_2 &= - a_2 f' - 3g' + 3 f''\\
\phi_1 &= - 2 a_1 f'- 2a_2 g' + a_2 f'' - 3 g'' + f^{(3)}\\
\phi_0 &=- \frac{1}{y}\left[ a_0 h + 3 a_0 y f' + a_1 y g' + a_1
h' + a_2 y g'' + a_2 h'' + y g^{(3)} + h^{(3)} \right],
\end{align*}
where $f,g$ and $h$ are arbitrary functions of $x.$ It is clear
that, due to the homogeneity of ~\eqref{eq:d3gn}, we must have
$h=0.$ This simple condition immediately reduces $V= \set{f, y g+h}$
to the well-known generator $V^0= \set{f, y g}$ of $G.$ If we denote
by $\phi^0= \set{\phi_2^0, \phi_1^0, \phi_0^0}$ the resulting vector
when the same condition $h=0$ is applied to $\phi= \set{\phi_2,
\phi_1, \phi_0},$ then $X^0= \set{f, \phi^0},$ which may be
represented as
\begin{equation}\label{eq:ifd3gnb}
X^0= f \,\partial_x + \phi_2^0 \,\partial_{a_2} + \phi_1^0 \,\partial_{a_1} + \phi_0^0
\,\partial_{a_0},
\end{equation}
is the same as the operator $X^0$ of ~\eqref{eq:generic3}.\par
It
should however be noted that the former method involves a great deal
of algebraic manipulations, and in the actual case of Eq.
~\eqref{eq:d3gn}, the method works because of the simplicity of the
infinitesimal generator $V_a= \set{f, y g}.$ For higher orders of
the Eq. ~\eqref{eq:stdlin}, difficulties with this method become
quite serious. In fact, variants of this former method are being
increasingly used, which also exploit the symmetry properties of the
equation ~\cite{ibra-par}, but even these variants still require the
knowledge of the equivalence group.
\section{Invariants for equations of lowest orders}
We shall use the indicated method of ~\cite{ndogftc} in this section
to derive explicit expressions for invariants of linear ODEs of
order not higher than the fifth. It should be noted that not only
is the infinitesimal method we are using for finding these
invariants new, but almost all of the invariants found appear in
explicit form for the first time. The focus of Forsyth and his
contemporaries was not only on linear equations of the form
~\eqref{eq:delta}, but also on functions $\Phi(C, C_{(r)})$
satisfying conditions of the form
\begin{equation}\label{eq:forminv}
\Phi(C, C_{(r)})= h(g)\cdot \Phi(\bar{C}, \bar{C}_{(r)}), \quad
\text{ or } \quad \Phi(C, C_{(r)})= (d \bar{x}/ d x)^\sigma \cdot
\Phi(\bar{C}, \bar{C}_{(r)}),
\end{equation}
in the case of linear ODEs, where $h$ is an arbitrary function,
$\sigma$ is a scalar and where
$g= g(\bar{x})$ and $\bar{x}$ are defined by a change of variables
of the form $x= f(\bar{x}),\; y= g\, \bar{y},$ where $f$ is
arbitrary. Such functions are clearly invariants of the linear
equation only if the dependent variable alone, or the independent
variable alone, is transformed, but not both as in our analysis.
However, in his very skilful analysis starting with the
investigation of semi-invariants satisfying the second condition of
~\eqref{eq:forminv}, Forsyth obtained in his memoir ~\cite{for-inv}
a very implicit expression for true invariants of linear ODEs of a
general order, given as an indefinite sequence in terms of
semi-invariants. As it is stated in ~\cite{for-inv}, earlier works
on the subject had given rise only to semi-invariants of order not
higher than the fourth, with the exception of two special functions
which satisfy the condition of invariance for equations of all
orders.\par
\subsection{Equations in normal reduced form}
Invariants of differential equations generally have a prominent size
involving several terms and factors, and thus they are usually
studied by first putting the equation in a form in which a number of
coefficients vanish. By a change of variable of the form
\begin{equation}\label{eq:chg2nor}
x=\bar{x},\qquad y = \exp\Big(- \frac{1}{n}\int a_{n-1} \,d \bar{x}\Big) \bar{y},
\end{equation}
equation ~\eqref{eq:stdlin} can be reduced to the simpler form
\begin{equation}\label{eq:nor}
y^{(n)} + A_{n-2} y^{(n-2)}+ A_{n-3} y^{(n-3)}+ \dots + A_0 y=0,
\end{equation}
after the renaming of variables, and where the $A_j= A_j(x)$ are the
new coefficients. Eq. ~\eqref{eq:nor}, which is referred to as the
normal form of ~\eqref{eq:stdlin}, will be as in ~\cite{for-inv}
our starting canonical form for the determination of invariants.
The generic infinitesimal generator $X^0$ of $G_c$ such as the one
found in ~\eqref{eq:generic3} linearly depends on free parameters
$K_j,$ and can be written in terms of these parameters as a linear
combination of the form,
\begin{equation}
\label{eq:linearcomb} X^0= \sum_{j=1}^\nu K_j W_j,
\end{equation}
where the $W_j$ are much simplified vector fields free of
arbitrary parameters and with the same number of
components as $X^0,$ and a function $F=F(C)$ satisfies the
condition of invariance $X^0 \cdot F=0$ if and only if $W_j \cdot
F=0,$ for $j=1, \dots, \nu.$ Consequently, the invariant functions
are completely specified by the matrix $\mathcal{M}\equiv \set{W_1, \dots,
W_\nu}$ whose $j$th row is represented by the components of $W_j,$
plus the coordinate system in which the vector fields $W_j$ are
expressed. These considerations also apply to prolongations of
$X^0$ and corresponding differential invariants. For a given canonical
form of the equation, we will use the symbols $\Psi_{a}^{b,c}$ and
$X_a^m$ to represent explicit expressions of invariants and
corresponding infinitesimal generator $X^0,$ respectively. In such a
representation, the subscript $a$ will denote the order of the
equation, while the superscript $b$ will represent the order of
prolongation of the original generator $X^0,$ and $c$ will represent
the actual number of the invariant in some chosen order. The
superscript $m$ will represent the canonical form considered, and
will be $\mathsf{n}$ in the case of the normal form
~\eqref{eq:nor}, $\mathsf{s}$ for the standard form
~\eqref{eq:stdlin}, and $\mathsf{w}$ for another canonical form to
be introduced below. For consistency, coefficients in all canonical
forms considered will be represented as in ~\eqref{eq:stdlin} by the
symbols $a_j= a_j(x).$\par
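As a concrete illustration of the decomposition ~\eqref{eq:linearcomb}, consider the generator $X_3^\mathsf{w}$ obtained below for the canonical form ~\eqref{eq:sch} with $n=3$; collecting the arbitrary constants $k_1, k_2, k_3$ gives

```latex
X_3^\mathsf{w} = k_1 W_1 + k_2 W_2 + k_3 W_3, \qquad
W_1 = \partial_x, \quad
W_2 = x\,\partial_x - 3 a_0\,\partial_{a_0}, \quad
W_3 = x^2\,\partial_x - 6 a_0 x\,\partial_{a_0},
```

so that the invariant functions $F=F(x, a_0)$ are precisely the common solutions of $W_1 \cdot F = W_2 \cdot F = W_3 \cdot F = 0.$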
For $n=3,$ the first nontrivial differential invariants of
~\eqref{eq:nor} occur only as from the third prolongation of $X^0.$
This third prolongation of the generator $X^0$ has exactly one
invariant $\Psi_3^{3,1},$ and both $X^0=X_3^\mathsf{n}$ and
$\Psi_3^{3,1}$ have already appeared in the recent literature
~\cite{ndogftc}. However, we recall these results here in a slightly
simplified form, together with those we have now obtained for the
fourth prolongation of $X_3^\mathsf{n},$ as well as for the case
$n=4.$
\par
For $n=3,$ we have
\begin{subequations} \label{eq:casnord3}
\begin{align}
X_3^\mathsf{n} &= f \,\partial_x -2 (a_1 f' + f^{(3)}) \,\partial_{a_1} + \left(-
3 a_0 f' - a_1 f''
- f^{(4)}\right)\,\partial_{a_0} \\
\Psi_3^{3,1} &= \frac{ (9\, a_1
\mu^2+ 7 \mu'\,^2- 6 \mu \mu'')^3}{1728 \mu^8}\\
\Psi_3^{4,1} &= \Psi_3^{3,1} \\
\begin{split} \Psi_3^{4,2} &= \frac{-1}{18 \mu^4} \left( 216 a_0^4 - 324 a_0^3\, a_1' + 18 \, a_0^2 (9 a_1'^2+ 2 a_1
\mu') + 9 \mu^2 \mu^{(3)} \right. \\
& \quad \left. {}+ \mu' (28 \mu'\,^2 + 9 a_1'(a_1\, a_1' - 4 \mu'')) - 9 a_0(3 a_1'^3+ 4 a_1 \, a_1' \mu'- 8 \mu' \mu'') \right) \end{split}
\end{align}
\end{subequations}
where $f$ is an arbitrary function and where $\mu= -2a_0+ a_1'.$ For
$n=4,$ the first nontrivial invariant occurs only as from the second
prolongation of $X^0= X_4^\mathsf{n},$ and we have
\begin{subequations}\label{eq:norfn4}
\begin{align}
\begin{split} X_4^\mathsf{n} &= f\,\partial_x +(- 2 a_2 f' - 5 f^{(3)}) \,\partial_{a_2} + (-3 a_1
f' - 2 a_2 f'' - 5 f^{(4)})\,\partial_{a_1}\\
& \quad +\frac{1}{2} \left[ -8\, a_0
f' - 3 (a_1 f'' + a_2 f^{(3)} )\right]
\,\partial_{a_0}\end{split} \\
\Psi_4^{2,1} &= \frac{1}{\mu} (-100 a_0 + 9 a_2^2 + 50 a_1' - 20 a_2'') \\
\begin{split} \Psi_4^{2,2} &= \mu_1 (-1053 a_2^4 - 4500
a_2(2 a_1^2 - a_1 a_2' - a_2'^{\,2}) + 180 a_2^2 (130 a_0 - 3 (5
a_1'+ 8\, a_2'')) \\
& - 100 (1300 a_0^2+75 \mu (-10 a_0'+ 3 a_1'') + 90 a_1' a_2'' + 27
a_2''^{\,2}-60 a_0 (5 a_1'+ 8\, a_2'') ), \end{split}
\end{align}
\end{subequations}
where $\mu= a_1- a_2',$ and $\mu_1= 1/\left[75000
\mu^{8/3}\right].$
\subsection{Equations with vanishing coefficients $a_{n-1}$ and
$a_{n-2}$} A much simpler expression for the invariants is obtained
if the two terms involving the coefficients $a_{n-1}$ and $a_{n-2}$
in Eq. ~\eqref{eq:stdlin} are reduced to zero in the transformed
equation. Such a change of variables can be accomplished by a
transformation of the form
\begin{subequations}\label{eq:chg2sch}
\begin{align}
\set{\bar{x}, x}&= \frac{12}{n (n-1)(n+1)} a_{n-2}, \qquad y =
\exp(- \int a_{n-1} d \bar{x}) \bar{y},\\
\intertext{ where } \set{\bar{x}, x} &= \left(\bar{x}\,'
\bar{x}\,^{(3)} - (3/2) \bar{x}\,''^{\,2} \right)\,
\bar{x}\,'^{\,-2}
\end{align}
\end{subequations}
is the Schwarzian derivative, and where $\bar{x}\,' = d \bar{x}/
dx.$ Thus by an application of ~\eqref{eq:chg2sch} to
~\eqref{eq:stdlin}, we obtain after the renaming of variables and
coefficients an equation of the form
\begin{equation}\label{eq:sch}
y^{(n)} + a_{n-3} y^{(n-3)} + a_{n-4} y^{(n-4)} + \dots + a_0 y=0.
\end{equation}
In fact, ~\eqref{eq:sch} is the canonical form that Forsyth
~\cite{for-inv}, Brioschi ~\cite{brioch}, and some of their
contemporaries adopted for the investigation of invariant functions
of linear ODEs. However, Forsyth who studied these equations for a
general order did not derive any explicit expressions for the
invariants, with the exception of a couple of semi-invariants. There
is a number of important facts that occur in the determination of
the invariants when the equation is put into the reduced form
~\eqref{eq:sch}, but we postpone the discussion on the properties of
invariants to the next Section, where a formal result regarding
their exact number is also given.\par
For $n=3,$ nontrivial invariants exist only as from the second
prolongation of $X^0= X_3^\mathsf{w},$ and we have computed them for
the second and the third prolongation. For $n=4,$ there are no
zeroth order invariants and we have computed these invariants for
the first and the second prolongation of $X_4^\mathsf{w}.$ For
$n=5,$ the invariants are given for all orders of prolongation of
$X_5^\mathsf{w},$ from $0$ to $2.$ Since invariants corresponding to
a given generator are also invariants for any prolongation of this
generator, it is only necessary to list the invariants for the
highest order of prolongation of any given generator. In the
canonical form ~\eqref{eq:sch}, and regardless of the order of the
equation, the generator $X^0$ of $G_c$ will depend on three
arbitrary constants that we shall denote by $k_1, k_2$ and
$k_3.$\par
For $n=3,$ we have
\begin{align*}
X_3^\mathsf{w} &= \left[ k_1+ x (k_2 + k_3 x)\right] \,\partial_x - 3 a_0
(k_2+ 2
k_3 x)\,\partial_{a_0}\\
\Psi_3^{3,1} &= \Psi_3^{2,1}= \left(-\frac{7 a_0'\,^2}{6 a_0} + a_0''\right)^3
/a_0^5\\
\Psi_3^{3,2} &= \frac{(28 a_0'^3 - 36 a_0\, a_0'\, a_0'' + 9
a_0^2\, a_0^{(3)})}{9 a_0^4}
\end{align*}
For $n=4,$ we have
\begin{align*}
X_4^\mathsf{w} &= \left[ k_1+ x (k_2 + k_3 x)\right] \,\partial_x - 3 a_1 (k_2 + 2 k_3 x)\,\partial_{a_1} + \left[-3 a_1 k_3 - 4 a_0 (k_2 + 2 k_3 x)\right]\,\partial_{a_0} \\
\Psi_4^{2,1} &= \Psi_4^{1,1}= \frac{(- 2 a_0 + a_1')^3}{a_1^4} \\
\Psi_4^{2,2} &= \Psi_4^{1,2} = \frac{(a_0' - \frac{a_0 (a_0 + 3 a_1')}{3 a_1})^3}{a_1^5} \\
\Psi_4^{2,3} &= \frac{(14 a_0^2 - 14 a_0\, a_1' + 3 a_1\, a_1'')^3}{27 a_1^8}\\
\Psi_4^{2,4} &= \frac{16 a_0^3 + 48 a_0^2\, a_1'+ 9 a_1^2\, a_0'' -
9 a_0\, a_1(6 a_0' + a_1'')}{9 a_1^4}
\end{align*}
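The invariance of these expressions can be checked mechanically. As an illustration, the following Python/SymPy sketch (ours, not part of the original computation) verifies that $\Psi_4^{1,1}$ is annihilated by the first prolongation of the generator $X_4^\mathsf{w}$ given above:

```python
import sympy as sp

x, a0, a1, a0p, a1p, k1, k2, k3 = sp.symbols('x a0 a1 a0p a1p k1 k2 k3')

# components of X_4^w as given above (a0p, a1p stand for a_0', a_1')
xi   = k1 + x*(k2 + k3*x)
phi1 = -3*a1*(k2 + 2*k3*x)
phi0 = -3*a1*k3 - 4*a0*(k2 + 2*k3*x)

# total derivative acting on functions of (x, a0, a1)
def Dx(expr):
    return sp.diff(expr, x) + a0p*sp.diff(expr, a0) + a1p*sp.diff(expr, a1)

# standard first prolongation formula for the coefficient attached to a_1'
phi1_1 = Dx(phi1) - a1p*Dx(xi)

Psi = (-2*a0 + a1p)**3 / a1**4          # Psi_4^{1,1}

XPsi = (xi*sp.diff(Psi, x) + phi1*sp.diff(Psi, a1)
        + phi0*sp.diff(Psi, a0) + phi1_1*sp.diff(Psi, a1p))
assert sp.simplify(XPsi) == 0           # Psi_4^{1,1} is indeed invariant
```

The same check, with the appropriate prolonged coefficients, applies to the other invariants listed here.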
Finally, for $n=5$ we have
\begin{subequations}
\begin{align}
\begin{split}X_5^\mathsf{w} &= \left[ k_1+ x (k_2 + k_3 x)\right] \,\partial_x - 3 a_2 (k_2 + 2 k_3 x)\,\partial_{a_2}\notag \\
& \quad -2 \left[-3 a_2 k_3 +2 a_1 (k_2 + 2 k_3 x)\right]\,\partial_{a_1} + \left[ -4 a_1 k_3 - 5 a_0 (k_2+ 2 k_3 x) \right] \,\partial_{a_0}\end{split} \notag \\
\Psi_5^{2,1} &= \Psi_5^{0,1}= \Psi_5^{1,1}= \frac{\left(3a_0\, a_2 - a_1^2 \right)^3}{27 a_2^8} \label{eq:schp0}\\
\Psi_5^{2,2} &= \Psi_5^{1,2} = \frac{(- a_1 + a_2')^3}{a_2^4} \notag \\
\Psi_5^{2,3} &= \Psi_5^{1,3} = \frac{\left( 6 a_2 a_1' - a_1^2 - 6 a_1\, a_2'\right)^3}{216\, a_2^8}\notag \\
\Psi_5^{2,4} &= \Psi_5^{1,4} =\frac{5 a_1^3 + 9 a_2^2\, a_0' - 3
a_1\, a_2(5 a_0 + 2 a_1')+ 3 a_1^2 \, a_2'}{9 a_2^4}
\end{align}
\end{subequations}
and
\begin{align*}
\Psi_5^{2,5} &= \frac{(7 a_1^2 - 14 a_1\, a_2' + 6 a_2\, a_2'')^3}{216\, a_2^8} \\
\Psi_5^{2,6} &= \frac{4 a_1^3 + 24 a_1^2 \, a_2'+ 9 a_2^2 a_1'' - 9 a_1 a_2(3 a_1'+ a_2'')}{9 a_2^4}\\
\Psi_5^{2,7} &= \frac{\left[18(- a_1^4 - a_1^3\, a_2'+ a_2^3\,
a_0'')- 6 a_1 a_2^2 (11 a_0' + 2 a_1'') + a_1^2 a_2(55 a_0+ 40 a_1'
+ 6 a_2'') \right]^3}{5832\, a_2^{16}}
\end{align*}
\subsection{Equations in the standard form ~\eqref{eq:stdlin}}
No attempt has ever been made to our knowledge, to obtain the
invariant functions for equations in the canonical form
~\eqref{eq:stdlin}, due simply to difficulties associated with such
a determination and to the prominence in size that semi-invariants
for this canonical form already display. Indeed, semi-invariants for
the canonical form ~\eqref{eq:stdlin} were calculated by Laguerre
~\cite{lag} for equations of the third order. Transformations of the
form ~\eqref{eq:chg2nor} or ~\eqref{eq:chg2sch} are usually applied
to transform ~\eqref{eq:stdlin} to an equation in which one or both
of the coefficients $a_{n-1}$ and $a_{n-2}$ vanish. Even with the
infinitesimal generator $X^0$ of ~\eqref{eq:generic3} at our
disposal, it is still difficult to find directly the corresponding
invariants for the third order equation, which is the lowest for
which the linear equation may have nontrivial invariants. However, we
show here by an example how these invariants can be found for the
third order from those of equations in a much simpler canonical
form.\par
Under the change of variables ~\eqref{eq:chg2nor}, Eq.
~\eqref{eq:d3gn} takes, after elimination of a constant factor, the
form
\begin{align}\label{eq:red2nord3}
&\bar{y}^{(3)} + B_1 \bar{y}\,' + B_0 \bar{y}=0, \\[-4mm]
\intertext{where \vspace{-5mm}}
&B_0 = (27 a_0 - 9 a_1\, a_2 + 2 a_2^3 - 9 a_2'')/27\\
&B_1 = (27 a_1 - 9 a_2^2 - 27 a_2')/27.
\end{align}
The sole invariant $\Psi_3^{3,1}$ for equations of the form
~\eqref{eq:red2nord3} and corresponding to the third prolongation of
the associated generator $X^0$ is given by ~\eqref{eq:casnord3}. If
in that function we replace $a_0$ and $a_1$ by $B_0$ and $B_1$
respectively, and then express $B_0$ and $B_1$ in terms of the
original coefficients $a_0, a_1$ and $a_2,$ the resulting function
is an invariant of ~\eqref{eq:d3gn}. However, it does not correspond
to the third, but to the fourth prolongation of the corresponding
generator $X^0= X_3^\mathsf{s}$ that was obtained in
~\eqref{eq:generic3}. In other words, in the canonical form
~\eqref{eq:stdlin} with $n=3,$ we have
\begin{align}
X_3^\mathsf{s} &= X^0, \;\text{as given by ~\eqref{eq:generic3}} \notag \\
\Psi_3^{4,1} &= \frac{ (9\, B_1 \mu^2+ 7 \mu'^2- 6 \mu
\mu'')^3}{1728\, \mu^8 }, \label{eq:ivd3gn}
\end{align}
where we have $\mu = -2 B_0 + B_1'.$ Using Theorem 2 of
~\cite{ndogftc}, it can be shown that there are no invariants
corresponding to prolongations of $X_3^\mathsf{s}$ of order lower
than the fourth, and that the fourth prolongation has precisely one
invariant.
\section{A note about the infinitesimal generators and the invariants}
\label{s:note} The method of ~\cite{ndogftc} that we have used in
the previous section provides the infinitesimal generator of the
equivalence group $G$ for a given equation of a specific order. This
infinitesimal generator must be integrated in order to obtain the
corresponding structure invariance group for the given family of
equations of a specific order. However, once the structure
invariance group for such a given equation has been found, it is
generally not hard to extend the result to a general order. We show
how this can be done for equations of the form ~\eqref{eq:sch}.\par
With the notations already introduced in Section \ref{s:2methods},
if we denote by $X$ the full symmetry generator of ~\eqref{eq:sch}
for a specific order and then obtain the infinitesimal generator
$V^0$ of the equivalence group $G,$ we find that
\begin{subequations}
\begin{alignat}{2}\label{eq:ifshd3-5}
V^0= V_{0,3} &= \left[ k_2 + x (k_3+ k_4 x) \right] \,\partial_x + y (k_1+ 2 k_4 x) \,\partial_y,&& \quad \text{ for $n=3,$}\\
V^0= V_{0,4} &= \left[ k_2 + x (k_3+ k_4 x) \right] \,\partial_x + \frac{y}{3} (2 k_1+ 2 k_3 + 9 k_4 x)\,\partial_y,&& \quad \text{ for $n=4,$}\\
V^0= V_{0,5} &= \left[ k_2 + x (k_3+ k_4 x) \right] \,\partial_x + y (k_1
+ k_3 + 4 k_4 x)\,\partial_y,&& \quad \text{ for $n=5,$}
\end{alignat}
\end{subequations}
where the $k_j$, for $j=1, \dots, 4,$ are arbitrary constants.
Upon integration of these three vector fields, we find that for
$n=3,4,5,$ the structure invariance group can be written in the
general form
\begin{equation}\label{eq:sigp3-5}
x= \frac{\bar{x} - c(1+ a \bar{x})}{b(1+ a\bar{x})}, \qquad y =
\frac{\bar{y}}{d(1+ a \bar{x})^{(n-1)}},
\end{equation}
where $a,b, c$ and $d$ are arbitrary constants. To see why
~\eqref{eq:sigp3-5} holds for any order $n$ of Eq. ~\eqref{eq:sch},
we first notice that these changes of variables are of the much
condensed form
$$
x=g (\bar{x}), \qquad y = T(\bar{x}) \bar{y}.
$$
It thus follows from the properties of derivations that when
~\eqref{eq:sigp3-5} is applied to ~\eqref{eq:sch}, every term of
order $m$ in $y$ will involve a term of order at most $m$ in
$\bar{y}$ upon transformation. Since the original equation
~\eqref{eq:sch} is deprived of terms of orders $n-1$ and $n-2,$ to
ensure that this same property also holds in the transformed
equation, we only need to verify that the transformation of the term
of highest order, viz. $y^{(n)},$ does not involve terms in
$\bar{y}^{(n-1)}$ or $\bar{y}^{(n-2)}.$ However, it is easy to see
that under ~\eqref{eq:sigp3-5}, $y^{(n)}$ is transformed into $b^n
(1+ a \bar{x})^{n+1} \bar{y}^{(n)}/d.$ We have thus obtained the
following result.
\begin{thm}\label{th:sigpsch}
The structure invariance group of linear ODEs in the canonical form
~\eqref{eq:sch}, that is, in the form
$$ y^{(n)} + a_{n-3} y^{(n-3)} + a_{n-4} y^{(n-4)} + \dots + a_0 y=0,$$
is given for all values of $n>2$ by ~\eqref{eq:sigp3-5}.
\end{thm}
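The key computational step above, namely that $y^{(n)}$ transforms into $b^n (1+ a \bar{x})^{n+1} \bar{y}^{(n)}/d$ with no lower order terms, can also be confirmed symbolically. The following Python/SymPy sketch (ours, written out for $n=3$) performs this check:

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')   # t plays the role of x-bar
ybar = sp.Function('ybar')(t)
n = 3

# the change of variables (sigp3-5), with x expressed in terms of x-bar = t
x = (t - c*(1 + a*t))/(b*(1 + a*t))
y = ybar/(d*(1 + a*t)**(n - 1))

# apply d/dx = (d/dt) / (dx/dt) to y, n times
expr = y
for _ in range(n):
    expr = sp.diff(expr, t)/sp.diff(x, t)

claim = b**n*(1 + a*t)**(n + 1)/d * sp.diff(ybar, t, n)
# no ybar^{(n-1)}, ybar^{(n-2)}, ..., ybar terms survive the transformation
assert sp.simplify(expr - claim) == 0
```

Replacing $n=3$ by any other value of $n>2$ checks the theorem for that order.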
It is easy to prove a similar result using the same method for
equations in the other canonical forms ~\eqref{eq:stdlin} and
~\eqref{eq:nor}, for which the corresponding structure invariance
groups are well-known. It is also clear that ~\eqref{eq:sigp3-5}
does not violate in particular the structure invariance group of
~\eqref{eq:nor}, which is given in ~\cite{schw} by $x= F (\bar{x})$ and
$y= \alpha F'^{(n-1)/2} \bar{y},$ where $F$ is an arbitrary function
and $\alpha$ an arbitrary coefficient.
\par
All the invariants that we have found for linear ODEs are rational
functions of the coefficients of the corresponding equation and
their derivatives, and this agrees with earlier results
~\cite{for-inv}. Another fact about these invariants is that for
equations in the form ~\eqref{eq:sch}, nontrivial invariants that
depend exclusively on the coefficients of the equation and not on
their derivatives, that is, invariants such as that in
~\eqref{eq:schp0} which are zeroth order differential invariants of
$X^0,$ occur as from the order $5$ onwards. Indeed, since $X^0$
depends only on three arbitrary constants in the case of the
canonical form ~\eqref{eq:sch}, there will be more invariants, as
compared to the other canonical forms in which $X^0$ depends on
arbitrary functions. More precisely, we have the following result.
\begin{thm}
For equations in the canonical form ~\eqref{eq:sch}, the exact
number $\mathbf{\small n}$ of fundamental invariants for a given
prolongation of order $p \geq 0$ of the infinitesimal generator
$X^0$ is given by
\begin{equation}\label{eq:nbinvsch}
\mathbf{\small n}=\begin{cases} n + p\,(n-2) -4, & \text{if $(n,p)
\neq (3,0)$} \\
0, & \text{otherwise}.
\end{cases}
\end{equation}
\end{thm}
\begin{proof}
We first note that the lowest order for equations of the form
~\eqref{eq:sch} is $3,$ and we have seen that when $n=3,4,5$ the
generator $X^0$ depends on three arbitrary constants alone, that we
have denoted by $k_1, k_2$ and $k_3,$ and does not depend on any
arbitrary functions. Consequently, this is also true for any
prolongation of order $p$ of $X^0,$ and it is also not hard to see
that this property does not depend on the order of the equation. It
is clear that the corresponding matrix $\mathcal{M}$ whose $j$th row
is represented by the components of the vector field $W_j$ of
~\eqref{eq:linearcomb} will always have three rows and maximum rank
$\mathsf{r} =3$ when $(n,p) \neq (3,0).$ On the other hand, $X^0$ is
expressed in a coordinate system of the form $\set{x, a_{n-3},
a_{n-4}, \dots, a_0}$ that has $n-1$ variables. Its $p$th
prolongation is therefore expressed in terms of $M=n-1 + p(n-2)$
variables. For $(n,p)\neq (3,0),$ the exact number of fundamental
invariants is $M-\mathsf{r},$ which is $n + p(n-2) -4,$ by Theorem 2
of
~\cite{ndogftc}. For $(n,p)=(3,0)$ a direct calculation shows that
the number of invariants is zero. This completes the proof of the
Theorem.
\end{proof}
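The count ~\eqref{eq:nbinvsch} can be checked directly against the invariants listed in the previous section:

```latex
(n,p)=(3,3):\ 3+3(3-2)-4=2, \qquad
(n,p)=(4,2):\ 4+2(4-2)-4=4, \qquad
(n,p)=(5,2):\ 5+2(5-2)-4=7,
```

in agreement with the two invariants $\Psi_3^{3,1}$ and $\Psi_3^{3,2},$ the four invariants $\Psi_4^{2,1}, \dots, \Psi_4^{2,4},$ and the seven invariants $\Psi_5^{2,1}, \dots, \Psi_5^{2,7}$ obtained there.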
Similar results can be obtained for equations in other canonical
forms, but as it is customary to choose the simplest canonical form
for the study of invariants, we shall not attempt to prove them
here. \par
It should be noted at this point that invariant functions have
always been intimately associated with important properties of
differential equations, and in particular with their integrability.
Laguerre \cite{lag} showed that for the third order equation
\eqref{eq:d3gn}, when the semi-invariant $\mu= - 2 B_0+ B_1'$ given
in Eq. \eqref{eq:ivd3gn} vanishes, there is a quadratic homogeneous
relationship between any three integrals of this equation, and
consequently its integrability is reduced to that of a second order
equation. Subsequently, Brioschi \cite{brioch} established a similar
result for equations in the canonical form \eqref{eq:nor}. More
precisely, he showed that for third order equations, when the
semi-invariant $\mu= -2 a_0+ a_1'$ of Eq. \eqref{eq:casnord3}
vanishes, the equation can be reduced in order by one to the second
order equation
$$ \bar{y}'' + \frac{a_1}{4} \bar{y}=0 ,$$
where $y= \bar{y}^2$ and he also gave the counterpart of this
result for fourth order equations when the corresponding
semi-invariant $\mu=a_1-a_2'$ of Eq. \eqref{eq:norfn4} vanishes. In
fact all reductions of differential equations to integrable forms
implemented in \cite{halphen} are essentially based on invariants
of differential equations. More recently, Schwarz \cite{schw}
attempted to obtain a classification of third order linear ODEs
based solely on values of invariants of these equations. He also
proposed \cite{schw2} a solution algorithm for large families of
Abel's equation, based on what is called in that paper a functional
decomposition of the absolute invariant of the equation.
\section{Concluding Remarks}\label{s:concl}
We have clarified in this paper the algorithm of the novel method
that has just been proposed in ~\cite{ndogftc} for the equivalence
group of a differential equation and its generators, and we have
shown how its application to the determination of invariants differs
from the former and well-known method of
~\cite{ibra-not, ibra-nl},
by treating the case of the third order linear ODEs with the two
methods. We have subsequently obtained some explicit expressions of
the invariants for linear ODEs of orders three to five, and
discussed various properties of these invariants and of the
infinitesimal generator $X^0.$\par
Another fact that has emerged concerning the infinitesimal generator $X^0$
of the group $G_c$ for linear ODEs is that, whenever the equation
is transformed into a form in which the number of coefficients in
the equation is reduced by one, the number of arbitrary functions in
the generator $X^0$ corresponding to the transformed equation is
also reduced by one, regardless of the order of the equation. Thus
while $X_a^\mathsf{s}$ depends on two arbitrary functions,
$X_a^\mathsf{n}$ depends only on one such function and $X_a^\mathsf{w}$
depends on no such function, but on three arbitrary constants. This
is clearly in agreement with the expectation that equations with
fewer coefficients are easier to solve, since fewer arbitrary
functions in $X^0$ mean more invariants and hence more possibilities
of reducing the equation to a simpler one. However, the full meaning
of the degeneration of these functions with the reduction of the
number of coefficients of the equation is still to be clarified. We
also believe that on the basis of recent progress on generating
systems of invariants and invariant differential operators (see
~\cite{olvmf, olvgen} and the references therein), it should be
possible to treat the problem of the determination of invariants of
differential equations in a more unified way, regardless of the
order of the equation, or the order of prolongation of the operator
$X^0.$
\section{Introduction}
The high temperature plasma phase of QCD is characterized by the occurrence of
chromo-electric and -magnetic screening masses ($m_e$ and $m_m$) which control
the infrared behaviour of the theory \cite{Lin80}. It has been known for a long
time that the lowest order perturbative result for the electric mass in pure
gauge theory is $m_{e,0}(T) = \sqrt{N_c/3} \, g(T) \, T$. This is sufficient to
cure infrared divergences of ${\cal O}(gT)$. The magnetic mass is known to
be entirely of non-perturbative origin, as all orders in perturbation theory
would contribute equally. However, a dependence of the form
$m_m \sim {\cal O}(g^2T)$ is widely believed and would cure the higher order
infrared divergences of ${\cal O}(g^2T)$. If one indeed finds $m_m \ne 0$, then
$m_m$ contributes in next-to-leading order perturbation theory to
$m_e$ \cite{Re9394},
\begin{eqnarray}
m^2_e(T) \mbox{\hspace*{-1.5ex}} & = & \mbox{\hspace*{-1.5ex}}
m^2_{e,0} \left( 1 + \right. \nonumber \\
& & \mbox{\hspace*{-8ex}}\left. \frac{\sqrt{6}}{2 \pi} \, g(T)
\frac{m_e}{m_{e,0}} \left[ \log \frac{2 \, m_e}{m_m} - \frac{1}{2} \right] +
{\cal O}(g^2) \right) \; .
\label{rebscreen}
\end{eqnarray}
We note that if $m_m \sim {\cal O} (g^2T)$
the next-to-leading order correction to $m_e$ is ${\cal O} (g\ln g)$.
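Since $m_e$ appears on both sides of Eq.~(\ref{rebscreen}), the next-to-leading order value has to be determined self-consistently. A simple fixed-point iteration (our illustrative Python sketch, in units of $T$; the input $m_m = 0.457\, g^2 T$ is the fit quoted below) reads:

```python
import math

def me_nlo(g, mm_coeff=0.457, tol=1e-12, itmax=200):
    """Iterate m_e^2 = m_{e,0}^2 (1 + sqrt(6)/(2 pi) g (m_e/m_{e,0})
    [ln(2 m_e/m_m) - 1/2]) for SU(2), all masses in units of T."""
    me0 = math.sqrt(2.0/3.0)*g          # m_{e,0}/T for N_c = 2
    mm = mm_coeff*g*g                   # m_m/T from the fit quoted below
    me = me0
    for _ in range(itmax):
        rhs = me0**2*(1.0 + math.sqrt(6.0)/(2.0*math.pi)*g*(me/me0)
                      *(math.log(2.0*me/mm) - 0.5))
        new = math.sqrt(rhs)
        if abs(new - me) < tol:
            return new
        me = new
    return me
```

For couplings of order one the iteration converges to a value roughly 20\% above $m_{e,0}$, of the size of the shift discussed below.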
The above discussion shows that non-perturbative methods are needed to obtain
results not only for the magnetic but also for the electric mass. In the
following we will present results for $m_e$ and $m_m$ that we calculated in
SU(2) lattice gauge theory, using both the Wilson action and a tree-level
Symanzik improved action with a planar 6-link loop. Simulations have been
performed on lattices of sizes
$32^3 \!\times\! 4$ and $32^2 \!\times\! 64 \!\times\! 8$ at
temperatures above the critical temperature of the deconfinement phase
transition from $T \!=\! 2 \, T_c$ up to very high temperatures,
$T \!\simeq\! 10^4 T_c$, in order to get in contact with perturbative
predictions for $m_e$.
\section{Screening Masses from Gluon Correlation Functions}
It was shown in \cite{KoKuRe9091} that the pole mass definition of the
screening masses,
\begin{equation}
m_\mu^2 = \Pi_{\mu \mu} (p_0=0, \vec{p}\,^2 = -m_\mu^2) \quad ,
\label{scrmass}
\end{equation}
is gauge invariant although the gluon polarization tensor $\Pi_{\mu \mu}$
itself is a gauge dependent quantity. These pole masses can be obtained from
the long distance behaviour of momentum dependent gluon correlation functions
in the static sector ($p_0 = 0$),
\begin{equation}
\tilde{G}_\mu(p_\bot,x_3) = \left\langle \Tr \; \tilde{A}_\mu (p_\bot,x_3)
\tilde{A}_\mu^\dagger (p_\bot,0) \right\rangle
\label{gtilde}
\end{equation}
with the momentum dependent gauge fields
\begin{equation}
\tilde{A}_\mu (p_\bot,x_3) \,=\!
\sum_{x_0, x_\bot} e^{i \, x_\bot p_\bot} A_\mu(x_0,x_\bot,x_3) \quad .
\label{atilde}
\end{equation}
We define $p_\bot \!\equiv\! (p_1,p_2), x_\bot \!\equiv\! (x_1,x_2)$.
As $\tilde{G}_\mu$ is gauge dependent one has to work in a fixed gauge, which
is in our case the Landau gauge. Details on the gauge fixing algorithm can be
found in \cite{HeKaRa95}. The relation between $\tilde{G}_\mu$ and $m_\mu$ and
techniques for extracting $m_\mu$ most reliably from a lattice simulation are
discussed in \cite{HeKaRa97}.
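In practice, the correlator ~\eqref{gtilde} is assembled from the gauge-fixed fields configuration by configuration. A minimal NumPy sketch (ours; the array layout and the restriction to a single Lorentz component, stored as $2\times 2$ colour matrices, are assumptions) could look as follows:

```python
import numpy as np

def gluon_correlator(A, k1=0, k2=0):
    """Momentum-projected correlator G~(p_perp, x3) of Eq. (gtilde) for one
    Lorentz component of a gauge-fixed field A, shape (N0, N1, N2, N3, 2, 2)."""
    N0, N1, N2, N3 = A.shape[:4]
    x1, x2 = np.arange(N1), np.arange(N2)
    phase = np.exp(1j*2*np.pi*(k1*x1[:, None]/N1 + k2*x2[None, :]/N2))
    # momentum projection, Eq. (atilde): sum over x0 and x_perp
    At = np.einsum('ab,tabzij->zij', phase, A)
    # Tr A~(x3) A~^dagger(0), averaged over the source position along x3
    G = np.empty(N3)
    for dz in range(N3):
        G[dz] = np.real(np.sum(np.roll(At, -dz, axis=0)*At.conj()))/N3
    return G
```

The ensemble average over gauge configurations is then taken over the returned arrays.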
In Fig.\ \ref{me_gluon}
\psfigure 2.95in 0.0in {me_gluon} {me_gluon.eps} {Electric screening mass in
units of the temperature vs.\ $T/T_c$; see text for details.}
we show the electric screening mass in units of the temperature vs.\ $T/T_c$.
The Wilson action data are symbolized by filled ($N_\tau \!=\! 8$) resp.\ open
squares ($N_\tau \!=\! 4$), the data based on the tree-level Symanzik improved
action with $N_\tau \!=\! 4$ by open circles.
Within statistical errors, all these data are compatible. Especially the fact
that an improvement of the action does not shift the data in any direction is
a first hint that the electric mass indeed is entirely dominated by low
momentum modes. A comparison of the data with the lowest order perturbative
prediction (the dashed line) shows that this is not a good description
although the functional dependence on the temperature does seem to describe
the data well. For $T \ge 9 \, T_c$ we performed a one-parameter fit to our
data with the ansatz $m^2_e(T) = A_{\mbox{\scriptsize fit}} \; g^2(T) \, T^2$,
using the two-loop $\beta$-function for the running coupling. The result
$A_{\mbox{\scriptsize fit}} = 1.69(2)$ (with $\chi^2 / \mbox{dof} = 4.51$),
which is shown as a solid line in Fig.~\ref{me_gluon}, exceeds the perturbative
value $2/3$ by a factor of more than 2.5.
To test the next-to-leading order result (\ref{rebscreen}) we also
calculated the magnetic mass $m_m$. We did this for the Wilson action
with $N_\tau \!=\! 8$. Our results for the ratio $m^2_e/m^2_m$ are shown in
Fig.~\ref{g2}.
\psfigure 2.95in 0.0in {g2} {g2.eps} {Squared ratio of electric and magnetic
screening masses vs.\ $T/T_c$ on a $32^2 \times 64 \times 8$ lattice using
the Wilson action.}
They suggest that the ratio runs with the temperature as expected,
$m^2_e/m^2_m \sim g^{-2}(T)$. A fit over the entire temperature range yields
$m^2_e/m^2_m = 7.43(27) g^{-2}(T)$. The quality of the fit becomes obvious
from the small value of $\chi^2 / \mbox{dof} = 1.39$.
Based on this result we fitted the magnetic mass itself, using the ansatz
$m_m(T) \sim g^2(T) \, T$. In the temperature range above $3 T_c$ we obtain
$m_m(T) = 0.457(6) \; g^2(T) \, T$ with $\chi^2 / \mbox{dof} = 1.50$. This is
in good agreement with our result obtained in \cite{HeKaRa95} for
$T < 20 \, T_c$.
With the numerical result for $m_e/m_m$ we are now able to compute $m_e$ in
next-to-leading order, using Eq.~(\ref{rebscreen}). The result is shown by
the dashed-dotted line in Fig.~\ref{me_gluon}. It lies about 20\% above the
tree-level result. However, it is still too small to describe the data.
Therefore we have performed another fit of the electric mass, using the
ansatz
\begin{eqnarray}
\left( \frac{m_e(T)}{T} \right)^2 \mbox{\hspace*{-1.5ex}}
& = & \mbox{\hspace*{-1.5ex}} \frac{2}{3} \, g^2(T) \,
\left( 1 + \frac{\sqrt{6}}{2 \pi} \, g(T) \cdot \right. \nonumber \\
& & \mbox{\hspace*{-10ex}} \left.
\left[ \log \frac{2 \, m_e}{m_m} - \frac{1}{2} \right] \right)
+ B_{\mbox{\scriptsize fit}} \; g^4(T) \quad .
\label{rebhan_g4_fit}
\end{eqnarray}
As the $g^4$ correction term leads to a temperature dependence which is too
strong within the entire $T$-interval we have restricted the fit to very
high temperatures, $T \ge 250 \, T_c$. The fit gives
$B_{\mbox{\scriptsize fit}} = 0.744(28)$ with $\chi^2 / \mbox{dof} = 4.55$
(dotted line in Fig.~\ref{me_gluon}).
So far we have only discussed the screening masses which had been extracted
from zero momentum correlation functions. In addition, we also measured
gluon correlation functions at finite momenta $p_1 a = 2 \pi \, k_1 / N_1$ with
$k_1 = 1,2$. In order to analyze modifications of the free particle dispersion
relation, which arise from interactions in a thermal medium, we introduce a
parameter $\kappa$ in the dispersion relation:
\begin{equation}
\sinh^2 \frac{a E_e(p_1)}{2} = \sinh^2 \frac{a m_e}{2} +
\kappa \, \sin^2 \frac{a p_1}{2} \quad .
\label{eq:disprel_mod2}
\end{equation}
For $m_e$ we use the results from the $\vec{p}=0$ measurements.
For $T \to \infty$ one expects that the dispersion
relation approaches that of a free particle, i.e.\
$\kappa \to 1$. In the temperature interval analysed by
us, however, we do not observe any statistically
significant increase in $\kappa$. We therefore only quote
a value averaged over the
temperature interval $T\ge 9 \, T_c$. We obtain $\kappa = 0.37(10)$ for
$k_1 = 1$ and $\kappa = 0.65(3)$ for $k_1 = 2$.
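Given the $\vec{p}=0$ mass, Eq.~(\ref{eq:disprel_mod2}) can be inverted directly for $\kappa$ at each momentum. A short sketch, with illustrative lattice numbers rather than our measured energies:

```python
import math

def kappa_from_dispersion(aE, am_e, ap1):
    """Invert the modified lattice dispersion relation for kappa, given the
    measured energy aE at momentum ap1 and the p=0 screening mass am_e
    (all in lattice units)."""
    return (math.sinh(aE / 2.0)**2 - math.sinh(am_e / 2.0)**2) \
        / math.sin(ap1 / 2.0)**2

# illustrative lattice numbers, not our measured values
N1, k1 = 32, 1
ap1 = 2.0 * math.pi * k1 / N1
print(kappa_from_dispersion(0.45, 0.40, ap1))
```

A free lattice particle ($\kappa = 1$) is recovered exactly when $aE$ satisfies the unmodified dispersion relation.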
\section{Polyakov Loop Correlation Functions}
For temperatures above $T_c$ the relation between the electric (or Debye)
screening mass and the colour singlet potential $V_1$ is in lowest order
perturbation theory given by \cite{McSv81}
\begin{equation}
V_1 (R,T) = - g^2 \, \frac{N^2_c - 1}{8 \pi N_c} \cdot
\frac{e^{- m_e(T) R}}{R}
\label{deconfinement_potential}
\end{equation}
which is again valid only at large distances. On the lattice one can extract
$V_1$ by measuring Polyakov loop correlation functions
\begin{equation}
e^{- V_1(R,T)/T} = 2 \, \frac{ \langle \Tr \,( L(\vec{R}) \,
L^{\dagger}(\vec{0})) \rangle }{ \langle | L | \rangle^2} \quad .
\label{v1_correlation}
\end{equation}
In addition, we are using not only $V_1$ to extract $m_e$ but also
$V_{1,\mbox{\scriptsize sum}}$ which is based on Polyakov loops that are
averaged over hyperplanes \cite{HeKaRa97}.
The numerical values for $m_e$ obtained with these methods agree within errors with
the data obtained from the gluon correlation functions at zero momentum. The
values of the fits of $m_e/T$ using the two fit ans\"atze described in the last
section are listed in \cite{HeKaRa97}.
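The two relations used here, the screened Coulomb form of the singlet potential and its extraction from the Polyakov loop correlator, can be summarized in two small helper functions (a sketch; the function names and the normalization conventions of the arguments are ours):

```python
import math

def v1_perturbative(R, g, m_e, Nc=2):
    """Lowest-order singlet potential: the screened Coulomb form
    -g^2 (Nc^2-1)/(8 pi Nc) exp(-m_e R)/R, valid at large distances."""
    return -g * g * (Nc * Nc - 1) / (8.0 * math.pi * Nc) \
        * math.exp(-m_e * R) / R

def v1_from_correlator(corr, L_abs_mean, T):
    """V1(R,T) from a measured Polyakov loop correlator
    <Tr(L(R) L^dagger(0))> and the expectation value <|L|>."""
    return -T * math.log(2.0 * corr / L_abs_mean**2)
```

Fitting $-\log(R\,V_1)$ linearly in $R$ then yields $m_e$.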
To check whether or not an improvement of the action weakens the violation of
the rotational symmetry caused by the lattice discretization we measured
$V_1 / T$ both along an axis, labeled with $(1,0,0)$, and along three
different off-axis directions, $(1,1,0)$, $(1,1,1)$, and $(2,1,0)$.
For each action we computed the fit of the $(1,0,0)$ data from which we
calculated the $\chi^2$ deviation of the off-axis data. For all 12
temperatures we observed much larger deviations in the case of the Wilson
action than for the tree-level Symanzik improved action. In addition, the
parameters of the fits of the $(1,0,0)$ data themselves have larger errors and,
furthermore, the fits have larger $\chi^2$ and lower goodness for the
Wilson action than for the improved action.
\section{Summary}
We have studied electric and magnetic screening masses obtained from Polyakov
loop and gluon correlation functions in the high temperature deconfined phase
of SU(2) lattice gauge theory, using both the standard Wilson action and a
tree-level Symanzik improved action.
For $m_m$ we find the expected ${\cal O}(g^2T)$ behaviour,
$m_m(T) = 0.457(6) \, g^2(T) \, T$. We also find
$(m_e / m_m)^2 \!\sim\! g^{-2}$,
which suggests that the temperature dependence of $m_e$ is well described by
$m_e/T \!\sim\! g(T)$, as expected from lowest order PT. On a
quantitative level we do, however, find large deviations. Our result,
$m_e(T) = \sqrt{1.69(2)} \, g(T) \, T$, deviates strongly from the
lowest order PT prediction, and the discrepancy is only partially cured by the
next-to-leading order term.
The improvement of the action does not show, within statistical errors, any
significant modification in the behaviour of the screening masses, although we
find that the violation of the rotational symmetry of the singlet potential,
which also is used to extract $m_e$, is weakened.
\section{Experimental Procedure}
\subsection{Principle of measurement and spectator tagging}
The reaction under study is $np\to pp\pim$. Due to the lack of a neutron beam,
deuterons were used instead and the data were analyzed along the lines of the
spectator model. This method has been described in detail in
\cite{spect}, hence we will give only some short remarks here. The basic idea
of the model is that 1) the proton in the deuteron can be regarded as an
unaffected spectator staying on-shell throughout the reaction and 2) the matrix
element for quasi-free pion production from a bound neutron is identical to that
for free pion production from an unbound neutron. Crucial to the method is the
detection and identification of the spectator proton $p_s$, since the information
gathered from this particle gives a direct measure of the Fermi momentum carried
by the off-shell neutron within the deuteron at the time of the $np$ reaction.
The Fermi momentum distribution as calculated from any of the existing $NN$
potentials has a maximum near 40~MeV/c and a tail extending towards several
hundred MeV/c, hence a wide range in excess energy $Q$ can be covered with a
monoenergetic deuteron beam. The main result of our recent study was that the
two assumptions quoted above can be regarded as being fulfilled for Fermi
momenta below 150\,MeV/c.
The experiment was carried out with the time-of-flight spectrometer COSY-TOF set
up on an external beamline of the proton synchrotron COSY \cite{rudi} at the
Forschungszentrum J\"ulich. A deuteron beam of momentum $p_d$\,=\,1.85\,GeV/c
was focussed onto a liquid hydrogen target, charged particles emerging from the
reaction zone were detected in a multi-layer scintillator hodoscope with its
main components Quirl, Ring and Barrel. Details of the various subdetectors,
their performance as well as the different steps necessary for calibrating the
whole system have been described in a series of papers, see \cite{spect}
and references therein. Here only a short overview will be given. By measuring
each particle's flight time and direction, their velocity vectors given as
$\vvec = (\beta, \theta, \phi)$ could be determined with a time-of-flight
resolution of better than 300\,ps $(\sigma )$ and an angular track resolution of
better than 0.3$^\circ (\sigma )$. The momentum 4-vectors $\mathbb{P}$ of all
detected particles were then obtained from the measured observables by applying
additional mass hypotheses. Carrying out various tests as, $e.g.$, momentum
conservation, missing mass and invariant mass analyses as well as comparisons
with results obtained from our Monte Carlo simulations helped to find the correct
assignment for each event with a high degree of probability as quantified below.
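The conversion from a measured velocity vector to a momentum 4-vector under a mass hypothesis can be sketched as follows (a minimal illustration, not the analysis code):

```python
import math

def four_vector(beta, theta_deg, phi_deg, mass):
    """Momentum 4-vector (E, px, py, pz) from a measured velocity vector
    (beta, theta, phi) under a given mass hypothesis; angles in degrees,
    E and momenta in the units of the mass (here MeV)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    p = gamma * beta * mass          # |p| = gamma * beta * m
    th, ph = math.radians(theta_deg), math.radians(phi_deg)
    return (gamma * mass,
            p * math.sin(th) * math.cos(ph),
            p * math.sin(th) * math.sin(ph),
            p * math.cos(th))
```

By construction the returned 4-vector is on the mass shell of the assumed particle, so the discriminating power comes entirely from the subsequent kinematical tests.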
In the reaction $dp\to pp\pim p_s$ four charged particles are emitted which in
most cases all are detected in the time-of-flight spectrometer. Thus the main
trigger condition was such that a total of four hits was required in any of the
stop scintillator hodoscopes Quirl, Ring and Barrel and at least one hit in
the twelve-fold segmented start scintillator. Due to the fact that pions can
also be emitted into the backward region where no detector was installed we set
up a second trigger condition with only three required hits at a reduction
factor of 10. Since for these events the unobserved pion can be reconstructed
through a missing mass analysis, the full kinematically allowed phase space was
covered.
With a beam momentum below the threshold for $2\pi$-production, any 4-hit
event apart from accidentals could only result from the reaction under study.
As the first step in our analysis we checked on the four possible hypotheses,
$i.e.$, the pion being particle 1, 2, 3, or 4 and calculated for each case the
sums of longitudinal and transversal momentum components $\sum p_L$ and
$\sum p_T$. As the correct assignment we took the one where these values were
closest to $\sum p_L$\,=\,p$_d$ and $\sum p_T$\,=\,0. As the spectator proton
we then chose the one which was detected close to the beam axis with a
momentum near p$_d$/2. The spread in Fermi momentum caused the momentum of the
spectator to vary considerably; higher $Q$ values correspond to lower
momenta and vice versa. This is illustrated in Fig.~2 where for two narrow
ranges in $Q$ ($\lbrack$18.0-34.0$\rbrack$ and $\lbrack$61.0-74.5$\rbrack$\,MeV)
the momentum distribution of the spectator proton and the summed distribution of
both reaction protons is plotted. The spectator distribution given by the solid
histogram at a mean $<Q>$\,=\,26 MeV sticks out as a sharp line well separated
from the much broader momentum distribution of the other two protons, whereas at
$<Q>$\,=\,68 MeV the spectator line is still rather narrow, but starts to
overlap with the one of the reaction protons. Since a unique identification of
the spectator is essential for the analysis we found it necessary to also limit
the range in $Q$ due to this effect and only considered events where the excess
energy was below 90\,MeV which roughly coincides with our proposed limit for
the Fermi momentum \cite{spect}. In our finally accepted data sample of
2.2$\cdot$10$^5$ events we obtained a longitudinal momentum distribution
$\sum p_L$ which had its center at $p_d$ with a width of 39\,MeV/c ($\sigma$). In
case of the transversal distribution the spread was even smaller, namely
13\,MeV/c. Alternatively we also used the missing mass method for identifying
the various ejectiles, see ref.\cite{spect}, and found full agreement.
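The selection of the pion hypothesis by momentum balance can be sketched as follows; the scalar deviation measure combining $\sum p_T$ and $|\sum p_L - p_d|$ is a simplified stand-in for the actual selection criteria:

```python
import math

M_P, M_PI = 938.272, 139.570  # proton and pion masses, MeV/c^2

def best_pion_hypothesis(tracks, p_d):
    """Try each measured track (beta, theta, phi) as the pi-, the others
    as protons, and keep the hypothesis whose momentum sums come closest
    to sum p_L = p_d and sum p_T = 0."""
    best, best_dev = None, float("inf")
    for i in range(len(tracks)):
        px = py = pz = 0.0
        for j, (beta, th, ph) in enumerate(tracks):
            m = M_PI if j == i else M_P
            gamma = 1.0 / math.sqrt(1.0 - beta * beta)
            p = gamma * beta * m
            px += p * math.sin(th) * math.cos(ph)
            py += p * math.sin(th) * math.sin(ph)
            pz += p * math.cos(th)
        dev = math.hypot(px, py) + abs(pz - p_d)
        if dev < best_dev:
            best, best_dev = i, dev
    return best
```

Because a wrong mass hypothesis rescales $|\vec p| = \gamma\beta m$ for the affected tracks, the momentum sums of wrong assignments miss the beam momentum by a large margin.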
\begin{figure}[htb]
\vspace*{-1.0cm}
\hspace{5cm}
\epsfig{file=piminus_fig2.eps,scale=0.55}
\vspace*{-3.5cm}
\caption{\it Experimentally deduced momentum distributions for the spectator
protons (solid histograms) and the reaction protons (dashed histograms) for two
ranges in $Q$.}
\end{figure}
Reconstructing those events where the pions were emitted into the backward region
by missing mass techniques in principle caused no problems. In several cases,
however, we found events where three protons were detected, as for a true $dp\to
pp\pim p_s$ reaction, except that here the third proton was produced in a chain of
two consecutive $np$ elastic scattering processes. From the first quasielastic
scattering reaction one gets a scattered proton, a forward flying spectator and
a neutron. The scattered neutron in traversing one of the start detector
elements hits another proton which reaches the detector whereas the slowed-down
neutron remains unobserved. In simulating this process we found that by suitable
selections in missing mass and angles these events could be eliminated. Thus an
additional set of roughly 0.6$\cdot 10^5$ reconstructed $pp\pim p_s$ events was
obtained.
As has been outlined in \cite{spect}, the timing signals deduced from both ends
of the Barrel scintillators not only yield information on the flight times, but
also on the hit position of any track passing through the Barrel. Hence an
important step in the detector calibration is the fixing of the absolute time
offset which was carried out through a comparison with the results obtained for
$dp$ elastic scattering. This binary reaction with its unique kinematics and
sizeable cross section was repeatedly measured in separate runs with an adjusted
trigger condition. As a check of the reliability of the event reconstruction we
show in Fig.~3 the deuteron angular distribution (given as histograms) in
comparison with older data obtained at a somewhat higher beam momentum (solid
dots) \cite{boot}. Instead of the deuteron beam momentum of 1.85\,GeV/c we quote
a value of 0.92\,GeV/c ($T_{kin}$\,=\,376\,MeV) which corresponds to the
inverse reaction for a proton beam hitting a deuterium target at the same
$\sqrt{s}$. The absence of data in the forward region is due to the fact that
the corresponding protons were emitted towards angles $>60^\circ$ which is out
of the acceptance of the spectrometer. The overall agreement is very good, the
apparent mismatch in the peaking of the forward maximum results from the
difference in beam momentum $p_p$. To eliminate this dependency on $p_p$ we
\begin{figure}[htb]
\vspace*{-1.0cm}
\begin{center}
\epsfig{file=piminus_fig3.eps,scale=0.6}
\end{center}
\vspace*{-1.4cm}
\caption{\it Angular distributions for the elastic $pd$ scattering reaction
plotted as a function of the CM angle cos $\theta^{*}_d$ (top) and momentum
transfer t (bottom) in comparison with data from ref.\cite{boot}. The dashed
vertical lines denote the acceptance limits of our detector.}
\end{figure}
plotted the same data (for cos $\theta^*_d > -0.35$) as a function of the
Mandelstam variable t (Fig.~3, lower frame) and found a very satisfying agreement.
\subsection{Monte Carlo simulation}
The analysis of our experimental data samples was accompanied by extensive Monte
Carlo simulations. In order to allow each simulated quasi-free $np\to
pp\pim$-event to have different initial kinematical parameters the program
package was modified in a way as was described in detail in \cite{spect}, hence
we will give only a short outline of the main ideas. Using the CERNLIB event
generator GENBOD \cite{gb} one generates $N$-body events for a given reaction
specified by $N$, type and mass of the particles involved and the total CM energy
$\sqrt{s}$. The code returns momentum 4-vectors for each ejectile in the overall
center-of-mass system and weight factors $w_e$ based on the phase space density
of the reaction. In the present case the basic reaction to be simulated is $np
\to pp\pim$. For each event randomly chosen values for cos $\theta^{*}$,
$\phi^{*}$ and momentum $|\vec{p}^{*}|$/(MeV$\cdot$c$^{-1})$ were picked, the
two former ones following uniform distributions, whereas the momentum was folded
with the above mentioned Fermi distribution. We identify the three-component
vector {$|\vec{p}^{*}|$, cos $\theta^{*}, \phi^{*}$} as well as the one pointing
into the opposite direction with those of an $np$-pair within the deuteron in
its CM system. Transformation into the laboratory system then allows one to
deduce the corresponding vectors for spectator and projectile particle within
a fast moving deuteron of momentum p$_d$\,=\,1.85\,GeV/c. The fact that in the
laboratory system the flight direction of the projectile neutron deviates
by a small angle from that of the beam deuteron is accounted for by a
suitably chosen rotation such that the neutron's flight direction serves
as the actual beam direction. After having fixed event-by-event the
momentum vector for the ``beam neutron'' it is straightforward to perform
the simulation for $np \to pp \pim$.
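The event-by-event construction of the neutron beam vector can be sketched as below. As a stand-in for the PARIS wave function we use a simple Hulth\'en-type momentum density with illustrative parameters, and the small rotation onto the neutron's flight direction is omitted:

```python
import math, random

M_D, M_N = 1875.613, 939.565   # deuteron and neutron masses, MeV/c^2
P_D = 1850.0                   # deuteron beam momentum, MeV/c

def sample_fermi_momentum():
    """Rejection-sample |p*| in MeV/c from a Hulthen-type momentum density
    p^2 psi(p)^2 (illustrative parameters; the analysis uses the PARIS
    wave function)."""
    a, b = 45.7, 260.0
    while True:
        p = random.uniform(0.0, 500.0)
        w = (p / (p * p + a * a) - p / (p * p + b * b))**2
        if random.uniform(0.0, 1.5e-4) < w:   # 1.5e-4 bounds max(w)
            return p

def neutron_beam_vector():
    """Lab momentum of the 'beam neutron': isotropic Fermi motion in the
    deuteron rest frame, boosted along z with p_d = 1.85 GeV/c.  The
    neutron is put on-shell in this sketch."""
    p = sample_fermi_momentum()
    cost = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    sint = math.sqrt(1.0 - cost * cost)
    px, py, pz = p * sint * math.cos(phi), p * sint * math.sin(phi), p * cost
    e = math.sqrt(p * p + M_N * M_N)
    e_d = math.sqrt(P_D * P_D + M_D * M_D)
    beta, gamma = P_D / e_d, e_d / M_D
    return (px, py, gamma * (pz + beta * e))
```

On average the neutron carries roughly half the deuteron momentum, with the Fermi motion providing the spread that translates into the wide coverage in $Q$.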
By using approximately 1 million Monte Carlo events uniformly distributed across
the available phase-space we could determine the energy dependent acceptance of
our detector and the reconstruction efficiency as a function of excess energy
$Q$. The main limitations in acceptance stemmed from the maximum in detector
angle $\theta_{max}\,=\,60^{\circ}$ and from the charged particles' energy loss
in the various detector layers resulting in a low $\beta$-threshold of $\beta
\approx$\,0.5 for $\pim$ mesons and $\beta \approx$\,0.35 for protons. In Fig.~4
we show the resulting acceptance curves for the relative proton momentum angle
cos $\theta_P^*$ (left) and the proton-proton invariant mass
\begin{figure}[!htb]
\vspace*{-0.6cm}
\begin{center}
\epsfig{file=piminus_fig4.eps,scale=0.65}
\end{center}
\vspace*{-6.5cm}
\caption{\it Simulated acceptance curves as obtained for the relative proton
momentum angle \mbox{cos $\theta^*_P$} (left) and the proton-proton invariant
mass $M_{pp}$ (right) for different values of excess energy $Q$. The dashed
vertical lines in the plot of the right panel denote the kinematical limits.}
\end{figure}
\newpage
$M_{pp}$ (right) for selected values of $Q$. The acceptance goes to zero
near $|$cos $\theta_P^*|\,=\,1$. This comes as a result of the way the relative
proton momentum vector $\vec{P^*}$ is constructed (see also Fig.~1). When the
direction of $\vec{P^*}$ approaches the beam direction, one of the protons in
the CM system moves backwards, hence has minimum energy and drops below the
detection threshold. Similar calculations have been performed for all other
observables.
\section{Results and discussion}
Cross sections for various emission angles and invariant masses of each
two-particle subsystem as well as two-dimensional Dalitz plots were extracted
from the data. In order to derive absolute cross sections one must know the
integrated luminosity. Defined as $\cal{L}$ =$ \int n_b \cdot n_t\,dt$ with
$n_b (n_t)$ denoting the number of beam and target particles, respectively, one
finds the cross section from the relation
\vspace{-8mm}
\begin{center}
\begin{equation}
\sigma = n/(\mathcal{L}\cdot f \cdot \epsilon),
\end{equation}
\end{center}
where $n$ is the number of observed events, $f$ the deadtime correction factor
and $\epsilon$ gives the geometrical and reconstruction efficiency. This simple
relation, however, has to be modified in case of a quasifree reaction. The Fermi
motion of the neutrons within the deuteron will lead to a wide span in excess
energy $Q$ such that the number of beam particles initiating a $pn$ reaction at
a given $Q$ will vary. In the present case of a close-to-threshold measurement,
one furthermore will observe a strong variation in $\sigma$. In order to extract
the energy dependence of the cross section and to compare it with the one of the
free reaction it is necessary to unfold the effect of the Fermi motion from the
data. By dividing the range in $Q$ into small bins $<Q>_i$, such that per bin the
variation in $\sigma$ is small and can be approximated by a constant $\sigbar_i$,
the number of produced events $N_i$ is \cite{stina}
\vspace{-8mm}
\begin{center}
\begin{equation}
N_i = \sigbar_i \mathcal{L}\int_{<Q>_i} |\phi(p_b)| d^3\mathbf{p}_b,
\end{equation}
\end{center}
where the integral is taken over all neutron beam momenta $\mathbf{p}_b$
contributing to $<Q>_i$ and $\phi(p_b)$ is the deuteron wave function as
given by the PARIS potential \cite{paris}. Here $\cal{L}$ is again the
overall luminosity, its $Q$-dependence is accounted for by the integral.
Correspondingly the number of observed events is given by
\vspace{-8mm}
\begin{center}
\begin{equation}
n_i = N_i\cdot f \cdot \epsilon_i
\end{equation}
\end{center}
The evaluation of the integral is performed by means of Monte Carlo simulations.
Denoting the total number of generated Monte Carlo events by $N^{MC}$ and the one
generated for the bin $<Q>_i$ by $N_i^{MC}$ the integral is given by the ratio
\vspace{-8mm}
\begin{center}
\begin{equation}
\int_{<Q>_i} |\phi(p_b)| d^3\mathbf{p}_b = \frac{N_i^{MC}}{N^{MC}}.
\end{equation}
\end{center}
Finally, by using eqs.~$\lbrack$3-5$\rbrack$ one finds for the cross section
\vspace{-8mm}
\begin{center}
\begin{equation}
\bar{\sigma _i} = \frac{1}{\mathcal{L}\cdot f \cdot \epsilon_i} \cdot
\frac{n_i}{N_i^{MC}/N^{MC}}.
\end{equation}
\end{center}
Defining $\tilde n_i$\,=\,$n_i/\epsilon_i$ as the number of observed and acceptance
corrected events the cross section is essentially given as $\bar{\sigma _i}
\propto \tilde n_i/N_i^{MC}$ since $\cal{L}$, $f$ and $N^{MC}$ are constants.
This is demonstrated in Fig.~5 where we show the distribution of observed events
in the top frame as the solid histogram; it extends from threshold up to 90\,MeV.
Also shown in this frame is the corresponding distribution as obtained for our
Monte Carlo events. Calculated for a deuteron beam momentum of
$p_d$\,=\,1.85\,GeV/c it has its largest $Q$ values also near 90\,MeV, but on
the low side starts with a sizeable yield already at threshold. Its maximum is
shifted to lower $Q$ values towards the peak of the deuteron wavefunction.
When extracting the ratio of the experimentally deduced distribution and the
Monte Carlo data, the histogram as shown in the bottom frame is obtained which
(note the logarithmic scale) rises by more than two orders of magnitude. Also
shown as a dashed curve are the total cross section data obtained with a free
neutron beam at PSI \cite{daum1} and parameterized as a $3^{rd}$ order polynomial
in $Q$. When applying a suitably chosen normalization factor the present data
are in good agreement with these absolute values with only some minor deviations
at the lower and upper ends of the covered range. Henceforth this one
normalization factor will be used in all of our further presentations and
discussions of differential cross sections.
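Equation (5) amounts to a one-line correction per bin; a sketch with made-up bin numbers:

```python
def unfolded_cross_section(n_i, eps_i, n_i_mc, n_mc, lum, f):
    """Cross section in bin <Q>_i, Eq. (5): observed counts divided by
    luminosity, deadtime factor, efficiency, and the Monte Carlo estimate
    N_i^MC / N^MC of the Fermi-weighted beam fraction."""
    return n_i / (lum * f * eps_i) / (n_i_mc / n_mc)

# made-up bin numbers for illustration
print(unfolded_cross_section(1000.0, 0.25, 2.0e4, 1.0e6, 40.0, 0.5))
```

Since $\cal{L}$, $f$ and $N^{MC}$ are common to all bins, only the ratio $\tilde n_i/N_i^{MC}$ carries the $Q$ dependence, which is why a single normalization factor suffices.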
\begin{figure}[htb]
\vspace*{-1.2cm}
\hspace{5cm}
\epsfig{file=piminus_fig5.eps,scale=0.75}
\vspace*{-5.0cm}
\caption{\it Top: Simulated (dashed, MC) and measured (solid histogram, exp)
distributions of $pp\pim$ events as a function of Q. Bottom: Ratio of the two
distributions exp/MC (solid histogram) normalized to the cross section data
measured at PSI \cite{daum1} (dashed curve).}
\end{figure}
No attempt was made to derive in an independent way absolute cross sections from
the present experiment. The natural choice for the determination of the
luminosity $\cal{L}$ would have been the $pd$ elastic scattering reaction.
Although it was quite successfully used for calibration procedures we did not
consider it suited for determining the size of $\cal{L}$. Firstly, the
available cross section data are still scarce. Apart from the already
mentioned experiment by Booth et al.~\cite{boot} at $p_p$\,=\,0.99\,GeV/c which
corresponds to a bombarding energy of 425\,MeV we only found one more set of
published data by Alder et al.~\cite{ald} at comparable energies. These authors
present data at proton bombarding energies of 316 and 364 MeV covering far
backward angles cos $\theta^* <-0.6$ and at 470 and 590 MeV at angles
cos $\theta^* <0$. Due to this small range in cos $\theta^*$ we consider these
data unfit for a reliable interpolation. Recent data by G\"ulmez et al.
\cite{gul} were taken at much higher energies of 641 and 793\,MeV, those by
Rohdje\ss{} et al.~\cite{rohd} at energies up to 300\,MeV. Secondly we found it
difficult to estimate the error in $\sigma_{pd}$ when extracting the $dp$
elastic events from the underlying background which was dominated by the much
stronger $pp$ quasielastic scattering events. Finally, the uncertainties due to
effects like shadowing and rescattering, which tend to reduce the cross sections
of any quasifree reaction by about $8\%$ \cite{chia} and thus add to the size of
the systematic error, would be of minor consequence only if a comparison
with another quasifree reaction were carried out.
An order of magnitude estimate of the cross section could nevertheless be made.
The integrated luminosity was found from the known target thickness (4~mm
liquid hydrogen corresponding to $1.8\cdot 10^{22}$/cm$^2$ target particles), the
average beam intensity of $7\cdot 10^6$ deuterons/s, and the total running time
to be of order 40\,nb$^{-1}$. Using eq.~2 with $f\approx 0.5$ and $\epsilon
\approx 0.25$ one finds a mean cross section near 60\,$\mu$b in good agreement
with the PSI data \cite{daum1}.
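The arithmetic of this estimate, taking the combined 4-hit and reconstructed 3-hit samples for $n$ (our reading of the numbers quoted above), is:

```python
# Order-of-magnitude cross-section estimate with the numbers quoted above
n_events = 2.2e5 + 0.6e5     # 4-hit sample plus reconstructed 3-hit sample
lum = 40.0                   # integrated luminosity in nb^-1
f, eps = 0.5, 0.25           # deadtime factor and overall efficiency
sigma_microbarn = n_events / (lum * f * eps) / 1000.0
print(sigma_microbarn)
```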
\subsection{Angular distributions}
Acceptance corrected angular distributions are shown in Figs.~6 and 7 together
with fits in terms of Legendre polynomials. The excess energy range 1.0-88.0
MeV was cut into six bins, namely $\lbrack$1.0-18.0$\rbrack$, $\lbrack$18.0-34.0
$\rbrack$, $\lbrack$34.0-47.5$\rbrack$, $\lbrack$47.5-61.0$\rbrack$, $\lbrack$
61.0-74.5$\rbrack$, and $\lbrack$74.5-88.0$\rbrack$ MeV; the indicated excess
energies $<Q>$ denote the center values of these bins. Error bars when given
denote statistical errors only. It should be kept in mind that, although the
cross
\begin{figure}[!htb]
\vspace*{-0.5cm}
\begin{center}
\epsfig{file=piminus_fig6.eps,scale=0.60}
\end{center}
\vspace*{-0.8cm}
\caption{\it Angular distributions of the relative proton momentum for selected
excess energies $<$Q$>$ together with results of Legendre polynomial fits.}
\end{figure}
section rises monotonically with $Q$, the observed counting rate does
not. Due to the non-uniform Fermi distribution which governs the available excess
energies, the highest rates are found near $Q$\,=\,50\,MeV and consequently one
also observes the lowest statistical errors there. As already outlined above
(see Fig.~1) the angular distributions of the relative proton momentum by
construction are symmetric with respect to cos $\theta_P^*$\,=\,0. They are
plotted as a function of cos$^2\theta_P^*$ and were fitted with even Legendre
polynomials $W(\cos\theta) \propto 1 + \sum_{\nu}^{}a_{2\nu}\cdot
P_{2\nu}(\cos\theta)$ up to $\nu\,=\,2$. The extracted expansion coefficients
are given in Table~1; up to $<Q>$\,=\,68\,MeV the $P_4$-term was neglected.
In addition we give the numerical values for $d\sigma /d\Omega^*$ in Table~2
where, as mentioned before, the absolute scale was adjusted to the PSI data
\cite{daum1}.
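The fit function is the truncated even Legendre series; a minimal sketch (evaluated here with the coefficients from Table~1 purely for illustration):

```python
def legendre_even(cos_t, a2, a4=0.0):
    """W(cos t) ~ 1 + a2*P2(cos t) + a4*P4(cos t), the truncated even
    Legendre series used for the relative proton momentum distributions."""
    x2 = cos_t * cos_t
    p2 = 0.5 * (3.0 * x2 - 1.0)
    p4 = 0.125 * (35.0 * x2 * x2 - 30.0 * x2 + 3.0)
    return 1.0 + a2 * p2 + a4 * p4

# shape at <Q> = 82 MeV with the fitted coefficients from Table 1
print(legendre_even(0.0, 1.28, 0.37), legendre_even(1.0, 1.28, 0.37))
```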
\begin{table}[!htb]
\begin{center}
\caption {\it Expansion coefficients of the Legendre polynomial fits to the
angular distributions of the relative proton momentum.}
\vspace{0.5cm}
\begin{tabular}{ccc}
\hline
$<Q>/MeV$ &$a_2$ & $a_4$ \\
\hline
10 & 0.34$\pm$0.08 & - \\
26 & 0.43$\pm$0.07 & - \\
40 & 0.51$\pm$0.06 & - \\
54 & 0.44$\pm$0.05 & - \\
68 & 0.54$\pm$0.05 & - \\
82 & 1.28$\pm$0.13 & 0.37$\pm$0.13 \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.8cm}
\begin{table}[!htb]
\begin{center}
\caption {\it Differential cross sections in $\mu b/sr$ for the angular
distributions of the relative proton momentum at six excess energies Q.}
\vspace{0.5cm}
\begin{tabular}{ccccccc}
\hline
cos$^2\theta^*_P$ & 10~MeV & 26~MeV & 40~MeV & 54 MeV & 68~MeV & 82~MeV \\
\hline
0.002 & 0.124$\pm$0.022 & 0.94$\pm$0.14 & 2.04$\pm$0.16 & 4.42$\pm$0.34 & 7.96$\pm$0.61 & 8.58$\pm$1.31 \\
0.014 & 0.147$\pm$0.023 & 0.87$\pm$0.13 & 2.01$\pm$0.15 & 4.50$\pm$0.36 & 7.56$\pm$0.62 & 8.78$\pm$1.43 \\
0.040 & 0.142$\pm$0.024 & 0.89$\pm$0.15 & 2.07$\pm$0.16 & 4.45$\pm$0.36 & 7.69$\pm$0.64 & 8.93$\pm$1.52 \\
0.078 & 0.134$\pm$0.024 & 0.92$\pm$0.16 & 2.16$\pm$0.17 & 4.67$\pm$0.38 & 7.81$\pm$0.65 & 9.29$\pm$1.48 \\
0.130 & 0.150$\pm$0.024 & 0.96$\pm$0.16 & 2.26$\pm$0.20 & 4.87$\pm$0.38 & 8.30$\pm$0.71 & 9.70$\pm$1.53 \\
0.194 & 0.151$\pm$0.028 & 0.98$\pm$0.16 & 2.40$\pm$0.25 & 5.10$\pm$0.46 & 8.85$\pm$0.73 &11.07$\pm$1.57 \\
0.270 & 0.154$\pm$0.033 & 1.08$\pm$0.21 & 2.60$\pm$0.33 & 5.45$\pm$0.52 & 9.65$\pm$0.92 &12.24$\pm$1.60 \\
0.360 & 0.170$\pm$0.044 & 1.18$\pm$0.34 & 2.74$\pm$0.38 & 6.04$\pm$0.54 &10.78$\pm$1.13 &15.21$\pm$2.13 \\
0.462 & 0.120$\pm$0.047 & 1.02$\pm$0.46 & 3.20$\pm$0.52 & 5.42$\pm$0.80 &11.33$\pm$1.31 &18.98$\pm$2.76 \\
0.578 & 0.222$\pm$0.071 & 0.90$\pm$0.61 & 3.85$\pm$0.63 & 6.15$\pm$1.11 &10.98$\pm$1.53 &21.94$\pm$3.12 \\
\hline
\end{tabular}
\end{center}
\end{table}
As one can see from inspection of Fig.~6 the scatter of the data points as well
as the size of the error bars increases drastically for values cos$^2\theta_P^*
\geq 0.4$. This is the result of the very low acceptance observed in this angular
region (see also the discussion in context with Fig.~4). Accordingly the
$\chi^2$ minimisation was only performed on the first eight data points. In the
literature we found one measurement of these proton distributions, which was
extracted from roughly 4000 bubble chamber frames \cite{hand}. At an average
value of $Q$ near 54\,MeV the authors report a value of $a_2\,=\,0.276\pm 0.032$
which is to be compared to the present one of $a_2\,=\,0.44\pm 0.05$. We believe
the observed 4$\sigma$ deviation to be due to systematic errors in their method
which were not included in the quoted error. As mentioned above only $Ss, Sp, Ps$
and $Pp$ partial waves should be present in the energy region covered in the
present experiment. From our data, however, it can be seen that in the higher $Q$
range contributions of the $Ds$ wave are present as well. Near $<Q>$\,=\,82\,MeV
a $P_4$(cos $\theta$) term with a sizeable expansion coefficient
$a_4\,=\,0.37\pm0.13$ had to be included in the fit.
The pion angular distributions as deduced for the same intervals in $Q$ are
shown in Fig.~7. In general all are asymmetric and were fitted with sizeable
$a_1$ and $a_2$ coefficients (see Table~3). The one obtained at $<Q>$\,=\,54
MeV is compared with data taken from ref.~\cite{hand} (dotted line) and
ref.~\cite{daum1} (dashed line) and good agreement is observed. The cross
section as given in \cite{hand} exceeds the one measured at PSI by the factor
1.29. The authors of ref.~\cite{daum1} explain this discrepancy with a possible
underestimation of the mean neutron energy in the older experiment. That
measurement had been carried out over a broad neutron energy range and the
mean energy had been deduced from a maximum likelihood fit. In the present
comparison the data of \cite{hand} have been rescaled to match the PSI cross
section. For the sake of completeness we additionally give in Table 4 the
numerical values of the differential $\pim$ cross sections. In passing we would
like to add that the corresponding ones given in ref.~\cite{daum1} (Table 3 and
Fig.~11) are not consistent with the absolute cross section data presented in
their Table 4, but are too low by the factor $2\pi/10$ due to an error in
binning the data \cite{heiko}.
\begin{figure}[!htb]
\vspace*{-1.2cm}
\begin{center}
\epsfig{file=piminus_fig7.eps,scale=0.6}
\end{center}
\vspace*{-0.8cm}
\caption{\it (Color online) Angular distributions of the pions for selected
excess energies $<$Q$>$ together with results of Legendre polynomial fits. At
$<$Q$>$\,=\,54 MeV, the pion angular distributions as found by Daum et al.
\cite{daum1} and Handler \cite{hand} are also shown as a dashed (blue) and a
dotted (red) curve, respectively.}
\end{figure}
\begin{table}[!htb]
\begin{center}
\caption {\it Expansion coefficients of the Legendre polynomial fits to the
$\pi$ angular distributions.}
\vspace{0.5cm}
\begin{tabular}{ccc}
\hline
$<Q>/MeV$ &$a_1$ & $a_2$ \\
\hline
10 & 0.62$\pm$0.21 & 0.35$\pm$0.14 \\
26 & 0.88$\pm$0.21 & 0.63$\pm$0.28 \\
40 & 0.81$\pm$0.11 & 0.59$\pm$0.14 \\
54 & 0.65$\pm$0.07 & 0.63$\pm$0.07 \\
68 & 0.50$\pm$0.08 & 0.58$\pm$0.06 \\
82 & 0.36$\pm$0.07 & 0.40$\pm$0.06 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htb]
\begin{center}
\caption {\it Differential cross sections in $\mu b/sr$ of the pion angular
distributions at six excess energies Q.}
\vspace{0.5cm}
\begin{tabular}{ccccccc}
\hline
cos $\theta^*_{\pi}$ & 10~MeV & 26~MeV & 40~MeV & 54~MeV & 68~MeV & 82~MeV \\
\hline
-0.96 & 0.160$\pm$0.041 & 0.95$\pm$0.15 & 1.85$\pm$0.21 & 5.01$\pm$0.45 & 9.95$\pm$0.61 & 17.1$\pm$2.3 \\
-0.88 & 0.072$\pm$0.039 & 0.73$\pm$0.13 & 1.85$\pm$0.20 & 4.57$\pm$0.42 & 9.01$\pm$0.60 & 17.4$\pm$2.2 \\
-0.80 & 0.113$\pm$0.041 & 0.92$\pm$0.14 & 1.85$\pm$0.20 & 4.14$\pm$0.42 & 9.00$\pm$0.58 & 13.8$\pm$1.9 \\
-0.72 & 0.123$\pm$0.038 & 0.58$\pm$0.13 & 1.58$\pm$0.18 & 3.92$\pm$0.41 & 7.23$\pm$0.55 & 13.6$\pm$1.8 \\
-0.64 & 0.105$\pm$0.042 & 0.32$\pm$0.12 & 1.90$\pm$0.19 & 4.24$\pm$0.37 & 7.82$\pm$0.54 & 11.4$\pm$1.5 \\
-0.56 & 0.138$\pm$0.037 & 0.46$\pm$0.11 & 1.52$\pm$0.18 & 3.71$\pm$0.37 & 6.80$\pm$0.52 & 12.4$\pm$1.5 \\
-0.48 & 0.027$\pm$0.038 & 0.32$\pm$0.11 & 1.20$\pm$0.16 & 3.27$\pm$0.35 & 6.55$\pm$0.53 & 14.3$\pm$1.8 \\
-0.40 & 0.184$\pm$0.036 & 0.57$\pm$0.11 & 1.12$\pm$0.16 & 3.26$\pm$0.34 & 7.65$\pm$0.52 & 14.3$\pm$1.7 \\
-0.32 & 0.065$\pm$0.034 & 0.43$\pm$0.12 & 1.31$\pm$0.15 & 3.05$\pm$0.33 & 7.14$\pm$0.50 & 13.1$\pm$1.9 \\
-0.24 & 0.082$\pm$0.031 & 0.53$\pm$0.13 & 1.47$\pm$0.14 & 3.81$\pm$0.35 & 6.12$\pm$0.50 & 13.3$\pm$2.0 \\
-0.16 & 0.095$\pm$0.032 & 0.49$\pm$0.13 & 1.69$\pm$0.16 & 3.37$\pm$0.36 & 7.82$\pm$0.52 & 14.4$\pm$2.4 \\
-0.08 & 0.133$\pm$0.029 & 0.80$\pm$0.15 & 1.58$\pm$0.17 & 3.86$\pm$0.38 & 7.14$\pm$0.53 & 14.6$\pm$2.4 \\
0.0 & 0.111$\pm$0.028 & 0.70$\pm$0.17 & 1.80$\pm$0.19 & 3.97$\pm$0.40 & 6.97$\pm$0.52 & 14.0$\pm$2.3 \\
0.08 & 0.073$\pm$0.033 & 0.57$\pm$0.16 & 2.07$\pm$0.21 & 3.76$\pm$0.39 & 7.31$\pm$0.58 & 11.7$\pm$2.2 \\
0.16 & 0.218$\pm$0.038 & 0.95$\pm$0.18 & 2.18$\pm$0.23 & 4.68$\pm$0.43 & 8.76$\pm$0.61 & 14.5$\pm$2.5 \\
0.24 & 0.224$\pm$0.042 & 1.23$\pm$0.20 & 2.78$\pm$0.26 & 5.12$\pm$0.48 & 8.16$\pm$0.64 & 13.1$\pm$2.3 \\
0.32 & 0.170$\pm$0.047 & 1.33$\pm$0.21 & 3.10$\pm$0.29 & 5.66$\pm$0.50 & 8.84$\pm$0.66 & 15.8$\pm$2.6 \\
0.40 & 0.201$\pm$0.051 & 1.57$\pm$0.23 & 3.27$\pm$0.29 & 5.44$\pm$0.53 & 10.03$\pm$0.71 & 16.2$\pm$2.9 \\
0.48 & 0.236$\pm$0.052 & 1.72$\pm$0.25 & 3.65$\pm$0.32 & 6.10$\pm$0.58 & 10.55$\pm$0.75 & 20.2$\pm$2.8 \\
0.56 & 0.197$\pm$0.058 & 1.66$\pm$0.25 & 3.95$\pm$0.35 & 7.56$\pm$0.63 & 12.76$\pm$0.91 & 18.1$\pm$3.1 \\
0.64 & 0.231$\pm$0.060 & 1.85$\pm$0.24 & 4.24$\pm$0.34 & 8.27$\pm$0.74 & 14.88$\pm$1.12 & 24.9$\pm$3.1 \\
0.72 & 0.244$\pm$0.061 & 2.03$\pm$0.28 & 4.27$\pm$0.38 & 9.74$\pm$0.88 & 16.58$\pm$1.18 & 21.6$\pm$3.2 \\
0.80 & 0.259$\pm$0.061 & 2.19$\pm$0.29 & 5.36$\pm$0.41 & 10.45$\pm$0.88 & 17.26$\pm$1.22 & 25.5$\pm$3.4 \\
0.88 & 0.245$\pm$0.063 & 1.69$\pm$0.27 & 5.22$\pm$0.43 & 11.10$\pm$0.92 & 17.52$\pm$1.28 & 27.5$\pm$3.3 \\
0.96 & 0.223$\pm$0.064 & 2.40$\pm$0.30 & 5.61$\pm$0.46 & 11.76$\pm$0.95 & 19.98$\pm$1.41 & 26.8$\pm$3.5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Dalitz plots and invariant mass distributions}
Dalitz plots and $M_{pp}$ invariant mass distributions of acceptance corrected
and kinematically fitted $pp\pim$ events are presented in Figs.~8 and 9 together
with statistical errors in the latter figure. In each case four Q bins 2\,MeV wide
were chosen, the center values are indicated in each frame. The kinematical
limits of the Dalitz plots given by the solid lines were calculated for these
values; due to the rapidly growing phase space some data extend over these
border lines. Here the size of the squares is a measure of the count rate. Each
plot is almost uniformly covered with the exception of the area in the upper
left corner where strong FSI effects between the reaction protons were expected.
Also some lowering in yield is observed in the opposite corner which we attribute
to the asymmetries found in the pion angular distributions. No enhancements due
to the $\Delta$ resonance are visible.
\begin{figure}[!htb]
\vspace*{-0.2cm}
\begin{center}
\epsfig{file=piminus_fig8.eps,scale=0.6}
\end{center}
\vspace*{-0.8cm}
\caption{\it Experimentally deduced Dalitz plots for the quasifree reaction
$np\to pp\pim$ at four 2\,MeV wide Q bins. The solid lines denote the
kinematical limits.}
\end{figure}
\begin{figure}[!htb]
\vspace*{-0.2cm}
\begin{center}
\epsfig{file=piminus_fig9.eps,scale=0.6}
\end{center}
\vspace*{-0.8cm}
\caption{\it (Color online) Proton-proton invariant mass distributions obtained
at four Q bins together with data from our Monte Carlo simulation. The dashed
(blue) lines denote the results as found for phase space distributed events, the
solid (red) curves give the ones where FSI effects with standard values for
scattering length and effective range have additionally been included (see text).}
\end{figure}
\newpage
The $d\sigma/dM_{pp}$ invariant mass distributions shown in Fig.~9 were plotted
on a linear $M_{pp}$ scale, also given are the results obtained from our Monte
Carlo simulation (solid and dashed lines). In all cases large deviations are
observed between experimental and simulated data, as long as purely phase
space distributed events were considered (dashed lines). Incorporating FSI
effects into our MC simulations by using the formalism of Watson \cite{wats}
and Migdal \cite{migd}, which was later refined by Morton \cite{mort}, we
obtained the distributions given by the solid lines. For this we calculated
additional weight factors $w_{fsi}$, given in a simplified form as
\vspace{-8mm}
\begin{center}
\begin{equation}
w_{fsi} = 1 + f_{pp} \cdot C^2 \cdot
\lbrack C^4 \cdot T^{CM}_{pp} + \frac{(\hbar c)^2}{m_p c^2} \left(
\frac{m_p c^2} {2(\hbar c)^2}r_0 \cdot T^{CM}_{pp} -\frac{1}{a_0}
\right) ^2 \rbrack ^{-1},
\end{equation}
\end{center}
where $T^{CM}_{pp}$ denotes the $pp$ center of mass kinetic energy $T^{CM}_{pp}
= (M_{pp} -2m_p)\,c^2$ and $C^2$ the Coulomb
penetration factor
\vspace{-10mm}
\begin{center}
\begin{equation}
C^2 = \frac{2\pi \cdot \gamma_p}{e^{2\pi \gamma_p} - 1}
\end{equation}
\end{center}
\vspace{-4mm}
with $\gamma_p = \frac{\alpha \cdot \mu_{pp} \cdot c}{p_{pp}}$. Here $\alpha$
is the fine structure constant, $p_{pp} = \sqrt{2 \mu_{pp} T^{CM}_{pp}}$ and
$\mu_{pp}$ is the reduced mass of the $pp$-system. The strength factor $f_{pp}$
is a measure of the contributing $Ss$ and $Sp$ partial waves and is adjusted for
each $Q$ interval. From literature we took the standard values $a_0$=-7.83 fm
and $r_0$=2.8 fm \cite{fsi} as input parameters for the scattering length and
effective range, respectively, for the two protons in the $^1S_0$ state. The
agreement for the two lowest $Q$ bins, where the relative weight of the
``diproton'' $^1S_0$-state is high, is very good. In the case of the two higher bins
this simple ansatz, however, only succeeds in reproducing the rise at
$M_{pp}\,=\,2m_p$.
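For a numerical feel of the weight factor above, the following illustrative sketch (not part of the original analysis) evaluates $w_{fsi}$ with the standard constants $\hbar c=197.327$\,MeV\,fm, $m_pc^2=938.272$\,MeV, $\alpha=1/137.036$, and the quoted values $a_0=-7.83$\,fm, $r_0=2.8$\,fm; the strength factor \texttt{f\_pp}, which is adjusted per $Q$ bin in the text, is left as a free input.

```python
import math

HBARC = 197.327        # hbar*c in MeV fm
MPC2 = 938.272         # proton rest energy in MeV
ALPHA = 1.0 / 137.036  # fine structure constant

def w_fsi(T_pp, f_pp, a0=-7.83, r0=2.8):
    """FSI weight factor as a function of the pp CM kinetic energy T_pp (MeV).

    f_pp is the per-Q-bin strength factor; a0, r0 are the 1S0 pp scattering
    length and effective range in fm (standard values quoted in the text).
    """
    mu = MPC2 / 2.0                    # reduced mass of the pp system, MeV
    p = math.sqrt(2.0 * mu * T_pp)     # relative momentum, MeV/c
    gamma_p = ALPHA * mu / p           # Coulomb parameter (dimensionless)
    # Coulomb penetration factor C^2 = 2*pi*gamma / (exp(2*pi*gamma) - 1):
    C2 = 2.0 * math.pi * gamma_p / math.expm1(2.0 * math.pi * gamma_p)
    # effective-range expansion term, in fm^-1:
    ere = MPC2 / (2.0 * HBARC**2) * r0 * T_pp - 1.0 / a0
    denom = C2**2 * T_pp + HBARC**2 / MPC2 * ere**2   # MeV
    return 1.0 + f_pp * C2 / denom
```

As expected for a final-state interaction, the enhancement is largest near the $pp$ threshold and dies out with growing $T^{CM}_{pp}$.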
\subsection{The isoscalar cross section}
Using equation (1) the isoscalar cross section $\sigma_{01}$ can be obtained
from the measured cross sections for the $pp\to pp\piz$ and the $np\to pp\pim$
reactions. The isospin I\,=\,0 partial waves of type $Sp$ ($^3S_1\to$$^1S_0p_1$
and $^3D_1\to$$^1S_0p_1$) which are forbidden in the $pp\to pp\piz$ reaction due
to the Pauli principle are generally believed \mbox{$\lbrack$2,6,10,11$\rbrack$}
to dominate $\sigma_{01}$ in the threshold region. In the $np$ reaction they
interfere with the isospin I\,=\,1 wave $^3P_0\to$$^1S_0s_0$ and thus are
responsible for the strong asymmetries in the $\pim$ angular distributions at
very low excess energies. In the literature one finds a vast amount of data for
both reactions and, as has been shown \cite{daum1}, the extracted $\sigma_{01}$
shows the expected $\eta^4$ dependence \cite{rose}, at least for $\eta >0.5$
($Q\,>17$\,MeV). We contend that the deviations quoted for smaller $\eta$
values are the result of wrong cross section data. As can be seen from (1), in
order to obtain a finite $\sigma_{01}$ the $np$ cross section must at least be
half as large as the one for the $pp$ reaction. For $\eta$\,=\,0.34 $\sigma_{np}$
is given as 1.43\,$\mu b$ \cite{daum1}. Recently our group at COSY reported new
data for the $pp\to pp\piz$ reaction which exceeded the published ones from IUCF
\cite{iucf} and CELSIUS \cite{cels} by roughly 50$\%$; the discrepancy could be
shown to originate from an underestimation of the $pp$ final-state interaction
\cite{cosy}. We found a cross section $\sigma_{pp}$\,=\,3.72$\pm 0.3 \mu b$ at
$\eta$\,=\,0.35 which is 2.6 times larger than $\sigma_{np}$ and as such would
leave no room for $\sigma_{01}$. This, however, is in contradiction to the
asymmetries observed for the $\pim$ angular distributions, which are only
possible with interfering $Ss$ and $Sp$ partial waves. In addition to assuming a
wrong cross section measurement one should also consider a wrong beam energy
determination. The uncertainty in neutron energy at the NA2 beam facility at PSI
is given as 3 MeV ($\sigma$) for $E_n$\,=\,287\,MeV \cite{daum1}. An error in
quoted beam energy of this size could possibly explain the observed deviations
in the very close to threshold region where the cross section of almost any
reaction rises dramatically.
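The sign argument above is a one-line computation. The sketch below assumes, as implied by the ``at least half as large'' condition, that Eq.~(1) contains the isospin combination $2\sigma_{np}-\sigma_{pp}$ up to a positive normalization (the exact prefactor of Eq.~(1) is not reproduced here and is irrelevant for the sign).

```python
# Cross sections quoted in the text near eta ~ 0.34-0.35, in micro-barns:
sigma_np = 1.43   # np -> pp pi- at eta = 0.34 (PSI, ref. daum1)
sigma_pp = 3.72   # pp -> pp pi0 at eta = 0.35 (COSY, ref. cosy)

# Isospin combination entering sigma_01 (assumed form, positive prefactor dropped):
combo = 2.0 * sigma_np - sigma_pp
ratio = sigma_pp / sigma_np

print(f"2*sigma_np - sigma_pp = {combo:.2f} micro-barn")  # negative: no room for sigma_01
print(f"sigma_pp / sigma_np   = {ratio:.1f}")             # the factor 2.6 quoted in the text
```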
\subsection{Summary}
With a deuteron beam at 1.85\,GeV/c impinging on a liquid hydrogen target the
quasi-free $np\to pp\pim$ reaction was studied for excess energies $Q$ up to 90
MeV. The data were analyzed in the framework of the spectator model where the
proton is assumed to be an unaffected spectator staying on-shell throughout the
reaction. Tagging the spectator proton in the forward scintillator hodoscope of
our COSY-TOF spectrometer allowed us to determine such parameters as effective mass
and momentum of the off-shell neutron at the time of the reaction. We have
measured angular distributions and invariant mass distributions of the reaction
products and have set up Dalitz plots for several $Q$ bins distributed evenly
over the whole excess energy range. The data were compared to results derived
from Monte Carlo simulations and to data taken from the literature. In general
good agreement was found; the large asymmetries observed previously for the
$\pim$ angular distributions could be confirmed. Final-state interaction effects
between the reaction protons were found even at the highest excess energies.
The $d\sigma/dM_{pp}$ invariant mass distributions at 25 and 40\,MeV which are
governed by the ``diproton'' $^1S_0$-state could be reproduced by the Monte
Carlo simulations when incorporating FSI effects in the formalism of
refs.~\mbox{$\lbrack$23-25$\rbrack$} with standard values for scattering length
and effective range. Sizeable $d$-wave contributions were observed in the angular
distribution of the relative proton momentum at $Q$\,=\,82\,MeV. In view of new
cross section data of the $pp\to pp\piz$ reaction, reported deviations of the
isoscalar cross section $\sigma_{01}$ from an $\eta^4$ dependence \cite{daum1}
were explained as stemming most probably from small errors in beam energy.
\subsection*{Acknowledgements}
The big efforts of the COSY crew in delivering a low-emittance deuteron beam
are gratefully acknowledged. Helpful discussions with C.~Hanhart, H.~Lacker,
P.~Moskal and C.~Wilkin are very much appreciated. Special thanks are due to
G.~Sterzenbach who as head of the workstation group provided continuous help in
case of problems with the system. Financial support was granted by the German
BMBF and by the FFE fund of the Forschungszentrum J\"ulich.
\section{Introduction}
It has been known for some time that singularity-free solutions are possible for the scale factor and thermodynamical quantities describing a 4-dimensional braneworld embedded in a 5-dimensional space with a fluid
analogue depending on the extra spatial coordinate and with a specific equation of
state \cite{add}-\cite{ack21a}. The problem is to find the
most general circumstances that allow such solutions with the properties of
satisfying the energy conditions and localising gravity on the brane.
In a series of works, we have been able to find special families of asymptotic
solutions that satisfy all the above-mentioned properties (cf. \cite{ackr1,ackr2}
and refs. therein). Although this search constitutes a viable approach to
the cosmological constant problem through the mechanism of self-tuning, the
search for the simplest solutions with the desired properties does not
reveal the structure of the whole space of interesting solutions.
In this paper, we consider the general problem of a 4-dimensional braneworld embedded in
a five-dimensional bulk space filled with a linear or nonlinear fluid from a
more qualitative, dynamical viewpoint that provides us with an insight into the global
geometry of the orbits and the dynamics in the phase space of the problem. In addition, a more careful definition of the meaning of singularity-free solutions allows us to look into the problem from a more precise point of view.
The main difference of the present approach with earlier analyses is that
although previously our solutions were obtained as functions of the fifth
coordinate $Y$, in the present work we look for the global behaviour of
solutions as functions of the initial conditions. To achieve this goal, we
introduce new variables and a novel formulation of the basic
brane-bulk equations.
These variables are analogues of the Hubble and the
density parameters $H, \Omega$ of relativistic cosmology, and are given as
functions of a suitable monotone reparametrization of $Y$. Also since the new variables contain the scale factor, its first derivative, as well as the density, they are able to provide a more precise picture of the possible singular solutions. The resulting formulation
transforms the whole setup into a dynamical systems problem that can then be
studied using qualitative methods.
The reduction of the dynamics to the aforementioned form
allows interesting dynamical properties to
be studied here for the first time in a brane-bulk phase space context. These include
the topological nature and bifurcations of equilibria, the phase portraits of the dynamics, the question of existence of closed orbits, as well as the dependence of the Planck-mass integral on initial
conditions.
The plan of this paper is as follows. In the next Section, we rewrite the
problem in terms of new variables and arrive at dynamical
equations describing bulk fluids with an equation of state, and describe general features of the dynamics in Section 3.
In Sections 4 and 5, we analyse the structure of dynamical solutions for
linear and nonlinear equations of state. In Section 6, we discuss the problem of localization of gravity on the brane, and we conclude by summarizing our results in the last Section.
\section{Dimensionless formulation}
In this Section, we rewrite the basic dynamical equations in a new
dimensionless formulation for both the linear and the nonlinear fluid cases.
The five-dimensional Einstein equations on the bulk are given by,
\begin{equation}
G_{AB}=R_{AB}-\frac{1}{2}g_{AB}R=\kappa^{2}_{5}T_{AB},
\end{equation}
and we assume a bulk-filling fluid analogue with an energy-momentum tensor,
\begin{equation}
\label{T old}T_{AB}=(\rho+p)u_{A}u_{B}-p g_{AB},
\end{equation}
where the indices run from 1 to 5, the `pressure' $p$ and the `density' $\rho$ are functions only of the
fifth coordinate $Y$, and the fluid velocity vector field $u_{A}=\partial/\partial Y$
is parallel to the $Y$-dimension. We also choose units such that $\kappa_{5}=1$.
We consider below the evolution of this model for a brane-bulk metric given
by
\begin{equation}
\label{warpmetric}g_{5}=a^{2}(Y)g_{4}+dY^{2},
\end{equation}
where $a(Y)$ is a warp factor with $a(Y)>0$, while the brane metric $g_{4}$ is taken to be the
four-dimensional flat, de Sitter or anti-de Sitter standard metric.
With this setup, the Einstein equations split into the conservation equation,
\begin{equation}
\label{syst2iii}\rho^{\prime}+4(p+\rho)\frac{a^{\prime}}{a}=0,
\end{equation}
the Raychaudhuri equation,
\begin{equation}
\label{syst2ii}\frac{a^{\prime\prime}}{a}=-\frac{1}{6}%
{(2p+\rho)},
\end{equation}
and the Friedmann equation,
\begin{equation}\label{syst2i}
6\frac{a^{\prime2}}{a^{2}}=\rho+\frac{6k}{a^{2}},
\end{equation}
which is a first integral of the other two when $a^{\prime}\neq0$. Here, the
constant $k$ is zero for a flat brane, $k=1$ for a de Sitter brane, and $k=-1$
for an anti-de Sitter brane. For the brane-bulk problem, there is also the junction condition, which in general describes a jump discontinuity in the first derivative of $a(Y)$ and takes the generic form,
\begin{equation} \label{j1}
a'(0^+)-a'(0^-)=f(a(0),\rho(0)),
\end{equation}
where the brane tension $f$ is a continuous, non-vanishing function of the initial values of $a,\rho$ (for examples of this in specific solutions, see Ref. \cite{ackr1,ackr2}).
Inspired by standard cosmology, we introduce the following `observational
parameters' related to the extra `bulk' dimension $Y$: The \emph{Hubble
scalar} $H$, measuring the rate of the expansion (a prime $^{\prime}$ means
$d/dY$),
\begin{equation}
H=\frac{a^{\prime}}{a}, \label{h}%
\end{equation}
the \emph{deceleration parameter} $q$, which measures the possible speeding up
or slowing down of the expansion in the $Y$ dimension,
\begin{equation}
q=-\frac{a^{\prime\prime}a}{a^{\prime2}}, \label{q}%
\end{equation}
and the \emph{density parameter} $\Omega$ describing the bulk matter density
effects,
\begin{equation}
\Omega=\frac{\rho}{3H^{2}}. \label{o}%
\end{equation}
While $\Omega,q$ are dimensionless, the Hubble scalar $H$ has dimensions
$[Y]^{-1}$.
Using these variables and dividing both sides by $3H^2$, the Friedmann equation (\ref{syst2i}) is,
\begin{equation}\label{constr}
2-\Omega=\frac{2k}{H^{2}a^{2}},
\end{equation}
and we conclude that evolution of the 5-dimensional models with
\begin{itemize}
\item $\Omega>2$ corresponds to those having an AdS brane ($k=-1$)
\item $\Omega=2$ corresponds to those having a flat brane ($k=0$)
\item $\Omega<2$ corresponds to those having a dS brane ($k=+1$).
\end{itemize}
(By redefining $\lambda_{5}^{2} =\kappa_{5}^{2} /2$, we would have obtained the usual trichotomy $\Omega\gtreqqless 1$ relations here, but this would have also changed various coefficients in the other field equations, so we prefer to leave it as above.)
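The trichotomy can be checked mechanically from Eq.~(\ref{constr}); the following illustrative sketch (input values are hypothetical, not taken from the text) solves the constraint for $\Omega$ given $H$, $a$ and $k$, and classifies the brane accordingly.

```python
def omega_from_constraint(H, a, k):
    """Solve the constraint 2 - Omega = 2k/(H^2 a^2) for the density parameter."""
    return 2.0 - 2.0 * k / (H**2 * a**2)

def brane_type(omega):
    """Classification following the trichotomy Omega >2, =2, <2."""
    if omega > 2.0:
        return "AdS brane (k = -1)"
    if omega == 2.0:
        return "flat brane (k = 0)"
    return "dS brane (k = +1)"
```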
We set $Y_{0}$ for some
arbitrarily chosen reference value of the bulk variable $Y$, and
$a_{0}=a(Y_{0})$. The evolution will be described not in terms of $Y$ but by a
new dimensionless bulk variable $\tau$ in the place of $Y$, with,
\begin{equation}
\label{dimless1}a=a_{0} e^{\tau}.
\end{equation}
Then we have,
\begin{equation}
a^{\prime}=a\,\frac{d\tau}{dY},
\end{equation}
and hence,
\begin{equation}\label{dimless2}
\frac{dY}{d\tau}=\frac{1}{H}.
\end{equation}
We shall assume that the brane lies at
$\tau=0$, or $a=a_{0}$ (we may assume without loss of generality that $Y_0=0$). In this case, the junction condition (\ref{j1}) expressed in terms of the new variables $H,\Omega$ implies that,
\begin{equation} \label{j2}
H(0^+)-H(0^-)=f(x(0),0),\quad x=H^2\Omega,
\end{equation}
where the brane tension $f$ is a function of the variables $x,\tau$. The variable $x$ is regular from the definition (\ref{o}), and $\tau$ is regular from (\ref{dimless2}) because $H$ has only a finite discontinuity at $0$ as it follows from the junction condition (\ref{j1}).
Having a brane located at the length scale value
$a_{0} $, with $0<a<+\infty$, we consider two intervals, $\tau\in(-\infty,0)$
- the `left-side' evolution, and the `right-side' interval $\tau\in
(0,+\infty)$. The latter is equivalent to the left-side interval under the
transformation $\tau\rightarrow-\tau$, and so without loss of generality we
shall restrict our attention only to the $\tau$-range $(-\infty,0)$. All our
results involving the dimensionless variable $\tau$ can be transferred to the
right-side interval by taking $\tau\rightarrow-\tau$. This will be important later.
We now show that the dynamics of the system (\ref{syst2iii})-(\ref{syst2i})
can be equivalently described by a simpler dynamical system in terms of $H$ and the
dimensionless variables $q,\tau $, and $\Omega$. From the above definitions for $H$ and $q$, we are led to the following
evolution equation for $H$, namely,
\begin{equation}
\frac{dH}{d\tau}=-(1+q)H.
\end{equation}
In the following we shall assume that fluids in the bulk satisfy linear or
nonlinear equations of state. Then the $q$-equation for a linear equation of
state (EoS),
\begin{equation}
p=\gamma\rho, \label{linro}%
\end{equation}
is found to be,
\begin{equation}
q=\left( \frac{1}{2}+\gamma\right) \Omega, \label{qlin}%
\end{equation}
while for the nonlinear equation of state,
\begin{equation}
p=\gamma\rho^{\lambda}, \label{nonlinro}%
\end{equation}
for some parameter $\lambda$, we have,
\begin{equation}
q=\left( \frac{1}{2}+\gamma\rho^{\lambda-1}\right) \Omega. \label{qnonlin}%
\end{equation}
We note here the usual fluid parameter $\gamma$ will be constrained later using the energy conditions. On the other hand, the parameter $\lambda$, the ratio $c_P/c_V$ of the specific heats of the bulk fluid under constant pressure and volume, is commonly taken to satisfy $\lambda>1$ in other contexts, most notable in standard stellar structure theory (cf. e.g., \cite{cha}, chap. IV). Although we generally put no constraint on it, we shall find that in the present problem there is a clear preference for the `polytropic' values $\lambda=1+1/n$, for integer $n$, valid for the entire bulk. Such polytropic changes provide ample differences as compared to the `collisionless' case $n=\infty$, for small values of $n$.
The evolution equation for the Hubble scalar for the linear EoS case
(\ref{linro}), becomes,
\begin{equation}
\frac{dH}{d\tau}=-\left( 1+\left( \gamma+\frac{1}{2}\right) \Omega\right)
H, \label{hlin}%
\end{equation}
whereas the nonlinear equation of state case (\ref{nonlinro}), gives the
Hubble evolution equation in the form,
\begin{equation}
\frac{dH}{d\tau}=-H-\frac{\Omega H}{2}-3^{\lambda-1}\gamma H^{2\lambda
-1}\Omega^{\lambda}. \label{hnonlin}%
\end{equation}
Let us lastly consider the continuity equation (\ref{syst2iii}). This
equation, assuming that $H\neq0$, in the linear-fluid case becomes,
\begin{equation}
\frac{d\Omega}{d\tau}=2(q-2\gamma-1)\Omega. \label{olin}%
\end{equation}
On the other hand, in the nonlinear-fluid case, assuming again that $H\neq0$,
and using Eq. (\ref{hnonlin}), we find the following evolution equation for
$\Omega$, namely,
\begin{equation}
\frac{d\Omega}{d\tau}=-2\Omega+\Omega^{2}+2\gamma3^{\lambda-1}H^{2\lambda
-2}\Omega^{\lambda+1}-4\gamma3^{\lambda-1}H^{2\lambda-2}\Omega^{\lambda}.
\label{ononlin}%
\end{equation}
Summarizing, in our new formulation of the bulk-brane problem, the basic dynamical systems
are given by the Friedman constraint Eq. (\ref{constr}), together with evolution equations in the following forms.
\textbf{Case A: Nonlinear EoS, $p=\gamma\rho^{\lambda}$.} In this case, we
have a 2-dimensional dynamical system, namely,%
\begin{align}
\frac{dH}{d\tau} & =-H-\frac{\Omega H}{2}-3^{\lambda-1}\gamma H^{2\lambda
-1}\Omega^{\lambda},\label{nls1}\\
\frac{d\Omega}{d\tau} & =-2\Omega+\Omega^{2}+2\gamma3^{\lambda-1}%
H^{2\lambda-2}\Omega^{\lambda+1}-4\gamma3^{\lambda-1}H^{2\lambda-2}%
\Omega^{\lambda}. \label{nls2}%
\end{align}
\textbf{Case B: Linear EoS, $p=\gamma\rho$.} This is the special case with
$\lambda=1$. We have the $H$ equation,
\begin{equation}
\frac{dH}{d\tau}=-\left( 1+\left( \gamma+\frac{1}{2}\right) \Omega\right)
H, \label{ls1}%
\end{equation}
and a single, decoupled evolution equation for $\Omega$, namely,
\begin{equation}
\frac{d\Omega}{d\tau}=2\left( \left( \gamma+\frac{1}{2}\right)
\Omega-2\gamma-1\right) \Omega. \label{ls2}%
\end{equation}
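Since Case A admits no closed-form solution in general, a quick way to explore it is direct numerical integration. The sketch below (a fixed-step classical Runge-Kutta integrator written from scratch; step size and initial data in any checks are illustrative) implements the right-hand sides of (\ref{nls1})-(\ref{nls2}); setting $\lambda=1$ recovers Case B.

```python
def rhs(state, gamma, lam):
    """Right-hand sides of the (H, Omega) system for the EoS p = gamma * rho^lam."""
    H, Om = state
    # common factor gamma * 3^(lam-1) * H^(2 lam - 2) * Omega^lam:
    c = gamma * 3.0**(lam - 1.0) * H**(2.0 * lam - 2.0) * Om**lam
    dH = -H - 0.5 * Om * H - c * H
    dOm = -2.0 * Om + Om**2 + 2.0 * c * Om - 4.0 * c
    return (dH, dOm)

def rk4_step(state, dtau, gamma, lam):
    """One classical fourth-order Runge-Kutta step of size dtau."""
    f = lambda s: rhs(s, gamma, lam)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dtau * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dtau * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dtau * k for s, k in zip(state, k3)))
    return tuple(s + dtau * (a + 2.0 * b + 2.0 * c_ + d) / 6.0
                 for s, a, b, c_, d in zip(state, k1, k2, k3, k4))
```

Numerically one can verify, for instance, that orbits started on the invariant line $\Omega=0$ stay on it, in line with the general properties discussed in the next Section.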
\section{General properties}
The 5-dimensional fluid solutions are then given in terms of the $(H,\Omega)$ variables which satisfy these evolution equations together with the Friedman constraint Eq. (\ref{constr}). For their physical interpretation, it is helpful to use a new classification in terms of $H, \Omega$ and $k$. We shall say that the bulk fluid is:
\begin{enumerate}
\item \emph{a dS (resp. an AdS) fluid}, when $k=+1$ (resp. $k=-1$)
\item a \emph{flat fluid}, when $k=0$.
\item \emph{Static}, when $H=0$ (it is necessarily flat in this case).
\item \emph{Expanding (resp. contracting)}, when $H>0\, (<0)$. (This means that the fluid is moving away (resp. towards) the brane for positive $\tau$)
\end{enumerate}
As we have noted after Eq. (\ref{constr}), the cases $\Omega <2, =2, >2$ correspond to a dS, a flat, or an AdS fluid respectively, while in the case $\Omega=0$, the bulk is empty, and the constraint equation (\ref{constr}) necessarily implies $k=+1$ for consistency, hence, $a(Y)=Y+C$ in this case. We shall refer to the $\Omega=0$ case as an \emph{empty dS bulk}.
We shall also invariably refer to any given phase point $(H,\Omega)$ as a `state'; for example, the point $(0,0)$ describes the state of a static, empty dS bulk, while the $(0,2)$ state is a static, flat fluid. Dynamical (non-static) states require $H\neq 0$, and these are described as non-trivial orbits in the $(H,\Omega)$ phase space. Further classification tags for each one of these models appear in the next Sections and depend on the ranges and values of the two fluid parameters $\gamma,\lambda$ that appear in the evolution equations.
Some general remarks about the dynamical system (\ref{nls1})-(\ref{nls2}), and its
special case (\ref{ls1})-(\ref{ls2}) are in order:
\begin{itemize}
\item Since the dynamical systems studied in this paper are two-dimensional (with a constraint), our search is for bifurcations, oscillations, or limit cycles, but no chaotic behaviour, strange attractors, or more complex phenomena can be present here.
\item Equation (\ref{nls2}) implies that the set $\Omega=0$ is invariant under
the flow of the dynamical system, i.e. $\Omega=0$ is a solution of the system.
Since no trajectories of the dynamical system can cross, we conclude that if
initially the state of the system is on the line $\Omega=0$ (that is if we start with an `empty dS bulk'), it will remain on
this line for ever. Therefore, if initially $\Omega$ is positive, it remains
positive for ever. We emphasize that this result holds for all
$\lambda\geq0$.
\item If $\lambda\geq1$, equation (\ref{nls1}) implies that the set $H=0$ is
also invariant under the flow of the dynamical system. Therefore, for
$\lambda\geq1$, assuming that initially $H>0$, then $H\left( \tau\right) >0$
for all $\tau\geq0$. We conclude that any trajectory starting at the first
quadrant $H\geq0,$ $\Omega\geq0,$ cannot cross the axes and therefore, cannot
escape out of this quadrant. For instance, expanding AdS fluids, and expanding empty dS bulks remain always expanding, and static AdS fluids always remain static.
\item For the linear EoS case, it is important that the $\Omega$-equation
(\ref{ls2}) decouples, that is, it does not contain the $H$. This decoupling is due to our choice of new variables (cf. Eq. (\ref{dimless1})). So now the $\Omega$-equation can be
treated separately as a logistic-type equation, and this simplifies the
analysis considerably. In this case, solving the $\Omega$-equation and substituting in the
$H$-equation, we have a full solution of the system. This feature is absent
from the nonlinear EoS fluid equations, which comprise a truly coupled $2D$
system. This is studied more fully below.
\item The necessity of satisfying the energy conditions (cf. \cite{ackr1}) leads in general to restrictions on the $\gamma$ range. For a fluid with a linear EoS the intersection of the requirements that follow from the weak, strong, or null energy conditions lead to the typical range $\gamma\in [-1,1]$. We shall assume this restriction as a minimum requirement for our acceptance of a solution property.
\end{itemize}
Below, with a slight abuse of language, we shall use the term \emph{linear (nonlinear) fluid} when the respective EoS is linear (nonlinear).
\section{Linear fluids and their bifurcations}
This Section provides a study of the behaviour of bulk fluids with the linear EoS given by Eq. (\ref{linro}) and described by the nonlinear system (\ref{ls1}), (\ref{ls2}), (\ref{constr}).
This system can be solved exactly and the asymptotic properties of the solutions displayed graphically. Equation (\ref{ls2}) has the form
\begin{equation}
\frac{d\Omega}{d\tau}=A\Omega^{2}+B\Omega,\label{function}%
\end{equation}
with $A=2(\gamma+1/2),~B=-2A$.
The $\Omega$-solution from Eq. (\ref{function}) with initial condition
$\Omega\left( 0\right) =\Omega_{0}$ is given by%
\begin{equation}
\Omega\left( \tau\right) =\frac{B/A}{\left( \frac{B}{A\Omega_{0}}+1\right)
e^{-B\tau}-1}=\frac{2\Omega_{0}}{\Omega_{0}+\left( 2-\Omega_{0}\right)
e^{2\left( 2\gamma+1\right) \tau}},\label{omega}%
\end{equation}
The resulting $H$-solutions are found by substituting in (\ref{ls1}) (which is a linear differential equation), yielding,
\begin{equation}
H\left( \tau\right) =\frac{H_{0}e^{-\tau}\sqrt{\Omega_{0}e^{-2\left(
2\gamma+1\right) \tau}-\Omega_{0}+2}}{\sqrt{2}}, \label{hubble}%
\end{equation}
where $H_{0}=H\left( 0\right) $.
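The two closed-form solutions (\ref{omega}) and (\ref{hubble}) are straightforward to code up and probe numerically; the following sketch implements them directly (parameter values in any checks are illustrative).

```python
import math

def Omega_sol(tau, Omega0, gamma):
    """Density parameter of the linear fluid, the Omega(tau) solution above."""
    return 2.0 * Omega0 / (
        Omega0 + (2.0 - Omega0) * math.exp(2.0 * (2.0 * gamma + 1.0) * tau))

def H_sol(tau, H0, Omega0, gamma):
    """Hubble scalar, the H(tau) solution above; math.sqrt raises ValueError
    exactly where the radicand turns negative and the solution becomes complex."""
    rad = Omega0 * math.exp(-2.0 * (2.0 * gamma + 1.0) * tau) - Omega0 + 2.0
    return H0 * math.exp(-tau) * math.sqrt(rad) / math.sqrt(2.0)
```

One can verify, for instance, that for $\gamma<-1/2$ and $\Omega_0<2$ the density parameter tends to the flat-brane value $2$, and that an AdS initial condition ($\Omega_0>2$) with $\gamma>-1/2$ leaves the real domain at finite positive $\tau$.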
From these solutions it follows that dS fluids ($2-\Omega_0>0$) are real solutions for all values of the fluid parameter $\gamma$ and signs of $\tau$. However, the AdS solutions ($2-\Omega_0<0$) become complex when $\gamma>-1/2, \tau>0$ or when $\gamma<-1/2, \tau<0$, since in these cases the exponential in the right-hand-side of Eq. (\ref{hubble}) decays. Hence, AdS fluids are real only when $\gamma<-1/2, \tau>0$, or when $\gamma>-1/2, \tau<0$, and we shall consider AdS solutions only in these ranges.
For dS fluids, we can see that $\Omega\left(\tau\right)$ approaches zero when $\gamma>-1/2, \tau>0$ or $\gamma<-1/2, \tau<0$, and approaches two when $\gamma<-1/2, \tau>0$, or when $\gamma>-1/2, \tau<0$. All AdS fluids develop $\Omega$ blow-up singularities at the finite $\tau$ where the denominator of (\ref{omega}) vanishes, that is for $\gamma>-1/2, \tau>0$, or $\gamma<-1/2, \tau<0$.
On the other hand, to disclose the asymptotic nature of the $H$-solutions we can look at the monotonicity properties of the function $H(\tau)$ and its possible dependence on different $\gamma$ ranges.
The results show a further sensitive dependence on the $\gamma$ parameter around its
$\gamma=-1$ value\footnote{For $\gamma<-1$, the term $-2\left( 2\gamma+1\right) $ is
positive (and $>2$), therefore, the term $e^{-\tau}$ dominates over
$\sqrt{e^{-2\left( 2\gamma+1\right) \tau}}$ for $\tau<0$, while the opposite
happens for $\tau>0$. Thus, the solution $H\left(\tau\right)$ is
decreasing for $\tau<0$ and is increasing for $\tau>0$. As discussed already, solutions in this range of $\gamma$ are not acceptable as they do not satisfy the energy conditions, and so we shall not consider them further.}. For $\gamma>-1$, $H\left( \tau\right) $ always decreases and
approaches zero. At the critical value $\gamma=-1$ the solution $H\left(
\tau\right)$ decreases and approaches the constant
$H_{0}\sqrt{\Omega_{0}/2}$, provided that $\Omega_{0}<2$. We note that the behaviour of the solutions
(\ref{omega}) and (\ref{hubble}) is insensitive on the initial value $H_{0}$. The asymptotic behaviours of the $H,\Omega$ solution is shown in Figures \ref{oh1}, \ref{oh}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=1]{osingular.eps}
\end{center}
\vspace*{-20mm}
\caption{Solutions (\ref{omega}) for $\gamma<-1/2$ and $\gamma>-1/2$ when
$\Omega_{0}>2$.}%
\label{oh1}%
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=1]{oh.eps}
\end{center}
\vspace*{-20mm}
\caption{Solutions for $\gamma=-2/3$ and $\gamma=1/2$. In the first case the solution (\ref{omega})
increases following the usual logistic curve and quickly approaches the
constant value $2$.}%
\label{oh}%
\end{figure}
The nature of the solutions is further revealed by studying the stability of the equilibrium solutions. This is effected by formally setting $X=(H,\Omega)$ and thinking
of the system (\ref{ls1})-(\ref{ls2}) as one of the form,
\begin{equation}
\frac{dX}{d\tau}=F(X,\gamma),\quad F=(F_{1},F_{2}),
\end{equation}
where $\gamma\in(-\infty,\infty)$, and with the $F_{i},i=1,2$, being the right-hand-sides
of Eqns. (\ref{ls1}) and (\ref{ls2}) respectively. With this notation, we now show that the solutions of the system undergo a transcritical bifurcation when $\gamma=-1/2$ at the origin, which is a
non-hyperbolic equilibrium. This means that bulk fluids exchange their stability when the EoS parameter $\gamma$ passes through $-1/2$.
Returning to Eq. (\ref{ls2}), when $\gamma=-1/2$, the system is $\Omega^{\prime}=0,H^{\prime}=-H$, with
immediate solution,
\begin{equation}
\Omega\left( \tau\right) =\Omega_{0}=\mathrm{const.},\ \ H\left(
\tau\right) =H_{0}e^{-\tau}. \label{gammaminus1}
\end{equation}
In this case, every point on the $\Omega$ axis is a non-hyperbolic equilibrium and every other point on the phase plane approaches the corresponding point of the $\Omega$ axis as shown in Fig \ref{phase-linear-half}. Therefore in this case we have expanding universes with ever-decreasing rate and collapsing ones with an ever-increasing rate both approaching a static dS bulk of a constant density.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.7]{gamma_minhalf_half_lin.eps}
\end{center}
\vspace*{-10mm}
\caption{Phase portrait of the system (\ref{ls1})-(\ref{ls2}) for
$\gamma=-1/2$. Every point on the $\Omega$ axis is an equilibrium.}%
\label{phase-linear-half}%
\end{figure}
The case $\gamma\neq-1/2 $ is shown in Figures \ref{onedim}, \ref{phase_linear}. When $\gamma
\gtrless-1/2 $, we have $F_{2}^{\prime\prime}\gtrless0$, and so $F_{2}%
(\Omega)$ is a convex or a concave function respectively. In this case, there
are two equilibria, one at $\Omega=0$, and a second one at $\Omega=2$. When
$\gamma>-1/2 $, the equilibrium at the origin is stable while the one at
$\Omega=2$ is unstable, and they exchange their stability when $\gamma<-1/2$.
For initial conditions with $\Omega
_{0}<2$, that is for bulk models with a dS brane, and for the case
$\gamma>-1/2$ (the left diagram in Fig. \ref{onedim}), the solution $\Omega\left(
\tau\right) $ decreases approaching zero, the `Milne state' for positive
$\tau$, whereas for $\gamma<-1/2$ (the right diagram in Fig. \ref{onedim}), the solution
(\ref{omega}) increases and approaches the constant value $2$ which
corresponds to a flat brane ($k=0$).
The situation is different if initial conditions with $\Omega_{0}>2$, that is
for bulk models with an AdS brane ($k=-1$), are considered. For $\gamma>-1/2$,
the solution $\Omega\left( \tau\right) $ increases without bound, whereas
for $\gamma<-1/2$ the solution $\Omega\left( \tau\right) $ decreases to the
flat state at $\Omega=2$. Therefore we have a transcritical bifurcation occurring at the parameter value
$\gamma=-1/2$, so that the two equilibria switch their stability without
disappearing after the bifurcation, see Figure \ref{phase_linear} for the full
phase portrait of the system.
These results, when translated to the bulk-brane language, imply that the evolution of bulk fluids with a linear equation of state depends on the $\gamma$ parameter and is organized around the two simplest equilibria, namely, the empty bulk and the flat fluid, which exchange their stability through the transcritical bifurcation as the nature of the fluid changes with $\gamma$. A typical bulk fluid evolves either towards or away from the equilibrium states `empty bulk' and `flat fluid' depending on whether it has $\gamma\gtrless-1/2$, as shown in Fig. \ref{phase_linear}.
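To illustrate the transcritical exchange of stability, assume for the sake of illustration that the decoupled $\Omega$-equation has the logistic form $F_{2}(\Omega)=(2\gamma+1)\Omega(\Omega-2)$; this is a hypothesis consistent with the $\gamma=0$ reduction (\ref{gamma0}) and with $F_{2}^{\prime\prime}\gtrless0$ for $\gamma\gtrless-1/2$, but the exact form is fixed by (\ref{ls2}) and is not reproduced here. Under this assumption, the linearized slopes at the two equilibria exchange sign at $\gamma=-1/2$:

```python
import sympy as sp

Omega, gamma = sp.symbols('Omega gamma', real=True)

# Hypothetical logistic form of the Omega-equation (an assumption for
# illustration only; consistent with the gamma = 0 reduction and with
# F2'' >< 0 for gamma >< -1/2)
F2 = (2*gamma + 1) * Omega * (Omega - 2)

dF2 = sp.diff(F2, Omega)
gammas = (sp.Rational(1, 2), -1)  # one value on each side of gamma = -1/2
slope_origin = {g: dF2.subs({Omega: 0, gamma: g}) for g in gammas}
slope_flat = {g: dF2.subs({Omega: 2, gamma: g}) for g in gammas}
print(slope_origin, slope_flat)  # signs swap across gamma = -1/2
```

The origin is attracting and the flat state repelling for $\gamma>-1/2$, and vice versa for $\gamma<-1/2$, reproducing the exchange of stability described above.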
\begin{figure}[tbh]
\begin{center}
\includegraphics{onedlin.eps}
\end{center}
\vspace*{-10mm}
\caption{The 1-dimensional phase space ($\Omega$-line) in the linear fluid
case. The Figure on the left (right) shows the case $\gamma>-1/2 $
($\gamma<-1/2$). The arrows correspond to evolution in positive $\tau$.}%
\label{onedim}%
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.8]{phase_linear.eps}
\end{center}
\vspace*{-5mm}
\caption{Phase portrait of the system (\ref{ls1})-(\ref{ls2}) for
$\gamma<-1/2$ and $\gamma>-1/2$. The stable node at $(0,2) $ (left), exchanges
stability with the saddle $( 0,0) $ (right) as the $\gamma$ values change.}%
\label{phase_linear}%
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.6]{gammazero.eps}
\end{center}
\vspace*{-8mm}
\caption{Phase portrait of the system (\ref{nls1})-(\ref{nls2}) for arbitrary
$\lambda$ and $\gamma=0$. }%
\label{gammazero}%
\end{figure}
A last special case of importance is that of `bulk dust'. The behaviour found here for a bulk fluid with a linear EoS is shared by all bulk fluids having a nonlinear EoS (see next Section). For $\gamma=0$, the system (\ref{nls1})-(\ref{nls2}) reduces to
\begin{equation}
\begin{aligned}
\frac{dH}{d\tau} & =-H-\frac{\Omega H}{2},\\
\frac{d\Omega}{d\tau} & =-2\Omega+\Omega^{2},\label{gamma0}%
\end{aligned}
\end{equation}
for all $\lambda$. The phase portrait of the system (\ref{gamma0}) indicates
that all solutions with initial values $\Omega_{0}<2$ (this corresponds to a dS dust fluid) and $H_{0}$ arbitrary,
asymptotically approach the node $\left( 0,0\right) $ (that is a static, empty dS bulk), see Figure
\ref{gammazero}. Hence, we find that dust-filled dS bulks rarefy to empty ones. This can be proved by noting that the formal solution of the
first of (\ref{gamma0}) can be written as
\begin{equation}
H\left( \tau\right) =H\left( 0\right) \exp\left( -\int_{0}^{\tau}\left(
1+\Omega\left( s\right) /2\right) ds\right) ,
\end{equation}
which goes to zero as $\tau\rightarrow\infty$. By the same formula, we can see again here (like in Fig. \ref{phase_linear}, right phase portrait)
that AdS trajectories starting above the line $\Omega_{0}=2$ approach the $\Omega$
axis, while $\Omega\left( \tau\right) $ diverges.
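The approach of dust trajectories to the node $(0,0)$ can also be illustrated numerically. The following pure-Python sketch (a fourth-order Runge-Kutta integration of (\ref{gamma0}); illustration only, with arbitrarily chosen initial data) shows a trajectory with $\Omega_{0}<2$ settling onto the static, empty dS bulk:

```python
def rhs(H, Omega):
    # System (gamma0): H' = -H - Omega*H/2, Omega' = -2*Omega + Omega**2
    return (-H - Omega*H/2.0, -2.0*Omega + Omega**2)

def integrate(H0, Omega0, tau_end=15.0, dt=1e-3):
    # Classical fourth-order Runge-Kutta integration
    H, Omega = H0, Omega0
    for _ in range(int(tau_end/dt)):
        k1 = rhs(H, Omega)
        k2 = rhs(H + dt/2*k1[0], Omega + dt/2*k1[1])
        k3 = rhs(H + dt/2*k2[0], Omega + dt/2*k2[1])
        k4 = rhs(H + dt*k3[0], Omega + dt*k3[1])
        H += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        Omega += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return H, Omega

# A dS dust trajectory: Omega0 < 2, H0 arbitrary
H_end, Omega_end = integrate(H0=3.0, Omega0=1.5)
print(H_end, Omega_end)  # both approach the node (0, 0)
```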
\section{Nonlinear fluids: Regularity and stability}
Let us now move to discuss properties of the nonlinear, two-dimensional system
(\ref{nls1})-(\ref{nls2}). In distinction to the linear case treated above,
this is a genuine, coupled, two-dimensional dynamical system and this results
in two important effects that we discuss below. We first discuss the nature of
the equilibria of the system and study the phase portrait. We then find a
suitable Dulac function for the dynamics of the nonlinear fluid-brane system
and show that there can be no closed (periodic) orbits for the system in certain parts of the phase space.
The nature of the $(H,\Omega)$-solutions of the system (\ref{nls1}%
)-(\ref{nls2}) is strongly dependent on the ranges of the $\lambda$-parameter
present in the fluid's nonlinear equation of state Eq. (\ref{nonlinro}), in
particular, on the three ranges, $\lambda<0$, $\lambda\in(0,1)$, and
$\lambda\geq1$.
When $\lambda<0$, there are no finite equilibria for the system (\ref{nls1}%
)-(\ref{nls2}). In this case, the dynamics is transferred to points at
infinity, a more complicated problem that we do not consider in this paper,
since it requires a deeper analysis of the `companion system' to
(\ref{nls1})-(\ref{nls2}), cf. \cite{go1,cb07}. Because of the presence of
denominators in the vector field that defines the system (\ref{nls1}%
)-(\ref{nls2}) when $\lambda<0$, an analysis of this case will help to further
clarify the question of the existence of stable singularity-free solutions of
the system. We only further note that the `dynamics at infinity' in this case
may be realized through the Poincar\'{e} sphere compactification as a boundary
dynamics in the framework of \emph{ambient cosmology}, an extension of brane
cosmology wherein the brane lies at the conformal infinity of the bulk
\cite{ac15}.
Next, for the case $\lambda>0$, we must distinguish between the two cases $\lambda\in(0,1)$ and $\lambda\geq 1$. For $\lambda\geq 1$, there are always the two $\gamma$-independent equilibria at $(0,0)$ and $(0,2)$. In addition, there are $\gamma$-dependent equilibria being generally complex:
\begin{equation}
\left( H_{\ast},\Omega_{\ast}\right) =\left( \exp\left( \frac
{-i\pi+\lambda\ln6+\ln\frac{\gamma}{6}}{2-2\lambda}\right) ,2\right)
.\label{equilibria}%
\end{equation}
These equilibria are real provided $\lambda$ takes the values
\begin{equation}
\lambda(n)=1\pm\frac{1}{2n},\quad n=1,2,\dots.\label{lambda}%
\end{equation}
It is interesting that these values of the exponent $\lambda$ have the form of a polytropic index (cf. \cite{sc}, section 8).
For $\lambda$ taking the values $\lambda(n)=1-1/2n$, the equilibria (\ref{equilibria}) all correspond to flat bulk fluids, as they lie on the line $\Omega=2$, at the points where,
\begin{equation}
H_{\ast}=\left\{ -\frac{\gamma}{\sqrt{6}},\frac{\gamma^{2}}{\sqrt{6}}%
,-\frac{\gamma^{3}}{\sqrt{6}},\frac{\gamma^{4}}{\sqrt{6}},...\right\}
.\label{hequ}%
\end{equation}
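The collapse of the complex exponential in (\ref{equilibria}) to this real sequence can be checked numerically (an illustrative sketch, assuming $\gamma>0$ so that $\ln(\gamma/6)$ is real):

```python
import cmath
import math

def H_star(lam, gamma):
    # Equilibrium formula (equilibria); real log requires gamma > 0 here
    num = -1j*math.pi + lam*math.log(6) + math.log(gamma/6)
    return cmath.exp(num / (2 - 2*lam))

gamma = 0.7
vals = [H_star(1 - 1/(2*n), gamma) for n in (1, 2, 3, 4)]
expected = [(-1)**n * gamma**n / math.sqrt(6) for n in (1, 2, 3, 4)]
print(vals, expected)  # vals are real up to rounding and match expected
```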
For the remaining values of $\lambda$ in the case where $\lambda\in(0,1)$, there are no other finite equilibria, and the presence of denominators in the vector field, as in the case of $\lambda<0$, makes this case more complicated dynamically.
To construct a typical phase portrait for $\lambda=1-1/2n$, we
may use $\lambda(1)=1/2$. The only equilibrium of the system is $\left(
-\gamma/\sqrt{6},2\right) $ as implied by (\ref{hequ}); for all $\gamma$ the
Jacobian matrix at this point is
\[
\left[
\begin{array}
[c]{cc}%
-2 & 0\\
0 & -2
\end{array}
\right] ,
\]
therefore $\left( -\gamma/\sqrt{6},2\right) $ is a stable proper node (a `star sink' in other terminology), since the Jacobian is a negative multiple of the identity. It is interesting that this node represents
a flat bulk fluid with a $\gamma$ that falls inside the range of acceptable
values as dictated by the energy conditions. It turns out that for all
$\gamma$, this equilibrium is a global attractor of all solutions of
(\ref{nls1})-(\ref{nls2}), see Figure \ref{node1}, where the attractor is
shown for the cases of a cosmological constant ($\gamma=-1$) and a massless scalar field bulk ($\gamma=1$).
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.8]{node1.eps}
\end{center}
\par
\vspace*{-10mm} \caption{Phase portrait of the system (\ref{nls1}%
)-(\ref{nls2}) for $\lambda=1/2$ and $\gamma=-1$ and $\gamma=1$. }%
\label{node1}%
\end{figure}
We now proceed with the analysis of the case $\lambda\geq1$.
As already mentioned, the system (\ref{nls1})-(\ref{nls2}) has two equilibrium points, located
at the origin and at the phase point $(0,2)$, that is the whole bulk dynamics
is organized around a static empty dS bulk and a static flat bulk fluid. We
note that the linearized system around $(0,0)$ becomes,
\begin{equation}
\left[
\begin{array}
[c]{c}%
H^{\prime}\\
\Omega^{\prime}%
\end{array}
\right] =\left[
\begin{array}
[c]{cc}%
-1 & 0\\
0 & -2
\end{array}
\right] \left[
\begin{array}
[c]{c}%
H\\
\Omega
\end{array}
\right] ,\label{linzed}%
\end{equation}
therefore the point $\left( 0,0\right) $ is a stable node, and the local
phase portrait of the full nonlinear system (\ref{nls1})-(\ref{nls2}) is
topologically equivalent to that of the linear system (\ref{linzed}). By
inspection of the Jacobian matrix at $\left( 0,2\right) $ we see that its
eigenvalues are $\pm2$, therefore this equilibrium is a saddle point, that is,
trajectories approaching this point eventually move away. A typical phase
portrait for $\lambda\geq1$ is given in Figure \ref{lambda2} for a
cosmological constant (for $\gamma=-1$) and a massless scalar field bulk (i.e., $p=\rho$).
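The classification of the two $\gamma$-independent equilibria can be confirmed with an elementary computation (an illustrative sympy sketch): the eigenvalues of the matrix in (\ref{linzed}) are $-1$ and $-2$, and the corresponding solutions decay to the origin:

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
H0, Omega0 = sp.symbols('H0 Omega0', positive=True)

# Coefficient matrix of the linearization (linzed) around (0,0)
A = sp.Matrix([[-1, 0], [0, -2]])
eigs = A.eigenvals()  # both eigenvalues negative -> stable node

# Explicit solutions of the linearized system decay to the origin
H = H0*sp.exp(-tau)
Omega = Omega0*sp.exp(-2*tau)
limits = (sp.limit(H, tau, sp.oo), sp.limit(Omega, tau, sp.oo))
print(eigs, limits)
```

The same kind of computation at $(0,2)$ yields eigenvalues of opposite signs, $\pm2$, hence the saddle behaviour quoted above.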
For $\lambda(n)=1+1/2n$, the equilibria (\ref{equilibria}) lie on
the line $\Omega=2$, at the points where,%
\begin{equation}
H_{\ast}=\left\{ -\frac{1}{\sqrt{6}\gamma},\frac{1}{\sqrt{6}\gamma^{2}%
},-\frac{1}{\sqrt{6}\gamma^{3}},\frac{1}{\sqrt{6}\gamma^{4}},...\right\}
.\label{hequ1}%
\end{equation} We take as typical example the case $\lambda=3/2$. Apart from the
attracting sink at the origin and the saddle at $\left( 0,2\right) $, the
system has a third equilibrium located according to (\ref{hequ1}) at $\left(
-1/\left( \sqrt{6}\gamma\right) ,2\right) $, and belonging to the first
quadrant for $\gamma<0$, or to the second quadrant for $\gamma>0$. The phase
portrait of the system is shown in Figure \ref{saddle1} for $\gamma
=-1/\sqrt{6}$, $\gamma=1/\sqrt{6}$.
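As with (\ref{hequ}), one can check numerically (an illustrative sketch, assuming $\gamma>0$ so that $\ln(\gamma/6)$ is real) that for $\lambda(n)=1+1/2n$ the formula (\ref{equilibria}) reproduces the sequence (\ref{hequ1}):

```python
import cmath
import math

def H_star(lam, gamma):
    # Equilibrium formula (equilibria); real log requires gamma > 0 here
    num = -1j*math.pi + lam*math.log(6) + math.log(gamma/6)
    return cmath.exp(num / (2 - 2*lam))

gamma = 1/math.sqrt(6)
vals = [H_star(1 + 1/(2*n), gamma) for n in (1, 2, 3)]
expected = [(-1)**n / (math.sqrt(6) * gamma**n) for n in (1, 2, 3)]
print(vals, expected)
```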
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.9]{lambda2.eps}
\end{center}
\vspace*{-10mm}
\caption{Phase portrait of the system (\ref{nls1})-(\ref{nls2}) for
$\lambda=2$, for $\gamma=-1$ and $\gamma=1$. For $\gamma=-1$ there are three
saddles, at $(0,2)$, $(-1,2)$, and $(1,2)$, while for $\gamma=1$ there is only one, at $(0,2)$.}%
\label{lambda2}%
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.9]{saddle1a.eps}
\end{center}
\vspace*{-10mm}
\caption{Phase portrait of the system (\ref{nls1})-(\ref{nls2}) for
$\lambda=3/2$ and $\gamma=-1/\sqrt{6}$ and $\gamma=1/\sqrt{6}$. In each case there are three
equilibrium points at $\left( 0,0\right) $, at $\left(
0,2\right) $ and at $\left( 1/\sqrt{6},2\right) $ and $\left( -1/\sqrt
{6},2\right) $ respectively.}%
\label{saddle1}%
\end{figure}
There are three invariant lines, namely $H=0$ and $\Omega=0$ as discussed
after (\ref{nls1})-(\ref{nls2}), as well as the line $\Omega=2$, corresponding to static, empty, and flat bulk fluids respectively. These lines
are the boundaries of the following invariant sets. The strip between the
lines $\Omega=0$ and $\Omega=2$ is an invariant set under the flow of the
system, since every trajectory starting in this strip remains there forever.
Similarly, the sets $\Omega>2,$ $H>0$ and $\Omega>2,$ $H<0$ (that is dynamic AdS bulks) are invariant
sets. For both signs of $\gamma$, all solutions with initial values
$\Omega_{0}>2$ and $H_{0}$ arbitrary, diverge to $\pm\infty$.
The only bounded
solutions observed in Figure \ref{saddle1} are those trajectories approaching the stable node at the origin.
For example, expanding dS scalar field bulks (that is for $\gamma=1$) with initial values $H_{0}>0$,
$\Omega_{0}<2$ asymptotically approach $\left(0,0\right)$, that is they become static and empty; however, the
determination of the whole basin of attraction of a sink is not always
possible. Finally, there are solutions with $\Omega\left( \tau\right) $
approaching the constant value $\Omega_{\ast}=2$ while $H\left( \tau\right)
$ is diverging to $\pm\infty$, depending on the sign of $\gamma$.
We conclude by giving in the following Table a summary of how the nature of the equilibria given in Eq. (\ref{equilibria}) depends on the type and range of $\gamma$ for the first few values of $n$ in Eq. (\ref{lambda}):
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$n$ & $\lambda=1-1/2n$ & $\lambda=1+1/2n$\\
\hline
1 & sink for $\gamma\in\left[ -1,1\right] $ & saddle for $\gamma\in \left[ -1,1\right] $\\
2 & sink for $\gamma\in [-1,0)$, saddle for $\gamma\in (0,1]$ & saddle for $\gamma\in \left[ -1,1\right] $ \\
3 & sink for $\gamma\in [-1,0)$& saddle for $\gamma\in [-1,0)$\\
4 & saddle for $\gamma\in \left[ -1,1\right] $& saddle for $\gamma\in \left[ -1,1\right] $\\
5 & sink for $\gamma\in [-1,0)$& saddle for $\gamma\in [-1,0)$\\
\hline
\end{tabular}
\end{center}
It is interesting to note that, because of the presence of a \emph{saddle connection} in Fig. \ref{saddle1} (the horizontal line $\Omega=2$ connecting the two saddles), the hypotheses of Peixoto's theorem on structural stability are violated.
It also clearly follows from Figs. \ref{node1}, \ref{lambda2}, \ref{saddle1}, that the three cases, two corresponding to the polytropic indices $\lambda(n)=1\pm 1/2n$, and the third case of $\lambda$ unequal to those, are all qualitatively inequivalent.
We conclude this Section by showing the \emph{impossibility of closed
orbits} for the system (\ref{nls1})-(\ref{nls2}) in the first quadrant of the
$H-\Omega$ plane, that is for expanding, non-empty bulks.
To see this, we introduce the function,
\begin{equation}
g=\frac{1}{H^{a}\Omega^{b}},\label{dulac}%
\end{equation}
and the problem is to use the system (\ref{nls1})-(\ref{nls2}) to determine the constants $a,b$ such that the divergence of the product of the function $g$ with the vector field $(\dot{H},\dot{\Omega})$, that is of $g(\dot{H},\dot{\Omega})$, is positive for certain ranges of $\lambda,a,b,\gamma$. The vector field
$g(\dot{H},\dot{\Omega})$ is given by
\begin{equation}
g(\dot{H},\dot{\Omega})=(f_{1}(H,\Omega),f_{2}(H,\Omega)),
\end{equation}
where,
\begin{equation}
f_{1}(H,\Omega)=\frac{1}{H^{a-1}\Omega^b}-\frac{1}{2H^{a-1}\Omega^{b-1}}-3^{\lambda-1}
H^{2\lambda-1-a}\Omega^{\lambda-b},
\end{equation}
and
\begin{equation}
f_{2}(H,\Omega)=\frac{-2}{H^{a}\Omega^{b-1}}+\frac{1}{H^{a}\Omega^{b-2}}+2\gamma3^{\lambda-1}%
H^{2\lambda-2-a}\Omega^{\lambda+1-b}-4\gamma3^{\lambda-1} H^{2\lambda-2-a}\Omega^{\lambda-b}.
\end{equation}
Then, setting $a=1$, the divergence of this vector field is given by,
\begin{equation}
\begin{aligned}
\operatorname{div}[g(\dot{H},\dot{\Omega})]
=&\frac{2(b-1)}{H\Omega^b}+\frac{2-b}{H\Omega^{b-1}}\\
&+2\gamma 3^{\lambda-1}(2-b)H^{2\lambda-3}\Omega^{\lambda-b}\\
&-4\gamma 3^{\lambda-1} (\lambda-b) H^{2\lambda-3}\Omega^{\lambda-b-1}.
\end{aligned}
\end{equation}
The right-hand-side of this equation is positive provided,
\begin{equation}\label{cond}
\gamma>0,\,\,\,H>0,\,\,\Omega>0,\,\,1<b<2,\,\,\lambda<b,
\end{equation}
so when these inequalities all hold, the divergence is strictly positive. A function $g$ as in (\ref{dulac}) with this property is a \emph{Dulac function}; in general, no algorithm exists for finding such functions. From the inequalities in (\ref{cond}) it then follows that on the simply connected domain,
\begin{equation}
\mathcal{D}=\{(H,\Omega)|H,\Omega>0\},
\end{equation}
of the planar phase space, the vector field defined by (\ref{nls1})-(\ref{nls2}), satisfies
$(\dot{H},\dot{\Omega})\in\mathcal{C}^{1}(\mathcal{D})$, and $g\in
\mathcal{C}^{1}(\mathcal{D})$, and the divergence $\nabla\cdot g(\dot{H}%
,\dot{\Omega})$ is strictly positive on $\mathcal{D}$. Then by the
Bendixson-Dulac theorem, we conclude that there is no closed orbit lying
entirely on $\mathcal{D}$.
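As a numerical sanity check (illustrative only), one can evaluate the divergence stated above at sample points satisfying (\ref{cond}) and confirm its positivity:

```python
import sympy as sp

H, Om, b, lam, gamma = sp.symbols('H Omega b lamda gamma', positive=True)

# The divergence stated above, for a = 1
div = (2*(b - 1)/(H*Om**b) + (2 - b)/(H*Om**(b - 1))
       + 2*gamma*3**(lam - 1)*(2 - b)*H**(2*lam - 3)*Om**(lam - b)
       - 4*gamma*3**(lam - 1)*(lam - b)*H**(2*lam - 3)*Om**(lam - b - 1))

# Sample points satisfying (cond): gamma > 0, H > 0, Omega > 0, 1 < b < 2, lam < b
points = [(0.5, 1.5, 1.5, 1.2, 0.3),
          (2.0, 0.4, 1.9, 0.5, 1.0),
          (10.0, 5.0, 1.1, 1.05, 2.0)]
vals = [float(div.subs(dict(zip((H, Om, b, lam, gamma), p)))) for p in points]
print(vals)  # all strictly positive
```

Indeed, under (\ref{cond}) each of the four terms is separately positive, so the positivity holds pointwise on $\mathcal{D}$.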
\section{Localisation}
The basic condition for gravity localisation on the brane is that the 4-dimensional Planck mass, proportional to the integral $\int_{-\infty}^{0}a^{2}dY$, is finite, that is, that the integral converges. We can use two different approaches to deal with this integral: firstly, using the constraint to re-express the integral in terms of dimensionless variables, and secondly, by direct evaluation.
To start with the first approach, we note that using the Friedmann constraint equation
(\ref{constr}) which is reproduced here,
\[
2-\Omega=\frac{2k}{H^{2}a^{2}},
\]
we can express the `Planck mass integral' $\int a^{2}dY$ in terms of the
dimensionless variables, namely,
\[
2k\int\frac{d\tau}{H^{3}(2-\Omega)}.
\]
One might think that this integral expressing the Planck mass can be calculated explicitly, for $H\left( \tau\right) $ and $\Omega\left( \tau\right) $ given by the solutions (\ref{omega}) and (\ref{hubble}); however, the constraint equation (\ref{constr}) provides a relation between the scale factor $a$ and the variables $H$ and $\Omega$ only for $k\neq0$ models. Therefore we may instead choose to evaluate the integral directly,
\begin{equation}
\int a^{2}dY=\int\frac{a^{2}}{H}d\tau. \label{inte1}%
\end{equation}
Let us consider the case of the linear fluid first. By the definition (\ref{dimless1}), $a\left( \tau\right) $ is proportional
to $e^{\tau}$ and $H\left( \tau\right) $ is given by the solution
(\ref{hubble}). We are interested in examining whether the Planck mass
integral is finite on the interval $(-\infty,0]$. In fact, we are able to
show something more, namely, that it is finite on intervals of the form
$(-\infty,\tau_{1}]$, with a suitably chosen positive $\tau_{1}$ depending on
the initial conditions and $\gamma$.
It turns out that the Planck mass integral can be expressed explicitly in
terms of the ordinary hypergeometric function $_{2}F_{1}\left( a,b;c;z\right) $.
More precisely, the value of the indefinite integral (\ref{inte1}) is%
\begin{equation}
\frac{a_{0}^{2}\sqrt{2}e^{3\tau}\,_{2}F_{1}\left( a,b;c;z\right) }{3H_{0}%
\sqrt{2-\Omega_{0}}}, \label{inte2}%
\end{equation}
where%
\begin{equation}
a=\frac{1}{2},\ \ b=-\frac{3}{4\gamma+2},\ \ c=\frac{4\gamma-1}{4\gamma
+2},\ \ z=\frac{\Omega_{0}\exp\left( -2\left( 2\gamma+1\right) \tau\right)
}{\Omega_{0}-2}.
\end{equation}
For some particular values of $\gamma$, the integral (\ref{inte2}) can be expressed as
combination of elementary functions, although by complicated formulas. We treat the cases $\Omega_{0}\lessgtr 2$ separately below.
For $\Omega_{0}<2$ the improper integral $\int_{-\infty}^{0}\left( a^{2}/H\right) d\tau$ exists, i.e., it can be expressed in terms of the constants $a_{0},\Omega_{0},H_{0}$, at least for the representative values,
\begin{equation}
\gamma=-1,-2/3,-1/2,-1/3,0,1/2,1.
\end{equation}
For the critical value $\gamma=-1/2$, the
hypergeometric function is not defined, but with the solution
(\ref{gammaminus1}), i.e., $H\sim e^{-\tau}$, the integral (\ref{inte1}) is
elementary and $\int_{-\infty}^{0}\left( a^{2}/H\right) d\tau=a_{0}%
^{2}/3H_{0}$.
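The $\gamma=-1/2$ value quoted here follows from a one-line integration: with $a(\tau)=a_{0}e^{\tau}$ and $H(\tau)=H_{0}e^{-\tau}$, the integrand of (\ref{inte1}) is $(a_{0}^{2}/H_{0})e^{3\tau}$. A sympy sketch (illustrative check) confirming the value $a_{0}^{2}/3H_{0}$:

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
a0, H0 = sp.symbols('a_0 H_0', positive=True)

# gamma = -1/2: a = a0*exp(tau), H = H0*exp(-tau), so a^2/H = (a0^2/H0)*exp(3*tau)
planck_mass = sp.integrate(a0**2/H0 * sp.exp(3*tau), (tau, -sp.oo, 0))
print(planck_mass)
```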
Moreover, in all cases the integral is finite on intervals of
the form $(-\infty,\tau_{1}]$, for some positive $\tau_{1}$. This fact can be
understood, at least for $k\neq0$, if we take into account the remarks after
equation (\ref{hubble}): the integrand $1/H^{3}\left( \Omega-2\right) $
remains bounded and approaches zero, even if the functions $H\left(
\tau\right) $ and $\Omega\left( \tau\right) $ take arbitrary large values.
The situation is different if $\Omega_{0}>2$. In that case, for
$\gamma<-1/2,$ $H\left( \tau\right) $ is real only when the expression inside the root in Eq. (\ref{hubble}) is non-negative, that is when
\begin{equation} \label{tau*}
\tau\geq\tau_*,
\end{equation}
where,
\begin{equation}
\tau_*=\frac{1}{-2(2\gamma+1)}\ln\left(\frac{\Omega_0-2}{\Omega_0}\right)<0,
\end{equation}
with equality in (\ref{tau*}) giving the position of the brane. Then we find that the integral
$\int_{\tau_*}^{\infty}\left( a^{2}/H\right) d\tau$ always diverges. For $\gamma\geq-1/2$, $\tau_*$ is positive, but in this case we require $\tau<\tau_*$ for the expression inside the square root in Eq. (\ref{hubble}) to be positive. Thus we have to integrate in the range $(-\infty,\tau_*]$, and so we
find that the integral $\int_{-\infty}^{\tau_*}\left( a^{2}/H\right) d\tau$
exists, at least for the representative values $\gamma=-1/3,-1/2,0,1/2,1$.
To summarize our results for the linear fluid: if $\Omega_{0}<2$, the integral (\ref{inte2})
allows for a finite Planck mass for all $\gamma\in\left[ -1,1\right] $. If
$\Omega_{0}>2$, we have a finite Planck mass for all $\gamma\in\left[
-1/2,1\right]$; in this case, we choose the upper limit of the integral to
be less than some $\tau_{1}>0$.
Next we consider the case of the nonlinear fluid, $\lambda\neq1$. Equations
(\ref{nls1}) and (\ref{nls2}) can be solved numerically for various values of
the parameters $\gamma$ and $\lambda$ and initial values of the variables $H$
and $\Omega$. In all numerical evaluations the solutions develop finite time
singularities, see for example Figure \ref{ohnonlinear}.
Nevertheless, it
seems that the integral (\ref{inte1}) is finite in any interval between the
singularities. This is due to the fact that the integrand $e^{2\tau}/H\left(
\tau\right) $ remains bounded and approaches zero, even if the functions
$H\left( \tau\right) $ and $\Omega\left( \tau\right) $ take arbitrary
large values. \begin{figure}[tbh]
\begin{center}
\includegraphics{ohnonlinear.eps}
\end{center}
\caption{Numerical solution for $\gamma=1/2$, $\lambda=2$ and $\Omega_{0}>2$.
$\Omega(\tau)$ develops a singularity at about $\tau\sim0.74$ and $H\left(
\tau\right) $ has a singularity at $\tau\sim-0.5$.}%
\label{ohnonlinear}%
\end{figure}
The numerical investigation described above indicates that even
for the nonlinear fluid, the Planck mass expressed by (\ref{inte1}) may be
finite, although we were unable to prove this result rigorously.
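The finite-time singularities themselves can be illustrated in the simplest setting by the $\gamma=0$ reduction (\ref{gamma0}), which holds for all $\lambda$: for $\Omega_{0}>2$ the equation $\Omega^{\prime}=-2\Omega+\Omega^{2}$ blows up at the finite time $\tau_{s}=\frac{1}{2}\ln\left(\Omega_{0}/(\Omega_{0}-2)\right)$. The following sketch (the closed-form solution is our own illustrative computation, not taken from the numerical study above) verifies this:

```python
import math

# Closed-form solution of Omega' = -2*Omega + Omega**2 (our own computation,
# for illustration): Omega(tau) = 2 / (1 - ((Omega0 - 2)/Omega0)*exp(2*tau))
def Omega(tau, Omega0):
    return 2.0 / (1.0 - (Omega0 - 2.0)/Omega0 * math.exp(2.0*tau))

Omega0 = 3.0
tau_s = 0.5*math.log(Omega0/(Omega0 - 2.0))  # blow-up time, ~0.549 here

# Check the ODE at a test point with a central finite difference
t, h = 0.2, 1e-6
deriv = (Omega(t + h, Omega0) - Omega(t - h, Omega0)) / (2*h)
residual = deriv - (-2*Omega(t, Omega0) + Omega(t, Omega0)**2)
print(tau_s, residual, Omega(tau_s - 1e-4, Omega0))  # solution diverges near tau_s
```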
\section{Discussion}
In this paper we have introduced and studied the consequences of a new formulation for the dynamics of a 4-braneworld embedded in a bulk 5-space. This formulation transforms the problem into a two-dimensional dynamical system that depends on parameters such as the EoS parameter and the degree of nonlinearity of the fluid. This allows us to study the phase space of the model, and also to consider in detail the importance of different states (points in phase space), such as the origin or the $(0,2)$-state, for the overall dynamical features of the bulk fluid.
For the case of a bulk fluid with a linear equation of state, our new formulation leads to a partial decoupling of the dynamical equations of this model. This in turn implies that the linear fluid case can be solved exactly, that the asymptotic nature of the $(H,\Omega)$ solutions can be directly revealed, and that their dependence on the EoS parameter and the initial conditions can be explicitly shown. In addition, we find that the equilibria of the system depend on the fluid parameter $\gamma$, and this has a major effect on the global dynamics of the system, not present in the simpler case of relativistic cosmologies. The main effect is the existence of a transcritical bifurcation around the value $\gamma=-1/2$, which changes the nature of the local equilibria as well as their stability. We also concluded that the overall geometry of the orbits swirls around the two states we call the empty bulk and the flat fluid, as well as a number of other equilibria.
For the case of a nonlinear bulk fluid, the dynamics is organized differently for different $\lambda$-values, and shows a preference for polytropic bulk fluids. For instance, an overall attractor appears only for $\lambda=1/2$, while the dynamics for a bulk having $\lambda>1$ is characterized by portraits organized around nodes and saddle connections for the values $\lambda=1+1/2n$. This means that the nonlinear case exhibits a variety of instabilities as well as stable and saddle behaviours. The non-existence of closed orbits in the first quadrant is also a marked feature of the nonlinear bulk fluid.
However, as numerical evaluations show, despite the existence of singularities, the phenomenon of brane-localization in the sense of having a finite Planck mass is \emph{self-induced} by the dynamics itself: restricting the dynamics on the orbifold leads generically to gravity localization on the brane. Although this conclusion follows clearly from the various numerical evaluations that we have explicitly performed, we have not been able to provide a full analytic proof. We note, however, that for nonlinear fluids, when the null energy condition is satisfied and $\gamma<0$, there are indeed solutions without finite-distance singularities, as two of us have shown in \cite{ack21a} using different techniques, such as representation of solutions through hypergeometric expansions and matching.
It would be interesting to extend some of these results further, for example, to the case of a bulk filled with a self-interacting scalar field instead of the fluid. Another extension is to study the ambient problem and allow for singularities at infinity using methods similar to those discussed here. It would also be interesting to study in detail the case $\lambda<1$, where the vector field is non-polynomial. These more general problems will be taken up elsewhere.
\section*{Acknowledgements}
IA would like to thank the hospitality and financial support of SISSA and ICTP where this work was partially done.
\addcontentsline{toc}{section}{References}
\section{Introduction}
Lorentz symmetry violation has been of persistent interest in the past two decades,
whereby the major part of the investigations has been carried out within the
minimal Standard-Model Extension (SME) \cite{Colladay,Samuel}. The minimal SME
incorporates power-counting renormalizable contributions for Lorentz violation
in all particle sectors and has been subject to various studies.
The {\em CPT}-even photon sector of the SME has been investigated thoroughly, with
the main objective of obtaining stringent bounds on its 19 coefficients \cite{KM1}.
The absence of vacuum birefringence has led to bounds at the level of
$10^{-32}$ to $10^{-37}$ \cite{KM3} for the 10 birefringent coefficients.
Investigating vacuum Cherenkov radiation \cite{Cherenkov2} for
ultra-high energy cosmic rays \cite{Klink2,Klink3} has provided a set of tight
constraints on the remaining coefficients \cite{Kostelecky:2008ts}.
Furthermore, there has been a vast interest in understanding
the properties of the {\em CPT}-odd photon sector of the SME that is represented
by the Carroll-Field-Jackiw (CFJ) electrodynamics~\cite{Jackiw}. This theory has
been extensively examined in the literature with respect to its consistency \cite{Higgs},
modifications that it induces in quantum electrodynamics (QED) \cite{Adam,Soldati},
its radiative generation \cite{Radio}, and many other aspects. As vacuum
birefringence has not been observed, the related coefficients are strongly bounded
at the level of $\unit[10^{-43}]{GeV}$ \cite{Kostelecky:2008ts}.
It is known that the timelike sector of CFJ electrodynamics is plagued by several
problems such as negative contributions to the energy density \cite{Colladay} and
dispersion relations that become imaginary in the infrared regime. In particular,
it was shown that violations of unitarity are present, at least for small momenta
\cite{Adam}. In the current work, our objective is to study unitarity of the timelike
sector of CFJ electrodynamics.
We would like to find out whether the inclusion of a photon mass can, indeed, solve
these issues. This idea is not unreasonable, as it is well-known that the introduction
of a mass for an otherwise massless particle helps to get rid of certain problems.
For example, a photon mass can act as a regulator for infrared divergences.
Furthermore, adding a mass to the graviton renders gravity renormalizable
(despite being plagued by the Boulware-Deser ghost \cite{Boulware:1973my} that is
removed by the construction of de Rham, Gabadadze, and Tolley \cite{deRham:2010kj}).
It was indicated in \cite{Colladay} and demonstrated in~\cite{Cov_quant}
that a photon mass is capable of mitigating the malign behavior in CFJ
electrodynamics.
The paper is organized as follows. In \secref{sec:theoretical-setting} we
introduce the theory to be considered and discuss some of its properties.
We determine the photon polarization vectors as well as the modified photon
propagator in \secref{sec:polarization-vectors}. Subsequently, we express
the propagator in terms of the polarization vectors, which is a procedure
that was introduced in \cite{Cov_quant}. The resulting object turns out to be
powerful to obtain quite general statements on perturbative unitarity in
\secref{sec:unitarity}. In \secref{sec:application-electroweak} we briefly argue
how the previous results can be incorporated into the electroweak sector of the
SME. Finally, we conclude on our findings in \secref{sec:conclusions}. Natural
units with $\hbar=c=1$ are used unless otherwise stated.
\section{Theoretical setting}
\label{sec:theoretical-setting}
We start from a Lagrange density that is a Lorentz-violating modification of
the St\"{u}ckelberg theory:
\begin{align}
\label{eq:lagrange-density-theory}
{\cal L}_{\gamma}&=-\frac14 F^{\mu \nu}F_{\mu \nu}-\frac14 (k_F)_{\kappa \lambda \mu \nu} F^{\kappa \lambda}F^{\mu \nu}
+\frac 12 (k_{AF})_{\kappa} \epsilon^{\kappa \lambda \rho \sigma}A_{\lambda} F_{\rho \sigma} \notag \\
&\phantom{{}={}}\,+\frac 12 m_{\gamma}^2 A^2-\frac {1}{2\xi}
(\partial\cdot A)^2\,,
\end{align}
with the \textit{U}(1) gauge field $A_{\mu}$, the associated field strength tensor $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$, a photon mass $m_{\gamma}$, and a real parameter $\xi$. Lorentz violation is encoded in the {\em CPT}-even and {\em CPT}-odd background fields $(k_F)_{\kappa\lambda\mu\nu}$ and $(k_{AF})_{\kappa}$, respectively. All fields are defined on Minkowski spacetime with metric signature $(+,-,-,-)$. Furthermore, $\epsilon^{\mu\nu\varrho\sigma}$ is the Levi-Civita symbol in four spacetime dimensions, where we use the convention $\epsilon^{0123}=1$. Restricting \eqref{eq:lagrange-density-theory} to the first and third terms only corresponds to the theory originally investigated by Carroll, Field, and Jackiw in \cite{Jackiw}.
For a timelike choice of $k_{AF}$, the latter theory is known to have stability problems, which explicitly show up, e.g., in the corresponding energy-momentum tensor \cite{Colladay} or the dispersion relations \cite{Adam}. In analyses carried out in the past, the introduction of a photon mass as a regulator \cite{Cov_quant} turned out to resolve these issues. It prevents the dispersion relation from becoming imaginary in the infrared region, i.e., for low momenta. Note that the current upper limit for a photon mass is $\unit[10^{-27}]{GeV}$ \cite{Tanabashi:2018}. Although this constraint on a violation of \textit{U}(1) gauge invariance is very strict, it still lies many orders of magnitude above the constraints on the coefficients of CFJ theory. Finally, as the propagator of Proca theory is known to have a singularity for $m_{\gamma}\mapsto 0$, we also include the last term in \eqref{eq:lagrange-density-theory}, which was introduced by St\"{u}ckelberg.
Now, we employ a specific parameterization of the background fields as follows:
\begin{subequations}
\begin{align}
\label{eq:nonbirefringent-ansatz}
(k_F)_{\mu\nu\varrho\sigma}&=\frac{1}{2}(\eta_{\mu\varrho}\tilde{k}_{\nu\sigma}-\eta_{\mu\sigma}\tilde{k}_{\nu\varrho}-\eta_{\nu\varrho}\tilde{k}_{\mu\sigma}+\eta_{\nu\sigma}\tilde{k}_{\mu\varrho})\,, \displaybreak[0]\\[2ex]
\tilde{k}_{\mu\nu}&=2\zeta_1\left(b_{\mu}b_{\nu}-\eta_{\mu\nu}\frac{b^2}{4}\right)\,, \displaybreak[0]\\[2ex]
(k_{AF})_{\kappa }&=\zeta_2 b_{\kappa }\,,
\end{align}
\end{subequations}
with a symmetric and traceless $(4\times 4)$ matrix $\tilde{k}_{\mu\nu}$ and a four-vector $b_{\mu}$ that gives rise to a preferred direction in spacetime. If Lorentz violation arises from a vector-valued background field, a reasonable assumption could be that the latter is responsible for both {\em CPT}-even and {\em CPT}-odd contributions. Furthermore, $\zeta_1$ and $\zeta_2$ are Lorentz-violating coefficients of mass dimension 0 and 1, respectively, that are introduced to control the strength of {\em CPT}-even and {\em CPT}-odd Lorentz violation independently from each other. The parameterization of $(k_F)_{\mu\nu\varrho\sigma}$ stated in \eqref{eq:nonbirefringent-ansatz} is sometimes called the nonbirefringent \textit{Ansatz} \cite{Altschul:2006zz}, as it contains the 9 coefficients of $(k_F)_{\mu\nu\varrho\sigma}$ that do not provide vacuum birefringence at leading order in Lorentz violation.
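As a quick symbolic check (a sympy sketch; the explicit component conventions are our own illustrative choice), one can verify that $\tilde{k}_{\mu\nu}$ as parameterized above is indeed symmetric and traceless:

```python
import sympy as sp

zeta1 = sp.symbols('zeta1', real=True)
b = sp.Matrix(sp.symbols('b0 b1 b2 b3', real=True))  # covariant components b_mu
eta = sp.diag(1, -1, -1, -1)  # Minkowski metric, signature (+,-,-,-)

bsq = (b.T * eta * b)[0]  # b^2 = eta^{mu nu} b_mu b_nu (eta is its own inverse)
ktilde = 2*zeta1*(b*b.T - eta*bsq/sp.Integer(4))  # k~_{mu nu}

# eta^{mu nu} k~_{mu nu}; eta is diagonal, so only diagonal entries contribute
trace = sp.simplify(sum(eta[m, m]*ktilde[m, m] for m in range(4)))
symmetric = sp.simplify(ktilde - ktilde.T) == sp.zeros(4)
print(trace, symmetric)
```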
Let us rewrite the Lagrange density of \eqref{eq:lagrange-density-theory} in terms of the preferred four-vector $b_{\mu}$ introduced before:
\begin{align}
{\cal L}_{\gamma}&=-\frac14aF^{\mu \nu}F_{\mu \nu}-\zeta_1 b_{\kappa } b_{\mu } F^{\kappa}_{\phantom{\kappa}\lambda}F^{\mu \lambda} \notag \\
&\phantom{{}={}}+ \frac 1 2 \zeta_2 b_{\kappa} \epsilon^{\kappa \lambda \rho \sigma}A_{\lambda} F_{\rho \sigma}+\frac 12 m_{\gamma}^2 A^2-\frac {1}{2\xi}
(\partial\cdot A)^2\,,
\end{align}
with $a=1-\zeta_1 b^2$. Theories with a similar structure can be generated by radiative corrections and were, for example, studied in \cite{andrianov1}. Performing suitable integrations by parts, the latter Lagrange density is expressed as a tensor-valued operator $\hat{M}^{\nu\mu}$ sandwiched in between two gauge fields according to ${\cal L}_{\gamma}=(1/2)A_{\nu}\hat{M}^{\nu\mu}A_{\mu}$ with
\begin{align}
\label{eq:operator-field-equations}
\hat M^{\nu \mu}&=\left[a\partial^2 + m_{\gamma}^2+2\zeta_1 (b\cdot \partial)^2\right]\eta^{\mu\nu}
-\left(a-\frac{1}{\xi}\right)\partial^{\mu}\partial^{\nu} \notag \\
&\phantom{{}={}}-2\zeta_1 (b \cdot \partial) (b^{\mu} \partial^{\nu} + b^{\nu} \partial^{\mu}) \notag \\
&\phantom{{}={}}+2\zeta_1\partial^2 b^{\mu}b^{\nu}+2\zeta_2
\epsilon^{\alpha \nu \rho \mu}b_{\alpha}\partial_{\rho}\,.
\end{align}
This form directly leads us to the equation of motion for the massive gauge field:
\begin{equation}
\label{eq:field-equation-gauge}
\hat M^{\nu\mu}A_{\mu}=0 \,.
\end{equation}
\section{Polarization vectors}
\label{sec:polarization-vectors}
Analyzing the properties of the polarization vectors for modified photons will allow us to find a
relation between the sum over polarization tensors and the propagator~\cite{Cov_quant}. In turn, this
relation will be useful to compute imaginary parts of forward scattering amplitudes and to reexpress these
in terms of amplitudes associated with cut Feynman diagrams. The latter is required by the optical
theorem to test perturbative unitarity. Note that studies of Lorentz-violating modifications based on
the optical theorem have already been performed in several papers for field operators of mass dimension
4 \cite{Opt_Theorem_Minimal} as well as higher-dimensional ones \cite{Opt_Theorem_Nonminimal}.
The operator of Eq.~(\ref{eq:operator-field-equations}) transformed to momentum space (with a global sign
dropped) is given by
\begin{align}
\label{eq:field-eqs-operator-momentum}
M^{\nu\mu}&=\Big[ap^2-m_{\gamma}^2+2\zeta_1(b\cdot p)^2\Big]\eta^{\mu\nu}
-\left(a-\frac{1}{\xi} \right)p^{\mu}p^{\nu} \notag \\
&\phantom{{}={}}-2\zeta_1 (b \cdot p) \left(b^{\mu}p^{\nu}+ b^{\nu} p^{\mu} \right ) \notag \\
&\phantom{{}={}}+2\zeta_1 p^2 b^{\mu} b^{\nu} +2\mathrm{i}\zeta_2 \epsilon^{\alpha \nu \rho \mu} b_{\alpha} p_{\rho}\,.
\end{align}
We consider the eigenvalue problem
\begin{eqnarray}
\label{eq:eigenvalue-problem-polarization}
M^{\nu\mu} v^{(\lambda)}_{\mu}(p)= \Lambda_{\lambda}(p) v^{(\lambda)\nu} (p) \,,
\end{eqnarray}
for a basis $\{v^{(\lambda)}\}$ of polarization vectors which diagonalize the equation of motion.
We use $\lambda=\{0,+,-,3\}$ as labels for these vectors. The eigenvalue $\Lambda_{\lambda}(p)=\Lambda_{\lambda}$
corresponds to the dispersion equation of the mode $\lambda$. To find a real basis, we choose the temporal polarization vector as
\begin{eqnarray}
v^{(0)}_{\mu}=\frac{p_{\mu}}{\sqrt{p^2}}\,,
\end{eqnarray}
and the longitudinal one as
\begin{eqnarray}
v^{(3)}_{\mu}=\frac{p^2b_{\mu}-(p\cdot b)p_{\mu}}{\sqrt{p^2D}}\,,
\end{eqnarray}
with $D=(b\cdot p)^2-p^2b^2$, which is the Gramian of the two four-vectors $p^{\mu}$ and $b^{\mu}$.
It is not difficult to check that $p\cdot v^{(3)}=0$. The longitudinal mode becomes physical for a
nonvanishing photon mass. Let us choose $p^2>0$ such that we do not have to consider absolute values
of $D$ inside square roots.
The previous vectors are normalized according to
\begin{equation}
v^{(0)}_{\mu} v^{(0)\mu}=1\,,\quad v^{(3)}_{\mu} v^{(3)\mu}=-1\,.
\end{equation}
Proceeding with the evaluation of \eqref{eq:eigenvalue-problem-polarization} for $\lambda=0,3$ we obtain
\begin{subequations}
\begin{align}
\label{eq:dispersion-equations-0-3}
\Lambda_{0}&=\frac{p^2}{\xi}-m_{\gamma}^2\,,\quad \Lambda_3=Q-2\zeta_1D\,, \displaybreak[0]\\[2ex]
Q&=ap^2-m_{\gamma}^2+2\zeta_1(b\cdot p)^2\,.
\end{align}
\end{subequations}
The theory is gauge-invariant for vanishing photon mass. In this case, $\xi$ can be interpreted as a
gauge fixing parameter. The dependence of $\Lambda_0$ on $\xi$ tells us that this associated degree
of freedom is nonphysical. We will come back to this point later.
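The two eigenvalue relations just obtained can be confirmed numerically. The plain-Python sketch below (illustrative parameter values of our own choosing; the convention $\epsilon^{0123}=+1$ is assumed, although the CFJ term drops out of both checks anyway) builds $M^{\nu\mu}$ for a purely timelike $b_{\mu}$ and a momentum with $p^2>0$ and verifies $M^{\nu\mu}v^{(0,3)}_{\mu}=\Lambda_{0,3}\,v^{(0,3)\nu}$:

```python
import math

eta = [1.0, -1.0, -1.0, -1.0]       # metric signature (+,-,-,-)

def eps(i, j, k, l):                # Levi-Civita symbol, eps(0,1,2,3) = +1 assumed
    idx = [i, j, k, l]
    if len(set(idx)) < 4:
        return 0
    sgn = 1
    for a_ in range(4):
        for c_ in range(a_ + 1, 4):
            if idx[a_] > idx[c_]:
                sgn = -sgn
    return sgn

def dot(u, v):                      # Minkowski product of contravariant vectors
    return sum(eta[m] * u[m] * v[m] for m in range(4))

# Illustrative inputs: perturbative coefficients, timelike b, and p with p^2 > 0
z1, z2, mg, xi = 0.05, 0.03, 0.1, 2.0
b = [1.0, 0.0, 0.0, 0.0]
p = [2.0, 0.3, 0.4, 0.5]
p2, b2, bp = dot(p, p), dot(b, b), dot(b, p)
a = 1.0 - z1 * b2
D = bp ** 2 - p2 * b2
Q = a * p2 - mg ** 2 + 2 * z1 * bp ** 2

# M^{nu mu} of Eq. (field-eqs-operator-momentum), stored as M[nu][mu]
M = [[(a * p2 - mg ** 2 + 2 * z1 * bp ** 2) * (eta[mu] if mu == nu else 0.0)
      - (a - 1.0 / xi) * p[mu] * p[nu]
      - 2 * z1 * bp * (b[mu] * p[nu] + b[nu] * p[mu])
      + 2 * z1 * p2 * b[mu] * b[nu]
      + 2j * z2 * sum(eps(al, nu, rh, mu) * eta[al] * b[al] * eta[rh] * p[rh]
                      for al in range(4) for rh in range(4))
      for mu in range(4)]
     for nu in range(4)]

def act(vlow):                      # (M v)^nu = M^{nu mu} v_mu on covariant components
    return [sum(M[nu][mu] * vlow[mu] for mu in range(4)) for nu in range(4)]

p_low = [eta[m] * p[m] for m in range(4)]
b_low = [eta[m] * b[m] for m in range(4)]
v0 = [x / math.sqrt(p2) for x in p_low]                    # temporal mode
v3 = [(p2 * bl - bp * pl) / math.sqrt(p2 * D)
      for bl, pl in zip(b_low, p_low)]                     # longitudinal mode

lam0 = p2 / xi - mg ** 2            # Lambda_0
lam3 = Q - 2 * z1 * D               # Lambda_3

for v, lam in ((v0, lam0), (v3, lam3)):
    mv = act(v)
    assert all(abs(mv[nu] - lam * eta[nu] * v[nu]) < 1e-10 for nu in range(4))
```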
Now, we find the remaining two polarization states, which we label as $\lambda=\pm$.
First, let us define the two real four-vectors
\begin{subequations}
\label{eq:real-vectors}
\begin{align}
v^{(1)\mu}&=\epsilon^{\mu\nu\rho\sigma}\frac{p_{\nu}n_{\rho}b_{\sigma}}{N_1}\,, \displaybreak[0]\\[2ex]
v^{(2)\mu}&=\epsilon^{\mu\nu\rho\sigma}\frac{p_{\nu}v^{(1)}_{\rho}b_{\sigma}}{N_2} \,,
\end{align}
\end{subequations}
with an auxiliary four-vector $n^{\mu}$. We normalize these vectors
as
\begin{equation}
v^{(1,2)}_{\mu}v^{(1,2)\mu}=-1\,,
\end{equation}
which fixes the normalization constants:
\begin{subequations}
\begin{align}
\label{eq:normalization-constant-1}
|N_1|^2&=|-p^2((n\cdot b)^2-n^2b^2) -n^2(b\cdot p)^2-b^2(p\cdot n)^2 \notag \\
&\phantom{{}={}}+2(p\cdot n)(b\cdot n)(p\cdot b)|\,, \\[2ex]
N_2^2&=D\,.
\end{align}
\end{subequations}
Both vectors of Eqs.~(\ref{eq:real-vectors}) are orthogonal to $p^{\mu}$ and $b^{\mu}$. Besides, they
satisfy
\begin{equation}
\label{eq:equation-vectors-v12}
M^{\nu\mu}v^{(1,2)}_{\mu}=Q \,v^{(1,2)\nu}\pm 2\mathrm{i}\zeta_2\sqrt{D}\,v^{(2,1)\nu}\,.
\end{equation}
We now introduce the linear combinations
\begin{equation}
v^{(\pm)}_{\mu}=\frac{1}{\sqrt 2}(v^{(2)}_{\mu}\pm\mathrm{i}v^{(1)}_{\mu})\,,
\end{equation}
that obey the properties
\begin{subequations}
\begin{align}
v^{(+)} \cdot v^{(+)}=v^{(-)}\cdot v^{(-)}&=0\,, \displaybreak[0]\\[2ex]
v^{(+)} \cdot v^{(+)*}=v^{(-)}\cdot v^{(-)*}&=-1\,.
\end{align}
\end{subequations}
Constructing suitable linear combinations of Eqs.~(\ref{eq:equation-vectors-v12}) results in
\begin{subequations}
\begin{align}
M^{\nu\mu}v^{(\pm)}_{\mu}&=\Lambda_{\pm}v^{(\pm)\nu}\,, \displaybreak[0]\\[2ex]
\label{eq:dispersion-equations-plus-minus}
\Lambda_{\pm}&=Q\pm 2 \zeta_2 \sqrt{D}\,.
\end{align}
\end{subequations}
Hence, the dispersion equation for the transverse modes reads
\begin{equation}
\Lambda_+\Lambda_-=Q^2-4\zeta_2^2D=0\,.
\end{equation}
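A convention-independent numerical cross-check of the full spectrum is the determinant of the mixed-index matrix $M^{\nu}_{\phantom{\nu}\varrho}$: being basis independent, it must equal the product $\Lambda_0\Lambda_3\Lambda_+\Lambda_-$ no matter how the transverse modes are labeled or which sign is adopted for the Levi-Civita symbol. The sketch below (plain Python, illustrative parameter values of our own, $\epsilon^{0123}=+1$ assumed) verifies this:

```python
import math

eta = [1.0, -1.0, -1.0, -1.0]

def eps(i, j, k, l):                # Levi-Civita symbol, eps(0,1,2,3) = +1 assumed
    idx = [i, j, k, l]
    if len(set(idx)) < 4:
        return 0
    sgn = 1
    for a_ in range(4):
        for c_ in range(a_ + 1, 4):
            if idx[a_] > idx[c_]:
                sgn = -sgn
    return sgn

def dot(u, v):
    return sum(eta[m] * u[m] * v[m] for m in range(4))

z1, z2, mg, xi = 0.05, 0.03, 0.1, 2.0
b, p = [1.0, 0.0, 0.0, 0.0], [2.0, 0.3, 0.4, 0.5]
p2, b2, bp = dot(p, p), dot(b, b), dot(b, p)
a = 1.0 - z1 * b2
D = bp ** 2 - p2 * b2
Q = a * p2 - mg ** 2 + 2 * z1 * bp ** 2

M = [[(a * p2 - mg ** 2 + 2 * z1 * bp ** 2) * (eta[mu] if mu == nu else 0.0)
      - (a - 1.0 / xi) * p[mu] * p[nu]
      - 2 * z1 * bp * (b[mu] * p[nu] + b[nu] * p[mu])
      + 2 * z1 * p2 * b[mu] * b[nu]
      + 2j * z2 * sum(eps(al, nu, rh, mu) * eta[al] * b[al] * eta[rh] * p[rh]
                      for al in range(4) for rh in range(4))
      for mu in range(4)]
     for nu in range(4)]

def det(A):                         # Laplace expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

A = [[M[nu][mu] * eta[mu] for mu in range(4)] for nu in range(4)]  # M^nu_rho
lam0, lam3 = p2 / xi - mg ** 2, Q - 2 * z1 * D
target = lam0 * lam3 * (Q ** 2 - 4 * z2 ** 2 * D)  # Lambda_+ Lambda_- = Q^2 - 4 z2^2 D
assert abs(det(A) - target) < 1e-9 * abs(target)
```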
In analyses performed in the past, the sum over two-tensors formed from the polarization
vectors and weighted by the dispersion equations turned out to be extremely valuable
\cite{Cov_quant}. In particular,
\begin{equation}
\label{eq:decomposition-propagator}
P_{\mu\nu}=-\sum_{\lambda,\lambda'=0,\pm,3}g_{\lambda\lambda'}\frac{v^{(\lambda)}_\mu v^{*(\lambda')}_\nu}{\Lambda_{\lambda}}\,,
\end{equation}
with the dispersion equations $\Lambda_{0,3}$ of \eqref{eq:dispersion-equations-0-3}
and $\Lambda_{\pm}$ of \eqref{eq:dispersion-equations-plus-minus}. Inserting the explicit
expressions for the basis $\{v^{(\lambda)}\}$ results in
\begin{align}
\label{eq:propagator}
P_{\mu\nu}&=-\frac{Q}{\Lambda_+\Lambda_-}\eta_{\mu\nu}-\frac{2\mathrm{i}\zeta_2}{\Lambda_+\Lambda_-}\epsilon_{\mu\alpha\nu\beta}p^{\alpha}b^{\beta} \notag \\
&\phantom{{}={}}+\left[\frac{(p\cdot b)^2}{(Q-2\zeta_1D)p^2D}-\frac{\xi}{p^2(p^2-\xi m_{\gamma}^2)}-\frac{Qb^2}{D\Lambda_+ \Lambda_-}\right]p_{\mu}p_{\nu} \notag \\
&\phantom{{}={}}+\frac{(p\cdot b)}{D}\left(\frac{Q}{\Lambda_+\Lambda_-}-\frac{1}{Q-2\zeta_1D}\right)(p_{\mu}b_{\nu}+p_{\nu}b_{\mu}) \notag \\
&\phantom{{}={}}+\frac{p^2}{D}\left(\frac{1}{Q-2\zeta_1D}-\frac{Q}{\Lambda_+\Lambda_-}\right)b_{\mu}b_{\nu}\,.
\end{align}
By computation, we showed that $P_{\mu\nu}$ is equal to the negative of the inverse of
the operator $M^{\mu\nu}$ in \eqref{eq:field-eqs-operator-momentum}:
$M^{\mu\nu}P_{\nu\varrho}=-\delta^{\mu}_{\phantom{\mu}\varrho}$. Therefore, we define
$\mathrm{i}P_{\mu\nu}$ as the modified photon propagator of the theory given by
\eqref{eq:lagrange-density-theory}. For $\zeta_1=\zeta_2=0$ and $m_{\gamma}\mapsto 0$
we observe that
\begin{equation}
\mathrm{i}P_{\mu\nu}=-\frac{\mathrm{i}}{p^2}\left[\eta_{\mu\nu}+(\xi-1)\frac{p_{\mu}p_{\nu}}{p^2}\right]\,,
\end{equation}
which is the standard result for the propagator in Fermi's theory, as expected.
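The inversion property $M^{\mu\nu}P_{\nu\varrho}=-\delta^{\mu}_{\phantom{\mu}\varrho}$ can also be checked numerically by building $P_{\mu\nu}$ directly from the polarization sum \eqref{eq:decomposition-propagator} rather than from the closed form, which makes the check insensitive to the Levi-Civita sign convention and to the labeling of the transverse modes. The sketch below uses illustrative parameter values of our own and extracts each $\Lambda_{\lambda}$ as a Rayleigh quotient:

```python
import math

eta = [1.0, -1.0, -1.0, -1.0]

def eps(i, j, k, l):                 # eps(0,1,2,3) = +1 assumed
    idx = [i, j, k, l]
    if len(set(idx)) < 4:
        return 0
    sgn = 1
    for a_ in range(4):
        for c_ in range(a_ + 1, 4):
            if idx[a_] > idx[c_]:
                sgn = -sgn
    return sgn

def dot(u, v):
    return sum(eta[m] * u[m] * v[m] for m in range(4))

z1, z2, mg, xi = 0.05, 0.03, 0.1, 2.0
b, p = [1.0, 0.0, 0.0, 0.0], [2.0, 0.3, 0.4, 0.5]
aux = [0.0, 0.0, 0.0, 1.0]           # auxiliary vector n
p2, b2, bp = dot(p, p), dot(b, b), dot(b, p)
a = 1.0 - z1 * b2
D = bp ** 2 - p2 * b2

M = [[(a * p2 - mg ** 2 + 2 * z1 * bp ** 2) * (eta[mu] if mu == nu else 0.0)
      - (a - 1.0 / xi) * p[mu] * p[nu]
      - 2 * z1 * bp * (b[mu] * p[nu] + b[nu] * p[mu])
      + 2 * z1 * p2 * b[mu] * b[nu]
      + 2j * z2 * sum(eps(al, nu, rh, mu) * eta[al] * b[al] * eta[rh] * p[rh]
                      for al in range(4) for rh in range(4))
      for mu in range(4)]
     for nu in range(4)]

def contract3(u, v, w):              # x^mu = eps^{mu nu rho sg} u_nu v_rho w_sg
    return [sum(eps(mu, nu, rh, sg)
                * eta[nu] * u[nu] * eta[rh] * v[rh] * eta[sg] * w[sg]
                for nu in range(4) for rh in range(4) for sg in range(4))
            for mu in range(4)]

def unit_spacelike(v):               # normalize to v.v = -1
    return [x / math.sqrt(-dot(v, v)) for x in v]

w1 = unit_spacelike(contract3(p, aux, b))    # v^(1)
w2 = unit_spacelike(contract3(p, w1, b))     # v^(2)
v0 = [x / math.sqrt(p2) for x in p]          # temporal mode
v3 = unit_spacelike([p2 * b[mu] - bp * p[mu] for mu in range(4)])
vp = [(w2[mu] + 1j * w1[mu]) / math.sqrt(2) for mu in range(4)]
vm = [(w2[mu] - 1j * w1[mu]) / math.sqrt(2) for mu in range(4)]

def act(vc):                         # (M v)^nu for a contravariant vector
    return [sum(M[nu][mu] * eta[mu] * vc[mu] for mu in range(4)) for nu in range(4)]

def rayleigh(vc):                    # exact eigenvalue for an exact eigenvector
    mv = act(vc)
    num = sum(eta[nu] * vc[nu].conjugate() * mv[nu] for nu in range(4))
    den = sum(eta[nu] * vc[nu].conjugate() * vc[nu] for nu in range(4))
    return num / den

lams = {lbl: rayleigh(v) for lbl, v in (("0", v0), ("+", vp), ("-", vm), ("3", v3))}

def low(vc):
    return [eta[mu] * vc[mu] for mu in range(4)]

v0l, vpl, vml, v3l = low(v0), low(vp), low(vm), low(v3)
# P_{mu nu} from Eq. (decomposition-propagator) with g = diag(1,-1,-1,-1)
P = [[-(v0l[mu] * v0l[nu] / lams["0"]
        - vpl[mu] * vpl[nu].conjugate() / lams["+"]
        - vml[mu] * vml[nu].conjugate() / lams["-"]
        - v3l[mu] * v3l[nu] / lams["3"])
      for nu in range(4)] for mu in range(4)]

MP = [[sum(M[mu][nu] * P[nu][rh] for nu in range(4)) for rh in range(4)]
      for mu in range(4)]
assert all(abs(MP[mu][rh] + (1.0 if mu == rh else 0.0)) < 1e-10
           for mu in range(4) for rh in range(4))
```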
\section{Perturbative unitarity at tree level}
\label{sec:unitarity}
Now we are interested in studying probability conservation for the theory given
by \eqref{eq:lagrange-density-theory}. It is known that the timelike sector of CFJ
theory has unitarity issues \cite{Adam} while it was also demonstrated
that a photon mass helps to perform a consistent quantization \cite{Cov_quant}.
Therefore, we would like to investigate the question whether the presence of a
photon mass can render the theory unitary. For brevity, we choose a purely timelike
background field: $(b^{\mu})=(1,0,0,0)$. Note that in this case, the theory
of \eqref{eq:lagrange-density-theory} is isotropic and we can identify
$\zeta_1$ with the isotropic {\em CPT}-even coefficient
$\tilde{\kappa}_{\mathrm{tr}}$ \cite{Kostelecky:2002hh}. For completeness, we
keep the {\em CPT}-even contributions, although they are not expected to cause unitarity
issues for small enough $\zeta_1$.
In the classical regime, a four-derivative applied to the field
equations (\ref{eq:field-equation-gauge}) provides
\begin{equation}
\left(\frac{1}{\xi}\square+m_{\gamma}^2\right)\partial\cdot A=0\,,
\end{equation}
where $\square=\partial^{\mu}\partial_{\mu}$ is the d'Alembertian.
Even if $A_{\mu}$ couples to a conserved current, $\partial\cdot A$ behaves as
a free field. It can be interpreted as a nonphysical scalar mode that exhibits
the dispersion relation
\begin{equation}
\label{eq:dispersion-relation-scalar-mode}
\omega_k^{(0)}=\sqrt{\vec{k}^{\,2}+\xi m_{\gamma}^2}\,.
\end{equation}
Therefore, in the classical approach, the St\"{u}ckelberg term proportional to
$1/\xi$ can actually be removed from the field equations. In this case, we
automatically get the subsidiary requirement $\partial\cdot A=0$ that corresponds
to the Lorenz gauge fixing condition.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{Fig1.pdf}
\caption{Electron and positron forward scattering with the dashed line representing
a cut of the diagram.}
\label{fig:Fig1}
\end{figure}
Furthermore, the modified Gauss and Amp\`{e}re law for the
electric field $\vec{E}$ and magnetic field $\vec{B}$ are obtained directly
from \eqref{eq:field-equation-gauge} and read as follows:
\begin{subequations}
\begin{align}
(1+\zeta_1)\vec{\nabla}\cdot\vec{E}+m_{\gamma}^2\phi&=0\,, \displaybreak[0]\\[2ex]
(1-\zeta_1)\vec{\nabla}\times\vec{B}-2\zeta_2\vec{B}+m_{\gamma}^2\vec{A}&=(1+\zeta_1)\partial_t\vec{E}\,,
\end{align}
\end{subequations}
where $\phi$ is the scalar and $\vec{A}$ the vector potential, respectively.
By expressing the physical fields in terms of the potentials and using the Lorenz
gauge fixing condition, the modified Gauss law can be brought into the form
\begin{equation}
\left[(1+\zeta_1)(\partial_t^2-\triangle)+m_{\gamma}^2\right]\phi=0\,,
\end{equation}
with the Laplacian $\triangle=\vec{\nabla}^2$. This massive wave
equation leads to the dispersion relation
\begin{equation}
\label{eq:dispersion-relations-longitudinal}
\omega_k^{(3)}=\sqrt{\vec{k}^{\,2}+\frac{m_{\gamma}^2}{1+\zeta_1}}\,.
\end{equation}
The associated mode is interpreted as longitudinal. Furthermore, the
modified Amp\`{e}re law yields
\begin{align}
\vec{0}&=\left[(1+\zeta_1)\partial_t^2-(1-\zeta_1)\triangle+m_{\gamma}^2\right]\vec{A}-2\zeta_1\vec{\nabla}(\vec{\nabla}\cdot\vec{A}) \notag \\
&\phantom{{}={}}-2\zeta_2\vec{\nabla}\times\vec{A}\,.
\end{align}
The latter provides the modified transverse dispersion relations
\begin{equation}
\label{eq:dispersion-relations-transverse}
\omega_k^{(\pm)}=\sqrt{\frac{(1-\zeta_1)\vec{k}^{\,2}\mp 2\zeta_2|\vec{k}|+m_{\gamma}^2}{1+\zeta_1}}\,.
\end{equation}
Note that the dispersion relations found above correspond to the poles of
the propagator that we have calculated in \eqref{eq:propagator}. These results
will be valuable in the quantum treatment to be performed as follows.
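As a sanity check (plain Python, purely illustrative parameter values of our own), one can confirm numerically that the frequencies of Eqs.~(\ref{eq:dispersion-relations-longitudinal}), (\ref{eq:dispersion-relations-transverse}) are exact zeros of $\Lambda_3$ and $\Lambda_{\pm}$ for $(b^{\mu})=(1,0,0,0)$, where $b\cdot p=k_0$ and $D=\vec{k}^{\,2}$:

```python
import math

# Illustrative coefficients and photon mass (our own choices)
z1, z2, mg = 0.04, 0.02, 0.1

def Qfun(k0, k):
    # Q = a p^2 - m^2 + 2 z1 (b.p)^2 with a = 1 - z1 for b = (1,0,0,0)
    return (1 - z1) * (k0 * k0 - k * k) - mg ** 2 + 2 * z1 * k0 * k0

def lam3(k0, k):
    return Qfun(k0, k) - 2 * z1 * k * k          # D = k^2

def lam_pm(k0, k, sgn):
    return Qfun(k0, k) + sgn * 2 * z2 * k        # sqrt(D) = |k|

def w3(k):                                       # longitudinal frequency
    return math.sqrt(k * k + mg ** 2 / (1 + z1))

def wpm(k, sgn):                                 # transverse frequencies
    return math.sqrt(((1 - z1) * k * k - sgn * 2 * z2 * k + mg ** 2) / (1 + z1))

for k in (0.0, 0.3, 1.0, 5.0):
    assert abs(lam3(w3(k), k)) < 1e-10
    assert abs(lam_pm(wpm(k, +1), k, +1)) < 1e-10
    assert abs(lam_pm(wpm(k, -1), k, -1)) < 1e-10
```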
First, we would like to construct the amplitude of a scattering process
involving two external, conserved four-currents $J_{\mu}$, $J_{\nu}^{*}$
without specifying them explicitly:
\begin{equation}
\mathcal{S}\equiv J^{\mu}(\mathrm{i}P_{\mu\nu})J^{*,\nu}\,.
\end{equation}
The latter object is sometimes called the saturated propagator in the literature
\cite{Veltman:1981,SaturatedPropagator} (cf.~\cite{Nakasone:2009bn} for an
application of this concept in the context of massive gravity in $(1+2)$ dimensions).
Inserting the decomposition (\ref{eq:decomposition-propagator}) of the propagator in terms of
polarization vectors, we obtain:
\begin{equation}
\mathcal{S}=-\mathrm{i}\sum_{\lambda=\pm,3} \frac{|J\cdot v^{(\lambda)}|^2}{\Lambda_{\lambda}}\,,
\end{equation}
where the mode labeled with $\lambda=0$ is eliminated because of current conservation: $p\cdot J=0$.
To guarantee the validity of unitarity, the imaginary part of the residues of $\mathcal{S}$
evaluated at the positive poles in $p_0$ should be nonnegative. The numerator of the latter expression
is manifestly nonnegative. Hence, the outcome only depends on the pole structure of $\mathcal{S}$.
For the case of a purely timelike $b_{\mu}$, which is the interesting one to study, we obtain
\begin{equation}
\label{eq:imaginary-part-residues}
\mathrm{Im}[\mathrm{Res}(\mathcal{S})|_{k_0=\omega_k^{(\lambda)}}]=\frac{1}{2(1+\zeta_1)}\frac{|J\cdot v^{(\lambda)}|^2}{\omega_k^{(\lambda)}}\,,
\end{equation}
for each one of the physical modes $\lambda=\pm,3$ with dispersion
relations~(\ref{eq:dispersion-relations-longitudinal}),
(\ref{eq:dispersion-relations-transverse}).
For perturbative $\zeta_1$, $\zeta_2$, and $m_{\gamma}>|\zeta_2|/\sqrt{1-\zeta_1}$ it
holds that $\omega_k^{(\lambda)}>0$ for $\lambda=\pm,3$. In this case, the right-hand
side of \eqref{eq:imaginary-part-residues} is nonnegative. This result is already an
indication for the validity of unitarity.
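The mass bound quoted above follows from minimizing the radicand of $\omega_k^{(\pm)}$: the minimum of $(1-\zeta_1)\vec{k}^{\,2}-2|\zeta_2||\vec{k}|+m_{\gamma}^2$ sits at $|\vec{k}|=|\zeta_2|/(1-\zeta_1)$ and equals $m_{\gamma}^2-\zeta_2^2/(1-\zeta_1)$, so reality for all momenta is equivalent to $m_{\gamma}>|\zeta_2|/\sqrt{1-\zeta_1}$. A small numerical illustration (our own illustrative values):

```python
# Reality of the transverse frequencies for all |k| requires a photon mass
# above |z2|/sqrt(1 - z1); illustrative values of our own below.
z1, z2 = 0.05, 0.03
bound = abs(z2) / (1 - z1) ** 0.5

def min_radicand(m):
    # analytic minimum over k >= 0 of (1 - z1) k^2 - 2|z2| k + m^2
    return m * m - z2 * z2 / (1 - z1)

assert min_radicand(1.5 * bound) > 0   # above the bound: omega^(+-) real for every k
assert min_radicand(0.5 * bound) < 0   # below it: the dispersion relation turns imaginary

# brute-force scan agrees with the analytic minimum
m = 1.2 * bound
scan = min((1 - z1) * k * k - 2 * abs(z2) * k + m * m
           for k in [i * 1e-3 for i in range(2000)])
assert abs(scan - min_radicand(m)) < 1e-5
```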
We see how a decomposition of the form of \eqref{eq:decomposition-propagator}
leads to a very elegant study of the saturated propagator without the need of
considering explicit configurations of the external currents such as done in the
past \cite{SaturatedPropagator}.
To test unitarity more rigorously, we couple \eqref{eq:lagrange-density-theory}
to standard Dirac fermions and consider the theory
\begin{subequations}
\begin{align}
\mathcal{L}&=\mathcal{L}_{\gamma}+\mathcal{L}_{\psi,\gamma}\,, \\[2ex]
\mathcal{L}_{\psi,\gamma}&=\overline{\psi}[\gamma^{\mu}(\mathrm{i}\partial_{\mu}-eA_{\mu})-m_{\psi}]\psi\,.
\end{align}
\end{subequations}
Here, $e$ is the elementary charge, $m_{\psi}$ the fermion mass, $\gamma^{\mu}$ are
the standard Dirac matrices satisfying the Clifford algebra $\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}$,
$\psi$ is a Dirac spinor, and $\overline{\psi}=\psi^{\dagger}\gamma^0$ its Dirac conjugate.
We intend to check the validity of the optical theorem for the tree-level forward scattering
amplitude of the particular electron-positron scattering process in \figref{fig:Fig1}.
The amplitude ${\cal M}_F$ associated with the corresponding Feynman graph is
\begin{align}
\label{Amplitud}
\mathrm{i}{\mathcal M}_F&=\bar{v}^{(r)}(p_{2})(-\mathrm{i}e\gamma^{\mu})u^{(s)}(p_{1})(\mathrm{i}P^F_{\mu \nu }(k)) \notag \\
&\phantom{{}={}}\times\bar{u}^{(s)}(p_{1})(-\mathrm{i}e\gamma^{\nu})v^{(r)}(p_{2})\,,
\end{align}
with particle spinors $u^{(s)}$ and antiparticle spinors $v^{(s)}$ of spin projection $s$.
The momentum of the internal photon line is $k=p_1+p_2$. Furthermore,
$P^F_{\mu\nu}(k)$ is the Feynman propagator obtained from \eqref{eq:decomposition-propagator}
by employing the usual prescription $k^2\mapsto k^2+\mathrm{i}\epsilon$. Let us define the
four-currents
\begin{subequations}
\label{Currents}
\begin{align}
J^{\mu}&\equiv\bar{v}^{(r)}(p_{2})\gamma ^{\mu}u^{(s)}(p_{1})\,, \displaybreak[0]\\[2ex]
J^{\ast\mu}&\equiv\bar{u}^{(s)}(p_{1})\gamma ^{\mu}v^{(r)}(p_{2})\,.
\end{align}
\end{subequations}
Due to current conservation at the ingoing and outgoing vertices, $p\cdot J=p\cdot J^{*}=0$.
Introducing an integral and a $\delta$ function for momentum conservation, we can write the
forward scattering amplitude as
\begin{equation}
\label{eq:forward-scattering-amplitude-general}
{\mathcal M}_F=-e^2 J^{\mu }J^{\ast\nu}\int\frac{\mathrm{d}^4k}{(2\pi)^4}\,P^F_{\mu \nu}(k)(2\pi)^4\delta^{(4)}(k-p_1-p_2)\,.
\end{equation}
Now we insert \eqref{eq:decomposition-propagator} and decompose the denominator in terms of
the poles as follows:
\begin{align}
\label{eq:amplitude-propagator-inserted}
\mathcal{M}_F&=-e^2 J^{\mu}J^{\ast\nu}\int\frac{\mathrm{d}^4k}{(2\pi)^4}\,\sum_{\lambda,\lambda'=0,\pm,3}\frac{-g_{\lambda\lambda'}}{1+\zeta_1} \notag \\
&\phantom{{}={}}\times\frac{v_{\mu}^{(\lambda)}v_{\nu}^{*(\lambda')}}{(k_0-\omega_k^{(\lambda)}+\mathrm{i}\epsilon)(k_0+\omega_k^{(\lambda)}-\mathrm{i}\epsilon)} \notag \\
&\phantom{{}={}}\times(2\pi)^4\delta^{(4)}(k-p_1-p_2)\,,
\end{align}
with the dispersion relation~(\ref{eq:dispersion-relation-scalar-mode}) for the unphysical
scalar mode, \eqref{eq:dispersion-relations-longitudinal} for the massive mode, and those
of \eqref{eq:dispersion-relations-transverse} for the transverse modes.
Since the zeroth mode points along the direction of the four-momentum, we are
left with the sum over $\lambda=\pm,3$. We then employ
\begin{align}
&\frac{1}{(k_0-\omega_k^{(\lambda)}+\mathrm{i}\epsilon)(k_0+\omega_k^{(\lambda)}-\mathrm{i}\epsilon)} \notag \\
&\hspace{1cm}=\frac{1}{2\omega_k^{(\lambda)}}\left[\frac{1}{k_0-\omega_k^{(\lambda)}+\mathrm{i}\epsilon}-\frac{1}{k_0+\omega_k^{(\lambda)}-\mathrm{i}\epsilon}\right]\,.
\end{align}
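The partial-fraction decomposition just used is an exact algebraic identity when the shifted pole position $\omega_k^{(\lambda)}-\mathrm{i}\epsilon$ is kept in the prefactor as well; the $1/(2\omega_k^{(\lambda)})$ written above is its $\epsilon\mapsto 0$ limit. A quick numerical check with illustrative numbers of our own:

```python
# a = w - i*eps is the shifted pole position of the Feynman-prescribed denominator
w, ep = 1.7, 1e-3
a = w - 1j * ep
for k0 in (0.3 + 0.2j, -1.1 + 0.05j, 2.5 - 0.4j):
    lhs = 1 / ((k0 - a) * (k0 + a))
    rhs = (1 / (2 * a)) * (1 / (k0 - a) - 1 / (k0 + a))
    assert abs(lhs - rhs) < 1e-12              # exact identity with prefactor 1/(2a)
    # with the text's 1/(2*w) prefactor the identity holds up to O(eps)
    approx = (1 / (2 * w)) * (1 / (k0 - a) - 1 / (k0 + a))
    assert abs(lhs - approx) < 5 * ep
```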
Taking the imaginary part of~\eqref{eq:amplitude-propagator-inserted} by using the general
relation
\begin{equation}
\lim_{\epsilon\mapsto 0^+}\frac{1}{x\pm\mathrm{i}\epsilon}=\mathcal{P}\left(\frac{1}{x}\right)\mp\mathrm{i}\pi\delta(x)\,,
\end{equation}
with the principal value $\mathcal{P}$, results in
\begin{align}
\label{eq:imaginary-part-forward-scattering}
2\text{Im}({\mathcal M}_F)&=e^2\int\frac{\mathrm{d}^4k}{(2\pi)^4}\sum_{\lambda=\pm,3}|J\cdot v^{(\lambda)}|^2\frac{2\pi}{(1+\zeta_1)2\omega_k^{(\lambda)}} \notag \\
&\phantom{{}={}}\times\delta(k_0-\omega_k^{(\lambda)})(2\pi)^4\delta^{(4)}(k-p_1-p_2) \notag \\
&=e^2\int\frac{\mathrm{d}^4k}{(2\pi)^4}\sum_{\lambda=\pm,3}|J\cdot v^{(\lambda)}|^2 \notag \\
&\phantom{{}={}}\times(2\pi)^4\delta^{(4)}(k-p_1-p_2)(2\pi)\delta(\Lambda_{\lambda})\,.
\end{align}
In the final step we exploited that
\begin{align}
\delta(\Lambda_{\lambda})&=\delta\left[(1+\zeta_1)(k_0-\omega_k^{(\lambda)})(k_0+\omega_k^{(\lambda)})\right] \notag \\
&=\frac{1}{(1+\zeta_1)2\omega_k^{(\lambda)}}\left[\delta(k_0-\omega_k^{(\lambda)})+\delta(k_0+\omega_k^{(\lambda)})\right]\,,
\end{align}
for each $\lambda$.
The negative-energy counterparts $k_0=-\omega_k^{(\lambda)}$ do not contribute due to energy-momentum conservation.
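For the purely timelike $b_{\mu}$ each $\Lambda_{\lambda}$ is indeed a quadratic polynomial in $k_0$ with the factorized form used above, which can be verified numerically (plain Python, illustrative values of our own):

```python
import math

# Check Lambda_lambda = (1 + z1)(k0 - omega)(k0 + omega) for b = (1,0,0,0)
z1, z2, mg = 0.04, 0.02, 0.1

def lam(k0, k, sgn):          # sgn = +-1: transverse modes; sgn = 0: lambda = 3
    Q = (1 - z1) * (k0 * k0 - k * k) - mg ** 2 + 2 * z1 * k0 * k0
    return Q - 2 * z1 * k * k if sgn == 0 else Q + sgn * 2 * z2 * k

def omega(k, sgn):
    if sgn == 0:
        return math.sqrt(k * k + mg ** 2 / (1 + z1))
    return math.sqrt(((1 - z1) * k * k - sgn * 2 * z2 * k + mg ** 2) / (1 + z1))

for k in (0.2, 1.0, 3.0):
    for sgn in (0, +1, -1):
        w = omega(k, sgn)
        for k0 in (-1.3, 0.4, 2.2):
            assert abs(lam(k0, k, sgn) - (1 + z1) * (k0 - w) * (k0 + w)) < 1e-10
```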
The right-hand side of \eqref{eq:imaginary-part-forward-scattering} corresponds to the total
cross section of $\mathrm{e^+e^-}\rightarrow \upgamma$ with both the transverse photon modes and the massive
mode contributing. Also, it is nonnegative under the conditions stated below \eqref{eq:imaginary-part-residues}.
Therefore, we conclude that the optical theorem at tree level and, hence, unitarity are valid
for the theory defined by \eqref{eq:lagrange-density-theory}
as long as the photon mass is large enough. Note that the latter computation is generalized to an arbitrary
timelike frame by computing the imaginary part of \eqref{eq:forward-scattering-amplitude-general}
directly with
\begin{equation}
\mathrm{Im}\left(\frac{1}{\Lambda_{\lambda}+\mathrm{i}\epsilon}\right)=-\pi\delta(\Lambda_{\lambda})\,,
\end{equation}
employed. This result means that the decay rate or cross section of a particular process at tree
level can be safely obtained in the context of massive CFJ theory where problems are not expected
to occur (cf.~\cite{Colladay:2016rmy} for the particular example of Cherenkov-like radiation in
\textit{vacuo}).
\section{Application to electroweak sector}
\label{sec:application-electroweak}
Several CFJ-like terms are included in the electroweak sector of the SME before spontaneous
symmetry breaking. We consider the Abelian contribution that is given by \cite{Colladay}
\begin{equation}
\label{eq:cfj-electroweak}
\mathcal{L}_{\mathrm{gauge}}^{\mathrm{CPT-odd}}\supset\mathcal{L}_B\,,\quad \mathcal{L}_B=(k_1)_{\kappa}\varepsilon^{\kappa\lambda\mu\nu}B_{\lambda}B_{\mu\nu}\,,
\end{equation}
with the $\mathit{U}_Y(1)$ gauge field $B_{\mu}$, the associated field strength tensor $B_{\mu\nu}$,
and the controlling coefficients $(k_1)_{\kappa}$. After spontaneous symmetry breaking
$\mathit{SU}_L(2)\otimes\mathit{U}_Y(1)\mapsto \mathit{U}_{\mathrm{em}}(1)$, the field $B_{\mu}$ is
interpreted as a linear combination of the photon field $A_{\mu}$ and the Z boson field $Z_{\mu}$.
The corresponding field strength tensor reads
\begin{equation}
B_{\mu\nu}=F_{\mu\nu}\cos\theta_w-Z_{\mu\nu}\sin\theta_w\,,
\end{equation}
where $\theta_w$ is the Weinberg angle and $Z_{\mu\nu}$ the field strength tensor associated with the
Z boson. Hence, the Lorentz structure of the field operator in $\mathcal{L}_B$ of
\eqref{eq:cfj-electroweak} after spontaneous symmetry breaking has the form
\begin{align}
B_{\lambda}B_{\mu\nu}&=A_{\lambda}F_{\mu\nu}\cos^2\theta_w-(A_{\lambda}Z_{\mu\nu}+Z_{\lambda}F_{\mu\nu})\sin\theta_w\cos\theta_w \notag \\
&\phantom{{}={}}+Z_{\lambda}Z_{\mu\nu}\sin^2\theta_w\,.
\end{align}
Therefore, CFJ-like terms are induced for the massive Z boson as well as for the photon. Our analysis
shows that unitarity issues are prevented in the Z sector due to the mass of this boson that emerges
via the coupling of $Z_{\mu}$ to the vacuum expectation value of the Higgs field. In the photon sector,
a mass has to be added by hand, though.
\section{Conclusions}
\label{sec:conclusions}
In this work, we considered both {\em CPT}-even and {\em CPT}-odd Lorentz-violating modifications for
photons that were constructed from a single preferred spacetime direction. The timelike
sector of the {\em CPT}-odd CFJ electrodynamics is known to exhibit issues with unitarity in the
infrared regime. Therefore, our intention was to find out whether the inclusion of a photon mass can
mitigate these effects.
To perform the analysis, we derived the modified propagator of the theory and decomposed it into a
sum of polarization tensors weighted by the dispersion equations for each photon mode. To get a
preliminary idea on the validity of unitarity, we contracted this propagator with general conserved
currents. The imaginary part of the residue evaluated at the positive poles was found to be nonnegative
as long as the modified dispersion relations for each mode stays real. This property is guaranteed by
the presence of a sufficiently large photon mass. A second more thorough check involved the evaluation
of the optical
theorem for a particular tree-level process. The optical theorem was found to be valid for the same
conditions encountered previously. It is clear how unitarity issues arise in the limit of a vanishing
photon mass when the dispersion relations of certain modes can take imaginary values.
In general, it is challenging to decide whether the imaginary part of the residue of the propagator
contracted with conserved currents is positive for arbitrary preferred directions
and currents. We emphasize that the decomposition of the propagator into polarization tensors allows
for a quite elegant proof of this property independently of particular choices for background fields
and currents. Similar relations are expected to be valuable for showing unitarity in alternative
frameworks.
Hence, we conclude that CFJ electrodynamics is, indeed, unitary when a photon mass is included into
the theory. This finding clearly demonstrates how a mild violation of gauge invariance is capable of
solving certain theoretical issues. Note that unitarity issues are prevented automatically for a
CFJ-like term in the Z-boson sector where the Z boson mass is generated via the Higgs mechanism.
\section*{Acknowledgments}
M.M.F. is grateful to FAPEMA Universal 00880/15, FAPEMA PRONEX 01452/14, and CNPq Produtividade 308933/2015-0.
C.M.R. acknowledges support from Fondecyt Regular project No.~1191553, Chile. M.S. is indebted to FAPEMA
Universal 01149/17, CNPq Universal 421566/2016-7, and CNPq Produtividade 312201/2018-4.
Furthermore, we thank M. Maniatis for suggesting an investigation of the CFJ-like term in the electroweak
sector of the SME.
\section{Introduction}
\label{sec_intro}
It is fairly well established that quantum electrodynamics (QED), and
in particular quenched QED, breaks chiral symmetry for sufficiently
large couplings. This phenomenon has been observed both in
lattice simulations~\cite{Lat1} as well as
various studies based on the use of Dyson-Schwinger
equations~\cite{FGMS,MiranskReview,TheReview}. These latter
calculations have generally relied on the use of a cut-off in
euclidean momentum in order to regulate divergent integrals, a
procedure which breaks the gauge invariance of the theory.
On the other hand, continuation of gauge theories to $D < 4$
dimensions has long been used as an efficient way to regularize
perturbation theory without violating gauge invariance. In
nonperturbative calculations, however, this method of
regularization is rarely used~\cite{eff_the}. Within the context of the
Dyson-Schwinger equations (DSEs) only a few
publications~\cite{dim_reg,dim_reg_2} have employed dimensional
regularization instead of the usual momentum cut-off.
It is the purpose of the present paper to study dynamical chiral
symmetry breaking and the chiral limit within dimensionally
regularized quenched QED. We are motivated to do this by the wish to avoid
some gauge ambiguities occurring in cut-off based work, which we
discuss in Sec.~\ref{sec_motivation}. In that Section we also outline
some general results which one expects to be valid for $D < 4$,
independently of the particular vertex which one uses as an input to
the DSEs. Having done this we proceed, in Section~\ref{sec:
rainbow}, with a study of chiral symmetry breaking in the popular, but
gauge non-covariant, rainbow approximation. Just as in cut-off
regularized work, the rainbow approximation provides a very good
qualitative guide to what to expect for more realistic vertices and
has the considerable advantage that, with certain additional
approximations, one may obtain analytical results. We check
numerically that the additional approximations made are in fact quite
justified. Indeed, it is very fortunate that it is possible to obtain
this analytic insight into the pattern of chiral symmetry breaking in
$D$ dimensions as it provides us with a well defined procedure for
extracting the critical coupling of the 4 dimensional theory with
more complicated vertices. We proceed to the Curtis-Pennington (CP)
vertex in Section~\ref{sec: cp}. There we derive, for solutions which do
not break chiral symmetry, an integral representation for the
exact wavefunction renormalization function ${\cal Z}$ in $D$ dimensions.
We also provide an approximate, but explicit, expression for this quantity.
The latter is quite useful, in the ultraviolet region, even if dynamical chiral symmetry
breaking takes place as it provides a welcome
check for the numerical investigation of chiral symmetry breaking with
the CP vertex with which we conclude that section. Finally, in
Section~\ref{sec: conclusion}, we summarize our results and conclude.
\section{Motivation and general considerations}
\label{sec_motivation}
Although chiral symmetry breaking appears to be
universally observed independently of the precise nature of the vertex used
in DSE studies, it has also been recognized for a long time that the
critical couplings with almost all\footnote{Some vertex Ans\"atze
exist which lead to critical couplings which are strictly gauge
independent~\protect\cite{Kondo,BP1,BP2}. However, these involve
either vertices which have unphysical singularities or ensure
gauge independence of the critical coupling by explicit construction.} of
these vertices show a gauge dependence which should not be
present for a physical quantity. With a bare vertex this is not
surprising as this vertex Ansatz breaks the Ward-Takahashi
identity. However, even with the Curtis-Pennington (CP) vertex,
which does not violate this identity and additionally is constrained
by the requirement of perturbative multiplicative renormalizability, a
residual gauge dependence remains~\cite{CPIV,ABGPR}.
Apart from possible deficiencies of the vertex, which we do not
investigate in this paper, the use of cut-off regularization
explicitly breaks the gauge symmetry even as the cut-off is taken to
infinity. This is well known in perturbation theory (see, for example, the
discussion of the axial anomaly in Sect.~19.2 of Ref.~\cite{peskin})
and was pointed out by Roberts and
collaborators~\cite{dongroberts} in the present context. The latter
authors proposed a prescription for dealing with this ambiguity which
ensures that the regularization does not violate the Ward-Takahashi identity.
As may be observed in Fig.~\ref{fig: cut-off alpha_cr},
this ambiguity has a strong effect on the value of the critical coupling
of the theory. The two curves in that figure correspond to the
critical coupling $\alpha_c$ of Ref.~\cite{ABGPR} as well as the coupling
$\alpha_c'$ one obtains by following the prescription of Roberts et al.
It is straightforward to show, following the analysis of Ref.~\cite{ABGPR},
that these couplings are related through
\begin{equation}
\alpha_{c}' \> = \> { \alpha_{c} \over 1 +
{\xi \> \alpha_{c} \over 8 \pi}}\;\;\;.
\end{equation}
Also plotted in this figure are previously published numerical
results~\cite{CPIV,qed4_hrw} obtained with both of the above prescriptions.
Note that, curiously, the critical couplings obtained with the
prescription of Ref.~\cite{dongroberts} (i.e. the calculation which restores
the Ward-Takahashi identity) exhibits a stronger gauge
dependence, at least for the range of gauge parameters shown in
Fig.~\ref{fig: cut-off alpha_cr}.
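The relation between the two prescriptions is easily illustrated numerically. In the sketch below (plain Python; the Landau-gauge rainbow value $\pi/3$ is used purely for orientation, since the actual curves in the figure are vertex dependent), one sees that both prescriptions coincide at $\xi=0$ and that $\alpha_c'$ is monotonically suppressed relative to $\alpha_c$ for $\xi>0$:

```python
import math

def alpha_c_prime(alpha_c, xi):
    # critical coupling in the prescription of Roberts et al., given alpha_c
    return alpha_c / (1 + xi * alpha_c / (8 * math.pi))

a0 = math.pi / 3                                  # orientation value only
assert alpha_c_prime(a0, 0.0) == a0               # prescriptions agree in Landau gauge
vals = [alpha_c_prime(a0, xi) for xi in (0.0, 1.0, 3.0)]
assert vals[0] > vals[1] > vals[2]                # suppression grows with xi > 0
```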
Gauge ambiguities such as the one outlined above are absent if one does not
break the gauge invariance of the theory through the regularization
procedure. Hence, we now turn to dimensionally regularized (quenched)
QED. The Minkowski space fermion propagator $S(p)$ is defined in the usual
way through
the dimensionless wavefunction renormalization function ${\cal Z}(p^2)$
and the dimensionful mass function $M(p^2)$, i.e.
\begin{equation}
S(p) \> \equiv \> {{\cal Z}(p^2) \over {p \hspace{-5pt}/} - M(p^2)}\;\;\;.
\end{equation}
The dependence of ${\cal Z}$ and $M$
on the dimensionality of the space is not explicitly
indicated here. Furthermore, note that
to a large extent we shall be dealing only
with the regularized theory without imposing a renormalization procedure, as
renormalization~\cite{qed4_hrw,renorm} is inessential to our discussion.
In addition to the above, we shall consider the theory without explicit chiral
symmetry
breaking (i.e. zero bare mass). This theory would not contain a mass
scale were it not for the usual arbitrary scale (which we
denote by $\nu$) introduced in $D=4 - 2 \epsilon$ dimensions
which provides the connection between
the {\it dimensionful} coupling $\alpha_D$ and the usual
{\it dimensionless} coupling
constant $\alpha=e^2/4\pi$:
\begin{equation}
\alpha_D \> = \> \alpha \> \nu^{2 \epsilon}\;\;\;.
\end{equation}
As $\nu$ is {\it the only} mass scale in
the problem, and as the coupling always appears in the above combination with
this scale, on dimensional grounds alone the mass function must be of the form
\begin{equation}
M(p^2) \> = \> \nu \> \alpha^{1 \over 2 \epsilon} \> \tilde M
\left( {p^2 \over \nu^2 \alpha^{1 \over \epsilon} },\epsilon \right)
\end{equation}
where $\tilde M$ is a dimensionless function and in
particular
\begin{equation}
M(0) \> = \> \nu \> \alpha^{1 \over 2 \epsilon} \>
\tilde M \left(0,\epsilon \right)\;\;\;.
\label{eq: M(0) general form}
\end{equation}
Moreover, as $\epsilon$ goes to zero the $\nu$ dependence on the right hand
side must disappear and hence the dynamical mass $M(0)$ is either
zero (i.e. no symmetry breaking) or goes to infinity in this limit.
This situation is analogous to what happens in cut-off regularized theory,
where the scale parameter is the cut-off itself and the mass is proportional
to it.
Note that $\tilde M(0,\epsilon)$ is not dependent on $\alpha$. This
implies immediately that there can be no non-zero critical coupling
in $D \ne 4$ dimensions: if $M(0)$ is non-zero for some coupling
$\alpha$ then it must be non-zero for all couplings.
Given these general considerations (which are of course independent of
the particular Ansatz for the vertex) it behooves one to ask how this
situation can be reconciled with a critical coupling $\alpha_c$ of order 1 in
four dimensions. In order to see how this might arise, we shall extract
a convenient numerical factor out of $\tilde M$ and suggestively re-write
the dynamical mass as
\begin{equation}
M(0) \> = \> \nu \> \left({\alpha \over \alpha_c}\right)^{1 \over 2 \epsilon} \>
\overline M \left(0,\epsilon \right)\;\;\;.
\label{eq: M(0) general form 2}
\end{equation}
At present there is no difference in content between
Eq.~(\ref{eq: M(0) general form}) and Eq.~(\ref{eq: M(0) general form 2}).
However, if we now {\it define} $\alpha_c$ by demanding that
the behaviour of $M(0)$ is
dominated by the factor $\left({\alpha \over \alpha_c}\right)^{1/2\epsilon}$
as $\epsilon$ goes to zero, which is equivalent to demanding that
\begin{equation}
[\overline M \left(0,\epsilon \right)]^\epsilon \>
\longrightarrow_{_{_{_{\hspace{-7mm}
{{ \epsilon\rightarrow}{ 0}}}}}}
\>\> 1 \;\;\;,
\label{eq: m_overbar def}
\end{equation}
then the intent becomes clear: even though $M(0)$ may be nonzero for
all couplings in $D < 4$ dimensions, in the limit that $\epsilon $ goes
to zero we obtain
\begin{eqnarray}
M(0)\>
\longrightarrow_{_{_{_{\hspace{-7mm}
{{ \epsilon\rightarrow}{ 0}}}}}}
\> \> 0
&& \hspace{2cm} \alpha \> < \alpha_c \\ \nonumber
M(0)\>
\longrightarrow_{_{_{_{\hspace{-7mm}
{{ \epsilon\rightarrow}{ 0}}}}}}\> \>\infty
&& \hspace{2cm} \alpha \> > \alpha_c \;\;\;. \nonumber
\end{eqnarray}
Note that in the above we have not addressed the issue of
whether or not there actually is a
$\overline M\left(0,\epsilon \right)$ with the property
of Eq.~(\ref{eq: m_overbar def}). In fact, the numerical
and analytical work in the following sections is
largely concerned with finding this function and hence determining
whether or not chiral symmetry is indeed broken for
$D < 4$.\footnote{The reader will note that as neither
$\tilde M(0,\epsilon)$ nor $\overline M\left(0,\epsilon \right)$ is a function
of the coupling $\alpha$, the value of $\alpha_c$ can be determined independently
of the strength $\alpha$ of the self-interactions in $D<4$ dimensions.}
Notwithstanding this, as one knows from cut-off based work
that there actually is a non-zero
critical coupling for $D=4$, one can at this stage already come
to the conclusion that $\overline M\left(0,\epsilon \right)$ exists and
hence that quenched QED in $D < 4$ dimensions has a chiral symmetry
breaking solution for all couplings.
In summary, as the trivial solution $M(p^2)=0$ always exists as well,
we see that in $D<4$ dimensions the trivial and symmetry breaking solutions
bifurcate at $\alpha=0$ while for $D=4$ the point of bifurcation is at
$\alpha=\alpha_c$; i.e., there is a discontinuous change in the
point of bifurcation. As $D$ approaches four (i.e. as $\epsilon$ approaches $0$)
the generated mass $M(0)$ decreases (grows) roughly like $\left (\alpha/
\alpha_c \right )^{1/2\epsilon}$ for $\alpha$ $\lesssim$ $\alpha_c$
($\gtrsim$ $\alpha_c$),
respectively, becoming an infinite step function at $\alpha=\alpha_c$
when $\epsilon$
goes to zero.
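The step-function behaviour described above is easy to see numerically. The following sketch (Python; the value $\alpha_c = 1$ is purely illustrative, as the actual critical coupling depends on the vertex) evaluates the leading factor $\left(\alpha/\alpha_c\right)^{1/2\epsilon}$ while ignoring the subleading factor $\overline M(0,\epsilon)$:

```python
import math

def leading_mass_factor(alpha, alpha_c, eps):
    """Leading epsilon-dependence of M(0)/nu, i.e. (alpha/alpha_c)**(1/(2*eps)),
    with the subleading factor Mbar(0, eps) ignored."""
    return (alpha / alpha_c) ** (1.0 / (2.0 * eps))

alpha_c = 1.0  # illustrative value only
for eps in (0.1, 0.01, 0.001):
    below = leading_mass_factor(0.9 * alpha_c, alpha_c, eps)
    above = leading_mass_factor(1.1 * alpha_c, alpha_c, eps)
    print(f"eps={eps}: below alpha_c -> {below:.3e}, above alpha_c -> {above:.3e}")
```

As $\epsilon$ decreases the factor tends to $0$ for $\alpha < \alpha_c$ and grows without bound for $\alpha > \alpha_c$, i.e. it becomes the infinite step function at $\alpha = \alpha_c$ described above.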
\section{The rainbow approximation}
\label{sec: rainbow}
Let us now consider an explicit vertex.
To begin with, we consider the rainbow approximation to the
Euclidean mass function of quenched QED with zero bare mass
in Landau gauge. It is given by
\begin{equation}
M(p^2)=(e \nu^{\epsilon})^2(3 - 2 \epsilon)\int \;\frac{d^Dk}{(2\pi)^D}\;
\frac{M(k^2)}
{k^2+M^2(k^2)}\frac{1}{(p-k)^2}\;\;\;.
\label{masseq}
\end{equation}
Note that
the Dirac part of the self-energy is equal to zero in the Landau gauge
in rainbow approximation even in $D < 4$ dimensions and hence that
${\cal Z}(p^2)=1$ for all $p^2$.
It is of course possible to find the solution to Eq.~(\ref{masseq})
numerically -- indeed we shall do so -- however it is far more instructive
to first try to make some reasonable approximations in order to be able
to analyze it analytically.
First, as the angular integrals involved
in $D$-dimensional integration are standard (see, for example,
Refs.~\cite{dim_reg_2} and~\cite{Muta}) we may reduce Eq.~(\ref{masseq})
to a one-dimensional integral, namely
\begin{eqnarray}
M(p^2)&=&\alpha \nu^{2 \epsilon}c_{\epsilon} \>\int
\limits_0^\infty \frac{dk^2 (k^2)^
{1-\epsilon}M(k^2)}{k^2+M^2(k^2)}\left[{1\over p^2}F\left(1,\epsilon;
2 - \epsilon;\frac{k^2}{p^2}\right)\theta(p^2-k^2)\right.\nonumber\\
&+&\left.{1\over k^2}F\left(1,\epsilon;2-\epsilon;\frac{p^2}{k^2}\right)
\theta(k^2-p^2)\right],
\label{inteq}
\end{eqnarray}
where
\begin{equation}
c_{\epsilon}= \frac{3 - 2 \epsilon}{(4\pi)^{1-\epsilon}\Gamma(2-\epsilon)},
\quad (c_0=\frac{3}{4\pi}).
\end{equation}
Note that for $D=4$ the mass function in Eq.~(\ref{inteq}) reduces to the standard one in
QED$_4$.
In $D\ne 4$ dimensions the hypergeometric functions in Eq.~(\ref{inteq})
preclude a solution in closed form. However, note that these
hypergeometric functions have a power expansion in $\epsilon$ so that
for small $\epsilon$ one is not likely to go too far wrong by just replacing
these by their $\epsilon = 0$ (i.e. $D=4$) limit. After all, the reason
for choosing dimensional regularization in the first place is in order
to regulate the integral, and this is achieved by the factor of
$k^{- 2 \epsilon}$, not the hypergeometric functions. In addition, this
approximation also corresponds to just replacing the hypergeometric functions
by their IR and UV limits, so that one might expect that even for larger
$\epsilon$ the approximation is not too bad in these regions
\footnote{It is however possible to show
that a linearized version of Eq.~(\ref{masseq}) always
has symmetry breaking solutions even without making this approximation
of the angular integrals. We indicate how this may be done in
Appendix A.}.
Making this replacement, i.e.
\begin{equation}
M(p^2)=\alpha \nu^{2 \epsilon}c_{\epsilon}\left[{1\over p^2}\int\limits_0^{p^2}\frac{dk^2 \>
(k^2)^{1 - \epsilon} M(k^2)}{k^2+M^2(k^2)}+\int\limits_{p^2}^\infty\frac{dk^2
\>(k^2)^{-\epsilon} M(k^2)}{k^2+M^2(k^2)}\right]\;\;\;,
\label{inteqxa}
\end{equation}
allows us to convert Eq.~(\ref{inteq}) into a differential equation, namely
\begin{equation}
\left[p^4 M^\prime(p^2)\right]^\prime+\alpha \nu^{2 \epsilon}c_{\epsilon}\frac{(p^2)^{1 -
\epsilon}}{p^2+M^2(p^2)}M(p^2)=0\;\;\;,
\label{diffeq}
\end{equation}
with the boundary conditions
\begin{equation} p^4 M^\prime(p^2)|_{p^2=0}=0,\qquad \left[p^2 M(p^2)\right]^\prime|_{p^2=
\infty}=0\;\;\;.
\end{equation}
Unfortunately, the differential equation (\ref{diffeq}) still has no
solutions in terms of known special functions. Since the mass function in the
denominator of Eq.~(\ref{inteqxa}) serves primarily as an infrared regulator
we shall make one last approximation and replace it by an infrared
cut-off for the integral, which can be taken as a fixed value of
$M^2(k^2)$ in the infrared region (for convenience we shall call this value
the `dynamical mass' $m$). This simplifies the problem sufficiently to allow
the derivation of an analytical solution.
In terms of the dimensionless variables $x=p^2/\nu^2$, $y=k^2/\nu^2$ and $a=m^2/
\nu^2$ the linearized equation becomes
\begin{equation}
M(x)=\alpha c_{\epsilon}\left[{1\over x}\int\limits_a^x\frac{dy \> y^{1 - \epsilon}
M(y)}{y}+\int\limits_x^\infty\frac{dy \>y^{-\epsilon} M(y)}{y}\right]\;\;\;;
\label{inteqx}
\end{equation}
for simplicity, we do not explicitly
differentiate between $M(x)$ and $M(p^2)$.
This may be written in differential form as
\begin{equation}
\left[x^2 M^\prime(x)\right]^\prime\> +\> \alpha \>c_{\epsilon} \>x^{-\epsilon} M(x)=0
\;\;\;,
\label{diffeqsimple}
\end{equation}
with the boundary conditions
\begin{equation}
M^\prime(x)|_{x=a}=0,\qquad \left[x M(x)\right]^\prime|_{x=\infty}=0\;\;\;.
\label{BC}
\end{equation}
This differential equation has solutions in terms of Bessel
functions
\begin{equation}
M(x)=x^{-1/2}\left[C_1\> J_{\lambda}
\left(\frac{\sqrt{4\alpha c_{\epsilon}}}{\epsilon \> x^{\epsilon \over 2}}\right)
+C_2 \> J_{-{\lambda}}
\left(\frac{\sqrt{4\alpha c_{\epsilon}}}{\epsilon \> x^{\epsilon \over 2}}\right)\right]\;\;\;,
\label{diffeqsol}
\end{equation}
where we have defined $\lambda = 1/\epsilon$ in order to avoid
cumbersome indices on the Bessel functions.
The ultraviolet boundary condition Eq.~(\ref{BC}) gives $C_2=0$ while
the infrared boundary condition leads to
\begin{equation}
C_1 \> \left[J_{\lambda}\left(\frac{\sqrt{4\alpha c_{\epsilon}}}{\epsilon\> x^{\epsilon \over 2}}\right)+
\frac{\sqrt{4\alpha c_{\epsilon}}}{x^{\epsilon \over 2}}
J_{\lambda}^\prime\left(\frac{\sqrt
{4\alpha c_{\epsilon}}}{\epsilon\>
x^{\epsilon \over 2}}\right)\right]_{x=a}\> =\> 0\;\;\;.
\label{dynmasseq}
\end{equation}
This equation may be simplified
using the relation among Bessel functions
\begin{equation}
zJ_\lambda^\prime (z)+\lambda J_\lambda(z)=zJ_{\lambda-1}(z)\;\;\;,
\end{equation}
and becomes
\begin{equation}
C_1 \> \left[\frac{\sqrt{4\alpha c_{\epsilon}}}{\epsilon \> x^{\epsilon \over 2}}
J_{{\lambda}-1}\left(\frac{\sqrt
{4\alpha c_{\epsilon}}}{\epsilon \> x^{\epsilon \over 2}}\right)\right]_{x=a}\> =0\> \;\;\;.
\label{eq: simple dynmasseq}
\end{equation}
Clearly this equation is satisfied by $C_1=0$, which corresponds to
the trivial chirally symmetric solution $M(x)=0$. However, for values of $a$ which are
such that the argument of the Bessel function in Eq.~(\ref{eq: simple
dynmasseq}) corresponds to one of its zeroes, the equation is also
satisfied for $C_1 \neq 0$, i.e. for these values of $a$ there exist
solutions with dynamically broken chiral symmetry. If we define
$j_{\lambda-1,1}\> = \> \sqrt{4\alpha c_{\epsilon}} /(\epsilon \>
a^{\epsilon/2})$ to be the smallest positive zero of $J_{\lambda-1}$ in
Eq.~(\ref{eq: simple dynmasseq}), the dynamical mass for this solution
becomes
\begin{equation}
m\> =\> \nu \> a^{1/2}\> =\> \nu \> \alpha^{1 \over
2 \epsilon} \> \left(\frac{\sqrt{4 c_{\epsilon}}}{\epsilon\>
j_{{\lambda}-1,1}}\right)^{1 \over \epsilon}.
\label{dynmass}
\end{equation}
Note that for this solution the normalization $C_1$ is not fixed by
Eq.~(\ref{inteqx}) as this equation is linear in $M(x)$. Later on we
shall fix $C_1$ by demanding that $M(a) = m$, however there is no
compelling reason to do this and one might alternatively fix the
normalization in such a way as to approximate the true (numerical)
solutions of Eq.~(\ref{masseq}) as well as possible. Finally, note
that, as expected, a dynamical symmetry breaking solution exists
for any value
of the coupling and that the expression for the dynamical mass is in
agreement with the general form expected from dimensional
considerations [i.e. Eq.~(\ref{eq: M(0) general form})].
In order to extract $\alpha_c$, we need to look at the behaviour of $m$
as $\epsilon$ goes to zero (i.e. $\lambda \rightarrow \infty$). This may
be done by noting that the positive roots of the Bessel function
$J_\lambda$ have the following asymptotic behaviour (see, for example,
Eq.~9.5.22 in Ref.~\cite{abramowitz}):
\begin{equation}
j_{\lambda,s}\sim\lambda
z(\zeta)+\sum\limits_{k=1}^\infty\frac{f_k(\zeta)}{\lambda^{2k-1}},\quad
\zeta=\lambda^{-2/3}a_s,
\end{equation}
where $a_s$ is the $s$th negative zero of the Airy function ${\rm Ai}(z)$,
and $z(\zeta)$ is determined $(z(\zeta)>1)$ from the equation
\begin{equation}
{2\over3}(-\zeta)^{3/2}=\sqrt{z^2-1}-\arccos{1\over z}.
\end{equation}
For large $\lambda$ the variable $\zeta$ is small and so
it is valid to expand $z$ around 1. Writing $z=1+\delta$ we obtain
\begin{equation}
\arccos\frac{1}{1+\delta}\sim\sqrt{2\delta}-\frac{5\sqrt2}{12}
\delta^{3/2},
\end{equation}
and so $\delta\simeq -\zeta/{2^{1/3}}$ yielding, to leading order,
\begin{equation}
z=1-\frac{a_1}{2^{1/3}\lambda^{2/3}}\;\;\;.
\end{equation}
If we define
\begin{equation}
\gamma = -{a_1 \over 2^{1/3}} \> \approx \> 1.855757
\end{equation}
then the leading terms in the expansion of $j_{\lambda-1,1}$ are
\begin{equation}
j_{\lambda-1,1}\sim\lambda +\gamma\lambda^{1/3}-1+O(\lambda^{-1/3}).
\label{eq: root}
\end{equation}
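The quality of Eq.~(\ref{eq: root}) at moderate $\lambda$ can be checked against an exact Bessel zero. The sketch below (Python; it evaluates $J_\nu$ from its power series and locates $j_{10,1}$ by bisection, corresponding to the illustrative choice $\epsilon = 1/11$, i.e. $\lambda - 1 = 10$) finds agreement at the few-percent level, consistent with the neglected $O(\lambda^{-1/3})$ term:

```python
import math

def bessel_j(nu, x, terms=60):
    """Power series J_nu(x) = sum_k (-1)^k (x/2)^(2k+nu) / (k! Gamma(k+nu+1));
    adequate in double precision for the moderate nu and x used here."""
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1))
               * (x / 2.0) ** (2 * k + nu) for k in range(terms))

def first_zero(nu, lo, hi, iters=100):
    """Bisect for the zero of J_nu bracketed by (lo, hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bessel_j(nu, lo) * bessel_j(nu, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

gamma_airy = 2.338107410459767 / 2 ** (1.0 / 3.0)  # -a_1 / 2^(1/3), the gamma above

lam = 11.0                                # epsilon = 1/11, so lambda - 1 = 10
exact = first_zero(10, 11.0, 16.0)        # j_{10,1}
asympt = lam - 1.0 + gamma_airy * lam ** (1.0 / 3.0)
print(exact, asympt)
```

The constant `gamma_airy` reproduces the value $1.855757$ quoted above, and the truncated expansion lies within about $2.5\%$ of the exact $j_{10,1}$.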
Also, the coefficient $c_{\epsilon}$ appearing in Eq.~(\ref{dynmass}) may be
expanded
\begin{equation}
c_{\epsilon}\hspace{3mm}\sim_{_{_{_{\hspace{-5mm}
{{\epsilon\to 0}}}}}}\frac{3}{4\pi}(1+d \> \epsilon),\quad
d=\ln(4\pi)+{1\over3}+\psi(1)
\end{equation}
so that for small $\epsilon$ the dynamical mass becomes
\begin{equation}
m \> \sim \> \nu \> \alpha^{1 \over 2 \epsilon}
\> {\left[{3 \over \pi} (1+d \> \epsilon) \right]^{1 \over 2 \epsilon} \over
(1+\gamma\epsilon^{2/3}-\epsilon)^{1 \over \epsilon}}
\> \sim \> \nu \> \left( {\alpha \over {\pi/ 3}}\right)^{1 \over 2 \epsilon}
\> e^{1 + {d \over 2} - \gamma\epsilon^{-1/3}}\;\;\;.
\label{eq: dyn mass app}
\end{equation}
Note that the behaviour of the first term (for $\epsilon$ going to
zero) dominates over the exponential function, as required
in Eq.~(\ref{eq: m_overbar def}). Hence the critical coupling
in four dimensions is given by $\pi / 3$, as expected
from cut-off based work~\cite{FGMS,MiranskReview}.
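The emergence of $\alpha_c = \pi/3$ can also be checked directly from Eq.~(\ref{dynmass}): writing $m/\nu = \left(\alpha B(\epsilon)\right)^{1/2\epsilon}$ with $B(\epsilon) = 4 c_\epsilon/(\epsilon\, j_{\lambda-1,1})^2$, one must have $B(\epsilon) \rightarrow 3/\pi = 1/\alpha_c$. The sketch below (Python, using the asymptotic root of Eq.~(\ref{eq: root}); the Euler and Airy constants are standard numerical values) verifies this limit together with the expansion coefficient $d$:

```python
import math

EULER_GAMMA = 0.5772156649015329                   # -psi(1)
GAMMA_AIRY = 2.338107410459767 / 2 ** (1.0 / 3.0)  # -a_1 / 2^(1/3)

def c_eps(eps):
    """c_epsilon = (3 - 2 eps) / ((4 pi)^(1 - eps) Gamma(2 - eps))."""
    return (3.0 - 2.0 * eps) / ((4.0 * math.pi) ** (1.0 - eps)
                                * math.gamma(2.0 - eps))

def inv_alpha_c(eps):
    """B(eps) = 4 c_eps / (eps * j_{lambda-1,1})^2 with the asymptotic root
    j_{lambda-1,1} ~ lambda - 1 + gamma lambda^(1/3), lambda = 1/eps."""
    lam = 1.0 / eps
    root = lam - 1.0 + GAMMA_AIRY * lam ** (1.0 / 3.0)
    return 4.0 * c_eps(eps) / (eps * root) ** 2

# expansion coefficient d = ln(4 pi) + 1/3 + psi(1), via a central difference
h = 1e-6
d_numeric = (c_eps(h) - c_eps(-h)) / (2.0 * h * c_eps(0.0))
d_exact = math.log(4.0 * math.pi) + 1.0 / 3.0 - EULER_GAMMA
print(inv_alpha_c(1e-7), 3.0 / math.pi, d_numeric, d_exact)
```

The deviation of $B(\epsilon)$ from $3/\pi$ is of order $\epsilon^{2/3}$, as one expects from the $\gamma\epsilon^{-1/3}$ term in the exponential of Eq.~(\ref{eq: dyn mass app}).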
Returning now to the mass function itself,
we may substitute the expression for the dynamical mass, i.e., Eq.~(\ref{dynmass}),
together with our choice of normalization
condition
\begin{equation}
M(p^2=m^2) \> = \> m\;\;\;,
\label{eq: norm cond}
\end{equation}
into Eq.~(\ref{masseq}) in order to eliminate $C_1$. One obtains
\begin{equation}
M(p)=\frac{m^2}{|p|}\frac{J_\lambda[j_{\lambda-1,1}\cdot\left({m\over
|p|}\right)^{\epsilon}]}{J_\lambda[j_{\lambda-1,1}]}\;\;\;.
\label{massfunction}
\end{equation}
Note that the explicit dependence on $\nu$ (and hence $\alpha$) has been completely
replaced by $m$ in this expression.
So far we have taken $\alpha$ independent of the regularization. As we have seen this
leads to a dynamically generated mass which becomes infinite as the regulator is
removed. Fomin et al.~\cite{FGMS} examined (within cut-off regularized
QED) a different limit, namely one where the mass $m$ is kept
constant while the cut-off is removed.
In our case this limit necessitates that the coupling $\alpha$ is dependent on
$\epsilon$ through
\begin{equation}
\alpha \simeq {\pi \over 3} \left ( 1 \> + \> 2 \>\gamma \>\epsilon^{2 \over 3} \right )
\end{equation}
[see Eq.~(\ref{eq: dyn mass app}); note that $\alpha_c$ is approached from above].
The limit may be taken analytically in Eq.~(\ref{massfunction})
by making use of the
known asymptotic behaviour of the Bessel functions (see Eq.~9.3.23 of
Ref.~\cite{abramowitz}), i.e.
\begin{equation}
J_\lambda\left(\lambda+\lambda^{1/3}z\right)\sim\left(\frac{2}{\lambda}\right)^{1/3}
{\rm Ai}(-2^{1/3}z)\;\;\;,
\end{equation}
as well as the asymptotic expansion of $j_{\lambda-1,1}$ in Eq.~(\ref{eq: root}).
One obtains
\begin{equation}
M(p)=\frac{m^2}{p}\left(\ln{p\over m}+1\right)\;\;\;,
\end{equation}
which agrees with the result in Ref.~\cite{FGMS}.
To conclude this section, we analyze the validity of the
approximations made by solving Eq.~(\ref{masseq}) numerically and
comparing it to the Bessel function solution in
Eq.~(\ref{massfunction}). In Fig.~\ref{fig: comp}a we have
plotted the mass function (divided by $\nu$) as a function of the
dimensionless momentum $x$ for a moderately large coupling ($\alpha =
0.6$) and $\epsilon = 0.03$. The solid curve corresponds to the exact
numerical result [Eq.~(\ref{masseq})] while the dashed line is a plot of
Eq.~(\ref{massfunction}) for these parameters. As can be seen, the approximation
is not too bad and could actually be made
significantly better by adopting a different
normalization condition to that in Eq.~(\ref{eq: norm cond}).
However, no further insight is gained by doing this and
we shall not pursue it further.
One might naively think that most of the difference between the Bessel
function and the exact numerical solution comes from the linearization of
Eq.~(\ref{masseq}) -- i.e., the approximation made by going from
Eq.~(\ref{inteqxa}) to Eq.~(\ref{inteqx}), as the only approximation
made prior to this is to replace the hypergeometric functions by
unity, which is expected to be good to order $\epsilon$ (i.e. in this
case, 3 \%). This turns out not to be the case; the dotted curve in
Fig.~\ref{fig: comp}a corresponds to the (numerical) solution of
Eq.~(\ref{inteqxa}). Not only is the difference to the true solution
essentially an order of magnitude larger than expected (about 30 \% -- note
that Fig.~\ref{fig: comp}a is a log-log plot),
it is actually of opposite sign to the equivalent difference for the
Bessel function. In other words, the validity of the two
approximations is roughly of the same order of magnitude and they
tend to compensate.
Why are the quantitative differences rather larger than expected? On
the level of the integrands the approximations are actually quite
good. In Fig.~\ref{fig: comp}b we show the integrands of
Eqs.~(\ref{masseq}), (\ref{inteqx}) and Eq.~(\ref{inteqxa}) for a
value of $x$ in the infrared ($x \approx 7.1 \times 10^{-11}$). Clearly
the replacement of the hypergeometric functions by unity is indeed an
excellent approximation, as is the linearization performed in
Eq.~(\ref{inteqx}) (except in the infrared, as expected; note that
when estimating the contribution to the integral from different $y$
one should take into account that the $x$-axis in
Fig.~\ref{fig: comp}b is logarithmic). The real source of the `relatively large'
differences observed for the integrals in Fig.~\ref{fig: comp}a is
the fact that these are integral equations for the function $M(x)$ --
small differences in the integrands do not necessarily guarantee small
differences in $M(x)$. To illustrate this point, consider a
hypothetical `approximation' to Eq.~(\ref{inteqx}) in which we just
scale the integrands by a constant factor $1+\epsilon$
and ask the question how much this affects the solution $M(x)$. For
$x=0$ the answer is rather simple: the hypothetical approximation just
corresponds to a rescaling of $\alpha$ by $1+\epsilon$ and as $M(0)$
scales like $\alpha^{1 \over 2 \epsilon}$ we find that the solution
has increased by a factor $(1+\epsilon)^{1 \over 2 \epsilon}$. In other words,
even in the limit $\epsilon \rightarrow 0$ there remains a remnant
of the `approximation', namely a rescaling of $M(0)$ by a factor $e^{1/2}
\approx 1.6$!
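This remnant is simple to verify numerically: the hypothetical rescaling changes the solution by the factor $(1+\epsilon)^{1/2\epsilon}$, which tends to $e^{1/2}$ rather than to $1$ as $\epsilon \rightarrow 0$. A minimal check:

```python
import math

def remnant(eps):
    """Rescaling of M(0) induced by scaling the integrands by (1 + eps)."""
    return (1.0 + eps) ** (1.0 / (2.0 * eps))

for eps in (0.1, 0.01, 1e-4, 1e-8):
    print(eps, remnant(eps))
# the values approach exp(1/2), not 1
```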
\section{The Curtis-Pennington vertex}
\label{sec: cp}
We shall now leave the rainbow approximation and turn to the CP vertex.
The expressions for the scalar and Dirac
self-energies for this vertex, using dimensional regularization and in an
arbitrary gauge, have
already been given in Ref.~\cite{dim_reg_2}. Before we discuss
chiral symmetry breaking for this vertex we shall first examine the
chirally symmetric phase. We remind the reader that in this phase in four dimensions
the wavefunction renormalization has a very simple form for this
vertex~\cite{CPII}, namely
\begin{equation}
{\cal Z}(x,\mu^2)|_{M(x)=0} \> = \> \left( {x \over \mu^2} \right)^{{\xi \alpha \over 4 \pi}}\;\;\;,
\label{eq: pow z}
\end{equation}
where the renormalized Dirac propagator is given by
\begin{equation}
S(p) \> = \> {{\cal Z}(x,\mu^2) \over {p \hspace{-5pt}/} }\;\;\;.
\end{equation}
Here $\xi$ is the gauge parameter and $\mu^2$ is the (dimensionless)
renormalization scale.
This power behaviour of ${\cal Z}(x)$ is in fact demanded by multiplicative
renormalizability~\cite{Brown} as well as gauge covariance~\cite{dongroberts}.
We shall derive the form of this self-energy
in $D < 4 $ dimensions, which will provide a very useful check on the numerical
results even if $M(x) \ne 0$ as long as $x \gg \left ({M(x) \over \nu}
\right )^2$.
\subsection{${\cal Z}(p^2)$ in the chirally symmetric phase}
\label{sec: cp one}
In the chirally symmetric phase, the unrenormalized ${\cal Z}(x)$ corresponding
to the CP vertex in $D$ dimensions
is given by
\begin{equation}
{\cal Z}(x)\> = \> 1 \> + \>
\frac{\alpha}{4\pi}
\frac{(4\pi)^\epsilon}{\Gamma(2-\epsilon)} \> \xi \>
\int_0^\infty dy \>{ y^{-\epsilon} \over x-y} {\cal Z}(y)
\left[ (1-\epsilon)\left(1 - I_1^D\left({y \over x}\right)\right) + {y \over y+x} I_1^D\left({y \over x}\right)\right].
\label{eq: massless a}
\end{equation}
This equation may be obtained from Eq.~(A6) of Ref.~\cite{dim_reg_2} by setting
$b(y)$ equal to zero in that equation and by using Eq.~(A8) of the same reference
in order to eliminate the terms with coefficient $a^2(y)$. The angular integral
$I_1(w)$ is
defined to be
\begin{eqnarray}
I_{1}^D(w) & = & (1+w) \,\,{}_2F_1(1,\epsilon;2-\epsilon;w)\quad 0 \leq w \leq 1\\
I_{1}^D(w) & = & I_{1}^D (w^{-1} )\quad \quad \quad \quad \quad \quad
\quad \quad \quad w \geq 1\;\;\;.
\end{eqnarray}
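Since this angular integral will shortly be replaced by its small-$w$ limit, it is worth confirming numerically that $(1+w)\,{}_2F_1(1,\epsilon;2-\epsilon;w) = 1 + \frac{2}{2-\epsilon}\,w + O(w^2)$. A minimal sketch using the Gauss series:

```python
import math

def hyp2f1(a, b, c, w, terms=40):
    """Gauss series 2F1(a, b; c; w) = sum_n (a)_n (b)_n / (c)_n * w^n / n!,
    adequate for the small |w| used here."""
    total, coef = 0.0, 1.0
    for n in range(terms):
        total += coef * w ** n
        coef *= (a + n) * (b + n) / ((c + n) * (n + 1.0))
    return total

def angular_I1(eps, w):
    """I_1^D(w) = (1 + w) 2F1(1, eps; 2 - eps; w) for 0 <= w <= 1."""
    return (1.0 + w) * hyp2f1(1.0, eps, 2.0 - eps, w)

eps, w = 0.1, 1e-3
print(angular_I1(eps, w), 1.0 + 2.0 * w / (2.0 - eps))
```

The two printed values agree to $O(w^2)$, as claimed by the expansion used below.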
In four dimensions the solution to Eq.~(\ref{eq: massless a}) is given
by a ${\cal Z}(x)$ having a simple power behaviour while for $D < 4$ this
is clearly no longer the case. Nevertheless, it is
possible to derive an integral representation of the solution of Eq.~(\ref{eq: massless a})
by making use of the gauge covariance of this equation. We do so in
Appendix B, with the result
\begin{equation}
{\cal Z}(x)
\>=\> x^{\epsilon \over 2} \> 2^{1 - \epsilon} \> \Gamma(2-\epsilon)
\int_0^\infty du\> u^{\epsilon-1} e^{-r u^{2 \epsilon}} J_{2 - \epsilon}({\sqrt x} \> u)
\;\;\;.
\label{eq: res}
\end{equation}
Although this result is exact it is somewhat cumbersome to evaluate numerically because, for
$\epsilon \rightarrow 0$, the oscillations in the integrand become increasingly important.
For this reason
we shall
approximate the integrand in Eq.~(\ref{eq: massless a}) by its IR and UV limits,
as we did for the rainbow approximation
(as before, this approximation is
good to order $\epsilon$). Using
\begin{equation}
I_{1}^D(w) \>= \> 1 + {2 \over 2 - \epsilon}w \> + \> O[w^2]
\end{equation}
this approximation yields
\begin{equation}
{\cal Z}(x)\> = \> 1 \> + \>
\frac{\alpha}{4\pi}
\frac{(4\pi)^\epsilon}{\Gamma(2-\epsilon)} \> \xi \>
\left[{\epsilon \over 2 - \epsilon}
\int_0^x dy \>{ y^{1-\epsilon}\over x^2} {\cal Z}(y)
\> - \> \int_x^\infty dy \> y^{-\epsilon-1} {\cal Z}(y) \right]\;\;\;.
\label{eq: massless z approx}
\end{equation}
This may be converted to the differential equation
\begin{equation}
{\cal Z}''(x) \> + \> {3 \over x} {\cal Z}'(x)
\> = \> \frac{\tilde c}{x^{1+\epsilon}} \>
\left[ {\cal Z}'(x) \> + \> 2 \>{1 - \epsilon \over x} \>{\cal Z}(x) \right]
\label{eq: z diff eq}
\end{equation}
where $\tilde c$ is defined to be
\begin{equation}
\tilde c \> = \> \frac{\alpha}{2\pi}
\frac{(4\pi)^\epsilon}{\Gamma(3-\epsilon)} \> \xi
\end{equation}
and the appropriate boundary conditions are
\begin{equation}
x^{2-\epsilon} {\cal Z}(x)|_{x=0}=0,\qquad {\cal Z}(x)|_{x=\infty}=1.
\end{equation}
(The IR boundary condition arises from the requirement that the
integral in Eq.~(\ref{eq: massless z approx}) needs to converge at its lower limit.)
In order to solve Eq.~(\ref{eq: z diff eq}), it is convenient
to change variables to
\begin{equation}
z \> = \> {\tilde c \over 2 - \epsilon} \> x^{- \epsilon}\;\;\;,
\end{equation}
and to define
\begin{equation}
a \> = \> {2 \over \epsilon} - 1
\end{equation}
so that the differential equation becomes
\begin{equation}
z {\cal Z}'' \> - \> a (1-z) {\cal Z}'
\> - \> a (a-1) {\cal Z} \> = \> 0\;\;\;,
\end{equation}
while the boundary conditions now are
\begin{equation}
z^{-a} {\cal Z}|_{z=\infty}=0,\qquad {\cal Z}|_{z=0}=1.
\end{equation}
This equation is essentially Kummer's Equation
(see Eq.~13.1.1 of Ref.~\cite{abramowitz}; we use the notation
of that reference in the following). Its general solution
may be expressed in terms of confluent hypergeometric functions, i.e.
\begin{eqnarray}
{\cal Z} &=& z^{a+1} e^{-a z} \>
\left [
\tilde C_1 \> M(a,a+2; a z) \> + \> \tilde C_2 \> U(a,a+2; a z)
\right ]\nonumber \\
&=&
e^{-a z} \> \left \{C_1 \> \left [
\gamma(a+1, - a z) \> + \> a z \> \gamma(a, -a z ) \right ]
\> + \> C_2 \>[1+z]\right \}\;\;\;.
\label{eq: gen_solution}
\end{eqnarray}
The UV boundary condition is fulfilled if $C_2 = 1$ while $C_1$ is not fixed by
the boundary conditions. Although Eq.~(\ref{eq: massless z approx}) is solved by
Eq.~(\ref{eq: gen_solution}) for arbitrary $C_1$ we shall concentrate on the
solution with $C_1$=0. The reason for this is that the solution to the
unapproximated integral [Eq.~(\ref{eq: res})] vanishes at $x=0$
(see Appendix B) while the term multiplying $C_1$ in Eq.~(\ref{eq: gen_solution})
diverges like $x^{2 \epsilon - 2}$ and is therefore unlikely to provide a
good approximation to Eq.~(\ref{eq: massless a}).
Hence we obtain
\begin{equation}
{\cal Z}(x) \> = \> \left[1 \> + \> {\tilde c \over 2 - \epsilon} x^{-\epsilon} \right]
\exp(-{\tilde c \over \epsilon} x^{-\epsilon})\;\;\;.
\end{equation}
Finally, the renormalized function ${\cal Z}(x,\mu^2)$ may be obtained from this
by demanding that ${\cal Z}(\mu^2,\mu^2) = 1$ so that the renormalized
wavefunction renormalization becomes
\begin{equation}
{\cal Z}(x,\mu^2) \> = \>
{1 \> + \> { \tilde c\over 2 - \epsilon} x^{- \epsilon}
\over 1 \> + \> {\tilde c \over 2 - \epsilon} \mu^{-2 \epsilon} }
\>{ \exp(-{\tilde c \over \epsilon} x^{- \epsilon})
\over
\exp(-{\tilde c \over \epsilon} \mu^{-2 \epsilon})}\;\;\;.
\label{eq: cp z analytic}
\end{equation}
Only in the limit $D \rightarrow 4$ does this reduce to the usual power
behaved function found in cut-off based work [Eq.~(\ref{eq: pow z})]
while for $D < 4$ it vanishes non-analytically at $x=0$. On the other
hand, note that the solution to Eq.~(\ref{eq: massless a}) -- for
finite $\epsilon$ -- only goes to zero linearly in $x$. For the purpose of
this paper
this difference in the analytic behaviour in the infrared does not concern us
as for solutions which break chiral symmetry the infrared region is regulated
by
$M^2(x)$ so that we do not expect the chirally symmetric ${\cal Z}$ to be
a good approximation in this region in any case.
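The approach of Eq.~(\ref{eq: cp z analytic}) to the four-dimensional power law of Eq.~(\ref{eq: pow z}) is easily checked numerically. In the sketch below (Python; the parameter values are purely illustrative) the two exponentials are combined before exponentiating, so that the individually huge $x^{-\epsilon}/\epsilon$ terms do not underflow:

```python
import math

def z_ren(x, mu2, alpha, xi, eps):
    """Renormalized wavefunction renormalization of Eq. (cp z analytic)."""
    ctil = alpha * xi * (4.0 * math.pi) ** eps / (2.0 * math.pi
                                                  * math.gamma(3.0 - eps))
    pref = (1.0 + ctil / (2.0 - eps) * x ** (-eps)) \
         / (1.0 + ctil / (2.0 - eps) * mu2 ** (-eps))
    # combine the exponentials: exp(-(ctil/eps)(x^-eps - mu2^-eps))
    return pref * math.exp(-(ctil / eps) * (x ** (-eps) - mu2 ** (-eps)))

x, mu2, alpha, xi = 10.0, 1.0, 0.6, 0.25          # illustrative parameters
power_law = (x / mu2) ** (xi * alpha / (4.0 * math.pi))   # Eq. (pow z)
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, z_ren(x, mu2, alpha, xi, eps), power_law)
```

As $\epsilon$ decreases, $\mathcal{Z}(x,\mu^2)$ converges to $(x/\mu^2)^{\xi\alpha/4\pi}$, confirming the limit stated above.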
\subsection{Chiral symmetry breaking for the CP vertex}
\label{sec: cp two}
We shall now examine dynamical chiral symmetry breaking for the CP
vertex in the absence of any explicit symmetry breaking by a nonzero
bare mass, as before. Even for solutions exhibiting dynamical symmetry
breaking, it is to be
expected that the analytic result derived for
${\cal Z}(x)|_{M(x)=0}$ [Eq.~(\ref{eq: cp z analytic})] remains valid as long
as $x$ is large compared to $(M(x)/\nu)^2$ and $\epsilon$ is
sufficiently small. That this is indeed the case is illustrated in
Fig.~\ref{fig: a(x)},
where we show a typical example of ${\cal Z}^{-1}(x)$ for a solution which breaks
chiral symmetry. In this Figure, as well as in the rest of this
Section, we shall be dealing with the renormalized ${\cal Z}(x)$ and $M(x)$
instead of the unrenormalized quantities in the previous sections. This makes
no essential difference to the physics of chiral symmetry breaking, although
it of course affects the absolute scale of ${\cal Z}(x)$. For a
discussion
of the renormalization of the dimensionally regularized
theory we refer the reader to Ref.~\cite{dim_reg_2}.
The comparison to the analytic result in Fig.~\ref{fig: a(x)} provides
a very convenient check on the numerics. Another check is provided by
plotting the logarithm of $M(0)$ against the logarithm of the
coupling. According to Eq.~(\protect\ref{eq: M(0) general form}) this
should be a straight line with gradient ${1 \over 2 \epsilon}$. As
can be seen in Fig.~\ref{fig: alpha_vs_mass} not only does one
observe chiral symmetry breaking down to couplings as small as $\alpha=0.15$,
the expected linear behaviour is confirmed to quite high precision.
Although the numerics in $D < 4$ dimensions are clearly under control, the
extraction of the critical coupling (appropriate in four dimensions)
has proven to be numerically quite difficult. From the discussion in
Sections~\ref{sec_motivation} and~\ref{sec: rainbow}, we anticipate that
the logarithm of the dynamical mass has the general form
\begin{equation}
\ln\left({M(0) \over \nu}\right) \> = \> {1 \over 2 \epsilon} \> \ln\left({\alpha \over \alpha_c}\right)
\> + \> \ln\left(\overline M(0,\epsilon)\right)
\end{equation}
where the last term is subleading as compared to the first as $\epsilon$ tends
to zero. For sufficiently small $\epsilon$, therefore, $\alpha_c$ is related
to the gradient of $\ln\left(M(0)\right)$ plotted against $\epsilon^{-1}$.
In Fig.~\ref{fig: logmass_vs_inv_epsilon} we attempt to extract $\alpha_c$ in this
way. The logarithm of $M(0)$ was evaluated for $\epsilon$ ranging from 0.03
down to $\epsilon=0.015$ for a fixed gauge $\xi=0.25$.
The squares correspond to a coupling constant $\alpha=1.2$, although some
of the points at lower $\epsilon$ have actually been calculated at smaller
$\alpha$ and then rescaled according to Eq.~(\protect\ref{eq: M(0) general form}).
At present we are unable, for these parameters, to decrease $\epsilon$
significantly further without a serious loss of numerical precision.
(We also note in passing that it is quite difficult numerically to move away
from small values of the gauge parameter; $\xi=20$, which, judging
from Fig.~\ref{fig: cut-off alpha_cr}, would not require a very high numerical
accuracy for $\alpha_c$, is unfortunately not an option.)
The two fits shown in Fig.~\ref{fig: logmass_vs_inv_epsilon}
correspond to two different assumptions for the functional form of
$\overline M(0,\epsilon)$, which is a priori unknown. The curves do
indeed appear to be well approximated by a straight line, however we
caution the reader that this does not allow an accurate
determination of $\alpha_c$ as the gradient is essentially determined
by the `trivial' dependence on $\ln(\alpha)$ (more on this below). The solid line
corresponds to the assumption that the leading term in $\overline
M(0,\epsilon)$ has the same form as what we found in the rainbow
approximation, i.e.
\begin{equation}
\ln\left({M(0) \over \nu}\right) \> = \> {1 \over 2 \epsilon} \> \ln\left({\alpha \over \alpha_c}\right)
\> + \> c_1 \left({1 \over 2 \epsilon} \right)^{1 \over 3}\;\;\;.
\label{eq: fit1}
\end{equation}
With this form the fit parameters $\alpha_c$ and $c_1$ are found to be
\begin{equation}
\alpha_c \> = \> 0.966 \quad \quad c_1 \> = \> -1.15\;\;\;.
\end{equation}
Indeed, the critical coupling is similar to what is found in cut-off
based work (see Sec.~\ref{sec_motivation}; in Ref.~\cite{qed4_hrw} the
value was $0.9208$ for $\xi=0.25$). At present it is difficult to
make a more precise statement, let alone differentiate between the two
curves plotted in Fig.~\ref{fig: cut-off alpha_cr}, as $\alpha_c$ is
quite strongly dependent on the functional form assumed in
Eq.~(\ref{eq: fit1}). In fact, allowing an extra constant term on the
right hand side of Eq.~(\ref{eq: fit1}) reduces the critical coupling
to $0.920$ and the addition of yet a further term proportional to
$\epsilon^{1 \over 3}$ increases it again to $0.931$. As these numbers
appear to converge to something of the order of $0.92$ or $0.93$ one might think
that $\alpha_c$ has been determined to this precision. However, it is not clear
that the functional form suggested by the rainbow approximation should be taken
quite this seriously. The dashed line in Fig.~\ref{fig: logmass_vs_inv_epsilon}
corresponds to a fit where the power of $\epsilon$ of the subleading term has
been left free, i.e.
\begin{equation}
\ln\left({M(0) \over \nu}\right) \> = \> {1 \over 2 \epsilon} \> \ln\left({\alpha \over \alpha_c}\right)
\> + \> c_1 \left({1 \over 2 \epsilon} \right)^{c_2}\;\;\;.
\label{eq: fit2}
\end{equation}
The optimum fit assuming this form for $\ln\left(M(0)\right)$ yields a power
quite different to ${1 \over 3}$ and a very much smaller $\alpha_c$:
\begin{equation}
\alpha_c \> = \> 0.825 \quad \quad c_1 \> = \> -0.801
\quad \quad c_2 \> = \> 0.688\;\;\;.
\end{equation}
To conclude this section, let us discuss why the functional form of
the subleading term $\overline M(0,\epsilon)$
appears to be so important even when $\epsilon$ is already rather small. The
reason for this is two-fold: most importantly, although
the leading $\epsilon$ dependence of $\ln\left(M(0)\right)$ is indeed $\epsilon^{-1}$,
the coefficient of this term (leaving out the trivial $\alpha$ dependence)
is $\ln(\alpha_c)$. As $\alpha_c$ is rather close to 1 one therefore obtains
a strong suppression of this leading term, increasing the relative
importance of the subleading terms. In addition, it appears as if the numerical
results favour a subleading term which is not as strongly suppressed (as a
function of $\epsilon$) as suggested by the rainbow approximation (i.e.
the power of $\epsilon^{-1}$ of the subleading term appears to be closer
to ${2 \over 3}$ than to ${1 \over 3}$). This again increases the
importance of the subleading terms.
\section{Conclusions and Outlook}
\label{sec: conclusion}
The primary purpose of this paper was to explore the phenomenon of
dynamical chiral symmetry breaking through the use of Dyson-Schwinger
equations with a regularization scheme which does not break the gauge
covariance of the theory, namely dimensional regularization. It is
necessary to do this as the cut-off based work leads to ambiguous
results for the critical coupling of the theory precisely because of
the lack of gauge covariance in those calculations. In particular,
this should be kept in mind when using the expected gauge invariance
of the critical coupling as a criterion for judging the suitability of
a particular vertex.
To begin with, we have shown on dimensional grounds alone and for an
arbitrary vertex, that in $D<4$ dimensions either a symmetry breaking
solution does not exist at all (in which case, however, it would also
not exist in $D=4$ dimensions) or it exists for all non zero values of
the coupling (in which case a chiral symmetry breaking solution exists
in $D=4$ for $\alpha > \alpha_c$). For Dyson-Schwinger analyses
employing the rainbow and CP vertices we have shown that it is the
second of these possibilities which is realized. For these symmetry
breaking solutions the limit to $D = 4$ is necessarily discontinuous
and so the extraction of the critical coupling of the theory (in 4
dimensions) is not as simple as in cut-off regularized work.
We next turned to an examination of symmetry breaking in the
rainbow approximation in Landau gauge, both analytically and numerically.
Indeed, for this vertex one could rewrite the (linearized)
Dyson-Schwinger equation as a Schr\"odinger equation in 4 dimensions
and appeal to standard results from elementary quantum mechanics to
explicitly show that the theory always breaks chiral symmetry if $D <
4$. We also showed how the usual critical coupling $\alpha_c={\pi
\over 3}$ may be extracted from the dimensionally regularized work.
We concluded this work with an examination of the CP vertex. By
making use of the gauge covariance of the theory we derived an exact
integral expression for the wavefunction renormalization function
${\cal Z}(p^2)$ of the chirally symmetric solution. Furthermore we
obtained a compact expression
for this quantity which is an excellent approximation to the true
${\cal Z}(p^2)$ even for solutions which break the chiral symmetry.
Finally, we extracted the critical coupling corresponding to this
vertex and found that, within errors, it agrees with the standard
cut-off results.
In the future, we plan to increase the numerical precision with which
we can extract this critical coupling for the CP vertex by an order of
magnitude or so. The factor limiting the precision at present is that
when solving the propagator's Dyson-Schwinger equation with the CP
vertex by iteration the rate of convergence decreases dramatically as
$\epsilon$ is decreased below $\epsilon \approx 0.015$. If this
increase in precision can be attained it will enable one to make a
meaningful comparison with the cut-off based results shown
in Fig.~\ref{fig: cut-off alpha_cr}.
\begin{acknowledgements}
We would like to acknowledge illuminating discussions
with D.~Atkinson, A.\ K{\i}z{\i}lers\"{u}, V.~A.~Miransky and M.~Reenders.
VPG is grateful to the members of the Institute for Theoretical
Physics of the University of Groningen for hospitality during his stay there.
This work was supported by a Swiss National Science Foundation
grant (Grant No. CEEC/NIS/96-98/7 051219), by the Foundation of Fundamental
Research of
Ministry of Sciences of Ukraine (Grant No 2.5.1/003) and by the
Australian Research
Council.
\end{acknowledgements}
\setcounter{equation}{0}
\makeatletter
\renewcommand\theequation{A\arabic{equation}}
\makeatother
\begin{appendix}
\begin{center}
{\bf APPENDIX A: CHIRAL SYMMETRY BREAKING IN RAINBOW APPROXIMATION}
\end{center}
In this appendix we show that the linearized version of Eq.~(\ref{masseq}),
i.e.
\begin{equation}
M(p^2)=(e \nu^{\epsilon})^2(3 - 2 \epsilon)\int \;\frac{d^Dk}{(2\pi)^D}\;
\frac{M(k^2)}
{k^2+m^2}\frac{1}{(p-k)^2}\;\;\;,
\label{masseq lin}
\end{equation}
has symmetry breaking solutions for all values of the coupling.
Our aim here is to convert this equation to
a Schr\"odinger-like equation, which we do by introducing
the function
\begin{equation}
\psi(r)=\int\frac{d^Dk}{(2\pi)^D}\frac{e^{ikr}M(k^2)}{k^2+m^2}\;\;\;.
\end{equation}
With this definition we have
\begin{equation}
\left(-\Box + m^2\right)\psi(r)=\int\frac{d^Dk}{(2\pi)^D}e^{ikr}M(k^2)
\end{equation}
where $\Box$ is the $D$-dimensional Laplacian and so
\begin{equation}
\left(-\Box +m^2\right)\psi(r)=e^2\nu^{2 \epsilon}(3 - 2 \epsilon)\int\frac{d^Dp}{(2\pi)^D}e^{ipr}
\int\frac{d^Dk}{(2\pi)^D}\frac{M(k^2)}
{k^2+m^2}\frac{1}{(p-k)^2}\;\;\;.
\end{equation}
After shifting the integration variable $(p\rightarrow p+k)$
the last equation can be written in the form of a
Schr\"odinger-like equation
\begin{equation}
H\psi(r)=-m^2\psi(r),
\label{Schrodinger}
\end{equation}
where $H=-\Box + V(r)$ is the Hamiltonian, $E=-m^2$ plays the
role of an energy, and the potential $V(r)$ is given by
\begin{equation}
V(r)\>=\>-e^2\nu^{2 \epsilon}(3 - 2 \epsilon)\int\frac{d^Dp}{(2\pi)^D}\frac{e^{ipr}}{p^2}
\>=\>-\frac{\eta}{ r^{D-2}}\;\;\;,
\end{equation}
where
\begin{equation}
\eta \> = \> {\Gamma (1 - \epsilon ) \over 4 \pi^{2 - \epsilon}}
e^2\nu^{2 \epsilon}(3 - 2 \epsilon)\;\;\;.
\end{equation}
For $D=3$ the coefficient $\eta$ is $2 \nu \alpha$ while near $D=4$ it is ${3 \over \pi} \alpha \nu^{2 \epsilon}$.
It is well known from any standard course of quantum mechanics (see, for
example, Ref.\cite{quant1}) that
potentials behaving as $1/r^{s}$ at infinity, with $s<2$, always support
bound states (actually, an infinite
number of them). In the present case this can be seen by considering the
Schr\"odinger equation (\ref{Schrodinger}) for zero energy, i.e. $E=0$.
The $s$-symmetric wave function then satisfies the equation
\begin{equation}
\psi^{\prime\prime}+\frac{D-1}{r}\psi^\prime+\frac{\eta}{r^{D-2}}
\psi=0. \end{equation}
The solution finite at the origin $r=0$ is
\begin{equation}
\psi(r)={\rm const.} \> \times \> r^{\epsilon - 1}J_{{1 \over
\epsilon}-1}\left(\frac{\sqrt{\eta}}{\epsilon}r^{\epsilon}\right).
\label{E0function}
\end{equation}
The Bessel function in (\ref{E0function}) has an infinite number of
zeros, which means that there is an infinite number of states with
$E<0$.
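The statement that Eq.~(\ref{E0function}) solves the zero-energy equation is easy to check numerically. The sketch below does so by finite differences for $D=3$ (i.e.\ $\epsilon=1/2$); the value of $\eta$ and the sample points are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import jv

# Zero-energy s-wave equation for D = 3 (eps = 1/2):
#   psi'' + (2/r) psi' + (eta/r) psi = 0,
# with the solution finite at the origin:
#   psi(r) = r**(eps - 1) * J_{1/eps - 1}((sqrt(eta)/eps) * r**eps)
eps, eta = 0.5, 1.0

def psi(r):
    return r ** (eps - 1.0) * jv(1.0 / eps - 1.0, (np.sqrt(eta) / eps) * r ** eps)

def residual(r, h=1e-4):
    # central finite differences for psi' and psi''
    d1 = (psi(r + h) - psi(r - h)) / (2.0 * h)
    d2 = (psi(r + h) - 2.0 * psi(r) + psi(r - h)) / h ** 2
    return d2 + (2.0 / r) * d1 + (eta / r) * psi(r)

res = max(abs(residual(r)) for r in (0.5, 1.0, 2.0, 5.0))
```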
Returning now to Eq.~(\ref{Schrodinger}), we can estimate the lowest energy
eigenvalue variationally by using
\begin{equation}
\psi(r)=Ce^{-\kappa r}
\label{trialfunction}
\end{equation}
as a trial wavefunction. Here $C$ is related to $\kappa$ by demanding that
$\psi$ is normalized, i.e.
\begin{equation}
|C|^2=\frac{(2\kappa)^D}{\Omega_D\Gamma(D)}\;\;\;,
\end{equation}
where $\Omega_D$ is the surface area of the unit sphere in $D$ dimensions.
Calculating the expectation value of the ``Hamiltonian'' $H$ on the
trial wave function in Eq.~(\ref{trialfunction}) we find
\begin{equation}
E_0(\kappa^2)=\langle\psi|H|\psi\rangle=\kappa^2
\left[ 1 - {2^{D-2} \over \Gamma(D)} \kappa^{D-4} \> \eta \right]
\label{varenergy}
\end{equation}
The minimum of the ``ground state energy'' in Eq.~(\ref{varenergy}),
$E_0(\kappa)$, is reached at
\begin{equation}
\kappa^{4 - D} \> = \> (D-2)\> {2^{D-3} \over \Gamma(D)}\> \eta
\end{equation}
(for $D=3$ the parameter $\kappa$ is $\nu \alpha$ while near $D=4$ it is
$\nu \left[{\alpha \over \pi/2}\right]^{{1 \over 2\epsilon}}$)
and is given by the expression
\begin{equation}
(E_0)_{\rm var}\>=\>-m^2
\>=\>\kappa^2 \left( 1 - {1 \over {D \over 2} - 1} \right)
\> = \> \kappa^2 {D-4 \over D-2}\;\;\;,
\label{E_0}
\end{equation}
where the $1$ is the contribution from the kinetic
energy while the $\left({D \over 2} - 1\right)^{-1}$
corresponds to the potential energy. For $D > 2$ the potential is attractive and
for $2 < D < 4$ its magnitude
always exceeds the kinetic energy, so in this case we get
dynamical symmetry breaking for any value of $\alpha$. For example, for
$D=3$, one obtains $E_0=-\kappa^2=-\nu^2\alpha^2$ which coincides precisely
with the ground-state energy of the hydrogen atom (not surprisingly, as we
have used the ground-state hydrogen wave function as our trial function). In
this case the dynamical mass is $m = \nu\alpha$.
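The $D=3$ minimization above can also be confirmed by brute force; in the sketch below the value of $\eta$ is an arbitrary illustrative choice.

```python
import numpy as np

# Variational energy of Eq. (varenergy) at D = 3, where 2**(D-2)/Gamma(D) = 1:
#   E0(kappa) = kappa**2 * (1 - eta/kappa) = kappa**2 - eta*kappa
eta = 2.0  # illustrative; for D = 3 the text has eta = 2*nu*alpha

kappas = np.linspace(0.01, 3.0, 100_000)
E0 = kappas ** 2 - eta * kappas
k_star = kappas[np.argmin(E0)]

# Expected minimum from Eq. (E_0): kappa = eta/2 and E0 = -kappa**2.
```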
For $D$ near $4$, on the other hand, we obtain from Eq.~(\ref{E_0}) that
\begin{equation}
m\simeq\nu({\epsilon})^{1/2}\left(\frac{\alpha}{\pi/2}\right)^
{1\over2 \epsilon}.
\end{equation}
This is of the general form anticipated in Section~\ref{sec_motivation},
with $\alpha_c={\pi \over 2}$.
Indeed, for $D=4$, the Schr\"odinger equation
(\ref{Schrodinger}) becomes an equation with the singular potential
\begin{equation}
V(r)=-\frac{\eta}{r^2},\quad \eta=\frac{\alpha}{\pi/3}.
\end{equation}
Again, it is known from standard quantum mechanics\cite{quant2} that the spectrum
of bound states for such a potential depends on the
strength $\eta$ of the potential: it has an infinite number
of bound states with $E<0$ if $\eta>1$ and bound states are
absent if $\eta<1$. Thus, the true critical value for the coupling is
expected to be $\alpha_c=\pi/3$ instead of the $\alpha_c=\pi/2$
obtained with the help of the variational method (which made use of the
exponential Ansatz for the wavefunction and thus only gave an upper bound
for the energy eigenvalue).
\end{appendix}
\setcounter{equation}{0}
\makeatletter
\renewcommand\theequation{B\arabic{equation}}
\makeatother
\begin{appendix}
\begin{center}
{\bf APPENDIX B: CHIRALLY SYMMETRIC QED FROM THE LANDAU-KHALATNIKOV TRANSFORMATION}
\end{center}
Because the CP vertex in the chirally symmetric phase of
QED is gauge-covariant~\cite{dongroberts}
it is possible to derive an integral representation of the wavefunction
renormalization function ${\cal Z}(x)$ [see Eq.~(\ref{eq: massless a})] from the
Landau-Khalatnikov transformation~\cite{LaKh}. This transformation
relates the coordinate space propagator $\tilde S^\xi(u)$ in one gauge to the propagator
in a different gauge. Specifically, with covariant gauge fixing, we have
\begin{equation}
\tilde S^\xi(u) \> = \> e^{4 \pi \alpha \nu^{2 \epsilon} [\Delta(0) - \Delta(u)]}
\tilde S^{\xi=0}(u)
\end{equation}
where $\Delta(u)$ is essentially the Fourier transform of the gauge-dependent
part of the photon propagator, i.e.
\begin{equation}
\Delta(u) \> = \> -\xi \int {d^Dk \over (2 \pi)^D}
{e^{-i k\cdot u} \over k^4}\;\;\;.
\end{equation}
Specifically, we obtain
\begin{equation}
\tilde S^\xi(u) \> = \> e^{-r (\nu\> u)^{2 \epsilon}} \tilde S^{\xi=0}(u)
\label{eq: LK in D dim}
\end{equation}
where
\begin{equation}
r \> = \>- {\alpha \over 4 \pi} \> \Gamma(-\epsilon) (\pi)^\epsilon \> \xi\;\;.
\end{equation}
Substituting the coordinate-space propagator in Landau gauge, i.e.
\begin{eqnarray}
\tilde S^{\xi=0}(u) &=& \int {d^Dp \over (2 \pi)^D} {e^{i p \cdot u} \over {p \hspace{-5pt}/}} \nonumber \\
&=&{i \over 2\> \pi^{D/2}} \> \Gamma \left ({D \over 2}\right ) {{u \hspace{-5pt}/} \over u^D}
\;\;\;,
\end{eqnarray}
and carrying out the inverse Fourier transform of Eq.~(\ref{eq: LK in D dim})
one obtains the wavefunction renormalization function in an arbitrary gauge,
namely
\begin{eqnarray}
{\cal Z}(x) &=& -{i \over 2\> \pi^{D/2}} \> \Gamma\left ({D \over 2}\right )
\int d^Du e^{i p \cdot u} \>{p \cdot u \over u^D}
e^{-r (\nu\> u)^{2 \epsilon}} \nonumber \\
&=& x^{\epsilon \over 2} \> 2^{1 - \epsilon} \> \Gamma(2-\epsilon)
\int_0^\infty du\> u^{\epsilon-1} e^{-r u^{2 \epsilon}} J_{2 - \epsilon}({\sqrt x} \> u)
\;\;\;.
\label{eq: res2}
\end{eqnarray}
Note that for small $x$ this function vanishes:
\begin{equation}
{\cal Z}(x) \>=\> { \Gamma \left({1 \over \epsilon}\right)
\over 4 \epsilon \> (2 - \epsilon)} \> r^{-{1 \over \epsilon}} \> x \> + \>
O\left( x^2 \right)\;\;\;.
\end{equation}
It may be checked explicitly that
Eq.~(\ref{eq: res2}) is indeed a solution to
Eq.~(\ref{eq: massless a}) for
arbitrary $D$ by making use of the expansion of Eq.~(\ref{eq: res2}) around
$x^{-\epsilon}=0$. To be more precise, consider the RHS of
Eq.~(\ref{eq: massless a}) upon insertion of the power $y^\delta$ in the place
of ${\cal Z}(y)$. Note that the integral converges only if
$\epsilon > \delta > \epsilon - 2$.
After some work the
result is that the R.H.S. of Eq.~(\ref{eq: massless a}) becomes
\begin{eqnarray}
1&+&\tilde c {2 - \epsilon \over 2} x^{\delta-\epsilon} \left [
-(1+\delta - \epsilon) {\Gamma(2-\epsilon) \over \Gamma(\epsilon)}
\sum_{n=-\infty}^{\infty} {\Gamma(\epsilon+n) \over \Gamma(2 - \epsilon+n)}
{1 \over n - \delta + \epsilon} \right ].
\label{insertdelta}
\end{eqnarray}
For $\epsilon < 1$ this may be simplified further by applying
Dougall's formula (Eq. 1.4.1 in \cite{Bateman}) which, in this case,
reduces to
\begin{equation}
\sum_{n=-\infty}^{\infty} {\Gamma(\epsilon+n) \over \Gamma(2 - \epsilon+n)}
{1 \over n - \delta + \epsilon} \> = \> {\pi^2 \over \sin (\pi \epsilon)
\sin (\pi [\epsilon - \delta])}{1 \over \Gamma(1-\delta) \Gamma(2 + \delta -
2 \epsilon)}\;\;\;.
\end{equation}
Using this result, Eq.~(\ref{insertdelta}) becomes
\begin{equation}
1 \> - \> \tilde c {2 - \epsilon \over 2} x^{\delta-\epsilon}
\frac{\Gamma(1-\epsilon)\Gamma(2-\epsilon)\Gamma(\epsilon-\delta)
\Gamma(2+\epsilon-\delta)} {\Gamma(2+\delta-2\epsilon)\Gamma(1-\delta)}.
\end{equation}
Note that, as opposed to the integral representation Eq.~(\ref{eq: massless a}),
this expression is defined for $\delta$ outside the range
$\epsilon > \delta > \epsilon - 2$ and so we may use it as an analytical
continuation of the integral. Furthermore, note that this last expression
vanishes for integer $\delta\geq1$, and hence we cannot obtain a simple power expansion
around $x=0$ for ${\cal Z}(x)$ in this way.
On the other hand, an expansion in powers of $x^{-\epsilon}$ is possible.
If we seek a solution of the form
\begin{equation}
{\cal Z}(x) = \sum\limits_{n=0}^{\infty} c_n x^{-n \epsilon}\;\;\;,
\label{series}
\end{equation}
we may equate the coefficients of equal powers of $x^{-\epsilon}$ after inserting
the series (\ref{series}) into both sides of Eq.(\ref{eq: massless
a}). This way we obtain the recurrence relation for
the coefficients $c_n$ ($c_0=1$) as
\begin{equation}
{c_{n+1} \over c_n} = {\tilde c \over 2 (n+1)} \Gamma(-\epsilon)
\Gamma(3-\epsilon)
{\Gamma(1+\epsilon n+\epsilon) \Gamma(2 - \epsilon-\epsilon n) \over
\Gamma(2 - 2 \epsilon - \epsilon n) \Gamma(1+\epsilon n)}\;\;\;.
\label{eq:recursion1}
\end{equation}
This may be solved leading
to
\begin{equation}
c_n=[{\tilde c\over2}\Gamma(-\epsilon)\Gamma(3-\epsilon)]^n
\frac{\Gamma(2-\epsilon)\Gamma(1+n\epsilon)}{\Gamma(2-\epsilon-
n\epsilon)n!}\;\;\;,
\end{equation}
so that finally we obtain
\begin{equation}
{\cal Z}(x)=\Gamma(2-\epsilon) \sum\limits_{n=0}^\infty\frac
{\Gamma(1+n\epsilon)}{\Gamma(2-\epsilon- n\epsilon)n!}\left[{\tilde
c\Gamma(-\epsilon)\Gamma(3-\epsilon)\over2}x^{-\epsilon}\right]^n
\end{equation}
as the series expansion of the solution to
Eq.~(\ref{eq: massless a}). The reader may check that this
coincides precisely with the corresponding expansion of
the solution obtained via the Landau-Khalatnikov transformations
[Eq.~(\ref{eq: res2})]. The latter may be obtained by
changing the variable of integration from $u$ to $u/\sqrt x$,
expanding the exponential in the integrand and making use
of the standard integral
\begin{equation}
\int\limits_0^\infty x^{\alpha-1}J_\nu(cx)\>dx\>=\>2^{\alpha-1}c^{-\alpha}
\frac{\Gamma\left({\alpha+\nu\over 2}\right)}
{\Gamma\left (1+{\nu-\alpha\over 2}\right )}\;\;\;.
\end{equation}
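As a final consistency check, the closed form for $c_n$ can be tested against the recurrence relation Eq.~(\ref{eq:recursion1}) numerically; the values of $\epsilon$ and $\tilde c$ below are arbitrary illustrative choices.

```python
from math import gamma, factorial

eps, c_tilde = 0.3, 0.7  # illustrative values with 0 < eps < 1
t = 0.5 * c_tilde * gamma(-eps) * gamma(3.0 - eps)

def c(n):
    # closed-form series coefficient of Z(x)
    return t ** n * gamma(2.0 - eps) * gamma(1.0 + n * eps) / (
        gamma(2.0 - eps - n * eps) * factorial(n))

def ratio(n):
    # right-hand side of the recurrence Eq. (recursion1): c_{n+1} / c_n
    return (0.5 * c_tilde / (n + 1)) * gamma(-eps) * gamma(3.0 - eps) \
        * gamma(1.0 + n * eps + eps) * gamma(2.0 - eps - n * eps) \
        / (gamma(2.0 - 2.0 * eps - n * eps) * gamma(1.0 + n * eps))

err = max(abs(c(n + 1) / c(n) - ratio(n)) for n in range(5))
```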
\end{appendix}
\section{Introduction}
\label{sec:Sec1}
In robotics, one is often tasked with designing algorithms to coordinate teams of robots to achieve a task. Some examples of such tasks are formation flying \cite{desai2001modeling,khan2019graph,alonso2015multi}, perimeter defense \cite{shishika2018local}, and surveillance \cite{saldana2016dynamic}. In this paper, we concern ourselves with scenarios where a team of homogeneous or identical robots must execute a set of identical tasks such that each robot executes only one task, but it does not matter which robot executes which task. Concretely, this paper studies the concurrent goal assignment and trajectory planning problem, where robots must simultaneously assign goals and plan motion primitives to reach the assigned goals. This is the unlabelled multi-robot planning problem, where one must simultaneously solve for goal assignment and trajectory optimization.
In the past, several methods have been proposed to achieve polynomial-time solutions for the unlabelled motion planning problem \cite{adler2015efficient, macalpine2015scram,yu2012distance,turpin2014capt}. The common theme among these methods is the design of a heuristic best suited for the robots and the environment. For example \cite{turpin2014capt} minimizes straight line distances and solves for the optimal assignment using the Hungarian algorithm.
However, when additional constraints are imposed, such as constraints on dynamics, the presence of obstacles in the space, or desired goal orientations, minimizing a simple heuristic is no longer the optimal solution. In fact, one can think of the constrained robot-to-goal matching problem as an instance of the minimum cost perfect matching problem with conflict pair constraints (MCPMPC), which is known to be strongly $\mathcal{NP}$-hard \cite{darmann2011paths}.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.40]{fig/gpg_capt.png}
\caption{\textbf{Graph Policy Gradients for Unlabeled Motion Planning.} $\mathbf{N}$ robots and $\mathbf{N}$ goals are randomly initialized in some space. To define the graph, each robot is a node and is connected to its $\mathbf{n}$ nearest robots. Each robot observes relative positions of the $\mathbf{m}$ nearest goals and other entities within a sensing region. Information from K-hop neighbors is aggregated at each node by learning local filters. Local features along with the robot's own features are then used to learn policies that produce the desired coverage behavior. \label{fig:mainfig}}
\end{figure}
In contrast to previous literature that relies on carefully designed but ultimately brittle heuristic functions, we hypothesize using reinforcement learning (RL) to compute an approximate solution for the assignment and planning problem without relaxing any constraints. When using RL to learn policies for the robots, it might be possible to simply use a cost function that is independent of any dynamics or constraints, i.e.\ a cost function that is set to a high value if all goals are covered and zero otherwise. The idea of casting the unlabelled motion planning problem as a multi-agent reinforcement learning (MARL) problem has been explored before by Khan et al.\ \cite{khan2019learning}. They propose learning individual policies for each robot which are trained by using a central Q-network that coordinates information among all robots. However, the authors of \cite{khan2019learning} themselves note that the key drawback of their method is that it does not scale as the number of robots increases. Further, there is a huge computational overhead associated with training individual policies for each robot.
There exist two key obstacles in learning policies that scale with the number of robots: increase in dimensionality and partial observability. Consider an environment with $\mathbf{N}$ robots (this paper uses bold font to denote collections of items, vectors and matrices). In a truly decentralized setting, each robot can at best only partially sense its environment. In order for a robot to learn a meaningful control policy, it must communicate with some subset of all agents, $\mathbf{n} \subseteq \mathbf{N}$. Finding the right subset of neighbors to learn from is in itself a research problem.
Additionally, to ensure scalability, as the number of robots increases, one needs to ensure that the cardinality $|\mathbf{n}|$ of the subset of neighbors that each robot must interact with remains fixed or grows very slowly.
To achieve scalable multi-robot motion planning, we look to exploit the inherent graph structure among the robots and learn policies using local features only. We hypothesize that graph convolutional neural networks (GCNs) \cite{kipf2016semi,wu2019comprehensive} are a good candidate to parametrize policies for the robots, as opposed to the fully connected networks used in \cite{khan2019learning}. GCNs work similarly to convolutional neural networks (CNNs) and can be seen as an extension of CNNs to data placed on irregular graphs instead of a two-dimensional grid such as an image. A brief overview of the workings of a GCN is provided in Sec.~\ref{sec:GCNs}.
We define a graph $\mathcal{G} = (\mathbf{V},\mathbf{E})$ where $\mathbf{V}$ is the set of nodes representing the robots and $\mathbf{E}$ is the set of edges defining relationships between them. These relationships can be arbitrary and are user defined; for example, in this work we define edges between robots based on the Euclidean distance between them. This graph acts as a support for the data vector $\textbf{x}=[\mathbf{x}_1,\ldots,\mathbf{x}_N]^\top$, where $\mathbf{x}_n$ is the state representation of robot $n$. The GCN consists of multiple layers of graph filters, and at each layer the graph filters extract local information from a node's neighbors, similar to how CNNs learn filters that extract features from a neighborhood of pixels. The information is propagated forward between layers after passing through a non-linear activation function, similar to how one would propagate information in a CNN. The output of the final layer is given by $\Pi = [\pi_1,\ldots,\pi_N]$, where $\pi_1,\ldots,\pi_N$ are independent control policies for the robots. During training, each robot rolls out its own trajectory by executing its respective policy. Each robot also collects a centralized reward, and the weights of the GCN are updated using policy gradient methods~\cite{sutton1998reinforcement}. Further, since the robots are homogeneous and the graph filters only learn local information, we circumvent the computational burden of training many robots by training the GCN on only a small number of robots and using the same filters across all robots during inference.
We call this algorithm Graph Policy Gradients (GPG) (Fig.~\ref{fig:mainfig}); it was first proposed in our earlier work \cite{khan2019graph}, where it was used to control swarms of robots with designated goals. In contrast, this paper focuses on using GPG for simultaneous goal assignment and trajectory optimization. Through varied simulations, we demonstrate that GPG provides a suitable solution that extends the learning formulation of unlabeled motion planning to scale to many robots.
\section{Problem Formulation}
\label{sec:problem_formulation}
Consider a Euclidean space $\mathbb{R}^2$ populated with $\mathbf{N}$ homogeneous disk-shaped robots and $\mathbf{N}$ goals of radius $\text{R}$. Goal location $\mathbf{g}_m$ is represented as a vector of its position in the XY plane, i.e $\mathbf{g}_m = [x_m,y_m]$. Robot position at time $t$ is denoted as $\mathbf{p}_{nt}$ and is represented as a vector of its position in the XY plane, i.e $\mathbf{p}_{nt} = [x_{nt},y_{nt}]$. Each robot observes its relative position to $\mathbf{n}$ nearest goals where $\mathbf{n}$ is a parameter set by the user and depends on the task. Each robot also observes its relative position to $\mathbf{\hat{n}}$ nearest robots in order to avoid collisions. In the case that obstacles are present, each obstacle is represented by its position in the XY plane $\mathbf{o}_k = (x_k,y_k)$.
Each robot also observes its relative position to $\mathbf{\tilde{n}}$ nearest obstacles. We motivate this choice of feature representation in Sec.\ref{subsec:permequi}. Thus, the state of robot $n$ at time $t$ is given as:
\begin{equation}
\mathbf{x}_{nt} := [\mathbf{G}_{\mathbf{n}t},\mathbf{R}_{\mathbf{\hat{n}}t},\mathbf{O}_{\mathbf{\tilde{n}}t}]
\end{equation}
where $\mathbf{G}_{\mathbf{n}t}$ is the vector that gives relative positions of robot $n$ at time $t$ to the $\mathbf{n}$ nearest goals, $\mathbf{R}_{\mathbf{\hat{n}}t}$ is the vector that gives relative positions of robot $n$ at time $t$ to the $\mathbf{\hat{n}}$ nearest robots, and $\mathbf{O}_{\mathbf{\tilde{n}}t}$ is the vector that gives relative positions of robot $n$ at time $t$ to the $\mathbf{\tilde{n}}$ nearest obstacles. At each time $t$, each robot executes an action $\mathbf{a}_{nt} \in \mathcal{A}$ that evolves the state of the robot according to some stationary dynamics distribution with conditional density $p(\mathbf{x}_{n t+1}|\mathbf{x}_{nt},\mathbf{a}_{nt})$. In this work, all actions represent continuous control (change in position or change in velocity). Collision avoidance and goal coverage conditions are encoded to yield the desired collision-free unlabelled motion planning behavior in which all goals are covered. The necessary and sufficient condition to ensure collision avoidance between robots is given as:
\begin{equation}
\label{eq:collision_avoidance}
E_c(\mathbf{p}_{it},\mathbf{p}_{jt}) > \delta, \\ \forall i \neq j \in \{1,\ldots \mathbf{N}\}, \forall {t}
\end{equation}
where $E_c$ is the Euclidean distance and $\delta$ is a user-defined minimum separation between robots. We define the assignment matrix $\phi(t) \in \mathbb{R}^{\mathbf{N} \times \mathbf{N}}$ as
\begin{equation}
\phi_{ij}(t) =
\begin{cases}
1, &\text{if } E_c(\mathbf{p}_i(t),\mathbf{g}_j) \leq \psi \\
0, &\text{otherwise}
\end{cases}
\end{equation}
where $\psi$ is some threshold region of acceptance. The necessary and sufficient condition for all goals to be covered by robots at some time $t=T$ is then:
\begin{equation}
\label{eq:stopping}
\phi(T)^\top\phi(T) = \textbf{I}_{\mathbf{N}}
\end{equation}
where $\textbf{I}$ is the identity matrix. Lastly, since we are interested in exploiting local symmetry between robots, we define a graph $\mathcal{G}=(\mathbf{V},\mathbf{E})$ where the set of vertices $\mathbf{V}$ represents all the robots. An edge $e \in \mathbf{E}$ is said to exist between two vertices, if:
\begin{equation}
\label{eq:graph_node}
E_c(\mathbf{p}_{i},\mathbf{p}_{j}) \leq \lambda, \\ \forall i \neq j \in \{1,\ldots \mathbf{N}\}
\end{equation}
where $\lambda$ is some user-defined threshold for connecting two robots. Robots cannot communicate with arbitrary other robots: robot $n$ can directly communicate only with its one-hop neighbors given by $\mathcal{G}$. In order to communicate with robots further away, the communication must be indirect. Robots are given access to this graph and to their own state $\mathbf{x}_{nt}$. The problem statement considered in this paper can then be formulated as:
\begin{problem}
\label{prob:prob1}
\textit{Given a set of $\mathbf{N}$ robots with some initial configurations, a set of $\mathbf{N}$ goals $\mathbf{g}$, and a graph $\mathcal{G}$ defining relationships between the robots, compute functions $\Pi = [\pi_1,\ldots,\pi_{\mathbf{N}}]$ such that executing the actions $\{\mathbf{a}_{1t},\ldots,\mathbf{a}_{\mathbf{N}t}\}= \{\pi_1(\mathbf{a}_{1t}|\mathbf{x}_{1t},\mathcal{G}),\ldots,\pi_{\mathbf{N}}(\mathbf{a}_{\mathbf{N}t}|\mathbf{x}_{\mathbf{N}t},\mathcal{G})\}$ results in a sequence of states for the robots that satisfies Eq.~\ref{eq:collision_avoidance} for all time and, at some stopping time $t=T$, satisfies the assignment constraint in Eq.~\ref{eq:stopping}.}
\end{problem}
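The quantities entering Problem~\ref{prob:prob1}, namely the assignment matrix, the stopping condition of Eq.~\ref{eq:stopping}, and the threshold graph of Eq.~\ref{eq:graph_node}, reduce to a few lines of numpy. The positions and thresholds below are made-up toy values, not ones used in our experiments.

```python
import numpy as np

# Toy configuration: 3 robots sitting near 3 goals, in shuffled order.
robots = np.array([[0.0, 0.0], [2.0, 1.0], [5.0, 5.0]])
goals = np.array([[2.0, 1.1], [4.9, 5.0], [0.1, 0.0]])
psi, lam = 0.2, 4.0  # acceptance radius and graph-connection threshold

# Assignment matrix: phi[i, j] = 1 iff robot i is within psi of goal j.
dists = np.linalg.norm(robots[:, None, :] - goals[None, :, :], axis=-1)
phi = (dists <= psi).astype(int)

# Stopping condition of Eq. (stopping): phi^T phi = I (a perfect matching).
covered = np.array_equal(phi.T @ phi, np.eye(3, dtype=int))

# Graph of Eq. (graph_node): edge (i, j) iff 0 < ||p_i - p_j|| <= lambda.
rr = np.linalg.norm(robots[:, None, :] - robots[None, :, :], axis=-1)
adj = ((rr <= lam) & (rr > 0)).astype(float)
```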
\section{Graph Convolutional Networks}
\label{sec:GCNs}
A graph $\mathcal{G}=({\mathbf{V},\mathbf{E}})$ is described by a set of $\mathbf{N}$ nodes and a set of edges $\mathbf{E} \subseteq \mathbf{V} \times \mathbf{V}$. This graph can be represented by a graph shift operator $\mathbf{S}$, which respects the sparsity of the graph, i.e.\ $s_{ij} = [\mathbf{S}]_{ij}=0$, $\forall$ $i\neq j \text{ and } (i,j) \notin \mathbf{E}$. Adjacency matrices, graph Laplacians and their normalized versions are some examples of graph shift operators which satisfy this sparsity property. At each node, there exists a data signal $\mathbf{x}_n$. Collectively, define $\textbf{x}=[\mathbf{x}_1,\ldots,\mathbf{x}_N]^\top \in \mathbb{R}^{\mathbf{N}}$ as the signal whose support is given by $\mathcal{G}$.
$\mathbf{S}$ can be used to define a linear map $\mathbf{y}=\mathbf{S}\textbf{x}$ which represents local information exchange between a given node and its neighbors. For example, consider a node $n$ with immediate or one-hop neighbors defined by the set $\mathfrak{B}_n$. The following equation then gives us a simple aggregation of information at node $n$ from its one-hop neighbors $\mathfrak{B}_n$.
\begin{equation}\label{eq:graph_signal}
y_n = [\mathbf{S}\mathbf{x}]_n =\sum_{j\in \mathfrak{B}_n \cup \{n\}}s_{nj}x_j
\end{equation}
By repeating this operation over all nodes in the graph, one can construct the signal $\mathbf{y}=[y_1,\ldots,y_{\mathbf{N}}]$. Recursive application of this operation yields information from nodes further away, i.e.\ the operation $\mathbf{y}^k = \mathbf{S}^k\mathbf{x} = \mathbf{S}(\mathbf{S}^{k-1}\mathbf{x})$ aggregates information from nodes that are $k$ hops away. Graph convolutional filters can then be defined as polynomials in $\mathbf{S}$:
\begin{equation}\label{eq:z}
\mathbf{z} = \sum_{k=0}^{K} h_k \mathbf{S}^k \mathbf{x} = \mathbf{H(S)x}
\end{equation}
Here, the graph convolution is localized to a $K$-hop neighborhood for each node. The output from the graph convolutional filter is fed into a pointwise non-linear activation function $\sigma$ akin to convolutional neural networks.
\begin{equation}
\label{eq:finalapproxform}
\mathbf{z} = \sigma(\mathbf{H(S)x})
\end{equation}
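Eqs.~(\ref{eq:z}) and (\ref{eq:finalapproxform}) amount to only a few lines of numpy; in the sketch below the graph, the signal and the filter taps are all illustrative choices.

```python
import numpy as np

# Toy graph shift operator: adjacency matrix of a 4-node path graph.
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])  # signal supported on node 0

# Graph convolution z = sigma(sum_k h_k S^k x) with K = 2.
h = [0.5, 0.3, 0.2]  # filter taps (illustrative values)
z_lin = sum(hk * np.linalg.matrix_power(S, k) @ x for k, hk in enumerate(h))
z = np.tanh(z_lin)  # pointwise non-linearity
```

Note that node 3, which is three hops away from node 0, receives no contribution from a $K=2$ filter; this locality is what allows the same filter taps to be reused across graphs of any size.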
A graph convolution network can then be composed by stacking several of these layers together, as seen in Fig.~\ref{fig:gnnfig}.
\begin{figure}[hbt!]
\centering
\includegraphics[scale=0.25]{fig/gnn.png}
\caption{\textbf{Graph Convolutional Networks.} Each layer consists of chosen graph filters with pointwise non linearities. \label{fig:gnnfig}}
\end{figure}
\subsection{Permutation Equivariance}
\label{subsec:permequi}
On-policy RL methods to train policies for robots such as the ones proposed in \cite{khan2019learning} collect trajectories from the current policy and then use these samples to update the current policy. The initial policy for all robots is a random policy.
This is the exploration regime in RL. As robots randomly explore a space during training, it is highly likely that the underlying graph $\mathcal{G}$ connecting robots based on nearest neighbors will change, i.e., the graph at $t=0$, say $\mathcal{G}_0$, may be significantly different from the graph at $t=T$, say $\mathcal{G}_T$. We noted in Section \ref{sec:Sec1} that one of the key obstacles to learning scalable policies is dimensionality. At first glance, it seems that providing the underlying graph structure would only exacerbate the problem of dimensionality and would result in a large number of graphs that one needs to learn over as the number of robots is increased.
However, we use a key result from \cite{gama2019stability} that minimizes the number of graphs one must learn over.
The authors of \cite{gama2019stability} show that as long as the topology of the underlying graph is fixed, the output of the graph convolution remains the same even under node reordering. More concretely, given the set of permutation matrices $\bm{\mathcal{P}} = \{\mathbf{{P}}\in \{0,1\}^{\mathbf{N} \times \mathbf{N}}\ : \mathbf{{P1=1}},\mathbf{{P}^\top1 =1}\}$, such that the operation $\mathbf{P}\mathbf{x}$ permutes the elements of the vector $\mathbf{x}$, it can be shown that
\begin{theorem}\label{theorem:permutationequivariance}
Given a graph $\mathcal{G}=(\mathbf{V},\mathbf{E})$ defined with a graph shift operator $\mathbf{S}$ and $\hat{\mathcal{G}}$ to be the permuted graph with $\mathbf{\hat{S}} = \mathbf{{P}}^{\top} \mathbf{S} \mathbf{{P}}$ for $\mathbf{{P}} \in \bm{\mathcal{P}}$ and any $\mathbf{x} \in \mathbb{R}^{\mathbf{N}}$ it holds that :
\begin{equation}
\mathbf{H}(\hat{\mathbf{S}})\mathbf{P}^{\top}\mathbf{x} = \mathbf{P}^{\top} \mathbf{H(S)x}
\end{equation}
\end{theorem}
We direct the reader to \cite{gama2019stability} for the proof of Theorem \ref{theorem:permutationequivariance}. The implication of Theorem \ref{theorem:permutationequivariance} is that if the graph exhibits several nodes that have the same graph neighborhoods, then the graph convolution
filter can be translated to every other node with the same neighborhood. We use this key property to reduce the number of training episodes one needs to learn over. In this work, in order to keep the topology constant, we assume that the number of nearest neighbors each robot is connected to remains fixed during training and inference.
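Theorem \ref{theorem:permutationequivariance} can also be checked numerically. The following sketch (illustrative only, with a random symmetric shift operator, random signal and assumed filter taps) verifies that permuting the shift operator and the input commutes with the graph filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical check of permutation equivariance: H(P^T S P) P^T x == P^T H(S) x.
N = 5
S = rng.random((N, N)); S = S + S.T          # symmetric graph shift operator
x = rng.random(N)
h = [0.5, 0.3, 0.2]                          # illustrative filter taps

def H(S, x, h):
    # Polynomial graph filter: sum_k h_k S^k x
    return sum(hk * np.linalg.matrix_power(S, k) @ x for k, hk in enumerate(h))

P = np.eye(N)[rng.permutation(N)]            # random permutation matrix
S_hat = P.T @ S @ P                          # permuted shift operator

lhs = H(S_hat, P.T @ x, h)
rhs = P.T @ H(S, x, h)
print(np.allclose(lhs, rhs))                 # True
```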
\section{Graph Policy Gradients for Unlabelled Motion Planning}
The unlabelled motion planning problem outlined in Sec. \ref{sec:problem_formulation} can be recast as a reinforcement learning problem. The constraints in Eq. \ref{eq:collision_avoidance} and Eq. \ref{eq:stopping} can be captured by a centralized reward scheme that yields a positive reward only if all goals are covered by robots, a negative reward in case of collisions, and zero reward otherwise.
\begin{equation}
\label{eq:rewardstruc}
r(t) =
\begin{cases}
\alpha &\text{if } \phi(t)^\top\phi(t) = \textbf{I}_{\mathbf{N}}\\
-\beta, &\text{if } \text{any collisions}\\
0 &\text{otherwise}
\end{cases}
\end{equation}
where $\alpha$ and $\beta$ are scalar positive values. Each robot receives this centralized reward. This reward scheme is independent of any constraints on obstacles, dynamics or other constraints on the robots. Let $\Pi=[\pi_1,\ldots,\pi_{\mathbf{N}}]$. Then an approximate solution for \textbf{Problem \ref{prob:prob1}} can be computed
by optimizing for the following loss function:
\begin{equation}
J = \sum_{n=1}^{\mathbf{N}} \max_{\theta} \mathbb{E}_{\Pi}\bigg[\sum_{t}^T r_t\bigg] \enspace
\end{equation}
where $\theta$ represents the parametrization for $\Pi$. In practice we optimize for a $\gamma$ discounted reward function \cite{sutton1998reinforcement}.
We use graph convolutions to learn filters that aggregate information locally at each robot. $L$ layers of the GCN are used to aggregate information from neighbors up to $L$ hops away, as given in Eqn. \ref{eq:finalapproxform}. At every time step $t$, the input to the first layer $z^{0}$ is the vector of robot states stacked together, i.e., $z^{0}=\textbf{x}_t = [\mathbf{x}_1,\ldots,\mathbf{x}_{\mathbf{N}}]$. The output at the final layer at any time $t$, $z^{L} =\Pi = [\pi_1,\ldots,\pi_{\mathbf{N}}]$, represents the independent policies for the robots from which actions are sampled.
During training, the weights of the graph filters or the GCN $\theta$, are randomly initialized. Constant graph topology is ensured by fixing the graph at start time. This graph is not varied as the policy is rolled out since the resulting graph convolution yields an equivalent result according to theorem \ref{theorem:permutationequivariance}.
Each robot rolls out a trajectory $\tau=(\mathbf{x_0},\mathbf{a_0},\ldots,\mathbf{x_T},\mathbf{a_T})$ and collects a reward. It is important to note that this reward is the same for all the robots. The policies for the robots are assumed to be independent and as a consequence the policy gradient $\nabla_{\theta}J$ can be computed directly and is given as:
\begin{equation}
\label{eq:policygradient}\begin{split}
\mathbb{E}_{\tau \sim (\pi_1,\ldots,\pi_{\mathbf{N}})}\Bigg[\Big(\sum_{t=1}^T\nabla_{\theta} \log[\pi_1(.)\ldots \pi_{\mathbf{N}}(.)]\Big) \Big(\sum_{t=1}^T r_t \Big) \Bigg]
\end{split}
\end{equation}
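A minimal sketch of this score-function estimator, assuming (for illustration only, not the paper's parametrization) independent one-dimensional Gaussian policies with unit variance, for which $\nabla_{\theta_n} \log \pi_n(a) = a - \theta_n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# REINFORCE-style estimate of the policy gradient for N robots with
# independent Gaussian policies a ~ N(theta_n, 1), over one rollout.
N, T = 3, 5
theta = np.zeros(N)                        # one policy parameter per robot

actions = rng.normal(theta, 1.0, size=(T, N))  # sampled actions over a rollout
rewards = rng.random(T)                        # centralized reward r_t

# grad log of the joint policy factorizes over robots:
# sum_t grad_theta_n log pi_n(a_tn) = sum_t (a_tn - theta_n)
grad_log = (actions - theta).sum(axis=0)
grad_J = grad_log * rewards.sum()          # (sum_t grad log pi) * (sum_t r_t)
print(grad_J.shape)                        # (3,): one gradient entry per robot
```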
To achieve results that are scalable, we train the filters with a smaller number of robots. This has the additional benefit of reducing the number of episodes required for the policy to converge. During inference, we test with a larger number of robots and execute the learned graph filter over all the robots. Since the filters learn only local information, they are invariant to the number of robots. This is analogous to CNNs, where once a filter's weights have been learned, local features can be extracted from an image of any size by simply sliding the filter all over the image. It is important to note that the graph filter only needs centralized information during training. During testing, this solution is completely decentralized, as each robot only has a copy of the trained filter and uses it to compute policies that enable all goals to be covered.
We call this algorithm Graph Policy Gradients (GPG). In the next section, we demonstrate the use of these graph convolutions to learn meaningful policies for different versions of the unlabelled motion planning problem.
\section{Experiments}
To test the efficacy of GPG on the unlabelled motion planning problem, we set up a few experiments in simulation. To design a reward function that forces robots to cover all goals, we pick the following scheme. For each goal, we compute the distance to its nearest robot. The maximum among these distances is multiplied by $-1$ and added to the reward. We denote this term as $r_G(t)$; it represents the part of the reward function that forces all goals to be covered. However, this does not account for collisions. To account for collisions, we compute the distance between all robots and add a negative scalar to the reward if the distance between any pair of robots is less than a designed threshold. This part of the reward function is denoted as $r_R(t)$. In the case that the environment is also populated with obstacles, we follow a similar scheme and denote this part of the reward as $r_O(t)$. The overall reward at any time $t$ is then given as a weighted sum of these rewards
\begin{equation}
r(t) = w_1r_G(t) + w_2r_R(t) + w_3 r_O(t)
\end{equation}
where the weights $w_1, w_2$ and $w_3$ balance goal following, robot-robot collision avoidance and robot-obstacle collision avoidance. To test GPG, we establish four main experiments. \textbf{1)} Unlabelled motion planning with three, five and ten robots, where robots obey point mass dynamics. \textbf{2)} In the second experiment, GPG is tested on three, five and ten robots, but here robots obey single integrator dynamics. \textbf{3)} Here, the robots follow single integrator dynamics and, additionally, the environment is populated with disk shaped obstacles. \textbf{4)} In this experiment, the performance of GPG is tested against a model based, provably optimal, centralized solution for the unlabelled motion planning problem. We demonstrate empirically that the performance of GPG is almost always within a small margin of that of the model based method, but comes with the additional advantage of being decentralized.
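A minimal sketch of this weighted reward computation, with illustrative thresholds and weights (not the values used in the experiments):

```python
import numpy as np

# Sketch of r(t) = w1*rG + w2*rR + w3*rO for the unlabelled motion planning
# reward. Thresholds, penalties and weights are illustrative assumptions.
def reward(robots, goals, obstacles, w=(1.0, 1.0, 1.0),
           robot_thresh=0.1, obs_thresh=0.1):
    # rG: -1 * (largest distance from any goal to its nearest robot)
    d_goal = np.linalg.norm(goals[:, None] - robots[None, :], axis=-1)
    r_G = -d_goal.min(axis=1).max()

    # rR: penalty if any robot pair is closer than robot_thresh
    d_rr = np.linalg.norm(robots[:, None] - robots[None, :], axis=-1)
    np.fill_diagonal(d_rr, np.inf)
    r_R = -1.0 if (d_rr < robot_thresh).any() else 0.0

    # rO: penalty if any robot is closer than obs_thresh to an obstacle
    d_ro = np.linalg.norm(robots[:, None] - obstacles[None, :], axis=-1)
    r_O = -1.0 if (d_ro < obs_thresh).any() else 0.0

    return w[0] * r_G + w[1] * r_R + w[2] * r_O

robots = np.array([[0.0, 0.0], [1.0, 0.0]])
goals = np.array([[0.0, 0.0], [1.0, 0.0]])
obstacles = np.array([[5.0, 5.0]])
print(reward(robots, goals, obstacles))  # all goals covered, no collisions
```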
Closest to our work is that of \cite{khan2019learning} where the robot policies are parametrized by fully connected networks. Thus, to establish relevant baselines, we compare GPG with Vanilla Policy Gradients (VPG) where the policies for the robots are parameterized by fully connected networks (FCNs). Apart from the choice of policy parametrization there are no other significant differences between GPG and VPG.
\subsection{Experimental Details}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{fig/rew_curves.png}
\caption{\textbf{Training Curves for 3, 5 and 10 robots.} Policies trained by GPG are able to converge on experiments with point mass robots, experiments where robots follow single integrator dynamics and are velocity controlled, as well as experiments where disk shaped obstacles are present in the environment. Here, iteration refers to episodes. The darker line represents the mean and the shaded region represents the variance.}
\label{fig:rew_curves}
\end{figure*}
\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/formationfig.png}
\caption{\textbf{Transferring Learned GPG Filters for Large Scale Unlabelled Motion Planning.} (Left) A small number of robots are trained to cover goals and follow point mass dynamics. During testing the number of robots as well as the distance of the goals from the start positions are much greater than those seen during training. (Center) A similar experiment is performed but now robots have single integrator dynamics. (Right) In this experiment in addition to single integrator dynamics, the environment also has obstacles that robots must avoid.\label{fig:formationfig}}
\end{figure*}
For GPG, we set up an $L$-layer GCN depending on the experiment. For experiments involving 3 and 5 robots with point mass dynamics, we find a 2-layer GCN, i.e., a GCN that aggregates information from neighbors that are at most 2 hops away, to be adequate. For experiments with 10 robots, we find that GCNs with 4 layers work best. For the baseline VPG experiments, we experiment with 2-4 layers of FCNs. The maximum episode length is $200$ steps and the discount factor is $\gamma= 0.95$. In experiments with 3 robots, each robot senses the 2 nearest goals and 1 nearest robot. In experiments with 5 or more robots, robots sense the 2 nearest goals and 2 nearest robots. The graph $\mathcal{G}$ connects each robot to its $1$, $2$ and $3$ nearest neighbors in experiments with $3$, $5$ and $10$ robots, respectively.
\subsection{Experimental Results - Training}
The behavior of GPG v/s VPG during training can be observed in Fig. \ref{fig:rew_curves}. We observe that in all cases GPG is able to produce policies that converge close to the maximum possible reward (in all three cases the maximum possible reward is zero). When compared to the convergence plots of \cite{khan2019learning}, who first proposed the use of RL for the unlabelled motion planning problem, this represents a large improvement on training alone.
It can also be observed that GPG converges when robot dynamics are changed or obstacles are added to the environment. While this is not necessarily the optimal solution, it is an approximate solution to the unlabelled motion planning problem. The fully connected network policies represented by VPG in Fig. \ref{fig:rew_curves} fail to converge even on the simplest experiments.
\subsection{Experimental Results - Inference}
The previous section shows the feasibility of GPG as a method for training a large swarm of robots to approximately solve the unlabelled motion planning problem. However, training a large number of robots is still a bottleneck due to the randomness in the system. Training 10 robots with simple dynamics on a state-of-the-art NVIDIA 2080 Ti GPU with a 30-thread processor requires several hours (7-8). We see from our experiments that this time only grows exponentially as we increase the number of robots.
Thus, to overcome this hurdle and to truly achieve large scale solutions for the unlabelled motion planning problem, we hypothesize that since the graph filters learned by GPG only operate on local information, in a larger swarm one can simply slide the same graph filter everywhere in the swarm to compute policies for all robots without any extra training. Intuitively, this can be attributed to the fact that the topology of the graph does not change from training time to inference time, and the size of the filter remains the same. As stated before, this is akin to sliding a CNN filter over a larger image to extract local features after training it on small images.
To demonstrate the effect of GPG at inference time, we set up three simple experiments where we distribute goals along interesting formations. As described earlier, each robot only sees a certain number of closest goals, closest robots and, if present, closest obstacles. Our results can be seen in Fig. \ref{fig:formationfig}. The policies in Fig. \ref{fig:formationfig} (Left) and Fig. \ref{fig:formationfig} (Center) are produced by transferring policies trained to utilize information from 3-hop neighbors. In Fig. \ref{fig:formationfig} (Right), the policies are transferred after being trained to utilize information from 5-hop neighbors. Consider the formation shown in Fig. \ref{fig:formationfig} (Left). Here each robot receives information about 3 of its nearest goals, and these nearest goals overlap with its neighbors'. Further, since the goals are very far away and robots are initialized close to each other, a robot and its neighbor receive almost identical information. In such a scenario the robots must communicate with each other and ensure that they each pick a control action such that they do not collide into each other, and at the end of their trajectories all goals must be covered. The GPG filters learn this local coordination and can be extended to every robot in the swarm.
Thus, with these results it can be concluded that GPG is capable of learning solutions for the unlabelled motion planning problem that scale to a large number of robots.
\subsection{Comparison to Centralized Model Based Methods}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.25]{fig/timeplots.png}
\caption{\textbf{Time to Goals (Capt vs GPG)}. Time taken by Capt to cover all goals v/s time taken by GPG to cover all goals when robots follow velocity controls. The goals are arranged in different formations F1, F2 and F3 and differ from each other in terms of distance from the starting positions of the robots (on average goals in F3 > F2 > F1). \label{fig:timefig}}
\end{figure}
To further quantify the performance of GPG, we compare with a model based approach that uses centralized information, called concurrent assignment and planning of trajectories (CAPT), proposed in \cite{turpin2014capt}. The CAPT algorithm is a provably correct and complete algorithm, but needs centralized information. However, when used in an obstacle free environment, it guarantees collision free trajectories. We direct the reader to \cite{turpin2014capt} for more details about the CAPT algorithm. We set up three different formations F1, F2 and F3, similar to that in Fig. \ref{fig:formationfig} (Center). On average, the goals in F3 are further away from the starting positions of the robots than those in F2, and those in F2 are further away than the goals in F1. In this work, we treat CAPT as the oracle and look to compare how well GPG performs when compared to this oracle. We use time to goals as a metric to evaluate GPG against this centralized oracle. Our results can be seen in Fig. \ref{fig:timefig}.
The key takeaway from this experiment is that decentralized inference using GPG always performs within an $\epsilon$ margin (approximately 12-15 seconds) of the optimal solution, and this margin remains more or less constant even when the goals are further away and the number of robots is increased. Thus, we empirically demonstrate that GPG trades some measure of optimality in exchange for decentralized behavior, and that this trade-off remains more or less constant even as the number of robots is increased. Hence, with these experiments we conclude that GPG offers a viable solution for Problem \ref{prob:prob1}, is in fact scalable to many robots and, further, is very close in performance to the provably optimal centralized solution.
\section{Acknowledgements}
We gratefully acknowledge support from Semiconductor Research Corporation (SRC) and DARPA, ARL DCIST CRAW911NF-17-2-0181, and the Intel Science and Technology Center for Wireless Autonomous Systems (ISTC-WAS) and the Intel Student Ambassador Program.
\section{Conclusion}
In this paper, we look to achieve scalable solutions for the full unlabelled motion planning problem. In the recent past, RL approaches have been proposed to compute an approximate solution, but these do not scale. In this work, we propose connecting the robots with a naive graph and utilize this graph structure to generate policies from local information by employing GCNs. We show that these policies can be transferred over to a larger number of robots and that the computed solution is close to the solution computed by a centralized $\textit{oracle}$. One caveat of this paper is the problem of guaranteeing collision free trajectories. It might be possible to add a safety projection set such as that in \cite{khaniros} in order to guarantee collision free trajectories. We leave this for future work.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
Optical quantum memories are a key component for several quantum information applications. For example, they are used for synchronization of entanglement swapping in quantum repeaters, which enables long distance quantum communication \cite{Briegel1998,Kimble2008Jun,Wehner2018Oct}. They are also essential for signal synchronization in linear-optics-based quantum computing schemes \cite{Knill2001Jan}. The ability to store single photons for long times and retrieve them on-demand are some of the key requirements for a practical quantum memory \cite{Lvovsky2009Dec,Tittel2010Feb}.
Rare-earth ions are considered an attractive platform for realizing quantum memories. This is primarily due to their excellent optical and spin coherence properties at cryogenic temperatures \cite{Zhong2015Jan}. Furthermore, the inhomogeneously broadened optical transition provides another resource that can be spectrally tailored and used to realize strong light-matter coupling \cite{Nilsson2004,Sinclair2014Jul}. The atomic frequency comb (AFC) is one of the actively investigated quantum memory schemes used in rare-earth ion systems \cite{Afzelius2009May,deRiedmatten2008Dec,Afzelius2010Jan,Afzelius2010Aug,Sabooni2013Mar}. In the standard AFC scheme, light is stored as a collective optical excitation in an inhomogeneously broadened ensemble of ions. The ions are spectrally shaped into a series of narrow, highly absorbing peaks with a predefined frequency separation. The retrieval time of the stored excitation is predetermined by the frequency separation between the peaks. In order to enable on-demand retrieval, the scheme is combined with two bright control pulses: one to transfer the optical excitation to a spin level, and one to recall it back to the optical level, on-demand, where it continues to rephase \cite{Afzelius2009May,Afzelius2010Jan,Jobez2014Aug}.
It is, however, challenging to realize spin-wave storage in the single photon regime due to the excessive optical noise created by emission from ions excited by the strong spin control pulses \cite{Gundogan2015Jun,Timoney2013Aug,Jobez2015Jun,Bonarota2014Aug}. This emission could be incoherent fluorescence from ions off-resonantly excited by the control pulses. It could also be coherent emission which can take the following forms: (\romannum{1}) free induction decay (FID) emitted due to resonant excitation of background ions on the spin control pulse transition, (\romannum{2}) undesired echo emission due to off-resonant excitation of the AFC ensemble by the control field \cite{Timoney2013Aug}.
In this paper, we demonstrate how all coherent noise sources can be strongly suppressed by combining electric field effects at the nanoscale with spin-wave storage, which has the potential to improve single-photon storage performance.
In a previous work, Stark control was combined with the standard AFC scheme to realize noise-free and on-demand control without the need for spin transfer pulses \cite{Horvath2021May}. Furthermore, the Stark effect has previously been combined with photon echoes \cite{Meixner1992Sep,Wang1992May,Graf1997,Chaneliere2008Jun} and with spin echoes \cite{Arcangeli2016Jun}. The Stark effect is also used in the CRIB memory scheme, where gradient electric fields are applied macroscopically along the light propagation axis to coherently control the collective emission from a narrow ensemble of ions \cite{Nilsson2005Mar,Alexander2006Feb,Kraus2006Feb,Lauritzen2011Jan}. The scheme presented here is based on using the linear Stark effect to split the ion ensemble, within the nanoscale, into two electrically distinct ion classes that can be coherently controlled using electric field pulses. By applying an appropriate electric field pulse, coherent oscillations of the two ion classes are put 180$^{\circ}$ out of phase before the first spin control pulse is applied. This consequently suppresses any coherent emission, including the photon echo. The echo emission stays quenched after the spin control pulses are applied, until a second electric field pulse is applied. This second electric field pulse puts the stored collective excitation back in phase, and simultaneously switches off any coherent processes initiated in the time between the two electric field pulses, in particular coherent emission from the spin transfer pulses, which might otherwise interfere with the signal recalled from the memory. The region between the two electric field pulses, where the coherent optical noise is suppressed, will therefore be referred to as the Vegas region \footnote{What happens in Vegas, stays in Vegas!}.
To achieve spin-wave storage in the single photon regime with high signal-to-noise ratio, it is important to prevent everything created during this Vegas region from extending beyond it.
In addition, the electric field control provides more timing flexibility for applying the first spin control pulse without risking an echo re-emission during the spin transfer. The second electric pulse can also be tuned independently to delay the echo emission after the second spin control pulse, introducing one more degree of control to the spin-wave quantum memory scheme.
\section{Theory}
\label{sec:theory}
Part of the theory discussed in this section was already presented in Ref. \cite{Horvath2021May}, and we repeat it here for convenience. It should be noted that although we only describe how the electric field can be used to control the phase evolution of the memory part here, the theory can be generalised to include all other parts that contribute to coherent emissions. The permanent electric dipole moment of the ground state differs from that of the excited state in Pr$^{3+}$. As a consequence, when an external electric field, \textbf{E}, is applied across the crystal, it will Stark shift the resonance frequency of the ions by a magnitude $\Omega$, which is given by:
\begin{equation}
\Omega = \frac{\mathbf{\Delta\mu}\cdot\textrm{\textbf{E}}}{h},
\end{equation}
\noindent where $h$ is Planck's constant, and $\mathbf{\Delta\mu}$ is the difference in dipole moment between the ground state and the excited state. There are four possible orientations of $\mathbf{\Delta\mu}$ for Pr$^{3+}$ in \yso{}, all of them at an angle $\theta$ = 12.4$^{\circ}$ relative to the crystallographic $b$ axis as shown in Fig. \ref{fig:exp_setup} (a) \cite{Graf1997May}. For an electric field applied along the $b$ axis, the ions will split into two electrically distinct classes that experience the same magnitude of the Stark shift ($\Omega$), but with opposite signs. If the electric field is applied as a pulse with a finite duration, it will induce a phase shift of $+ \phi$ to one of the ion classes, and $- \phi$ to the other ion class, where $\phi$ is given by:
\begin{equation}
\phi = 2\pi \int \Omega dt.
\end{equation}
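As a numerical illustration (with assumed, purely illustrative values of $\Delta\mu$ and \textbf{E}, not taken from the experiment), a rectangular electric field pulse whose duration is chosen so that $\int \Omega\, dt = 1/4$ gives the two ion classes phases of $\pm\pi/2$, i.e., a relative phase difference of $\pi$:

```python
import numpy as np

# Stark phase accumulation for the two electrically inequivalent ion classes.
# delta_mu and E_field are illustrative assumptions, chosen only to show that
# a rectangular pulse with integral(Omega dt) = 1/4 yields phi = +/- pi/2.
h = 6.62607015e-34          # Planck constant (J s)
delta_mu = 1.0e-31          # assumed projection of Delta-mu on E (C m)
E_field = 1.0e4             # assumed field amplitude (V/m)

omega = delta_mu * E_field / h       # Stark shift Omega (Hz)
t_pulse = 0.25 / omega               # duration so that integral(Omega dt) = 1/4

phi = 2 * np.pi * omega * t_pulse    # phase accumulated by the "+" class
relative_phase = 2 * phi             # "-" class gets -phi: relative phase = pi
print(relative_phase / np.pi)        # approximately 1.0
```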
The spin-wave scheme is based on a three-level configuration for storage. An incoming photon resonant with the optical transition $\ket{g}\to\ket{e}$ of the ions forming the AFC is stored as a collective optical excitation. In light of the distinction between the two electrically nonequivalent ion classes, the collective excitation can be described as \cite{Horvath2021May}:
\begin{equation}
\ket{\psi(t)} = \frac{1}{\sqrt{2M}}\sum_{\ell = 0}^{M-1} e^{i \omega_{\ell} t} \left[e^{i \phi} \ket{\psi_{\ell}^+} + e^{-i \phi} \ket{\psi_{\ell}^-}\right].
\label{eqn:col_ex}
\end{equation}
where $M$ is the number of AFC peaks and $\omega_{\ell} = 2 \pi \Delta \ell$, with $\Delta$ being the spacing between the peaks. $\ket{\psi_{\ell}^\pm}$ are the wavefunctions of the positive and negative electrically inequivalent ion classes, which describe a delocalized optical excitation across the ions forming the peak, and are written as:
\begin{equation}
\ket{\psi_{\ell}^\pm} = \frac{1}{\sqrt{N_{\ell}^\pm}}\sum_{j = 1}^{N_{\ell}^\pm} c_{\ell j}^\pm e^{2 \pi i \delta_{\ell j}^\pm t} e^{-i k z_{\ell j}^\pm}\ket{g_1 \ldots e_j \ldots g_{N_{\ell}^\pm}}, \label{eqn:st_wf}
\end{equation}
Here $N_{\ell}^\pm$ is the number of atoms in peak $\ell$ that experience a $\pm$ frequency shift due to \textbf{E}, $c_{\ell j}^\pm$ is the amplitude, which depends on the spectral detuning $\delta_{\ell j}^\pm$ from the center of peak $\ell$ and on the position $z_{\ell j}^\pm$ of atom $j$ in AFC peak $\ell$, and $k$ is the photon wave vector.
The collective optical excitation described by Eq. \ref{eqn:col_ex} initially dephases due to the frequency separation of the AFC peaks, and rephases after a time 1/$\Delta$ due to the periodicity of the AFC, leading to an echo emission. In the spin-wave scheme, a strong control pulse is applied before the echo emission to transfer the collective optical excitation to a spin level $\ket{s}$, converting it to a collective spin excitation where each term in the superposition state is written as $\ket{g_1 \ldots s_j \ldots g_{N_{\ell}^\pm}}$. This also freezes the dephasing due to $\omega_{\ell}$ and $\delta_{\ell j}$. A second strong control pulse is applied on-demand to reverse this process, after which the collective optical excitation continues to rephase and eventually emits an echo after a total storage time $T_s+1/\Delta$, with $T_s$ being the time spent in the spin state.
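A toy illustration (not from the original work) of the AFC dephasing and rephasing: the collective amplitude of $M$ equally weighted peaks spaced by $\Delta$ vanishes between echoes and returns to its maximum at $t = 1/\Delta$. The comb parameters below match those of the experimental section (four peaks, 600 kHz spacing):

```python
import numpy as np

# Toy AFC: M peaks spaced by Delta. The collective amplitude is proportional
# to |sum_l exp(i 2 pi Delta l t)|, which rephases at t = m/Delta (the echo).
M, delta = 4, 600e3   # four peaks, 600 kHz spacing

def collective_amplitude(t):
    return abs(sum(np.exp(2j * np.pi * delta * l * t) for l in range(M))) / M

print(collective_amplitude(0.0))          # 1: full coherence at the input
print(collective_amplitude(0.5 / delta))  # ~0: dephased between echoes
print(collective_amplitude(1.0 / delta))  # ~1: rephased -> echo at 1/Delta
```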
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.8\textwidth]{Fig1_dipole_setup.pdf}
\caption{(a) The four possible static dipole moment orientations for Pr$^{3+}$. (b) Experimental setup used for spin-wave storage. The orange solid line correspond to light used for AFC preparation and for spin control pulses, all propagating in the backward direction. The red dotted line represents the data pulses propagating in the forward direction. Both lights were overlapped within the crystal.}
\label{fig:exp_setup}
\end{figure*}
By applying an electric field with $\int \Omega dt = 1/4$ before the first spin control pulse, the ions described by the two wavefunctions $\ket{\psi_{\ell}^\pm}$, given by Eq. \ref{eqn:st_wf}, will accumulate a $\pm\pi/2$ phase shift with respect to each other. As a result of this $\pi$ relative phase difference, the echo emission at 1/$\Delta$ will be turned off, giving more flexibility in the timing and duration of the first spin control pulse without risking losing part of the echo due to rephasing during the spin transfer. After the second spin control pulse, the collective excitation will continue to evolve without echo emission until a second, equivalent electric field pulse is applied. The second electric field pulse removes the $\pi$ relative phase difference between the two ion classes $\ket{\psi_{\ell}^\pm}$ in the AFC, and simultaneously adds a $\pi$ phase difference between the two classes of all other ions that were excited by the spin control pulses. This leads to an echo re-emission after $T_s+m/\Delta$, for $m \in \mathbb{N}$, and, at the same time, a suppression of all coherent background created within the Vegas region due to excitation by the spin control pulses.
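The quenching and recovery of the echo can be sketched in the same toy comb model (illustrative only): with a Stark phase of $\pm\pi/2$ per class the two classes interfere destructively and the echo vanishes, while a second identical pulse (total phase $\pm\pi$) restores full echo amplitude:

```python
import numpy as np

# Two equally weighted, electrically inequivalent ion classes with opposite
# Stark phases +/-phi, on top of a toy AFC of M peaks spaced by Delta.
M, delta = 4, 600e3

def echo_amplitude(t, phi):
    comb = sum(np.exp(2j * np.pi * delta * l * t) for l in range(M)) / M
    # "+" and "-" classes pick up opposite Stark phases +/-phi
    return abs(0.5 * np.exp(1j * phi) * comb + 0.5 * np.exp(-1j * phi) * comb)

t_echo = 1.0 / delta
print(echo_amplitude(t_echo, 0.0))        # ~1: echo without Stark pulse
print(echo_amplitude(t_echo, np.pi / 2))  # ~0: echo quenched (Vegas region)
print(echo_amplitude(t_echo, np.pi))      # ~1: second pulse restores the echo
```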
\section{Experiment}
\label{sec:exp}
The experiment was performed on a 0.05\%-doped \pryso{} crystal cooled down to 2.1 K. The crystal had dimensions of $6 \times 10 \times 10$ mm$^3$ along the $b \times D_1 \times D_2$ crystal axes, respectively. The top and bottom surfaces, perpendicular to the $b$ axis, were coated with gold electrodes, through which the electric field could be applied across the crystal. The AFC structure was prepared using the $^3$H$_4$ $\to$ $^1$D$_2$ transition, centered around 494.723 THz for Pr$^{3+}$ in site 1.
The optical setup used for the experiment is shown in Fig. \ref{fig:exp_setup}(b). The light source was a frequency-stabilised Coherent 699-21 ring dye laser, tuned to the center of the inhomogeneous line of the $^3$H$_4$ $\to$ $^1$D$_2$ transition in site 1, and polarized along the crystallographic $D_2$ axis. Light pulses were generated through a combination of the double-pass AOM1 and the single-pass AOM2 in series, through which the phase, frequency and amplitude of the pulses could be tailored. The light was then split into two parts using a 90:10 beam splitter, and the weaker beam was directly measured by the photodetector (PD1) as a reference to calibrate for laser intensity fluctuations. The rest of the light was split once more by another 90:10 beam splitter; the weaker beam passed through the single-pass AOM3 and propagated through the crystal in the forward direction (red dotted line). This light was used later to generate the data pulses to be stored. The stronger light that was transmitted through the beam splitter went through the double-pass AOM4 setup, after which it propagated through the crystal in the backward direction (orange solid line). This light was used for the AFC preparation and for spin transfer. The forward- and backward-propagating beams were overlapped by maximizing the coupling of both beams through the two ends of the short fiber before the crystal.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig2_Pulse_seq_level_str1_Vegas.pdf}
\caption{(a) Level diagram of Pr$^{3+}$ showing the pulses used in the storage scheme. The solid red line is the input pulse, the two orange lines represent the 3 MHz FWHM control pulses used to transfer to and from the $\ket{3/2g}$ state, and the dashed red line is the echo. (b) A qualitative absorption measurement of the AFC structure, which also shows the spectral location of the pulses. The readout was distorted at the AFC peaks due to the high optical depth. The four peaks around 6.2 MHz correspond to the transition $\ket{1/2g}\to\ket{5/2e}$. (c) The pulse sequence in the time domain, with signal and Vegas regions represented by the colored lines at the bottom. E1 and E2 are the first and second electric field pulses, and C1 and C2 are the first and second spin control pulses. The lines at the bottom show which ions are affected by the electric field pulses at different times. E1 turns off coherent emission from ions excited at times $<$ t1, represented by the red line. E2 serves two purposes: it turns off all coherent background emission from ions excited between t1 and t4 by the two spin control pulses (orange line), and, at the same time, it turns on the emission that was turned off by E1 (red line), i.e., the stored photon echo. The gray line represents ions emitting incoherently, which are not affected by the electric field pulses.}
\label{fig:pulse_seq}
\end{figure}
The memory output (propagating in the forward direction) passed through AOM4 in a single pass, as the AOM was turned off during the output. Furthermore, the output was spatially separated from the control beams and instead directed toward photodetector PD2. The crystal was mounted inside a bath cryostat that cooled it down to 2.1 K. The light propagated through the crystal along the $D_1$ axis.
As mentioned earlier, the storage was performed on the $^3$H$_4$ $\to$ $^1$D$_2$ transition in Pr$^{3+}$. The level diagram of this transition is shown in Fig. \ref{fig:pulse_seq}(a). Both the ground and the excited states have three hyperfine levels at zero magnetic field. In this experiment, the AFC is prepared in the $\ket{1/2g}$ level, the memory input is stored initially as an optical excitation in the $\ket{3/2e}$ level, and then transferred to the $\ket{3/2g}$ level as a spin-wave excitation.
Before the AFC peaks were prepared, an 18 MHz wide transmission window was created in the center of the inhomogeneous line using the sequence described in Ref. \cite{Amari2010Sep}. The AFC was formed by coherently transferring back four narrow ensembles of ions to the $\ket{1/2g}$ level in the transmission window. This gave rise to four narrow (140 kHz wide) absorption peaks separated by $\Delta =$ 600 kHz for their $\ket{1/2g}\to\ket{3/2e}$ transitions. As a result of those transfers, some unwanted ions were burned back to the $\ket{3/2g}$ level, and had to be cleaned away using frequency-scan pulses with a scan range of 8.5--14.5 MHz. The emptied $\ket{3/2g}$ level was used later for spin-wave storage. Due to the high absorption depth of the AFC peaks along the $D_2$ crystal axis, the weak frequency-scanned light used to probe the peaks was heavily distorted, which hindered a clean readout of the absorption structure.
Therefore, the AFC preparation sequence was tested in a different \pryso{} crystal with a nominally equivalent praseodymium concentration, in which it was possible to have the light propagating along the $b$ crystal axis with a polarization along the less absorbing $D_1$ crystal axis. The crystal was 12 mm long along the $b$ axis. The absorption spectrum measured along the $D_1$ crystal axis is shown in Fig. \ref{fig:pulse_seq}(b). Despite the lower absorption, the readout still shows some distortions.
The pulse sequence used in the experiment is shown in Fig. \ref{fig:pulse_seq}(a)-(b) in the frequency domain, and in Fig. \ref{fig:pulse_seq}(c) in the time domain with the signal and Vegas regions highlighted. A Gaussian pulse with 500 ns FWHM was used as a memory input. The pulse was resonant with the center of the AFC, which coincides with the $\ket{1/2g}\to\ket{3/2e}$ transition of the ensemble. At time t1, just after the input was absorbed (at t0), and well before the first control pulse (at t2), a Gaussian electric-field pulse, E1, with an amplitude of 54 V and FWHM = 23 ns was applied across the crystal through the gold-coated electrodes. This pulse introduced a relative phase shift of $\pm\pi/2$ for the two electrically inequivalent ion classes, which froze the echo re-emission. This allowed for considerable timing flexibility for the application of the first spin-transfer pulse without risking the echo being re-emitted during the transfer. The spin transfer was performed using 2 $\mu$s long complex hyperbolic secant (sechyp) pulses~\cite{Rippe2005Jun,Roos2004Feb}. The first transfer pulse, C1, resonant with the $\ket{3/2e}\to\ket{3/2g}$ transition, was applied at t2. It was used to transfer the collective excitation into the $\ket{3/2g}$ state, which froze the evolution of the atomic dipoles and converted the optical excitation into a spin-wave excitation. At t3, a second spin-transfer pulse, C2, was used to bring the spin state back to the excited state. The dipoles then continued to evolve as a collective excitation without emitting the echo, due to the $\pi$ phase difference introduced by the first electric field pulse. A second electric field pulse, E2, was then applied at t4 to remove the $\pi$ phase difference created by E1 and, at the same time, to add a $\pi$ phase difference between the two classes of ions excited within the Vegas region by the spin transfer pulses, C1 and C2.
This led to an echo emission at t5 and a suppression of coherent background created due to excitation by C1 and C2. The total storage time for this scheme is given by $\frac{m}{\Delta}+T_s$ for $m \in \mathbb{N}$, with $T_s$ being the separation between C1 and C2. Here, in order to separate the echo emission from the second transfer pulse, the second electric field pulse was delayed such that the echo is emitted at the second re-phasing, i.e., using $m = 2$.
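For concreteness, the timing bookkeeping above can be sketched numerically. The comb spacing $\Delta = 600$ kHz and $m=2$ are taken from the text, while the value of $T_s$ below is an illustrative assumption:

```python
def storage_time(delta_hz, T_s, m):
    """Total spin-wave storage time m/Delta + T_s (see text)."""
    return m / delta_hz + T_s

# Delta = 600 kHz comb spacing; echo read out at the second rephasing (m = 2);
# T_s = 10 us is an assumed illustrative spin storage duration
t_total = storage_time(600e3, 10e-6, m=2)
```

Delaying the second electric field pulse by one comb period ($m \to m+1$) shifts the echo emission by $1/\Delta \approx 1.7$ $\mu$s without changing $T_s$.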
\section{Results and discussion}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.73\textwidth]{Fig3_cmap.pdf}
\caption{(a) Spin-wave storage with classical intensity input for varying storage times, and (b) for the shortest storage time. The first pulse, marked by the solid red line, is the part of the storage light transmitted through the AFC without being stored. The first electric field pulse was applied directly after absorption, at the time marked by the first green line. The second faint pulse, marked by the first solid orange line, is scattering of the first control pulse that leaked into the detection path. The second (tilted) orange line is scattering from the second control pulse, applied at varying times. This is followed by another electric field pulse, marked by the second (tilted) green line. The dashed red line marks the restored echo after rephasing. The remaining peaks are higher-order echoes.}
\label{fig:cmap}
\end{figure*}
The experiment was performed at varying storage times. The first electric and spin transfer pulses were fixed for all measurements. The second electric field pulse was delayed after the second control pulse such that the echo was emitted at the second re-phasing, i.e., after $T_s + 2/\Delta$. This ensured that no part of the echo was emitted during the first or the second spin transfer pulses. By delaying the second electric field pulse, the recall of the echo can be further delayed after applying the second spin control pulse, which can be used as another degree of control. Here, both the second spin control pulse and the second electric field pulse were delayed in steps of 1 $\mu$s to obtain different storage times. The result of this measurement is shown in Fig. \ref{fig:cmap}(a), with the storage sequence shown separately in Fig. \ref{fig:cmap}(b) for the shortest storage time. It is worth noting that the detection was performed in the forward direction, i.e., opposite to the control pulse propagation. Nevertheless, reflections of the control pulses, C1 and C2, from optical surfaces leaked into the detection path, and are marked by the two orange solid lines shown in Fig. \ref{fig:cmap}(a). There are several ways to reduce these reflections, for example using optical surfaces with anti-reflective coating, or by having a small angle between the control beam and the storage beam. The two green lines in the figure indicate the times when the two electric field pulses were applied. A recall of the second-order echo as well as four further higher-order echoes can be seen in the figure.
A challenge when implementing the spin-wave storage scheme at the single photon level is the optical noise created by the control pulses, which can be either incoherent fluorescence or coherent collective emission such as free-induction decay (FID) and off-resonant echoes. When the second electric field pulse is applied to switch on the signal echo emission, it simultaneously shifts the phases of all ions contributing to the coherent optical noise, which consequently turns off their coherent noise contribution.
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{Fig4_inh_decay.pdf}
\caption{The circles are the maximum echo intensity at different spin storage times $T_s$; the dashed line is a Gaussian fit to Eq. \ref{eq:gauss}, giving an inhomogeneous spin linewidth of 26.8 $\pm$ 0.8 kHz.}
\label{fig:inh_decay}
\end{figure}
The decay of the echo intensity, shown in Fig. \ref{fig:inh_decay}, is attributed to the inhomogeneous broadening of the spin transition. This was confirmed by fitting the echo intensity against the spin storage duration (T$_s$) to the following Gaussian \cite{Timoney2012Jun}:
\begin{equation}
I(T_s) = I_0\,\exp\left[-\frac{\pi^{2}(\gamma_{IS}T_s)^{2}}{2\ln 2}\right],
\label{eq:gauss}
\end{equation}
where $I_0$ is a constant, and $\gamma_{IS}$ is the inhomogeneous spin linewidth. From the Gaussian fit, we obtain an inhomogeneous spin linewidth of 26.8 $\pm$ 0.8 kHz, in agreement with previous measurements in the same material\cite{Afzelius2010Jan,Gundogan2015Jun}. We see no contribution of any additional dephasing due to our phase switching technique using the electric field.
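As an illustration of how such a fit can be carried out, the snippet below fits synthetic decay data to Eq.~(\ref{eq:gauss}) using SciPy. The data here are generated from the model itself with the reported linewidth, so this only demonstrates the fitting procedure, not the measurement:

```python
import numpy as np
from scipy.optimize import curve_fit

def echo_decay(T_us, I0, gamma_kHz):
    # Eq. (gauss) with T_s in microseconds and gamma_IS in kHz:
    # I(T_s) = I0 * exp(-pi^2 (gamma_IS * T_s)^2 / (2 ln 2))
    x = (gamma_kHz * 1e3) * (T_us * 1e-6)
    return I0 * np.exp(-(np.pi * x) ** 2 / (2.0 * np.log(2.0)))

# synthetic, noiseless decay generated with the linewidth reported in the text
T = np.linspace(0.0, 40.0, 25)          # spin storage times (us)
I = echo_decay(T, 1.0, 26.8)
popt, _ = curve_fit(echo_decay, T, I, p0=[0.8, 20.0])
I0_fit, gamma_fit = popt                 # gamma_fit in kHz
```

With experimental data, the same call returns the fitted $\gamma_{IS}$ and its covariance, from which the quoted $\pm$0.8 kHz uncertainty would be derived.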
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{Fig5_FID_switched.pdf}
\caption{FID quenching using an electric field pulse. The blue line is the Gaussian pulse before propagating through the crystal. The yellow dashed line is the transmission after going through the narrow peak, with an extended FID emission. The delay on the transmission is caused by the slow light effect due to the strong dispersion across the transmission window. The green line shows the electric field pulse applied to switch off the FID emission. The red solid line shows the FID when it is turned off, also delayed due to dispersion.}
\label{fig:switched_FID}
\end{figure}
\subsection{Suppression of free-induction decay}
In order to investigate the capacity of this technique to suppress coherent FID noise, a single 140 kHz peak was burned back in an empty 18 MHz wide transmission window. A Gaussian pulse with an FWHM of 4 $\mu$s, resonant with the peak, was sent through the crystal. After absorbing the pulse, the ions in the narrow peak oscillate in phase and emit a coherent FID for a duration set by the peak width. By applying an electric field pulse, the two electrically inequivalent ion classes were put out of phase, which consequently led to a suppression of the FID emission. This was performed using classical light intensities to demonstrate the effect, and the result is shown in Fig. \ref{fig:switched_FID}. The integrated emission during the time interval $\left[6,25\right]$ $\mu$s is reduced by a factor of $\sim$ 44 when the electric field pulse is applied, compared to when it is not. In theory, the suppression of coherent emission is only limited by the electric field inhomogeneity across the light propagation path in the crystal. Although we only show the switching of the FID here, the same can be expected for all other kinds of coherent noise. This includes the coherent off-resonantly excited echo emitted at the same frequency as the input, referred to as OREO in Ref. \cite{Timoney2013Aug}, and caused by off-resonant excitation of the comb structure by the spin control pulses. In addition, a two-pulse photon echo is another possible noise source that can be switched off by this technique; it is created by the two spin control pulses and emitted at time $T_s$ after the second pulse.
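The suppression factor quoted above is simply a ratio of intensities integrated over the analysis window. A minimal sketch of that bookkeeping, with toy exponential traces standing in for the measured ones and the 44-fold quench put in by hand, is:

```python
import numpy as np

def band_energy(t, I, t0, t1):
    """Integrate an intensity trace over [t0, t1] on a uniform time grid."""
    w = (t >= t0) & (t <= t1)
    return np.sum(I[w]) * (t[1] - t[0])   # simple Riemann sum

# toy FID traces: a ~5 us decay, and the same trace quenched 44x by the E-field
t = np.linspace(0.0, 30e-6, 3001)
I_off = np.exp(-t / 5e-6)                 # no electric field pulse applied
I_on = I_off / 44.0                       # assumed quenched trace
factor = band_energy(t, I_off, 6e-6, 25e-6) / band_energy(t, I_on, 6e-6, 25e-6)
```

For measured data, `I_off` and `I_on` would be the digitized detector traces with and without the electric field pulse.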
Since the presented technique only affects coherent optical noise, it would be most effective when used in materials where the majority of the optical noise is coherent, and thus can be quenched electrically without recourse to complicated spectral filtering.
It should be noted that the FID switching technique presented here is different from the one discussed in Ref. \cite{Minar2009Nov}, where an electric-field gradient dephases atoms on a macroscopic scale along the light propagation direction. In contrast, the control is achieved here by switching groups of ions out of phase on the submicron scale using discrete homogeneous electric field pulses. This is an essential difference, since microscopic cancellation may turn off the unwanted emissions in all directions.
\subsection{Fluorescence estimation}
Spin state storage at the single photon level is challenging and will be strongly affected by both coherent and incoherent fluorescence noise as mentioned earlier. Using the present technique, all sources of coherent noise can be reduced to such a high degree that fluorescence from off-resonant excitation will be the main limiting factor. The current experiments were performed with the Pr$^{3+}$ ion due to experimental availability, although it is not an ideal choice. Experiments at the single photon level demonstrated an optical noise level of $\sim$ 0.05 photon per shot within the echo emission duration. This is about an order of magnitude higher than the signal level expected from storage of a single photon. The optical noise was measured by applying the AFC sequence, described in Section \ref{sec:exp}, but without a storage pulse. The emission after the second electric pulse was collected with an optical collection efficiency of 40$\%$, and was detected using a Laser Components Count 50N avalanche photodiode with a quantum efficiency of 0.69 at 606 nm and a dark-count rate of 26 Hz. A Chroma band-pass filter (ET590/33m) was mounted before the detector to block light at wavelengths above 610 nm.
To explore the potential of our technique, here we make an estimate of the fluorescence noise in other, more suitable materials. In order to benchmark materials where the presented technique would be effective, we look into parameters that would lead to negligible incoherent fluorescence emission.
The fluorescence is emitted due to off-resonant excitation of ions outside the spectral transmission window. To estimate the fluorescence noise, we look at the remaining absorption at the center of a spectral transmission window due to the Lorentzian tail of the ions outside. The total off-resonant absorption ($\alpha_c$) at the center of the transmission window can be written as \cite{Horvath2022Mar}:
\begin{equation}
\alpha_c = \frac{2}{\pi}\frac{\Gamma_h}{\Delta}\alpha_0.
\label{eq:alpha_c}
\end{equation}
Here $\Gamma_h$ is the homogeneous linewidth of the ions, $\Delta$ is the width of the transmission window, and $\alpha_0$ is the absorption outside the transmission window. For a crystal of length $L$, the power absorbed ($P_{abs}$) by the off resonant ions can be written as:
\begin{equation}
P_{abs} = P_{in} [1 - e^{-\alpha_c L}],
\label{eq:P_abs}
\end{equation}
where $P_{in}$ is the input power. Equation \ref{eq:alpha_c} shows that materials with narrow homogeneous linewidth and in which wide spectral transmission windows can be created are favourable to reduce the off-resonant excitation. Here, we look into Eu$^{3+}$, which has an optical homogeneous linewidth of 122 Hz in site 1, an excited state lifetime of 1.9 ms \cite{Equall1994Apr}, and a branching ratio of $\sim$ 11\% to the $^7$F$_0$ zero phonon line, calculated from the lifetime of the excited state and the dipole moment of the transition \cite{Hilborn1998Jul,Graf1998Sep}. For a 1\% doping concentration in \yso, the absorption depth of Eu$^{3+}$ along the $D_1$ crystal axis is 3.9 cm$^{-1}$ \cite{Konz2003Aug}. For a given input power, the part that will be off-resonantly absorbed by the ions outside a 40 MHz wide spectral transmission window in Eu$^{3+}$ is $\sim$ 20 ppm of the input. All off-resonantly absorbed photons are assumed to be re-emitted as fluorescence. Out of the total isotropic fluorescence, only the part that overlaps with the spatial mode of the echo emission contributes to the optical noise. Here we assume that a diameter of 1 mm of the fluorescence light is collimated 20 cm after the crystal, which is only 1 ppm of the total isotropic emission. Furthermore, fluorescence photons are emitted at different times with some decay constant given by the excited state lifetime. Only photons emitted during the same time bin as the stored signal photon will contribute to the optical noise. For a 1 $\mu$s time bin at the beginning of the fluorescence decay, the probability of photon emission will be $\sim$ 0.1\%. Assuming a 1 $\mu$s long control pulse with 100 mW power, this will lead to an average of $\sim$ 10$^{-4}$ incoherent fluorescence photons emitted in a 1 $\mu$s time bin.
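The chain of factors in this estimate can be written out explicitly. The toy model below multiplies the fractions quoted in the text; the 580 nm wavelength assumed for the Eu$^{3+}$ optical transition and the exact placement of the branching ratio in the chain are assumptions of this sketch, so it reproduces the quoted $\sim 10^{-4}$ level only to order of magnitude:

```python
import math

def fluorescence_photons_per_bin(P_in, pulse_len, wavelength,
                                 absorbed_frac, mode_frac,
                                 branching, bin_len, lifetime):
    """Multiply out the noise budget: photons in the control pulse ->
    off-resonantly absorbed fraction -> fraction re-emitted into the
    zero-phonon line and the signal spatial mode -> fraction emitted
    within one detection time bin."""
    h, c = 6.626e-34, 2.998e8
    n_in = P_in * pulse_len / (h * c / wavelength)   # photons in the pulse
    p_bin = 1.0 - math.exp(-bin_len / lifetime)      # first-bin emission prob.
    return n_in * absorbed_frac * mode_frac * branching * p_bin

# Eu3+:YSO numbers from the text (wavelength of 580 nm is an assumed value)
n_noise = fluorescence_photons_per_bin(
    P_in=100e-3, pulse_len=1e-6, wavelength=580e-9,
    absorbed_frac=20e-6, mode_frac=1e-6,
    branching=0.11, bin_len=1e-6, lifetime=1.9e-3)
```

The same function with Pr$^{3+}$ parameters (and the extra collection and detection efficiencies mentioned below) gives the comparison estimate for the present experiment.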
At such low level of incoherent noise, our presented technique can quench the other coherent noise emissions such as FID and off-resonant echoes, allowing for single photon storage without the need for additional spectral filtering. The lower optical depth in Eu$^{3+}$ can be compensated by a cavity to enhance the memory efficiency as has been demonstrated in Ref. \cite{Jobez2014Aug}.
For comparison, doing the same calculation for the Pr$^{3+}$ ions used in the experiment, we estimate $\sim$ 0.03 incoherent fluorescence photons in a 1 $\mu$s time bin, taking into account experimental parameters such as collection and detection efficiencies, as well as control pulses with 10 mW power. This noise level is very close to the measured background mentioned earlier, and shows that our model gives reasonable predictions.
\section{Conclusion}
We used the linear Stark effect to coherently control the emission of the echo in the spin-wave storage scheme using electric field pulses. The first electric field pulse was used to switch off the echo emission after the absorption of the storage pulse, giving more timing flexibility for applying the first spin control pulse. Then, after the second spin control pulse, we used another electric field pulse to turn on the echo emission. We also showed that this technique can turn off the FID emission, and could therefore be used to quench the coherent optical emissions induced by the strong spin control pulses when performing the spin-wave storage at the single-photon level. If used in Eu$^{3+}$:\yso, this technique has the potential to enable spin-wave storage of single photons without the need for additional spectral filtering, which would substantially simplify noise-free quantum memory experiments.
\section{\label{sec:ack} ACKNOWLEDGMENTS}
This research was supported by the Swedish Research Council (no. 2016-05121, no. 2015-03989, no. 2016-04375, no. 2019-04949 and no. 2021-03755), the Knut and Alice Wallenberg Foundation (KAW 2016.0081), the Wallenberg Center for Quantum Technology (WACQT) funded by The Knut and Alice Wallenberg Foundation (KAW 2017.0449), and the European Union FETFLAG program, Grant No. 820391 (SQUARE).
\section{Introduction}
The inverse Ising problem is intensively studied in statistical
physics, computational biology and computer science in the past few
years~\cite{Nature-06,Cocco-09,Weigt-11,Wain-2010}. The biological
experiments or numerical simulations usually generate a large amount
of experimental data, e.g., $M$ independent samples
$\{\boldsymbol{\sigma}^{1},\boldsymbol{\sigma}^{2},\ldots,\boldsymbol{\sigma}^{M}\}$
in which $\boldsymbol{\sigma}$ is an $N$-dimensional vector with
binary components ($\sigma_{i}=\pm1$) and $N$ is the system size.
The least structured model to match the statistics of the
experimental data is the Ising model~\cite{Bialek-09ep}:
\begin{equation}\label{Ising}
P_{{\rm
Ising}}(\boldsymbol{\sigma})=\frac{1}{Z(\mathbf{h},\mathbf{J})}\exp\left[\sum_{i<j}J_{ij}\sigma_{i}\sigma_{j}+\sum_{i}h_{i}\sigma_{i}\right]
\end{equation}
where the partition function $Z(\mathbf{h},\mathbf{J})$ depends on
the $N$-dimensional fields and $\frac{N(N-1)}{2}$-dimensional
couplings. These fields and couplings are chosen to yield the same
first and second moments (magnetizations and pairwise correlations
respectively) as those obtained from the experimental data. The
inverse temperature $\beta=1/T$ has been absorbed into the strength
of fields and couplings.
Previous studies of the inverse Ising problem on Hopfield
model~\cite{Huang-2010a,Huang-2010b,SM-11,Pan-11,Huang-2012} lack a
systematic analysis for treating sparse networks. Inference of the sparse network also has important and wide applications in modeling vast amounts of biological data. In fact, real biological networks are not densely connected. To reconstruct the sparse network
from the experimental data, an additional penalty term is necessary
to be added into the cost function, as studied in recovering sparse
signals in the context of compressed
sensing~\cite{Kabashima-2009,Montanari-2010,Elad-2010} or in Ising
model selection~\cite{Wain-2010,Bento-2011}. This strategy is known
as $\ell_{1}$-regularization which introduces an $\ell_{1}$-norm
penalty to the cost function (e.g., the log-likelihood of the Ising
model). The regularization is able to minimize the impact of finite
sampling noise, thus avoid the overfitting of data. The
$\ell_{1}$-regularization has been studied in the pseudo-likelihood
approximation to the network inference problem\cite{Aurell-2011} and
in the setting of sparse continuous perceptron memorization and
generalization~\cite{Weigt-09}. This technique has also been
thoroughly discussed in real neural data analysis using selective
cluster expansion method~\cite{Cocco-12,CM-12}. The cluster
expansion method involves repeated solution of the inverse Ising
problem and the computation of the cluster entropy included in the
expansion (cluster means a small subset of spins). To truncate the
expansion, clusters with small entropy in absolute value are
discarded and the optimal threshold needs to be determined.
Additionally, the cluster size should be small to reduce the
computational cost while at each step a convex optimization of the
cost function (see Eq.~(\ref{cost})) for the cluster should be
solved. This may be complicated in some cases. The pseudo-likelihood
maximization~\cite{Aurell-2011} method relies on the complete
knowledge of the sampled configurations, and involves a careful
design of the numerical minimization procedure for the
pseudo-likelihood (e.g., Newton descent method, or interior point
method) at a large computational cost (especially for large sample
size). In this paper, we provide an alternative way to reconstruct
the sparse network by combining the Bethe approximation and the
$\ell_{1}$-regularization, which is much simpler in practical
implementation. We expect that the $\ell_{1}$-regularization will
improve the prediction of the Bethe approximation. To show the
efficiency, we apply this method to the sparse Hopfield network
reconstruction.
Our contributions in this work are two-fold. (1) We provide a
regularized quadratic approximation to the negative log-likelihood
function for the sparse network construction by neglecting higher
order correlations, which yields a new inference equation that further reduces the inference error. Furthermore, the implementation is much simpler and saves computational time. (2) Another significant contribution is that a scaling form for the optimal regularization parameter is found; this scaling form is useful for choosing a suitable regularization. Most importantly, the method is not limited
to the tested model (sparse Hopfield model), and is generally
applicable to other diluted mean field models and even real data
analysis (e.g., neural data). The outline of the paper is as
follows. The sparse Hopfield network is defined in
Sec.~\ref{sec_sHopf}. In Sec.~\ref{sec_method}, we present the
hybrid inference method by using the Bethe approximation and
$\ell_{1}$-regularization. We test our algorithm on single instances
in Sec.~\ref{sec_result}. Concluding remarks are given in
Sec.~\ref{sec_Sum}.
\section{Sparse Hopfield model}
\label{sec_sHopf}
The Hopfield network has been proposed in Ref.~\cite{Hopfield-1982}
as an abstraction of biological memory storage and was found to be
able to store an extensive number of random unbiased
patterns~\cite{Amit-1987}. If the stored patterns are dynamically
stable, then the network is able to provide associative memory and
its equilibrium behavior is described by the following Hamiltonian:
\begin{equation}\label{Hami}
\mathcal{H}=-\sum_{i<j}J_{ij}\sigma_{i}\sigma_{j}
\end{equation}
where the Ising variable $\sigma$ indicates the active state of the
neuron ($\sigma_{i}=+1$) or the silent state ($\sigma_{i}=-1$). For
the sparse network storing $P$ random unbiased binary patterns, the
symmetric coupling is constructed~\cite{Somp-1986,Fukai-1998} as
\begin{equation}\label{J_spar}
J_{ij}=\frac{l_{ij}}{l}\sum_{\mu=1}^{P}\xi_{i}^{\mu}\xi_{j}^{\mu}
\end{equation}
where $l$ is the average connectivity of a neuron, with $l\sim\mathcal{O}(1)$ independent of the network size $N$. Note that in this case,
the number of stored patterns can only be finite. In the
thermodynamic limit, $P$ scales as $P=\alpha l$ where $\alpha$ is
the memory load. No self-interactions are assumed and the
connectivity $l_{ij}$ obeys the distribution:
\begin{equation}\label{distri}
P(l_{ij})=\left(1-\frac{l}{N-1}\right)\delta(l_{ij})+\frac{l}{N-1}\delta(l_{ij}-1).
\end{equation}
Mean field properties of the sparse Hopfield network have been
discussed within replica symmetric approximation in
Refs.~\cite{Coolen-2003,Skantzos-2004}. Three phases (paramagnetic,
retrieval and spin glass phases) have been observed in this sparsely
connected Hopfield network with arbitrary finite $l$. For large $l$
(e.g., $l=10$), the phase diagram resembles closely that of
extremely diluted
($\lim_{N\rightarrow\infty}l^{-1}=\lim_{N\rightarrow\infty}l/N=0$,
such as $l=\ln N$) case~\cite{Watkin-1991,Canning-1992} where the
transition line between paramagnetic and retrieval phase is $T=1$
for $\alpha\leq 1$ and that between paramagnetic and spin glass
phase $T=\sqrt{\alpha}$ for $\alpha\geq 1$. The spin glass/retrieval
transition occurs at $\alpha=1$.
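As an illustration, the coupling construction of Eqs.~(\ref{J_spar}) and (\ref{distri}) can be sketched numerically. The symmetric dilution below and the chosen values of $N$, $l$ and $P$ are illustrative assumptions:

```python
import numpy as np

def sparse_hopfield_couplings(N, l, P, rng):
    """Eq. (J_spar): J_ij = (l_ij / l) * sum_mu xi_i^mu xi_j^mu, with
    symmetric dilution l_ij in {0,1} and P(l_ij = 1) = l/(N-1), Eq. (distri)."""
    xi = rng.choice([-1, 1], size=(P, N))          # P random unbiased patterns
    J = xi.T @ xi / l                              # Hebbian part
    mask = rng.random((N, N)) < l / (N - 1)        # random dilution
    mask = np.triu(mask, 1)
    mask = mask + mask.T                           # symmetric connectivity
    J = J * mask
    np.fill_diagonal(J, 0.0)                       # no self-interactions
    return J

rng = np.random.default_rng(0)
J = sparse_hopfield_couplings(N=200, l=10, P=5, rng=rng)
```

The resulting matrix is symmetric, has zero diagonal, and an average connectivity close to $l$, matching the finite-connectivity regime described above.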
To sample the state of the original model Eq.~(\ref{Hami}), we apply
the Glauber dynamics rule:
\begin{equation}\label{GDrule}
P(\sigma_{i}\rightarrow-\sigma_{i})=\frac{1}{2}\left[1-\sigma_{i}\tanh\beta h_{i}\right]
\end{equation}
where $h_{i}=\sum_{j\neq i}J_{ij}\sigma_{j}$ is the local field
neuron $i$ feels. In practice, we first randomly generate a
configuration which is then updated by the local dynamics rule
Eq.~(\ref{GDrule}) in a randomly asynchronous fashion. In this
setting, we define a Glauber dynamics step as $N$ proposed flips.
The Glauber dynamics is run totally $3\times 10^{6}$ steps, among
which the first $1\times 10^{6}$ steps are run for thermal
equilibration and the other $2\times 10^{6}$ steps for computing
magnetizations and correlations, i.e.,
${m_{i}=\left<\sigma_{i}\right>_{{\rm data}},
C_{ij}=\left<\sigma_{i}\sigma_{j}\right>_{{\rm data}}-m_{i}m_{j}}$
where $\left<\cdots\right>_{{\rm data}}$ denotes the average over
the collected data. The state of the network is sampled every $20$
steps after thermal equilibration (doubled sampling frequency yields
the similar inference result), which produces totally $M=100 000$
independent samples. The magnetizations and correlations serve as
inputs to our following hybrid inference algorithm.
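A compact sketch of this sampling procedure, using asynchronous Glauber updates following Eq.~(\ref{GDrule}), is given below; the burn-in length and system size are scaled down for illustration:

```python
import numpy as np

def glauber_sweep(s, J, h, beta, rng):
    """One Glauber dynamics step = N proposed flips, Eq. (GDrule):
    P(sigma_i -> -sigma_i) = 0.5 * [1 - sigma_i * tanh(beta * h_i)]."""
    N = len(s)
    for i in rng.integers(0, N, size=N):           # random asynchronous order
        h_i = J[i] @ s + h[i]                      # local field on neuron i
        if rng.random() < 0.5 * (1.0 - s[i] * np.tanh(beta * h_i)):
            s[i] = -s[i]
    return s

# tiny demonstration: a strongly coupled 2-spin ferromagnet aligns
rng = np.random.default_rng(0)
J = np.array([[0.0, 2.0], [2.0, 0.0]])
s = rng.choice([-1, 1], size=2)                    # random initial configuration
for _ in range(50):                                # burn-in sweeps
    s = glauber_sweep(s, J, np.zeros(2), beta=5.0, rng=rng)
```

In the actual simulation, configurations recorded every 20 sweeps after burn-in would be accumulated into the magnetizations $m_i$ and connected correlations $C_{ij}$.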
\section{Bethe approximation with $\ell_{1}$ regularization}
\label{sec_method}
The Bethe approximation assumes that the joint probability
(Boltzmann distribution, see Eq.~(\ref{Ising})) of the neuron
activity can be written in terms of single-neuron marginal for each
single neuron and two-neuron marginal for each pair of adjacent
neurons as
\begin{equation}\label{Bethe}
P_{{\rm
Ising}}(\boldsymbol\sigma)\simeq\prod_{(ij)}\frac{P_{ij}(\sigma_{i},\sigma_{j})}{P_{i}(\sigma_{i})P_{j}(\sigma_{j})}\prod_{i}P_{i}(\sigma_{i})
\end{equation}
where $(ij)$ runs over all distinct pairs of neurons. This
approximation is exact on tree graphs and asymptotically correct for
sparse networks or networks with sufficiently weak
interactions~\cite{Mezard-09}. Under this approximation, the free
energy ($-\ln Z$) can be expressed as a function of connected
correlations $\{C_{ij}\}$ (between neighboring neurons) and
magnetizations $\{m_{i}\}$. The stationary point of the free energy
with respect to the magnetizations yields the following
self-consistent equations:
\begin{equation}\label{m}
m_{i}=\tanh\left(h_{i}+\sum_{j\in\partial
i}\tanh^{-1}\left(t_{ij}f(m_{j},m_{i},t_{ij})\right)\right)
\end{equation}
where $\partial i$ denotes neighbors of $i$, $t_{ij}=\tanh J_{ij}$
and
$f(x,y,t)=\frac{1-t^{2}-\sqrt{(1-t^{2})^{2}-4t(x-yt)(y-xt)}}{2t(y-xt)}$.
Using the linear response relation to calculate the connected
correlations for any pair of neurons, we obtain the Bethe
approximation (BA) to the inverse Ising
problem~\cite{Ricci-2012,Berg-2012}:
\begin{equation}\label{BA}
J_{ij}=-\tanh^{-1}\Biggl[\frac{1}{2(\mathbf{C}^{-1})_{ij}}(a_{ij}-b_{ij})
-m_{i}m_{j}\Biggr],
\end{equation}
where $\mathbf{C}^{-1}$ is the inverse of the connected correlation
matrix, $a_{ij}=\sqrt{1+4L_{i}L_{j}(\mathbf{C}^{-1})_{ij}^{2}}$,
$L_{i}=1-m_{i}^{2}$ and
$b_{ij}=\sqrt{\left(a_{ij}-2m_{i}m_{j}(\mathbf{C}^{-1})_{ij}\right)^{2}-4(\mathbf{C}^{-1})_{ij}^{2}}$.
The couplings have been scaled by the inverse temperature $\beta$.
Note that fields can be predicted using Eq.~(\ref{m}) after we get
the set of couplings. Hereafter we consider only the reconstruction
of the coupling vector. In fact, the BA solution of the couplings
corresponds to the fixed point of the susceptibility
propagation~\cite{Mezard-09,Huang-2010b}, yet it avoids the
iteration steps in susceptibility propagation and the possible
non-convergence of the iterations. It was also found that the BA
yields a good estimate to the underlying couplings of the Hopfield
network~\cite{Huang-2010b}. In the following analysis, we try to
improve the prediction of BA with $\ell_{1}$-regularization.
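Since Eq.~(\ref{BA}) is a closed-form expression, the BA estimate can be computed directly from the measured moments. A sketch, checked on a two-spin system where the Bethe approximation is exact:

```python
import numpy as np

def bethe_couplings(C, m):
    """Eq. (BA): closed-form Bethe estimate of the couplings from the
    connected-correlation matrix C and magnetizations m."""
    Cinv = np.linalg.inv(C)
    L = 1.0 - m ** 2
    a = np.sqrt(1.0 + 4.0 * np.outer(L, L) * Cinv ** 2)
    b = np.sqrt((a - 2.0 * np.outer(m, m) * Cinv) ** 2 - 4.0 * Cinv ** 2)
    J = -np.arctanh((a - b) / (2.0 * Cinv) - np.outer(m, m))
    np.fill_diagonal(J, 0.0)   # no self-couplings
    return J

# sanity check on a 2-spin system (a tree): for J12 = 0.5 and zero fields,
# the exact moments are m = 0 and <s1 s2> = tanh(J12)
c = np.tanh(0.5)
J_est = bethe_couplings(np.array([[1.0, c], [c, 1.0]]), np.zeros(2))
```

On this tree graph the formula recovers $J_{12}=0.5$ exactly, as expected from the exactness of the Bethe approximation on trees.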
The cost function to be minimized in the inverse Ising problem can
be written as the following rescaled negative log-likelihood
function~\cite{SM-09}:
\begin{equation}\label{cost}
\begin{split}
S(\mathbf{h},\mathbf{J}|\mathbf{m},\mathbf{C})&=-\frac{1}{M}\ln\left[\prod_{\mu=1}^{M}P_{{\rm Ising}}(\boldsymbol{\sigma}^{\mu}|\mathbf{h},\mathbf{J})\right]\\
&=\ln
Z(\mathbf{h},\mathbf{J})-\mathbf{h}^{T}\mathbf{m}-\frac{1}{2}{\rm tr}(\mathbf{J}\mathbf{\tilde{C}})
\end{split}
\end{equation}
where $m_{i}=\left<\sigma_{i}\right>_{{\rm data}}$ and $
\tilde{C}_{ij}=\left<\sigma_{i}\sigma_{j}\right>_{{\rm data}}$.
$\mathbf{h}^{T}$ denotes the transpose of the field vector while
${\rm tr}(\mathbf{A})$ denotes the trace of matrix $\mathbf{A}$. The
minimization of $S(\mathbf{h},\mathbf{J}|\mathbf{m},\mathbf{C})$ in
the $\frac{N(N+1)}{2}$-dimensional space of fields and couplings
yields the following equations:
\begin{subequations}\label{mJ}
\begin{align}
m_{i}&=\left<\sigma_{i}\right>,\\
C_{ij}&=\left<\sigma_{i}\sigma_{j}\right>-\left<\sigma_{i}\right>\left<\sigma_{j}\right>\label{mJ2}
\end{align}
\end{subequations}
where the average is taken with respect to the Boltzmann
distribution Eq.~(\ref{Ising}) with the optimal fields and couplings
(corresponding to the minimum of $S$). In fact, one can use the Bethe
approximation to compute the connected correlation in the right-hand
side of Eq.~(\ref{mJ2}), which leads to the result of
Eq.~(\ref{BA}).
To proceed, we expand the cost function around its minimum with
respect to the fluctuation of the coupling vector up to the second
order as
\begin{equation}\label{expand}
S(\mathbf{J})\simeq
S(\mathbf{J}_{0})+\nabla S(\mathbf{J}_{0})^{T}\mathbf{\tilde{J}}+\frac{1}{2}\mathbf{\tilde{J}}^{T}\mathbf{H}_{S}(\mathbf{J}_{0})\mathbf{\tilde{J}}
\end{equation}
where $\mathbf{\tilde{J}}$ defines the fluctuation
$\mathbf{\tilde{J}}\equiv\mathbf{J}-\mathbf{J}_{0}$ where
$\mathbf{J}_{0}$ is the (near) optimal coupling vector. $\nabla
S(\mathbf{J}_{0})$ is the gradient of $S$ evaluated at
$\mathbf{J}_{0}$, and $\mathbf{H}_{S}(\mathbf{J}_{0})$ is the
Hessian matrix. The quadratic approximation to the log-likelihood
has also been used to develop fast algorithms for estimation of
generalized linear models with convex
penalties~\cite{Friedman-2010}. We have only made explicit the
dependence of $S$ on the coupling vector. The first order
coefficient vanishes due to Eq.~(\ref{mJ}). Note that the Hessian
matrix is an $N(N-1)/2\times N(N-1)/2$ symmetric matrix whose
dimension is much higher than that of the connected correlation
matrix. However, to construct the couplings around neuron $i$, we
consider only the neuron $i$-dependent part, i.e., we set $l=i$ in
the Hessian matrix
$\chi_{ij,kl}=\left<\sigma_{i}\sigma_{j}\sigma_{k}\sigma_{l}\right>-\left<\sigma_{i}\sigma_{j}\right>\left<\sigma_{k}\sigma_{l}\right>$
where $ij$ and $kl$ run over distinct pairs of neurons. This
simplification reduces the computation cost but still keeps the
significant contribution as proved later in our simulations. Finally
we obtain
\begin{equation}\label{apprS}
S(\mathbf{J})\simeq
S(\mathbf{J}_{0})+\frac{1}{2}\sum_{ij,ki}\tilde{J}_{ij}(\tilde{C}_{jk}-\tilde{C}_{ij}\tilde{C}_{ki})\tilde{J}_{ki}+\lambda\sum_{ij}|J_{0,ij}+\tilde{J}_{ij}|
\end{equation}
where an $\ell_{1}$-norm penalty has been added to promote the
selection of sparse network
structure~\cite{Hertz-12,Cocco-12,Bento-2011}. $\lambda$ is a
positive regularization parameter to be optimized to make the
inference error (see Eq.~(\ref{error})) as low as possible. The
$\ell_{1}$-norm penalizes small but non-zero couplings and
increasing the value of the regularization parameter $\lambda$ makes
the inferred network sparser. In the following analysis, we assume
$\mathbf{J}_{0}$ is provided by the BA solution (a good
approximation to reconstruct the sparse Hopfield
network~\cite{Huang-2010b}, yielding a low inference error), then we
search for the new solution to minimize the regularized cost
function Eq.~(\ref{apprS}), finally we get the new solution as
follows,
\begin{equation}\label{reg}
J^{(i)}_{ij}=J_{0,ij}-\lambda\sum_{k}{\rm sgn}(J_{0,ik})[\mathbf{C}^{i}]^{-1}_{kj}
\end{equation}
where ${\rm sgn}(x)=x/|x|$ for $x\neq0$ and
$(\mathbf{C}^{i})_{kj}=\tilde{C}_{kj}-\tilde{C}_{ji}\tilde{C}_{ik}$.
Eq.~(\ref{reg}) results from $\frac{\partial S(\mathbf{J})}{\partial
J_{ij}}=0$ which gives
$\mathbf{\tilde{J}}^{T}\mathbf{C}^{i}=\boldsymbol{\Lambda}^{T}$,
where $\Lambda_{j}=-\lambda{\rm sgn}(J_{0,ij}) (j\neq i)$ and
$\tilde{J}_{j}=J_{ij}-J_{0,ij} (j\neq i)$. $J^{(i)}_{ij}$ represents
couplings around neuron $i$. To ensure the symmetry of the
couplings, we construct
$J_{ij}=\frac{1}{2}(J^{(i)}_{ij}+J^{(j)}_{ji})$ where $J^{(j)}_{ji}$
is also given by Eq.~(\ref{reg}) in which $i$ and $j$ are exchanged.
The inverse of $\mathbf{C}^{i}$ or $\mathbf{C}^{j}$ takes the
computation time of the order $\mathcal {O}(N^{3})$, much smaller
than that of the inverse of a susceptibility matrix
$\boldsymbol{\chi}$.
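The construction of Eq.~(\ref{reg}), including the symmetrization step $J_{ij}=\frac{1}{2}(J^{(i)}_{ij}+J^{(j)}_{ji})$, can be sketched numerically as follows (a minimal illustration rather than part of the derivation; it assumes the initial estimate $\mathbf{J}_{0}$ and the second moments $\mathbf{\tilde{C}}$ are given as NumPy arrays, and the function name is ours):

```python
import numpy as np

def regularized_couplings(J0, C_tilde, lam):
    """Hybrid l1-regularized update of Eq. (reg): for each neuron i,
    J^(i)_ij = J0_ij - lam * sum_k sgn(J0_ik) [C^i]^{-1}_{kj},
    with (C^i)_kj = C_kj - C_ji * C_ik, followed by symmetrization."""
    N = J0.shape[0]
    J = np.zeros_like(J0)
    for i in range(N):
        idx = [j for j in range(N) if j != i]      # couplings around neuron i
        Ci = (C_tilde[np.ix_(idx, idx)]
              - np.outer(C_tilde[idx, i], C_tilde[i, idx]))
        corr = np.sign(J0[i, idx]) @ np.linalg.inv(Ci)
        J[i, idx] = J0[i, idx] - lam * corr
    return 0.5 * (J + J.T)                         # J_ij = (J^(i)_ij + J^(j)_ji)/2
```

Each matrix inverse above costs $\mathcal{O}(N^{3})$, in line with the estimate given in the text.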
We remark here that minimizing the regularized cost function
Eq.~(\ref{apprS}) amounts to finding the optimal deviation
$\mathbf{\tilde{J}}$ from the reference solution $\mathbf{J}_{0}$. We
also assume that for small $\lambda$, the deviation is small as well.
Without the quadratic approximation in Eq.~(\ref{expand}), no closed
form solution exists for the optimal $\mathbf{J}$; however, the
solution can still be found by using convex optimization techniques.
An equation similar to Eq.~(\ref{reg}) has been derived in the
context of reconstructing a sparse
asymmetric, asynchronous Ising network~\cite{Roudi-12}. Here we
derive the inference equation (Eq.~(\ref{reg})) for the static
reconstruction of a sparse network. We will show in the next section
the efficiency of this hybrid strategy to improve the prediction of
the BA without regularization. To evaluate the efficiency, we define
the reconstruction error (root mean squared (rms) error) as
\begin{equation}\label{error}
\Delta_{J}=\left[\frac{2}{N(N-1)}\sum_{i<j}(J_{ij}^{*}-J_{ij}^{{\rm
true}})^{2}\right]^{1/2}
\end{equation}
where $J_{ij}^{*}$ is the inferred coupling while $J_{ij}^{{\rm
true}}$ is the true one constructed according to Eq.~(\ref{J_spar}).
Other performance measures for sparse network inference will also be
discussed in the following section.
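The two performance measures used in the next section can be sketched as follows (an illustrative implementation of ours; for the correct classification rate we assume, as in the ROC analysis below, that an inferred coupling is declared zero when its absolute value falls below a threshold $\delta$):

```python
import numpy as np

def rms_error(J_inf, J_true):
    """Reconstruction error Delta_J of Eq. (error), over distinct pairs i<j."""
    iu = np.triu_indices_from(J_inf, k=1)
    return np.sqrt(np.mean((J_inf[iu] - J_true[iu]) ** 2))

def ccr(J_inf, J_true, delta=0.01):
    """Correct classification rate: fraction of pairs whose coupling is
    classified correctly as positive / zero / negative."""
    classify = lambda J, d: np.sign(J) * (np.abs(J) > d)
    iu = np.triu_indices_from(J_inf, k=1)
    return np.mean(classify(J_inf[iu], delta) == classify(J_true[iu], 0.0))
```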
\section{Results and discussions}
\label{sec_result}
\begin{figure}
\includegraphics[bb=23 12 296 219,width=8.5cm]{figure01a.eps}\vskip .1cm
\includegraphics[bb=16 17 280 212,width=8.5cm]{figure01b.eps}\vskip .1cm
\caption{
(Color online) (a) Improvement of the prediction by $\ell_{1}$-regularized BA on sparse Hopfield
networks. The inference error by BA with prior knowledge of the sparseness of the network is also shown. Network size $N=100$, the memory load $\alpha=0.6$ and
the mean node degree $l=5$. Each data point is the average over
five random sparse networks. The regularization parameter has been
optimized. The inset gives the relative inference error defined as $\frac{\Delta_{J}^{BA}-\Delta_{J}^{reg}}{\Delta_{J}^{BA}}$ versus the inverse
temperature. (b) The receiver operating characteristic curve for
three
typical examples ($T=1.4$). Each data point corresponds to a value of
$\lambda$ for $\ell_{1}$-regularized BA. The solid symbol gives the result of BA without
regularization. Parameters for these three examples are
$(N,P,\alpha)=(40,3,0.6),(100,3,0.6),(100,5,1.2)$ respectively.
}\label{regBA}
\end{figure}
\begin{figure}
\includegraphics[bb=18 18 303 220,width=8.5cm]{figure02a.eps} \vskip .1cm
\includegraphics[bb=18 16 294 214,width=8.5cm]{figure02b.eps}\vskip .1cm
\includegraphics[bb=16 11 297 216,width=8.5cm]{figure02c.eps}\vskip .1cm
\caption{
(Color online) (a) Reconstruction error $\Delta_{J}$ versus the regularization
parameter $\lambda$ at $T=1.4$. Inference results on three random instances are
shown. The inference errors by applying BA without regularization on these three random
instances are $\Delta_{J}=0.006108,0.006049,0.005981$
respectively. (b) Correct classification rate (CCR) versus the regularization parameter $\lambda$ at $T=1.4$. The instances are the same as those in
(a). The CCR of BA without regularization are ${\rm CCR}=0.9224,0.9178,0.9162$ respectively. (c) The optimal $\lambda_{{\rm opt}}$ versus the
number of samples $M$ ($T=1.4$). Each point is the mean value over five
random realizations of the sparse Hopfield network. The standard
error is nearly zero and not shown. The linear fit shows that $\lambda_{{\rm
opt}}=\lambda_{0}M^{-\nu}$ with
$\lambda_{0}\simeq0.6883,\nu\simeq0.5001$ for rms error and $\lambda_{0}\simeq0.0675,\nu\simeq0.2743$ for CCR measure.
}\label{regpara}
\end{figure}
We simulate the sparsely connected Hopfield network of size $N=100$
at different temperatures. The average connectivity of each neuron is
$l=5$ and the memory load is $\alpha=0.6$. As shown in fig.~\ref{regBA}
(a), the $\ell_{1}$-regularization in Eq.~(\ref{reg}) does improve
the prediction on the sparse network reconstruction. The improvement
is evident in the presence of high quality data (e.g., in the high
temperature region, see the inset of fig.~\ref{regBA} (a)).
However, the relative inference error (improvement fraction) shown
in the inset of fig.~\ref{regBA} (a) gets smaller as the temperature
decreases. This may be due to insufficient
sampling~\cite{Huang-2012} of glassy states at low
temperatures. The glassy phase is typically characterized by a
complex energy landscape exhibiting numerous local minima. As a
result, the phase space we sample develops higher order (higher than
second order) correlations whose contributions to the regularized
cost function cannot be simply neglected, which explains the
behavior observed in the inset of fig.~\ref{regBA} (a). In this
case, the pseudo-likelihood method or more complex selective cluster
expansion can be used at the expense of larger computation times.
For comparison, we also show the inference error of BA with prior
knowledge of the network connectivity, i.e., the sparseness is known
in advance with only the true non-zero couplings to be predicted.
The comparison confirms that the $\mathbf{C}^{i}$ matrix obtained
from correlations in the data contains useful information about the
sparsity of the network, and this information can be extracted by
using $\ell_{1}$-regularization in Eq.~(\ref{reg}).
An accurate pruning of the network can be achieved by simple
thresholding (setting to zero some couplings whose absolute values
are below certain threshold) based on the improved prediction. The
receiver operating characteristic (ROC) curves are given in
fig.~\ref{regBA} (b) for three typical examples of different network
size, memory load and connectivity. The ROC curve is obtained by
plotting true positive rate (the number of inferred non-zero
couplings with correct sign divided by the total number of true
non-zero couplings) against true negative rate (the number of
inferred zero couplings divided by the total number of true zero
couplings). A threshold $\delta=0.01$ is used to get the inferred
zero couplings. The ROC curve in fig.~\ref{regBA} (b) shows that one
can push the inference accuracy towards the upper right corner (high
true positive rate as well as high true negative rate) by tuning the
regularization parameter. Note that BA without regularization
reports low true negative rate.
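The thresholding behind each point of the ROC curve can be sketched as follows (an illustrative helper of ours; a pair contributes to the true positive rate only if its inferred coupling survives the threshold \emph{and} carries the correct sign):

```python
import numpy as np

def roc_point(J_inf, J_true, delta=0.01):
    """(true negative rate, true positive rate) after setting to zero all
    inferred couplings with |J| < delta."""
    iu = np.triu_indices_from(J_inf, k=1)
    ji, jt = J_inf[iu], J_true[iu]
    nz = jt != 0                                   # true non-zero couplings
    tpr = np.mean((np.abs(ji[nz]) >= delta)
                  & (np.sign(ji[nz]) == np.sign(jt[nz])))
    tnr = np.mean(np.abs(ji[~nz]) < delta)         # true zero couplings
    return tnr, tpr
```

Sweeping $\lambda$ in Eq.~(\ref{reg}) and recording one such point per value traces out the curve of fig.~\ref{regBA} (b).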
We also explore the effects of the regularization parameter on the
reconstruction, which are reported in fig.~\ref{regpara} (a). With
increasing $\lambda$, the inference error first decreases, then
reaches a minimal value followed by an increasing trend in the range
we plot in fig.~\ref{regpara} (a). This implies that the optimal
regularization parameter guides our inference procedure to a sparse
network closest to the original one. The inference quality can also
be measured by the fraction of edges $(ij)$ where the coupling
strength is classified correctly as \textquoteleft
positive\textquoteright, \textquoteleft zero\textquoteright or
\textquoteleft negative\textquoteright. We call this quantity
correct classification rate (CCR). Results for three typical
examples are reported in fig.~\ref{regpara} (b). With increasing
$\lambda$, CCR first increases and then decreases. The optimal
regularization parameter corresponding to the maximum is slightly
different from that in fig.~\ref{regpara} (a). By using regularized
BA (Eq.~(\ref{reg})), one can achieve a much higher value of CCR,
and furthermore the computational cost is not heavy. Interestingly,
the optimal value of $\lambda$ yielding the lowest inference error
(rms error) has the order of $\mathcal {O}(\sqrt{\frac{1}{M}})$ for
fixed network size (usually $M\gg N$), which is consistent with that
found in Refs.~\cite{Wain-2010,Bento-2011}. We verify this scaling
form by varying $M$ and plotting the optimal $\lambda$ in
fig.~\ref{regpara} (c). The linear fit implies that the scaling
exponent $\nu\simeq0.5$. However, this scaling exponent depends on
the performance measure. Taking the CCR measure yields a smaller
value $\nu\simeq0.2743$, as shown in fig.~\ref{regpara} (c) as well.
We also find that the magnitude of the optimal regularization
parameter is less sensitive to specific instances and other
parameters (e.g., the temperature, memory load or network size),
since the number of samples $M$ dominates its order of
magnitude. The specific optimal value differs slightly
across different instances of the sparse network in the low
temperature region, where its mean value shifts to a somewhat larger
value for the rms error measure, or a somewhat smaller value for the
CCR measure, as the temperature further decreases. Since $M$ sets the
order of magnitude, it provides a useful guide for choosing the
strength of the regularization parameter. In the real
application, the true coupling vector is {\it a priori} unknown. In
this case, the regularization parameter can be chosen to make the
difference between the measured moments and those produced by the
reconstructed Ising model as small as possible.
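The scaling fit reported in fig.~\ref{regpara} (c) amounts to linear regression in log-log coordinates; a minimal sketch (our own helper, assuming the optimal $\lambda$ has already been located for each sample size $M$):

```python
import numpy as np

def fit_scaling(M, lam_opt):
    """Fit lambda_opt = lambda_0 * M**(-nu) via a linear fit of
    log(lambda_opt) against log(M); returns (lambda_0, nu)."""
    slope, intercept = np.polyfit(np.log(M), np.log(lam_opt), 1)
    return np.exp(intercept), -slope
```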
\begin{figure}
\includegraphics[bb=16 12 285 216,width=8.5cm]{figure03a.eps}\vskip .1cm
\includegraphics[bb=18 17 285 216,width=8.5cm]{figure03b.eps}\vskip .1cm
\caption{
(Color online) Comparison of performance measured by misclassification rate. Each data point is the average over
five random sparse networks. The regularization parameter has been
optimized. (a) Misclassification rate versus inverse temperature. Network size $N=100$, the memory load $\alpha=0.6$ and
the mean node degree $l=5$. (b) Misclassification rate versus memory load. Network size $N=100$, temperature $T=1.4$ and $P=5$.
}\label{MisCR}
\end{figure}
Finally, we give the comparison of performance measured by
misclassification rate in fig.~\ref{MisCR}. According to the above
definition, the misclassification rate equals $1-{\rm CCR}$. A low
misclassification rate is preferred in the sparse network inference.
Fig.~\ref{MisCR} (a) shows the performance versus inverse
temperature. The misclassification rate is lowered by a substantial
amount using the hybrid strategy. Especially in the high temperature
region, the error approaches zero while BA still yields an error of
the order of $\mathcal {O}(10^{-2})$. As displayed in
fig.~\ref{MisCR} (b), the hybrid strategy is also superior to BA
when the memory load is varied, although the misclassification rate
grows with the memory load. Compared with BA, the
$\ell_{1}$-regularized BA yields a much slower growth of the error
when $\alpha$ increases. Even at the high memory load $\alpha=1.4$,
the hybrid strategy is able to reconstruct the network with an error
of $4.3\%$, while at the same memory load the error of BA is as large
as $18.9\%$. Note that as $\alpha$ changes, the average connectivity
also changes. Fig.~\ref{MisCR} (b) illustrates that our simple
inference strategy is also robust to different mean node degrees.
\section{Conclusion}
\label{sec_Sum}
We propose an efficient hybrid inference strategy for reconstructing
the sparse Hopfield network. This strategy combines Bethe
approximation and the $\ell_{1}$-regularization by expanding the
objective function (negative log-likelihood function) up to the
second order of the coupling fluctuation around its (near) optimal
value. The hybrid strategy is simple in implementation without heavy
computational cost, yet improves the prediction by zeroing couplings
which are actually not present in the network (see fig.~\ref{regBA}
and fig.~\ref{MisCR}). We can control the accuracy by tuning the
regularization parameter. The magnitude of the optimal
regularization parameter is determined by the number of independent
samples $M$ as $\lambda_{{\rm opt}}\sim M^{-\nu}$. The value of the
scaling exponent depends on the performance measure. $\nu\simeq0.5$
for root mean squared error measure while $\nu\simeq0.2743$ for
misclassification rate measure. By varying the value of the
regularization parameter, we show that the reconstruction (rms)
error first decreases and then increases after the lowest error is
reached. A similar phenomenon is observed for the change of
misclassification rate with the regularization parameter. We observe
this phenomenon in the sparse Hopfield network reconstruction, and
this behavior may be different in other cases~\cite{Cocco-12}. The
efficiency of this strategy is demonstrated for the sparse Hopfield
model, but this approximated reconstruction method is generally
applicable to other diluted mean field models if we can first find a
good solution (yielding low inference error) to the inverse Ising
problem without regularization.
\section*{Acknowledgments}
Helpful discussions with Yoshiyuki Kabashima are gratefully acknowledged. This work was supported by the JSPS Fellowship for Foreign
Researchers (Grant No. $24\cdot02049$).
Almost all textbooks on quantum mechanics consider only measurements of a special
kind---namely, measurements of \emph{observables}. An observable $\mathcal{O}$ corresponds to
some Hermitian operator $\hat O$. A measuring device measures the observable $\mathcal{O}$, if
(i) the only possible results of measurements are eigenvalues of $\hat O$, and (ii) if the
state $|\psi\rangle$ of the system under measurement is an eigenstate of $\hat O$, one can
predict the measurement result \emph{with certainty} to be the eigenvalue of $\hat O$
corresponding to $|\psi\rangle$. When $|\psi\rangle$ is not an eigenvector of $\hat O$, the
measurement result cannot be known in advance, but the postulates of quantum mechanics allow one to predict the
\emph{probabilities} of results. Namely, the probability $p_\lambda\,(|\psi\rangle)$ of the result $\lambda$ is
\begin{equation} \label{eq:projective}
p_\lambda(|\psi\rangle) = \langle\psi | \hat P_\lambda | \psi\rangle,
\end{equation}
where $\hat P_\lambda$ is the projector onto the eigenspace of $\hat O$ corresponding to the eigenvalue $\lambda$.
In the simplest case of non-degenerate eigenvalue $\lambda$, the projector $\hat P_\lambda$ is equal to
$|\varphi_\lambda\rangle\langle\varphi_\lambda|$,
where $|\varphi_\lambda\rangle$ is the eigenvector, and Eq.~(\ref{eq:projective}) turns into the Born rule:
\begin{equation} \label{eq:born-rule}
p_\lambda(|\psi\rangle) =
\langle\psi | \varphi_\lambda\rangle\langle\varphi_\lambda | \psi\rangle
\equiv \bigl| \langle\varphi_\lambda|\psi\rangle \bigr|^2 .
\end{equation}
Unlike this special class of measurements, \emph{general measurements} are not associated with any observables,
and probabilities of their results do not obey literally Eq.~(\ref{eq:projective}). Instead, the probability
$p_\lambda(|\psi\rangle)$ of some result $\lambda$ of a general measurement can be expressed as
\begin{equation} \label{eq:povm}
p_\lambda(|\psi\rangle) = \langle\psi | \hat A_\lambda | \psi\rangle ,
\end{equation}
where $\hat A_\lambda$ is some Hermitian operator (not necessarily a projector).
The set of operators $\{\hat A_\lambda\}$ obeys the following requirements, which are consequences of
properties of probability:
\newline (1) eigenvalues of the operators $\hat A_\lambda$ lie within the range $[0,1]$;
\newline (2) the sum $\sum_\lambda \hat A_\lambda$ (over all measurement results $\lambda$) is equal to the
identity operator.
\newline The set $\{\hat A_\lambda\}$ satisfying these requirements is usually called a \emph{positive-operator
valued measure} (POVM)~\cite{Peres,Nielsen}.
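As an illustration (not part of the original argument), the two POVM requirements and the probability rule Eq.~(\ref{eq:povm}) can be checked numerically; the helper names below are ours:

```python
import numpy as np

def is_povm(ops, tol=1e-10):
    """Check requirements (1) and (2): every element is Hermitian with
    eigenvalues in [0, 1], and all elements sum to the identity."""
    for A in ops:
        if not np.allclose(A, A.conj().T, atol=tol):
            return False
        w = np.linalg.eigvalsh(A)
        if w.min() < -tol or w.max() > 1 + tol:
            return False
    return np.allclose(sum(ops), np.eye(ops[0].shape[0]), atol=tol)

def probability(psi, A):
    """p_lambda(|psi>) = <psi|A_lambda|psi> for a normalized state vector."""
    return float(np.real(psi.conj() @ A @ psi))
```

For a complete set of orthogonal projectors, `is_povm` returns true and `probability` reproduces Eq.~(\ref{eq:projective}).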
Such general measurements, described by POVMs via Eq.~(\ref{eq:povm}), occur in various contexts:
as \emph{indirect measurements}, when a system~$A$ (to be measured) first interacts with another quantum
system~$B$, and actual measurement is then performed on the
system~$B$~\cite{Peres,Nielsen,Braginsky};
as \emph{imperfect measurements}, where a result of a measurement is subjected to a
random error~\cite{Braginsky,Gardiner};
as \emph{continuous} and \emph{weak measurements}~\cite{Jacobs2006}, etc.
In this paper, we will show that the rule~(\ref{eq:povm}) for probabilities of results of general measurements
is a simple consequence of Gleason's theorem. This theorem~\cite{Gleason1957,Dvurecenskij} is a key statement
for quantum logics, and also can be considered as a justification of the Born rule~\cite{Dickson2011}. But the
usual way of getting the probability rule from Gleason's theorem requires \emph{non-contextuality} to be
postulated~\cite{Peres,Dickson2011}. We will show that it is possible to avoid the demand of non-contextuality.
We will use Gleason's theorem in the following (somewhat restricted) formulation. Let $p\,(|e\rangle)$ be
a real-valued function of unit vectors $|e\rangle$ in $N$-dimensional Hilbert space. Suppose that
\newline (1) $N \geq 3$,
\newline (2) the function $p$ is non-negative,
\newline (3) the value of the sum
\begin{equation} \label{eq:gleason-condition}
\sum_{n=1}^N p\,(|e_n\rangle),
\end{equation}
where unit vectors $|e_1\rangle, |e_2\rangle, \ldots |e_N\rangle$ are all mutually orthogonal,
does not depend on the choice of the unit vectors.
\newline Then, Gleason's theorem states that the function $p\,(|e\rangle)$ can be represented as follows:
\begin{equation} \label{eq:gleason-answer}
p\,(|e\rangle) = \langle e | \hat A | e \rangle ,
\end{equation}
where $\hat A$ is some Hermitian operator in the $N$-dimensional Hilbert space.
We will apply Gleason's theorem in a quite unusual way. Typically, the argument $|e\rangle$ is considered
as a property of a measuring device, and the function $p$ as a characteristic of the measured system's state.
Our approach is completely reverse---we interpret the vector $|e\rangle$ as a system's state vector,
and refer the function $p$ to a measuring device. The main difficulty of this approach consists in satisfying
the third condition: namely, that the sum~(\ref{eq:gleason-condition}) is constant.
To show that this condition is fulfilled, we will exploit the concept
of environment-induced invariance, or \emph{envariance}, suggested by Zurek~\cite{Zurek2003,Zurek2005}. The idea of
\emph{envariance} can be formulated as follows: when two quantum systems are entangled,
one can \emph{undo} some actions with the first system, performing corresponding ``counter-actions'' with the second one.
Such a possibility of undoing means that these actions do not change the state of the first system (considered as alone)
and, in particular, do not change probabilities of results of any measurements on this system~\cite{Zurek2003,Zurek2005}.
Note that envariance was introduced by Zurek as a tool for understanding the nature of quantum probabilities,
and for derivation the Born rule.
For illustrative purposes, we will depict a quantum system as a moving particle, and a measuring device---as
a black box (that emphasizes our ignorance about construction of this device and about processes inside it), see Fig.~1.
When the particle reaches the black box, the lamp on the box either flashes for a moment, or stays dark.
One can introduce the probability $p\,(|\psi\rangle)$ of flashing the lamp, as the rate of flashes normalized to
the rate of particle arrivals, when all these particles are in the state $|\psi\rangle$. The main result of the
present paper consists in finding out that
\begin{equation} \label{eq:answer}
p\,(|\psi\rangle) = \langle \psi | \hat A | \psi \rangle ,
\end{equation}
with some Hermitian operator $\hat A$.
\begin{figure}
\includegraphics[width=0.72\linewidth]{fig1}
\caption{Probability $p\,(|\psi\rangle)$ of flashing the lamp as a result
of interaction of the quantum system in a state $|\psi\rangle$
with the measuring device.}
\label{fig1}
\end{figure}
Though we consider a measurement with only two possible results (flashing and non-flashing of the lamp),
this does not lead to any loss of generality. Indeed, one can associate flashing of the lamp with some particular
measurement result $\lambda$, and non-flashing---with all other results. Then, the function $p\,(|\psi\rangle)$
in Eq.~(\ref{eq:answer}) would be the same as the function $p_\lambda(|\psi\rangle)$ in Eq.~(\ref{eq:povm}).
So any proof of Eq.~(\ref{eq:answer}) also proves Eq.~(\ref{eq:povm}), i.~e. justifies the POVM nature of
every conceivable measurement.
For simplicity, we restrict ourselves to quantum systems with \emph{finite-dimensional} state spaces.
In Section~\ref{envariance} we will introduce a particular case of \emph{envariance}, which will be used later.
Section~\ref{preparation} illustrates preparation of a quantum system in a pure state by measurement of \emph{another}
system. In Section~\ref{thought-exp}, we will consider a series of thought experiments that
combine the features discussed in previous two sections. These experiments show that the function
$p\,(|\psi\rangle)$ obeys Eq.~(\ref{eq:identity1}). In Section~\ref{gleason-appl} we will demonstrate that
Eq.~(\ref{eq:identity1}) together with Gleason's theorem lead to the probability rule~(\ref{eq:answer}).
In Section~\ref{2d}, the special case of two-dimensional state space (not covered directly by Gleason's
theorem) is considered. Finally, Section~\ref{born-rule} shows how the projective postulate~(\ref{eq:projective})
(and the Born rule as a particular case) follows from the POVM probability
rule~(\ref{eq:answer}). Closing remarks are gathered in Section~\ref{conclusions}.
\section{Envariance} \label{envariance}
Let us consider an experiment shown in Fig.~\ref{fig2}. Two identical particles are prepared in the joint state
\begin{equation} \label{eq:psi-N}
|\Psi_N\rangle = \frac{|1\rangle|1\rangle + |2\rangle|2\rangle +\ldots+ |N\rangle|N\rangle }{\sqrt N} \,,
\end{equation}
$|1\rangle, |2\rangle, \ldots, |N\rangle$ being some orthonormal basis of the $N$-dimensional state space
of one particle.
After that, each particle passes through a \emph{quantum gate}, i.~e. a device that performs some unitary
transformation on the corresponding particle. For the first (upper) particle, an arbitrarily chosen
transformation $\hat U$ is used. For the second (lower) particle, the complex-conjugated transformation
$\hat U^*$ (whose matrix elements in the basis $|1\rangle, \ldots, |N\rangle$ are complex conjugates to
corresponding matrix elements of $\hat U$) is applied.
\begin{figure}
\includegraphics[width=0.86\linewidth]{fig2}
\caption{Illustration of \emph{envariance}: the initial joint state of two particles, perturbed by the gate
$U^*$, is restored after applying the gate $U$ to another particle.}
\label{fig2}
\end{figure}
Let us find the joint state $|\Psi'_N\rangle$ of two particles after passing through the gates:
\begin{equation} \label{eq:transform}
|\Psi'_N\rangle = \frac{1}{\sqrt N}
\sum_{n=1}^N \left(\hat U|n\rangle\right) \left(\hat U^*|n\rangle\right) ,
\end{equation}
where
\begin{equation} \label{eq:u-n}
\hat U|n\rangle = \sum_{k=1}^N U_{kn}|k\rangle
\end{equation}
($U_{kn}$ being matrix elements of $\hat U$), and
\begin{equation} \label{eq:u-star-n}
\hat U^*|n\rangle = \sum_{l=1}^N (U_{ln})^*|l\rangle .
\end{equation}
Substituting the latter two equalities into Eq.~(\ref{eq:transform}), and changing the order of summation,
one can get
\begin{equation} \label{eq:transform2}
|\Psi'_N\rangle = \frac{1}{\sqrt N}
\sum_{k=1}^N \sum_{l=1}^N \left( \sum_{n=1}^N U_{kn} (U_{ln})^* \right) |k\rangle|l\rangle .
\end{equation}
Due to unitarity of the matrix $U$, the expression in brackets in Eq.~(\ref{eq:transform2}) reduces to the
Kronecker delta $\delta_{kl}$:
\begin{equation} \label{eq:unitarity}
\sum_{n=1}^N U_{kn} (U_{ln})^* = \delta_{kl} .
\end{equation}
Hence,
\begin{equation} \label{eq:psi-pp-is-psi}
|\Psi'_N\rangle = \frac{1}{\sqrt N} \sum_{k=1}^N |k\rangle|k\rangle \equiv |\Psi_N\rangle .
\end{equation}
Thus, effects of two transformations $\hat U$ and $\hat U^*$, applied to different entangled particles
prepared in the joint state $|\Psi_N\rangle$, Eq.~(\ref{eq:psi-N}), cancel each other.
According to Zurek~\cite{Zurek2003,Zurek2005}, this means that each of these transformations does not change
the state of the particle on which it acts. In other words, the state of each particle is invariant
(``\emph{envariant}'') under such transformations.
The fact that the two-particle state $|\Psi_N\rangle$ remains unchanged when the particles pass through
the gates $U$ and $U^*$ (Fig.~\ref{fig2}) will be used in Section~\ref{thought-exp}.
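The cancellation expressed by Eq.~(\ref{eq:psi-pp-is-psi}) is also easy to confirm numerically (an illustrative check with our own helper names; two-particle states are flattened so that the coefficient of $|k\rangle|l\rangle$ sits at index $kN+l$):

```python
import numpy as np

def maximally_entangled(N):
    """|Psi_N> of Eq. (psi-N): coefficient tensor delta_kl / sqrt(N)."""
    return np.eye(N).reshape(-1) / np.sqrt(N)

def envariance_check(U):
    """True iff (U tensor U*) |Psi_N> equals |Psi_N>."""
    psi = maximally_entangled(U.shape[0])
    return np.allclose(np.kron(U, U.conj()) @ psi, psi)
```

The check succeeds for every unitary $\hat U$, and fails as soon as unitarity is violated.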
\section{Preparation by measurement} \label{preparation}
Let us consider a special measurement device (a ``meter'') that distinguishes the basis states
$|1\rangle, |2\rangle, \ldots, |N\rangle$ from each other. Therefore the following property is satisfied by definition:
{\bf Property a.} If a measured system was in the state $|k\rangle$ before measurement by the meter
($k\in\{1,2,\ldots,N\}$), then the measurement result will be $k$ with certainty.
It is commonly accepted that any such ``meter'' has to obey also the following property, which is the reversal of
Property a:
{\bf Property b.} The only state, for which the result of measurement by the meter can be predicted
to be $k$ with certainty, is the pure state $|k\rangle$.
Quantum mechanics also guarantees that the following statement is true:
{\bf Property c.} If two systems were in the joint state $|\Psi_N\rangle$, Eq.~(\ref{eq:psi-N}),
and each of them was measured
by a meter as shown in Fig.~\ref{fig3}, then the results of these two measurements must coincide.
\begin{figure}
\includegraphics[width=0.86\linewidth]{fig3}
\caption{After the measurement of the lower particle, the upper one appears in the state $|n\rangle$,
where $n$ is the measurement result.}
\label{fig3}
\end{figure}
Now consider a state of the upper particle in Fig.~\ref{fig3} just after the lower particle was measured.
Let $n$ be the measurement result obtained by the lower meter. Then, according to Property c, one can predict that
the result of the upper particle's measurement will also be $n$. Due to Property b, this means that the upper particle
is now in the pure state $|n\rangle$.
Hence, if a system of two particles was initially in the state $|\Psi_N\rangle$, and one particle is measured by a
meter, this measurement \emph{prepares} the other particle in the state $|n\rangle$, where $n$ is the result of
the measurement. This conclusion will be used in the next Section.
\section{Three thought experiments} \label{thought-exp}
Let us examine the measuring device, schematically represented in Fig.~\ref{fig1}, by means of the
equipment introduced in Figs.~\ref{fig2} and~\ref{fig3}.
Figure~\ref{fig4}a shows the experiment, in which the source, emitting pairs of particles prepared
in the state $|\Psi_N\rangle$, is combined with the measuring device.
One can define the probability $\mathcal{P}$ of flashing the lamp on the device, as a ratio of
the rate of flashing to the rate of emitting the particles by the source.
\begin{figure}
\includegraphics[width=\linewidth]{fig4}
\caption{Three thought experiments. The probability $\mathcal{P}$ of flashing the light on the
measuring device is the same in all three experiments.}
\label{fig4}
\end{figure}
In the next thought experiment, Fig.~\ref{fig4}b, two quantum gates $U$ and $U^*$,
the same as in Fig.~\ref{fig2}, are added on the way of particles.
It was shown in Section~\ref{envariance} that this combination of
gates leaves the state $|\Psi_N\rangle$ unchanged. Thus, from the point of view of the measuring device,
nothing was changed when the two gates were introduced, consequently the rate of flashing of the lamp
remains unchanged. Thus, the probability $\mathcal{P}$ of flashing the lamp on the measuring device
is the same in Figs.~\ref{fig4}a and~\ref{fig4}b.
The third thought experiment in this series (Fig.~\ref{fig4}c) differs from the second one (Fig.~\ref{fig4}b)
by removing the gate $U^*$ and inserting the ``meter'', which measures the state of the lower particle
in the basis $|1\rangle, |2\rangle, \ldots, |N\rangle$, as in Fig.~\ref{fig3}.
Since the difference between Fig.~\ref{fig4}b and Fig.~\ref{fig4}c is related to the lower branch of the
experimental setup only, it cannot influence any events in the upper branch. (Otherwise, it would be
possible to transfer information from the lower branch to the upper one, without any physical interaction
between the branches.) So we conclude that the probability $\mathcal{P}$ of flashing the lamp on the
measuring device in the third experiment is the same as in the second one.
Now we will express the value of $\mathcal{P}$ in the third experiment (Fig.~\ref{fig4}c) through the
function $p\,(|\psi\rangle)$ defined in Section~\ref{introduction} (a probability of lamp flashing for
the pure state $|\psi\rangle$ of the measured particle).
Let $a_n$ denote the probability that the meter at the lower branch gives the result $n$.
Also, let $\mathcal{P}_n$ denote the probability that this meter gives the result $n$
\emph{and} the lamp on the measuring device at the upper branch flashes. Obviously,
\begin{equation} \label{eq:sum-a}
\sum_{n=1}^N a_n = 1 ,
\end{equation}
\begin{equation} \label{eq:total-probability}
\sum_{n=1}^N \mathcal{P}_n = \mathcal{P} .
\end{equation}
If the lower meter gives the result $n$, then the upper particle appears in the state $|n\rangle$,
according to the discussion in Section~\ref{preparation}. After passing through the gate $U$,
the upper particle's state turns into $\hat U|n\rangle$. Thus, the (conditional) probability of lamp flashing
on the device at the higher branch is equal to $p\left(\hat U|n\rangle\right)$ \emph{if}
the lower meter's result is $n$. Then, according to the multiplicative rule for probabilities,
\begin{equation} \label{eq:experiment4}
\mathcal{P}_n = a_n \, p\left(\hat U|n\rangle\right) .
\end{equation}
Combination of Eqs.~(\ref{eq:total-probability}) and (\ref{eq:experiment4}) gives
\begin{equation} \label{eq:identity1}
\sum_{n=1}^N a_n \, p\left(\hat U|n\rangle\right) = \mathcal{P} ,
\end{equation}
which is simply a manifestation of the law of total probability applied to the experiment shown in Fig.~\ref{fig4}c.
In Eq.~(\ref{eq:identity1}), the value of $\mathcal{P}$ does not depend on the choice of the unitary operator $\hat U$,
because this value is the same as in the first experiment (Fig.~\ref{fig4}a), see discussion above.
Also the values of $a_n$ do not depend on $\hat U$.
In the next Section, we will derive Eq.~(\ref{eq:answer}) from Eq.~(\ref{eq:identity1}).
\section{Applying Gleason's theorem} \label{gleason-appl}
Let $\mathcal{E} = \{ |e_1\rangle, |e_2\rangle, \ldots, |e_N\rangle \}$ be an orthonormal set of vectors
in the $N$-dimensional Hilbert space: $\langle e_m | e_n \rangle = \delta_{mn}$. Then, it is possible to
construct a unitary operator $\hat U$ that transforms the set of basis vectors
$\{ |1\rangle, ..., |N\rangle \}$ into $\mathcal{E}$:
\begin{equation} \label{eq:u-construction}
\hat U|n\rangle = |e_n\rangle , \quad n = 1, \ldots, N .
\end{equation}
Any such unitary operator can be implemented (at least in a thought experiment) as a physical device (quantum gate).
Thus, Eq.~(\ref{eq:identity1}) is valid for the operator $\hat U$ defined by Eq.~(\ref{eq:u-construction}).
Substituting Eq.~(\ref{eq:u-construction}) into Eq.~(\ref{eq:identity1}), one can see that
\begin{equation} \label{eq:identity2}
\forall \; \mathcal{E}: \quad
\sum_{n=1}^N a_n \, p\left(|e_n\rangle\right) = \mathcal{P} ,
\end{equation}
where values of $a_n$ and $\mathcal{P}$ do not depend on the choice of $\mathcal{E}$.
Now we will see how to get rid of the unknown coefficients $a_n$. Let us first examine the simplest case of $N=2$.
Eq.~(\ref{eq:identity2}) for $N=2$ reads:
\begin{equation} \label{eq:identity2-2d-1}
a_1 \, p\left(|e_1\rangle\right) + a_2 \, p\left(|e_2\rangle\right) = \mathcal{P} .
\end{equation}
If $\{ |e_1\rangle, |e_2\rangle \}$ is an orthonormal set, then, obviously,
$\{ |e_2\rangle, |e_1\rangle \}$ is also an orthonormal set. Therefore Eq.~(\ref{eq:identity2-2d-1})
remains valid if one swaps the vectors $|e_1\rangle$ and $|e_2\rangle$:
\begin{equation} \label{eq:identity2-2d-2}
a_1 \, p\left(|e_2\rangle\right) + a_2 \, p\left(|e_1\rangle\right) = \mathcal{P} .
\end{equation}
Summing up Eqs.~(\ref{eq:identity2-2d-1}) and~(\ref{eq:identity2-2d-2}), and taking into account that
$a_1+a_2=1$, one can arrive at the equality
\begin{equation} \label{eq:identity2-2d-sum}
p\left(|e_1\rangle\right) + p\left(|e_2\rangle\right) = 2\,\mathcal{P} ,
\end{equation}
which is the desired relation between probabilities without coefficients $a_n$.
This recipe also works for arbitrary $N$. Indeed, any permutation of the $N$ vectors $|e_n\rangle$
in Eq.~(\ref{eq:identity2}) gives rise to a valid equality; therefore one can get $N!$ equalities for
a given set of vectors. In these $N!$ equalities, each of the $N$ vectors $|e_n\rangle$ enters $(N-1)!$ times with each of the $N$
factors $a_m$. Hence, the sum of all these equalities is
\begin{equation} \label{eq:identity2-sum}
(N-1)! \left(\sum_{m=1}^N a_m\right) \left(\sum_{n=1}^N p\left(|e_n\rangle\right)\right) = N! \; \mathcal{P} .
\end{equation}
Finally, taking Eq.~(\ref{eq:sum-a}) into account, one can get the following relation for the function
$p\,(|\psi\rangle)$:
\begin{equation} \label{eq:identity3}
\forall \; \mathcal{E}: \quad
\sum_{n=1}^N p\left(|e_n\rangle\right) = N \, \mathcal{P} .
\end{equation}
One can now see that, for $N\geq3$, the function $p\,(|\psi\rangle)$ obeys the conditions of Gleason's theorem.
Indeed, since $p$ is a probability, it is non-negative; and the sum~(\ref{eq:gleason-condition}) is equal to
$N \, \mathcal{P}$ and therefore does not depend on the choice of unit vectors $|e_n\rangle$.
Thus, one can apply Gleason's theorem, that completes the proof of Eq.~(\ref{eq:answer}) for the case $N\geq3$.
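As an aside, the basis-independence in Eq.~(\ref{eq:identity3}) is easy to illustrate numerically in the converse direction. The following sketch (an illustration only, not part of the proof) assumes $p\,(|\psi\rangle)=\langle\psi|\hat A|\psi\rangle$ for a fixed Hermitian $\hat A$ and checks that the sum of $p$ over any orthonormal basis equals $\mathrm{tr}\,\hat A$, whatever basis is chosen:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

# A fixed Hermitian operator; the frame function is p(psi) = <psi|A|psi>.
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (M + M.conj().T) / 2

def p(psi):
    return (psi.conj() @ A @ psi).real

def random_orthonormal_basis(n):
    # The QR decomposition of a random complex matrix yields a unitary Q,
    # whose columns form an orthonormal basis of C^n.
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(Z)
    return [Q[:, k] for k in range(n)]

# The sum over any orthonormal basis equals tr(A), independently of the basis.
sums = [sum(p(e) for e in random_orthonormal_basis(N)) for _ in range(5)]
print(np.allclose(sums, np.trace(A).real))  # True
```

(Gleason's theorem asserts the much deeper converse: every frame function with this invariance property arises from such an operator.)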
\section{Case of two-dimensional state space} \label{2d}
The above derivation of Eq.~(\ref{eq:answer}) does not cover the special case $N=2$.
Now we will see that this case can be reduced to the case $N=4$.
Consider a system of two non-interacting particles, each of them described by a two-dimensional state space.
The first particle is measured by a black-box device, as shown in Fig.~\ref{fig1}.
As above, we denote as $p\,(|\psi\rangle)$ the probability
of flashing the light on the device, when the state of the \emph{first particle} before its measurement is $|\psi\rangle$.
In addition, we denote as $P\,(|\Psi\rangle)$ the probability
of flashing the light, when the joint state of the \emph{two particles} is $|\Psi\rangle$ before measurement of the first
particle.
Since $|\Psi\rangle$ is a vector in four-dimensional space $(N=4)$, the above derivation of Eq.~(\ref{eq:answer})
is valid for the function $P\,(|\Psi\rangle)$. Hence, there is a Hermitian operator $\hat A$, acting in the
four-dimensional space and independent of $|\Psi\rangle$, such that
\begin{equation} \label{eq:p-4d}
P\,(|\Psi\rangle) = \langle \Psi | \hat A | \Psi \rangle .
\end{equation}
Let us consider the case when the first particle is in some pure state
\begin{equation} \label{eq:psi-2d}
|\psi\rangle = \alpha |1\rangle + \beta |2\rangle ,
\end{equation}
and the second particle is in the state $|1\rangle$. (Here $|1\rangle$ and $|2\rangle$ are some basis vectors in the
two-dimensional space.) Then, the joint state of both particles is
\begin{equation} \label{eq:psi-4d}
|\psi\rangle |1\rangle \equiv \alpha |11\rangle + \beta |21\rangle .
\end{equation}
The probability of flashing the light in this situation can be expressed both as $p\,(|\psi\rangle)$
and as $P\,(|\psi\rangle|1\rangle)$, therefore
\begin{equation} \label{eq:p-p}
p\,(|\psi\rangle) = P\,(|\psi\rangle|1\rangle) .
\end{equation}
Substituting Eqs.~(\ref{eq:psi-4d}) and~(\ref{eq:p-4d}) into Eq.~(\ref{eq:p-p}),
one can express the probability $p\,(|\psi\rangle)$ as follows:
\begin{equation} \label{eq:p-4d-op}
p\,(|\psi\rangle) =
\left( \alpha^* \langle11| + \beta^* \langle21| \right) \hat A \left( \alpha |11\rangle + \beta |21\rangle \right) ,
\end{equation}
i.~e.
\begin{equation} \label{eq:p-4d-matrix}
p\,(|\psi\rangle) =
\left( \alpha^* \; \beta^* \right)
\left( \begin{array}{cc}
\langle11|\hat A|11\rangle & \langle11|\hat A|21\rangle \\
\langle21|\hat A|11\rangle & \langle21|\hat A|21\rangle
\end{array} \right)
\left( \begin{array}{l} \alpha \\ \beta \end{array} \right) .
\end{equation}
The $2\times2$ matrix in the latter equation can be considered as a representation of some Hermitian operator
$\hat A_2$, acting in the two-dimensional state space of one particle. Therefore one can rewrite
Eq.~(\ref{eq:p-4d-matrix}) in the operator form:
\begin{equation} \label{eq:p-2d}
p\,(|\psi\rangle) = \langle \psi | \hat A_2 | \psi \rangle ,
\end{equation}
where $\hat A_2$ does not depend on $|\psi\rangle$ (i.~e. on $\alpha$ and $\beta$).
The derivation of Eq.~(\ref{eq:p-2d}), given in this Section, justifies Eq.~(\ref{eq:answer}) for the special case
$N=2$, where $N$ is the dimensionality of the state space of the measured system.
Therefore Eq.~(\ref{eq:answer}) is now proven for any measurement on any quantum system with finite $N$.
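The reduction from $N=4$ to $N=2$ can be illustrated numerically: for an arbitrary Hermitian operator $\hat A$ on the four-dimensional space, the probability $P\,(|\psi\rangle|1\rangle)$ of Eq.~(\ref{eq:p-4d}) coincides with $\langle\psi|\hat A_2|\psi\rangle$, where $\hat A_2$ is the $2\times2$ submatrix of Eq.~(\ref{eq:p-4d-matrix}). A minimal sketch (illustration only; the basis ordering $|11\rangle,|12\rangle,|21\rangle,|22\rangle$ is a convention of the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# A Hermitian operator on the four-dimensional two-particle space.
# Assumed basis ordering: |11>, |12>, |21>, |22>.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# A_2 consists of <11|A|11>, <11|A|21>, <21|A|11>, <21|A|21>,
# i.e. rows/columns 0 and 2 in the ordering above.
A2 = A[np.ix_([0, 2], [0, 2])]

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
alpha, beta = psi

# The joint state |psi>|1> = alpha|11> + beta|21>.
Psi = np.zeros(4, dtype=complex)
Psi[0], Psi[2] = alpha, beta

P_joint = (Psi.conj() @ A @ Psi).real    # P(|psi>|1>), as in Eq. (p-4d)
p_single = (psi.conj() @ A2 @ psi).real  # <psi|A_2|psi>, as in Eq. (p-2d)
print(np.isclose(P_joint, p_single))  # True
```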
\section{From POVM to the Born rule} \label{born-rule}
Consider some device that measures an observable $\mathcal{O}$.
Let $p\,(|\psi\rangle)$ be the probability of getting some fixed result $\lambda$, when a
system in a state $|\psi\rangle$ is measured by this device.
It is already proven in Sections~\ref{thought-exp}--\ref{2d},
that the function $p\,(|\psi\rangle)$ can be represented in the
form of Eq.~(\ref{eq:answer}), where $\hat A$ is some Hermitian operator. In this Section we will see that
$\hat A$ is a projector onto an eigenspace of the operator $\hat O$, which describes the observable~$\mathcal{O}$.
Let the matrix $A_{mn}$ represent the operator $\hat A$ in a basis $|\varphi_1\rangle,\ldots,|\varphi_N\rangle$
of eigenvectors of $\hat O$:
\begin{equation} \label{eq:a-mn-def}
A_{mn} \equiv \langle \varphi_m | \hat A | \varphi_n \rangle .
\end{equation}
Then, according to Eqs.~(\ref{eq:answer}) and~(\ref{eq:a-mn-def}), probabilities $p\,(|\varphi_n\rangle)$
are equal to diagonal matrix elements $A_{nn}$:
\begin{equation} \label{eq:p-phi-n}
p\,(|\varphi_n\rangle) = \langle \varphi_n | \hat A | \varphi_n \rangle = A_{nn} .
\end{equation}
On the other hand, if the state of the measured system is an eigenstate of $\hat O$, then the measurement result
must be equal to the corresponding eigenvalue; therefore $p\,(|\varphi_n\rangle)$ is 1 if the $n$th
eigenvalue is equal to $\lambda$ (i.~e. if $\hat O |\varphi_n\rangle = \lambda |\varphi_n\rangle$), and 0 otherwise.
Hence,
\begin{equation} \label{eq:a-nn}
A_{nn} = \left\{
\begin{array}{l}
1 \quad \mbox{if } \hat O |\varphi_n\rangle = \lambda |\varphi_n\rangle, \\
0 \quad \mbox{otherwise.} \\
\end{array} \right.
\end{equation}
Now we will show that the non-diagonal matrix elements $A_{mn}$ vanish. For this purpose, let us consider
the eigenvalues $a_1,\ldots,a_N$ of the operator $\hat A$. Since the trace of a matrix is invariant under a change of basis,
\begin{equation} \label{eq:trace}
\sum_n A_{nn} = \sum_k a_k .
\end{equation}
Analogously, since the sum of the squared absolute values of all matrix elements is also invariant under a change of basis,
\begin{equation} \label{eq:sum-squares}
\sum_{m,n} |A_{mn}|^2 = \sum_k a_k^2 .
\end{equation}
Subtracting Eq.~(\ref{eq:trace}) from Eq.~(\ref{eq:sum-squares}), and taking into account that
$|A_{nn}|^2 = A_{nn}$ due to Eq.~(\ref{eq:a-nn}), one can see that
\begin{equation} \label{eq:sum-squares-nondiag}
\sum_{m \neq n} |A_{mn}|^2 = \sum_k (a_k^2-a_k) ,
\end{equation}
where summation in the left hand side is over all non-diagonal elements.
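The two invariants underlying Eqs.~(\ref{eq:trace}) and~(\ref{eq:sum-squares}) — the trace and the sum of squared absolute values of the matrix elements (the squared Frobenius norm) — can be checked numerically for an arbitrary Hermitian matrix. The following sketch is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4

M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (M + M.conj().T) / 2      # an arbitrary Hermitian matrix
a = np.linalg.eigvalsh(A)     # its (real) eigenvalues a_1, ..., a_N

trace_ok = np.isclose(np.trace(A).real, a.sum())              # Eq. (trace)
frob_ok = np.isclose(np.sum(np.abs(A) ** 2), np.sum(a ** 2))  # Eq. (sum-squares)
print(trace_ok, frob_ok)  # True True
```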
It is easy to see that all eigenvalues $a_k$ are non-negative. Indeed, if some eigenvalue $a_k$ were negative,
then the probability $p\,(|\chi_k\rangle)$, where $|\chi_k\rangle$ is the corresponding eigenvector,
would be negative too:
\begin{equation}
p\,(|\chi_k\rangle) = \langle\chi_k|\hat A|\chi_k\rangle = \langle\chi_k|a_k|\chi_k\rangle = a_k < 0 ,
\end{equation}
which is impossible. For a similar reason (a probability cannot exceed 1), no eigenvalue $a_k$ can be larger than 1.
Hence, all eigenvalues $a_k$ lie in the range $[0,1]$ and, consequently,
\begin{equation} \label{eq:eig-negative}
\forall k: \quad a_k^2-a_k \leq 0 .
\end{equation}
Therefore the right hand side of Eq.~(\ref{eq:sum-squares-nondiag}) is non-positive. But the
left hand side of Eq.~(\ref{eq:sum-squares-nondiag}) is non-negative, so both sides are equal to zero.
This proves that all non-diagonal matrix elements $A_{mn}$ vanish.
So the matrix $A_{mn}$ is diagonal, and the action of the operator $\hat A$ on basis vectors $|\varphi_n\rangle$
is defined by Eq.~(\ref{eq:a-nn}):
\begin{equation} \label{eq:a-action}
\hat A |\varphi_n\rangle = A_{nn} |\varphi_n\rangle =
\left\{
\begin{array}{l}
|\varphi_n\rangle \quad \mbox{if } \hat O |\varphi_n\rangle = \lambda |\varphi_n\rangle, \\
0 \qquad\; \mbox{otherwise.} \\
\end{array} \right.
\end{equation}
The operator $\hat A$ is, consequently, the projector onto the eigenspace of $\hat O$
with eigenvalue~$\lambda$. Thus, we have seen that the postulate~(\ref{eq:projective}),
together with the Born rule~(\ref{eq:born-rule}) in the particular case of a non-degenerate eigenvalue~$\lambda$,
are consequences of Eq.~(\ref{eq:answer}).
\section{Conclusions} \label{conclusions}
In the main part of this paper, Sections~\ref{thought-exp}--\ref{2d}, we have presented a proof that
the probability of any result of any measurement on a quantum system, as a function of the system's
state vector $|\psi\rangle$, obeys Eq.~(\ref{eq:answer}). (For simplicity, only systems with
finite-dimensional state spaces were considered.) This justifies the statement that the most general
type of measurement in quantum theory is one described by a POVM.
It is important to note that this proof of Eq.~(\ref{eq:answer}) avoids using the Born rule (or any other form
of probabilistic postulate). This opens the possibility of deriving the Born rule non-circularly from
Eq.~(\ref{eq:answer}). Such a derivation is given in Section~\ref{born-rule}.
Note that, despite many efforts aiming to derive the Born rule (see Refs.~\cite{Dickson2011,Schlosshauer2005}
for review), there is no generally accepted derivation so far. Therefore the present approach may be helpful,
due to its simplicity: its entire essence is contained in the three thought experiments shown in Fig.~\ref{fig4}.
Finally, let us emphasize the role of entanglement in the present derivation.
Consideration of an \emph{entangled state of two particles} $|\Psi_N\rangle$, Eq.~(\ref{eq:psi-N}), has helped us
to establish the probability rule for \emph{pure states of one particle alone} (not entangled with any environment).
It seems that entanglement is a necessary concept for establishing the probabilistic nature
of quantum theory.
\section{Introduction}
The origin of this article is rooted in two conjectures by Beck which appeared in The On-Line Encyclopedia of Integer Sequences \cite{B1} on the pages for sequences A090867 and A265251. The conjectures, as formulated by Beck, were proved by Andrews in \cite{A17} using generating functions. Certain generalizations and combinatorial proofs appeared in \cite{FT17} and \cite{Y18}. Combinatorial proofs of the original conjectures were also given in \cite{BB19}. Several additional similar identities were proved in the last two years. See for example \cite{AB19, LW19, LW19b,LW19c}. In order to define Beck-type identities, we first introduce the necessary terminology and notation.
In this article $\mathbb N$ denotes the set of positive integers. Given a non-negative integer $n$, a \textit{partition} $\lambda$ of $n$ is a non-increasing sequence of positive integers $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$ that add up to $n$, i.e., $\displaystyle\sum_{i=1}^k\lambda_i=n$. Thus, if $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$ is a partition, we have $
\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_k$. The numbers $\lambda_i$ are called the \textit{parts} of $\lambda$ and $n$ is called the \textit{size} of $\lambda$. The number of parts of the partition is called the \textit{length} of $\lambda$ and is denoted by $\ell(\lambda)$.
If $\lambda, \mu$ are two arbitrary partitions, we denote by $\lambda\cup \mu$ the partition obtained by taking all parts of $\lambda$ and all parts of $\mu$ and rearranging them to form a partition. For example, if $\lambda=(5, 5, 3, 2, 2, 1)$ and $\mu=(7, 5, 3, 3)$, then $\lambda\cup \mu=(7,5,5,5,3,3,3,2,2,1)$.
When convenient, we use the exponential notation for parts in a partition. The exponent of a part is the multiplicity of the part in the partition. For example, $(7, 5^2, 4, 3^3, 1^2)$ denotes the partition $(7, 5,5, 4, 3, 3, 3, 1, 1)$. It will be clear from the context when exponents refer to multiplicities and when they are exponents in the usual sense.
\medskip
Let $S_1$ and $S_2$ be subsets of the positive integers. We define
$\OO(n)$ to be the set of partitions of $n$ with all parts from the set $S_2$ and $\mathcal D_r(n)$ to be the set of partitions of $n$ with parts in $S_1$ repeated at most $r-1$ times. Subbarao \cite{S71} proved the following theorem.
\begin{theorem} \label{sub-thm} $|\OO(n)|=|\mathcal D_r(n)|$ for all non-negative integers $n$ if and only if $rS_1\subseteq S_1$ and $S_2=S_1\setminus rS_1$. \end{theorem} Andrews \cite{A69} first discovered this result for $r=2$ and called a pair $(S_1,S_2)$ such that $|\mathcal{O}_2(n)|=|\mathcal{D}_2(n)|$ an \textit{Euler pair} since the pair $S_1=\mathbb N$ and $S_2=2\mathbb N-1$ gives Euler's identity. By analogy, Subbarao called a pair $(S_1,S_2)$ such that $|\OO(n)|=|\mathcal D_r(n)|$ an \textit{Euler pair of order $r$}.
\begin{example}[Subbarao \cite{S71}]\label{eg:epair}
Let
\begin{align*}
S_1 &= \{ m \in \mathbb N : \ m \equiv 1 \pmod{2} \};\\
S_2 &= \{ m \in \mathbb N : \ m \equiv \pm 1 \pmod{6} \}.
\end{align*}
\noindent Then $(S_1, S_2)$ is an Euler pair of order 3.
\end{example}
Note that Glaisher's bijection used to prove $|\OO(n)|=|\mathcal D_r(n)|$ when $S_1=\mathbb N$ and $S_2=2\mathbb N-1$ can be generalized to any Euler pair of order $r$. If $(S_1,S_2)$ is an Euler pair of order $r$, let $\varphi_r$ be the map from $\OO(n)$ to $\mathcal D_r(n)$ which repeatedly merges $r$ equal parts $a$ into a single part $ra$ until there are no parts repeated more than $r-1$ times. The map $\varphi_r$ is a bijection and we refer to it as Glaisher's bijection.
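The map $\varphi_r$ is easy to implement; the following sketch (illustrative, not part of any proof) repeatedly merges $r$ equal parts into one until every part occurs at most $r-1$ times. Note that the sets $S_1, S_2$ never enter the computation: only the merging rule does.

```python
from collections import Counter

def glaisher(partition, r):
    """Generalized Glaisher map phi_r: repeatedly merge r equal parts a
    into a single part r*a until every part occurs at most r-1 times."""
    counts = Counter(partition)
    while True:
        over = [part for part in counts if counts[part] >= r]
        if not over:
            return sorted(counts.elements(), reverse=True)
        for part in sorted(over):
            q, rem = divmod(counts[part], r)
            del counts[part]
            if rem:
                counts[part] = rem
            counts[r * part] += q

# Examples for the order-3 pair (r = 3):
print(glaisher([1] * 7, 3))        # [3, 3, 1]
print(glaisher([5] + [1] * 6, 3))  # [5, 3, 3]
print(glaisher([1] * 11, 3))       # [9, 1, 1]
```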
Given $(S_1,S_2)$, an Euler pair of order $r$, we refer to the elements in $S_2=S_1\setminus rS_1$ as \textit{primitive} elements and to the elements of $rS_1=S_1\setminus S_2$ as \textit{non-primitive} elements. We usually denote primitive parts by bold lower case letters, for example $\bf a$. Non-primitive parts are denoted by (non-bold) lower case letters. If $a$ is a non-primitive part of a partition and we want to emphasize the largest power $k$ of $r$ such that $a/r^k\in S_1$, we write $a=r^k\bf a$ with ${\bf a}\in S_2$ and $k \geq 1$.
Let $\mathcal O_{1,r}(n)$ be the set of partitions of $n$ with parts in $S_1$ such that the set of parts in $rS_1$ has exactly one element. Thus, a partition in $\mathcal O_{1,r}(n)$ has exactly one non-primitive part (possibly repeated). Let $\mathcal D_{1,r}(n)$ be the set of partitions of $n$ with parts in $S_1$ in which exactly one part is repeated at least $r$ times.
Let $a_r(n)=|\mathcal O_{1,r}(n)|$ and $c_r(n)=|\mathcal D_{1,r}(n)|$. Let $b_r(n)$ be the difference between the number of parts in all partitions in $\OO(n)$ and the number of parts in all partitions in $\mathcal D_r(n)$. Thus, $$b_r(n)=\sum_{\lambda\in \OO(n)} \ell(\lambda)-\sum_{\lambda\in \mathcal D_r(n)} \ell(\lambda).$$
Let $\mathcal T_r(n)$ be the subset of $\mathcal D_{1,r}(n)$ consisting of partitions of $n$ in which one part is repeated more than $r$ times but less than $2r$ times. Let $c'_r(n)=|\mathcal T_r(n)|$. Let $b'_r(n)$ be the difference between the total number of different parts in all partitions in $\mathcal D_r(n)$ and the total number of different parts in all partitions in $\OO(n)$ (i.e., in each partition, parts are counted without multiplicity). If we denote by $\bar\ell(\lambda)$ the number of different parts in $\lambda$, then $$b'_r(n)=\sum_{\lambda\in \mathcal D_r(n)} \bar\ell(\lambda)-\sum_{\lambda\in \OO(n)} \bar\ell(\lambda).$$
In \cite{B1}, Beck conjectured that, if $S_1=\mathbb N$ and $S_2=2\mathbb N-1$, then $$a_2(n)=b_2(n)=c_2(n)$$ and $$c'_2(n)=b'_2(n).$$ Andrews proved these identities in \cite{A17} using generating functions. Combinatorial proofs were given in \cite{BB19}. For the case $r\geq 2$, $S_1=\mathbb N$, and $S_2=\{k\in \mathbb N: k \not \equiv 0 \pmod r\}$, Fu and Tang \cite{FT17} gave generating function proofs for \begin{equation}\label{beck1} a_r(n)=\frac{1}{r-1}b_r(n)=c_r(n)\end{equation} and \begin{equation} \label{beck2} c'_r(n)=b'_r(n).\end{equation} They also proved combinatorially that $a_r(n)=c_r(n)$. In \cite{Y18}, Yang gave combinatorial proofs of \eqref{beck1} and \eqref{beck2} in the case $r\geq 2$, $S_1=\mathbb N$, and $S_2=\{k\in \mathbb N: k \not \equiv 0 \pmod r\}$.
Our main theorems establish the analogous result for all Euler pairs. We will prove the theorems both analytically and combinatorially. We refer to the results in Theorem \ref{T1} as first Beck-type identities and to the result in Theorem \ref{T2} as second Beck-type identity.
\begin{theorem} \label{T1} If $n,r $ are integers such that $n \geq 0$ and $r\geq 2$, and $(S_1, S_2)$ is an Euler pair of order $r$, then
\begin{enumerate}
\item[(i)] $a_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$ \\ \ \\
\item[(ii)] $c_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$. \end{enumerate}\end{theorem}
\begin{theorem} \label{T2} If $n,r $ are integers such that $n \geq 0$ and $r\geq 2$, and $(S_1, S_2)$ is an Euler pair of order $r$, then
$c'_r(n)=b'_r(n)$.
\end{theorem}
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}.
We have
$$\mathcal{O}_{3}(7) = \{ (7), (5, 1^2), (1^7) \}; \ \mathcal{D}_{3}(7) = \{(7), (5, 1^2), (3^2, 1)\};$$
and
$$\mathcal{O}_{1, 3}(7) = \{ (3^2, 1), (3, 1^4) \}; \ \mathcal{D}_{1, 3}(7) = \{(1^7), (3, 1^4)\}.$$
Glaisher's bijection gives us
\begin{center}
\begin{tabular}{ccc}
$(7)$ & $\stackrel{\varphi_3}{\longrightarrow}$ & $(7)$\\ \ \\ $(5, 1, 1)$ &$\longrightarrow$ & $(5, 1, 1)$ \\ \ \\ $(\underbrace{1, 1, 1}, \underbrace{1, 1, 1}, 1)$ &$\longrightarrow$ & (3, 3, 1)
\end{tabular}
\end{center}
We note that $$a_3(7) = |\mathcal{O}_{1, 3}(7)| = 2, \ c_3(7) = |\mathcal{D}_{1, 3}(7)| = 2, \mbox{ and } b_3(7) = 11 - 7 = 4.$$ Thus, $$\frac{1}{3-1}b_3(7) = a_3(7) = c_3(7).$$
If we restrict to counting different parts in partitions, we see that there are a total of $4$ different parts in the partitions of $\mathcal{O}_{3}(7)$ and a total of $5$ different parts in the partitions of $\mathcal{D}_{3}(7)$. Since $\mathcal{T}_3(7) = \{ (3, 1^4) \}$, we have $$b'_3(7) = 5 - 4 = 1 = |\mathcal{T}_3(7)|.$$
\end{example}
The analytic proofs of Theorems \ref{T1} and \ref{T2} are similar to the proofs in \cite{A17} and \cite{FT17}, while the combinatorial proofs follow the ideas of \cite{BB19}. However, the generalizations of the proofs in the aforementioned articles to Euler pairs of order $r\geq 2$ are important, as establishing the theorems in such generality leads to a multitude of new Beck-type identities. We reproduce several Euler pairs listed in \cite{S71}. For each pair $(S_1,S_2)$ below, the identity $|\OO(n)|=|\mathcal D_r(n)|$ has companion Beck-type identities as in Theorems \ref{T1} and \ref{T2}.
The following pairs $(S_1, S_2)$ are Euler pairs (of order $2$).
\begin{enumerate}
\item[(i)] $S_1=\{m\in \mathbb N : m \not \equiv 0 \pmod 3\}$;
\noindent $S_2=\{m\in \mathbb N : m \equiv 1,5 \pmod 6\}$.
In this case, the identity $|\mathcal{O}_2(n)|=|\mathcal{D}_2(n)|$ is known as Schur's identity.\medskip
\item[(ii)] $S_1=\{m\in \mathbb N : m \equiv 2,4,5 \pmod 6\}$;
\noindent $S_2=\{m\in \mathbb N : m \equiv 2,5, 11 \pmod{12}\}$.
In this case, the identity $|\mathcal{O}_2(n)|=|\mathcal{D}_2(n)|$ is known as G\"ollnitz's identity.\medskip
\item[(iii)] $S_1=\{m\in \mathbb N : m=x^2+2y^2 \mbox{ for some } x,y\in \mathbb Z\}$;
\noindent $S_2=\{m\in \mathbb N : m \equiv 1 \pmod 2 \mbox{ and } m=x^2+2y^2 \mbox{ for some } x,y\in \mathbb Z\}$.
\end{enumerate}\medskip
The following is an Euler pair of order $3$.
\begin{enumerate}
\item[(iv)] $S_1=\{m\in \mathbb N : m=x^2+xy+y^2 \mbox{ for some } x,y\in \mathbb Z\}$;
\noindent $S_2=\{m\in \mathbb N : \gcd(m,3)=1 \mbox{ and } m=x^2+xy+y^2 \mbox{ for some } x,y\in \mathbb Z\}$. \end{enumerate}\medskip
The following pairs $(S_1, S_2)$ are Euler pairs of order $r$.
\begin{enumerate}\medskip
\item[(v)] $S_1=\{m\in \mathbb N : m \equiv \pm r \pmod{r(r+1)}\}$;
\noindent $S_2=\{m\in \mathbb N : m \equiv \pm r \pmod{r(r+1)} \mbox{ and } m \not \equiv \pm r^2 \pmod{r^2(r+1)} \}$.\medskip
\item[(vi)] $S_1=\{m\in \mathbb N : m \equiv \pm r, -1 \pmod{r(r+1)}\}$;
\noindent $S_2=\{m\in \mathbb N : m \equiv \pm r, -1 \pmod{r(r+1)} \mbox{ and } m \not \equiv \pm r^2, -r \pmod{r^2(r+1)}\}$.
If $r=2$, this Euler pair becomes G\"ollnitz's pair in (ii) above. \medskip
\item[(vii)] Let $r+1$ be a prime.
\noindent $S_1=\{m\in \mathbb N : m \not \equiv 0 \pmod{r+1}\};$
\noindent $S_2=\{m\in \mathbb N : m \not \equiv tr, t(r+1) \pmod{r^2+r} \mbox{ for } 1\leq t\leq r \}.$
If $r=2$, this Euler pair becomes Schur's pair in (i) above. \medskip
\item[(viii)] Let $p$ be a prime and $r$ a quadratic residue $\pmod p$.
\noindent $S_1=\{m \in \mathbb N: \mbox{$m$ quadratic residue} \pmod p\};$
\noindent $S_2=\{m \in \mathbb N: m \not \equiv 0 \pmod r \mbox{ and $m$ quadratic residue} \pmod p\}.$
\end{enumerate}
Note that each case (v)-(viii) gives infinitely many Euler pairs and therefore leads to infinitely many new Beck-type identities. We also note that in (vii) we corrected a slight error in (3.4) of \cite{S71}.
\begin{example} Consider the Euler pair in (vii) above with $r=4$.
We have
$S_1 = \{ m \in \mathbb N : m \not\equiv 0 \pmod 5 \};$
\medskip
$S_2=\{m\in \mathbb N : m \not \equiv 4t, 5t \pmod{20} \mbox{ for } 1\leq t\leq 4 \}.$ \\
Then $(S_1, S_2)$ is an Euler pair of order $4$ and we have\\
$\mathcal{O}_{4}(7) = \{ (7), (6, 1), (3^2, 1), (3, 2^2), (3, 1^4), (3, 2, 1^2), (2^3, 1), (2^2, 1^3), (2, 1^5), (1^7) \};$
\medskip
$\mathcal{D}_{4}(7) = \{ (7), (6,1), (3^2, 1), (3, 2^2), (4, 3), (3, 2, 1^2), (2^3, 1), (2^2, 1^3), (4, 2, 1), (4, 1^3) \}.$
\medskip
We have $\mathcal{O}_{1, 4}(7) = \{ (4, 1^3), (4, 2, 1), (4, 3) \};$
\ \ \ $\mathcal{D}_{1, 4}(7) = \{ (1^7), (2, 1^5), (3, 1^4) \}.$\medskip
We note that $a_4(7) = |\mathcal{O}_{1, 4}(7)| = 3$, $c_4(7) = |\mathcal{D}_{1, 4}(7)| = 3$, and $b_4(7) = 40 - 31 = 9$, so $\frac{1}{4-1}b_4(7) = a_4(7) = c_4(7)$.
\medskip
If we restrict to counting different parts, we see that there are 19 different parts in the partitions of $\mathcal{O}_{4}(7)$ and 21 different parts in the partitions of $\mathcal{D}_{4}(7)$. So $b'_4(7) = 21 - 19 = 2 = |\mathcal{T}_4(7)|$ since $\mathcal{T}_4(7) = \{ (1^7), (2, 1^5) \}$.
\end{example}
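The counts in this example can be confirmed by brute-force enumeration. The following sketch (illustrative only) checks the order-$4$ pair above for $n=7$; note that listing only the elements of $S_1$ and $S_2$ up to $n$ suffices, since no larger number can occur as a part:

```python
from collections import Counter

def partitions(n, parts):
    """All partitions of n with parts drawn from `parts` (with repetition)."""
    result = []
    def rec(remaining, max_part, current):
        if remaining == 0:
            result.append(tuple(current))
            return
        for p in parts:
            if p <= min(remaining, max_part):
                rec(remaining - p, p, current + [p])
    rec(n, n, [])
    return result

def beck_counts(r, n, S1, S2):
    rS1 = {r * a for a in S1}
    all_S1 = partitions(n, S1)
    O = partitions(n, S2)                             # O_r(n)
    D = [lam for lam in all_S1
         if max(Counter(lam).values()) <= r - 1]      # D_r(n)
    # a_r(n): exactly one distinct non-primitive part (possibly repeated).
    a_r = sum(len({p for p in lam if p in rS1}) == 1 for lam in all_S1)
    # c_r(n): exactly one part repeated at least r times.
    c_r = sum(sum(m >= r for m in Counter(lam).values()) == 1 for lam in all_S1)
    b_r = sum(map(len, O)) - sum(map(len, D))         # b_r(n)
    return a_r, c_r, b_r

r, n = 4, 7
S1 = [m for m in range(1, n + 1) if m % 5 != 0]
forbidden = {4 * t % 20 for t in range(1, 5)} | {5 * t % 20 for t in range(1, 5)}
S2 = [m for m in S1 if m % 20 not in forbidden]

a4, c4, b4 = beck_counts(r, n, S1, S2)
print(a4, c4, b4)  # 3 3 9
```

Here $b_4(7) = 9 = (4-1)\,a_4(7) = (4-1)\,c_4(7)$, in agreement with Theorem~\ref{T1}.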
\section{Proofs of Theorem \ref{T1} }
\subsection{Analytic Proof}
In this article, whenever we work with $q$-series, we assume that $|q| < 1$. When working with two-variable generating functions, we assume both variables are complex numbers less than $1$ in absolute value. Then all series converge absolutely. The generating functions for $|\mathcal D_r(n)|$ and $|\OO(n)|$ are given by
\begin{align*}
\sum_{n = 0}^{\infty}|\mathcal D_r(n)|q^n & = \prod_{a \in S_1} (1 + q^a + q^{2a} + \dotsm +q^{(r-1)a})\\
& = \prod_{a \in S_1} \frac{1 - q^{ra}}{1 - q^a};
\end{align*}
and
\begin{align*}
\sum_{n = 0}^{\infty} |\OO(n)|q^n & = \prod_{{\bf b} \in S_2} \frac{1}{1 - q^{\bf b}}.
\end{align*}
To keep track of the number of parts used, we introduce a second variable $z$, where $|z| < 1$. Let $$\mathcal D_r(n; m)=\{\lambda\in \mathcal D_r(n) \mid \lambda \mbox{ has exactly $m$ parts}\}$$ and $$\OO(n; m)=\{\lambda\in \OO(n) \mid \lambda \mbox{ has exactly $m$ parts}\}.$$
Then, the generating functions for $|\mathcal D_r(n; m)|$ and $ |\OO(n; m)|$ are
\begin{align*}
f_{\mathcal D_r}(z,q):=\sum_{n = 0}^{\infty}\sum_{m = 0}^{\infty}|\mathcal D_r(n; m)|z^mq^n & = \prod_{a \in S_1} (1 + zq^a + z^{2}q^{2a} + \dotsm +z^{(r-1)}q^{(r-1)a})\\
& = \prod_{a \in S_1} \frac{1 - z^rq^{ra}}{1 - zq^a};
\end{align*}
and
\begin{align*}
f_{\OO}(z,q):=\sum_{n = 0}^{\infty}\sum_{m = 0}^{\infty} |\OO(n; m)| z^mq^n & = \prod_{{\bf b} \in S_2} \frac{1}{1 - zq^{\bf b}}.
\end{align*}
To obtain the generating function for the total number of parts in all partitions in $\mathcal D_r(n)$ (respectively $\OO(n)$),
we take the derivative with respect to $z$ of $f_{\mathcal D_r}(z,q)$ (respectively $f_{\OO}(z,q)$), and set $z = 1$. We obtain
\begin{align*}
& \left.\frac{\partial}{\partial z}\right|_{z=1}f_{\mathcal D_r}(z,q) \\ & \hspace*{1cm} = \prod_{a \in S_1} \frac{1 - q^{ra}}{1 - q^a}\sum_{a \in S_1} \frac{-rq^{ra}(1-q^a)+q^a(1-q^{ra})}{(1-q^a)(1-q^{ra})} \\ & \hspace*{1cm} = \prod_{a \in S_1} \frac{1 - q^{ra}}{1 - q^a}\sum_{a \in S_1} \left(\frac{q^a}{1-q^a}-\frac{q^{ra}}{1-q^{ra}}-(r-1)\frac{q^{ra}}{1-q^{ra}} \right)\\ & \hspace*{1cm} = \prod_{a \in S_1} \frac{1 - q^{ra}}{1 - q^a}\left(\sum_{a \in S_1} \sum_{\substack{ k=1\\ r\nmid k}}^{\infty}q^{ka}-\sum_{a \in S_1}(r-1)\frac{q^{ra}}{1-q^{ra}} \right)
;
\end{align*}
and
\begin{align*}
\left.\frac{\partial}{\partial z}\right|_{z=1}f_{\OO}(z,q)=\prod_{{\bf b} \in S_2} \frac{1}{1 - q^{\bf b}} \sum_{{\bf b} \in S_2} \frac{q^{\bf b}}{1 - q^{\bf b}}.
\end{align*}
Since $|\mathcal D_r(n)| = |\OO(n)|$, we have
\begin{align*}
\sum_{n=0}^{\infty} b_r(n) q^n & = \prod_{{\bf b} \in S_2} \frac{1}{1 - q^{\bf b}} \bigg( \sum_{{\bf b} \in S_2} \frac{q^{\bf b}}{1 - q^{\bf b}} - \sum_{\substack{ a\in S_1\\ k\in \mathbb N\\ r\nmid k}}q^{ka}+\sum_{a \in S_1}(r-1)\frac{q^{ra}}{1-q^{ra}} \bigg).
\end{align*}
Next we see that
\begin{align} \label{sets}
\sum_{{\bf b} \in S_2} \frac{q^{\bf b}}{1 - q^{\bf b}}= \sum_{\substack{ a\in S_1\\ k\in \mathbb N\\ r\nmid k}}q^{ka}.
\end{align}
We have
$$\sum_{\substack{ a\in S_1\\ k\in \mathbb N\\ r\nmid k}}q^{ka}=\sum_{\substack{ a\in S_1\\ k\in \mathbb N}}q^{ka}-\sum_{\substack{ a\in S_1\\ k\in \mathbb N}}q^{rka}= \sum_{a\in S_1}\frac{q^a}{1-q^a}- \sum_{a\in S_1}\frac{q^{ra}}{1-q^{ra}}=\sum_{{\bf b}\in S_2}\frac{q^{\bf b}}{1-q^{\bf b}}.$$ The last equality holds because $S_2=S_1\setminus rS_1$. Therefore, \eqref{sets} holds.
Then, the generating function for $b_r(n)$ becomes
\begin{align*}
\sum_{n=0}^{\infty} b_r(n) q^n & = \prod_{{\bf b} \in S_2} \frac{1}{1 - q^{\bf b}} \bigg( (r-1) \sum_{a \in S_1} \frac{q^{ra}}{1 - q^{ra}} \bigg)\\
& = \prod_{a \in S_1} (1 + q^a + q^{2a} + \dotsm +q^{(r-1)a}) \bigg( (r-1) \sum_{a \in S_1} \frac{q^{ra}}{1 - q^{ra}} \bigg).
\end{align*}
Therefore
\begin{align*}
\sum_{n=0}^{\infty} b_r(n) q^n & = \sum_{n = 0}^{\infty} (r-1) |\mathcal O_{1,r}(n)|q^n = \sum_{n = 0}^{\infty} (r-1) |\mathcal D_{1,r}(n)|q^n.
\end{align*}
\noindent Equating coefficients results in $a_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$ and $c_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$.
\subsection{Combinatorial Proof}
\subsubsection{$b_r(n)$ as the cardinality of a set of marked partitions}\label{b}
We start with another example of Glaisher's bijection.
\begin{example}\label{eg:gl}
We continue with the Euler pair of order $3$ from Example \ref{eg:epair}, but this time use $n = 11$.\\
$\mathcal{O}_{3}(11) = \{ (11), (7, 1^4), (5^2, 1), (5, 1^6), (1^{11}) \};$
$\mathcal{D}_{3}(11) = \{(11), (9, 1^2), (7, 3, 1), (5^2, 1), (5, 3^2) \}.$
\medskip
Thus, $b_3(11)= 27-13=14$. \medskip
Glaisher's bijection gives us
\begin{center}
\begin{tabular}{ccc}
$(11)$ & $\stackrel{\varphi_3}{\longrightarrow}$ & $(11)$\\ \ \\ $(7, \underbrace{1, 1, 1}, 1)$ &$\longrightarrow$ & $(7, 3, 1)$ \\ \ \\ $(5,5, 1)$ & ${\longrightarrow}$ & $(5,5, 1)$\\ \ \\ $(5, \underbrace{1, 1, 1,} \underbrace{1, 1, 1})$ & ${\longrightarrow}$ & $(5, 3,3)$\\ \ \\ $(\underbrace{\underbrace{1, 1, 1}, \underbrace{1, 1, 1}, \underbrace{1, 1, 1,}} 1, 1)$ & ${\longrightarrow}$ & $(9, 1,1)$
\end{tabular}
\end{center}
\end{example}
From Glaisher's bijection, it is clear that each partition $\lambda\in \OO(n)$ has at least as many parts as its image $\varphi_r(\lambda)\in \mathcal D_r(n)$.
When calculating $b_r(n)$, we sum up the differences in the number of parts in each pair ($\lambda,\varphi_r(\lambda))$. Write each part $\mu_j$ of $\mu=\varphi_r(\lambda)$ as $\mu_j=r^{k_j} \bf a$. Then, $\mu_j$ was obtained by merging $r^{k_j}$ parts equal to $\bf a$ in $\lambda$ and thus contributes an excess of $r^{k_j}-1$ parts to $b_r(n)$. Therefore, the difference between the number of parts of $\lambda$ and the number of parts of $\varphi_r(\lambda)$ is $\displaystyle \sum_{j=1}^{\ell(\varphi_r(\lambda))} (r^{k_j}-1)$.
\medskip
\begin{example} In the setting of Example \ref{eg:gl}, we see that $(7,3,1)$ contributes $2$ to $b_3(11)$, $(5,3,3)$ contributes $2+2$ to $b_3(11)$, and $(9,1,1)$ contributes $8$ to $b_3(11)$. Thus, $b_3(11)=2+4+8=14$. \end{example}
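The contribution formula $\sum_j (r^{k_j}-1)$ can be checked directly against Example~\ref{eg:gl}: summing the contributions of the five images under $\varphi_3$ recovers $b_3(11)=14$. A small illustrative sketch:

```python
def excess(mu, r, primitive):
    """Sum of r^{k_j} - 1 over the parts mu_j = r^{k_j} * a with a primitive.
    Assumes every part reduces to a primitive element after repeated division by r."""
    total = 0
    for part in mu:
        k = 0
        while part not in primitive:
            part //= r
            k += 1
        total += r ** k - 1
    return total

# Primitive elements (the set S_2) for the order-3 pair of Example eg:epair:
# odd numbers not divisible by 3.
primitive = {m for m in range(1, 12) if m % 2 == 1 and m % 3 != 0}

# The images under phi_3 of the five partitions in O_3(11).
images = [(11,), (7, 3, 1), (5, 5, 1), (5, 3, 3), (9, 1, 1)]
total_excess = sum(excess(mu, 3, primitive) for mu in images)
print(total_excess)  # 14, which equals b_3(11)
```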
\begin{definition} \label{marked} Given $(S_1,S_2)$, an Euler pair of order $r$, we define the set $\mathcal{MD}_{1,r}(n)$ of \textit{marked partitions} of $n$ as the set of partitions in $\mathcal D_r(n)$ such that exactly one part of the form $r^k\bf a$ with $k\geq 1$ has as index an integer $t$ satisfying $1\leq t\leq r^k-1$. If $\mu\in\mathcal D_r(n)$ has parts $\mu_i=\mu_j=r^k\bf a$, $k\geq 1$, with $i\neq j$, the marked partition in which $\mu_i$ has index $t$ is considered different from the marked partition in which $\mu_j$ has index $t$.
\end{definition}
Note that marked partitions, by definition, have a non-primitive part.
Then, from the discussion above we have the following interpretation for $b_r(n)$.
\begin{proposition} Let $n,r$ be integers such that $n\geq 1$ and $r\geq 2$. Then, $$b_r(n)=|\mathcal{MD}_{1,r}(n)|.$$ \end{proposition}
\begin{definition} An \textit{$r$-word} $w$ is a sequence of letters from the alphabet $\{0,1, \ldots, r-1\}$. The \textit{length} of an $r$-word $w$, denoted $\ell(w)$, is the number of letters in $w$. We refer to \textit{position} $i$ in $w$ as the $i$th entry from the right, where the rightmost entry is counted as position $0$.
\end{definition}
Note that leading zeros are allowed and are recorded. For example, if $r=5$, the $5$-words $032$ and $32$ are different even though in base $5$ they both represent $17$. We have $\ell(032)=3$ and $\ell(32)=2$. The empty $r$-word has length $0$ and is denoted by $\emptyset$.
\begin{definition} Given $(S_1,S_2)$, an Euler pair of order $r$, we define the set $\mathcal{DD}_r(n)$ of \textit{$r$-decorated partitions} as the set of partitions in $\mathcal D_r(n)$ with at least one non-primitive part such that exactly one non-primitive part $r^k\bf a$ is decorated with an index $w$, where $w$ is an $r$-word satisfying $0\leq\ell(w)\leq k-1$. As in Definition \ref{marked}, if $\mu\in\mathcal D_r(n)$ has non-primitive parts $\mu_i=\mu_j=r^k\bf a$ with $i\neq j$, the decorated partition in which $\mu_i$ has index $w$ is considered different from the decorated partition in which $\mu_j$ has index $w$.\end{definition}
Thus, for each part $\mu_i=r^{k_i}\bf a$ of a partition $\mu\in \mathcal D_r(n)$ there are $\displaystyle \frac{r^{k_i}-1}{r-1}$ possible indices and for each partition $\mu\in \mathcal D_r(n)$ with at least one non-primitive part there are precisely $\displaystyle\frac{1}{r-1} \sum_{j=1}^{\ell(\mu)} (r^{k_j}-1)$ decorated partitions in $\mathcal{DD}_r(n)$ with the same parts as $\mu$. \medskip
The discussion above proves the following interpretation for $\displaystyle\frac{1}{r-1}b_r(n)$.
\begin{proposition} Let $n,r$ be integers such that $n\geq 1$ and $r\geq 2$. Then, $$\displaystyle\frac{1}{r-1}b_r(n)=|\mathcal{DD}_r(n)|.$$
\end{proposition}
It is clear that $|\mathcal{MD}_{1,r}(n)|=(r-1)|\mathcal{DD}_r(n)|$. To see this combinatorially, consider the map $\psi_r:\mathcal{MD}_{1,r}(n) \to \mathcal{DD}_r(n)$ defined as follows. If $\lambda \in \mathcal{MD}_{1,r}(n)$, then $\psi_r(\lambda)$ is the partition in $\mathcal{DD}_r(n)$ in which the $r$-decorated part is the same as the marked part in $\lambda$. The index of the part of $\psi_r(\lambda)$ is obtained from the index of the part of $\lambda$ by writing it in base $r$ and removing the leading digit.
Clearly, this is an $(r-1)$-to-$1$ mapping.
\subsubsection{A combinatorial proof for $a_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$}\label{an=bn}
To prove combinatorially that $a_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$ we establish a one-to-one correspondence between $\mathcal O_{1,r}(n)$ and $\mathcal{DD}_r(n)$.\medskip
\noindent \textit{From $\mathcal{DD}_r(n)$ to $\mathcal O_{1,r}(n)$:}\label{part i}
Start with an $r$-decorated partition $\mu \in \mathcal{DD}_r(n)$. Suppose the non-primitive part $\mu_i=r^k \bf a$ is decorated with an $r$-word $w$ of length $\ell(w)$. Then, $0\leq\ell(w)\leq k-1$. Let $d_w$ be the value of $w$ read as an integer in base $r$. We set $d_\emptyset=0$. We transform $\mu$ into a partition $\lambda \in \mathcal O_{1,r}(n)$ as follows. \medskip
Define $\bar \mu$ to be the partition whose parts are all non-primitive parts of $\mu$ of the form $\mu_j=r^t\bf a$ with $j\leq i$, i.e., all parts $r^t\bf a$ with $t>k$ and, if $\mu_i$ is the $p$th part of size $r^k\bf a$ in $\mu$, then $\bar \mu$ also has $p$ parts equal to $r^k\bf a$.
Define $\tilde \mu$ to be the partition whose parts are all parts of $\mu$ that are not in $\bar \mu$.
\begin{enumerate}
\item In $\bar \mu$, split one part of size $r^k\bf a$ into $d_w+1$ parts of size $r^{k-\ell(w)}\bf a$ and $r^k-(d_w+1)r^{k-\ell(w)}$ primitive parts of size $\bf a$.
Every other part in $\bar \mu$ splits completely into parts of size $r^{k-\ell(w)}\bf a$. Denote the resulting partition by $\bar \lambda$.
\item Let $\tilde \lambda=\varphi_r^{-1}(\tilde \mu)$. Thus, $\tilde \lambda$ is obtained by splitting all parts in $\tilde \mu$ into primitive parts.
\end{enumerate}
Let $\lambda=\bar\lambda \cup \tilde \lambda$. Then $\lambda\in \mathcal O_{1,r}(n)$ and its set of non-primitive parts is $\{r^{k-\ell(w)}\bf a\}$.
\begin{remark} Since $d_w+1\leq r^{\ell(w)}$, in step 1, the resulting number of primitive parts equal to $\bf a$ is non-negative. Moreover, in $\bar \lambda$ there is at least one non-primitive part.
\end{remark}
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Consider the decorated partition
\begin{align*}
\mu & =(1215, 135_{02}, 135, 51, 35, 15, 15, 3)\\
& =(3^5 \cdot 5, (3^3 \cdot 5)_{02}, 3^3 \cdot 5, 3\cdot17, 35, 3\cdot 5, 3 \cdot 5, 3 \cdot 1)\in \mathcal{DD}_3(1604).
\end{align*}
We have $k=3, \ell(w)=2, d_w=2$, and \medskip
$\bar{\mu} = (3^5 \cdot 5, 3^3 \cdot 5);$
\medskip
$\tilde{\mu} = (3^3 \cdot 5, 3\cdot17, 35, 3\cdot 5, 3 \cdot 5, 3 \cdot 1).$\\
To create $\bar{\lambda}$ from $\bar{\mu}$:
\begin{enumerate}
\item Part $135=3^3\cdot 5$ splits into three parts of size $15$ and eighteen parts of size $5$. \medskip
\item Part $1215=3^5\cdot 5$ splits into eighty-one parts of size $15$. \medskip
\end{enumerate}
This results in $\bar{\lambda} = (15^{84}, 5^{18}).$\\
To create $\tilde{\lambda}$ from $\tilde{\mu}$:\\
All parts in $\tilde{\mu}$ are split into primitive parts. Thus, part $3^3 \cdot 5$ splits into twenty-seven parts of size 5, part $3\cdot 17$ splits into three parts of size $17$, both parts equal to $3\cdot5$ split into three parts of size $5$ each, and part $3\cdot 1$ splits into three parts of size $1$. Part $35$ is already primitive, so it remains unchanged.
This results in $\tilde{\lambda} = (35, 17^3, 5^{33}, 1^3)$. Then, setting $\lambda = \bar{\lambda} \cup \tilde{\lambda}$ results in $\lambda=(35, 17^3, 15^{84}, 5^{51}, 1^3)\in\mathcal{O}_{1,3}(1604)$. The non-primitive part is $15=3\cdot 5$. \medskip
\end{example}
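As a sanity check on step 1, the split of the decorated part preserves its weight. With the data of the example above ($r=3$, $k=3$, $\ell(w)=2$, $d_w=2$), a few lines of Python (variable names ours) confirm this:

```python
r, k, a = 3, 3, 5
ell_w, d_w = 2, 2    # the word w = "02" has length 2 and value 2
# d_w + 1 parts of size r^(k - ell_w) * a, the rest primitive parts a
pieces = [r**(k - ell_w) * a] * (d_w + 1) \
       + [a] * (r**k - (d_w + 1) * r**(k - ell_w))
assert sum(pieces) == r**k * a   # the weight 135 is preserved
```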
\noindent \textit{From $\mathcal O_{1,r}(n)$ to $\mathcal{DD}_r(n)$:}
Start with a partition $\lambda\in \mathcal O_{1,r}(n)$. In $\lambda$ there is one and only one non-primitive part $r^k \bf a$. Let $f$ be the multiplicity of the non-primitive part of $\lambda$. We transform $\lambda$ into an $r$-decorated partition in $\mathcal{DD}_r(n)$ as follows.
Apply Glaisher's bijection to $\lambda$ to obtain $\mu=\varphi_r(\lambda)\in \mathcal D_r(n)$. Since $\lambda$ has a non-primitive part, $\mu$ will have at least one non-primitive part.
Next, we determine the $r$-decoration of $\mu$. Consider the non-primitive parts $\mu_{j_i}$ of $\mu$ of the form $r^{t_i} \bf a$ (same $\bf a$ as in the non-primitive part of $\lambda$) with $t_i\geq k$. Assume $j_1<j_2<\cdots$. For notational convenience, set $\mu_{j_0}=0$. Let $h$ be the positive integer such that
\begin{equation}\label{ineq-h}\sum_{i=0}^{h-1}\mu_{j_i}<f\cdot r^k {\bf a} \leq \sum_{i=0}^{h}\mu_{j_i}.\end{equation} Then, we will decorate part $\mu_{j_h}=r^{t_h}\bf a$.
To determine the decoration, let
$$N=\frac{\displaystyle\sum_{i=0}^{h-1}\mu_{j_i}}{r^k\bf a}.$$ Then, \eqref{ineq-h} becomes $$r^k{\bf a}N<f\cdot r^k{\bf a}\leq r^k{\bf a}N+r^{t_h}{\bf a},$$ which implies $0<f-N\leq r^{t_h-k}$.
Let $d=f-N-1$ and $\ell=t_h-k$. We have $0\leq \ell \leq t_h-1$. Consider the representation of $d$ in base $r$ and insert leading zeros to form an $r$-word $w$ of length $\ell$. Decorate $\mu_{j_h}$ with $w$. The resulting decorated partition is in $\mathcal{DD}_r(n)$.
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Consider the partition $\lambda=(35, 17^3, 15^{84}, 5^{51}, 1^3)\in\mathcal{O}_{1,3}(1604)$. The non-primitive part is $15$. We have $k=1, f=84$.
Glaisher's bijection produces the partition $\mu=(1215, 135^2, 51, 35, 15^2, 3) = (3^5\cdot5, 3^3 \cdot 5, 3^3 \cdot 5, 3 \cdot 17, 35, 3\cdot 5, 3\cdot 5, 3\cdot 1)\in \mathcal D_{3}(1604)$. The parts of the form $3^{t_i}\cdot 5$ with $t_i\geq 1$ are $1215, 135, 135, 15,15$. Since $1215<84(3^1 \cdot 5) \leq 1215 + 135,$ the decorated part will be the first part equal to $135=3^3\cdot 5$. We have $N=1215/15=81$.
To determine the decoration, let $d=84-81-1=2$ and $\ell=3-1=2$. The base $3$ representation of $d$ is $2$. To form a $3$-word of length $2$, we introduce one leading $0$. Thus, the decoration is $w=02$ and the resulting decorated partition is $(1215, 135_{02}, 135, 51, 35, 15, 15, 3)=(3^5 \cdot 5, (3^3 \cdot 5)_{02}, 3^3 \cdot 5, 3\cdot17, 35, 3\cdot 5, 3 \cdot 5, 3 \cdot 1)\in \mathcal{DD}_{3}(1604)$.
\end{example}
\subsubsection{A combinatorial proof for $c_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$}
We note that one can compose the bijection of section \ref{part i} with the bijection of \cite{FT17} to obtain a combinatorial proof of part (ii) of Theorem \ref{T1}. However, we give an alternative proof
that $c_r(n)=\displaystyle\frac{1}{r-1}b_r(n)$ by establishing a one-to-one correspondence between $\mathcal D_{1,r}(n)$ and $\mathcal{DD}_r(n)$. This proof does not involve the bijection of \cite{FT17} and it mirrors the proof of section \ref{an=bn}.\medskip
\noindent \textit{From $\mathcal{DD}_r(n)$ to $\mathcal D_{1,r}(n)$:}
Start with an $r$-decorated partition $\mu \in \mathcal{DD}_r(n)$. Suppose the non-primitive part $\mu_i=r^k \bf a$ is decorated with an $r$-word $w$ of length $\ell(w)$ and value $d_w$ (read as an integer in base $r$). Then, $0\leq\ell(w)\leq k-1$.
We transform $\mu$ into a partition $\lambda\in \mathcal D_{1,r}(n)$ as follows. \medskip
Let $\bar{\bar \mu}$ be the partition whose parts are all non-primitive parts of $\mu$ of the form $\mu_j=r^t\bf a$ with $j\geq i$, and $k-\ell(w)-1< t\leq k$,
i.e., all parts $r^t\bf a$ with $k-\ell(w)-1< t<k$ and, if there are $p-1$ parts of size $r^k\bf a$ in $\mu$ after the decorated part, then $\bar{\bar \mu}$ has $p$ parts equal to $r^k\bf a$.
Let $\tilde{\tilde \mu}$ be the partition whose parts are all parts of $\mu$ that are not in $\bar{\bar \mu}$.
In $\bar{\bar \mu}$, perform the following steps.
\begin{enumerate}
\item Split one part equal to $r^k\bf a$ into $r(d_w+1)$ parts of size $r^{k-\ell(w)-1}\bf a$ and $m$ primitive parts of size $\bf a$, where $m=r^k-r(d_w+1)r^{k-\ell(w)-1}$.
Apply Glaisher's bijection $\varphi_r$ to the partition consisting of $m$ parts equal to $\bf a$.
\item Split all remaining parts of $\bar{\bar \mu}$ completely into parts of size $r^{k-\ell(w)-1}\bf a$.
\end{enumerate}
Denote by $\bar{\bar \lambda}$ the partition with parts resulting from steps 1 and 2 above.
Let $\lambda=\bar{\bar \lambda} \cup \tilde{\tilde \mu}$.
Since $r(d_w+1)\geq r$, it follows that $\lambda\in \mathcal D_{1,r}(n)$. The part repeated at least $r$ times is $r^{k-\ell(w)-1}\bf a$.
\medskip
\begin{remark} (i) Since $d_w+1\leq r^{\ell(w)}$, the splitting in step 1 can be performed.
(ii) Note that $r \mid m=r^{k-\ell(w)}(r^{\ell(w)}-(d_w+1))$. Thus, if $w\neq \emptyset$, after applying Glaisher's bijection $\varphi_r$ to the partition consisting of $m$ parts equal to $\bf a$, we obtain parts $r^j\bf a$ with $k-\ell(w)\leq j<k$. Since in $\tilde{\tilde \mu}$, all parts of the form $r^i\bf a$ have $i \geq k$ or $i\leq k-\ell(w)-1$, Glaisher's bijection in step (1) does not create parts equal to any parts in $\tilde{\tilde \mu}$.
(iii) If $w=\emptyset$, then, in step (1), we have $m=0$ and $r^k\bf a$ splits into $r$ parts equal to $r^{k-1}\bf a$.
\end{remark}
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Consider the partition $\mu = (32805, (10935)_{0120}, 10935, 1215, 45, 45, 25, 9, 3) = (3^8 \cdot 5, (3^7\cdot 5)_{0120}, 3^7 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \mathcal{DD}_3(56017)$. Then the decorated part is $\mu_2 = 3^7 \cdot 5$ and the decoration is $w = 0120$. We have $k=7$, $\ell(w) = 4$, $d_w = 15$. So\\
$\bar{\bar{\mu}} = (3^7 \cdot 5, 3^7 \cdot 5, 3^5 \cdot 5);$
\medskip
$\tilde{\tilde{\mu}} = (3^8 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1).$
\begin{enumerate}
\item $3^7 \cdot 5$ splits into
\begin{itemize}
\item $r(d_w + 1) = 48$ parts of $3^2 \cdot 5$ and
\item $m = r^k - r(d_w + 1)r^{k- \ell(w) - 1} = 3^7 - 48(3^2) = 1755$ parts of $5$.
\end{itemize}
The 1755 parts of 5 merge into two parts of 3645, one part of 1215, and two parts of 135.
\item $3^7 \cdot 5$ splits into two hundred forty-three parts of $3^2 \cdot 5$ and $3^5 \cdot 5$ splits into twenty-seven parts of $3^2 \cdot 5$.
\end{enumerate}
This results in\\
$\bar{\bar{\lambda}} = (3645^2, 1215, 135^2, 45^{318});$
\medskip
$\lambda = \bar{\bar{\lambda}} \cup \tilde{\tilde{\mu}} = (32805, 3645^2, 1215,135^2, 45^{320}, 25, 9, 3) \in \mathcal D_{1,3}(56017)$. The part repeated at least three times is $45=3^2\cdot 5$.
\end{example}\medskip
\noindent \textit{From $\mathcal D_{1,r}(n)$ to $\mathcal{DD}_r(n)$:}
Start with a partition $\lambda\in \mathcal D_{1,r}(n)$. Then, among the parts of $\lambda$, there is one and only one part that is repeated at least $r$ times. Suppose this part is $r^k \bf a$, $k\geq 0$, and denote by $f\geq r$ its multiplicity in $\lambda$. As in Glaisher's bijection, we repeatedly merge parts of $\lambda$ that are repeated at least $r$ times to obtain $\mu\in \mathcal D_r(n)$. Since $\lambda$ has a part repeated at least $r$ times, $\mu$ will have at least one non-primitive part.
Next, we determine the decoration of $\mu$. In this case, we want to work with the parts of $\mu$ from the right to the left (i.e., from smallest to largest part). Let $\tilde{\mu}_q=\mu_{\ell(\mu)-q+1}$.
Consider the parts $\tilde\mu_{j_i}$ of the form $r^{t_i} \bf a$ with $t_i\geq k$, indexed so that $j_1<j_2<\cdots$; then $t_1\leq t_2\leq\cdots$.
As before, we set $\tilde{\mu}_{j_0}=0$. Let $h$ be the positive integer such that
\begin{equation}\label{ineq1-h}\sum_{i=0}^{h-1}\tilde\mu_{j_i}<f\cdot r^k {\bf a} \leq \sum_{i=0}^{h}\tilde\mu_{j_i}.\end{equation} Then, we will decorate part $\tilde\mu_{j_h}=r^{t_h}\bf a$.
To determine the decoration, let \begin{equation} \label{Nh} N=\frac{\displaystyle\sum_{i=0}^{h-1}\tilde\mu_{j_i}}{r^k\bf a}.\end{equation} Then, \eqref{ineq1-h} becomes $$r^k{\bf a}N<f\cdot r^k{\bf a}\leq r^k{\bf a}N+r^{t_h}{\bf a},$$ which implies $0<f-N\leq r^{t_h-k}$.
Let $\displaystyle d=\frac{f-N}{r}-1$ and $\ell=t_h-k-1$. Note that, by construction, $t_h>k$, and therefore $0\leq \ell \leq t_h-1$. Consider the representation of $d$ in base $r$ and insert leading zeros to form an $r$-word $w$ of length $\ell$. Decorate $\tilde\mu_{j_h}$ with $w$. The resulting decorated partition (with parts written in non-increasing order) is in $\mathcal{DD}_r(n)$. \medskip
\begin{remark} To see that $f-N$ above is always divisible by $r$, note that if $f=qr+t$ with $q,t \in \mathbb Z$ and $0\leq t<r$, then there are $t$ terms equal to $r^k\bf a$ in the numerator of $N$. All other terms, if any, are divisible by $r^{k+1}\bf a$. Therefore, the remainder of $N$ upon division by $r$ is $t$.
\end{remark}
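The analogous decoration step for this map differs only in the ordering of the parts (smallest first) and in the formulas $d=(f-N)/r-1$ and $\ell=t_h-k-1$. A Python sketch (names ours; it assumes $r\leq 10$):

```python
def decoration_D(ts, a, k, f, r):
    """Decoration step of the map from D_{1,r}(n) to DD_r(n).
    ts: the exponents t_i (all >= k) of the parts r**t_i * a of mu,
    listed from the SMALLEST part to the largest (right to left in mu);
    f >= r: multiplicity of the repeated part r**k * a of lambda.
    Returns (h, w), with h counted from the right."""
    target = f * r**k * a
    total = 0
    for h, t in enumerate(ts, start=1):
        if total < target <= total + r**t * a:
            N = total // (r**k * a)
            d, ell = (f - N) // r - 1, t - k - 1
            w = ''
            for _ in range(ell):   # base-r digits, leading zeros kept
                w = str(d % r) + w
                d //= r
            return h, w
        total += r**t * a
```

With the data of the example below, `decoration_D([2, 2, 5, 7, 7, 8], 5, 2, 320, 3)` returns `(5, '0120')`.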
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Consider the partition $\lambda = (32805, 3645^2, 1215, 135^2, 45^{320}, 25, 9, 3) \in \mathcal{D}_{1,3}(56017)$. The part repeated at least three times is $45=3^2\cdot 5$. We have $k=2$ and $f = 320$.
Applying Glaisher's bijection to $\lambda$ results in\\
$\mu = \varphi_{3}(\lambda) = (3^8 \cdot 5, 3^7\cdot 5, 3^7 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \mathcal{D}_{3}(56017)$.\\
The parts of the form $3^{t_i}\cdot 5$ with $t_i \geq 2$ are $3^2\cdot 5, 3^2\cdot 5, 3^5\cdot 5, 3^7\cdot 5, 3^7\cdot 5, 3^8\cdot 5$. Since $3^2\cdot 5 + 3^2\cdot 5 + 3^5\cdot 5 + 3^7\cdot 5 < 320 \cdot 3^2 \cdot 5 \leq 3^2\cdot 5 + 3^2\cdot 5 + 3^5\cdot 5 + 3^7\cdot 5 + 3^7\cdot 5 $, the decorated part will be the second part (counting from the right) equal to $3^7 \cdot 5 = 10935$. We have $N = \displaystyle\frac{3^2\cdot 5 + 3^2\cdot 5 + 3^5\cdot 5 + 3^7\cdot 5}{3^2 \cdot 5} = 272$. Thus $d = \displaystyle\frac{320-272}{3} - 1 = 15$ and $\ell = 7 - 2 - 1 = 4$. The base $3$ representation of $d$ is $120$. To form a $3$-word of length $4,$ we introduce one leading $0$. Thus, the decoration is $w=0120$ and the resulting decorated partition is
\begin{align*}
\mu = & (32805, (10935)_{0120}, 10935, 1215, 45, 45, 25, 9, 3)\\
= & (3^8 \cdot 5, (3^7\cdot 5)_{0120}, 3^7 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \mathcal{DD}_3(56017).
\end{align*}
\end{example}
\section{Proofs of Theorem \ref{T2} }
\subsection{Analytic Proof}
We create a bivariate generating function to keep track of the number of different parts in partitions in $\OO(n)$, respectively $\mathcal D_r(n)$.
We denote by $\mathcal O'_r(n;m)$ the set of partitions of $n$ with parts from $S_2$ using $m$ different parts. We denote by $\mathcal D'_r(n;m)$ the set of partitions of $n$ with parts from $S_1$ using $m$ different parts and allowing parts to repeat no more than $r-1$ times. Then,
\begin{align*}
f_{\mathcal O'_r}(z,q):=\displaystyle \sum_{n = 0}^{\infty} \sum_{m=0}^\infty| \mathcal O'_r(n;m)|z^mq^n & = \prod_{{\bf b} \in S_2} (1 + zq^{\bf b} + zq^{2\bf b} + \dotsm)\\
& = \prod_{{\bf b} \in S_2} \bigg( 1 + \frac{zq^{\bf b} }{1- q^{\bf b} } \bigg),
\end{align*}
and \begin{align*}
f_{\mathcal D'_r}(z,q):=\displaystyle \sum_{n = 0}^{\infty}\sum_{m=0}^\infty |\mathcal D'_r(n;m)|z^mq^n & = \prod_{a \in S_1} (1 + zq^a + \dotsm +zq^{(r-1)a})\\ & = \prod_{a \in S_1} \bigg( 1 + \frac{zq^a-zq^{ra}}{1- q^a } \bigg).
\end{align*}
To obtain the generating function for the total number of different parts in all partitions in $\OO(n)$ (respectively $\mathcal D_r(n)$),
we take the derivative with respect to $z$ of $f_{\mathcal O'_r}(z,q)$ (respectively $f_{\mathcal D'_r}(z,q)$), and set $z = 1$. We obtain
\begin{align*}
\left.\frac{\partial}{\partial z}\right|_{z=1}f_{\mathcal O'_r}(z,q) &=\sum_{{\bf b} \in S_2} \frac{q^{\bf b} }{1 - q^{\bf b} } \prod_{{\bf c} \in S_2, {\bf c} \neq {\bf b} } \bigg(1 + \frac{q^{\bf c}}{1 - q^{\bf c}} \bigg)\\ &=\sum_{{\bf b} \in S_2} \frac{q^{\bf b} }{1 - q^{\bf b} } \prod_{{\bf c} \in S_2, {\bf c} \neq {\bf b} } \bigg( \frac{1}{1 - q^{\bf c}} \bigg)\\
& = \prod_{{\bf b} \in S_2} \frac{1}{1 - q^{\bf b} } \sum_{{\bf b} \in S_2} q^{\bf b},
\end{align*}
and
\begin{align*}
\left.\frac{\partial}{\partial z}\right|_{z=1}f_{\mathcal D'_r}(z,q) &= \sum_{a \in S_1} \frac{q^a-q^{ra}}{1-q^a} \prod_{d \in S_1, d\neq a} \bigg(1 + \frac{q^d-q^{rd}}{1-q^d}\bigg)\\ & = \sum_{a \in S_1} \frac{q^a-q^{ra}}{1-q^a} \prod_{d \in S_1, d\neq a}\frac{1-q^{rd}}{1-q^d}\\
& =\prod_{a \in S_1}\frac{1-q^{ra}}{1-q^a} \sum_{a \in S_1} \frac{q^a - q^{ra}}{1 - q^{ra}}.
\end{align*}
Since $|\mathcal D_r(n)| = |\OO(n)|$, we have
\begin{align*}
\sum_{n = 0}^{\infty} b'_r(n) q^n
= \prod_{a \in S_1}\frac{1-q^{ra}}{1-q^a}\bigg( \sum_{a \in S_1} \frac{q^a}{1 - q^{ra}} -\sum_{a \in S_1} \frac{q^{ra}}{1 - q^{ra}} - \sum_{{\bf b} \in S_2} q^{\bf b} \bigg).
\end{align*}
Moreover,
\begin{align*}
\sum_{a \in S_1} \frac{q^a}{1 - q^{ra}} -\sum_{a \in S_1} \frac{q^{ra}}{1 - q^{ra}} & = \bigg( \sum_{a \in S_1} q^a + \sum_{a \in S_1} \frac{q^{(r+1)a}}{1 - q^{ra}} \bigg) - \bigg( \sum_{a \in rS_1} q^a + \sum_{a \in S_1} \frac{q^{2ra}}{1 - q^{ra}} \bigg) \\
& = \bigg( \sum_{a \in S_1} q^a - \sum_{a \in rS_1} q^a \bigg) + \bigg(\sum_{a \in S_1} \frac{q^{(r+1)a}}{1 - q^{ra}} - \sum_{a \in S_1} \frac{q^{2ra}}{1 - q^{ra}}\bigg)\\
& = \sum_{{\bf b} \in S_2} q^{\bf b} + \sum_{a \in S_1} \frac{q^{(r + 1)a} - q^{2ra}}{1 - q^{ra}},
\end{align*}
the last equality occurring because $S_1 = S_2 \sqcup rS_1$.
Therefore,
\begin{align*}
\sum_{n = 0}^{\infty} b'_r(n) q^n & = \prod_{a \in S_1}\frac{1-q^{ra}}{1-q^a}\sum_{a \in S_1} \frac{q^{(r + 1)a} - q^{2ra}}{1 - q^{ra}}\\ & = \sum_{a \in S_1} \frac{q^{(r + 1)a} + q^{(r + 2)a} + \dotsm +q^{(2r-1)a}}{1 + q^a + \dotsm + q^{(r-1)a}} \prod_{a \in S_1} (1 + q^a + \dotsm + q^{(r-1)a})\\ & = \sum_{a \in S_1} (q^{(r + 1)a} + q^{(r + 2)a} + \dotsm +q^{(2r-1)a}) \prod_{d \in S_1, d\neq a} (1 + q^d + \dotsm + q^{(r-1)d})\\
& = \sum_{n = 0}^{\infty} c'_r(n) q^n.
\end{align*}
\medskip
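The identity $b'_r(n)=c'_r(n)$ can also be checked numerically for small $n$. The brute-force Python sketch below (our code, not part of the proof) uses the classical Euler pair with $S_1=\mathbb{Z}_{>0}$ and $S_2$ the set of positive integers not divisible by $r$:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def b_prime(n, r):
    """Distinct-part total over D_r(n) minus over O_r(n), for
    S_1 = Z_{>0}, S_2 = integers not divisible by r."""
    in_D = sum(len(set(p)) for p in partitions(n)
               if all(p.count(x) < r for x in set(p)))
    in_O = sum(len(set(p)) for p in partitions(n)
               if all(x % r for x in p))
    return in_D - in_O

def c_prime(n, r):
    """Partitions with exactly one part repeated more than r but
    fewer than 2r times, every other part at most r - 1 times."""
    total = 0
    for p in partitions(n):
        mults = [p.count(x) for x in set(p)]
        if sum(r < m < 2 * r for m in mults) == 1 \
           and all(m < r or r < m < 2 * r for m in mults):
            total += 1
    return total
```

For instance, `b_prime(5, 2)` and `c_prime(5, 2)` both return `1`, coming from the partition $(2,1,1,1)$ on the $c'$ side.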
\subsection{Combinatorial Proof}
\subsubsection{$b'_r(n)$ as the cardinality of a set of overpartitions} \label{b1}
As in section \ref{b}, we use Glaisher's bijection and calculate $b'_r(n)$ by summing up the difference between the number of different parts of $\varphi_r(\lambda)$ and the number of different parts of $\lambda$ for each partition $\lambda\in \OO(n)$. For a given ${\bf a} \in S_2$, each part in $\varphi_r(\lambda)$ of the form $r^k\bf a$, $k\geq 0$, is obtained from $\lambda$ by merging $r^k$ parts equal to $\bf a$. Therefore, the contribution to $b'_r(n)$ of each $\mu\in \mathcal D_r(n)$ equals $$\sum_{\substack{{\bf a}\in S_2 \\ {\bf a} \mbox{ \small{part of} } \varphi_r^{-1}(\mu)}}(m_{\mu}({\bf a})-1),$$ where
$$m_{\mu}({\bf a})=|\{t\geq 0 \mid r^t{\bf a} \mbox{ is a part of } \mu \}|. $$
Next, we define a set of overpartitions. An overpartition is a partition in which the last appearance of a part may be overlined. For example, $(5, \bar 5, 3,3, \bar 2, 1, 1, \bar 1)$ is an overpartition of $21$. We denote by $\overline{\mathcal{D}}_r(n)$ the set of overpartitions of $n$ with parts in $S_1$ repeated at most $r-1$ times in which \textit{exactly one} part is overlined and such that part $r^s\bf a$ with $s \geq 0$ may be overlined only if there is a part $r^t\bf a$ with $t<s$. In particular, no primitive part can be overlined. Note that when we count parts in an overpartition, the overlined part contributes to the multiplicity. The discussion above proves the following interpretation of $b'_r(n)$.
\begin{proposition} Let $n\geq 1$. Then, $b'_r(n)=|\overline{\mathcal{D}}_r(n)|.$ \end{proposition}
\subsubsection{A combinatorial proof for $c'_r(n)=b'_r(n)$} We establish a one-to-one correspondence between $\overline{\mathcal{D}}_r(n)$ and $\mathcal T_r(n)$.\medskip
\noindent \textit{From $\overline{\mathcal{D}}_r(n)$ to $\mathcal T_r(n)$:}
Start with an overpartition $\mu \in \overline{\mathcal{D}}_r(n)$. Suppose the overlined part is $\mu_i=r^s\bf a$. Then there is a part $\mu_p=r^t\bf a$ of $\mu$ with $t<s$. Let $k$ be the largest non-negative integer such that $r^k\bf a$ is a part of $\mu$ and $k<s$. To obtain $\lambda \in \mathcal T_r(n)$ from $\mu$, split $\mu_i$ into $r$ parts equal to $r^k\bf a$ and $r-1$ parts equal to $r^j\bf a$ for each $j=k+1, k+2, \ldots, s-1$.
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Let\\
$\mu = (3^8 \cdot 5, 3^7 \cdot 5, \overline{3^7 \cdot 5}, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \overline{\mathcal{D}}_3(56017)$.\\
Then $k=5$ and $3^7 \cdot 5$ splits into three parts equal to $3^5\cdot 5$ and two parts equal to $3^6 \cdot 5$. Thus, we obtain the partition
\begin{align*}
\lambda & = (3^8 \cdot 5, 3^7 \cdot 5, 3^6 \cdot 5, 3^6 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1)\\
& \in \mathcal{T}_3(56017).
\end{align*}
The part repeated more than three times but less than six times is $3^5\cdot 5$.
\end{example}
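The splitting used here preserves weight because of the telescoping identity $r\cdot r^k+(r-1)\sum_{j=k+1}^{s-1}r^j=r^s$. A short check in Python (variable names ours) with the data of the example above ($r=3$, $s=7$, $k=5$, ${\bf a}=5$):

```python
r, s, k, a = 3, 7, 5, 5   # split the overlined part 3**7 * 5
pieces = [r**k * a] * r + [r**j * a for j in range(k + 1, s)
                           for _ in range(r - 1)]
assert sum(pieces) == r**s * a   # the weight is preserved
```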
\noindent \textit{From $\mathcal T_r(n)$ to $\overline{\mathcal{D}}_r(n)$:}
Start with a partition $\lambda \in \mathcal T_r(n)$. Suppose $r^k\bf a$ is the part repeated more than $r$ times but less than $2r$ times. Let $\mu=\varphi_r(\lambda)\in \mathcal D_r(n)$. Overline the smallest part of $\mu$ of the form $r^t \bf a$ with $t>k$. The resulting overpartition is in $\overline{\mathcal{D}}_r(n)$.
\begin{example} We continue with the Euler pair of order $3$ from Example \ref{eg:epair}. Let
\begin{align*}
\lambda & = (3^8 \cdot 5, 3^7 \cdot 5, 3^6 \cdot 5, 3^6 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1)\\
& \in \mathcal{T}_3(56017).
\end{align*}The part repeated more than three times but less than six times is $3^5\cdot 5$.
We have $k = 5$. Merging by Glaisher's bijection, we obtain\\
$\mu = (3^8 \cdot 5, 3^7 \cdot 5, 3^7 \cdot 5, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \mathcal{D}_3(56017).$\\
The smallest part of $\mu$ of the form $r^t\bf{a}$ with $t > k = 5$ is $3^7\cdot 5$. Thus we obtain the overpartition\\
$\mu = (3^8 \cdot 5, 3^7 \cdot 5, \overline{3^7 \cdot 5}, 3^5 \cdot 5, 3^2 \cdot 5, 3^2 \cdot 5, 25, 3^2 \cdot 1, 3 \cdot 1) \in \overline{\mathcal{D}}_3(56017).$
\end{example}
\begin{remark} \label{rem} We could have obtained the transformation above from the combinatorial proof of part (ii) of Theorem \ref{T1}. In the transformation from $\mathcal D_{1,r}(n)$ to $\mathcal{DD}_r(n)$, if part $r^k\bf a$ is the part repeated more than $r$ times but less than $2r$ times, we have $f=r+s$ for some $1\leq s\leq r-1$, $h=s+1$, and $N=s$. Thus $d=0$ and the decorated part is the last occurrence of the smallest part in the transformed partition $\mu$ that is of the form $r^t\bf a$ with $t>k$. Thus, in $\mu$, the decorated part $r^{t} \bf a$ is decorated with an $r$-word consisting of all zeros and of length $t-k-1$, one less than the difference in exponents of $r$ of the decorated part and the next smallest part with the same $\bf a$ factor. Since in this case the decoration of a partition in $\mathcal{DD}_r(n)$ is completely determined by the part being decorated, we can simply overline the part. \end{remark}
\section{Concluding remarks}
In this article we proved first and second Beck-type identities for all Euler pairs $(S_1,S_2)$ of order $r\geq 2$. Euler pairs of order $r$ satisfy $rS_1\subseteq S_1$ and $S_2=S_1\setminus rS_1$.
Thus, we established Beck-type identities accompanying all partition identities of the type given in Theorem \ref{sub-thm}.
At the end of \cite{S71}, Subbarao mentions that the characterization of Euler pairs of order $r$ given by Theorem \ref{sub-thm} can be extended to vector partitions. The corrected statement for partitions of multipartite numbers is given in \cite[Theorem 12.2]{A98} and indeed it has Beck-type companion identities as we explain below.
A multipartite (or $s$-partite) number $\frak n=(n_1, n_2, \ldots, n_s)$ is an $s$-tuple of non-negative integers, not all $0$. We view multipartite numbers as vectors and refer to $n_1, n_2, \ldots, n_s$ as the entries of $\frak n$.
A multipartition (or vector partition) $\xi=(\xi^{(1)}, \xi^{(2)},\ldots, \xi^{(t)})$ of $\frak n$ is a sequence of multipartite numbers in non-increasing lexicographic order satisfying $$\frak n=\xi^{(1)}+ \xi^{(2)}+\cdots +\xi^{(t)}.$$ We refer to $\xi^{(i)}$, $1\leq i\leq t$ as the multiparts (or vector parts) of the multipartition $\xi$ and to the number of multiparts $t$ of $\xi$ as the length of $\xi$ which we denote by $\ell(\xi)$.
Let $S_1$ and $S_2$ be sets of positive integers. Given a multipartition $\xi$ of $\frak n$ with all entries of all multiparts in $S_1$, we say that a multipart $\xi^{(i)}$ of $\xi$ is \textit{primitive} if at least one entry of $\xi^{(i)}$ is in $S_2$. Otherwise, the multipart is called \textit{non-primitive}. We denote by $\mathcal{VD}_r(\frak n)$ the set of multipartitions $\xi=(\xi^{(1)}, \xi^{(2)},\ldots, \xi^{(t)})$ of $\frak n=(n_1, n_2, \ldots, n_s)$ with all entries of all multiparts in $S_1$ and such that all multiparts are repeated at most $r-1$ times. We denote by $\mathcal{VO}_r(\frak n)$ the set of multipartitions $\eta=(\eta^{(1)}, \eta^{(2)}, \ldots, \eta^{(u)})$ of $\frak n=(n_1, n_2, \ldots, n_s)$ with all entries of all multiparts in $S_1$ and such that each multipart $\eta^{(i)}$ of $\eta$ is primitive.
Then, Andrews \cite{A98} gives the following theorem mentioning that its proof can be constructed similarly to the proof using ideals of order $1$ for the analogous result for regular partitions.
\begin{theorem} \label{SA} Let $S_1$ and $S_2$ be sets of positive integers. Then $$|\mathcal{VD}_r(\frak n)|=|\mathcal{VO}_r(\frak n)|$$ if and only if $(S_1,S_2)$ is an Euler pair of order $r$, i.e., $rS_1\subseteq S_1$ and $S_2=S_1\setminus rS_1$.
\end{theorem}
We note that Glaisher's bijection can be extended to prove Theorem \ref{SA} combinatorially. The Glaisher type transformation, $v\varphi_r$, from $\mathcal{VO}_r(\frak n)$ to $\mathcal{VD}_r(\frak n)$ repeatedly merges $r$ equal multiparts (as addition of vectors) until there are no multiparts repeated more than $r-1$ times. The transformation from $\mathcal{VD}_r(\frak n)$ to $\mathcal{VO}_r(\frak n)$ takes each non-primitive multipart (all its entries are from $rS_1$) and splits it into $r$ equal multiparts, repeating the process until the obtained multiparts are primitive. The remark below is the key to adapting the combinatorial proofs of Theorems \ref{T1} and \ref{T2} to proofs of Beck-type identities for multipartitions.
\begin{remark} Let $\eta \in \mathcal{VO}_r(\frak n)$ and $\xi=v\varphi_r(\eta)\in \mathcal{VD}_r(\frak n)$. Then multipart $\xi^{(i)}$ of $\xi$ was obtained by merging $r^k$ multiparts of $\eta$ if and only if, when writing all entries of $\xi^{(i)}$ in the form $r^j{\bf a}$, the smallest exponent of $r$ in all entries of $\xi^{(i)}$ is $k$.
\end{remark}
To formulate Beck-type identities for multipartitions, let $vb_r(\frak n)$ be the difference between the number of multiparts in all multipartitions in $\mathcal{VO}_r(\frak n)$ and the number of multiparts in all multipartitions in $\mathcal{VD}_r(\frak n)$. Similarly, let $vb'_r(\frak n)$ be the difference in the total number of different multiparts in all multipartitions in $\mathcal{VD}_r(\frak n)$ and the total number of different multiparts in all multipartitions in $\mathcal{VO}_r(\frak n)$. Then, we have the following Beck-type identities for multipartitions.
\begin{theorem} \label{last} Suppose $(S_1,S_2)$ is an Euler pair of order $r\geq 2$ and let $\frak n$ be a multipartite number. Then
\begin{enumerate}
\item[(i)] $\displaystyle \frac{1}{r-1}vb_r(\frak n)$ equals the number of multipartitions $\xi=(\xi^{(1)}, \xi^{(2)},\ldots, \xi^{(t)})$ of $\frak n$ with all entries of all multiparts in $S_1$ and such that exactly one multipart is repeated at least $r$ times. Moreover, $\displaystyle \frac{1}{r-1}vb_r(\frak n)$ equals the number of multipartitions $\eta=(\eta^{(1)}, \eta^{(2)}, \ldots, \eta^{(u)})$ of $\frak n$ such that each multipart $\eta^{(i)}$ of $\eta$ has all entries in $S_1$ and only one multipart, possibly repeated, is non-primitive. \medskip
\item[(ii)] $vb'_r(\frak n)$ equals the number of multipartitions $\xi=(\xi^{(1)}, \xi^{(2)},\ldots, \xi^{(t)})$ of $\frak n$ with all entries of all multiparts in $S_1$ and such that exactly one multipart is repeated more than $r$ times but less than $2r$ times. \end{enumerate}\end{theorem}
The combinatorial proofs of these statements follow the combinatorial proofs of Theorems \ref{T1} and \ref{T2} with all references to expressions of the form ``part $r^k{\bf a}$ in the partition $\mu$'' changed to ``multipart of multipartition $\mu$ in which $r^k{\bf a}$ is the entry with the smallest exponent of $r$ (among the entries of the multipart).''
To our knowledge, there is no analytic proof of Theorem \ref{last}.
\section*{Acknowledgements}
We are grateful to the anonymous referees for suggestions that improved the exposition of the article. In particular, one referee suggested the short proof of \eqref{sets}, and another referee alerted us to the correct statement of Subbarao's theorem for vector partitions.
Spacetimes which are obtained by dimensional reduction along lightlike
directions can exhibit peculiarities in their geometrical properties
which are not found in other compactifications. The original
motivation for the work presented in this paper has been to examine
the structure of the isometry group of flat compactified spacetimes
when dimensional reduction is performed over a lattice containing a
lightlike direction. In the course of our investigation we have
uncovered some interesting results about the geometry of
higher-dimensional spacetimes $M$ which are product manifolds with an
"internal factor" which contains a lightlike circle. It turns out that
such spaces can admit semigroup extensions $e I(M)$ of their isometry
group $I(M)$; that is to say, they admit smooth global maps $\Lambda:
M \rightarrow M$ which are surjective, but no longer injective on $M$,
{\it and which locally preserve the metric}, so that they still
qualify as "isometries", albeit in a wider sense. Any two such
transformations $g$ and $g'$ again combine to a, generally
non-invertible, metric-preserving transformation $g g'$; but the
non-injectiveness implies that global inverses of such maps do not
generally exist. This fact accounts for the semigroup structure of
these maps.
Such a semigroup extension suggests interesting consequences for the
existence of preferred coordinate charts on the underlying space; for,
a semigroup of isometries can be given a natural ordering on account
of the fact that the product $g g' = h$ of two semigroup elements $g$
and $g'$ still belongs to $eI(M)$, but there may be no way to resolve
this product for either $g$ or $g'$. Below we shall walk through the
simplest example possible, namely a cylindrical internal spacetime
where the circle of the cylinder is lightlike; we shall see that in
this case the preferred coordinate charts consist of
"infinite-momentum" frames, i.e., coordinate frames which are related
to any other frame by the limit of a series of internal Lorentz
transformations, that is, an infinite boost. Viewed from such a frame,
fields defined on the spacetime acquire extreme values; in particular,
some of the off-diagonal components of the higher-dimensional metric,
which may be regarded as gauge potentials for a field theory on the
$(3+1)$-dimensional external spacetime factor, vanish. This raises the
possibility of regarding known gauge theories, with a given number of
gauge potentials, as part of a more extended field multiplet, which
has been "reduced" in size since it is perceived from within an
"extreme" frame. What is more, the semigroup structure implies that
all fields defined on the internal spacetime factor must be
independent of one of the lightlike coordinates on this factor, namely
the one which has acquired a compact size in the process of
dimensional reduction. We then end up being able to explain quite
naturally how one dimension on the internal manifold must cease to
reveal its presence, since all fields must be independent of this
dimension. Furthermore, the geometry of the lightlike cylinders
studied below has another interesting property: all lightlike
cylinders, independent of the size of the "compactification radius"
along the lightlike direction, are {\it
isometric}. This implies that any physical theory which is formulated
covariantly in terms of geometric objects on the higher-dimensional
spacetime is automatically independent of the compactification radius
along one of the lightlike directions. This then means that the
question whether this radius is microscopic or large becomes
irrelevant, since no covariant theory can distinguish between two
different radii!
Below we shall work out explicitly the example of a $6$-dimensional
spacetime which is a product of a $4$-dimensional Minkowski spacetime
and an internal lightlike cylinder. In this case the field content as
seen in the "infinite-momentum frame" on the internal space is that of
a $5$-dimensional Kaluza-Klein theory, where the electrodynamic
potentials $A_{\mu}$ may depend, in addition to the four external
spacetime coordinates, on a fifth coordinate along a lightlike
direction. We discuss some of the problems associated with obtaining
meaningful equations of motion for these potentials, which formally
are equations on a manifold with {\it degenerate metric}. This leads
outside the established framework of semi-Riemannian geometry, and
hence it is not clear how to produce dynamical equations for the
potentials $A_{\mu}$.
Lightlike compactification, or "discretization", has been utilized
within the framework of Discrete Light-Cone Quantization, a scheme
which has been proposed for gauge field quantization \cite{Thorn1978}
and for a lattice approach to string theory \cite{GilesThorn1977}; it goes
back to Dirac's original idea of light-cone quantization
\cite{DiracLightCone} which was later rediscovered by Weinberg
\cite{Weinberg1966a}. While in Discrete Light-Cone Quantization
lightlike directions are compactified (usually one or two), it is
assumed that these directions are obtained from a coordinate
transformation onto an infinite-momentum reference frame living in
ordinary $(3+1)$-dimensional Minkowski spacetime. In contrast, our work
focuses on the compactification of lightlike directions in a
higher-dimensional covering spacetime. Geometrical aspects of
dimensional reduction along lightlike Killing vectors, and action
principles in such spaces, were investigated in
\cite{JuliaNicolai1995a}. These authors studied compactifications
along continuous orbits; in contrast, in this work we are dealing with
orbits which result from the action of discrete groups.
The plan of the paper is as follows: In section \ref{OrbitSpaces} we
motivate the emergence of semigroup extensions from a more abstract
point of view by considering how isometry groups of orbit spaces are
obtained from isometry groups of associated covering spaces. In
section \ref{IdOverLattice} we discuss a condition that obstructs the
existence of semigroup extensions for orbit spaces which are obtained
from the action of primitive lattice translations of a suitable
lattice on a flat pseudo-Euclidean covering space. In section
\ref{2DGeometry} we study in detail the example of two-dimensional
Lorentzian cylindrical spaces which are obtained by compactifying a
lightlike direction in a Lorentzian covering space. We show that the
associated semigroup extension of the isometry group of the cylinder
exhibits a natural ordering, which may be interpreted as pointing out
preferred, "infinite-momentum", frames on this space. The consequences
of these semigroup transformations for scalars, vectors and covectors
on the cylinder are examined in sections \ref{Scalar} and
\ref{Vectors}. In section \ref{KaluzaKlein} we regard the cylindrical
space as the "internal" factor in a product spacetime whose external
factor is Minkowski. We show that the preferred charts on the total
space reduce the associated, originally $(4+2)$-dimensional,
Kaluza-Klein theory to an effective $(3+1+0)$-dimensional Kaluza-Klein
scenario, where in the latter case the internal space appears
one-dimensional and carries a zero metric. In section \ref{Summary} we
summarize our results.
\section{Orbit spaces and normalizing sets \label{OrbitSpaces}}
We have mentioned in the introduction that semigroup extensions of
isometry groups emerge naturally when dimensional reduction along
lightlike directions is performed. In order to see the key point at
which such extensions present themselves it is best to approach the
subject in a more general way, by motivating the idea of the {\it
extended normalizer} of a set of symmetry transformations. To this end
we start by recalling how symmetry groups, and possible extensions
thereof, of identification spaces are obtained from the symmetry
groups of associated covering spaces. The relevant idea here is that
of diffeomorphisms on a covering space which {\it descend to a
quotient space}:
Let $M$ be a connected pseudo-Riemannian manifold with a metric $\eta
$, and let $I(M)$ be the group of isometries of $M$. Assume that a
discrete subgroup $\Gamma \subset I(M)$ acts properly discontinuously
and freely on $M$; in this case the natural projection $p : M
\rightarrow M / \Gamma$ from $M$ onto the space of orbits, $M / \Gamma
$, can be made into a covering map, and $M$ becomes a covering space
of $M / \Gamma$ (e.g. \cite{Brown,Fulton,Jaehnich,Massey}). In fact
there is a unique way to make the quotient $M / \Gamma$ a
pseudo-Riemannian manifold (e.g. \cite
{Wolf,Poor,SagleWalde,Warner,ONeill}): In this construction one
stipulates that the projection $p$ be a local isometry, which
determines the metric on $M / \Gamma$. In such a case, we call $p : M
\rightarrow M / \Gamma $ a pseudo-Riemannian covering.
In any case the quotient $p: M \rightarrow M / \Gamma$ can be regarded
as a fibre bundle with bundle space $M$, base $M/ \Gamma$, and
$\Gamma$ as structure group, the fibre over $m \in M/\Gamma$ being the
orbit of any element $x \in p^{-1}(m)$ under $\Gamma $,
i.e. $p^{-1}(m) = \Gamma x = \left\{ \gamma x \mid \gamma \in \Gamma
\right\}$. If $g \in I(M)$ is an isometry of $M$, then $g$ gives rise
to a well-defined map $g_{\#}: M / \Gamma \rightarrow M / \Gamma $,
defined by
\begin{equation}
\label{map1}
g_{\#}(\Gamma x) \equiv \Gamma(gx) \quad,
\end{equation}
on the quotient space {\bf only} when $g$ preserves all fibres, i.e.
when $g \left(\Gamma x \right) \subset \Gamma(gx)$ for all $x\in M$.
This is equivalent to saying that $g \Gamma g^{-1} \subset \Gamma$. If
this relation is replaced by the stronger condition $g \Gamma g^{-1} =
\Gamma$, then $g$ is an element of the {\it normalizer}
$N\left(\Gamma\right)$ of $\Gamma$ in $I(M)$, where
\begin{equation}
\label{normalizer1}
N \left( \Gamma \right) = \left\{ g \in I(M) \mid g \Gamma g^{-1} =
\Gamma \right\} \quad .
\end{equation}
The normalizer is a group by construction. It contains all
fibre-preserving elements $g$ of $I(M)$ such that $g^{-1}$ is
fibre-preserving as well. In particular, it contains the group
$\Gamma$, which acts trivially on the quotient space; this means that
for any $\gamma \in \Gamma$, the induced map $\gamma_{\#} : M / \Gamma
\rightarrow M / \Gamma$ is the identity on $M / \Gamma $. This
follows, since the action of $\gamma_{\#}$ on the orbit $\Gamma x$,
say, is defined to be $\gamma_{\#} \left( \Gamma x \right) \equiv
\Gamma \left( \gamma x \right) = \Gamma x$, where the last equality
holds, since $\Gamma$ is a group.
In this work we are interested in relaxing the equality in the
condition defining $N\left(\Gamma\right)$; to this end we introduce
what we wish to call the {\it extended normalizer}, denoted by
$eN\left(\Gamma\right)$, through
\begin{equation}
\label{exNorm}
eN(\Gamma) \equiv \left\{ g \in I(M) \mid g \Gamma g^{-1} \subset
\Gamma \right\} \quad .
\end{equation}
The elements $g \in I(M)$ which give rise to well-defined maps
$g_{\#}$ on $ M / \Gamma$ are therefore precisely the elements of the
extended normalizer $eN(\Gamma)$, as we have seen in the discussion
above. Such elements $g$ are said to {\it descend to the quotient
space} $M / \Gamma$. Hence $eN(\Gamma)$ contains all isometries of
$M$ that descend to the quotient space $M / \Gamma$; the normalizer
$N(\Gamma )$, on the other hand, contains all those $g$ for which
$g^{-1}$ descends to the quotient as well. Thus, $N(\Gamma)$ is the
group of all $g$ which descend to {\bf invertible} maps $g_{\#}$ on
the quotient space. In fact, the normalizer $N(\Gamma)$ contains all
isometries of the quotient space, the only point being that the action
of $N(\Gamma)$ is not effective, since $\Gamma \subset N( \Gamma)$
acts trivially on $M / \Gamma$. However, $\Gamma$ is a normal
subgroup of $N (\Gamma)$ by construction, so that the quotient
$N(\Gamma) / \Gamma$ is a group again, which is now seen to act
effectively on $M / \Gamma$, and the isometries of $M / \Gamma$ which
descend from isometries of $M$ are in a 1--1 relation to elements of
this group. Thus, denoting the isometry group of the quotient space
$M/\Gamma$ as $I\left(M/\Gamma\right)$, we have the well-known result
that
\begin{equation}
\label{normaliz1}
I (M/ \Gamma) = N (\Gamma) / \Gamma \quad .
\end{equation}
Now we turn to the extended normalizer. For an element $g \in
eN(\Gamma)$, but $g \not \in N(\Gamma)$, the induced map $g_{\#}$ is
no longer injective on $M / \Gamma$: To see this we observe that now
the inclusion in definition (\ref{exNorm}) is proper, i.e. $g \Gamma
g^{-1} \subsetneqq \Gamma $. It follows that a $\gamma' \in \Gamma$
exists for which
\begin{equation}
\label{hilf2}
g \gamma g^{-1} \neq \gamma' \quad \text{for all $\gamma \in \Gamma$}
\quad.
\end{equation}
Take an arbitrary $x \in M$; we claim that
\begin{equation}
\label{hilf3}
g \Gamma g^{-1} x \subsetneqq \Gamma x \quad.
\end{equation}
To see this, assume to the contrary that the sets $g \Gamma g^{-1} x$
and $\Gamma x$ coincide; then a $\gamma_1 \in \Gamma$ exists for which
$g \gamma_1 g^{-1} x = \gamma' x$; since $g$ is an element of the
extended normalizer $eN(\Gamma)$, $g \gamma_1 g^{-1} \in \Gamma$,
i.e., $g \gamma_1 g^{-1} = \gamma_2$, say. It follows that $\gamma_2 x
= \gamma' x$ or $\gamma_2^{-1} \gamma' x = x$. The element
$\gamma_2^{-1} \gamma'$ belongs to $\Gamma$, which, by assumption,
acts freely. Free action implies that if a group element has a fixed
point then it must be the unit element, implying that $\gamma' =
\gamma_2 = g \gamma_1 g^{-1}$, which contradicts (\ref{hilf2});
therefore, (\ref{hilf3}) must hold. Eq. (\ref{hilf3}) can be expressed
by saying that the orbit of $x$ is the image of the orbit of $g^{-1}
x$ under the action of the induced map $g_{\#}$, $g_{\#} (\Gamma
g^{-1} x) = \Gamma x$; but that the $g_{\#}$-image of the orbit
$\Gamma g^{-1} x$, regarded as a set, is properly contained in the
orbit of $x$. The latter statement means that a $\gamma' \in \Gamma$
exists, as in (\ref{hilf2}), such that $\gamma' x \neq g \gamma g^{-1}
x$ for all $\gamma$. It follows that $\gamma g^{-1} x \neq g^{-1}
\gamma' x$ for all $\gamma$, implying that the point $g^{-1} \gamma'
x$ is not contained in the orbit of the point $g^{-1} x$. Its own
orbit, $\Gamma g^{-1} \gamma' x$, is therefore distinct from the orbit
$\Gamma g^{-1} x$ of the point $g^{-1} x$, since two orbits either
coincide or are disjoint otherwise. However, the induced map $g_{\#}$
maps $\Gamma g^{-1} \gamma' x$ into
\begin{equation}
\label{hilf4}
g_{\#} (\Gamma g^{-1} \gamma' x) = \Gamma( g g^{-1} \gamma' x) =
\Gamma x \quad,
\end{equation}
from which it follows that $g_{\#}$ maps two distinct orbits into the
same orbit $\Gamma x$, expressing the fact that $g_{\#}$ is not
injective. In particular, if $g$ was an isometry of $M$, then $g_{\#}$
can no longer be a global isometry on the quotient space, since it is
not invertible {\bf on the quotient space}. From this fact, or
directly from its definition (\ref{exNorm}), we infer that the
extended normalizer is a semigroup, since it contains the identity,
and with any two elements $g$ and $g'$ also their product $g g'$. On
the other hand, we shall see below that there are cases where the
elements $g$ of the extended normalizer are still locally injective,
in particular, injective on the tangent spaces of the
compactification; and moreover, they preserve the metric on the
tangent spaces, so that they should be regarded as a kind of
"generalized isometries".
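Before specializing to lattices, the descent mechanism and the loss of injectivity can be illustrated by a deliberately simple toy model of our own (the metric is ignored here; only the group action matters): take the covering space $\RR$, let $\Gamma$ be the group of integer translations, and consider the map $g(x)=2x$. Conjugation sends the translation by $n$ to the translation by $2n$, so $g \Gamma g^{-1}$ is a proper subgroup of $\Gamma$, and the induced map on the quotient circle is the surjective but two-to-one doubling map:

```python
# Toy illustration (ours, metric ignored): covering space M = R,
# identification group Gamma = integer translations, and g(x) = 2x.

def translation(n):
    """The element t_n of Gamma, acting on the covering space R."""
    return lambda x: x + n

def g(x):          # conjugation by g compresses the lattice: g Gamma g^{-1} = 2Z
    return 2.0 * x

def g_inv(x):      # inverse of g on the covering space
    return x / 2.0

def conjugated(n, x):
    """(g . t_n . g^{-1})(x); should equal t_{2n}(x)."""
    return g(translation(n)(g_inv(x)))

# g t_n g^{-1} = t_{2n}, so the conjugated group is the PROPER subgroup 2Z:
for n in range(-3, 4):
    assert conjugated(n, 0.375) == translation(2 * n)(0.375)

# The induced map on the quotient circle R/Z is x -> 2x (mod 1), which is
# surjective but 2-to-1: the distinct orbits of 0.25 and 0.75 coincide
# after applying it.
induced = lambda x: (2.0 * x) % 1.0
assert induced(0.25) == induced(0.75)
print("g descends to the non-injective doubling map on R/Z")
```

Of course this $g$ is not an isometry of the Euclidean line, in agreement with the obstruction proved in the next section; a genuinely isometric example requires an indefinite metric.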
\section{Identifications over lattices \label{IdOverLattice}}
So far we have been entirely general with regard to the manifolds
$M$. We now make more specific assumptions: Our covering spaces $M$
are taken to be a flat pseudo-Euclidean space $\RR_t^n$, i.e., $\RR^n$
endowed with a pseudo-Euclidean metric $\eta$ with signature $(-t,+s),
t+s=n$. The isometry group $I(\RR^n_t)$ of this space is a semidirect
product $\RR^n \odot O(t,n-t)$, where $\RR^n$ denotes the
translational factor, and will be denoted by $\EE(t,n-t)$. The
identification group $\Gamma$ will be taken as the set of primitive
lattice transformations of a lattice in $\RR^n_t$. If the $\RR$-linear
span of the lattice vectors has real dimension $m$, say, the resulting
identification space $M / \Gamma$ is homeomorphic to a product
manifold $\RR^{n-m} \times T^m$, where $T^m$ denotes an
$m$-dimensional torus. This space inherits the metric from the
covering manifold $\RR^n_t$, since the metric is a local object, but
the identification changes only the global topology. Thus $M / \Gamma$
is again a manifold, but may cease to be semi-Riemannian, since the
metric on the torus may turn out to be degenerate. Whereas the
isometry group of the covering space $\RR^n_t$ is $\EE(t,n-t)$, the
isometry group of the "compactified" space $M / \Gamma$ is obtained
from the normalizer $N(\Gamma)$ of the group $\Gamma$ in $\EE(t,n-t)$
according to formula (\ref{normaliz1}).
For our purposes it is sufficient to consider lattices that contain
the origin $0 \in \RR_t^n$ as a lattice point. Let $1~\le~m~\le~n$,
let $\underline{u} \equiv (u_1,\ldots ,u_m)$ be a set of $m$ linearly
independent vectors in $\RR_t^n$; then the $\ZZ$-linear span of
$\underline{u}$,
\begin{equation}
\label{latpoints1}
\mathsf{lat} \equiv \Big\{ \sum_{i=1}^m z_i \cdot u_i \Big| z_i \in
\mathbb{Z} \Big\} \quad ,
\end{equation}
is the set of {\it lattice points} with respect to
$\underline{u}$. Let $U$ denote the $\RR$-linear span of
$\underline{u}$, or equivalently, of $\mathsf{lat}$.
Let $T_{\mathsf{lat}}\subset \EE(t,n-t)$ denote the subgroup of all
translations in $\EE(t,n-t)$ through elements of $\mathsf{lat}$,
\begin{equation}
\label{pp3fo5}
T_{\mathsf{lat}}=\big\{(t_z,1)\in \EE(t,n-t) \big| t_z\in
\mathsf{lat} \big\} \quad .
\end{equation}
Elements $(t_z,1)$ of $T_{\mathsf{lat}}$ are called {\it primitive
lattice translations}. $T_{\mathsf{lat}}$ is taken as the discrete
group $\Gamma$ of identification maps, which gives rise to the
identification space $p: \RR^n_t \rightarrow \RR^n_t / T_{\lat}$.
We now examine the normalizer and extended normalizer of the
identification group: For an element $(t,R) \in \EE(t,n-t)$ to be in
the extended normalizer $eN(T_{\lat})$ of $T_{\lat}$, the condition
\begin{equation}
(t,R)(t_z,1)(t,R)^{-1} = (Rt_z,1) \in T_{\mathsf{lat}}
\label{Beding1}
\end{equation}
must be satisfied for all lattice vectors $t_z$. In other words, $Rt_z \in
\lat$, which means that the pseudo-orthogonal transformation
$R$ must preserve the lattice $\lat$, $R \lat \subset
\lat$. For an element $(t,R)$ to be in the normalizer,
$(t,R)^{-1}$ must be in the normalizer as well, implying $R^{-1}
\lat \subset \lat$, so altogether $R \lat =
\lat$. The elements $R$ occurring in the normalizer therefore
naturally form a subgroup $G_{\lat}$ of $O(t,n-t)$; on the
other hand, the elements $R$ occurring in the extended normalizer form
a {\it semigroup} $eG_{\lat}\supset G_{\lat}$.
Furthermore, no condition on the translations $t$ in $(t,R)$ arises,
hence all translations occur in the normalizer as well as in the
extended normalizer. Thus, the [extended] normalizer has the structure
of a semi-direct [semi-]group
\begin{subequations}
\label{Norm1}
\begin{align}
N\left( \Gamma \right) & = \mathbb{R}^n\odot G_{\lat} \quad ,
\label{Norm1a} \\
eN\left( \Gamma \right) & = \mathbb{R}^n\odot eG_{\lat}
\quad, \label{Norm1b}
\end{align}
\end{subequations}
where $\RR^n$ refers to the subgroup of all translations in
$\EE(t,n-t)$.
A sufficient condition under which $eG_{\lat}$ coincides with
$G_{\lat}$ is easily found:
\begin{theorem}[Condition] \label{Bedingung}
If the restriction $\eta|U$ of the metric $\eta$ to the real linear
span $U$ of the lattice vectors is positive- or negative-definite,
then $eG_{\mathsf{lat}} = G_{\mathsf{lat}}$.
\end{theorem}
{\it Proof:}
We start with the case that $\eta|U$ is positive definite. Assume that
$R \in eG_{\lat} \subset O(t,n-t)$, but $R \not\in G_{\lat}$. Then $R$
preserves the lattice but is not surjective, i.e., $R \lat \subsetneqq
\lat$. This implies the series of inclusions
\begin{equation}
\label{incl1}
\cdots R^3 \lat \subsetneqq R^2 \lat \subsetneqq R\lat \subsetneqq
\lat \quad.
\end{equation}
Now choose an $x \in \lat \backslash R \lat$; then it follows that $R
x \in R \lat \backslash R^2 \lat, \ldots, R^k x \in R^k \lat
\backslash R^{k+1} \lat$. The series of proper inclusions
(\ref{incl1}) then implies that the elements in the series $k \mapsto
R^k x$ are all distinct from each other; furthermore, they are all
lattice points, since $R$ preserves the lattice. We then see that the
set of lattice points $\big\{\, R^k x\, \big|\, k \in \NN_0\, \big\}$
must be infinite. Since the lattice is discrete, any region of bounded
Euclidean norm contains only finitely many lattice points; hence there
are elements in this set whose Euclidean norm $\sqrt{ (\eta|U)(R^k x,
R^k x)}$ exceeds every finite bound. However, the transformations $R$
are taken from the overall
metric-preserving group $O(t,n-t)$ and hence must also preserve the
restricted metric $\eta|U$. The latter statement implies that
\begin{equation}
\label{incl2}
\cdots = (\eta|U)(R^3 x, R^3 x) = (\eta|U)(R^2 x, R^2 x) = (\eta|U)(Rx,Rx) =
(\eta|U)(x,x) \quad,
\end{equation}
hence all elements $R^k x$ have the same norm according to
(\ref{incl2}). This contradicts the previous conclusion that there are
elements for which the norm exceeds all bounds. This shows that our
initial assumption $R \not \in G_{\lat}$ was wrong.
If $\eta|U$ is assumed to be negative-definite, the argument given
above clearly still applies, since the norm of an element $x$ in this
case is just given by $\sqrt{-(\eta|U)(x,x)}$. This completes our
proof. {\hfill$\blacksquare$}
We see that the structure of the proof relies on the fact that the
restricted metric $\eta|U$ was Euclidean. If this metric were
pseudo-Euclidean instead, the possibility that $eG_{\lat} \supsetneqq
G_{\lat}$ arises. We shall now study an explicit example of this
situation.
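As a complement to the theorem, its conclusion can be verified explicitly for the square lattice $\ZZ^2$ in Euclidean $\RR^2$ (an illustration of ours, not taken from the text): an orthogonal matrix mapping the lattice into itself must have integer entries with unit-norm columns, and every such matrix is invertible over the lattice, so no proper semigroup extension arises.

```python
import itertools
import numpy as np

# Euclidean case: an orthogonal 2x2 matrix mapping Z^2 into itself has
# integer columns of unit length, i.e. entries in {-1, 0, 1}.  Enumerate
# all such matrices and check that each is a lattice BIJECTION (its
# inverse is again integral), so eG_lat = G_lat, as the definiteness
# condition predicts.

lattice_preserving = []
for entries in itertools.product((-1, 0, 1), repeat=4):
    R = np.array(entries, dtype=float).reshape(2, 2)
    if np.allclose(R.T @ R, np.eye(2)):        # orthogonality test
        lattice_preserving.append(R)

assert len(lattice_preserving) == 8            # symmetries of the square
for R in lattice_preserving:
    R_inv = np.linalg.inv(R)
    # the inverse is again an integer matrix, hence lattice-preserving
    assert np.allclose(R_inv, np.round(R_inv))
```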
\section{Two-dimensional cylinders with a compact lightlike direction
\label{2DGeometry} }
We now study a specific example of a space, or rather, a family of
spaces, for which natural semigroup extensions can be derived from the
isometry group of a simply-connected covering space.
The quotient spaces under consideration are two-dimensional cylinders
and will be denoted by $C^2_r$; they are defined as follows:
\begin{equation}
\label{def1}
C^2_r \equiv [0,r) \times \RR \quad, \quad r \sim 0 \quad,
\end{equation}
i.e., $[0,r)$ is a circle with perimeter $r >0$, and we have the
identification $0 \sim r$. These spaces shall have canonical
coordinates $(x^+, x^-)$, where $0 \le x^+ < r$ and $x^- \in \RR$. To
the coordinates $(x^+, x^-)$ we shall also refer as {\it lightlike},
for reasons which will become clear shortly. Each of the cylinders
$C^2_r$ is endowed with a metric $h$ which, in lightlike coordinates,
is given as
\begin{equation}
\label{def2}
h = - dx^+ \otimes dx^- - dx^- \otimes dx^+ \quad.
\end{equation}
The manifolds $C^2_r$ have a remarkable property: They
are all isometric, the isometry $\phi_{rr'} : C^2_r \rightarrow
C^2_{r'}$ being given by
\begin{equation}
\label{iso1}
\phi_{rr'}\big(\, x^+, \, x^-\, \big) = \big(\, x^{\prime +}, \, x^{\prime -}
\, \big) \equiv \bigg(\, \frac{r'}{r}\, x^+\, ,\; \frac{r}{r'}\, x^-
\, \bigg) \quad,
\end{equation}
where $(x^+, x^-)$, $(x^{\prime +}, x^{\prime -})$ are lightlike
coordinates on $C^2_r$ and $C^2_{r'}$, respectively. Since the metric
on $C^2_{r'}$ is given by
\begin{equation}
\label{iso2}
h' = - dx^{\prime +} \otimes dx^{\prime -} - dx^{\prime -} \otimes
dx^{\prime +} \quad,
\end{equation}
we see immediately that $h$ and $h'$ are related by pull-back,
\begin{equation}
\label{iso3}
\phi^*_{rr'}\, h' = h \quad.
\end{equation}
This means that the geometry of $C^2_r$, and subsequently, any physics
built upon covariant geometrical objects on $C^2_r$, must be
completely insensitive to the size of the "compactification radius"
$r/2\pi$. This may suggest that it makes sense to consider the limit
$r \rightarrow 0$.
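The pullback relation (\ref{iso3}) can also be checked numerically: the Jacobian of $\phi_{rr'}$ is the constant matrix $J = \diag(r'/r,\, r/r')$, and $J^T h' J = h$ for any pair of radii. The following short sketch (ours) confirms this.

```python
import numpy as np

# Check that phi_{r r'} : (x+, x-) -> ((r'/r) x+, (r/r') x-) is an
# isometry between the cylinders C^2_r and C^2_{r'}: its constant
# Jacobian J = diag(r'/r, r/r') pulls the metric h' back to h.

h = np.array([[0.0, -1.0],
              [-1.0, 0.0]])        # lightlike metric of both cylinders

for r, r_prime in [(1.0, 2.0), (0.5, 7.0), (3.0, 1e-2)]:
    J = np.diag([r_prime / r, r / r_prime])   # Jacobian of phi_{r r'}
    assert np.allclose(J.T @ h @ J, h)        # pullback reproduces h

print("phi_{r r'} is metric-preserving for every choice of radii")
```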
We now show how the cylinders $C^2_r$ are obtained from a
simply-connected two-dimensional covering space: The uncompactified
covering space is taken to be $\RR^2_1$, which is homeomorphic to
$\RR^2$ with canonical coordinates $X \equiv (x^0, x^1)$ with respect
to the canonical basis $(\v{e}_0, \v{e}_1)$, while the metric in this
basis is $\diag(-1,1)$. The isometry group is $\EE(1,1) = \RR^2 \odot
O(1,1)$, where $\RR^2$ acts as translational subgroup $(t,1)$
according to
\begin{equation}
\label{transsub1}
\RR^2 \ni t = \left( \begin{mat}{1} & t^0 \\ & t^1 \end{mat} \right)
\quad, \quad (t,1)\, \left( \begin{mat}{1} & x^0 \\ & x^1 \end{mat}
\right) = \left( \begin{mat}{1} & x^0 + t^0 \\ & x^1 + t^1 \end{mat}
\right) \quad.
\end{equation}
A faithful $3$-dimensional real matrix representation of
$\EE(1,1)$ is given by
\begin{equation}
\label{matrep1}
\left( t, \Lambda \right) = \left( \begin{mat}{2} & \Lambda &&t \\
&0&& 1
\end{mat} \right) \quad,
\end{equation}
satisfying the Poincar\'e group law $(t,\Lambda) (t', \Lambda') =
(\Lambda t'+t, \Lambda \Lambda')$. If $\Lambda$ lies in the identity
component of $O(1,1)$ then
\begin{equation}
\label{2d1}
\Lambda = \left( \begin{mat}{2} & \cosh \alpha && \sinh \alpha \\ &
\sinh \alpha && \cosh \alpha \end{mat} \right) \quad, \quad \alpha
\in \RR \quad.
\end{equation}
This representation acts on the coordinates $X\in \RR^2_1$ according
to
\begin{equation}
\label{CoordAct1}
(t,\Lambda)\, X = \left( \begin{mat}{2} & \Lambda&& t \\ &0&& 1
\end{mat} \right) \left( \begin{mat}{1} &X \\ &1 \end{mat} \right) =
\left( \begin{array}{@{}c@{}} \Lambda X + t \\ 1 \end{array} \right)
\quad.
\end{equation}
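As a quick consistency check (our sketch, not part of the derivation), the $3$-dimensional matrices (\ref{matrep1}) can be multiplied numerically to confirm the group law $(t,\Lambda)(t',\Lambda') = (\Lambda t' + t,\, \Lambda\Lambda')$:

```python
import numpy as np

# Build the 3x3 representation (t, Lambda) for boosts in the identity
# component of O(1,1) and verify the Poincare group law
# (t, L)(t', L') = (L t' + t, L L').

def rep(t, alpha):
    L = np.array([[np.cosh(alpha), np.sinh(alpha)],
                  [np.sinh(alpha), np.cosh(alpha)]])
    M = np.eye(3)
    M[:2, :2] = L          # Lorentz block
    M[:2, 2] = t           # translation column
    return M, L

t1, t2 = np.array([0.4, -1.3]), np.array([2.2, 0.9])
M1, L1 = rep(t1, 0.3)
M2, L2 = rep(t2, -1.1)

product = M1 @ M2
assert np.allclose(product[:2, :2], L1 @ L2)      # Lorentz parts multiply
assert np.allclose(product[:2, 2], L1 @ t2 + t1)  # translations: L t' + t
assert np.allclose(product[2], [0.0, 0.0, 1.0])   # bottom row preserved
```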
We now introduce a one-dimensional lattice with primitive lattice
vector $r \, \v{e}_+$, where $\v{e}_+ \equiv \smallf{1}{\sqrt{2}}
\left( \v{e}_0 + \v{e}_1 \right)$, so that the set $\underline{u}$ as
defined in section \ref{IdOverLattice} contains just one element,
\begin{equation}
\label{lattvec1}
\underline{u} = \left\{r\, \v{e}_+ \right\} \quad.
\end{equation}
We can introduce lightlike coordinates
\begin{equation}
\label{coord1}
\left( \begin{mat}{1} & x^+ \\ & x^- \end{mat} \right) =
\frac{1}{\sqrt{2}} \left( \begin{mat}{2} & 1 && 1 \\ & 1 & - & 1
\end{mat} \right) \left( \begin{mat}{1} & x^0 \\ & x^1 \end{mat}
\right) = M\, \left( \begin{mat}{1} & x^0 \\ & x^1 \end{mat} \right)
\quad,
\quad M = M^T = M^{-1} \quad,
\end{equation}
on the covering space $\RR^2_1$, so that $\v{e}_+$ is the basis vector
in the $x^+$-direction. The set $\mathsf{lat}$ as defined in
(\ref{latpoints1}) is now given as
\begin{equation}
\label{2d2}
\mathsf{lat} = \left\{\v{0}, \, \pm r\, \v{e}_+, \, \pm 2r\, \v{e}_+,
\, \ldots \right\} \quad.
\end{equation}
The subset $T_{\mathsf{lat}}$ as defined in (\ref{pp3fo5}) is now
\begin{equation}
\label{transsub2}
T_{\mathsf{lat}} = \bigg\{\; (k\, r\, \v{e}_+,\, 1) \in \EE(1,1)\;
\bigg| \; k \in \ZZ \; \bigg\} \quad.
\end{equation}
The elements of $T_{\mathsf{lat}}$ are the primitive lattice
translations, and the identification group is $\Gamma =
T_{\mathsf{lat}}$. After taking the quotient of $\RR^2_1$ over
(\ref{transsub2}), we obtain the two-dimensional cylinder $C^2_r$ as
defined in (\ref{def1}).
According to (\ref{Beding1}), a general Poincar\'e transformation
$(t,\Lambda)$ lies in the extended normalizer of $T_{\mathsf{lat}}$ if
and only if $\Lambda$ preserves the lattice (\ref{2d2}), which is
equivalent to the condition that
\begin{equation}
\label{cond1}
\Lambda \v{e}_+ = k \cdot \v{e}_+ \quad \text{for some nonzero integer $k$.}
\end{equation}
To facilitate computations we now transform everything into lightlike
coordinates (\ref{coord1}). Then the metric $h= \diag(-1,1)$ takes the
form
\begin{equation}
\label{coord2}
h = \left( \begin{mat}{2} & 0 & -& 1 \\ -& 1& & 0 \end{mat} \right)
\quad,
\end{equation}
while a Lorentz transformation (\ref{2d1}) in this basis looks like
\begin{equation}
\label{coord3}
\Lambda = \left( \begin{mat}{2} & e^{\alpha} && 0 \\ &0& &e^{-\alpha}
\end{mat} \right) \quad.
\end{equation}
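Both statements, that the metric becomes purely off-diagonal, eq. (\ref{coord2}), and that a boost becomes $\diag(e^{\alpha}, e^{-\alpha})$, eq. (\ref{coord3}), follow from conjugating with the matrix $M$ of (\ref{coord1}); a short numerical sketch (ours) confirms them:

```python
import numpy as np

# In the lightlike basis x' = M x, with M = M^T = M^{-1}, the Minkowski
# metric diag(-1, 1) becomes purely off-diagonal, and a boost of rapidity
# alpha becomes diag(e^alpha, e^{-alpha}).

M = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)
eta = np.diag([-1.0, 1.0])

# M is symmetric and an involution: M = M^T = M^{-1}
assert np.allclose(M, M.T) and np.allclose(M @ M, np.eye(2))

# metric in lightlike coordinates: h -> (M^{-1})^T eta M^{-1} = M eta M
assert np.allclose(M @ eta @ M, [[0.0, -1.0], [-1.0, 0.0]])

# a boost conjugated into the lightlike basis is diagonal
alpha = 0.7
boost = np.array([[np.cosh(alpha), np.sinh(alpha)],
                  [np.sinh(alpha), np.cosh(alpha)]])
assert np.allclose(M @ boost @ M, np.diag([np.exp(alpha), np.exp(-alpha)]))
```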
If we allow $\Lambda$ to take values in the full group $O(1,1)$,
rather than the identity component, we obtain matrices
\begin{equation}
\label{coord4}
\Lambda \quad = \quad \left( \begin{mat}{2} & e^{\alpha} && 0 \\ &0&
&e^{-\alpha} \end{mat} \right) \quad, \quad - \left( \begin{mat}{2} &
e^{\alpha} && 0 \\ &0& &e^{-\alpha} \end{mat} \right) \quad, \quad
\left( \begin{mat}{2} &0 && e^{-\alpha} \\ & e^{\alpha} & & 0
\end{mat} \right) \quad, \quad - \left( \begin{mat}{2} &0 &&
e^{-\alpha} \\ & e^{\alpha} & & 0
\end{mat} \right) \quad.
\end{equation}
To rewrite these matrices we introduce the parity transformation $\C{P}$, defined by
$\C{P}(x^0, x^1) = (x^0, -x^1)$, which acts on lightlike coordinates
according to
\begin{equation}
\label{parity1}
\C{P} \left( \begin{mat}{1} &x^+ \\ & x^- \end{mat} \right) = \left(
\begin{mat}{1} & x^{\prime +} \\ & x^{\prime -} \end{mat} \right) = \left(
\begin{mat}{1} & x^- \\ & x^+ \end{mat} \right) \quad,
\end{equation}
and the time-reversal transformation $\C{T}(x^0, x^1) = (-x^0, x^1) =
- \C{P}(x^0, x^1)$, which acts on lightlike coordinates as
\begin{equation}
\label{timerev1}
\C{T} \left( \begin{mat}{1} &x^+ \\ & x^- \end{mat} \right) = \left(
\begin{mat}{1} & x^{\prime +} \\ & x^{\prime -} \end{mat} \right) = \left(
\begin{mat}{1} -& x^- \\ -& x^+ \end{mat} \right) \quad.
\end{equation}
Parity and time-reversal are the discrete isometries of the metric
(\ref{coord2}). The sequence of matrices in (\ref{coord4}) can then be
written as
\begin{equation}
\label{coord4A}
\Lambda \quad, \quad \C{P} \C{T} \Lambda = \Lambda \C{P} \C{T} \quad,
\quad \C{P} \Lambda \quad, \quad \C{T} \Lambda \quad.
\end{equation}
The condition (\ref{cond1}) can be satisfied only for matrices of the
first two kinds; these must have the form
\begin{equation}
\label{coord5}
\Lambda_m = \left( \begin{mat}{2} &m&&0 \\ &0 & &\frac{1}{m}
\end{mat} \right) \quad, \quad m \in \ZZ \setminus \{0\} \quad.
\end{equation}
This set of matrices (\ref{coord5}) constitutes $eG_{\mathsf{lat}}$
defined in section \ref{IdOverLattice}; it is
obviously a semigroup, with composition law and unit element
\begin{equation}
\label{coord6}
\Lambda_m\, \Lambda_{m'} = \Lambda_{m\, m'} \quad, \quad \Eins =
\Lambda_{1} \quad.
\end{equation}
This semigroup is isomorphic to the semigroup $(\ZZ \setminus \{0\}, \cdot)$ of all nonzero integers
with multiplication as composition and one as unit element. For $m
\neq 1$, the inverse of a matrix $\Lambda_m$ in the full Lorentz
group $O(1,1)$ is given by
\begin{equation}
\label{coord7}
\Lambda_m^{-1} = \Lambda_{1/m} = \left(
\begin{mat}{2} & \frac{1}{m} &&0 \\ &0&& m \end{mat} \right) \quad.
\end{equation}
However, these inverses violate the condition (\ref{cond1}) and hence
cannot be elements of $eG_{\mathsf{lat}}$; this fact accounts for the
semigroup structure of (\ref{coord5}). Consequently, the only Lorentz
transformation (\ref{coord5}) which preserves the lattice (\ref{2d2})
such that its inverse preserves the lattice as well is the unit matrix
$\Eins$; as a consequence, the group $G_{\mathsf{lat}} = \{ \Eins \}$
is trivial. Then (\ref{Norm1}) takes the form
\begin{subequations}
\label{Norm2}
\begin{align}
N\left( \Gamma \right) & \simeq \mathbb{R}^2 \quad, \label{Norm2a} \\
eN\left( \Gamma \right) & \simeq \mathbb{R}^2 \odot (\ZZ \setminus \{0\}, \cdot)
\quad. \label{Norm2b}
\end{align}
\end{subequations}
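The semigroup structure just derived can be made concrete with exact rational arithmetic; the sketch below (ours) checks that $\Lambda_m$ preserves the lattice, that its Lorentz inverse $\Lambda_{1/m}$ does not for $|m| > 1$, and that composition multiplies the integer labels, $\Lambda_m \Lambda_{m'} = \Lambda_{m m'}$.

```python
from fractions import Fraction

# Lambda_m acts on lightlike coordinates as (x+, x-) -> (m x+, x-/m).
# Exact rationals avoid any floating-point ambiguity.

def Lam(m):
    m = Fraction(m)
    return lambda xp, xm: (m * xp, xm / m)

r = Fraction(1)                      # primitive lattice spacing

# Lambda_m maps the lattice point k r e_+ to (m k) r e_+, again a
# lattice point ...
for m in (2, 3, -5):
    xp, _ = Lam(m)(5 * r, Fraction(0))
    assert (xp / r).denominator == 1          # integer multiple of r

# ... whereas the Lorentz inverse Lambda_{1/m} sends r e_+ to (r/m) e_+,
# which is not a lattice point for |m| > 1:
xp_bad, _ = Lam(Fraction(1, 3))(r, Fraction(0))
assert (xp_bad / r).denominator != 1

# composition law Lambda_m Lambda_{m'} = Lambda_{m m'}:
x = (Fraction(3), Fraction(7))
assert Lam(2)(*Lam(3)(*x)) == Lam(6)(*x)
```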
In order to obtain the isometry group $I(C^2_r)$, and its semigroup
extension $eI(C^2_r)$, we must divide out the lattice translations
(\ref{transsub2}). Then, in lightlike coordinates (\ref{coord1}),
\begin{subequations}
\label{Norm3}
\begin{align}
I(C^2_r) & \simeq \bigg\{\; \left( \begin{mat}{1} &t^+ \\ & t^-
\end{mat} \right) \; \bigg| \; t^+ \in [0,r) \;,\; t^- \in \RR \;
\bigg\} \quad, \label{Norm3a} \\
eI(C^2_r) & \simeq \bigg\{\; \left( \begin{mat}{1} &t^+ \\ & t^-
\end{mat} \right) \; \bigg| \; t^+ \in [0,r) \;,\; t^- \in \RR \; \bigg\}
\odot (\ZZ \setminus \{0\}, \cdot) \quad. \label{Norm3b}
\end{align}
\end{subequations}
(\ref{Norm3a}) says that the isometry group of the cylinder $C^2_r$
contains translations only; (\ref{Norm3b}) expresses the fact that the
semigroup extension of the isometry group of the cylinder is given by
the discrete Lorentz transformations (\ref{coord5}). A faithful matrix
representation of $eI(C^2_r)$ is obtained from (\ref{CoordAct1}) using
lightlike coordinates (\ref{coord1}),
\begin{equation}
\label{faithsemi2}
(t,\Lambda_m) = \left( \begin{array}{cc} \Lambda_m & t \\ 0 & 1
\end{array} \right) = \left( \begin{array}{cc|c} m & 0 & t^+ \\ 0 &
\frac{1}{m} & t^- \\[5pt] \hline \rule{0pt}{10pt} 0 & 0 & 1
\end{array} \right) \quad.
\end{equation}
(\ref{faithsemi2}) acts on lightlike coordinates according to
\begin{equation}
\label{faithsemi3}
(t,\Lambda_m) \left( \begin{mat}{1} & x^+ \\ &x^- \end{mat} \right) =
\left( \begin{mat}{1} & m\, x^+ + t^+ \\[5pt] & \frac{1}{m}\, x^- + t^-
\end{mat} \right) \quad.
\end{equation}
We see that these transformations are volume-preserving, since
translations and Lorentz transformations are so, and the
transformations $\Lambda_m$ are still {\it locally injective}; in
particular, they are injective on the tangent spaces. Furthermore, it
is obvious that $\Lambda_m$ is injective on each of the strips
$\left[0, \frac{r}{m} \right) \times \RR$, $\left[ \frac{r}{m},
\frac{2r}{m} \right) \times \RR$, $\ldots$ , separately, but since
each strip is mapped by $\Lambda_m$ onto the whole interval $[0,r)$,
the inverse image $\Lambda_m^{-1}(x^+, x^-)$ of any point $(x^+, x^-)
\in C^2_r$ contains $m$ elements. Yet each of the $\Lambda_m$ is
perfectly metric-preserving,
\begin{equation}
\label{metpres1}
\Lambda_m^T\, h\, \Lambda_m = h \quad,
\end{equation}
and injective on tangent spaces, as mentioned above. These
transformations therefore should qualify as isometries in an
``extended'' sense. What is more, even though these transformations are
not globally injective, they still preserve the cardinality of the
cylinder $C^2_r$, since each of the strips $\left[ \frac{r}{m},
\frac{2r}{m} \right)$, etc., has the same cardinality as the total
interval $[0,r)$.
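The invariance (\ref{metpres1}) is immediate in lightlike coordinates,
where the metric is purely off-diagonal, $h_{++} = h_{--} = 0$:
\begin{equation}
\Lambda_m^T\, h\, \Lambda_m = \left( \begin{array}{cc} m & 0 \\ 0 &
\frac{1}{m} \end{array} \right) \left( \begin{array}{cc} 0 & h_{+-} \\
h_{+-} & 0 \end{array} \right) \left( \begin{array}{cc} m & 0 \\ 0 &
\frac{1}{m} \end{array} \right) = h \quad,
\end{equation}
since the factors $m$ and $\frac{1}{m}$ cancel in each off-diagonal
entry.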
On the covering space $\RR^2_1$, transformations $\Lambda_m$ are
indeed one-to-one, and any two reference frames {\it on the covering}
$\RR^2_1$ which are related by such a transformation must be regarded
as physically equivalent. This leads to an important question: Should
reference frames on the compactification $C^2_r$ be regarded as
equivalent if they are related by a semigroup transformation
$\Lambda_m$ (\ref{coord5})? Let us examine the consequences of such an
assumption for a scalar field:
\section{Classical scalar fields living on $C^2_r$ \label{Scalar}}
Let us first clarify what we mean by a real scalar field on the
compactification, by comparing it with the definition of an
$O(1,1)$-scalar $\phi$ on the covering space $\RR^2_1$: In $\RR^2_1$
we have a set of equivalent frames which are mutually related by
Lorentz transformations $\Lambda$. In each of these frames, an
observer defines a single-valued field with respect to his coordinate
chart by assigning a single number to each point, $\RR^2_1 \ni X
\mapsto \phi(X) \in \RR$. Consider any two equivalent frames with
coordinates $X$ and $X' = \Lambda X$, and call the respective field
values $\phi(X)$ and $\phi'(X')$. If these field assignments are
related by
\begin{equation}
\label{scalar1}
\phi'(X') = \phi(\Lambda^{-1} X'\,) \quad,
\end{equation}
for all $X' \in \RR^2_1$ and for all pairs of equivalent frames, then
we refer to the collection of assignments $\phi, \phi', \ldots$ as an
$O(1,1)$ scalar field.
If we now try to carry over this scenario to the compactified space
$C^2_r$ we see that we run into difficulties: The set $O(1,1)$ is
reduced to a discrete set (\ref{coord5}) which moreover is now only a
semigroup, so that we cannot apply definition (\ref{scalar1}), which
involves inverses of $\Lambda$. However, (\ref{scalar1}) may be
rewritten in the form
\begin{equation}
\label{scalar2}
\phi'(\Lambda X) = \phi(X) \quad,
\end{equation}
for all $X \in C^2_r$, and $\Lambda = \Lambda_m$ now. Using
(\ref{faithsemi3}) this can be written explicitly as
\begin{equation}
\label{scalar3}
\phi'\left(m\, x^+, \frac{1}{m}\, x^- \right) = \phi(x^+, x^-) \quad.
\end{equation}
Thus, if $\phi(X)$ is a single-valued field in frame/chart $X$, {\bf
and} if we adopt the assumption that frames on $C^2_r$ related by
transformations (\ref{coord5}) are physically equivalent, then
frame/chart $X'$ must see a {\bf single-valued} field $\phi'(X')$
obeying (\ref{scalar3}). So, if $x^+$ ranges through
$[0,\smallf{r}{m})$, then $x^{\prime +}$ covers $[0,r)$ once; if $x^+$ ranges
through $[\smallf{r}{m}, \smallf{2r}{m} )$, then $x^{\prime +}$ already has
covered $[0,r)$ twice, and so on. Thus, if $\phi'$ is supposed to be
single-valued, then $\phi$ must be {\bf periodic} with period $r/m$ on
the interval $[0,r)$ in the first place. But then this argument must
hold for arbitrary values of $m$. Barring pathological cases and
focusing on reasonably smooth fields $\phi$ we conclude that $\phi$
must be constant on the whole interval $[0,r)$! As a consequence of
(\ref{scalar3}), each of the equivalent observers reaches the same
conclusion for his field $\phi'$. The statement that the scalar field
$\phi$ is independent of the lightlike coordinate $x^+$ is therefore
``covariant'' with respect to the set of $\Lambda_m$-related observers
on $C^2_r$.
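For smooth fields this argument can be phrased in Fourier modes:
expanding
\begin{equation}
\phi(x^+, x^-) = \sum_{n \in \ZZ} c_n(x^-)\; e^{2\pi i n x^+/r} \quad,
\end{equation}
periodicity with period $\smallf{r}{m}$ forces $c_n = 0$ unless $m$
divides $n$; demanding this for every $m$ leaves only the constant
mode $c_0(x^-)$.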
It is interesting to see that this fact makes the field $\phi$
automatically a solution to a massless field equation if $\phi$ was
assumed to be massless in the first place: To see this, rewrite the
d'Alembert operator $\Box = -h^{ab} \d_a \d_b$ in lightlike
coordinates using (\ref{coord2}),
\begin{equation}
\label{scalar4}
\Box = 2\, \d_+ \d_- = 2\, \d_- \d_+ \quad.
\end{equation}
The equation of motion for a massless scalar field is $\Box \phi = 0$,
which, on account of (\ref{scalar4}), is seen to be satisfied
automatically by all scalars which obey the consistency condition
$\d_+ \phi = 0$ as discussed above. On the other hand, a massive
scalar is inconsistent with the peculiar geometry on $C^2_r$ as
manifested in the semigroup structure of the isometry group, since the
massive Klein-Gordon equation $(\Box + \mu_b^2) \phi = 0$ leads
necessarily to $\mu_b^2\, \phi = 0$ for all scalars $\phi$ compatible
with the semigroup structure on $C^2_r$. Let us repeat this important
point:
{\it Any single-valued field $\phi$ on $C^2_r$ which transforms as a
scalar with respect to the extended isometry group $eI(C^2_r)$ is
necessarily a solution to a massless Klein-Gordon equation}.
We can arrive at the same conclusion, and learn more, if we study
plane-wave solutions to the massive Klein-Gordon equation:
\begin{equation}
\label{plw1}
\phi(x^0, x^1) \sim \cos\big(kx^1 - \omega_k x^0 + \delta \big)
\quad, \quad \omega_k = \sqrt{ k^2 + \mu_b^2} \quad.
\end{equation}
Here we have started by assuming the general case, so that the
dispersion law on the right-hand side of (\ref{plw1}) is again that of
a massive field, but we shall presently arrive at the same conclusion
of masslessness as above. Expressed in lightlike coordinates
eq. (\ref{plw1}) becomes
\begin{equation}
\label{plw2}
\phi(x^+, x^-) \sim \cos \bigg\{ \frac{1}{\sqrt{2}} \left( k-
\omega_k \right) x^+ - \frac{1}{\sqrt{2}} \left( k+ \omega_k \right)
x^- + \delta \bigg\} \quad.
\end{equation}
This quantity must be independent of $x^+$, which is equivalent to
saying that
\begin{equation}
\label{plw3}
\mu_b = 0 \quad, \quad k = |k| > 0 \quad.
\end{equation}
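Indeed, independence of $x^+$ means that the coefficient of $x^+$ in
(\ref{plw2}) must vanish,
\begin{equation}
k - \omega_k = 0 \quad \Longrightarrow \quad k^2 = k^2 + \mu_b^2
\quad \Longrightarrow \quad \mu_b = 0 \quad,
\end{equation}
and then $\omega_k = |k|$ together with $k = \omega_k$ leaves
precisely the modes $k = |k| > 0$.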
In other words, only the {\it right-propagating} modes are admissible,
while left-propagating ones, $k<0$, are inconsistent with the
$eI(C^2_r)$-geometry on the compactification. The admissible modes, up
to constant phase shifts, and expressed in $x^0 x^1$-coordinates, then
look like
\begin{subequations}
\label{cf5}
\begin{equation}
\label{cf5a}
\begin{aligned}
\cos k(x^1-x^0) & = \Re f_k(x) \quad, \quad f_k(x) = e^{ik x^1 - i k
x^0} = f_k(x^1-x^0) \quad,
\end{aligned}
\end{equation}
so that any field $\phi$ may be decomposed into
\begin{align}
\phi(x^1-x^0) & = \int\limits_0^{\infty} \frac{dk}{4\pi k}
\sqrt{\hbar_C}\; \bigg\{ a_k\, f_k(x) + a_k\ad\, f_k^*(x) \bigg\}
\quad, \quad x = (x^0, x^1) \quad, \label{cf5b}
\end{align}
\end{subequations}
where we have introduced the $SO(1,1)$-invariant integration measure
$dk/4\pi k$. At this point, $\hbar_C$ is just a real numerical factor
with dimension of an action which, in the quantum theory, may be
identified with the Planck quantum of action on the cylinder $C^2_r$.
From (\ref{cf5a}) we immediately see that parity transformation or
time-reversal do not map admissible solutions into admissible
solutions; therefore, both of these symmetries are broken. However, a
combined parity-time-reversal is admissible, as it maps a mode
(\ref{cf5a}) into itself. This is clearly consistent with the findings
in (\ref{coord4A}, \ref{coord5}), where the combined transformation
$\C{P} \C{T}$ was seen to be an isometry of the metric, but $\C{T}$ or
$\C{P}$ separately were not.
Now, what about the dependence of $\phi$ on the coordinate $x^-$? We
can rewrite (\ref{scalar3}),
\begin{equation}
\label{scalar5}
\phi'(x^{\prime -}) = \phi(m\, x^{\prime -}) \quad, \quad x^{\prime
-} \in \RR \quad.
\end{equation}
Thus, an observer using chart $X' = \Lambda_m\, X$ sees a ``contracted''
version of the field, since, on an interval $[0,\smallf{L}{m})$,
$x^{\prime -}$ has already covered the same information with respect to
$\phi'$ as $x^-$ in the interval $[0,L)$ with respect to $\phi$. More
precisely, the Fourier transforms $\wt{\phi}(k)$ and $\wt{\phi}'(k')$ of the
fields $\phi(X)$ and $\phi'(X')$ are related by
\begin{equation}
\label{scalar6}
\wt{\phi}'(k') = \frac{1}{m}\, \wt{\phi}\left( \frac{k'}{m} \right)
\quad,
\end{equation}
hence the frequency spectrum of the scalar field is ``blue-contracted'',
thus ``more energetic'', for the $X'$-observer. Suppose that the field
$\phi$ is such that it tends to zero at infinity. Then in the
``infinite-momentum'' limit $m \rightarrow \infty$ we see a field with
spatial dependence
\begin{equation}
\label{scalar7}
\phi'(x^{\prime -}) \; = \; \left\{ \begin{array}{ccl} 0 & , &
x^{\prime -} \neq 0 \quad, \\[8pt] \phi(0) & , & x^{\prime -} = 0
\quad. \end{array} \right.
\end{equation}
This result is noteworthy: We have started out with a field $\phi$
which was supposed to transform under the semigroup transformations as
a scalar. This transformation behaviour has led us to conclude that
the field must be right-propagating, and must not depend on the
coordinate $x^+$. We have derived this independence by assuming only
that 1.) the lightlike direction $x^+$ is compactified, with no
condition on the length $r$ of the compactified interval; and 2.) that
the semigroup transformations (\ref{coord5}) continued to make sense
as maps between physically equivalent reference frames, which allowed
us to define fields with scalar transformation behaviour under the
semigroup (\ref{coord5}). On the other hand, we must keep in mind that
the transformations $\Lambda_m$ no longer comprise a group, but only a
semigroup with the property (\ref{coord6}) that the succession of two
such transformations can only ever go in {\bf one} direction, namely
towards $m \rightarrow \infty$. In our view this points out the
infinite-momentum frame $m \rightarrow \infty$ as something preferred!
The preference is manifested in the fact that the set of Lorentz
transformations (\ref{coord5}) now has an {\it inherent order}, given
by
\begin{equation}
\label{order1}
\Lambda_m \prec \Lambda_{m+1} \prec \cdots \quad,
\end{equation}
which follows the order $m < m+1 < \cdots$ of the indices $m$. This
order reflects the fact that a transformation $Y = \Lambda_m X$
between two different frames cannot be undone (after all, it is not
globally injective). This is in stark contrast to the usual case, for
example, the relation between two different frames on the covering
space $\RR^2_1$, where both frames can be reached from each other by
an appropriate Lorentz transformation. We see that the ordering
(\ref{order1}) clearly points out the ``infinite-momentum'' frame at $m
\rightarrow \infty$ as a {\it preferred frame}. This preferred frame
is an immediate consequence of admitting the semigroup transformations
(\ref{coord5}) to be part of our physical reasoning on the spaces
$C^2_r$. Some consequences of these preferred frames for Kaluza-Klein
theories will be presented in section \ref{KaluzaKlein} below. First,
however, we must study the transformation behaviour of vectors and
covectors under the semigroup transformations (\ref{coord5}).
\section{Vectors and covectors on $C^2_r$ \label{Vectors}}
We can extend the reasoning leading to the scalar transformation law
(\ref{scalar3}) to vector and covector fields on $C^2_r$. Consider a
vector field $V = V^+ \d_+ + V^- \d_-$ on $C^2_r$; then $V$ must
transform under $\Lambda_m$-transformations as $V'(\Lambda_m X) =
\Lambda_m V(X)$, or
\begin{equation}
\label{vector3}
\left[ \begin{array}{c} V^{\prime +}(m x^+, \frac{1}{m} x^-) \\[5pt]
V^{\prime -}(m x^+, \frac{1}{m} x^-) \end{array} \right] = \left[
\begin{array}{c} m\, V^+(x^+, x^-) \\[5pt] \frac{1}{m}\, V^-(x^+,
x^-) \end{array} \right] \quad.
\end{equation}
As in the case of scalar fields, single-valuedness of the component
fields requires that the fields cannot depend on $x^+$.
Now we consider $1$-forms $\omega = \omega_+\, dx^+ + \omega_-\,
dx^-$, denoting components by row vectors $(\omega_+,
\omega_-)$. Their transformation behaviour is obtained from
\begin{equation}
\label{forms1}
\omega = \omega_+\, dx^+ + \omega_-\, dx^- = \omega'_+\, dx^{\prime
+} + \omega'_-\, dx^{\prime -} \quad,
\end{equation}
such that
\begin{equation}
\label{forms2}
\bigg(\, \omega'_+(\Lambda_m X)\,, \; \omega'_-(\Lambda_m X) \,
\bigg) = \bigg(\, \omega_+(X)\,,\; \omega_-(X)\, \bigg) \left(
\begin{mat}{2} & \frac{1}{m} && 0 \\ & 0 && m \end{mat} \right) =
\left( \omega_+, \omega_- \right)\, \Lambda^{-1}_m \quad.
\end{equation}
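In components, using $dx^{\prime +} = m\, dx^+$ and $dx^{\prime -} =
\frac{1}{m}\, dx^-$ from (\ref{faithsemi3}), this reads
\begin{equation}
\omega'_+\left(m x^+, \smallf{1}{m}\, x^-\right) = \frac{1}{m}\,
\omega_+(x^+, x^-) \quad, \qquad \omega'_-\left(m x^+, \smallf{1}{m}\,
x^-\right) = m\, \omega_-(x^+, x^-) \quad,
\end{equation}
mirroring (\ref{vector3}) with inverted scaling factors.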
Again, we conclude that the fields cannot depend on $x^+$. This latter
feature is clearly general, and applies to tensor fields of arbitrary
type $\binom{m}{n}$, since it arises from the transformation of the
arguments $X$ of the tensor fields, rather than the transformation of
the tensor components.
\section{Consequences for six-dimensional Kaluza-Klein theories}
\label{KaluzaKlein}
So far we have studied the consequences of admitting semigroup
transformations to connect different physical frames on the
cylindrical spacetimes $C^2_r$. Now we want to examine these
transformations within a greater framework: We envisage the case of a
product spacetime $M \times C^2_r$, where the external factor $M$ is a
standard 3+1-dimensional Minkowski spacetime with metric $\eta$, while
the internal factor is our cylindrical space $C^2_r$ with metric
$h_{ab} = \diag(-1,1)$. We use coordinates $(x^{\mu}, x^a) =
x^A$ on the product manifold such that $x^{\mu}, \mu = 0, \ldots, 3$,
and $x^a, a=4,5$, are coordinates on the external Minkowski space and
the internal cylinder, respectively. In the absence of gauge fields,
the signature of the 6-dimensional metric is $(-1,1,1,1,-1,1)$. When
off-diagonal metric components are present, we denote the metric as
\begin{equation}
\label{met1}
\hat{g} = \left( \begin{array}{c|c} g_{\mu\nu} & A_{\mu a} \\[5pt]
\hline A_{a \nu} & h_{ab} \end{array} \right) \quad,
\end{equation}
or explicitly,
\begin{equation}
\label{met2}
\hat{g} = g_{\mu\nu}\, dx^{\mu} \otimes dx^{\nu} + A_{a\mu}\, \big(
dx^a \otimes dx^{\mu} + dx^{\mu} \otimes dx^a \big) + h_{ab}\, dx^a
\otimes dx^b \quad.
\end{equation}
Here, $a,b,\ldots$ run over indices $+$ and $-$ on the internal
dimensions, while Greek indices range over the external dimensions $0,
\ldots, 3$ on $M$. The submetric $g_{\mu\nu}(x^{\rho}, x^a)$ of the
external spacetime may turn out to depend on the internal coordinates
$x^a$, so all we may demand is that on the 3+1-dimensional submanifold
we are aware of, i.e. for $x^a = 0$, we have $g_{\mu\nu}(x^{\rho}, 0)
= \eta_{\mu\nu}$.
Let us now apply a semigroup transformation (\ref{coord5}) to
(\ref{met2}),
\begin{equation}
\label{met3}
\hat{g} = g_{\mu\nu}\, dx^{\mu} \otimes dx^{\nu} + A_{a\mu} \left(
\Lambda_m^{-1} \right)^a_{\ms b} \big( dy^b \otimes dx^{\mu} +
dx^{\mu} \otimes dy^b \big) + h_{ab}\, dy^a \otimes dy^b \quad,
\end{equation}
where $y^a = \left( \Lambda_m\right)^a_{\ms b}\, x^b$, and the
internal part of the metric remains invariant, since $\Lambda_m$
preserves the metric $h_{ab}$. Comparison with (\ref{forms2}) shows
that the off-diagonal elements $A_{a\mu}$ transform like covectors on
the internal manifold; we therefore conclude that they must be
independent of the $x^+$-coordinate on $C^2_r$. Hence,
\begin{equation}
\label{met4}
\begin{aligned}
A'_{+\mu}(y^-) & = \frac{1}{m}\, A_{+\mu}\left(\, m y^- \, \right)
\quad, \\
A'_{-\mu}(y^-) & = m\, A_{-\mu}\left(\,m y^- \, \right) \quad,
\end{aligned}
\end{equation}
where, for the sake of simplicity, we have temporarily suppressed the
dependence of the gauge potentials $A_{a\mu}$ on the spacetime
coordinates $x^{\mu}$. We now assume that the off-diagonal elements
satisfy
\begin{equation}
\label{met5}
\lim\limits_{x^- \rightarrow \pm \infty} A_{a\mu}(x^-) = 0
\quad.
\end{equation}
Then the same arguments given in (\ref{scalar5}, \ref{scalar7}) lead
to the conclusion that, in the ``infinite-momentum'' limit $m
\rightarrow \infty$, the $A_{+\mu}$-fields vanish, while the
$A_{-\mu}$-fields appear localized around the point $x^- =0$, but
acquire an infinite amplitude. This latter feature is less
catastrophic than it might appear, though: There is no inherent length
scale on the internal manifold which must be preserved, and so nothing
can stop us from applying a conformal rescaling
\begin{equation}
\label{met6}
\left( \begin{mat}{1} &y^+ \\ &y^- \end{mat} \right) \rightarrow
\left( \begin{mat}{1} &z^+ \\ &z^- \end{mat} \right) = \left(
\begin{mat}{1} &y^+ \\ e^{\beta}\; &y^- \end{mat} \right) \quad,
\end{equation}
which is to accompany the limit $m \rightarrow \infty$ in such a way
as to compensate for the factor of $m$ in front of
$A_{-\mu}$. Explicitly, the metric (\ref{met3}) in the coordinates
(\ref{met6}) becomes
\begin{equation}
\label{met7}
\begin{aligned}
\hat{g} & = \eta_{\mu\nu}\, dx^{\mu} \otimes dx^{\nu} + \\[8pt]
& + A_{+\mu} \; \frac{1}{m}\; \big( dz^+ \otimes dx^{\mu} + dx^{\mu}
\otimes dz^+ \big) + \\[8pt]
& + e^{-\beta}\; A_{-\mu}\; m\; \big( dz^- \otimes dx^{\mu} +
dx^{\mu} \otimes dz^- \big) + \\[8pt]
& + e^{-\beta}\, h_{ab}\, dz^a \otimes dz^b \quad.
\end{aligned}
\end{equation}
Hence, if we perform the limits $\beta \rightarrow \infty$ and $m
\rightarrow \infty$ simultaneously by setting $e^{\beta} = m$, then
\begin{equation}
\label{met8}
\hat{g}\; \xrightarrow{\quad m\rightarrow \infty \quad}\;
\eta_{\mu\nu}\, dx^{\mu} \otimes dx^{\nu} + A_{-\mu}(x^{\nu}, z^-)\,
\big( dz^- \otimes dx^{\mu} + dx^{\mu} \otimes dz^- \big) + 0 \cdot
dz^- \otimes dz^- \quad.
\end{equation}
Eq. (\ref{met8}) is a very interesting result: As seen from the
preferred, infinite-momentum, frame, the field content of the original
metric (\ref{met1}) is that of a Kaluza-Klein theory defined on a flat
$(3+1)$-dimensional external Minkowski space times a one-dimensional
factor (the $z^-$-direction) along which the metric is zero, while the
$z^+$-direction has completely vanished out of sight. In a more
technical language, we obtain an effective $5$-dimensional {\it
non-compactified} \cite{OverduinWesson1997} Kaluza-Klein theory. The
associated metric for $z^- = 0$ is
\begin{equation}
\label{limit1}
\hat{g} = \left( \begin{array}{c|c} \eta_{\mu\nu} & A_{\mu -} \\[5pt]
\hline A_{- \nu} & 0 \end{array} \right) \quad.
\end{equation}
In contrast to previous Kaluza-Klein theories, the metric
(\ref{limit1}) is {\it degenerate} in the absence of fields;
furthermore, it may be expected that the signature of the overall
metric $\hat{g}$, now including gauge potentials, will depend on the
particular field configuration represented by the $A_{-\mu}$: The
characteristic equation of the metric (\ref{met1}) is
\begin{equation}
\label{charac1}
\left( \lambda -1\right)^2\, \Big\{ \lambda^3 - \lambda\, \big(
\v{A}^2 + A_0^2 + 1 \big) - \v{A}^2 + A_0^2 \Big\} = 0 \quad,
\end{equation}
where we have abbreviated $A_{\mu} \equiv A_{-\mu}$. Two eigenvalues
are always equal to one. If the vector potential satisfies $A_{\mu}
A^{\mu} = 0$ then a zero eigenvalue exists, and the metric is
degenerate even in the presence of fields $A_{\mu}$. We have checked
numerically that both cases $\v{A}^2= 0$ and $A_0^2= 0$ can produce
negative, vanishing and positive eigenvalues.
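This numerical check is easy to reproduce. The following sketch (plain
Python; the sample potential $A_\mu$ is an arbitrary illustrative
choice, not a distinguished field configuration) assembles the limit
metric of the form (\ref{limit1}) and verifies the characteristic
polynomial (\ref{charac1}) at a handful of test values of $\lambda$:

```python
def det(M):
    # Laplace expansion along the first row (adequate for a 5x5 matrix).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hat_g(A):
    # Bordered limit metric: eta = diag(-1, 1, 1, 1), the sample gauge
    # potential A = (A_0, ..., A_3) in the last row/column, zero corner.
    g = [[0.0] * 5 for _ in range(5)]
    for i in range(4):
        g[i][i] = -1.0 if i == 0 else 1.0
        g[i][4] = g[4][i] = A[i]
    return g

def claimed_poly(A, lam):
    # (lam-1)^2 { lam^3 - lam(|Avec|^2 + A0^2 + 1) - |Avec|^2 + A0^2 }
    a0sq = A[0] ** 2
    avecsq = sum(a * a for a in A[1:])
    return (lam - 1.0) ** 2 * (lam ** 3 - lam * (avecsq + a0sq + 1.0)
                               - avecsq + a0sq)

A = [0.7, 0.3, -0.5, 0.2]      # arbitrary illustrative values
for lam in (-2.0, -1.0, 0.0, 0.5, 1.5, 3.0):
    shifted = [[hat_g(A)[i][j] - (lam if i == j else 0.0)
                for j in range(5)] for i in range(5)]
    # det(g_hat - lam*1) equals minus the monic polynomial above
    print(lam, abs(det(shifted) + claimed_poly(A, lam)) < 1e-9)
```

Each residual vanishes up to rounding; setting $A_\mu = 0$ reproduces
the degenerate field-free case with eigenvalues $(-1,1,1,1,0)$.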
The emergence of a degenerate metric clearly goes beyond the framework
of standard Kaluza-Klein theories, where the non-degeneracy of the
overall metric is usually taken for granted. However, we are inclined
to take the result (\ref{met8}) seriously if it can be shown to produce
meaningful physics. To this end we have to contemplate how equations
of motion for the fields $A_{\mu} \equiv A_{-\mu}$ can be
obtained. The point here is that the usual technique of obtaining
dynamical equations for the fields from the Einstein equations for the
total metric $\hat{g}$ no longer works: On account of the fact that
the inverse $\hat{g}^{-1}$ does not exist in the field-free case, a
Levi-Civita connection on the total manifold is not well-defined,
since the associated connection coefficients (Christoffel symbols)
involve the inverse of the metric. It might be possible to produce
equations of motion for the fields $A_{\mu}$ by going back to the
original field multiplet $A_{a\mu}$, before taking the limit of the
infinite-momentum frame on the internal space; computing the Ricci
tensor, and then trying to obtain a meaningful limit $m \rightarrow
\infty$ for the theory even though the metric becomes degenerate in
this limit. This possibility will be explored elsewhere.
\section{Summary \label{Summary}}
The concept of the extended normalizer of a group of isometries leads
to the possibility of semigroup extensions of isometry groups of
compactified spaces. For flat covering spaces which are compactified
over lattices, semigroup extensions become possible when the lattice
contains lightlike vectors. The simplest example is provided by the
family of two-dimensional cylindrical spacetimes with Lorentzian
signature compactified along a lightlike direction; the members of
this family are all isometric to each other. The semigroup elements
acting on these cylinders can be given a natural ordering, which in
turn suggests the existence of preferred coordinate frames, the latter
consisting of infinite-momentum frames related to the canonical chart
by extreme Lorentz transformations. Fields as viewed from the
preferred frame acquire extreme values; in particular, some of the
off-diagonal components of the total metric, regarded as gauge
potential for a field theory on the external Minkowski factor, may
vanish, leaving a field multiplet which is reduced in size and which
no longer depends on one of the coordinates on the internal space. The
infinite amplitude exhibited by the surviving fields can be removed by
conformal rescaling of one of the lightlike directions. The effective
theory so obtained is a Kaluza-Klein theory which is reduced by one
dimension, and has a smaller field content, but which is defined on a
space with degenerate metric. The latter feature is responsible for
the fact that dynamical equations for the fields $A_{\mu}$ cannot be
obtained from the Ricci tensor of the overall metric.
\acknowledgements{Hanno Hammer acknowledges support from EPSRC
grant~GR/86300/01.}
Wireless Sensor Networks (WSN) have recently expanded to support diverse
applications in various and ubiquitous scenarios, especially in the context of
Machine-to-Machine (M2M) networks~\cite{exalted_project}.
Reducing energy consumption is still the main design goal, along with providing
sufficient performance support for target applications.
Medium Access Control (MAC) methods play the key role in reducing energy
consumption~\cite{langendoen08medium} because of the part taken by the radio in
the overall energy budget.
Thus, the main goal consists in designing an access method that reduces the
effects of both \emph{idle listening}, during which a device consumes energy
while waiting for a possible transmission, and \emph{overhearing}, when it
receives a frame sent to another device~\cite{langendoen08medium}.
To save energy, devices aim at achieving low duty cycles: they alternate long
sleeping periods (radio switched off) and short active ones (radio switched
on).
As a result, the challenge of MAC design is to synchronize the instants of the
receiver wake-up with possible transmissions of some devices so that the
network achieves a very low duty cycle.
The existing MAC methods basically use two approaches.
The first one synchronizes devices on a common sleep/wake-up schedule by
exchanging synchronization messages (SMAC~\cite{ye02energy},
TMAC~\cite{vandam03adaptive}) or defines a synchronized network wide TDMA
structure (LMAC~\cite{vanhoesel04lightweight}, D-MAC~\cite{dmac.lu.2004},
TRAMA~\cite{rajendran05energy}).
With the second approach, each device transmits before each data frame a
\emph{preamble} long enough to ensure that intended receivers wake up to catch
its frame (Aloha with Preamble Sampling~\cite{elhoiydi02aloha}, Cycled
Receiver~\cite{lin04power}, LPL (Low Power Listening) in
B-MAC~\cite{polastre04versatile}, B-MAC+~\cite{avvenuti.bmacplus.2006},
CSMA-MPS~\cite{mahlknecht04csma} aka X-MAC~\cite{buettner06xmac},
BOX-MAC~\cite{kuntz2011auto}, and DA-MAC~\cite{corb11damac}).
Both approaches converge to the same scheme, called \emph{synchronous preamble
sampling}, that uses very short preambles and requires tight synchronization
between devices (WiseMAC~\cite{enz04wisenet}, Scheduled Channel Polling
(SCP)~\cite{ye06ultra}).
Thanks to its lack of explicit synchronization, the second approach based on
\textit{preamble sampling} appears to be more easily applicable, more scalable,
and less energy demanding than the first synchronous approach.
Even if methods based on \textit{preamble sampling} are collision prone, they
have attracted great research interest, so that in recent years many protocols
have been published.
In a companion paper, we have proposed LA-MAC, a Low-Latency
Asynchronous MAC protocol~\cite{corb11lamac} based on preamble sampling and
designed for efficient adaptation of device behaviour to varying network
conditions.
In this paper, we analytically and numerically compare
B-MAC~\cite{polastre04versatile}, X-MAC~\cite{buettner06xmac}, and LA-MAC in
terms of energy consumption.
The novelty of our analysis lies in how we relate
the expected energy consumption to traffic load.
In prior energy analyses, authors based the expected energy consumption on the
average Traffic Generation Rate (TGR) of devices~\cite{ye06ultra} as well as on
the probability of receiving a packet in a given interval~\cite{buettner06xmac}.
In contrast to these approaches, which only focus on the consumption of a
``transmitter-receiver'' couple, we rather consider the global energy cost of
a group of neighbour contending devices.
Our analysis includes the cost of all radio operations involved in
the transmission of data messages, namely the cost of transmitting, receiving,
idle listening, and overhearing.
The motivation for our approach comes from the fact that in complex, dense, and
multi-hop networks, traffic distribution is not uniformly spread over the
network.
Thus, the expected energy consumption depends on traffic pattern, \textit{e.g.}
\textit{convergecast}, \textit{broadcast}, or \textit{multicast}, because
instantaneous traffic load may differ over the network.
In our approach, we estimate the expected energy consumption that depends on the
instantaneous traffic load in a given localized area.
As a result, our analysis estimates the energy consumption independently of
the traffic pattern.
\section{Background}
\label{sec_preliminaries}
We propose to evaluate the expected energy consumption of a group of
sensor nodes under
three different preamble sampling MAC protocols:
B-MAC, X-MAC and LA-MAC.
In complex, dense, and multi-hop networks, the instantaneous traffic distribution
over the network is not uniform.
For example, in the case of networks with the \textit{convergecast} traffic
pattern (all messages must be delivered to one sink), the traffic load is
higher at nodes that are closer to the sink in terms of the number of hops.
Due to this \textit{funnelling effect}~\cite{wan2005siphon}, devices close to
the sink exhaust their energy much faster than the others.
The evaluation of the expected energy consumption in this case is difficult and
the energy analyses published in the literature often base the expected energy
consumption of a given protocol on the traffic generation rate of the
network~\cite{ye06ultra}.
In our opinion, this approach does not fully reflect the complexity of the
problem, so we propose to analyze the expected consumption with
respect to the number of messages that are buffered in a given geographical
area.
This approach can simulate different congestion situations by varying the
instantaneous size of the buffer.
In our analysis, we consider a ``star'' network composed of a single receiving
device (\textit{sink}) and a group of $N$ devices that may have
data to send.
All devices are within 1-hop radio coverage of each other.
We assume that all transmitting devices share a global message buffer in which
$B$ denotes the number of queued messages; $B$ is thus related to network
congestion.
Among all $N$ devices, $N_s$ of them have at least one packet
to send and are called \textit{active} devices.
The remaining devices have empty buffers and do not participate in the
contention, nevertheless, they are prone to the \textit{overhearing effect}.
Thus, there are $N_o=N-N_s$ \textit{overhearers}.
According to the global buffer state $B$, there are several combinations of how
to distribute $B$ packets among $N$ sending devices: depending on the
number of packets inside the local buffers of active devices, $N_s$ and $N_o$ may
vary for each combination.
For instance, there can be $B$ active devices with each one packet to send or
less than $B$ active devices with some of them having more than one buffered
packet.
In the remainder, we explicitly separate the energy cost due to transmission
$E_t$, reception $E_r$, polling (listening for any radio activity in the
channel) $E_l$, and sleeping $E_s$.
$E_o$ is the overall energy consumption of all overhearers.
The overall expected energy consumption $E$ is the sum of all these energies.
The power consumption of respective radio states is $P_t$, $P_r$, $P_l$, and
$P_s$ for transmission, reception, channel polling, and sleeping.
The power depends on a specific radio device.
We distinguish the polling state from the reception state.
When a node is performing channel polling, it listens to any channel for
activity---to be detected, a radio transmission must start after the beginning
of channel polling.
Once a radio activity is detected, the device immediately switches its radio
state from polling to receiving.
Otherwise, the device that is polling the channel cannot change its radio state.
The duration of a message over the air is $t_d$.
The time between two wake-up instants is $t_f=t_l+t_s$, where $t_l$ and $t_s$ are respectively the
channel polling duration and the sleep period. These values are related to the
duty cycle.
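To fix ideas, this bookkeeping can be sketched in a few lines of code;
the power and timing figures below are hypothetical placeholders
(loosely typical of low-power radios) and are not the values used in
our evaluation:

```python
# Hypothetical radio powers in mW (placeholders, not measured values).
P_t, P_r, P_l, P_s = 60.0, 45.0, 45.0, 0.003

def duty_cycle(t_l, t_s):
    # Fraction of a wake-up period t_f = t_l + t_s spent polling.
    return t_l / (t_l + t_s)

def idle_period_energy(t_l, t_s):
    # Energy (mW * ms) burned over one period when no traffic is
    # present: channel polling followed by sleeping.
    return P_l * t_l + P_s * t_s

t_l, t_s = 5.0, 495.0          # ms, i.e. a 1 % duty cycle
print(duty_cycle(t_l, t_s))    # 0.01
print(idle_period_energy(t_l, t_s))
```

The idle cost is dominated by the polling term, which is why lowering
the duty cycle is the main lever for energy savings.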
\section{Preamble Sampling MAC Protocols}
\label{sec_MAC_protocols}
In this section, we provide the details of analyzed preamble sampling
protocols.
Figure~\ref{fig:operation_compare} presents the operation of all protocols.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.7\linewidth,clip=true]{pdf/MAC_compare_single_frame}
\end{center}
\caption{Comparison of analyzed MAC methods.}
\label{fig:operation_compare}
\end{figure}
\subsection{B-MAC}
In B-MAC \cite{polastre04versatile}, all nodes
periodically repeat the same cycle during their lifetime: wake up, listen for
the channel, and then go back to sleep.
When an active node wants to transmit a
data frame, it first transmits a preamble long enough to cover the entire sleep
period of a potential receiver.
After the preamble the sender immediately
transmits the data frame.
When the receiver wakes up and detects the preamble,
it switches its radio to the receiving mode and listens until the complete
reception of the data frame.
Even if the lack of synchronization results in low
overhead, the method presents several drawbacks due to the length of the
preamble: high energy consumption of transmitters, high latency, and limited
throughput.
In the remainder, we define $t_p^B$, the duration of the B-MAC preamble.
\subsection{X-MAC}
In CSMA-MPS \cite{mahlknecht04csma} and X-MAC~\cite{buettner06xmac}, nodes
periodically alternate sleep and polling periods.
After the end of a polling period, each active node transmits a series
of short preambles spaced with gaps.
During a gap, the transmitter switches to the idle mode and expects to receive
an ACK from the receiver.
When a receiver wakes up and receives a preamble, it sends an ACK back to the
transmitter to stop the series of preambles, which reduces the energy spent by
the transmitter.
After the reception of the ACK, the transmitter sends a data
frame and goes back to sleep.
After data reception, the receiver remains awake for possible transmission of a
single additional data frame.
If another active node receives a preamble destined to the same
receiver it
wishes to send to, it stops transmitting to listen to the channel for an
incoming
ACK.
When it overhears the ACK, it sets a random back-off timer at which
it
will send its data frame.
The transmission of a data frame after the back-off is not preceded by any
preamble.
Note however that nodes that periodically wake up to sample the channel
need to keep listening for a duration that is larger than the gap between short
preambles to be able to decide whether there is an ongoing transmission or not.
The duration of each short preamble is $t_p^X$ and
the ACK duration is $t_a^X$.
\subsection{LA-MAC}
LA-MAC~\cite{corb11lamac} is a scalable protocol that aims at achieving
low latency and limited energy consumption by
building on three main ideas: efficient
forwarding based on proper scheduling of children nodes that want to transmit,
transmissions of frame bursts, and traffic differentiation.
The method periodically adapts local organization of channel access depending on
network
dynamics such as the number of active users and the instantaneous traffic load.
In LA-MAC, nodes periodically alternate long sleep periods and short polling
phases.
During polling phases each receiver can collect several requests for
transmissions
that are included inside short preambles.
After the end of its polling period, the node that has collected some preambles
processes the requests, compares the priority of requests with the locally
backlogged messages and broadcasts a SCHEDULE message.
The goal of the SCHEDULE message is to temporarily organize the transmission of
neighbor nodes to avoid collisions.
If the node that ends its polling has not detected any channel activity and has
some
backlogged data to send, it starts sending a sequence of
short unicast
preambles containing the information about the burst to send.
As in B-MAC and X-MAC, the strobed sequence is long enough to wake up the
receiver.
When a receiver wakes up and receives a preamble, it clears it with an ACK frame
containing
the instant of a \textit{rendezvous} at which it will broadcast the
SCHEDULE frame.
If a second active node overhears a preamble destined to the same destination
it wants to send to, it waits
for an incoming ACK.
After ACK reception, a sender goes to sleep and wakes up at the instant
of the rendezvous.
In Figure~\ref{fig:operation_compare}, we see that after the transmission of an
ACK to Tx1, Rx device is again
ready for receiving preambles from other devices.
So, Tx2 transmits a preamble and receives an ACK with the same rendezvous.
Preamble clearing continues until the end of the channel polling interval of
the receiver.
\section{Energy Analysis}
\label{sec:energy_analysis}
LA-MAC provides its best performance in contexts of high density and traffic congestion.
To show the gain of LA-MAC, we provide an energy analysis that compares the expected energy consumption of all considered protocols.
We focus on evaluating expected energy consumption of a group of nodes when the number of messages to transmit within the group is known.
In our analysis, we consider one receiver and a group of devices that can have some messages to send as well as empty buffers.
Our analysis takes into account the fact that in a complex sensor network, traffic congestion is not uniformly distributed over the network.
Indeed, elements such as the MAC protocol, the node density, and the traffic model have a different impact in different areas of the network.
For this reason, instead of focusing on the simple Traffic Generation Rate (TGR)~\cite{ye06ultra} or on the probability of receiving a packet in a given interval~\cite{buettner06xmac}, we base our analysis on the number of messages that a group of nodes must send to a reference receiver.
With this approach, we can represent the different congestion situations that arise in a multi-hop network with a convergecast traffic pattern, in which the traffic distribution is not uniform with respect to the proximity to the sink (in terms of number of hops): the closer to the sink, the higher the average traffic.
We provide an evaluation that shows energy consumption with respect to a group of nodes.
We assume that a group of nodes shares a global message buffer; depending on the number of messages in the buffer, there may be zero, one, two, or multiple senders.
The nodes that do not have any message to send are called \textit{others} or \textit{overhearers}: they do not participate in the contention, but are prone to the \textit{overhearing problem}, one of the major causes of energy waste in wireless sensor networks.
In the analysis, we separate the energy cost due to transmission $E_t$, reception $E_r$, polling (listening for any activity in the channel) $E_l$, and sleeping $E_s$.
The consumption of the other nodes that overhear the channel is represented by $E_o$.
The overall expected energy consumption $E$ is the sum of all these energies.
The global buffer state of the group of nodes is $B$.
The power consumption of the radio states is $P_t$ for transmission, $P_r$ for reception, $P_l$ for channel polling, and $P_s$ for sleeping.
We assume that when a device is polling the channel, it listens to the air interface for some activity; if a message is already being sent when a device starts polling the channel, the device does not change its radio state.
Otherwise, if a device that is polling the channel hears the beginning of a new message, it switches its radio to the receiving mode, increasing the energy consumption.
We consider that the group is composed of $N$ devices and one receiver.
Depending on the state of the buffers, the number of senders $N_s$ varies, as does the number of overhearing nodes $N_o = N - N_s$.
We assume that all devices are within radio range of each other.
The duration of a message over the air is $t_d$, and the duration of a frame is $t_f = t_l + t_s$.
\subsection{Global buffer is empty ($B=0$)}
If all the buffers are empty, all protocols behave in the same way: nodes periodically wake up, poll the channel, and then go back to sleep because of the absence of channel activity. The consumption only depends on the time spent polling and sleeping.
\begin{equation}
E^{ALL}(0) = (N+1) \cdot ( t_l \cdot P_l+ t_s \cdot P_s)
\end{equation}
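As a sanity check, the baseline term above can be evaluated numerically. The sketch below is illustrative only: the power and timing values are assumptions in the range of typical low-power radios, not parameters derived in this analysis.

```python
# Numerical sketch of E^ALL(0): with empty buffers, each of the N devices
# and the receiver only polls the channel and sleeps during one frame.
# All parameter values are assumed for illustration only.

P_l, P_s = 0.060, 0.0001     # polling / sleeping power [W] (assumed)
t_l, t_s = 0.010, 0.990      # polling / sleeping durations [s] (assumed)

def e_all_empty(n):
    """E^ALL(0) = (N + 1) * (t_l * P_l + t_s * P_s)."""
    return (n + 1) * (t_l * P_l + t_s * P_s)

print(e_all_empty(10))   # energy [J] for N = 10 senders plus the receiver
```

With these values, the per-frame baseline is dominated by the polling term $t_l \cdot P_l$, since $P_s \ll P_l$.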
\subsection{Global buffer contains one message ($B=1$)}
If there is one message to send, only two devices are active: the one that has a message in its buffer ($N_s = 1$) and the destination.
The number of overhearers is $N_o = N - 1$.
\textit{ }
\noindent\textbf{B-MAC ($B=1$)}
When the message sender wakes up, it polls the channel and then starts sending one long preamble that anticipates the data transmission.
Even if the data is unicast, the destination field is not included in the preamble; therefore, all nodes need to hear both the preamble and the header of the following data frame in order to know the identity of the intended receiver.
Since devices are not synchronized, each device hears on average half of the preamble.
The cost of transmission is the cost of an entire preamble plus the cost of transmitting the data.
\begin{equation}
E_t^{B}(1) = ( t_p^B + t_d ) \cdot P_t
\end{equation}
The cost of reception is the cost of receiving half of the preamble plus the cost of receiving the data.
In packetized radios, a long preamble is obtained by a sequence of short preambles sent one right after the other.
For this reason, if a device B wakes up and polls the channel while a device A is sending a long preamble, the radio of device B remains in the polling state for a short time until the beginning of the next short packet of the long preamble; afterwards, the radio switches to the receiving mode, consuming more energy.
When the receiver (which is not synchronized with the sender) wakes up, it polls the channel for some activity.
Because of the lack of synchronization, it may happen that at the time the receiver wakes up, the sender is performing channel polling.
The probability of this event is $p=t_l/t_f$: if the receiver wakes up during this period, it performs half of its polling and then listens to the entire preamble.
Otherwise, if the receiver wakes up after the end of the polling of the sender, it listens to half of the preamble (probability $1-p$).
In the remainder of this document, we say that with probability $p$ the transmitter and the receiver are quasi-synchronized.
\begin{equation}
E_r^{B}(1) = (p \cdot t_p^B + (1-p) \cdot \frac{t_p^B}{2} + t_d ) \cdot P_r
\end{equation}
In addition to the entire polling period of the sender, we must consider the half polling period performed by the receiver with probability $p$.
\begin{equation}
E_l^{B}(1) = (1 + \frac{p}{2}) \cdot t_l \cdot P_l
\end{equation}
The cost of the sleeping activity for the transmitter/receiver couple depends on the time that they do not spend polling, receiving, or transmitting messages.
\begin{equation}
E_s^{B}(1) = (2 \cdot t_f - ( \frac{t_p^B}{2} \cdot (p+3) + 2 \cdot t_d + t_l \cdot (1 + \frac{p}{2}) ))\cdot P_s
\end{equation}
With B-MAC, there is no difference in terms of energy consumption between overhearing and receiving a message. Therefore, the cost of overhearing is:
\begin{equation}
E_o^{B}(1)= N_o\cdot (E_r^{B}(1) + p \cdot \frac{t_l}{2} \cdot P_l + (t_f - (p \cdot (\frac{t_l}{2} + t_p^B) + (1-p) \cdot \frac{t_p^B}{2} + t_d )) \cdot P_s )
\end{equation}
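The four B-MAC terms for $B=1$ can be transcribed directly into a short numerical sketch; the radio powers and durations below are assumed, illustrative values.

```python
# Sketch of the B-MAC energy terms for B = 1 (one sender, one receiver).
# Radio powers [W] and durations [s] are assumed for illustration.

P_t, P_r, P_l, P_s = 0.050, 0.060, 0.060, 0.0001
t_l, t_s = 0.010, 0.990
t_f = t_l + t_s              # frame duration
t_p_B = t_f                  # the B-MAC preamble covers a whole frame
t_d = 0.010                  # data message duration
p = t_l / t_f                # quasi-synchronization probability

E_t = (t_p_B + t_d) * P_t
E_r = (p * t_p_B + (1 - p) * t_p_B / 2 + t_d) * P_r
E_l = (1 + p / 2) * t_l * P_l
E_s = (2 * t_f - (t_p_B / 2 * (p + 3) + 2 * t_d + t_l * (1 + p / 2))) * P_s
print(E_t, E_r, E_l, E_s)
```

The dominant term is the transmitter's full-frame preamble, which is the main weakness of B-MAC discussed above.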
\textit{ }
\noindent\textbf{X-MAC ($B=1$)}
When the sender wakes up, it polls the channel and starts sending a sequence of unicast preambles separated by a gap for \textit{early} ACK reception.
When the intended receiver wakes up and polls the channel, it receives a preamble and clears it.
Then, the sender transmits its message.
After data reception, the receiver remains in polling state for an extra backoff time $t_b$ that is used to receive other possible messages~\cite{buettner06xmac} coming from other senders.
All devices that have no message to send, overhear channel activity and go to sleep as soon as they receive any unicast message (preamble, ACK or data).
The expected number of preambles needed to \textit{wake up} the receiver is $\gamma^X$.
The average number of preambles depends on the duration of the polling period, of the preamble and ACK messages, as well as on the duration of an entire frame~\cite{buettner06xmac}.
$\gamma^X$ is the inverse of the \textit{collision} probability of one preamble with the polling period of the receiver.
Indeed, if the sender/receiver couple is not synchronized, the sender cannot know when the receiver will wake up, so each preamble has the same probability of being heard by the receiver.
Each sent preamble is a trial of a geometric distribution: before a preamble coincides with the polling period, there are on average $(\gamma^X-1)$ preambles whose energy is wasted.
\begin{equation}
\gamma^X = \frac{t_f}{t_l - t_a^X - t_p^X}
\end{equation}
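In other words, $\gamma^X$ is the mean of a geometric distribution whose success probability is the chance that one preamble falls inside the receiver's polling period. A minimal sketch with assumed timings:

```python
# Expected number of X-MAC preambles gamma^X needed to hit the
# receiver's polling period (mean of a geometric distribution).
# Timings [s] are assumed for illustration.

t_f = 1.0                     # frame duration
t_l = 0.010                   # polling duration
t_p_X, t_a_X = 0.001, 0.001   # short preamble and ACK durations

def gamma_x():
    hit = (t_l - t_a_X - t_p_X) / t_f   # per-preamble success probability
    return 1.0 / hit

print(gamma_x())   # expected strobe length before the receiver answers
```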
The total amount of energy due to the transmission of one message depends on the average number of preambles that must be sent ($\gamma^X$) and on the cost of the \textit{early} ACK reception.
Since the wakeup schedules of the nodes are not synchronous, it may happen that when the receiver wakes up, the sender is performing channel polling (the transmitter and the receiver are quasi-synchronized with probability $p$).
In the case of quasi-synchronization, the receiver performs on average half of its polling period and is afterwards able to clear the very first preamble of the strobe.
In this case, the cost of transmission only includes the transmission of one preamble and the cost of receiving the ACK.
Otherwise, if the nodes are not synchronous (the receiver wakes up after the end of the polling of the sender), the sender wastes energy for the transmission of $\gamma^X$ preambles and the corresponding waits for an ACK (we consider waiting for an ACK as a polling state) before the receiver can hear one of them.
The energy consumption of all polling activities is reported separately in $E_l^{X}(1)$.
Transmission cost is:
\begin{equation}
E_t^{X}(1) = (1-p)\cdot \gamma^X \cdot t_p^X \cdot P_t + p \cdot t_p^X \cdot P_t + t_a^X \cdot P_r + t_d \cdot P_t
\end{equation}
\begin{equation}
= ((1-p)\cdot \gamma^X + p ) \cdot t_p^X \cdot P_t + t_a^X \cdot P_r + t_d \cdot P_t
\end{equation}
The cost of the receiving activity is represented by the transmission of one ACK and the reception of both data and preamble.
\begin{equation}
E_r^{X}(1) = (t_d + t_p^X ) \cdot P_r + t_a^X \cdot P_t
\end{equation}
With probability $1-p$ (no synchronization), the receiver wakes up while the sender is already transmitting a preamble (or waiting for an \textit{early} ACK).
Otherwise (with probability $p$), the receiver performs on average only half of its polling period.
The reason is that if the active couple is quasi-synchronized, they simultaneously perform channel sensing before the sender starts the preamble transmission.
As far as the sender is concerned, we must consider both its entire polling period and the time it waits for the \textit{early} ACK without any answer (an event that happens with probability $1-p$).
\begin{equation}
E_l^{X}(1) = ( ( t_l + (1-p)\cdot (\gamma^X - 1) \cdot t_a^X) + ((1-p)\cdot \frac{t_p^X + t_a^X}{2} + p \cdot \frac{t_l}{2}) + t_b)\cdot P_l
\end{equation}
\begin{equation}
= ((1-p)\cdot (\frac{t_p^X + t_a^X}{2} +(\gamma^X - 1) \cdot t_a^X )+ (\frac{p}{2}+ 1)\cdot t_l + t_b)\cdot P_l
\end{equation}
The sleep time of the active couple is twice the frame duration minus the time that both devices are active.
\begin{equation}
E_s^{X}(1) = (2\cdot t_f - (t_l + ((1-p)\cdot \gamma^X + p) \cdot ( t_p^X + t_a^X) + t_d ) - ( p \cdot \frac{t_l}{2}+t_p^X + t_a^X + (1-p)\cdot\frac{t_p^X + t_a^X}{2} + t_d + t_b))\cdot P_s
\end{equation}
\begin{equation}
= (2\cdot t_f - 2 \cdot t_d- p \cdot \frac{t_l}{2}-t_p^X - t_a^X - (1-p)\cdot \frac{t_p^X + t_a^X}{2} - t_l - ((1-p)\cdot \gamma^X + p) \cdot ( t_p^X + t_a^X) - t_b)\cdot P_s
\end{equation}
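The four X-MAC terms for the active couple can be transcribed as follows; all radio parameters are assumed, illustrative values, and $t_b$ is the receiver's extra back-off defined above.

```python
# Sketch of the X-MAC energy terms for B = 1 (active couple only).
# Radio powers [W] and durations [s] are illustrative assumptions.

P_t, P_r, P_l, P_s = 0.050, 0.060, 0.060, 0.0001
t_l, t_s = 0.010, 0.990
t_f = t_l + t_s
t_p, t_a, t_d, t_b = 0.001, 0.001, 0.010, 0.010  # t_b: receiver back-off
p = t_l / t_f                      # quasi-synchronization probability
g = t_f / (t_l - t_a - t_p)        # gamma^X

E_t = ((1 - p) * g + p) * t_p * P_t + t_a * P_r + t_d * P_t
E_r = (t_d + t_p) * P_r + t_a * P_t
E_l = ((1 - p) * ((t_p + t_a) / 2 + (g - 1) * t_a)
       + (p / 2 + 1) * t_l + t_b) * P_l
E_s = (2 * t_f - 2 * t_d - p * t_l / 2 - t_p - t_a
       - (1 - p) * (t_p + t_a) / 2 - t_l
       - ((1 - p) * g + p) * (t_p + t_a) - t_b) * P_s
print(E_t, E_r, E_l, E_s)
```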
Like the other devices, the overhearers wake up at a random instant.
However, differently from the active agents, as soon as they overhear some activity they go back to sleep.
Therefore, their energy consumption depends on the probability that such nodes wake up while the channel is busy or not.
The probability that the channel is free at the wakeup instant depends on the polling duration, the buffer states, the number of senders, etc.
In Figure~\ref{fig:probabilitiesOthersXmac}, we present all the possible situations that can happen.
We take as the reference instant the time at which the transmitter wakes up (root of the tree).
With probability $p$, the receiver and the transmitter are quasi-synchronized; otherwise they are not synchronized (probability $1-p$).
With probability $p\cdot p$, both the receiver and a generic overhearer are quasi-synchronized with the transmitter; this is Case 1 in the tree.
\vspace{0.2cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/probabilitiesOthersXmac}
\end{center}
\caption{X-MAC. Tree of different wakeup cases.}
\label{fig:probabilitiesOthersXmac}
\end{figure}
\vspace{0.2cm}
\begin{itemize}
\item Case 1: Sender, receiver, and overhearer are quasi-synchronized. The overhearer senses a preamble that is not intended for it and then goes back to sleep.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA1case1}
\end{center}
\caption{Global buffer size $B=1$. Overhearing situations for Case 1. X-MAC protocol.}
\label{fig:xmacA1case1}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_1,o}^X = \frac{t_l}{2} \cdot P_l + t_p^X \cdot P_r + (t_f - \frac{t_l}{2} -t_p^X) \cdot P_s
\end{equation}
\item Cases 2, 3, 4: Sender and receiver are synchronized, but not the overhearer.
When the overhearer wakes up, it can overhear different messages (preamble, ACK, or data) as well as a clear channel.
The possible situations are summarized in Figure~\ref{fig:xmacA1case234}.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA1case234}
\end{center}
\caption{Global buffer size $B=1$. Overhearing situations for Cases 2, 3 and 4. X-MAC protocol.}
\label{fig:xmacA1case234}
\end{figure}
\vspace{1cm}
\begin{itemize}
\item Case 2:
If the overhearer wakes up during a preamble transmission, it overhears the following ACK and afterwards goes back to sleep.
The probability for the overhearer to wake up during a preamble is $p_a = t_p^X/t_f$.
\begin{equation}
E_{Case_2,o}^X = \frac{t_p^X}{2} \cdot P_l + t_a^X \cdot P_r+ (t_f - \frac{t_p^X}{2}- t_a^X) \cdot P_s
\end{equation}
\item Case 3:
If the overhearer wakes up during an ACK transmission, it overhears the following data message and afterwards goes back to sleep.
The probability for the overhearer to wake up during an ACK is $p_b = t_a^X/t_f$.
\begin{equation}
E_{Case_3,o}^X =\frac{t_a^X}{2} \cdot P_l + t_d \cdot P_r+ (t_f - \frac{t_a^X}{2}- t_d) \cdot P_s
\end{equation}
\item Case 4:
The overhearer either wakes up during the data transmission or after the end of it.
In both cases, when the overhearer polls the channel, it senses it as free, because the data transmission began while the overhearer was sleeping.
Therefore, the overhearer performs an entire polling period and goes back to sleep.
The probability of this event is $1-p_a-p_b$.
\begin{equation}
E_{Case_4,o}^X = t_l \cdot P_l+ (t_f - t_l) \cdot P_s
\end{equation}
\end{itemize}
\item Case 5
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA1case5}
\end{center}
\caption{Global buffer size $B=1$. Overhearing situations for Case 5. X-MAC protocol.}
\label{fig:xmacA1case5}
\end{figure}
\vspace{1cm}
Similarly to Case 1, if the overhearer is quasi-synchronized with the transmitter, it overhears the first preamble even if the receiver is still sleeping.
The energy cost is:
\begin{equation}
E_{Case_5,o}^X = E_{Case_1,o}^X
\end{equation}
\item Cases 6, 7, 8: If neither the receiver nor the overhearer is synchronized with the sender, it may happen that the receiver wakes up before the overhearer.
Therefore, similarly to Cases 2, 3, and 4, we have different situations.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA1case678}
\end{center}
\caption{Global buffer size $B=1$. Overhearing situations for Cases 6, 7 and 8. X-MAC protocol.}
\label{fig:xmacA1case678}
\end{figure}
\vspace{1cm}
Cases 6, 7, and 8 are respectively similar to Cases 2, 3, and 4:
\begin{equation}
E_{Case_6,o}^X = E_{Case_2,o}^X
\end{equation}
\begin{equation}
E_{Case_7,o}^X = E_{Case_3,o}^X
\end{equation}
\begin{equation}
E_{Case_8,o}^X = E_{Case_4,o}^X
\end{equation}
\item Case 9: If the overhearer wakes up before the intended receiver, it will receive a preamble and go back to sleep.
The cost in this case is:
\begin{equation}
E_{Case_9,o}^X = t_p^X \cdot P_r + \frac{t_p^X + t_a^X}{2} \cdot P_l+ (t_f - \frac{t_p^X + t_a^X}{2} -t_p^X) \cdot P_s
\end{equation}
\end{itemize}
The overall energy cost is the sum of the costs of each case weighted by the probability of the case to happen:
\begin{equation}
E_o^{X}(1) = N_o \cdot \sum\limits_{i=1}^{9} p_{Case_i}\cdot E_{Case_i,o}^X
\end{equation}
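The case-by-case bookkeeping above reduces to one weighted sum. The helper below is an illustrative sketch (not from the paper) that also makes the sanity check on the probability tree explicit: the case probabilities must cover all outcomes.

```python
# Generic helper for the overhearing cost: N_o times the sum of the
# per-case energies weighted by the case probabilities, which must
# form a complete probability tree (sum to 1).

def overhearing_energy(n_o, case_probs, case_energies):
    total_p = sum(case_probs)
    assert abs(total_p - 1.0) < 1e-9, "case probabilities must sum to 1"
    return n_o * sum(pi * ei for pi, ei in zip(case_probs, case_energies))

# Toy example with two cases only (values are arbitrary):
print(overhearing_energy(3, [0.25, 0.75], [2.0, 4.0]))   # -> 10.5
```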
\textit{ }
\noindent\textbf{LA-MAC ($B=1$)}
In the present analysis, we do not consider the adaptive wakeup schedule of senders presented in Section~\ref{sec_MAC_protocols}.
Therefore, wakeup schedules are assumed random.
Even if this is a worst case for LA-MAC, it helps to better compare it with the other protocols.
When the sender wakes up, it performs an entire channel polling and then sends preambles as in X-MAC.
However, differently from X-MAC, after the \textit{early} ACK reception, the sender goes back to sleep and waits for the Schedule message.
When the intended receiver receives one preamble, it clears it and completes its polling period in order to detect other possible preambles to clear.
Immediately after the end of the polling period, the receiver processes the requests and broadcasts the Schedule message.
In LA-MAC, overhearers go to sleep as soon as they receive any unicast message (preamble, ACK, or data) as well as the Schedule.
Due to the lack of synchronization, the expected number of preambles follows the X-MAC analysis with different sizes of the preamble $t_p^L$ and of the ACK $t_a^L$.
When the receiver wakes up, it polls the channel: with probability $p=t_l/t_f$, the sender and the receiver are quasi-synchronized, i.e., the sender is still polling the channel when the receiver wakes up.
In this case, the first preamble that is sent wakes up the receiver, so the sender immediately receives an \textit{early} ACK.
Otherwise, if the nodes are not synchronized (probability $1-p$), the sender wakes up its destination on average after $\gamma^L$ preambles.
$E_t^{L}(1)$ is similar to the cost of X-MAC plus the cost of receiving the Schedule.
\begin{equation}
E_t^{L}(1) = (1-p)\cdot \gamma^L \cdot t_p^L \cdot P_t + p\cdot t_p^L \cdot P_t + t_a^L \cdot P_r + t_d \cdot P_t + t_g \cdot P_r
\end{equation}
The cost of reception depends on the durations of the preamble, ACK, data, and Schedule messages.
\begin{equation}
E_r^{L}(1) = (t_p^L + t_d ) \cdot P_r + (t_a^L + t_g )\cdot P_t
\end{equation}
When the sender wakes up, it performs a full polling period before beginning the strobed preambles.
Moreover, the degree of synchronization between the sender and the receiver (called the \textit{active} nodes) also influences the consumption.
If the active nodes are not synchronized, the sender polls the channel $(\gamma^L- 1)$ times while waiting for the \textit{early} ACK.
Differently from X-MAC, the receiver completes its polling period even after clearing a preamble, so its radio remains in the polling state for the duration of a full polling period minus the time for preamble reception and ACK transmission.
\begin{equation}
E_l^{L}(1) = ((t_l + (1-p) \cdot (\gamma^L- 1) \cdot t_a^L ) + (t_l - t_p^L - t_a^L ))\cdot P_l
\end{equation}
When the active nodes are not transmitting, receiving, or polling the channel, they can sleep.
\begin{equation}
E_s^{L}(1) = ( 2\cdot t_f -( t_l + (1-p)\cdot \gamma^L \cdot t_p^L + p\cdot t_p^L + t_a^L + (1-p) \cdot (\gamma^L- 1) \cdot t_a^L + t_d + t_g) - (t_l + t_d + t_g )) \cdot P_s
\end{equation}
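As for X-MAC, the LA-MAC terms for the active couple can be transcribed numerically; all parameters are assumed, illustrative values, with $t_g$ denoting the Schedule duration.

```python
# Sketch of the LA-MAC energy terms for B = 1 (active couple only).
# Radio powers [W] and durations [s] are illustrative assumptions.

P_t, P_r, P_l, P_s = 0.050, 0.060, 0.060, 0.0001
t_l, t_s = 0.010, 0.990
t_f = t_l + t_s
t_p, t_a, t_d, t_g = 0.001, 0.001, 0.010, 0.002   # t_g: Schedule duration
p = t_l / t_f
g = t_f / (t_l - t_a - t_p)        # gamma^L, same form as gamma^X

E_t = ((1 - p) * g + p) * t_p * P_t + t_a * P_r + t_d * P_t + t_g * P_r
E_r = (t_p + t_d) * P_r + (t_a + t_g) * P_t
E_l = ((t_l + (1 - p) * (g - 1) * t_a) + (t_l - t_p - t_a)) * P_l
E_s = (2 * t_f
       - (t_l + (1 - p) * g * t_p + p * t_p + t_a
          + (1 - p) * (g - 1) * t_a + t_d + t_g)
       - (t_l + t_d + t_g)) * P_s
print(E_t, E_r, E_l, E_s)
```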
As in X-MAC, as soon as the overhearers receive some message they go back to sleep.
Therefore, their energy consumption depends on the probability that such nodes wake up while the channel is busy or not.
All the possible combinations of wakeup schedules with their probabilities are shown in Figure~\ref{fig:probabilitiesOthersLamac}.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/probabilitiesOthersLamac}
\caption{LA-MAC. Tree of different wakeup cases.}
\label{fig:probabilitiesOthersLamac}
\end{center}
\end{figure}
\vspace{1cm}
\begin{itemize}
\item Case 1: Sender, receiver, and overhearer are quasi-synchronized. The overhearer senses a preamble that is not intended for it and goes back to sleep.
The probability of this event is $p\cdot p$.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.4]{pdf/case1lamac}
\caption{LA-MAC. Possible wakeup instants of overhearers. Case 1.}
\end{center}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_1,o}^L =\frac{t_l}{2} \cdot P_l+ t_p^L \cdot P_r + (t_f - \frac{t_l}{2} -t_p^L) \cdot P_s
\end{equation}
\item Cases 2, 3, 4, 5: The receiver is synchronized with the sender, but the overhearer is not.
When the overhearer wakes up, it can receive different messages (preamble, ACK, Schedule, or data) as well as a clear channel.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.4]{pdf/case2345lamac}
\caption{LA-MAC. Possible wakeup instants of overhearers. Cases 2, 3, 4, 5.}
\end{center}
\end{figure}
\vspace{1cm}
\begin{itemize}
\item Case 2:
If the overhearer wakes up during a preamble transmission, it receives on average half of the preamble and overhears the following ACK. Afterwards, it goes back to sleep.
The probability of this event is $p\cdot (1-p) \cdot p_c$, where $p_c = t_p^L/t_f$ represents the event that the wakeup instant of the overhearer is slightly after the end of the polling of the sender.
\begin{equation}
E_{Case_2,o}^L = \frac{t_p^L}{2} \cdot P_l + t_a^L \cdot P_r + (t_f - \frac{t_p^L}{2}- t_a^L) \cdot P_s
\end{equation}
\item Case 3:
If the overhearer wakes up during an ACK transmission, it senses a silent period and overhears the following Schedule message. Afterwards, it goes back to sleep.
The probability of this event is $p\cdot (1-p) \cdot p_d$, where $p_d = t_a^L/t_f$ covers the event that the wakeup instant of the overhearer happens after the transmission of a preamble. $p_d$ neglects the time that elapses between the end of the ACK and the end of the channel polling of the receiver; in other words, it assumes that the Schedule message is sent immediately after the transmission of the ACK.
\begin{equation}
E_{Case_3,o}^L = \frac{t_a^L}{2} \cdot P_l + t_g \cdot P_r+ (t_f - \frac{t_a^L}{2}- t_g) \cdot P_s
\end{equation}
\item Case 4:
If the overhearer wakes up during the transmission of the Schedule, it hears the following data and then goes to sleep.
The probability of this event is $p\cdot (1-p) \cdot p_e$, where $p_e = t_g/t_f$; we assume that the wakeup instant of the overhearer falls on average in the middle of the Schedule transmission.
\begin{equation}
E_{Case_4,o}^L =\frac{t_g}{2}\cdot P_l+ t_d \cdot P_r + (t_f -\frac{t_g}{2} - t_d) \cdot P_s
\end{equation}
\item Case 5:
The overhearer either wakes up during the data transmission or senses a free channel because both the sender and the receiver are already sleeping.
Therefore, the overhearer performs an entire polling period and goes back to sleep.
The probability of this event is $p\cdot (1-p) \cdot (1-p_c-p_d-p_e)$.
\begin{equation}
E_{Case_5,o}^L = t_l \cdot P_l+ (t_f - t_l) \cdot P_s
\end{equation}
\end{itemize}
\item Case 6:
Similarly to Case 1, if the overhearer is quasi-synchronized with the sender (probability $(1-p)\cdot p$), the energy cost is:
\begin{equation}
E_{Case_6,o}^L = \frac{t_l}{2} \cdot P_l+ t_p^L \cdot P_r + (t_f - \frac{t_l}{2} -t_p^L) \cdot P_s
\end{equation}
\item Cases 7, 8, 9, 10: If neither the receiver nor the overhearer is synchronized with the sender, it may happen that the receiver wakes up before the overhearer.
We distinguish between the situation in which the overhearer is quasi-synchronized with the preamble sequence and the one in which it is not.
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.4]{pdf/case78910lamac}
\caption{LA-MAC. Possible wakeup instants of overhearers. Cases 7, 8, 9, 10.}
\end{center}
\end{figure}
\vspace{1cm}
In Cases 7 and 8, the overhearer is quasi-synchronized with the receiver.
\begin{itemize}
\item Case 7:
There is a probability of overhearing a preamble.
This probability is $(1-p) \cdot (1-p) \cdot 1/2 \cdot p_c$.
The consumption of this case is the same as that of Case 2.
\begin{equation}
E_{Case_7,o}^L = E_{Case_2,o}^L
\end{equation}
\item Case 8:
There is a probability of overhearing an ACK.
This probability is $(1-p) \cdot (1-p) \cdot 1/2 \cdot p_d$.
The consumption of this case is the same as that of Case 3.
\begin{equation}
E_{Case_8,o}^L =E_{Case_3,o}^L
\end{equation}
\end{itemize}
If the overhearer and the receiver are not synchronized:
\begin{itemize}
\item Case 9:
There is a probability of overhearing a Schedule.
This probability is $(1-p) \cdot (1-p) \cdot 1/2 \cdot p_e$.
The consumption of this case is the same as that of Case 4.
\begin{equation}
E_{Case_9,o}^L = E_{Case_4,o}^L
\end{equation}
\item Case 10:
There is a probability of overhearing a data message.
This probability is $(1-p) \cdot (1-p) \cdot 1/2 \cdot (1-p_c-p_d-p_e)$.
The consumption of this case is the same as that of Case 5.
\begin{equation}
E_{Case_{10},o}^L =E_{Case_5,o}^L
\end{equation}
\end{itemize}
\item Case 11: Otherwise, if the overhearer wakes up before the intended receiver, it receives one preamble (any of the $\gamma^L$ preambles) and goes back to sleep.
The cost in this case is:
\begin{equation}
E_{Case_{11},o}^L = \frac{t_p^L + t_a^L}{2} \cdot P_l + t_p^L \cdot P_r + (t_f - \frac{t_p^L + t_a^L}{2} -t_p^L) \cdot P_s
\end{equation}
\end{itemize}
The overall energy cost is the sum of the costs of each case weighted by the probability of the case to happen:
\begin{equation}
E_o^{L}(1) = N_o \cdot \sum\limits_{i=1}^{11} p_{Case_i}\cdot E_{Case_i,o}^L
\end{equation}
\subsection{Global buffer contains two messages ($B=2$)}
If $B=2$, there can be either one sender with two messages to deliver, or two senders with one message each. The other devices may overhear some channel activity.
The number of overhearers is $N_o=N-1$ if there is just one sender, and $N_o=N-2$ otherwise.
The probability that the two messages are in different buffers is $(N-1)/N$.
\textit{ }
\noindent\textbf{B-MAC ($B=2$)}
The overall energy consumption for transmission and reception when $B\geq1$ is linear with the global number of packets in the buffer, independently of how the packets are distributed among the different buffers, \textit{i.e.}, independently of the number of senders.
In fact, due to the long preamble to send ($t_p^B=t_f$), there can be only one sender per frame.
Thus, we have the following relation: $E^B(B) = B\cdot E^B(1) = B\cdot (E_t^B(1) + E_r^B(1) + E_l^B(1) + E_s^B(1) + E_o^B(1))$.
This relation highlights the limitations of the B-MAC protocol, since high traffic loads can hardly be handled.
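The linear scaling can be captured in one line; the per-message energy $E^B(1)$ is treated here as an opaque, assumed constant, since only the scaling matters.

```python
# B-MAC scales linearly with the global buffer size: one long preamble
# per frame means at most one message per frame, so the per-message
# cost simply repeats. E1 below is an arbitrary per-message energy.

def e_bmac(buffer_size, e1):
    """E^B(B) = B * E^B(1) for B >= 1."""
    return buffer_size * e1

E1 = 0.0815   # assumed total per-message cost [J], illustrative
print(e_bmac(3, E1))
```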
\textit{ }
\noindent\textbf{X-MAC ($B=2$)}
After the reception of the first data message, the receiver remains in the polling state for an extra back-off time $t_b$ during which it can receive a second message.
The energy consumed for the transmission of the first packet is the same as the energy $E_t^{X}(1)$ defined in the previous subsection; the cost of the transmission of the second message must then be added.
Differently from B-MAC, the distribution of the messages in the buffers impacts the behaviour of the X-MAC protocol.
With probability $1/N$, both packets are in the same buffer; otherwise, two different senders are involved, so we need to study how the wakeup instants of the active agents are scheduled with respect to each other.
The wakeup instants of the different agents are independent.
We assume that the frame begins at the wakeup instant of the first transmitter; the scenarios that may happen are illustrated in Figure~\ref{fig:probabilitiesXmac} with their respective probabilities:
\vspace{1cm}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.5]{pdf/probabilitiesXmac}
\end{center}
\caption{X-MAC protocol: Probability tree of wakeup combinations with global buffer size $A=2$. There is one receiver and one or two transmitters.}
\label{fig:probabilitiesXmac}
\end{figure}
\vspace{1cm}
\begin{itemize}
\item Case 1: All three agents are quasi-synchronized.
The very first preamble sent by the first transmitter is cleared by the receiver who sends an ACK; the second transmitter hears both the preamble and the ACK.
Probability of this scenario is $p_{Case_1}=(N-1)/N\cdot p\cdot p$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA2case1}
\end{center}
\caption{X-MAC protocol, global buffer size A=2: Overhearing situations for Case 1.}
\label{fig:xmacA2case1}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_1,t}^X(2) = t_p^X\cdot P_t + t_a^X\cdot P_r + (t_p^X+t_a^X)\cdot P_r + 2\cdot t_d\cdot P_t
\end{equation}
\begin{equation}
E_{Case_1,r}^X(2) = (t_p^X+2\cdot t_d)\cdot P_r+t_a^X\cdot P_t
\end{equation}
\begin{equation}
E_{Case_1,l}^X(2) = (t_l+\frac{t_l}{2}+\frac{t_l}{2})\cdot P_l
\end{equation}
\begin{equation}
E_{Case_1,s}^X(2) = (3\cdot t_f - (t_l+t_p^X+t_a^X+t_d) - (\frac{t_l}{2}+t_p^X+t_a^X+t_d) - (\frac{t_l}{2}+t_p^X+t_a^X+2\cdot t_d))\cdot P_s
\end{equation}
Depending on the wakeup instants of the overhearers, several situations may happen.
If the overhearer is quasi-synchronized with one of the three active agents (the receiver or one of the two senders), then it senses a busy channel (cf. Figure~\ref{fig:xmacA2case1}).
We assume that an overhearer polls the channel for some time and then overhears a message, which can be a preamble, an ACK or a data message.
For simplicity, we assume the overhearer polls the channel for half a polling period on average and then overhears a data message (the largest message that can be overheard).
The probability to wake up during a busy period is $p_{case1,A=2}^X=(t_p^X+t_a^X+2\cdot t_d)/t_f$.
Otherwise, the overhearer wakes up while the channel is free; it polls the channel and then goes back to sleep.
\begin{equation}
E_{Case_1,o}^X(2) = N_o\cdot(p_{case1,A=2}^X\cdot(\frac{t_l}{2}\cdot P_l+t_d\cdot P_r+(t_f-\frac{t_l}{2}-t_d)\cdot P_s) + (1-p_{case1,A=2}^X)\cdot(t_l\cdot P_l+(t_f-t_l)\cdot P_s))
\end{equation}
\item Case 2: First sender and receiver are quasi-synchronized, contrary to second sender (cf. figure~\ref{fig:xmacA2case2}).
The only possibility for the second sender to send data in the current frame is to manage to catch the ACK of the receiver during its polling period.
This event happens with probability $q^X=(t_l-t_a^X)/t_f$.
Probability of this scenario is $p_{Case_2}=(N-1)/N\cdot p\cdot(1-p)\cdot q^X$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA2case2}
\end{center}
\caption{X-MAC protocol, global buffer size A=2: Overhearing situations for Case 2.}
\label{fig:xmacA2case2}
\end{figure}
\vspace{1cm}
The energy consumption of this second scenario is almost the same as in Case 1, but the event probability is different.
Since the second sender is not quasi-synchronized, it cannot hear the full preamble sent by the first sender and has a shorter polling period.
\begin{equation}
E_{Case_2,t}^X(2) = E_{Case_1,t}^X(2)-t_p^X\cdot P_r
\end{equation}
\begin{equation}
E_{Case_2,r}^X(2) = E_{Case_1,r}^X(2)
\end{equation}
\begin{equation}
E_{Case_2,l}^X(2) = E_{Case_1,l}^X(2)-\frac{t_l-t_p^X}{2}\cdot P_l
\end{equation}
\begin{equation}
E_{Case_2,s}^X(2) = E_{Case_1,s}^X(2) + \frac{t_l - t_p^X}{2}\cdot P_s
\end{equation}
We assume that the probability of busy channel is the same as the previous scenario.
So, overhearing consumption is unchanged.
\begin{equation}
E_{Case_2,o}^X(2) = E_{Case_1,o}^X(2)
\end{equation}
\item Case 3: With probability $1-q^X$, the second sender wakes up too late and cannot catch the ACK.
In this case, it goes back to sleep and it will transmit its data during the next frame.
So, energy cost is the sum of the transmission cost for the first packet in the current frame and for the second packet in the following frame.
This second frame is the same as $E^X(1)$.
This scenario happens with probability $p_{Case_3}=(N-1)/N\cdot p\cdot(1-p)\cdot(1-q^X)$.
\begin{equation}
E_{Case_3,t}^X(2) = t_p^X\cdot P_t+t_a^X\cdot P_r+t_d\cdot P_t+ E_t^{X}(1)
\end{equation}
\begin{equation}
E_{Case_3,r}^X(2) = t_p^X\cdot P_r+t_a^X\cdot P_t+t_d\cdot P_r+E_r^{X}(1)
\end{equation}
\begin{equation}
E_{Case_3,l}^X(2) = (t_l+t_l+\frac{t_l}{2})\cdot P_l+E_l^{X}(1)
\end{equation}
\begin{equation}
E_{Case_3,s}^X(2) = (3\cdot t_f - (t_l+t_p^X+t_a^X+t_d) - t_l - (\frac{t_l}{2}+t_p^X+t_a^X+t_d))\cdot P_s+E_s^{X}(1)
\end{equation}
In the second frame, the first sender has nothing left to send and can be counted as an overhearer.
The number of overhearers is therefore updated between the two frames, but the energy cost per overhearer is unchanged with respect to the case of a single message in the global buffer ($A=1$):
\begin{equation}
E_{Case_3,o}^X(2) = (N_o+(N_o+1))\cdot\frac{E_o^{X}(1)}{N_o+1}
\end{equation}
\item Case 4: First and second senders are quasi-synchronized but the receiver wakes up later.
In this scenario, the first sender sends a strobed preamble until the receiver wakes up and sends an ACK; the second sender hears the whole strobed preamble and then sends its data during the back-off time.
Between short preambles, senders poll the channel waiting for an ACK from the receiver.
Probability of this scenario is $p_{Case_4}=(N-1)/N\cdot(1-p)\cdot p$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA2case4}
\end{center}
\caption{X-MAC protocol, global buffer size A=2: Overhearing situations for Case 4.}
\label{fig:xmacA2case4}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_4,t}^X(2) = \gamma^X\cdot t_p^X\cdot(P_t+P_r)+2\cdot t_a^X\cdot P_r +2\cdot t_d\cdot P_t
\end{equation}
\begin{equation}
E_{Case_4,r}^X(2) = (t_p^X+2\cdot t_d)\cdot P_r+t_a^X\cdot P_t
\end{equation}
\begin{equation}
E_{Case_4,l}^X(2) = (t_l+\frac{t_l}{2}+2\cdot(\gamma^X-1)\cdot t_a^X+\frac{t_p^X+t_a^X}{2})\cdot P_l
\end{equation}
\begin{equation}
E_{Case_4,s}^X(2) = (3\cdot t_f - (t_l+\gamma^X\cdot(t_p^X+t_a^X)+t_d) - (\frac{t_l}{2}+\gamma^X\cdot(t_p^X+t_a^X)+t_d) - (\frac{t_p^X+t_a^X}{2}+t_p^X+t_a^X+2\cdot t_d))\cdot P_s
\end{equation}
When the receiver wakes up later than both senders, the probability that an overhearer wakes up during the transmission of a preamble is higher than in the previous scenarios.
If this happens, the overhearer performs a very short polling, overhears a message (most probably a preamble) and then goes back to sleep.
For simplicity, we assume that the overhearer polls for half of $(t_p^X + t_a^X)$ and then overhears an entire preamble.
The probability of a busy channel is thus $p_{case4,A=2}^X =(\gamma^X \cdot(t_p^X +t_a^X)+2\cdot t_d)/t_f$.
\begin{equation}
E_{Case_4,o}^X(2) = N_o\cdot(p_{case4,A=2}^X\cdot(\frac{t_p^X+t_a^X}{2}\cdot P_l+t_p^X\cdot P_r+(t_f-\frac{t_p^X+t_a^X}{2}-t_p^X )\cdot P_s) + (1-p_{case4,A=2}^X)\cdot(t_l\cdot P_l+(t_f-t_l)\cdot P_s))
\end{equation}
\item Cases 5, 6, 7: Second sender and receiver are not synchronized with the first sender; the behaviour of the protocol depends on which device, among the second sender and the receiver, wakes up first.
\begin{itemize}
\item Case 5: Receiver wakes up first.
Similarly to Case 2, the only possibility for the second transmitter to send data in the current frame is to catch the ACK of the receiver during its polling.
This event happens with probability $q^X=(t_l-t_a^X)/t_f$.
However, there is also the possibility for $Tx_2$ to catch the preamble sent by $Tx_1$ that just precedes the overheard ACK.
Such eventuality can happen with probability $u^X = \dfrac{t_p^X+t_a^X}{2\cdot t_p^X+t_a^X }$.
This scenario happens with probability $p_{Case_5}=(N-1)/N\cdot(1-p)\cdot(1-p)\cdot\frac{1}{2}\cdot q^X$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/xmacA2case5}
\end{center}
\caption{X-MAC protocol, global buffer size A=2: Overhearing situations for Case 5.}
\label{fig:xmacA2case5}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_5,t}^X(2) = (\gamma^X\cdot t_p^X+t_d)\cdot P_t+t_a^X\cdot P_r+(u^X\cdot t_p^X+t_a^X)\cdot P_r+t_d\cdot P_t
\end{equation}
\begin{equation}
E_{Case_5,r}^X(2) = (t_p^X+2\cdot t_d)\cdot P_r+t_a^X\cdot P_t
\end{equation}
\begin{equation}
E_{Case_5,l}^X(2) = (t_l+(\gamma^X-1)\cdot t_a^X + \frac{t_p^X+t_a^X}{2} + u^X\cdot\frac{t_p^X+t_a^X}{2}+(1-u^X)\cdot\frac{t_p^X}{2})\cdot P_l
\end{equation}
\begin{equation}
E_{Case_5,s}^X(2) = (3\cdot t_f - (t_l+\gamma^X\cdot(t_p^X+t_a^X)+t_d) - (u^X\cdot\frac{t_p^X+t_a^X}{2}+(1-u^X)\cdot\frac{t_p^X}{2}+u^X\cdot t_p^X+t_a^X+t_d) - (\frac{t_p^X+t_a^X}{2}+t_p^X+t_a^X+2\cdot t_d))\cdot P_s
\end{equation}
As in the previous case, the overhearer perceives a very busy channel because of the transmission of preambles; so when it wakes up it will perform half of $(t_p^X+t_a^X)$ in polling state before overhearing an entire preamble.
The probability of a busy channel is $p_{case5,A=2}^X = p_{case4,A=2}^X$.
\begin{equation}
E_{Case_5,o}^X(2) =E_{Case_4,o}^X(2)
\end{equation}
\item Case 6: Receiver wakes up first.
Similarly to Case 3, with probability $1-q^X$, the second sender wakes up too late and cannot catch the ACK from the receiver.
Thus it goes back to sleep and will transmit its data during the next frame.
This scenario happens with probability $p_{Case_6}=(N-1)/N\cdot(1-p)\cdot(1-p)\cdot\frac{1}{2}\cdot(1-q^X)$.
\begin{equation}
E_{Case_6,t}^X(2) = \gamma^X\cdot t_p^X\cdot P_t+t_a^X\cdot P_r+t_d\cdot P_t+E_t^{X}(1)
\end{equation}
\begin{equation}
E_{Case_6,r}^X(2) = (t_p^X+t_d)\cdot P_r+t_a^X\cdot P_t+E_r^{X}(1)
\end{equation}
\begin{equation}
E_{Case_6,l}^X(2) = (t_l+(\gamma^X-1)\cdot t_a^X)\cdot P_l+t_l\cdot P_l+\frac{t_p^X+t_a^X}{2}\cdot P_l+E_l^{X}(1)
\end{equation}
\begin{equation}
E_{Case_6,s}^X(2) = (3\cdot t_f - (t_l+\gamma^X\cdot(t_p^X+t_a^X)+t_d) - t_l - (\frac{t_p^X+t_a^X}{2}+t_p^X+t_a^X+t_d))\cdot P_s+E_s^{X}(1)
\end{equation}
\begin{equation}
E_{Case_6,o}^X(2) = E_{Case_3,o}^X(2) = 2\cdot E_o^{X}(1)
\end{equation}
\item Case 7: The second transmitter wakes up first; it hears part of the strobed preamble until the receiver wakes up and sends its ACK.
In average, when the second transmitter wakes up, it performs a short polling whose duration is the one between two successive short preambles: $\frac{t_p^X+t_a^X}{2}$.
After that, it hears an average number of $\lfloor\dfrac{\gamma^X}{2}\rfloor$ short preambles before the receiver wakes up and stops the strobed preamble by sending its ACK.
Probability of this scenario is $p_{Case_7}=(N-1)/N\cdot(1-p)\cdot(1-p)\cdot\frac{1}{2}$.
\begin{equation}
E_{Case_7,t}^X(2) = (\gamma^X\cdot t_p^X+t_d)\cdot P_t+t_a^X\cdot P_r+(\lfloor\frac{\gamma^X}{2}\rfloor\cdot t_p^X+t_a^X)\cdot P_r+t_d\cdot P_t
\end{equation}
\begin{equation}
E_{Case_7,r}^X(2) = (t_p^X+t_d)\cdot P_r+t_a^X\cdot P_t+t_d\cdot P_r
\end{equation}
\begin{equation}
E_{Case_7,l}^X(2) = (t_l+(\gamma^X-1)\cdot t_a^X)\cdot P_l+((\lfloor\frac{\gamma^X}{2}\rfloor-1)\cdot t_a^X+\frac{t_p^X+t_a^X}{2})\cdot P_l+\frac{t_p^X+t_a^X}{2}\cdot P_l
\end{equation}
\begin{equation}
E_{Case_7,s}^X(2) = (3\cdot t_f - (t_l+\gamma^X\cdot(t_p^X+t_a^X)+t_d) - (\frac{t_p^X+t_a^X}{2}+ \lfloor\frac{\gamma^X}{2}\rfloor\cdot(t_p^X+t_a^X)+t_d) - (\frac{t_p^X+t_a^X}{2}+t_p^X+t_a^X+2\cdot t_d))\cdot P_s
\end{equation}
From the overhearers' point of view, this case is equivalent to Cases 4 and 5.
\begin{equation}
E_{Case_7,o}^X(2) = E_{Case_4,o}^X(2)
\end{equation}
\end{itemize}
\item Case 8: There is only one sender that sends two messages in a row during the extra back-off time.
This last scenario happens with a probability equal to $p_{Case_8}=\frac{1}{N}$.
\begin{equation}
E_{Case_8,t}^X(2) = E_t^{X}(1)+t_d\cdot P_t
\end{equation}
\begin{equation}
E_{Case_8,r}^X(2) = E_r^{X}(1)+t_d\cdot P_r
\end{equation}
\begin{equation}
E_{Case_8,l}^X(2) = E_l^{X}(1)-t_d\cdot P_l
\end{equation}
\begin{equation}
E_{Case_8,s}^X(2) = E_s^{X}(1)-t_d\cdot P_s
\end{equation}
When the sender is unique, the energy consumption of the overhearers can be assumed to be roughly the same as in the case of a global buffer with one packet to send ($A=1$).
\begin{equation}
E_{Case_8,o}^X(2) = E_o^X(1)
\end{equation}
\end{itemize}
The overall energy cost is the sum of the costs of each scenario, weighted by the probability of the scenario (as shown in Figure~\ref{fig:probabilitiesXmac}):
\begin{equation}
E^{X}(2) = \sum\limits_{i=1}^{8} p_{Case_i}\cdot E_{Case_i}^X(2)
\end{equation}
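As a consistency check of the probability tree in Figure~\ref{fig:probabilitiesXmac}, the eight case probabilities must sum to one for any $p$ and $q^X$; a short sketch in exact arithmetic (the function and variable names are ours):

```python
from fractions import Fraction

def xmac_case_probabilities(N, p, q):
    """Probabilities of the eight X-MAC wakeup scenarios for A=2,
    as listed in the text: two senders with probability (N-1)/N,
    one sender (Case 8) with probability 1/N."""
    two_senders = Fraction(N - 1, N)
    half = Fraction(1, 2)
    return [
        two_senders * p * p,                          # Case 1
        two_senders * p * (1 - p) * q,                # Case 2
        two_senders * p * (1 - p) * (1 - q),          # Case 3
        two_senders * (1 - p) * p,                    # Case 4
        two_senders * (1 - p) ** 2 * half * q,        # Case 5
        two_senders * (1 - p) ** 2 * half * (1 - q),  # Case 6
        two_senders * (1 - p) ** 2 * half,            # Case 7
        Fraction(1, N),                               # Case 8
    ]

probs = xmac_case_probabilities(9, Fraction(1, 10), Fraction(2, 25))
assert sum(probs) == 1  # the tree is exhaustive
```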
\textit{ }
\noindent\textbf{LA-MAC ($B=2$)}
When the global buffer contains more than one message, there can be one or several senders.
In this section we deal with the case $A=2$.
The energy consumption $E^L(2)$ depends on the number of senders as well as on how wakeups are scheduled.
All combinations of wakeup instants, with their probabilities, are given in Figure~\ref{fig:probabilitiesLamac}.
With probability $(N-1)/N$ there are two senders; otherwise there is a single sender.
Cases 1--7 refer to situations in which two senders are involved, whereas Case 8 refers to a scenario with one sender.
\vspace{1cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/probabilitiesLamac}
\end{center}
\caption{LA-MAC protocol, global buffer size A=2: Probability tree of wakeup combinations.}
\label{fig:probabilitiesLamac}
\end{figure}
\vspace{1cm}
\begin{itemize}
\item Case 1: The three agents are quasi-synchronized.
The very first preamble is instantly cleared by the receiver; the second transmitter hears this preamble and the ACK.
This scenario happens with a probability equal to $p_{Case_1}=(N-1)/N\cdot p\cdot p$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/lamacA2case1}
\end{center}
\caption{LA-MAC protocol, global buffer size A=2: Overhearing situations for Case 1.}
\label{fig:lamacA2case1}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_1,t}^L (2)= t_p^L\cdot P_t + t_a^L\cdot P_r + ( t_p^L + t_a^L )\cdot (P_r + P_t) + t_g \cdot P_t + t_d \cdot P_t
\end{equation}
\begin{equation}
E_{Case_1,r}^L(2) = E_r^{L}(1) + (t_p^L + t_d) \cdot P_r + t_a^L \cdot P_t
\end{equation}
\begin{equation}
E_{Case_1,l}^L(2) = (E_l^{L}(1) - (t_p^L + t_a^L )\cdot P_l) + \frac{t_l}{2} \cdot P_l
\end{equation}
\begin{equation}
E_{Case_1,s}^L(2) = (E_s^{L}(1) - t_d\cdot P_s) - (t_f - \frac{t_l}{2} - t_p^L - t_a^L - t_g - t_d)\cdot P_s
\end{equation}
As far as overhearers are concerned, several situations may happen depending on the wakeup instant of the overhearer.
For simplicity, we assume that if the overhearer is quasi-synchronized with one of the three active agents (first sender, second sender or receiver), it senses a busy channel (cf. Figure~\ref{fig:lamacA2case1}).
We assume that the overhearer polls the channel for some time and then overhears a message, which can be a preamble, an ACK, a SCHEDULE or a data message; for simplicity, we assume it polls the channel for an average time equal to half the polling duration $t_l$ and then overhears a data message (the largest message that can be sent).
The probability to wake up during a busy period is $p_{case1,A=2}^L = (2\cdot (t_p^L + t_a^L + t_d) + t_g)/t_f$.
Otherwise, if the overhearer wakes up while the channel is free, it polls the channel and then goes back to sleep.
\begin{equation}
E_{Case_1,o}^L(2) = p_{case1,A=2}^L \cdot (\frac{t_l}{2} \cdot P_l +t_d \cdot P_r + (t_f-\frac{t_l}{2} - t_d ) \cdot P_s) + (1-p_{case1,A=2}^L ) \cdot (t_l \cdot P_l + (t_f - t_l) \cdot P_s)
\end{equation}
\item Case 2: The first transmitter and the receiver are quasi-synchronized; however, the second sender is not. The only possibility for the second sender to send data in the current frame is to send a preamble during the polling period of the receiver and then to receive an ACK. This event happens with probability $q^L=1/\gamma^L + (t_l - t_a^L)/t_f$.
\begin{equation}
E_{Case_2,t}^L(2) = E_t^{L}(1) +(t_g + 2 \cdot t_a^L ) \cdot P_r + ( t_p^L + t_d )\cdot P_t
\end{equation}
\begin{equation}
E_{Case_2,r}^L(2) = E_r^{L}(1) + (t_p^L + t_d) \cdot P_r + t_a^L \cdot P_t
\end{equation}
\begin{equation}
E_{Case_2,l}^L(2) = (E_l^{L}(1) - (t_p^L + t_a^L )\cdot P_l) + \frac{t_p^L}{2} \cdot P_l
\end{equation}
\begin{equation}
E_{Case_2,s}^L(2) = E_s^{L}(1) - (t_f- \frac{t_p^L}{2} - t_p^L - t_a^L - t_g - t_d)\cdot P_s
\end{equation}
We assume that the probability of a busy channel is the same as in the previous case, so the overhearing consumption is unchanged.
\begin{equation}
E_{Case_2,o}^L(2) = E_{Case_1,o}^L(2)
\end{equation}
\item Case 3: With probability $1-q^L$, the second sender wakes up too late and cannot catch the ACK.
In this case it goes back to sleep and transmits its data during the next frame.
\begin{equation}
E_{Case_3}^L(2) =2\cdot E^{L}(1)
\end{equation}
\item Case 4: First and second senders are quasi-synchronized but the receiver wakes up later. In this case, the first sender sends a strobed preamble and the second hears all the preambles until the receiver wakes up and sends the ACK.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/lamacA2case4}
\end{center}
\caption{LA-MAC protocol, global buffer size A=2: Overhearing situations for Case 4.}
\label{fig:lamacA2case4}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_4,t}^L(2) = E_t^{L}(1) + \gamma^L \cdot t_p^L \cdot P_r + 2\cdot t_a^L\cdot P_r + t_g\cdot P_r+ (t_p^L +t_d) \cdot P_t
\end{equation}
\begin{equation}
E_{Case_4,r}^L(2) = E_r^{L}(1) + (t_p^L + t_d) \cdot P_r + t_a^L \cdot P_t
\end{equation}
\begin{equation}
E_{Case_4,l}^L(2) = (E_l^{L}(1) - (t_p^L + t_a^L )\cdot P_l) + ((\gamma^L -1 )\cdot t_a^L + \frac{ t_l}{2})\cdot P_l
\end{equation}
\begin{equation}
E_{Case_4,s}^L(2) = E_s^{L}(1) - (t_f - (\gamma^L +1) \cdot ( t_p^L +t_a^L ) - t_g - t_d - \frac{ t_l}{2})\cdot P_s
\end{equation}
If the receiver wakes up later than the two senders, the probability that an overhearer wakes up during the transmission of a preamble is high.
If this happens, the overhearer performs a very short polling, overhears a message (most probably a preamble) and then goes back to sleep.
For simplicity, we assume that the overhearer polls for half of $(t_p^L + t_a^L)$ and then overhears an entire preamble.
The probability of a busy channel is $p_{case4}^L = (\gamma^L \cdot (t_p^L + t_a^L) + (t_p^L + t_a^L) + t_g + 2\cdot t_d)/t_f$.
\begin{equation}
E_{Case_4,o}^L(2) = p_{case4}^L \cdot (\frac{t_p^L + t_a^L}{2} \cdot P_l + t_p^L \cdot P_r + (t_f-\frac{t_p^L + t_a^L}{2} -t_p^L )\cdot P_s ) + (1-p_{case4}^L ) \cdot (t_l \cdot P_l + (t_f - t_l) \cdot P_s)
\end{equation}
\item Cases 5, 6, 7: The second transmitter and the receiver are not synchronized with the first transmitter; the behaviour of the protocol depends on which agent, among the second transmitter and the receiver, wakes up first.
\begin{itemize}
\item Case 5: The receiver wakes up first. Similarly to Case 2, the only possibility for the second transmitter to send data in the current frame is to catch the ACK of the receiver during its polling. This event happens with probability $q^L=1/ \gamma^L$.
\vspace{0.5cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.5]{pdf/lamacA2case5}
\end{center}
\caption{LA-MAC protocol, global buffer size A=2: Overhearing situations for Case 5.}
\label{fig:lamacA2case5}
\end{figure}
\vspace{1cm}
\begin{equation}
E_{Case_5,t}^L(2) = E_{Case_2,t}^L(2)
\end{equation}
\begin{equation}
E_{Case_5,r}^L(2) = E_{Case_2,r}^L(2)
\end{equation}
\begin{equation}
E_{Case_5,l}^L(2) = E_{Case_2,l}^L(2)
\end{equation}
\begin{equation}
E_{Case_5,s}^L(2) = E_{Case_2,s}^L(2)
\end{equation}
As in the previous case, the overhearer perceives a very busy channel because of the transmission of preambles; when it wakes up, it polls for half of $(t_p^L + t_a^L)$ and then overhears an entire preamble.
The probability of a busy channel is $p_{case5}^L = p_{case4}^L = (\gamma^L \cdot (t_p^L + t_a^L) + (t_p^L + t_a^L) + t_g + 2\cdot t_d)/t_f$.
\begin{equation}
E_{Case_5,o}^L(2) =E_{Case_4,o}^L(2)
\end{equation}
\item Case 6: The receiver wakes up first. Similarly to Case 3, with probability $1-q^L$, the second sender wakes up too late and cannot catch the ACK. In this case it goes back to sleep and transmits its data during the next frame.
\begin{equation}
E_{Case_6}^L(2) = E_{Case_3}^L(2) =2\cdot E^{L}(1)
\end{equation}
\item Case 7: The second transmitter wakes up first and hears part of the strobed preamble until the receiver wakes up and sends the ACK.
On average, the second transmitter hears $\lfloor \dfrac{\gamma^L}{2}\rfloor$ preambles.
\begin{equation}
E_{Case_7,t}^L(2) = E_t^{L}(1) + \lfloor\frac{\gamma^L}{2}\rfloor \cdot t_p^L \cdot P_r +2 \cdot t_a^L \cdot P_r + (t_p^L+ t_d) \cdot P_t + t_g \cdot P_r
\end{equation}
\begin{equation}
E_{Case_7,r}^L(2) = E_r^{L}(1) + (t_p^L + t_d) \cdot P_r + t_a^L \cdot P_t
\end{equation}
\begin{equation}
E_{Case_7,l}^L(2) = (E_l^{L}(1) - (t_p^L + t_a^L )\cdot P_l) +((\lfloor\frac{\gamma^L}{2}\rfloor -1 )\cdot t_a^L + \frac{t_p^L + t_a^L}{2})\cdot P_l
\end{equation}
\begin{equation}
E_{Case_7,s}^L(2) = E_s^{L}(1) - (t_f - (\lfloor\frac{\gamma^L}{2}\rfloor +1) \cdot ( t_p^L +t_a^L ) - \frac{t_p^L + t_a^L}{2}-t_g - t_d)\cdot P_s
\end{equation}
From the overhearers' point of view, this case is equivalent to Cases 4 and 5.
\begin{equation}
E_{Case_7,o}^L(2) = E_{Case_4,o}^L(2)
\end{equation}
\end{itemize}
\item Case 8: There is only one sender, which sends two messages in a row.
\begin{equation}
E_{Case_8,t}^L(2) = E_t^{L}(1) + t_d \cdot P_t
\end{equation}
\begin{equation}
E_{Case_8,r}^L(2) = E_r^{L}(1) + t_d \cdot P_r
\end{equation}
\begin{equation}
E_{Case_8,l}^L(2) = (E_l^{L}(1) - t_d \cdot P_l)
\end{equation}
\begin{equation}
E_{Case_8,s}^L(2) = E_s^{L}(1) - t_d \cdot P_s
\end{equation}
When the sender is unique, the overhearer consumption can be assumed to be the same as in the case $A=1$.
\begin{equation}
E_{Case_8,o}^L (2)= E_o^L(1)
\end{equation}
\end{itemize}
The overall energy cost is the sum of the costs of each case, weighted by the probability of the case (as shown in Figure~\ref{fig:probabilitiesLamac}):
\begin{equation}
E^{L}(2) = \sum\limits_{i=1}^{8} p_{Case_i}\cdot E_{Case_i}^L(2)
\end{equation}
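This weighted sum is a plain mixture expectation over the eight wakeup scenarios, and can be written as a small helper (the probabilities and per-case energies below are placeholder values for illustration only, not values derived from the model):

```python
def expected_energy(case_probs, case_energies):
    """Overall cost as a mixture expectation over the eight
    wakeup scenarios: E = sum_i p_i * E_i."""
    assert abs(sum(case_probs) - 1.0) < 1e-9, "tree must be exhaustive"
    return sum(p * e for p, e in zip(case_probs, case_energies))

# placeholder probabilities and per-case energies (Joules), illustration only
probs = [0.40, 0.10, 0.10, 0.10, 0.05, 0.05, 0.10, 0.10]
energies = [1.0, 1.1, 2.0, 1.3, 1.2, 2.0, 1.4, 0.9]
E_L_2 = expected_energy(probs, energies)
assert abs(E_L_2 - 1.23) < 1e-9
```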
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.7]{pdf/expected_energy_consumption}
\caption{Energy analysis and OMNeT++ simulations versus the global buffer size.}
\label{fig:expected_energy_consumption}
\end{center}
\end{figure}
\section{Numerical Validation}
\label{sec_simulation}
We have implemented the analyzed MAC protocols in the OMNeT++ simulator
\cite{omnetpp.simulator} for numerical evaluation.
Each numerical value is the average of 100 runs and we show the
corresponding confidence intervals at 95$\%$ confidence level.
We assume that devices use the CC1100~\cite{chipcon.cc1100} radio stack with a bitrate of 20~kbps.
The values of power consumption for different radio states are specific to the
CC1100
transceiver considering a 3V battery.
In the following, we assume $N=9$ senders.
The periodical wakeup period is the same for all protocols: $t_f = t_l + t_s =
250~ms$.
Also the polling duration is the same for all protocols: $t_l=25~ms$, thus the
duty cycle with no messages to send is $10\%$.
We provide numerical and analytical results for buffer size $B \in[1,50]$.
We compare the protocol performance with respect to several criteria:
\begin{itemize}
\item\textit{Latency [s]:} the delay between the beginning of the simulation
and the instant of packet reception at the sink (we present the latency averaged
over all nodes).
\item\textit{Energy Consumption [Joules]:} the average energy consumed by all nodes due to
radio activity.
\item\textit{Delivery Ratio:} the ratio of the number of packets received by
the sink to the total number of packets sent.
\end{itemize}
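Each reported point is an average over 100 runs with a 95$\%$ confidence interval; a minimal sketch of how such per-run metrics can be aggregated (normal approximation; the function names and sample values are ours):

```python
import math
import statistics

def mean_with_ci95(samples):
    """Sample mean and half-width of the 95% confidence interval,
    using the normal approximation (reasonable for ~100 runs)."""
    m = statistics.mean(samples)
    half_width = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
    return m, half_width

# e.g. average latency (seconds) from five illustrative runs
latencies = [0.98, 1.02, 1.00, 0.99, 1.01]
mean_latency, half = mean_with_ci95(latencies)
assert abs(mean_latency - 1.0) < 1e-9 and half > 0
```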
In Figure~\ref{fig:expected_energy_consumption}, we show the comparison between
the proposed energy consumption analysis and numerical simulations for different
values of the global buffer size.
We assume that at the beginning of each simulation all messages to send are
already buffered.
Each simulation stops when the last message in the buffer is received by the
sink.
Figure~\ref{fig:expected_energy_consumption} highlights the validity of the
analytical expressions for energy consumption: all curves match very well.
As expected, B-MAC is the most energy consuming protocol: as the buffer size
increases, the transmission of a long preamble locally saturates the network
resulting in high energy consumption and latency (cf. Figure~\ref{fig:latency}).
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.7]{pdf/deliv}
\caption{Delivery ratio versus the global buffer size. In X-MAC, most collisions
happen when messages
are sent after the back-off time. }
\label{fig:deliv}
\end{center}
\end{figure}
In X-MAC, short preambles mitigate the effect of the increasing local traffic
load,
thus both latency and energy consumption are reduced with respect to B-MAC.
Even if X-MAC is more energy efficient than B-MAC, Figure~\ref{fig:deliv} shows
that even for small buffer sizes, the delivery ratio for this protocol is lower
than 100 $\%$ most likely because packets that are sent after the back-off
collide at the receiver.
LA-MAC is the most energy saving protocol and it also outperforms other
protocols in terms of latency and the delivery ratio.
We observe that when the instantaneous buffer size is lower than 8 messages, the
cost of the SCHEDULE message is paid in terms of a higher latency with respect to
X-MAC (cf. Figure~\ref{fig:latency}); however, for larger buffer sizes the cost
of the SCHEDULE transmission is compensated by a high number of delivered
messages.
In Figure~\ref{fig:radio_states}, we show the percentage of time that devices
spend in each radio state versus the global buffer size.
Thanks to efficient message scheduling of LA-MAC, devices sleep most of the time
independently of the buffer size and all messages are
delivered.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.7]{pdf/latency}
\caption{Average latency versus the global buffer size.}
\label{fig:latency}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=.7]{pdf/radio_states}
\caption{Percentage of the time spent in each radio state versus the
global buffer size.}
\label{fig:radio_states}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
In the present paper, we have analyzed the energy
consumption of preamble sampling MAC protocols by means of a simple
probabilistic model. The analytical results are
validated by simulations.
We compare the classical MAC protocols (B-MAC and X-MAC) with LA-MAC, a method
proposed in a companion paper.
Our analysis highlights the energy savings achievable with LA-MAC with respect
to B-MAC and X-MAC. It also shows that
LA-MAC provides the best performance in the considered case of high density
networks under traffic congestion.
\bibliographystyle{IEEEtran}
Nearly two decades after the first measurements of the accelerated expansion of space \citep[e.g.][]{Riess_etal_1998,Perlmutter_etal_1999,Schmidt_etal_1998} the fact that about $70\%$ of the Universe's energy content is in a form with a negative equation of state of $w\approx -1$ has been confirmed in numerous measurements \citep{wmap9,Planck_2015_XIII}.
Nevertheless, the nature of this `dark energy' is as puzzling as it has been since its discovery. Tremendous efforts in modern cosmology go into determining the amount and possible time-evolution of this unknown component. It is particularly problematic that few well-motivated frameworks for its physical nature exist -- apart from a cosmological constant. Many ideas \citep[e.g.][]{Dvali_Gabadadze_Porrati_2000} have by now been ruled out or shown to be intrinsically unstable. While there are still theories around (and always will be, since the parameter space of many of them is very flexible), they appear more or less contrived.
It is also important to recall that gravity is the `odd' fundamental force, and a lot of implicit assumptions are being made when extrapolating our knowledge over several orders of magnitudes to vastly different conditions and scales. These two points are, in fact, the main motivations behind a class of modified gravity theories \citep{Amendola2010deto.book.....A,2012PhR...513....1C}. Since general relativity as a theory of gravity is unique under very general assumptions \citep{Lovelock72}, any modification introduces new physical degrees of freedom. These can lead to accelerated expansion, but also tend to enhance gravity on a perturbative level as so-called fifth forces. To pass observational bounds, any of these models have to involve a `screening mechanism' leading to negligible deviations in, e.g., the solar-system where the predictions of general relativity have been confirmed to high precision \citep[e.g.][]{Berotti2003Natur.425..374B, Will2006LRR.....9....3W}.
In this work, we will circumvent the discussion of what characterizes a scientific theory (as opposed to, for instance, an effective one), and will instead treat the screened modified gravity models considered as examples of a (much) larger group of models. They all possess the common property that in addition to the Newtonian gravitational force $F_{\mathrm{N}}$, another \textit{fifth force} component $F_{\mathrm{Fifth}}$ exists, which is suppressed by some screening mechanism in high-density (or high-curvature) environments.
This choice is motivated by the fact that screening occurs in a range of scalar- and vector-field theories with different physical reasons, and is in fact essentially required by a large class of theories in order not to violate local gravity measurements.
Examples of screening mechanisms which are implemented in those theories include:
\begin{itemize}
\item \textit{Chameleon} \citep{Khoury_Weltman_2004} where the range of the fifth force is decreased in regions of high spacetime curvature, thus, effectively hiding the additional force,
\item \textit{Symmetron} \citep{Hinterbichler_Khoury_2010,Hinterbichler_etal_2011} in which the coupling of the scalar field is density dependent,
\item \textit{Vainshtein} screening \citep{1972PhLB...39..393V} where the screening effect is sourced by the second derivative of the field value, and
\item others such as screening through disformal coupling \citep{PhysRevD.48.3641}.
\end{itemize}
As already indicated above, a major problem in the search for a new theory of gravity is that $\Lambda$CDM has so far only been confirmed to ever higher precision. While minor discrepancies between probes of the early and late Universe exist, especially in measurements of the Hubble parameter $H_0$ \citep[see e.g.][]{Riess_etal_2016, Planck_2015_XIII} and $\Omega_m$ or $\sigma_8$ \citep[e.g.][]{Hildebrandt_etal_2017}, no major tension between its predictions and the data has been found. Historically, however, we know that this does not mean that $\Lambda$CDM is correct, but rather that either we have not yet found the right probe where tensions might arise, or we have to push the limits to higher precision. While the latter approach can well be fruitful (as shown by the high-precision measurements of, e.g., the perihelion precession of Mercury; \citealp{le1859lettre}) and is the preferred path taken by many next-generation instruments such as EUCLID \citep{Euclid-y} and WFIRST \citep{WFIRST}, we will focus on the former path, and are thus interested in deviations on the $\gtrsim 10\%$ level.
Several observable signatures of screened modified gravity models have been suggested in the literature such as deviations in the halo mass function \citep{Schmidt_2010,Davis_etal_2012,Puchwein_Baldi_Springel_2013,Achitouv_etal_2016}, or the structure of the cosmic web \citep{2014JCAP...07..058F,2018A&A...619A.122H}.
However, one concern raised by several authors \citep[e.g.,][]{Motohashi:2012wc,He_2013,Baldi_etal_2014} is that massive neutrinos and beyond-$\Lambda$CDM models might be degenerate.
In this work, we want to investigate how kinematic information can be used to break these degeneracies. This paper is structured as follows: in Sec.~\ref{sec:method} we introduce the screened modified gravity models studied, and briefly review the effect of neutrinos on structure formation. We will also describe our numerical simulations used to explore the joint effects numerically. In Sec.~\ref{sec:results} we present our results, before we conclude in Sec.~\ref{sec:conclucsion}.
\section{Method}
\label{sec:method}
This section briefly summarises the effects of modified gravity and massive neutrinos on the evolution of the density field. We also present the simulation suite used to study the combined effects in the fully non-linear regime.
\subsection{Review of modified gravity}
\label{sec:fR}
To work within a well-defined framework, in this paper we focus on $f(R)$ gravity. As a starting point we assume the generalised Einstein-Hilbert action\footnote{We adopt natural units $c = \hbar = 1$}
\begin{equation}
S = \int \mathrm d^4 x \sqrt{-g} \left( \frac{R + f(R)}{16 \pi G} + \mathcal{L}_m \right) \: ,
\end{equation}
where we introduced a function $f$ of the Ricci scalar $R$, the Lagrangian $\mathcal{L}_m$ contains all other matter fields and we recover standard general relativity (GR) if we choose the function to be a cosmological constant $f = - 2 \Lambda^\mathrm{GR}$. For this paper, we use instead the form established by \cite{Hu2007}
\begin{equation}
\label{eq:f_R}
f(R) = - 2 \Lambda \frac{R}{R + m^2} \: ,
\end{equation}
with a constant suggestively named $\Lambda$ and an additional scale $m^2$ that both have to be fixed later on. Assuming $m^2 \ll R$ lets us expand the function
\begin{equation}
\label{eq:f_R_approx}
f(R) \approx -2 \Lambda - f_{R0} \frac{\bar R_0^2}{R} \: ,
\end{equation}
with the background value of the Ricci scalar $\bar R_0$ today, and we defined the dimensionless parameter $f_{R0} \equiv - 2 \Lambda m^2 / \bar R_0^2$ that expresses the deviation from GR. We will return to the characteristic scale of $f_{R0}$ later, but typically $|f_{R0}| \ll 1$. The constant $\Lambda = \Lambda^\mathrm{GR}$ is then fixed to the measured value of the cosmological constant by the requirement to reproduce the standard $\Lambda$CDM expansion history established by observations. However, note that it no longer has the interpretation of a vacuum energy. The phenomenology of the theory in this limit is then set by $f_{R0}$ alone. This particular choice of parameters also implies that the background evolution is indistinguishable from a $\Lambda$CDM universe, but the growth of perturbations will differ.
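The quality of the expansion in Eq.~\ref{eq:f_R_approx} is easy to verify numerically. The following Python sketch compares the exact form of Eq.~\ref{eq:f_R} with its expansion; all parameter values ($\Lambda$, $m^2$, $\bar R_0$) are arbitrary illustrative choices in dimensionless units, not the physical ones.

```python
# Numerical check of the expansion f(R) ~ -2*Lambda - f_R0 * R0bar^2 / R
# valid for m^2 << R. All values below are arbitrary illustrative choices.
Lam = 0.5        # plays the role of Lambda (arbitrary units)
m2 = 1e-5        # scale m^2, chosen << R
R0 = 1.0         # background Ricci scalar today (arbitrary units)
f_R0 = -2 * Lam * m2 / R0**2      # dimensionless deviation parameter

def f_exact(R):
    # f(R) = -2 Lambda R / (R + m^2), cf. Eq. (f_R)
    return -2 * Lam * R / (R + m2)

def f_approx(R):
    # expanded form, cf. Eq. (f_R_approx)
    return -2 * Lam - f_R0 * R0**2 / R

R = 1.0
err = abs(f_exact(R) - f_approx(R))   # relative error of order (m^2/R)^2
```

For $m^2/R = 10^{-5}$ the two forms agree to roughly one part in $10^{10}$, as expected for a first-order expansion in $m^2/R$.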
To work out the perturbation equations, we vary the action with respect to the metric to arrive at the modified Einstein equations
\begin{equation}
\label{eq:modified_Einstein_eq}
G_{\mu \nu} + f_R R_{\mu \nu} - \left( \frac{f}{2} - \Box f_R \right) g_{\mu \nu} - \nabla_\mu \nabla_\nu f_R = 8 \pi G T_{\mu \nu} \: .
\end{equation}
The new dynamical scalar degree of freedom $f_R \equiv \mathrm d f / \mathrm d R$ is responsible for the modified dynamics of the theory. To obtain the equation of motion for this scalar field, we consider the trace of Eq.~\ref{eq:modified_Einstein_eq}
\begin{equation}
\label{eq:field_equation_fR}
\nabla^2 \delta f_R = \frac{a^2}{3} \Big( \delta R(f_R) - 8 \pi G \delta \rho_m \Big) \: ,
\end{equation}
where we assumed the field to vary slowly (the quasi-static approximation) and we consider small perturbations $\delta f_R \equiv f_R - \bar f_R$, $\delta R \equiv R - \bar R$ and $\delta \rho_m \equiv \rho_m - \bar \rho_m$ on a homogeneous background. To get a Poisson-like equation for the scalar metric perturbation $2\psi = \delta g_{00} / g_{00}$ we take the time-time component of Eq.~\ref{eq:modified_Einstein_eq} to arrive at
\begin{equation}
\label{eq:Poisson_fR}
\nabla^2 \psi = \frac{16 \pi G}{3} a^2 \delta \rho_m - \frac{a^2}{6} \delta R(f_R) \: ,
\end{equation}
that now also depends on the scalar field. Solving the non-linear Eqs.~\ref{eq:field_equation_fR} and \ref{eq:Poisson_fR} in their full generality requires $N$-body simulations, but it is instructive to consider two limiting cases to gain insight into the phenomenology of the theory.
If the field is large, $|f_{R0}| \gg | \psi |$, we can expand
\begin{equation}
\delta R \simeq \left .\frac{\mathrm d R}{\mathrm d f_R} \right \rvert_{R = \bar R} \delta f_R \: ,
\end{equation}
and we can solve Eqs.~\ref{eq:field_equation_fR} and \ref{eq:Poisson_fR} in Fourier space to get
\begin{equation}
\label{eq:Poisson_large_field}
k^2 \psi(k) = -4 \pi G \left( \frac{4}{3} - \frac{1}{3} \frac{\mu^2 a^2}{k^2 + \mu^2 a^2} \right) a^2 \delta \rho_m(k) \: ,
\end{equation}
with the Compton wavelength of the scalar field $\mu^{-1} = (3 \mathrm d f_R / \mathrm d R)^{1/2}$. For $k \gg \mu$ the second term vanishes and we obtain a Poisson equation with an additional factor $4/3$. On the other hand, for $k \ll \mu$ we recover standard gravity. The Compton wavelength $\mu^{-1}$ therefore sets the interaction range of an additional fifth force that enhances gravity by $1/3$. This is the maximum possible force enhancement in $f(R)$, irrespective of the choice of the function in Eq.~\ref{eq:f_R}.
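Equation~\ref{eq:Poisson_large_field} amounts to a scale-dependent effective Newton constant, $G_\mathrm{eff}/G = 4/3 - \frac{1}{3}\,\mu^2 a^2/(k^2 + \mu^2 a^2)$. A minimal Python sketch of its two limits, with $\mu a$ set to an arbitrary unit value:

```python
# Effective gravitational coupling in the large-field limit:
# G_eff/G = 4/3 - (1/3) * (mu*a)^2 / (k^2 + (mu*a)^2)
def g_eff_over_g(k, mu_a):
    return 4.0 / 3.0 - (mu_a**2 / (k**2 + mu_a**2)) / 3.0

mu_a = 1.0                                # inverse Compton wavelength (arbitrary units)
deep = g_eff_over_g(1e3 * mu_a, mu_a)     # k >> mu*a: full fifth force, 4/3
large = g_eff_over_g(1e-3 * mu_a, mu_a)   # k << mu*a: standard gravity, 1
```

The transition between the two regimes happens around $k \sim \mu a$, i.e. at the Compton wavelength of the scalar field.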
For field values $| f_{R0} | \ll | \psi | $, the two terms on the right hand side of Eq.~\ref{eq:field_equation_fR} approximately cancel, so we arrive at
\begin{equation}
\label{eq:delta_R_unscreened}
\delta R \approx 8 \pi G \delta \rho_m
\end{equation}
and we also recover the standard Poisson equation from Eq.~\ref{eq:Poisson_fR}. This is the \textit{Chameleon} screening mechanism mentioned above, which restores GR in regions of high curvature.
We can get an estimate of the scale where this screening transition occurs by solving Eq.~\ref{eq:field_equation_fR} formally with the appropriate Green's function
\begin{align}
\label{eq:f_R_solution}
\delta f_R(r) &= \frac{1}{4 \pi r} \frac{1}{3} \int_0^r \mathrm d^3 \mathbf{r^\prime} 8 \pi G \left( \delta \rho - \frac{\delta R}{8 \pi G} \right) \\
&= \frac{2}{3} \frac{G M_\mathrm{eff}(r)}{r}
\end{align}
where we defined the effective mass term $M_\mathrm{eff}$ acting as a source for the fluctuations in the scalar field $\delta f_R$. This definition requires $M_\mathrm{eff}(r) \leq M(r)$, and both contributions are equal in the unscreened regime, where Eq.~\ref{eq:delta_R_unscreened} implies $M_\mathrm{eff} = M$. In this case, $\delta f_R = \frac{2}{3} \psi_N$ with the Newtonian potential of the overdensity, $\psi_N = GM/r$. Since we assumed small perturbations on the homogeneous background, $\delta f_R \leq \bar{f_R}$, we arrive at the screening condition
\begin{equation}
\label{eq:thin_shell}
| f_{R}| \leq \frac{2}{3} \psi_N(r) \: .
\end{equation}
In other words, only the mass distribution outside of the radius where the equality $2/3 \psi(r) = |f_R|$ holds contributes to the fifth force. Note that screening for real halos is considerably more complex, since non-sphericity and environmental effects are also important for the transition. Nevertheless, Eq.~\ref{eq:thin_shell} gives a reasonable estimate for the onset of the transition between enhanced gravity and normal GR.
Since screening can function only for $\psi_N \sim |f_R|$, the condition implied by Eq.~\ref{eq:thin_shell} sets the scale for the free parameter $|f_{R0}|$. Typical values for the metric perturbation in cosmology range from $\psi_N \sim 10^{-5}$ to $\psi_N \sim 10^{-6}$, so $|f_{R0}|$ should be of the same order of magnitude to show any interesting phenomenology. For values of the scalar field $|f_{R0}| \gg \psi_N$, gravity is always enhanced so we can exclude this parameter space trivially, while in the opposite limit $|f_{R0}| \ll \psi_N$ the theory is always screened and does not offer any predictions to distinguish it from GR on cosmological scales.
\subsection{Neutrino effects on structure growth}
Cosmology makes it possible to constrain the physics of neutrinos in unique ways. Assuming the standard thermal evolution and decoupling before $e^+/e^-$ annihilation, their temperature is related to that of the CMB photons by
\begin{equation}
T_\nu = \left( \frac{4}{11} \right)^{1/3} T_\mathrm{CMB} \: ,
\end{equation}
which implies for neutrinos with mass eigenstates $m_\nu$ a total contribution to the Universe's energy budget of \citep{MANGANO2005221}
\begin{equation}
\label{eq:Omega_nu}
\Omega_\nu h^2 \approx \frac{\sum m_\nu}{93.14~\mathrm{eV}} \: ,
\end{equation}
where the sum runs over the three standard model neutrino states. Since their mass is constrained to be small, $\sum m_\nu \lesssim 1~\mathrm{eV}$, they decouple as highly relativistic particles in the early Universe. Their energy density therefore scales as an additional radiation component $\Omega_\nu \propto a^{-4}$ early on, but during adiabatic cooling with the expansion of the Universe they become non-relativistic and the energy density behaves like ordinary matter $\Omega_\nu \propto a^{-3}$ today. The small contribution from Eq.~\ref{eq:Omega_nu} to the overall energy budget also implies that their effect on the background expansion history is small.
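Equation~\ref{eq:Omega_nu} can be checked directly against the neutrino density parameters listed in Table~\ref{tab:simulations}; a minimal Python sketch using the Hubble parameter of our simulations:

```python
# Neutrino density parameter, Omega_nu h^2 = sum(m_nu) / 93.14 eV
h = 0.6731                        # Hubble parameter of the simulations

def omega_nu(sum_m_nu_eV):
    return sum_m_nu_eV / 93.14 / h**2

om_03 = omega_nu(0.3)     # ~0.0071, cf. the fR4_0.3eV run
om_015 = omega_nu(0.15)   # ~0.0036, cf. the fR5_0.15eV run
```

Both values agree with the $\Omega_\nu$ entries of Table~\ref{tab:simulations} to well within a percent.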
Their weak interaction cross-section makes neutrinos a dark matter component. However, compared to standard cold dark matter, they have considerable bulk velocities. This changes the growth of perturbations on scales smaller than the distance travelled by neutrinos up to today, the neutrino horizon, defined by
\begin{equation}
d_\nu(t_0) = \int_{t_\mathrm{ini}}^{t_0} c_\nu (t') \mathrm d t' \: ,
\end{equation}
with the average neutrino velocity $c_\nu$, which is close to the speed of light early on. The neutrino horizon itself is numerically closely related to the more commonly used free-streaming wavenumber at the time of the non-relativistic transition, $k_\mathrm{nr}$ \citep{Lesgourgues_neutrino_book}
\begin{equation}
\label{eq:k_nr}
k_\mathrm{nr} \approx 0.0178 \: \Omega_m^{1/2} \left(\frac{m_\nu}{\mathrm{eV}} \right)^{1/2} \: \mathrm{Mpc}^{-1} \: h \: .
\end{equation}
On scales exceeding the neutrino horizon, velocities can be neglected and the perturbations consequently evolve identically to those in the cold dark matter component. For smaller scales $k \gg k_\mathrm{nr}$ within the neutrino horizon, however, free-streaming leads to slower growth of neutrino perturbations. Due to gravitational backreaction on the other species, this causes a characteristic step-like suppression of the linear matter power spectrum approximately given by \cite{Hu_1998}
\begin{equation}
\left. \frac{P_{\nu}}{P} \right|_{k \gg k_\mathrm{nr}} \approx 1 - 8 \frac{\Omega_\nu}{\Omega_m} \: .
\end{equation}
To compare the density power spectrum between cosmologies with and without neutrinos, we here assumed the same primordial perturbations and kept the total $\Omega_m$ (including neutrinos) fixed, resulting in equal positions of the peak of the power spectrum and ensuring that the spectra are identical in the super-horizon limit. The cosmologies for our neutrino simulations described in Sec.~\ref{sec:simulations} are chosen in the same way.
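To put numbers to these scales for the cosmologies simulated here, the following Python sketch evaluates Eq.~\ref{eq:k_nr} and the asymptotic suppression; the single $0.1$~eV mass eigenstate used for the free-streaming scale is an illustrative assumption.

```python
# Free-streaming wavenumber (Eq. k_nr) and asymptotic small-scale power
# suppression for the simulated cosmologies. The single 0.1 eV eigenstate
# below is an illustrative assumption.
Omega_m = 0.31345

def k_nr(m_nu_eV):
    # wavenumber at the non-relativistic transition, in h/Mpc
    return 0.0178 * Omega_m**0.5 * m_nu_eV**0.5

def suppression(Omega_nu):
    # asymptotic suppression 8 * Omega_nu / Omega_m of the linear spectrum
    return 8.0 * Omega_nu / Omega_m

knr_01 = k_nr(0.1)                # ~0.003 h/Mpc for a 0.1 eV eigenstate
sup_03 = suppression(0.00715)     # fR4_0.3eV run: ~18% power suppression
```

For the $\sum m_\nu = 0.3$~eV cosmology the small-scale suppression reaches roughly $18\%$, comparable in size to the $f(R)$ enhancement it partially masks.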
The interplay between neutrinos and $f(R)$ gravity is interesting due to a curious coincidence: the typical range of the fifth force given by the Compton wavelength $\mu^{-1}$ in Eq.~\ref{eq:Poisson_large_field} and the free-streaming scale of neutrinos in Eq.~\ref{eq:k_nr} are comparable for the relevant parameter space of neutrino masses and values of $|f_{R0}|$, such that the known standard model neutrinos might counteract signatures of boosted growth caused by modified gravity. This makes neutrinos important for constraints on $f(R)$, and this paper searches for ways to disentangle both effects.
\subsection{The DUSTGRAIN-{\em pathfinder} simulations}
\label{sec:simulations}
\begin{table*}
\begin{center}
\begin{tabular}{lcccccccc}
Simulation Name & Gravity type &
$|f_{R0}| $ &
$\sum m_{\nu }$ [eV] &
$\Omega _{CDM}$ &
$\Omega _{\nu }$ &
$M^{p}_{CDM}$ [M$_{\odot }/h$] &
$M^{p}_{\nu }$ [M$_{\odot }/h$] & $\sigma_8$ \\
\hline \hline
$\Lambda $CDM & GR & -- & 0 & 0.31345 & 0 & $8.1\times 10^{10}$ & 0 & $0.842$\\
fR4 & $f(R)$ & $ 10^{-4}$ & 0 & 0.31345 & 0 & $8.1\times 10^{10}$ & 0 & $0.963$ \\
fR5 & $f(R)$ & $ 10^{-5}$ & 0 & 0.31345 &0 & $8.1\times 10^{10}$ & 0 & $0.898$ \\
fR6 & $f(R)$ & $ 10^{-6}$ & 0 & 0.31345 & 0 & $8.1\times 10^{10}$ & 0 & $0.856$ \\
fR4\_0.3eV & $f(R)$ & $ 10^{-4}$ & 0.3 & 0.30630 & 0.00715 & $7.92\times 10^{10}$ & $1.85\times 10^{9}$ & $0.887$ \\
fR5\_0.15eV & $f(R)$ & $ 10^{-5}$ & 0.15 & 0.30987 & 0.00358 & $8.01\times 10^{10}$ & $9.25\times 10^{8}$ & $0.859$ \\
\hline
\end{tabular}
\caption{Summary of the main numerical and cosmological parameters characterising the subset of the DUSTGRAIN-{\em pathfinder} simulations considered in this work. In the table, $M^p_{\nu}$ represents the neutrino simulation particle mass, $M^p_{CDM}$ represents the CDM simulation particle mass, while $\Omega_{CDM}$ and $\Omega_{\nu}$ the CDM and neutrino density parameters, respectively. The listed $\sigma_8$ values represent the linear power normalisation attained at $z=0$, while all simulations are normalised to the same spectral amplitude ${A}_{s}=2.199\times 10^{-9}$ at the redshift of the CMB.}
\label{tab:simulations}
\end{center}
\end{table*}
Our analysis is based on a subset of the DUSTGRAIN-{\em pathfinder} simulation suite described in \cite{Giocoli_Baldi_Moscardini_2018}. The main purpose of the DUSTGRAIN-{\em pathfinder} simulations is to explore the degeneracy between neutrino and modified gravity (MG) effects by sampling the joint $f(R)-\sum m_{\nu }$ parameter space with combined $N$-body simulations that simultaneously implement both effects in the evolution of cosmic structures. To this end, the {\small MG-GADGET} code -- specifically developed by \citet{Puchwein_Baldi_Springel_2013} for $f(R)$ gravity simulations -- has been combined with the particle-based implementation of massive neutrinos described in \citet{Viel_Haehnelt_Springel_2010}, allowing a separate family of neutrino particles to be included in the source term of the $\delta f_R$ field equation \ref{eq:field_equation_fR}, which then reads:
\begin{equation}
\label{eq:field_equation_fR_nu}
\nabla^2 \delta f_R = \frac{a^2}{3} \Big( \delta R(f_R) - 8 \pi G \delta \rho_{CDM} - 8 \pi G \delta \rho_{\nu }\Big)\; .
\end{equation}
The DUSTGRAIN-{\em pathfinder} simulations follow the evolution of $(2\times )768^3$ particles of dark matter (and massive neutrinos) in a periodic cosmological box of $750\; h^{-1}$ Mpc per side from a starting redshift of $z_{i}=99$ to $z=0$, for a variety of combinations of the parameters $|f_{R0}|$ in the range $\left[ 10^{-6}, 10^{-4}\right] $ and $\sum m_{\nu }$ in the range $\left[ 0.0, 0.3\right]$ eV, plus a reference $\Lambda $CDM simulation (i.e. GR with $\sum m_{\nu }=0$). The cosmological parameters assumed in the simulations are consistent with the Planck 2015 constraints \citep[see][]{Planck_2015_XIII}: $\Omega _{M}=\Omega _{CDM}+\Omega _{b}+\Omega _{\nu} = 0.31345$, $\Omega _{\Lambda }=0.68655$, $h=0.6731$, $\sigma _{8}(\Lambda \mathrm{CDM})=0.842$. The dark matter particle mass (for the massless neutrino cases) is $M_{CDM}=8.1\times 10^{10}\; h^{-1}$ M$_{\odot }$ and the gravitational softening is set to $\epsilon _{g}= 25\; h^{-1}$kpc, corresponding to $(1/40)$ times the mean inter-particle separation.
Initial conditions for the simulations have been generated using the Zel'dovich approximation as a random realisation of the linear matter power spectrum obtained with the Boltzmann code {\small CAMB}\footnote{www.cosmologist.info} \citep[][]{camb} for the cosmological parameters defined above and under the assumption of standard GR. For the simulations including massive neutrinos, besides updating the {\small CAMB} linear power spectrum used to generate the initial conditions accordingly, we also employ the approach described in \citet{Zennaro_etal_2017, Villaescusa-Navarro_etal_2018} which amounts to generating two fully correlated random realisations of the linear matter power spectrum for standard Cold Dark Matter particles and massive neutrinos based on their individual transfer functions. Neutrino thermal velocities are then randomly sampled from the corresponding Fermi-Dirac distribution and added on top of the gravitational velocities of the neutrino particles.
The same random seeds have been used to generate all initial conditions in order to suppress cosmic variance in the direct comparison between models. As the simulations start at $z_{i}=99$ when $f(R)$ effects are expected to be negligible, no modifications are necessary to incorporate them in the initial conditions and the standard GR particle distributions -- with and without neutrinos -- can be safely employed for both the GR and $f(R)$ runs.
A summary of the main parameters of the simulations considered in this work is presented in Table~\ref{tab:simulations}. We refer the interested reader to \cite{Giocoli_Baldi_Moscardini_2018} for a more detailed description of the DUSTGRAIN-{\em pathfinder} simulations.
\section{Cosmic Degeneracies}
\label{sec:results}
\begin{figure*}
\includegraphics[width=1.\textwidth]{plots/degeneracy.pdf}
\caption{\textbf{Left:} Relative deviation induced by $f(R)$ gravity and massive neutrinos in the matter power spectrum measured in a subset of our simulations at $z=0$. The large deviation caused by the additional growth in $|f_{R0}|=10^{-4}$ is almost completely counteracted by massive neutrinos with $\sum m_\nu = 0.3 \: \mathrm{eV}$. We find a similar case for $|f_{R0}| = 10^{-5}$ and $\sum m_\nu = 0.15 \: \mathrm{eV}$. \textbf{Right:} The same degeneracy in the simulated abundance of halos at $z=0$. Note that the degeneracy is non-trivial, the same $P(k)$ can lead to different cluster abundances in $f(R)$ since the collapse threshold is changed in modified gravity. The uncertainty for the cluster abundance is calculated with Poisson error bars. Shaded grey bands indicate the $10 \%$ deviation region in both plots.}%
\label{fig:degeneracy}
\end{figure*}
The first $N$-body simulation to investigate the joint effects of neutrinos and modified gravity was performed by \cite{Baldi_etal_2014}, where the authors pointed out the degeneracy between the competing signals. This was confirmed by several recent simulation-based studies of how neutrinos can mask $f(R)$ imprints in the kinematic Sunyaev-Zeldovich effect of massive galaxy clusters \citep{Roncarelli_Baldi_Villaescusa-Navarro_2016, Roncarelli_Baldi_Villaescusa-Navarro_2018}, in weak lensing statistics \citep{Giocoli_Baldi_Moscardini_2018, Peel_etal_2018} and in the abundance of galaxy clusters \citep{Hagstotz_2018}.
A first attempt to exploit Machine Learning techniques to separate the two signals was put forward by \cite{Peel_etal_2018b, Merten_etal_2018}.
All these studies confirm a degeneracy in observables that rely on structure growth, which makes the unknown neutrino masses an important nuisance parameter when constraining $f(R)$ gravity, as pointed out in \cite{Hagstotz_2018}. These papers also show that especially the redshift evolution can be a potentially powerful tool in distinguishing these models since the time evolution of the modifications induced by $f(R)$ and neutrinos differs in general. However, many large-scale structure data sets available today do not have sufficient redshift reach to set stringent constraints on deviations from general relativity while marginalising over neutrino mass.
We refer to the papers cited above for details of how these degeneracies play out for various probes and how they can be broken with higher redshift data, but the main challenge is summarised in Fig.~\ref{fig:degeneracy}, where we show the relative change induced in the matter power spectrum (left) and the halo abundance (right). Note that even though the halo mass function is ultimately derived from the matter power spectrum, the degeneracy in the cluster abundance demonstrated here is non-trivial since the threshold of collapse $\delta_c$ also changes in $f(R)$ gravity \citep[e.g.][]{Schmidt2009, Kopp2013, Cataneo2016, Braun_Bates2017}. Within current observational accuracy, the effect of modified gravity leading to additional structure growth and the suppression effect of neutrino free-streaming are thus difficult to distinguish. Therefore, extending the cosmological parameter space with free neutrino masses tends to weaken existing limits on $|f_{R0}|$.
Since the degeneracy is broken by the different redshift evolution of the density $\delta$ in $f(R)$ and neutrino cosmologies, it is interesting to consider the growth rate of structures to tell them apart. In linear theory, the continuity equation
\begin{equation}
\frac{\partial \delta}{\partial t} + \frac{1}{a} \nabla \cdot \mathbf{v} = 0 \: ,
\end{equation}
relates the growth rate $f = \mathrm d \ln D_+ / \mathrm d \ln a$ directly to the velocity divergence
\begin{equation}
\theta = \frac{1}{H} \nabla \cdot \mathbf v = - a \delta f \: ,
\end{equation}
which we use as a probe of the different growth histories in GR, modified gravity and neutrino cosmologies. We then investigate the degeneracy between these models in two regimes:
\begin{itemize}
\item The large-scale velocity divergence 2-point function in Fourier space $P_{\theta \theta}$ as a proxy for the growth rate. We present the detailed results in Sec.~\ref{sec:2-point}.
\item The velocity dispersion inside of non-linear collapsed structures in Sec.~\ref{sec:clusters}.
\end{itemize}
\subsection{Velocity divergence 2-point functions}
\label{sec:2-point}
We compute the velocity divergence $\theta = 1/H \: \nabla\cdot\mathbf{v}$ and interpolate it on a uniform, $512^3$-point grid, using the publicly available \texttt{DTFE} code \citep{Cautun_2011}.
This allows us to compare the power spectrum $P_{\theta \theta}$ in the $\Lambda$CDM simulation with the $f(R)$ and massive neutrino simulations in Fig.~\ref{fig:velocity_divergence_pk} where we plot (as in the left panel of Fig.~\ref{fig:degeneracy} for the matter power spectrum) the relative deviation from the $\Lambda$CDM value.
Clearly, all modified gravity simulations show an increased velocity divergence -- and therefore growth rate -- on scales $k \gtrsim 0.1\,h\,\mathrm{Mpc}^{-1}$, with the $|f_{R0}|=10^{-4}$ simulation showing the strongest enhancement since the fifth force becomes active first. Very large scales $k \ll \mu$, exceeding the range of the force given by the Compton wavelength of the scalar field, are not affected. These results confirm previous findings \citep[see e.g.][]{Jennings_etal_2012} that the velocity power spectrum provides a much stronger signature of modified gravity than the density power spectrum, thereby representing a more powerful tool to test gravity on cosmological scales. In principle it can be probed by redshift space distortion measurements sensitive to $f \sigma_8 / b$ with the tracer bias $b$ \citep{Peacock_2001, BOSS_DR12_2017}. However, the scale dependence of $f$ in modified gravity, changes in galaxy formation (and hence in the tracer bias), and the difficult modelling of non-linear effects in modified gravity make this analysis challenging \citep[see the discussion in][]{Jennings_etal_2012, Hernandez_2018}.
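For readers who want to reproduce this measurement, the binned auto-spectrum of a gridded divergence field can be estimated with a few lines of numpy. The sketch below is a simplified stand-in for the actual pipeline (which grids $\theta$ with \texttt{DTFE}); here we only validate the estimator on a synthetic plane wave of known wavenumber.

```python
import numpy as np

def power_spectrum(field, box, n_bins=16):
    """Spherically averaged auto power spectrum of a field on a cubic grid.

    Simplified FFT estimator for illustration; `box` is the side length.
    """
    n = field.shape[0]
    cell = box / n
    fk = np.fft.fftn(field) * cell**3            # approximate continuum FT
    pk3d = np.abs(fk)**2 / box**3                # per-mode power
    freq = 2 * np.pi * np.fft.fftfreq(n, d=cell)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.linspace(np.pi / box, kmag.max(), n_bins + 1)  # excludes DC mode
    idx = np.digitize(kmag.ravel(), bins)
    pk = np.array([pk3d.ravel()[idx == i].mean() if (idx == i).any() else 0.0
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

# validation on a synthetic plane wave with known wavenumber
box, n, mode = 100.0, 32, 4
x = np.arange(n) * box / n
theta = np.cos(2 * np.pi * mode * x / box)[:, None, None] * np.ones((n, n, n))
k_true = 2 * np.pi * mode / box
k_cen, pk = power_spectrum(theta, box)
```

All power of the test field ends up in the bin containing $k_\mathrm{true}$, confirming the estimator before it is applied to the simulated $\theta$ fields.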
The addition of neutrinos (cf. the two $|f_{R0}|=10^{-5}$ runs in Fig.~\ref{fig:degeneracy}) dampens the velocity divergence field slightly overall, but unlike for the density power spectrum this effect is not sufficient to counteract the enhanced growth rate in $f(R)$. This confirms the redshift evolution of the degeneracy in the density field: at early times $z \gtrsim 0.5$, $f(R)$ effects are small, and neutrino suppression of the matter fluctuations dominates. As soon as the additional force enhancement becomes active, it tends to win out and we arrive at the approximate degeneracy observed in Fig.~\ref{fig:degeneracy} today. In the future evolution, $f(R)$ effects will dominate over the neutrino damping for the cases shown here.
The plot also demonstrates that hierarchical formation of collapsed objects in $f(R)$ proceeds faster than in a $\Lambda$CDM universe. Small structures form first, and this process proceeds to larger scales with time. Since the fifth force accelerates the collapse, cosmologies with higher values of $|f_{R0}|$ contain larger nonlinear structures at a given redshift $z$. The transition to these collapsed structures appears as a characteristic dip in the velocity divergence power spectrum \citep[see also the detailed explanation in][]{Li_etal_2013a}.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/pvv_rel.pdf}
\caption{Relative change in the velocity divergence power spectrum $P_{\theta \theta}$ compared to $\Lambda$CDM for various models with modified gravity, massive neutrinos, or both. The deviation from $\Lambda$CDM is more pronounced compared to the approximately degenerate density power spectra for combinations of $|f_{R0}|$ and $\sum m_\nu$ shown in Fig.~\ref{fig:degeneracy}. The dip in the spectra marks the onset of collapsed structures. The shaded band indicates a $10 \%$ deviation range.}
\label{fig:velocity_divergence_pk}
\end{figure}
\subsection{Cluster velocity dispersion}
\label{sec:clusters}
We now turn to the kinematics inside of non-linear structures. The velocity dispersion of galaxy cluster members is a long-established measure of the total gravitational potential via the virial theorem, and therefore it can serve as a mass proxy of the system \citep{Biviano_2006}. First studies of $f(R)$ effects on virialised systems were presented by \cite{Lombriser_2012_virial}, and recently efforts have been made to use the phase space dynamics of single massive clusters to constrain modified gravity \citep[e.g.][]{Pizzuti2017}.
Here we focus on the change in the mean observable velocity dispersion instead of detailed studies of single objects. The starting point is the virial theorem, which is itself a consequence of phase-space conservation expressed by the Liouville equation and holds for any system obeying Hamiltonian dynamics. It is therefore unchanged by $f(R)$ gravity, and states in its scalar form
\begin{equation}
2 E_\mathrm{kin} + E_\mathrm{pot} = 0 \: ,
\end{equation}
with kinetic and potential energy of the system respectively. From there, we can get a rough estimate for the velocity dispersion
\begin{equation}
\label{eq:sigma_sq_}
\sigma^2 \approx \frac{G M(r)}{r}
\end{equation}
for a virialised system of size $r$. This makes the velocity dispersion a direct measurement of the gravitational potential of a bound system. For an unscreened cluster in $f(R)$, Eq.~\ref{eq:Poisson_large_field} leads to an enhancement of the gravitational force and potential by a factor $4/3$ -- we therefore expect the velocity dispersion to be boosted by $(4/3)^{1/2}$ compared to the standard prediction.
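To get a feel for the numbers, a back-of-the-envelope Python estimate of Eq.~\ref{eq:sigma_sq_}; the cluster mass and radius below are illustrative values, not fits to our simulations.

```python
import math

G = 4.301e-9   # gravitational constant in Mpc (km/s)^2 / M_sun

def sigma_virial(mass_msun, radius_mpc, unscreened=False):
    """Rough virial velocity dispersion in km/s. In the unscreened f(R)
    case the potential, and hence sigma^2, is boosted by 4/3."""
    boost = 4.0 / 3.0 if unscreened else 1.0
    return math.sqrt(boost * G * mass_msun / radius_mpc)

s_gr = sigma_virial(1e14, 1.0)                    # illustrative GR cluster
s_fr = sigma_virial(1e14, 1.0, unscreened=True)   # unscreened f(R) cluster
ratio = s_fr / s_gr                               # sqrt(4/3) ~ 1.15
```

For a $10^{14}\,M_\odot$ system of radius $1~\mathrm{Mpc}$ this gives $\sigma$ of a few hundred $\mathrm{km\,s^{-1}}$, boosted by $\sim 15\%$ in the unscreened case.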
\begin{figure}
\includegraphics[width=\columnwidth]{plots/cluster_veldisp_binned_fR5.pdf}
\caption{Velocity dispersion $\sigma$ within clusters of a given mass $M_{200m}$ for a subset of the studied cosmologies at $z=0$. Shaded region shows the standard deviation found in our simulations. Note that most systems are virialised, either to the $\Lambda$CDM value or the boosted unscreened $f(R)$ equilibrium. Neutrinos do not have any detectable effect on the velocity dispersion inside of clusters, and we just show the case with $|f_{R0}| = 10^{-5}$ and $\sum m_\nu = 0.15~\mathrm{eV}$ for clarity. The relative deviations are shown separately in Fig.~\ref{fig:rel_velocity_dispersion_sim}.}
\label{fig:velocity_dispersion}
\end{figure}
However, the screening mechanism of $f(R)$ gravity outlined in Sec.~\ref{sec:fR} is crucial to understand the full phenomenology of the theory. We can estimate the mass scale of objects with potential wells deep enough to activate the screening mechanism with the condition set by Eq.~\ref{eq:thin_shell}. In order to do that, we consider the force enhancement caused by $f(R)$
\begin{equation}
g(r) \equiv \frac{\mathrm d \psi / \mathrm d r}{\mathrm d \psi_N / \mathrm d r}
\end{equation}
relative to the Newtonian potential $\psi_N$. From there, we can calculate the average force enhancement of the system, weighted by the contribution to the potential energy,
\begin{equation}
\label{eq:force_enhancement}
\bar g = \frac{\int \mathrm d r r^2 w(r) g(r)}{\int \mathrm d r r^2 w(r)} \: ,
\end{equation}
which varies between 1 (for the screened case) and $4/3$ (for the unscreened case), with the weighting function
\begin{equation}
w(r) = \rho(r) r \frac{\mathrm d \psi_N}{\mathrm d r} \: .
\end{equation}
Following \cite{Schmidt_2010}, we assume that the additional force is only sourced by the mass distribution beyond the \textit{screening radius} $r_\mathrm{screen}$, which is defined by the equality in condition Eq.~\ref{eq:thin_shell}, i.e.
\begin{equation}
\label{eq:r_screen}
\frac{2}{3} \psi_N(r_\mathrm{screen}) = \bar f_{R}(z) \: .
\end{equation}
This implies for the force enhancement
\begin{equation}
g(r) = 1 + \frac{1}{3} \frac{M(<r) - M(<r_\mathrm{screen})}{M(<r)} \: ,
\end{equation}
and by assuming NFW density profiles we can solve the equations above to determine $\bar g$. We use the concentration-mass relation by \cite{Bullock2001} to fix the density profiles, but the overall results for $\bar g$ are rather insensitive to the specific choice of $c(M, z)$. From the modified potential energy, the virial theorem then suggests the scaling of the velocity dispersion $\sigma$ in $f(R)$ as
\begin{equation}
\frac{\sigma^{f(R)}}{\sigma^{\Lambda \mathrm{CDM}}} \propto \bar g^{1/2} \: .
\end{equation}
The screening radius $r_\mathrm{screen}$ itself depends on time via the evolution of the density profile $c(M, z)$ and the background evolution of the scalar field
\begin{equation}
\bar f_R(z) = | f_{R0} | \left[ \frac{1 + 4\frac{\Omega_\Lambda}{\Omega_m}}{(1+z)^3 + 4 \frac{\Omega_\Lambda}{\Omega_m}} \right]^2 \: .
\end{equation}
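The model of Eqs.~\ref{eq:force_enhancement}--\ref{eq:r_screen} can be evaluated with a short numerical routine. The Python sketch below makes two simplifying assumptions beyond the text: a fixed NFW concentration (instead of the \cite{Bullock2001} $c(M,z)$ relation used for the figures) and the Newtonian force $G M(<r)/r^2$ for $\mathrm d \psi_N / \mathrm d r$ in the weighting function; the halo parameters are illustrative.

```python
import numpy as np

G = 4.301e-9        # Mpc (km/s)^2 / M_sun
C_LIGHT = 2.998e5   # km/s

def nfw_mass(r, m200, r200, c):
    """Enclosed NFW mass M(<r)."""
    x = np.asarray(r) * c / r200
    norm = np.log(1.0 + c) - c / (1.0 + c)
    return m200 * (np.log(1.0 + x) - x / (1.0 + x)) / norm

def g_bar(m200, r200, c, fR_bar, n=4000):
    """Average force enhancement, between 1 (screened) and 4/3 (unscreened)."""
    r = np.linspace(1e-3, 1.0, n) * r200
    m = nfw_mass(r, m200, r200, c)
    psi = G * m / r / C_LIGHT**2           # dimensionless potential GM/(r c^2)
    # screening radius: outermost radius where (2/3) psi_N >= fR_bar
    screened = (2.0 / 3.0) * psi >= fR_bar
    r_screen = r[screened].max() if screened.any() else 0.0
    m_screen = nfw_mass(r_screen, m200, r200, c) if r_screen > 0 else 0.0
    g = 1.0 + np.clip(m - m_screen, 0.0, None) / (3.0 * m)
    # weight w(r) = rho(r) * r * dpsi_N/dr, with dpsi_N/dr -> G M(<r)/r^2
    x = r * c / r200
    rho = 1.0 / (x * (1.0 + x) ** 2)       # NFW shape (normalisation cancels)
    w = rho * r * G * m / r**2
    return float(np.sum(r**2 * w * g) / np.sum(r**2 * w))

g_unscreened = g_bar(1e14, 1.0, 5.0, fR_bar=1e-2)   # fifth force fully active
g_screened = g_bar(1e14, 1.0, 5.0, fR_bar=1e-12)    # field tiny, fully screened
```

The routine recovers the two limits $\bar g = 4/3$ and $\bar g = 1$ exactly, with intermediate values for $\bar f_R$ comparable to the potential depth of the halo.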
The velocity dispersion measured in our simulations at $z=0$ is plotted in Fig.~\ref{fig:velocity_dispersion}, where the width of the contours represents the standard deviation found among the objects. Most of the clusters virialise either to the $\Lambda$CDM equilibrium or the boosted $f(R)$ value, and since the maximum force enhancement is identical for all models, $|f_{R0}|$ merely determines at which mass scale the transition between the two cases occurs. We also show results for the simulation with $|f_{R0}| = 10^{-5}$ and $\sum m_\nu = 0.15~\mathrm{eV}$ as an example of a cosmology with both modified gravity and massive neutrinos, but note that neutrinos have no detectable effect on the cluster velocity dispersion. Therefore the dynamics of galaxies within clusters are an excellent way to break the degeneracy found in measurements relying on the amplitude of the matter fluctuations.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/sigma_rel_baseline_sim.pdf}
\caption{Relative velocity dispersion within clusters of a given mass in the extended cosmologies, normalised to the mean value of the $\Lambda$CDM simulation. The (propagated) error bar of the ratio $\Delta \sigma / \sigma$ is shown for the $|f_{R0}| = 10^{-4}$ model as a shaded region, and has a similar magnitude for all curves. The other error bars are suppressed for clarity. Also shown is the empirical relation (blue) with propagated error bars as described in the text. Dashed lines show the expectation $\Delta \sigma / \sigma \approx \bar g^{1/2}$ from the simplified force enhancement model in Eq.~\ref{eq:force_enhancement}. For unscreened clusters, the velocity dispersion is larger by a factor $\sqrt{4/3} \approx 1.15$ as expected from the virial theorem in $f(R)$.}
\label{fig:rel_velocity_dispersion_sim}
\end{figure}
We focus on the relative deviations from $\Lambda$CDM in Fig.~\ref{fig:rel_velocity_dispersion_sim}, where we normalise the curves to the values measured in our fiducial simulation. Dashed lines show the prediction $\Delta \sigma / \sigma \approx \bar g^{1/2}$ from Eq.~\ref{eq:force_enhancement}.
Clusters for $|f_{R0}| = 10^{-4}$ are all unscreened, and virialise to the $f(R)$ equilibrium value boosted by a factor $(4/3)^{1/2} \approx 1.15$. On the other hand $|f_{R0}| = 10^{-6}$ is almost completely screened, and just shows slight deviations for low mass systems with $M_{200m} \sim 10^{13} M_\odot h^{-1}$. The intermediate case $|f_{R0}| = 10^{-5}$ demonstrates how the screening mechanism becomes active for clusters with $M_{200m} \sim 2 \times 10^{14} M_\odot h^{-1}$ with a long transition tail towards the fully screened regime. This also implies that single very massive clusters are not well suited to constrain $f(R)$ models \citep[see e.g.][for a case study]{Pizzuti2017}.
The simple model from Eq.~\ref{eq:force_enhancement} somewhat overestimates the efficiency of the screening mechanism, in agreement with findings by \cite{Schmidt_2010}. It therefore only serves as a conservative estimate for the transition region. In addition, even clusters that are screened today can still carry the imprint of the fifth force if parts of the progenitor structures were unscreened in their past. The relaxation time of a galaxy cluster of richness $N$ is approximately given by \citep{Binney_Tremaine_book}
\begin{equation}
t_r \approx \frac{0.1 N}{\ln N} t_\mathrm{cross}
\end{equation}
With typical crossing times $t_\mathrm{cross}\approx 1~\mathrm{Gyr}$, this leads to relaxation timescales of order $t_r \approx 2~\mathrm{Gyr}$ for a richness $N \sim 100$, ranging up to the Hubble time $t_r \approx 14.5~\mathrm{Gyr}$ for very massive clusters with $N\sim 1000$ member galaxies.
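As a quick arithmetic check of these estimates (a throwaway sketch; the crossing time enters as a free parameter):

```python
import math

def relaxation_time_gyr(richness, t_cross_gyr=1.0):
    # Two-body relaxation estimate t_r ~ 0.1 N / ln N * t_cross
    return 0.1 * richness / math.log(richness) * t_cross_gyr
```

For $N=100$ this gives $t_r \approx 2.2~\mathrm{Gyr}$ and for $N=1000$ about $14.5~\mathrm{Gyr}$, consistent with the numbers quoted above.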
We also compare the results found in the simulations to an empirical $\sigma(M)$ relation which we obtained by combining the mass-richness relation of \cite{Johnston_2007} and the $\sigma$-richness relation of \cite{Becker_2007}. Both studies used the catalog of the Sloan Digital Sky survey \citep[SDSS;][]{2009ApJ...703.2217S} which allowed us to combine the two empirical relations.
The uncertainty shown in Fig.~\ref{fig:rel_velocity_dispersion_sim} is the (propagated) uncertainty quoted in \cite{Johnston_2007} and \cite{Becker_2007}.
Even without giving a quantitative upper limit on $f_{R0}$ here, we note that the $|f_{R0}| = 10^{-5}$ results seem to be incompatible with the observed cluster velocity dispersion irrespective of neutrino effects. This is comparable to current upper limits obtained from large-scale structure data \citep[e.g.][]{Cataneo2014}.
\section{Conclusions}
\label{sec:conclucsion}
Neutrinos are of great interest for modified gravity searches in the large-scale structure since they suppress the growth of structures on scales comparable to the range of the fifth force expected in deviations from GR. The uncertainty in the neutrino mass scale leads to an uncertainty in the size of this suppression, which can mask the characteristic additional growth of structures in $f(R)$ gravity. This degeneracy was studied before in the context of the amplitude of matter fluctuations and found to be time dependent, since the modifications in the growth of structures induced by neutrinos and the fifth force have different redshift dependencies.
Therefore, in this paper we studied the velocity divergence power spectrum $P_{\theta \theta}$ in Sec.~\ref{sec:2-point} as a proxy for the linear growth rate. Compared to $\Lambda$CDM it is strictly enhanced in our simulations at $z=0$, also in cosmologies including both modified gravity and massive neutrinos that show a comparable amplitude of matter fluctuations at that time. We conclude that for combinations of parameters that show approximate degeneracy in the matter power spectrum today, neutrino suppression dominates in the past, while in the future evolution the additional growth induced by the fifth force will win out. This effect can be probed by redshift-space distortion measurements, but an analysis accounting for the scale dependent growth in $f(R)$ remains challenging \citep{Jennings_etal_2012, Hernandez_2018}.
As a second step, we studied the kinematics inside of clusters in Sec.~\ref{sec:clusters}. The velocity dispersion found in our simulations agrees well with the expectations from the virial theorem, and it is enhanced in the unscreened $f(R)$ regime by a factor $(4/3)^{1/2}$, the square root of the maximum force enhancement. Neutrinos on the other hand do not have any detectable effect on the velocity dispersion. Since the free-streaming length is larger than the typical cluster size, they behave as a smooth background component. So while they suppress the overall cluster abundance, the kinematics inside of halos are completely unaffected. We also compare the simulated dynamics to the empirical $\sigma - M$ relation found by combining the results from \cite{Johnston_2007} and \cite{Becker_2007} and find good agreement with the $\Lambda$CDM simulation. While we do not quote a stringent upper limit on the modified gravity parameter $|f_{R0}|$, we point out that the observed relation is in strong tension with expectations from an $|f_{R0}| = 10^{-5}$ model for clusters of mass $M_{200m} \approx 10^{14} M_\odot h^{-1}$ -- independent of the neutrino mass.
Overall, kinematic information is an excellent observable to detect fifth force effects irrespective of the unknown neutrino mass.
Using kinematic information could also be potentially useful in order to break other degeneracies with (screened) modified gravity theories such as baryonic feedback processes stemming, e.g., from AGNs which also reduce clustering \citep{Arnold_Puchwein_Springel_2014,2018A&A...615A.134E}.
\begin{acknowledgements}
Many cosmological quantities in this paper were calculated using the Einstein-Boltzmann code \texttt{CLASS} \citep{CLASS}.
We appreciate the help of Ben Moster with cross-checks for our simulation suite and helpful discussions with Raffaella Capasso on cluster dynamics. SH acknowledges the support of the DFG Cluster of Excellence ``Origin and Structure of the Universe'' and the Transregio programme TR33 ``The Dark Universe''.
MG was supported by NASA through the NASA Hubble
Fellowship grant \#HST-HF2-51409 awarded by the Space Telescope Science
Institute, which is operated by the Association of Universities for
Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
MB acknowledges support from the Italian Ministry for Education, University and Research (MIUR) through the SIR individual grant SIMCODE (project number RBSI14P4IH), from the grant MIUR PRIN 2015 ``Cosmology and Fundamental Physics: illuminating the Dark Universe with Euclid'', and from the agreement ASI n.I/023/12/0 ``Attivit\`a relative alla fase B2/C per la missione Euclid''. The DUSTGRAIN-pathfinder simulations discussed in this work have been performed and analysed on the Marconi supercomputing machine at Cineca thanks to the PRACE project SIMCODE1 (grant nr. 2016153604, P.I. M. Baldi) and on the computing facilities of the Computational Centre for Particle and Astrophysics (C2PAP) and the Leibniz Supercomputing Centre (LRZ) under the project ID pr94ji.
We thank the Research Council of Norway for their support. Some computations were performed on resources provided by UNINETT Sigma2 -- the National Infrastructure for High Performance Computing and Data Storage in Norway. This paper is partly based upon work from the COST action CA15117 (CANTATA), supported by COST (European Cooperation in Science and Technology).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
For undirected graphs $G=(V,E)$ an {\em $r$-coloring}
corresponds to a partition of the vertex set $V$ into $r$ independent sets.
The smallest $r$ such that a graph
$G$ has an $r$-coloring is called the {\em chromatic number} of $G$.
Even deciding whether a graph has a $3$-coloring is NP-complete, but
there are many efficient solutions for the coloring
problem on special graph classes. Among these are chordal graphs \cite{Gol80},
comparability graphs \cite{Hoa94}, and co-graphs \cite{CLS81}.
For oriented graphs the concept of oriented colorings, which has been introduced by Courcelle \cite{Cou94}, received a lot of attention in \cite{CD06,Sop16,GH10,GHKLOR14,CFGK16,GKR19d,GKL20,GKL21}.
An {\em oriented $r$-coloring} of an oriented graph $G=(V,E)$ is a partition of the vertex set $V$ into $r$ independent sets, such
that all the arcs linking two of these subsets have the same direction.
The {\em oriented chromatic number} of an oriented graph $G$, denoted by $\chi_o(G)$, is the smallest $r$ such that $G$
admits an oriented $r$-coloring.
In this paper, we consider an approach for coloring the vertices of directed graphs, introduced by Neumann-Lara \cite{NL82}.
An {\em acyclic $r$-coloring} of a digraph $G=(V,E)$ is a partition of the
vertex set $V$ into $r$ sets such that all sets induce an acyclic subdigraph in $G$.
The {\em dichromatic number} of $G$ is the smallest integer $r$
such that $G$ has an acyclic $r$-coloring.
Acyclic colorings of digraphs received a lot of attention in \cite{BFJKM04,Moh03,NL82}
and also in recent works \cite{LM17,MSW19,SW20}. The dichromatic number is one of two basic concepts
for the class of perfect digraphs \cite{AH15} and can be regarded as a natural
counterpart of the well known chromatic number for undirected graphs.
In the Dichromatic Number problem ($\DCN$) there is given a digraph
$G$ and an integer $r$ and the question is whether $G$ has an acyclic $r$-coloring.
If $r$ is constant and not part of the input, the corresponding problem
is denoted by $\DCN_{r}$. Even $\DCN_{2}$ is NP-complete \cite{FHM03},
which motivates to consider the Dichromatic
Number problem on special graph classes.
Up to now, only few classes of digraphs are known for which the
dichromatic number can be found in polynomial time.
The set of DAGs is obviously equal to the set of digraphs
of dichromatic number $1$. Further,
every odd-cycle free digraph \cite{NL82} and every non-even digraph~\cite{MSW19}
has dichromatic number at most $2$.
The hardness of the Dichromatic Number problem remains true, even for inputs
of bounded directed feedback vertex set size \cite{MSW19}.
This result implies that there are no $\xp$-algorithms\footnote{XP is the class
of all parameterized problems which can be solved by algorithms that are polynomial
if the parameter is considered as a constant \cite{DF13}.} for the Dichromatic Number problem parameterized by directed width
parameters such as directed path-width, directed tree-width, DAG-width or Kelly-width, since all of these are upper bounded in terms of the size of a smallest feedback vertex set.
The first positive result concerning structural parameterizations of the Dichromatic Number problem
is the existence of an $\fpt$-algorithm\footnote{FPT is the class of all parameterized problems which can
be solved by algorithms that are exponential only in the size of a fixed parameter while
polynomial in the size of the input size \cite{DF13}.} for the Dichromatic Number problem
parameterized by directed modular width \cite{SW19}.
In this paper, we introduce the first polynomial-time algorithm for the Dichromatic Number problem
on digraphs of constant directed clique-width.
Therefore, we consider a directed clique-width expression $X$
of the input digraph $G$ of directed clique-width $k$.
For each node $t$ of the corresponding rooted expression-tree $T$ we use label-based reachability
information about the subgraph $G_t$ of the subtree rooted at $t$.
For every partition of the vertex set of $G_t$ into acyclic
sets $V_1,\ldots,V_s$ we compute the multiset
$\langle \reach(V_1),\ldots,\reach(V_s) \rangle$, where $\reach(V_i)$, $1\leq i \leq s$,
is the set of all label pairs $(a,b)$ such that the subgraph of $G_t$ induced
by $V_i$ contains a vertex labeled by $b$ which is reachable from a vertex labeled by $a$.
By using bottom-up dynamic programming along expression-tree $T$,
we obtain an algorithm for the Dichromatic Number problem of running
time $n^{2^{\mathcal{O}(k^2)}}$, where $n$ denotes the number of vertices of the input
digraph. Since any algorithm with running time in $n^{2^{o(k)}}$ would disprove the Exponential Time Hypothesis (ETH), the exponential dependence on $k$ in the degree
of the polynomial cannot be avoided, unless ETH fails.
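As an illustration of this reachability information, the following Python sketch computes $\reach(V_i)$ for a single part by plain depth-first search; the representation of labels and arcs is our own choice, and the actual algorithm maintains these sets incrementally along the expression-tree rather than recomputing them:

```python
def reach_labels(arcs, label, part):
    # Sketch (names ours): label pairs (a, b) such that, inside the
    # subdigraph induced by `part`, a vertex labeled b is reachable
    # from a vertex labeled a by a directed path on >= 2 vertices.
    part = set(part)
    succ = {v: [] for v in part}
    for u, v in arcs:
        if u in part and v in part:
            succ[u].append(v)
    pairs = set()
    for s in part:
        # Start from the successors of s, so a vertex does not count
        # as reachable from itself (matching the path definition).
        stack, seen = list(succ[s]), set(succ[s])
        while stack:
            u = stack.pop()
            pairs.add((label[s], label[u]))
            for w in succ[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return pairs
```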
From a parameterized point of view, our algorithm shows that
the Dichromatic Number problem is in $\xp$ when parameterized by directed clique-width.
Further, we show that the Dichromatic Number problem is $\w[1]$-hard on symmetric digraphs when parameterized by directed clique-width.
Consequently, there is no $\fpt$-algorithm for the Dichromatic Number problem parameterized by directed clique-width under reasonable assumptions. The best parameterized complexity which can be achieved is given by an $\xp$-algorithm.
Furthermore, we apply definability within monadic second-order logic (MSO)
in order to show that the Dichromatic Number problem is in $\fpt$ when parameterized by directed clique-width
and $r$, which implies that for every integer $r$ it holds that $\DCN_{r}$
is in $\fpt$ when parameterized by directed clique-width.
Since the directed clique-width of a digraph is at most its directed modular width \cite{SW20},
we reprove the existence of an $\xp$-algorithm for $\DCN$
and an $\fpt$-algorithm for $\DCN_{r}$
parameterized by directed modular width \cite{SW19}.
On the other hand, there exist several classes of digraphs of bounded directed clique-width and
unbounded directed modular width, which implies that directed clique-width is the more
powerful parameter and thus the results of \cite{SW19} do not imply any parameterized algorithm
for directed clique-width.
In Table \ref{tab} we summarize the known results for $\DCN$ and $\DCN_r$ with respect to different parameterizations.
\begin{table}[h!]
\begin{center}
\begin{tabular}{l||ll|ll|}
parameter & \multicolumn{2}{c|}{$\DCN$} &\multicolumn{2}{c|}{$\DCN_r$} \\
\hline
directed tree-width & $\not\in\xp$ & Corollary \ref{cor-xp-ro} & $\not\in\xp$ & Corollary \ref{cor-xp-ro} \\
\hline
directed path-width & $\not\in\xp$ & Corollary \ref{cor-xp-ro} & $\not\in\xp$ &Corollary \ref{cor-xp-ro} \\
\hline
DAG-width &$\not\in\xp$ & Corollary \ref{cor-xp-ro} & $\not\in\xp$ &Corollary \ref{cor-xp-ro} \\
\hline
Kelly-width & $\not\in\xp$ & Corollary \ref{cor-xp-ro} & $\not\in\xp$ &Corollary \ref{cor-xp-ro} \\
\hline
directed modular width & \fpt & \cite{SW19} & \fpt & \cite{SW19} \\
\hline
directed clique-width & \w[1]-hard & Corollary \ref{hpcw} & \fpt & Corollary \ref{fpt-dc} \\
& \xp & Corollary \ref{xp-dc} & & \\
\hline
standard parameter $r$ & $\not\in\xp$ & Corollary \ref{cor-xp-r} & /// & \\
\hline
directed clique-width $+$ $r$ & \fpt & Theorem \ref{fpt-cw-r} & /// & \\
\hline
clique-width of $\un(G)$ & $\not\in\fpt$ & Corollary \ref{cor-fpt-un} & open & \\
\hline
number of vertices $n$ & \fpt & Corollary \ref{fpt-n} & \fpt & Corollary \ref{fpt-n} \\
\end{tabular}
\end{center}
\caption{Complexity of $\DCN$ and $\DCN_r$ under different parameterizations.
We assume that $\p\neq \np$. The ``///'' entries indicate that the considered parameter
makes no sense once $r$ is taken out of the instance.
\label{tab}}
\end{table}
For directed co-graphs, which is a class of digraphs of directed clique-width 2 \cite{GWY16}, and several
generalizations we even show a linear time solution for computing the
dichromatic number and an optimal acyclic coloring. Furthermore, we conclude that directed co-graphs and
the considered generalizations lead to subclasses of perfect digraphs \cite{AH15}.
For directed cactus forests, which is a set of digraphs of directed tree-width 1 \cite{GR19}, the
results of \cite{Wie20} and \cite{MSW19} lead to an upper bound of $2$ for the
dichromatic number and that an optimal acyclic coloring
can be computed in polynomial time. We show that this even can be done in linear time.
\section{Preliminaries}\label{intro}
We use the notations of Bang-Jensen and Gutin \cite{BG18} for graphs and digraphs.
\subsection{Directed graphs}
A {\em directed graph} or {\em digraph} is a pair $G=(V,E)$, where $V$ is
a finite set of {\em vertices} and
$E\subseteq \{(u,v) \mid u,v \in V,~u \not= v\}$ is a finite set of ordered pairs of distinct
vertices called {\em arcs} or {\em directed edges}.
For a vertex $v\in V$, the sets $N^+(v)=\{u\in V~|~ (v,u)\in E\}$ and
$N^-(v)=\{u\in V~|~ (u,v)\in E\}$ are called the {\em set of all successors}
and the {\em set of all predecessors} of $v$.
The {\em outdegree} of $v$, $\text{outdegree}(v)$ for short, is the number
of successors of $v$ and the {\em indegree} of $v$, $\text{indegree}(v)$ for short,
is the number of predecessors of $v$.
A digraph $G'=(V',E')$ is a {\em subdigraph} of digraph $G=(V,E)$ if $V'\subseteq V$
and $E'\subseteq E$. If every arc of $E$ with both end vertices in $V'$ is in
$E'$, we say that $G'$ is an {\em induced subdigraph} of digraph $G$ and we
write $G'=G[V']$.
The {\em out-degeneracy} of a digraph $G$ is the least integer $d$
such that $G$ and all subdigraphs of $G$ contain a vertex of outdegree at most $d$.
For some given digraph $G=(V,E)$ we define
its {\em underlying undirected graph} by ignoring the directions of the arcs, i.e.
$\un(G)=(V,\{\{u,v\}~|~(u,v)\in E, u,v\in V\})$.
There are several ways to define a digraph $G=(V,E)$ from an undirected
graph $G'=(V,E')$.
If we replace every edge $\{u,v\}\in E'$ by
\begin{itemize}
\item
both arcs $(u,v)$ and $(v,u)$, we refer to $G$ as a {\em complete biorientation} of $G'$.
Since in this case $G$ is well defined by $G'$ we also denote
it by $\overleftrightarrow{G'}$.
Every digraph $G$ which can be obtained by a complete biorientation of some undirected
graph $G'$ is called a {\em complete bioriented graph} or {\em symmetric digraph}.
\item
one of the arcs $(u,v)$ and $(v,u)$, we refer to $G$ as an {\em orientation} of $G'$.
Every digraph $G$ which can be obtained by an orientation of some undirected
graph $G'$ is called an {\em oriented graph}.
\end{itemize}
For a digraph $G=(V,E)$ an arc $(u,v)\in E$ is {\em symmetric} if $(v,u)\in E$.
Thus, each bidirectional arc is symmetric. Further, an arc is {\em asymmetric} if it is not symmetric.
We define the symmetric part of $G$ as $\sym(G)$,
which is the spanning subdigraph of $G$ that contains exactly the symmetric arcs of $G$.
Analogously, we define the asymmetric part of $G$ as $\asym(G)$, which is the spanning
subdigraph with only asymmetric arcs.
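As a small illustration (arc sets of tuples; names ours), the arc sets of these two spanning subdigraphs can be extracted as follows:

```python
def sym_part(arcs):
    # Symmetric arcs: those whose reverse arc is also present.
    arcs = set(arcs)
    return {(u, v) for (u, v) in arcs if (v, u) in arcs}

def asym_part(arcs):
    # Asymmetric arcs: everything that is not symmetric.
    return set(arcs) - sym_part(arcs)
```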
By $\overrightarrow{P_n}=(\{v_1,\ldots,v_n\},\{ (v_1,v_2),\ldots, (v_{n-1},v_n)\})$, $n \ge 2$,
we denote the directed path on $n$ vertices,
by $\overrightarrow{C_n}=(\{v_1,\ldots,v_n\},\{(v_1,v_2),\ldots, (v_{n-1},v_n),(v_n,v_1)\})$, $n \ge 2$,
we denote the directed cycle on $n$ vertices.
A {\em directed acyclic graph (DAG)} is a digraph without any $\overrightarrow{C_n}$,
for $n\geq 2$, as subdigraph. A vertex $v$ is {\em reachable} from a vertex $u$ in $G$ if $G$ contains
a $\overrightarrow{P_n}$ as a subdigraph having start vertex $u$ and
end vertex $v$.
A digraph is {\em odd-cycle free} if it does not contain a $\overrightarrow{C_n}$,
for odd $n\geq 3$, as subdigraph.
A digraph $G$ is planar if $\un(G)$ is planar.
A digraph is {\em even} if for every
0-1-weighting of the edges it contains a directed cycle of even total weight.
\subsection{Acyclic coloring of directed graphs}
We consider the approach for coloring digraphs given in \cite{NL82}.
A set $V'$ of vertices of a digraph $G$ is called {\em acyclic} if $G[V']$ is acyclic.
\begin{definition}[Acyclic graph coloring \cite{NL82}]\label{def-dc}
An \emph{acyclic $r$-coloring} of a digraph $G=(V,E)$ is a mapping $c:V\to \{1,\ldots,r\}$,
such that the color classes $c^{-1}(i)$ for $1\leq i\leq r$ are acyclic.
The {\em dichromatic number} of $G$, denoted by $\vec{\chi}(G)$, is the smallest $r$,
such that $G$ has an acyclic $r$-coloring.
\end{definition}
There are several works on acyclic graph coloring \cite{BFJKM04,Moh03,NL82} including several
recent works~\cite{LM17,MSW19,SW20}. The following observations support that the dichromatic number
can be regarded as a natural counterpart of the well known
chromatic number $\chi(G)$ for undirected graphs $G$.
\begin{observation}\label{obs-dicol}
For every symmetric directed graph $G$ it holds that
$\vec{\chi}(G) = \chi(\un(G))$.
\end{observation}
\begin{observation} For every directed graph $G$ it holds that
$\vec{\chi}(G)\leq \chi(\un(G))$.
\end{observation}
\begin{observation}\label{le-ubdigraph}
Let $G$ be a digraph and let $H$ be a subdigraph
of $G$. Then, $\vec{\chi}(H)\leq \vec{\chi}(G)$.
\end{observation}
We consider the following problem.
\begin{desctight}
\item[Name] Dichromatic Number ($\DCN$)
\item[Instance] A digraph $G=(V,E)$ and a positive integer $r \leq |V|$.
\item[Question] Is there an acyclic $r$-coloring for $G$?
\end{desctight}
If $r$ is constant and not part of the input, the corresponding problem
is denoted by $r$-Dichromatic Number ($\DCN_{r}$). Even $\DCN_{2}$ is NP-complete \cite{FHM03}.
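To make Definition \ref{def-dc} concrete, the following Python sketch (our own illustration, exponential in $n$ and far from the algorithms discussed in this paper) checks acyclicity of every color class with Kahn's algorithm and determines $\vec{\chi}(G)$ by brute force:

```python
from itertools import product

def is_acyclic(vertices, arcs):
    # Kahn's algorithm: repeatedly delete vertices of indegree 0;
    # the digraph is acyclic iff every vertex gets deleted.
    indeg = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(vertices)

def dichromatic_number(vertices, arcs):
    # Brute force over all colorings -- for illustration only.
    vs = sorted(vertices)
    for r in range(1, len(vs) + 1):
        for assign in product(range(r), repeat=len(vs)):
            colour = dict(zip(vs, assign))
            classes = {}
            for v in vs:
                classes.setdefault(colour[v], set()).add(v)
            if all(is_acyclic(c, [(u, w) for (u, w) in arcs
                                  if u in c and w in c])
                   for c in classes.values()):
                return r
    return 0  # empty vertex set
```

For example, the directed triangle $\overrightarrow{C_3}$ has dichromatic number $2$, while its complete biorientation needs $3$ colors, in line with Observation \ref{obs-dicol}.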
\section{Acyclic coloring of special digraph classes}
As recently mentioned in \cite{SW19}, only few classes of digraphs for which the dichromatic number
can be found in polynomial time are known.
\begin{observation}
The set of DAGs is equal to the set of digraphs
of dichromatic number $1$.
\end{observation}
\begin{proposition}[\cite{NL82}]
Every odd-cycle free digraph has dichromatic number at most $2$.
\end{proposition}
\begin{proposition}[\cite{MSW19}]
Every non-even digraph has dichromatic number at most $2$.
\end{proposition}
Thus, given a DAG its dichromatic number is always equal to $1$.
Further, for some given odd-cycle free digraph or non-even digraph
we first check in linear time whether the digraph is a DAG and thus has
dichromatic number $1$ or otherwise state that it has dichromatic number $2$.
\begin{corollary}
For DAGs, odd-cycle free digraphs, and non-even digraphs
the dichromatic number can be computed in linear time.
\end{corollary}
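A minimal sketch of this procedure (valid only under the promise that the input digraph is a DAG, odd-cycle free, or non-even; the representation is ours):

```python
def dichromatic_if_special(vertices, arcs):
    # For a DAG, odd-cycle free, or non-even digraph the answer
    # is 1 iff the digraph is acyclic, and 2 otherwise.
    indeg = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return 1 if removed == len(vertices) else 2
```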
\subsection{Acyclic coloring of perfect digraphs}
The dichromatic number and the clique number are the two basic concepts
for the class of perfect digraphs~\cite{AH15}.
The {\em clique number} of a digraph $G$, denoted by $\omega_d(G)$, is
the number of vertices in a largest complete bioriented subdigraph of $G$.
\begin{definition}[Perfect digraphs \cite{AH15}]\label{defp}
A digraph $G$ is {\em perfect} if, for every induced subdigraph $H$ of $G$, the dichromatic
number $\vec{\chi}(H)$ equals the clique number $\omega_d(H)$.
\end{definition}
An undirected graph $G$ is perfect if and only if its complete biorientation $\overleftrightarrow{G}$
is a perfect digraph. Thus, Definition \ref{defp} is a generalization of perfectness to digraphs.
While for undirected perfect graphs more than a hundred subclasses have been defined and studied \cite{Hou06},
for perfect digraphs there are only very few subclasses known. Obviously, DAGs and subclasses
such as series-parallel digraphs, minimal series-parallel digraphs and series-parallel order
digraphs \cite{VTL82} are perfect digraphs.
By \cite{AH15} the dichromatic number of a perfect digraph $G$
can be found by the chromatic number of $\un(\sym(G))$,
which is possible in polynomial time \cite{GLS81}.
\begin{proposition}[\cite{AH15}]\label{perf-col}
For every perfect digraph the Dichromatic Number problem can be solved in polynomial time.
\end{proposition}
We show how to find an optimal
acyclic coloring for directed co-graphs and special digraphs of directed
tree-width one in linear time.
\subsection{Acyclic coloring of directed co-graphs}
Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two vertex-disjoint digraphs.
The following opera\-tions
have been considered by Bechet et al.\ in \cite{BGR97}.
\begin{itemize}
\item
The {\em disjoint union} of $G_1$ and $G_2$,
denoted by $G_1 \oplus G_2$,
is the digraph with vertex set $V_1\cup V_2$ and
arc set $E_1\cup E_2$.
\item
The {\em series composition} of $G_1$ and $G_2$,
denoted by $G_1\otimes G_2$,
is defined by their disjoint union plus all possible arcs between
vertices of $G_1$ and $G_2$.
\item
The {\em order composition} of $G_1$ and $G_2$,
denoted by $G_1\oslash G_2$,
is defined by their disjoint union plus all possible arcs from
vertices of $G_1$ to vertices of $G_2$.
\end{itemize}
The following transformation has been considered by Johnson et al.\ in \cite{JRST01}
and generalizes the operations disjoint union and order composition.
\begin{itemize}
\item
A digraph $G$ is obtained by a {\em directed union} of
$G_1$ and $G_2$, denoted by $G_1 \ominus G_2$, if $G$ is a
subdigraph of the order composition $G_1 \oslash G_2$
and contains the disjoint union $G_1 \oplus G_2$ as a subdigraph.
\end{itemize}
We recall the definition of directed co-graphs from \cite{CP06}.
\begin{definition}[Directed co-graphs \cite{CP06}]\label{dcog}
The class of {\em directed co-graphs} is recursively defined as follows.
\begin{enumerate}
\item Every digraph with a single vertex $(\{v\},\emptyset)$,
denoted by $v$, is a {\em directed co-graph}.
\item If $G_1$ and $G_2$ are vertex-disjoint directed co-graphs, then
\begin{enumerate}
\item
the disjoint union
$G_1\oplus G_2$,
\item
the series composition
$G_1 \otimes G_2$, and
\item
the order composition
$G_1\oslash G_2$ are {\em directed co-graphs}.
\end{enumerate}
\end{enumerate}
\end{definition}
Every expression $X$ using the four operations of Definition \ref{dcog}
is called a {\em di-co-expression} and
$\g(X)$ is the defined digraph.
As undirected co-graphs can be characterized by forbidding the $P_4$,
directed co-graphs can be characterized likewise by
excluding the eight forbidden induced subdigraphs \cite{CP06}.
For every directed co-graph we can define a tree structure
denoted as {\em di-co-tree}. It is an ordered rooted tree whose
leaves represent the vertices of the digraph and
whose inner nodes correspond
to the operations applied on the subexpressions defined by the subtrees.
For every directed co-graph one can construct a di-co-tree in linear time \cite{CP06}.
Directed co-graphs are interesting from an algorithmic point of view
since several hard graph problems can be solved in
polynomial time by dynamic programming along the tree structure of
the input graph, see \cite{BM14,Gur17a,GHKRRW20,GKR19f,GKR19d,GR18c,Ret98}.
Moreover, directed co-graphs are very useful for the reconstruction
of the evolutionary history of genes or species using genomic
sequence data \cite{HSW17,NEMWH18}.
The set of {\em extended directed co-graphs}
was introduced in \cite{GR18f} by allowing
the directed union and series composition of defined digraphs which leads
to a superclass of directed co-graphs.
Also for every extended directed co-graph we can define a tree structure,
denoted as {\em ex-di-co-tree}.
For the class of extended directed co-graphs it remains open how to compute an
ex-di-co-tree.
\begin{lemma}\label{le-dec} Let $G_1$ and $G_2$ be two vertex-disjoint directed graphs.
Then, the following equations hold:
\begin{enumerate}
\item $\vec{\chi}((\{v\},\emptyset))=1$
\item $\vec{\chi}(G_1 \oplus G_2) = \vec{\chi}(G_1\oslash G_2) = \vec{\chi}(G_1\ominus G_2) = \max(\vec{\chi} (G_1), \vec{\chi}(G_2))$
\item $\vec{\chi}(G_1\otimes G_2)= \vec{\chi}(G_1) + \vec{\chi}(G_2)$
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
$\vec{\chi}((\{v\},\emptyset)) = 1$ is obviously clear.
\item We first show $\vec{\chi}(G_1 \oplus G_2) \geq \max(\vec{\chi} (G_1), \vec{\chi}(G_2))$.
Since the digraphs $G_1$ and $G_2$ are induced subdigraphs of digraph $G_1 \oplus G_2$,
both values $\vec{\chi} (G_1)$ and $\vec{\chi}(G_2)$ lead to
a lower bound for the number of necessary colors
of the combined graph by Observation~\ref{le-ubdigraph}.
For $\vec{\chi}(G_1 \oplus G_2) \leq \max(\vec{\chi} (G_1), \vec{\chi}(G_2))$ observe
that the disjoint union operation does not create any new arcs, so we
can combine the color classes of $G_1$ and $G_2$.
The results for the order composition and directed union follow by the same arguments.
\item We first show $\vec{\chi}(G_1\otimes G_2) \geq \vec{\chi}(G_1) + \vec{\chi}(G_2)$.
Since $G_1$ and $G_2$ are induced subdigraphs of the combined
graph, both values $\vec{\chi} (G_1)$ and $\vec{\chi}(G_2)$ lead to
a lower bound for the number of necessary colors
of the combined graph by Observation~\ref{le-ubdigraph}. Further,
the series operation implies that
every vertex in $G_1$ is on a cycle of length two with every vertex
of $G_2$, so no vertex in $G_1$ can be
colored in the same way as a vertex in $G_2$.
Thus, $\vec{\chi}(G_1) + \vec{\chi}(G_2)$
is a lower bound for the number of necessary colors
of the combined graph.
For $\vec{\chi}(G_1\otimes G_2) \leq \vec{\chi}(G_1) + \vec{\chi}(G_2)$,
let $G_i=(V_i,E_i)$ and let $c_i:V_i\to\{1,\ldots,\vec{\chi}(G_i)\}$
be an acyclic coloring of $G_i$ for $1\leq i \leq 2$.
For $G_1\otimes G_2=(V,E)$ we define a
mapping $c:V\to \{1,\ldots, \vec{\chi}(G_1)+\vec{\chi}(G_2) \}$ as follows.
\[
c(v)=\left\{ \begin{array}{ll}
c_1(v) & {\rm\ if\ } v \in V_1 \\
c_2(v)+ \vec{\chi}(G_1) & {\rm\ if\ } v \in V_{2}. \\
\end{array}\right.
\]
The mapping $c$ satisfies the definition of an acyclic coloring, because
every color class $c^{-1}(j)$, $j \in\{1,\ldots, \vec{\chi}(G_1)+\vec{\chi}(G_2) \}$, is a subset of
$V_1$ or of $V_2$ and thus $c^{-1}(j)$
induces an acyclic digraph in $G_1$ or $G_2$ by assumption.
Since the series operation does not insert any further arcs between two vertices
of $G_1$ or $G_2$, vertex set $c^{-1}(j)$ induces also an acyclic digraph in $G$.
\end{enumerate}
This shows the statements of the lemma.
\end{proof}
Lemma \ref{le-dec} can be used to obtain the following result.
\begin{theorem}\label{algopd}
Let $G$ be a (possibly extended) directed co-graph, given by a di-co-tree or an ex-di-co-tree, respectively.
Then, an optimal acyclic coloring for $G$ and
$\vec{\chi}(G)$ can be computed in linear time.
\end{theorem}
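The recursion of Lemma \ref{le-dec}, evaluated bottom-up along the di-co-tree, directly yields this linear-time algorithm; a minimal sketch with di-co-expressions encoded as nested tuples (an encoding we choose purely for illustration):

```python
def dichromatic_cograph(expr):
    # Nodes: ('v',) for a single vertex, or (op, left, right) with
    # op in {'union', 'order', 'series'}; the directed union behaves
    # like 'union' for this recursion as well.
    if expr[0] == 'v':
        return 1                      # single vertex: one color suffices
    op, left, right = expr
    a, b = dichromatic_cograph(left), dichromatic_cograph(right)
    if op == 'series':                # series: color counts add up
        return a + b
    return max(a, b)                  # union / order: reuse color classes
```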
In order to state the next result, let $\omega(G)$
be the number of vertices in a largest clique in graph $G$.
Since the results of Lemma \ref{le-dec} also hold for $\omega_d$ instead of $\vec{\chi}$
we obtain the following result.
\begin{proposition}\label{prop-p}
Let $G$ be a (possibly extended) directed co-graph, given by a di-co-tree or an ex-di-co-tree, respectively. Then, it holds that
$\vec{\chi}(G)=\chi(\un(\sym(G)))=\omega(\un(\sym(G)))=\omega_d(G)$ and all values can be
computed in linear time.
\end{proposition}
\begin{proposition}\label{dicop}
Every (extended) directed co-graph is a perfect digraph.
\end{proposition}
\begin{proof}
We show the result by verifying Definition \ref{defp}.
Since every induced subdigraph of a (possibly extended) directed co-graph
is again a (possibly extended) directed co-graph, Proposition \ref{prop-p}
implies that every (extended) directed co-graph is a perfect digraph.
\end{proof}
Alternatively,
the last result can be shown by the {\em Strong Perfect Digraph Theorem} \cite{AH15}
since for every (extended) directed co-graph $G$ the symmetric part $\un(\sym(G))$ is an
undirected co-graph and thus a perfect graph. Furthermore, (extended) directed co-graphs
do not contain a directed cycle $\overrightarrow{C_n}$, $n\geq 3$,
as an induced subdigraph \cite{CP06,GKR19h}.
The results of Theorem \ref{algopd} and Propositions \ref{prop-p} and \ref{dicop} can be generalized
to larger classes. Motivated by the idea of tree-co-graphs \cite{Tin89} we
replace in Definition \ref{dcog} the single vertex graphs by a DAG, for which we
know that the dichromatic number and
also the clique number are equal to $1$. Thus, Lemma \ref{le-dec} can be
adapted to compute the dichromatic number in linear time. Furthermore,
following the proof of Proposition~\ref{dicop}, we know that this class
is perfect.
\subsection{Acyclic coloring of directed cactus forests}
We recall the definition of directed cactus forests, which was introduced in \cite{GR19}
as a counterpart for undirected cactus forests.
\begin{definition}[Directed cactus forests \cite{GR19}]\label{def-tl}
A \emph{directed cactus forest} is a digraph, where any two directed cycles
have at most one joint vertex.
\end{definition}
Since a directed cactus forest may contain a directed cycle, its
dichromatic number can be at least $2$.
Further, the set of all cactus forests is a subclass of the digraphs of directed tree-width at most $1$ \cite{GR19}, which
is a subclass of non-even digraphs \cite{Wie20}. By \cite{MSW19} we can conclude that every cactus forest
has dichromatic number at most $2$ and that
for every cactus forest an optimal acyclic coloring
can be computed in polynomial time. In order
to improve the running time, we show the following lemma.
\begin{lemma}\label{cf2}
Let $G$ be a directed cactus forest. Then, an acyclic 2-coloring for $G$
can be computed in linear time.
\end{lemma}
\begin{proof}
Let $G=(V,E)$ be a directed cactus forest.
In order to define an acyclic 2-coloring $c:V\to \{1,2\}$ for $G$,
we traverse a DAG $D_G$ defined as follows.
$D_G$ has a vertex for every cycle $C$ on at least two vertices in $G$ and
a vertex for every vertex of $G$ which is not
involved in any such cycle of $G$.
Two vertices
in $D_G$ which both represent a cycle in $G$ are
adjacent by a single (arbitrarily chosen)
directed edge in $D_G$ if these cycles have a common vertex in $G$.
A vertex in $D_G$ which represents a cycle $C$ in $G$ and
a vertex in $D_G$ which represents a vertex $u$ in $G$ which
is not involved in any cycle in $G$ are adjacent in the same
way as the vertex of $C$ is adjacent to $u$ in $G$.
Two vertices in $D_G$ which both represent a vertex in $G$ which
is not involved in any cycle in $G$ are
adjacent in $D_G$ in the same way as they are adjacent in $G$.
Then, we consider the vertices $v$ of $D_G$ in a topological ordering
in order to define an acyclic 2-coloring for $G$.
\begin{itemize}
\item If $v$ represents a vertex in $G$ which
is not involved in any cycle in $G$, we define $c(v)=2$.
\item If $v$ represents a cycle $C$ in $G$, then we distinguish the following cases.
\begin{itemize}
\item If all vertices of $C$ are unlabeled up to now, we choose an arbitrary vertex
$x\in C$ and define $c(x)=1$ and $c(y)=2$ for all $y\in C-\{x\}$.
\item If we have already labeled one vertex $x$ of $C$ by $1$ then
we define $c(y)=2$ for all $y\in C-\{x\}$.
\item If we have already labeled one vertex $x$ of $C$ by $2$ then we choose an arbitrary vertex
$x'\in C-\{x\}$ and define $c(x')=1$ and further $c(y)=2$ for all $y\in C-\{x,x'\}$.
\end{itemize}
\end{itemize}
By this definition, in every cycle $C$ of $G$ we color exactly one vertex of $C$ by $1$
and all remaining vertices of $G$ by $2$.
Thus, $c$ is a valid acyclic 2-coloring for $G$.
\end{proof}
If $G$ is a DAG, then $c(x)=1$ for
$x\in V$ leads to an acyclic 1-coloring for $G$. Otherwise, Lemma \ref{cf2} leads to an
acyclic 2-coloring for $G$.
\begin{theorem}\label{algocf}
Let $G$ be a directed cactus forest. Then, an optimal acyclic coloring for $G$ and
$\vec{\chi}(G)$ can be computed in linear time.
\end{theorem}
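The coloring scheme behind Lemma \ref{cf2} can be checked on a small instance. The following sketch (with hypothetical vertex names and arbitrarily chosen arc orientations, not part of the paper) colors one vertex of each cycle of a directed cactus forest, consisting of two directed triangles sharing a vertex, by $1$ and the remaining vertices by $2$, and verifies with Kahn's algorithm that both color classes induce acyclic subdigraphs.

```python
from collections import deque

def is_acyclic(vertices, edges):
    """Kahn's algorithm: True iff the subdigraph induced by `vertices` is a DAG."""
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    out = {v: [] for v in vs}
    for u, w in edges:
        if u in vs and w in vs:
            out[u].append(w)
            indeg[w] += 1
    queue = deque(v for v in vs if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vs)

# Two directed triangles sharing vertex 2: a directed cactus forest.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
vertices = range(5)
assert not is_acyclic(vertices, edges)  # the digraph itself contains cycles

# Coloring as in Lemma cf2: exactly one vertex per cycle gets color 1.
color = {0: 1, 1: 2, 2: 2, 3: 1, 4: 2}
for c in (1, 2):
    cls = [v for v in vertices if color[v] == c]
    assert is_acyclic(cls, edges)  # each color class induces a DAG
```

Here color class $1$ is $\{0,3\}$ (no arcs between them) and color class $2$ is $\{1,2,4\}$, whose induced arcs $(1,2)$ and $(4,2)$ form no cycle.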
\section{Parameterized algorithms for directed clique-width}
For undirected graphs the clique-width \cite{CO00} is one of the most important parameters.
Clique-width measures how difficult it is to decompose the graph into a special tree-structure.
From an algorithmic point of view,
only tree-width \cite{RS86} is a more studied graph parameter.
Clique-width is more general than tree-width since
graphs of bounded tree-width have also bounded clique-width \cite{CR05}.
The tree-width can only be bounded by the clique-width under
certain conditions \cite{GW00}.
Many NP-hard graph problems admit poly\-nomial-time solutions when restricted to
graphs of bounded tree-width or graphs of bounded clique-width.
For directed graphs there are several attempts
to generalize tree-width
such as directed path-width, directed tree-width, DAG-width, or Kelly-width, which
are representative for what people are working on, see the surveys
\cite{GHKLOR14,GHKMORS16}. Unfortunately, none of these attempts
allows polynomial-time algorithms for a large class of problems on
digraphs of bounded width. This also holds for $\DCN_r$ and $\DCN$ by the following
theorem.
\begin{theorem}[\cite{MSW19}]
For every $r \geq 2$ the $r$-Dichromatic Number problem is NP-hard even for input digraphs
whose feedback vertex set number is
at most $r+4$ and whose out-degeneracy is at most $r+1$.
\end{theorem}
Thus, even for bounded size of a directed feedback vertex set,
deciding whether a directed graph has dichromatic number at most 2 is NP-complete.
This result rules out $\xp$-algorithms for $\DCN$ and $\DCN_r$ by directed width
parameters such as directed path-width, directed tree-width, DAG-width or Kelly-width,
since all of these are upper bounded by the feedback vertex set number.
\begin{corollary}\label{cor-xp-ro}
The Dichromatic Number problem is not in $\xp$ when parameterized by directed
tree-width, directed path-width, Kelly-width, or DAG-width, unless $\p=\np$.
\end{corollary}
Next, we discuss parameters which allow
$\xp$-algorithms or even $\fpt$-algorithms for $\DCN$ and $\DCN_r$.
The first positive result concerning structural parameterizations of $\DCN$
was recently given in \cite{SW19} using the directed modular width ($\dmws$).
\begin{theorem}[\cite{SW19}]\label{fpt-mw}
The Dichromatic Number problem is in $\fpt$ when parameterized by directed modular width.
\end{theorem}
By \cite{GHKLOR14}, directed
clique-width performs much better than directed path-width, directed tree-width, DAG-width, and
Kelly-width from the parameterized complexity point of view.
Hence, we consider the parameterized complexity of $\DCN$ parameterized by
directed clique-width.
\begin{definition}[Directed clique-width \cite{CO00}]\label{D4}
The {\em directed clique-width} of a digraph $G$, $\dcws(G)$ for short,
is the minimum number of labels needed to define $G$ using the following four operations:
\begin{enumerate}
\item Creation of a new vertex $v$ with label $a$ (denoted by $a(v)$).
\item Disjoint union of two labeled digraphs $G$ and $H$ (denoted by $G\oplus H$).
\item Inserting an arc from every vertex with label $a$ to every vertex with label $b$
($a\neq b$, denoted by $\alpha_{a,b}$).
\item Change label $a$ into label $b$ (denoted by $\rho_{a\to b}$).
\end{enumerate}
An expression $X$ built with the operations defined above
using $k$ labels is called a {\em directed clique-width $k$-expression}.
Let $\g(X)$ be the digraph defined by $k$-expression $X$.
\end{definition}
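The four operations of Definition \ref{D4} can be evaluated directly on a pair consisting of a label map and an arc set. The following sketch (a hypothetical representation, not taken from the cited works) evaluates a directed clique-width 2-expression bottom-up; the expression used below produces the digraph with arcs $v_1\leftrightarrow v_2$, $v_1\to v_3$, and $v_2\to v_3$.

```python
def create(v, a):
    # a(v): new vertex v carrying label a
    return ({v: a}, set())

def union(g, h):
    # G (+) H: disjoint union of two labeled digraphs
    return ({**g[0], **h[0]}, g[1] | h[1])

def alpha(a, b, g):
    # alpha_{a,b}: arc from every a-labeled to every b-labeled vertex
    labels, arcs = g
    new = {(u, w) for u in labels for w in labels
           if labels[u] == a and labels[w] == b}
    return (labels, arcs | new)

def rho(a, b, g):
    # rho_{a->b}: relabel every a-labeled vertex with b
    labels, arcs = g
    return ({v: (b if l == a else l) for v, l in labels.items()}, arcs)

# alpha_{1,2}(rho_{2->1}(alpha_{2,1}(alpha_{1,2}(1(v1) (+) 2(v2)))) (+) 2(v3))
g = alpha(1, 2, union(
        rho(2, 1, alpha(2, 1, alpha(1, 2, union(create('v1', 1),
                                                create('v2', 2))))),
        create('v3', 2)))
assert g[1] == {('v1', 'v2'), ('v2', 'v1'), ('v1', 'v3'), ('v2', 'v3')}
```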
In \cite{GWY16} the set of directed co-graphs is characterized as a proper subset
of the set of all digraphs of directed clique-width 2, obtained by excluding two digraphs,
while for the undirected versions
both classes are equal.
By the given definition every graph of directed clique-width at most $k$ can be represented by a tree structure,
denoted as {\em $k$-expression-tree}. The leaves of the $k$-expression-tree represent the
vertices of the digraph and the inner nodes of the $k$-expression-tree correspond
to the operations applied to the subexpressions defined by the subtrees.
Using the $k$-expression-tree many hard problems have been shown to be
solvable in polynomial time when restricted to graphs of bounded directed clique-width \cite{GWY16,GHKLOR14}.
Directed clique-width is not comparable to the directed variants of tree-width mentioned above,
which can be observed by the set of all complete biorientations of cliques and the set of all acyclic
orientations of grids.
The relation of directed clique-width and directed modular width \cite{SW20} is as follows.
\begin{lemma}[\cite{SW20}] For every digraph $G$ it holds that
$\dcws(G)\leq \dmws(G)$.
\end{lemma}
On the other hand, there exist several classes of digraphs of bounded directed clique-width and
unbounded directed modular width, e.g. even the set of all
directed paths $\{\overrightarrow{P_n} \mid n\geq 1\}$,
the set of all directed cycles $\{\overrightarrow{C_n} \mid n\geq 1\}$, and the set of all minimal series-parallel digraphs \cite{VTL82}.
Thus, the result of \cite{SW19} does not imply any $\xp$-algorithm or $\fpt$-algorithm
for directed clique-width.
\begin{corollary}\label{hpcw}
The Dichromatic Number problem is $\w[1]$-hard on symmetric digraphs and thus, on all digraphs when parameterized by
directed clique-width.
\end{corollary}
\begin{proof}
The Chromatic Number problem
is $\w[1]$-hard parameterized by clique-width \cite{FGLS10a}.
An instance consisting of a graph $G=(V,E)$ and a positive integer $r$ for the
Chromatic Number problem can be transformed
into an instance for the Dichromatic Number problem
on digraph $\overleftrightarrow{G}$ and integer $r$.
Then, $G$ has an $r$-coloring if and only if $\overleftrightarrow{G}$ has an
acyclic $r$-coloring by Observation \ref{obs-dicol}.
Since for every undirected graph $G$ its clique-width equals the directed
clique-width of $\overleftrightarrow{G}$ \cite{GWY16}, we obtain a
parameterized reduction.
\end{proof}
Thus, under reasonable assumptions there is no $\fpt$-algorithm for the Dichromatic Number problem
parameterized by directed clique-width and an $\xp$-algorithm is the best that can be achieved.
Next, we introduce such an $\xp$-algorithm.
Let $G=(V,E)$ be a digraph which is given by some directed clique-width $k$-expression $X$.
For some vertex set $V'\subseteq V$, we define $\reach(V')$
as the set of all pairs $(a,b)$ such that there is a vertex $u\in V'$
labeled by $a$ and there is a vertex $v\in V'$
labeled by $b$ and $v$ is reachable from $u$ in $G[V']$.
\begin{example}\label{ex-f}
We consider the digraph in Figure \ref{F06}.
The given partition into three acyclic sets $V=V_1\cup V_2 \cup V_3$, where $V_1=\{v_1,v_6,v_7\}$,
$V_2=\{v_2\}$, and $V_3= \{v_3,v_4,v_5\}$ leads to the multi set
${\cal M}=\langle \reach(V_1),\reach(V_2), \reach(V_3) \rangle$, where
$\reach(V_1)=\reach(V_3)= \{(1,1),(2,2),(4,4),(1,2),(2,4),(1,4)\}$ and $\reach(V_2) =\{(3,3)\}$.
\end{example}
\begin{figure}[hbtp]
\centerline{\includegraphics[width=0.37\textwidth]{ex_reach.pdf}}
\caption{Digraph in Example \ref{ex-f}. The dashed lines indicate a partition of
the vertex set into three acyclic sets. The numbers at the vertices represent their
labels.
\label{F06}}
\end{figure}
Within a construction of a digraph by directed clique-width operations
only the edge insertion operation can change the reachability between
the present vertices. Next, we show which acyclic sets remain acyclic
when performing an edge insertion operation and how the
reachability information of these sets have to be updated
due to the edge insertion operation.
\begin{lemma}\label{le0}
Let $G=(V,E)$ be a vertex labeled digraph defined by some directed clique-width $k$-expression $X$,
$a\neq b$, $a,b \in\{1,\ldots,k\}$,
and $V'\subseteq V$ be an acyclic set in $G$. Then, vertex set $V'$ remains acyclic in $\g(\alpha_{a,b}(X))$
if and only if $(b,a)\not\in \reach(V')$.
\end{lemma}
\begin{proof}
If $(b,a)\in \reach(V')$, then we know that in $\g(X)$ there is a vertex $y$
labeled by $a$ which is reachable from a vertex $x$ labeled by $b$. That is,
in $\g(X)$ there is a directed path $P$ from $x$ to $y$.
The edge insertion $\alpha_{a,b}$ leads to the arc $(y,x)$ which leads together with
path $P$ to a cycle in $\g(\alpha_{a,b}(X))$.
If $(b,a)\not\in \reach(V')$ and $V'\subseteq V$ is an acyclic set in $\g(X)$,
then there is a topological ordering of $\g(X)[V']$
such that every vertex labeled by $a$ is before every vertex labeled by $b$ in the ordering.
The same ordering is a topological ordering for $\g(\alpha_{a,b}(X))[V']$ which implies that
$V'$ remains acyclic for $\g(\alpha_{a,b}(X))$.
\end{proof}
\begin{lemma}\label{le0x}
Let $G=(V,E)$ be a vertex labeled digraph defined by some directed clique-width $k$-expression $X$,
$a\neq b$, $a,b \in\{1,\ldots,k\}$, $V'\subseteq V$ be an acyclic set in $G$, and $(b,a)\not\in \reach(V')$.
Then, $\reach(V')$ for $\g(\alpha_{a,b}(X))$ can be obtained
from $\reach(V')$ for $\g(X)$ as follows:
\begin{itemize}
\item For every pair $(x,a)\in \reach(V')$ and every pair $(b,y)\in \reach(V')$, we extend $\reach(V')$ by $(x,y)$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $R_1$ be the set $\reach(V')$ for $\g(X)$, $R_2$ be the set $\reach(V')$ for $\g(\alpha_{a,b}(X))$,
and $R$ be the set of pairs constructed in the lemma starting with $\reach(V')$ for $\g(X)$.
Then, it holds $R_1\subseteq R$. Furthermore, the rule given in the lemma
obviously puts feasible pairs into $\reach(V')$
which implies $R\subseteq R_2$. It remains to show $R_2\subseteq R$.
Let $(c,d)\in R_2$. If $(c,d)\in R_1$ then $(c,d)\in R$ as mentioned
above. Thus, let $(c,d)\not\in R_1$. This implies that there is a vertex $u\in V'$
labeled by $c$ and a vertex $v\in V'$
labeled by $d$ and $v$ is reachable from $u$ in $\g(\alpha_{a,b}(X))$.
Since $\g(X)$ is a spanning subdigraph of $\g(\alpha_{a,b}(X))$ and
the vertex labels are not changed by the performed edge insertion operation,
there is a non-empty set $V_u\subseteq V'$ of vertices
labeled by $c$ and there is a non-empty set $V_v\subseteq V'$
of vertices labeled by $d$ such that no vertex of $V_v$ is reachable from
a vertex of $V_u$ in $\g(X)$. By the definition of the edge
insertion operation we know that in $\g(X)$ there is a
vertex $u'$ labeled by $a$ and a vertex $v'$ labeled by $b$ such that
$u'$ is reachable from $u$ and $v$ is reachable from $v'$. Thus,
$(c,a)\in R_1$ and $(b,d)\in R_1$. Our rule given in the statement
leads to $(c,d)\in R$.
\end{proof}
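Lemmas \ref{le0} and \ref{le0x} translate directly into an update rule on the set of label pairs. The following sketch (representing $\reach(V')$ as a plain set of pairs, a choice not prescribed by the paper) returns \texttt{None} when the vertex set stops being acyclic and the updated pair set otherwise.

```python
def update_reach(reach, a, b):
    """Update reach(V') under the edge insertion alpha_{a,b}.

    Returns None if V' does not remain acyclic (Lemma le0),
    otherwise the updated set of label pairs (Lemma le0x).
    """
    if (b, a) in reach:
        return None
    # for every (x,a) and (b,y) already present, add (x,y)
    new = {(x, y) for (x, aa) in reach if aa == a
                  for (bb, y) in reach if bb == b}
    return reach | new

# One a-labeled and one b-labeled vertex, no arcs yet:
r = {(1, 1), (2, 2)}
r = update_reach(r, 1, 2)               # insert arcs from label 1 to label 2
assert r == {(1, 1), (2, 2), (1, 2)}
assert update_reach(r, 2, 1) is None    # would close a directed cycle
```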
For a disjoint partition of $V$ into acyclic
sets $V_1,\ldots,V_s$, let ${\cal M}$ be the multi
set\footnote{We use the notion of a {\em multi set}, i.e., a set
that may have several equal elements. For a multi set with elements $x_1,\ldots,x_n$ we write
${\cal M}=\langle x_1,\ldots,x_n \rangle$.
There is no order on the elements of ${\cal M}$.
The number of times an element $x$ occurs in ${\cal M}$ is denoted by $\psi({\cal M},x)$.
Two multi sets ${\cal M}_1$ and ${\cal M}_2$
are {\em equal} if for each element $x \in {\cal M}_1 \cup {\cal M}_2$,
$\psi({\cal M}_1,x)=\psi({\cal M}_2,x)$, otherwise they are called
{\em different}. The empty multi set is denoted by $\langle \rangle$.}
$\langle \reach(V_1),\ldots,\reach(V_s) \rangle$.
Let $F(X)$ be the set of all mutually different multi sets
${\cal M}$ for all disjoint partitions of vertex set $V$ into acyclic sets.
Every multi set in $F(X)$ consists of nonempty subsets of
$\{1,\ldots,k\} \times \{1,\ldots,k\}$. Each subset can occur
between $0$ and $|V|$ times.
Thus, $F(X)$ has at most $$(|V|+1)^{2^{k^2}-1} \in |V|^{2^{\text{$\mathcal O$}(k^2)}} $$ mutually different multi
sets
and thus, for fixed $k$, is polynomially bounded in the size of $X$.
In order to give a dynamic programming solution along the recursive structure
of a directed clique-width $k$-expression, we show how
to compute $F(a(v))$, $F(X \oplus Y)$ from $F(X)$
and $F(Y)$, as well as $F(\alpha_{a,b}(X))$ and $F(\rho_{a \to b}(X))$ from $F(X)$.
\begin{lemma}\label{le1} Let $a,b\in\{1,\ldots,k\}$, $a\neq b$.
\begin{enumerate}
\item
$F(a(v))=\{ \langle \{(a,a)\} \rangle \}$.
\item
Starting with set $D=\{ \langle \rangle \} \times F(X) \times F(Y)$
extend $D$ by all triples that can be obtained from some
triple $({\cal M},{\cal M}',{\cal M}'') \in D$ by removing a set $L'$ from ${\cal M}'$
or a set $L''$ from ${\cal M}''$ and inserting it into ${\cal M}$, or
by removing both sets and inserting $L' \cup L''$ into ${\cal M}$.
Finally, we choose
$F(X \oplus Y)= \{ {\cal M} \mid ({\cal M},\langle \rangle,\langle \rangle) \in D\}$.
\item
$F(\alpha_{a,b}(X))$ can be obtained from $F(X)$ as follows.
First, we remove from $F(X)$ all multi sets $\langle L_1, \ldots, L_s \rangle$
such that $(b,a) \in L_t$ for some $1\leq t\leq s$.
Afterwards, we modify every remaining multi set
$\langle L_1, \ldots, L_s \rangle$ in $F(X)$ as follows:
\begin{itemize}
\item
For every $L_i$ which contains
a pair $(x,a)$ and a pair $(b,y)$, we extend $L_i$ by $(x,y)$.
\end{itemize}
\item
$F(\rho_{a \to b}(X)) = \{ \langle \rho_{a \to b}(L_1), \ldots, \rho_{a \to b}(L_s) \rangle
\mid \langle L_1,\ldots,L_s \rangle \in F(X) \}$, using the relabeling of a set of label pairs $\rho_{a\to b}(L_i)=\{(\rho_{a\to b}(c),\rho_{a\to b}(d)) \mid (c,d)\in L_i\}$ and the relabeling
of integers $\rho_{a\to b}(c)=b$, if $c=a$ and $\rho_{a\to b}(c)=c$, if $c\neq a$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item In $\g(a(v))$ there is exactly one vertex $v$ labeled by $a$ and thus the only partition of
the vertex set of $\g(a(v))$ into acyclic sets is $\{\{v\}\}$. The corresponding multi set is
$\langle \reach(\{v\}) \rangle=\langle \{(a,a)\} \rangle$.
\item
$F(X \oplus Y) \subseteq \{ {\cal M} \mid ({\cal M},\langle \rangle,\langle \rangle) \in D\}$:
Every acyclic set of $\g(X \oplus Y)$ is either an acyclic set in $\g(X)$, or an
acyclic set in $\g(Y)$, or is the union of two acyclic sets from $\g(X)$ and $\g(Y)$.
All three possibilities are considered when computing $\{ {\cal M} \mid ({\cal M},\langle \rangle,\langle \rangle) \in D\}$ from $F(X)$ and $F(Y)$.
$F(X \oplus Y) \supseteq \{ {\cal M} \mid ({\cal M},\langle \rangle,\langle \rangle) \in D\}$:
Since the operation $\oplus$ does not create any new edges, the acyclic sets
from $\g(X)$, the acyclic sets
from $\g(Y)$, and the union of acyclic sets from $\g(X)$ and $\g(Y)$ remain acyclic sets for $\g(X \oplus Y)$.
\item By Lemma \ref{le0} we have to remove all multi sets $\langle L_1, \ldots, L_s \rangle$ from $F(X)$
for which holds that $(b,a) \in L_t$ for some $1\leq t\leq s$. The remaining multi sets are updated
correctly by Lemma \ref{le0x}.
\item In $\g(X)$ there is a vertex labeled by $d$ which is reachable
from a vertex labeled by $c$ if and only if in
$\g(\rho_{a \to b}(X))$ there is a vertex labeled by $\rho_{a \to b}(d)$ which is reachable
from a vertex labeled by $\rho_{a\to b}(c)$.
\end{enumerate}
This shows the statements of the lemma.
\end{proof}
Since every possible coloring of $G$ is realized in the set $F(X)$, where $X$ is a
directed clique-width $k$-expression for
$G$, it is easy to find a minimum coloring for $G$.
\begin{corollary} \label{cor3}
Let $G=(V,E)$ be a digraph given by a directed clique-width $k$-expression $X$.
There is a partition of $V$ into $r$ acyclic sets
if and only if there is some ${\cal M} \in F(X)$ consisting of $r$ sets of label pairs.
\end{corollary}
\begin{theorem}\label{xp-dca}
The Dichromatic Number problem on digraphs on $n$ vertices given by a directed
clique-width $k$-expression can be solved
in $n^{2^{\text{$\mathcal O$}(k^2)}}$ time.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be a digraph of directed clique-width at most $k$ and $T$ be a $k$-expression-tree
for $G$ with root $w$.
For some vertex $u$ of $T$ we denote by $T_u$
the subtree rooted at $u$ and $X_u$ the $k$-expression defined by $T_u$.
In order to solve the Dichromatic Number problem for $G$,
we traverse $k$-expression-tree $T$ in a bottom-up order.
For every vertex $u$ of $T$ we compute $F(X_u)$ following the rules
given in Lemma \ref{le1}. By Corollary \ref{cor3} we can solve our
problem by $F(X_w)=F(X)$.
Our rules given in Lemma \ref{le1} lead to the following running times.
For every $v\in V$ and $a\in\{1,\ldots,k\}$ set $F(a(v))$
can be computed in $\text{$\mathcal O$}(1)$.
The set $F(X \oplus Y)$ can be computed
in time $(n+1)^{3(2^{k^2}-1)}\in n^{2^{\text{$\mathcal O$}(k^2)}}$ from $F(X)$ and $F(Y)$.
The sets $F(\alpha_{a,b}(X))$ and
$F(\rho_{a \to b}(X))$ can be computed
in time $(n+1)^{2^{k^2}-1}\in n^{2^{\text{$\mathcal O$}(k^2)}}$ from $F(X)$.
In order to bound the number and order of operations within directed clique-width expressions,
we can use the normal form for clique-width expressions defined in \cite{EGW03}.
The proof of Theorem 4.2 in \cite{EGW03} shows that also for directed clique-width expression $X$,
we can assume that for every subexpression, after a disjoint union operation
first there is a sequence of edge insertion operations followed by a sequence of
relabeling operations, i.e. between two disjoint union operations there is no relabeling before
an edge insertion. Since there are $n$ leaves in $T$, we have $n-1$
disjoint union operations, at most $(n-1)\cdot (k-1)$ relabeling operations,
and at most $(n-1)\cdot k(k-1)$ edge insertion operations.
This leads to an overall running time of $n^{2^{\text{$\mathcal O$}(k^2)}}$.
\end{proof}
\begin{example}
We consider $X=\alpha_{1,2}(\rho_{2\to 1}(\alpha_{2,1}(\alpha_{1,2}(1(v_1)\oplus 2(v_2))))\oplus 2(v_3))$.
\begin{itemize}
\item
$F(1(v_1))=\{ \langle \{(1,1)\} \rangle \}$
\item
$F(2(v_2))=\{ \langle \{(2,2)\} \rangle \}$
\item
$F(1(v_1)\oplus 2(v_2))=\{ \langle \{(1,1)\},\{(2,2)\} \rangle, \langle \{(1,1),(2,2)\} \rangle \}$
\item
$F(\alpha_{1,2}(1(v_1)\oplus 2(v_2)))=\{ \langle \{(1,1)\},\{(2,2)\} \rangle, \langle \{(1,1),(2,2),(1,2)\} \rangle \}$
\item
$F(\alpha_{2,1}(\alpha_{1,2}(1(v_1)\oplus 2(v_2))))=\{ \langle \{(1,1)\},\{(2,2)\} \rangle\}$
\item
$F(\rho_{2\to 1}(\alpha_{2,1}(\alpha_{1,2}(1(v_1)\oplus 2(v_2)))))=\{ \langle \{(1,1)\},\{(1,1)\} \rangle\}$
\item
$F(\rho_{2\to 1}(\alpha_{2,1}(\alpha_{1,2}(1(v_1)\oplus 2(v_2))))\oplus 2(v_3))=\{ \langle \{(1,1)\},\{(2,2)\},\{(1,1)\} \rangle ,\langle \{(1,1),(2,2)\},$ $\{(1,1)\} \rangle \}$
\item
$F(\alpha_{1,2}(\rho_{2\to 1}(\alpha_{2,1}(\alpha_{1,2}(1(v_1)\oplus 2(v_2))))\oplus 2(v_3)))=\{ \langle \{(1,1)\},\{(2,2)\},\{(1,1)\} \rangle,$ $\langle \{(1,1),(2,2),(1,2)\},$ $\{(1,1)\} \rangle \}$
\end{itemize}
Thus, in $F(X)$ there is one multi set consisting of two sets of label pairs
and one multi set consisting of three sets of label pairs. This implies that
$\vec{\chi}(\g(X))=2$.
\end{example}
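The rules of Lemma \ref{le1} can also be executed mechanically. The following sketch is a brute-force implementation (representing multi sets as sorted tuples of frozensets, a choice of ours, and without the running-time bookkeeping of Theorem \ref{xp-dca}); it recomputes the sets $F(\cdot)$ of the example above and recovers $\vec{\chi}(\g(X))=2$ as the minimum number of label-pair sets over all multi sets in $F(X)$.

```python
from itertools import combinations, permutations

def norm(sets):
    # multi set of label-pair sets, canonically ordered
    return tuple(sorted(sets, key=sorted))

def F_atom(a):
    return {norm([frozenset({(a, a)})])}

def F_union(F1, F2):
    # every acyclic set of G (+) H is an acyclic set of G, of H,
    # or the union of one of each (Lemma le1, item 2)
    out = set()
    for M1 in F1:
        for M2 in F2:
            n1, n2 = len(M1), len(M2)
            for k in range(min(n1, n2) + 1):
                for i1 in combinations(range(n1), k):
                    rest1 = [M1[i] for i in range(n1) if i not in i1]
                    for i2 in permutations(range(n2), k):
                        rest2 = [M2[i] for i in range(n2) if i not in i2]
                        merged = [M1[i1[j]] | M2[i2[j]] for j in range(k)]
                        out.add(norm(merged + rest1 + rest2))
    return out

def F_alpha(a, b, F):
    # drop multi sets with a set containing (b,a); update the rest
    out = set()
    for M in F:
        if any((b, a) in L for L in M):
            continue
        out.add(norm([L | {(x, y) for (x, aa) in L if aa == a
                                  for (bb, y) in L if bb == b}
                      for L in M]))
    return out

def F_rho(a, b, F):
    rl = lambda c: b if c == a else c
    return {norm([frozenset((rl(c), rl(d)) for (c, d) in L) for L in M])
            for M in F}

# X = alpha_{1,2}(rho_{2->1}(alpha_{2,1}(alpha_{1,2}(1(v1) (+) 2(v2)))) (+) 2(v3))
F = F_union(F_atom(1), F_atom(2))
F = F_alpha(1, 2, F)
F = F_alpha(2, 1, F)
F = F_rho(2, 1, F)
F = F_union(F, F_atom(2))
F = F_alpha(1, 2, F)
assert len(F) == 2
assert min(len(M) for M in F) == 2   # dichromatic number of G(X) is 2
```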
The running time shown in Theorem \ref{xp-dca} leads to the following result.
\begin{corollary}\label{xp-dc}
The Dichromatic Number problem is in $\xp$ when parameterized by directed clique-width.
\end{corollary}
Up to now there are only very few digraph classes for which we can compute a directed clique-width expression in
polynomial time. This holds for directed co-graphs, digraphs of bounded directed modular width,
orientations of trees, and directed cactus forests.
For such classes we can apply the result of Theorem~\ref{xp-dca}.
In order to find directed clique-width expressions for general
digraphs one can use results on the related parameter bi-rank-width \cite{KR13}.
By \cite[Lemma 9.9.12]{BG18} we can use approximate directed clique-width
expressions obtained from rank-decomposition with the
drawback of a single-exponential blow-up on the parameter.
Next, we give a lower bound for the running time of parameterized
algorithms for Dichromatic Number problem parameterized by
the directed clique-width.
\begin{corollary}
The Dichromatic Number problem on digraphs on $n$ vertices parameterized by
the directed clique-width $k$ cannot be solved in time $n^{2^{o(k)}}$, unless ETH fails.
\end{corollary}
\begin{proof}
In order to show the statement we apply the following lower bound for the
Chromatic Number problem parameterized by clique-width given in \cite{FGLSZ18}.
Any algorithm for the Chromatic Number problem parameterized by clique-width
with running time $n^{2^{o(k)}}$ would disprove the Exponential Time Hypothesis.
By Observation \ref{obs-dicol} and
since for every undirected graph $G$ its clique-width equals the directed
clique-width of $\overleftrightarrow{G}$ \cite{GWY16}, any algorithm
for the Dichromatic Number problem parameterized by
the directed clique-width can be used to solve the
Chromatic Number problem parameterized by clique-width.
\end{proof}
In order to show fixed parameter tractability for $\DCN_{r}$ w.r.t.\ the
parameter directed clique-width one can use its definability within monadic second order logic (MSO).
We restrict to $\text{MSO}_1$-logic, which allows propositional logic,
variables for vertices and vertex sets of digraphs, the predicate $\arc(u,v)$ for arcs of digraphs, and
quantifications over vertices and vertex sets \cite{CE12}.
For defining optimization problems we use the $\text{LinEMSO}_1$ framework given in \cite{CMR00}.
The following theorem is from \cite[Theorem 4.2]{GHKLOR14}.
\begin{theorem}[\cite{GHKLOR14}]\label{th-ghk}
For every integer $k$ and $\text{MSO}_1$ formula $\psi$, every $\psi$-$\text{LinEMSO}_1$ optimization problem
is fixed-parameter tractable on digraphs of directed clique-width $k$, with the parameters $k$ and length of the formula $|\psi|$.
\end{theorem}
Next, we will apply this result to $\DCN$.
\begin{theorem}\label{fpt-cw-r}
The Dichromatic Number problem
is in $\fpt$ when parameterized by directed clique-width and $r$.
\end{theorem}
\begin{proof} Let $G=(V,E)$ be a digraph.
We can define $\DCN_{r}$ by an $\text{MSO}_1$ formula
$$\psi=\exists V_1,\ldots, V_r: \left( \text{Partition}(V,V_1,\ldots,V_r) \wedge \bigwedge_{1\leq i \leq r} \text{Acyclic}(V_i) \right)$$
with
$$
\begin{array}{lcl}\text{Partition}(V,V_1,\ldots,V_r)&=&\forall v\in V: ( \bigvee_{1\leq i\leq r}v\in V_i) \wedge \\
&& \nexists v\in V: ( \bigvee_{i\neq j,~ 1\leq i,j \leq r}(v\in V_i\wedge v\in V_j))
\end{array}
$$
and
$$
\begin{array}{lcl}
\text{Acyclic}(V_i)&=&\forall V'\subseteq V_i, V'\neq \emptyset: \exists v\in V': (\text{outdegree}_{V'}(v)=0 \vee \text{outdegree}_{V'}(v)\geq 2),
\end{array} $$
where $\text{outdegree}_{V'}(v)$ denotes the out-degree of $v$ in $G[V']$, which is expressible in $\text{MSO}_1$.
For the correctness we note the following. For every induced cycle on a vertex set $V'$ in
$G$ it holds that every vertex $v\in V'$ satisfies $\text{outdegree}_{V'}(v)=1$.
This does not hold for non-induced cycles. But since for every cycle $V''$ in $G$
there is a subset $V'\subseteq V''$, such that
$G[V']$ is a cycle, we can verify by $\text{Acyclic}(V_i)$ whether $G[V_i]$ is acyclic.
Since it holds that $|\psi|\in \text{$\mathcal O$}(r)$, the statement follows by the result of Theorem \ref{th-ghk}.
\end{proof}
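This correctness argument can be cross-checked by brute force on small digraphs: a vertex set is acyclic exactly when every nonempty subset contains a vertex whose out-degree within the subset differs from $1$. The sketch below (hypothetical instances; the subset test is exponential and serves illustration only) compares this predicate against a standard DAG test.

```python
from itertools import combinations

def is_dag(vs, arcs):
    # repeatedly remove sinks; acyclic iff everything gets removed
    vs = set(vs)
    arcs = {(u, w) for (u, w) in arcs if u in vs and w in vs}
    while vs:
        sinks = {v for v in vs if all(u != v for (u, w) in arcs)}
        if not sinks:
            return False
        vs -= sinks
        arcs = {(u, w) for (u, w) in arcs if u in vs and w in vs}
    return True

def mso_acyclic(vs, arcs):
    # every nonempty subset has a vertex with out-degree != 1 inside the subset
    vs = list(vs)
    for r in range(1, len(vs) + 1):
        for sub in combinations(vs, r):
            s = set(sub)
            if not any(sum(1 for (u, w) in arcs
                           if u == v and w in s) != 1 for v in s):
                return False
    return True

triangle = [(0, 1), (1, 2), (2, 0)]
path = [(0, 1), (1, 2)]
assert is_dag(range(3), path) and mso_acyclic(range(3), path)
assert not is_dag(range(3), triangle) and not mso_acyclic(range(3), triangle)
```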
\begin{corollary}\label{fpt-dc}
For every integer $r$ the $r$-Dichromatic Number problem
is in $\fpt$ when parameterized by directed clique-width.
\end{corollary}
\section{Conclusions and outlook}
The presented methods allow us to compute the dichromatic number
on special classes of digraphs in polynomial time.
The shown parameterized solutions of Corollary \ref{xp-dc} and Theorem \ref{fpt-cw-r}
also hold for any parameter which is
larger than or equal to directed clique-width, such as the parameter directed modular width \cite{SW20} (which
even allows an $\fpt$-algorithm by \cite{SW19,SW20})
and directed linear clique-width \cite{GR19c}. Furthermore, restricted to
semicomplete digraphs the shown parameterized solutions also hold
for directed path-width~\cite[Lemma 2.14]{FP19}.
Further, the hardness result of Corollary \ref{hpcw} rules out
$\fpt$-algorithms for the Dichromatic
Number problem parameterized by width parameters
which can be bounded by directed clique-width. Among these are
the clique-width and rank-width of the underlying undirected
graph, which also have been considered in \cite{Gan09} on the Oriented Chromatic
Number problem.
\begin{corollary}\label{cor-fpt-un}
The Dichromatic Number problem is not in $\fpt$ when parameterized by clique-width of the underlying undirected graph, unless $\p=\np$.
\end{corollary}
From a parameterized point of view, width parameters are so-called structural parameters, which
measure the difficulty of decomposing a graph into a special tree-structure.
Besides these, the standard parameter, i.e.\ the threshold value given in the instance,
is well studied. Unfortunately, for the Dichromatic Number problem the standard parameter
is the number of necessary colors $r$ and
does not even allow an $\xp$-algorithm, since $\DCN_{2}$ is NP-complete \cite{MSW19}.
\begin{corollary}\label{cor-xp-r}
The Dichromatic Number problem is not in $\xp$ when parameterized by $r$, unless $\p=\np$.
\end{corollary}
A positive result can be obtained for the parameter ``number of vertices'' $n$.
Since integer linear programming is fixed-parameter tractable for the
parameter ``number of variables'' \cite{Len83},
the existence of an integer program for $\DCN$ using $\text{$\mathcal O$}(n^2)$ variables
implies an $\fpt$-algorithm for parameter $n$.
\begin{remark}
To formulate $\DCN$ for some directed graph $G=(V,E)$ as an integer program,
we introduce a binary variable $y_j\in\{0,1\}$, $j\in \{1,\ldots,n\}$, such that
$y_j=1$ if and only if color $j$ is used. Further, we use $n^2$ variables
$x_{i,j}\in\{0,1\}$, $i,j\in \{1,\ldots,n\}$, such that
$x_{i,j}=1$ if and only if vertex $v_i$ receives color $j$.
In order to ensure that every color class is acyclic,
we will use the well known characterization that a digraph is acyclic if
and only if it has a topological vertex ordering. A {\em topological vertex ordering}
of a directed graph is a linear ordering of its vertices such that for every
edge $(v_i,v_{i'})$ vertex $v_i$ comes before vertex $v_{i'}$ in the ordering.
The existence of a topological ordering will be verified by further integer
valued variables $t_i\in \{0,\ldots,n-1\}$ realizing the ordering number of vertex $v_i$,
$i\in \{1,\ldots,n\}$.
In order to define only feasible orderings for every color $j\in \{1,\ldots,n\}$,
for every edge $(v_i,v_{i'})\in E$ with $x_{i,j}=1$ and $x_{i',j}=1$ we verify
$t_{i'}\geq t_i+1$ in condition (\ref{01-p3-cn0}).
\begin{eqnarray}
\text{minimize} \sum_{j=1}^{n} y_j \label{01-p1-cn0}
\end{eqnarray}
subject to
\begin{eqnarray}
\sum_{j=1}^{n} x_{i,j} & = &1 \text{ for every } i \in \{1,\ldots,n\} \label{01-p2-cn0} \\
x_{i,j} & \leq & y_j \text{ for every } i,j \in \{1,\ldots,n\} \label{01-p-cn0} \\
t_{i'} & \geq & t_i+1 - n\cdot \left(1- (x_{i,j}\wedge x_{i',j})\right) \text{ for every } (v_i,v_{i'})\in E, j \in \{1,\ldots,n\} \label{01-p3-cn0} \\
y_j &\in & \{0,1\} \text{ for every } j \in \{1,\ldots,n\} \label{01-p4-cn0} \\
t_i &\in & \{0,\ldots,n-1\} \text{ for every } i \in \{1,\ldots,n\} \label{01-pc-cn0} \\
x_{i,j} &\in & \{0,1\} \text{ for every } i,j \in \{1,\ldots,n\} \label{01-p5-cn0}
\end{eqnarray}
The constraints in (\ref{01-p3-cn0}) contain the conjunction $x_{i,j}\wedge x_{i',j}$ and thus are
not linear, but they can be transformed into linear constraints for integer
programming \cite{Gur14}, e.g.\ by replacing $1-(x_{i,j}\wedge x_{i',j})$ with $2-x_{i,j}-x_{i',j}$.
\end{remark}
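As a sanity check for such formulations, the dichromatic number of a small digraph can also be computed without integer programming, by exhaustive search over all colorings, testing each color class for acyclicity. A brute-force sketch (with a hypothetical example digraph; exponential time, for illustration only):

```python
from itertools import product

def is_dag(vs, arcs):
    # repeatedly remove sinks; acyclic iff everything gets removed
    vs = set(vs)
    arcs = {(u, w) for (u, w) in arcs if u in vs and w in vs}
    while vs:
        sinks = {v for v in vs if all(u != v for (u, w) in arcs)}
        if not sinks:
            return False
        vs -= sinks
        arcs = {(u, w) for (u, w) in arcs if u in vs and w in vs}
    return True

def dichromatic_number(n, arcs):
    # smallest r such that some r-coloring has only acyclic color classes
    for r in range(1, n + 1):
        for col in product(range(r), repeat=n):
            if all(is_dag([v for v in range(n) if col[v] == c], arcs)
                   for c in range(r)):
                return r

# digraph with a 2-cycle v0 <-> v1 and arcs into v2
arcs = [(0, 1), (1, 0), (0, 2), (1, 2)]
assert dichromatic_number(3, arcs) == 2
assert dichromatic_number(3, [(0, 1), (1, 2)]) == 1
```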
\begin{corollary}\label{fpt-n}
The Dichromatic Number problem
is in $\fpt$ when parameterized by the number of vertices $n$.
\end{corollary}
It remains to verify whether the running time of our $\xp$-algorithm for $\DCN$ can
be improved to $n^{2^{\text{$\mathcal O$}(k)}}$, which is possible for the Chromatic Number
problem by \cite{EGW01a,KR03}.
Further, it remains open whether the
hardness of Corollary \ref{hpcw} also holds for special digraph classes and for
directed linear clique-width~\cite{GR19c}.
Furthermore, the existence of an $\fpt$-algorithm for $\DCN_r$ w.r.t.\ parameter
clique-width of the underlying undirected graph is open, see
Table \ref{tab}.
\section*{Acknowledgements} \label{sec-a}
The work of the second and third author was supported
by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) -- 388221852.
\bibliographystyle{alpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
\label{s:intro}
Let $G$ be a finite abelian group of order $n$ and let $\Delta_{n-1}$ denote the $(n-1)$-simplex on the vertex set $G$. The \emph{sum complex} $X_{A,k}$ associated to a subset $A \subset G$ and $k < n$, is the $k$-dimensional simplicial complex obtained
by taking the full $(k-1)$-skeleton of $\Delta_{n-1}$, together with all $(k+1)$-subsets $\sigma \subset G$ that satisfy $\sum_{x \in \sigma} x \in A$.
\\
{\bf Example:} Let $G={\fam\bbfam\twelvebb Z}_7$ be the cyclic group of order $7$, and let $A=\{0,1,3\}$. The sum complex $X_{A,2}$ is depicted in Figure \ref{fig:xa}. Note that $X_{A,2}$
is obtained from a $7$-point triangulation of the real projective plane ${\fam\bbfam\twelvebb R}{\fam\bbfam\twelvebb P}^2$ (Figure \ref{fig:rp2}) by adding the faces $\{2,3,5\}$, $\{0,2,6\}$ and $\{1,2,4\}$. $X_{A,2}$ is clearly homotopy equivalent to ${\fam\bbfam\twelvebb R}{\fam\bbfam\twelvebb P}^2$.
\begin{figure}
\subfloat[A $7$-point triangulation of ${\fam\bbfam\twelvebb R}{\fam\bbfam\twelvebb P}^2$]
{\label{fig:rp2}
\scalebox{0.6}{\includegraphics{pp7p.eps}}}
\hspace{30pt}
\subfloat[$X_{A,2}$ for $A=\{0,1,3\} \subset {\fam\bbfam\twelvebb Z}_7$]
{\label{fig:xa}
\scalebox{0.6}{\includegraphics{xa3a.eps}}}
\caption{}
\label{figure1}
\end{figure}
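As a sanity check of the definition, the $2$-faces of $X_{A,2}$ in this example can be enumerated mechanically. The following Python sketch is purely illustrative (the helper function is ours, not part of the formal development); it lists all $3$-subsets of ${\fam\bbfam\twelvebb Z}_7$ whose element sum lies in $A=\{0,1,3\}$:

```python
from itertools import combinations

def top_faces(A, k, n):
    """k-dimensional faces of the sum complex X_{A,k} over Z_n:
    all (k+1)-subsets whose element sum lies in A (mod n)."""
    return [s for s in combinations(range(n), k + 1)
            if sum(s) % n in set(A)]

faces = top_faces({0, 1, 3}, 2, 7)
print(len(faces))          # 15 triangles
print((2, 3, 5) in faces)  # True: 2 + 3 + 5 = 10 = 3 (mod 7)
```

The $15$ triangles produced are the $12$ faces of the triangulation in Figure \ref{fig:rp2} together with the three added faces $\{2,3,5\}$, $\{0,2,6\}$ and $\{1,2,4\}$.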
The sum complexes $X_{A,k}$ may be viewed as $k$-dimensional analogues of Cayley graphs over $G$. They were defined and studied (for cyclic groups) in \cite{LMR10,M14}, where some of their combinatorial and topological properties were established. For example, for $G={\fam\bbfam\twelvebb Z}_p$, the cyclic group of prime order $p$, the homology of $X_{A,k}$ was determined in \cite{LMR10} for coefficient fields ${\fam\bbfam\twelvebb F}$ of characteristic coprime to $p$, and in \cite{M14} for general ${\fam\bbfam\twelvebb F}$. In particular, for ${\fam\bbfam\twelvebb F}={\fam\bbfam\twelvebb C}$ we have the following
\begin{theorem}[\cite{LMR10,M14}]
\label{t:lmr}
Let $A \subset {\fam\bbfam\twelvebb Z}_p$ such that $|A|=m$. Then for $1 \leq k < p$
\begin{equation*}
\label{hmodp}
\dim \tilde{H}_{k-1}(X_{A,k};{\fam\bbfam\twelvebb C})=
\left\{
\begin{array}{cl}
0 & ~{\rm if}~~m \geq k+1, \\
(1-\frac{m}{k+1})\binom{p-1}{k} & ~{\rm if}~~m \leq k+1.
\end{array}
\right.~~
\end{equation*}
\end{theorem}
For a simplicial complex $X$ and $k \geq -1$ let $C^k(X)$ denote the space of complex valued
simplicial $k$-cochains of $X$ and let $d_k:C^k(X)
\rightarrow C^{k+1}(X)$ denote the coboundary operator. For $k
\geq 0$ define the reduced $k$-th Laplacian of $X$ by
$L_k(X)=d_{k-1}d_{k-1}^*+d_k^*d_k$ (see section \ref{s:hodge} for
details). The minimal eigenvalue of $L_k(X)$, denoted by $\mu_k(X)$, is the \emph{$k$-th spectral gap} of $X$.
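For instance, the dimension formula of Theorem \ref{t:lmr} can be checked by machine via the Hodge isomorphism $\tilde{H}^{k-1}(X;{\fam\bbfam\twelvebb C})\cong \ker L_{k-1}(X)$: for $p=7$, $k=2$ and $A=\{0,1\}$ the theorem predicts $\dim \tilde{H}_1(X_{A,2};{\fam\bbfam\twelvebb C})=(1-\frac{2}{3})\binom{6}{2}=5$. The following Python sketch (an illustration only) assembles the coboundary matrices and counts the zero eigenvalues of $L_1(X_{A,2})$:

```python
import numpy as np
from itertools import combinations

p, A = 7, {0, 1}                            # m = 2 <= k + 1 = 3
edges = list(combinations(range(p), 2))
tris = [t for t in combinations(range(p), 3) if sum(t) % p in A]
eidx = {e: i for i, e in enumerate(edges)}

d0 = np.zeros((len(edges), p))              # d_0 : C^0 -> C^1
for i, (u, v) in enumerate(edges):
    d0[i, u], d0[i, v] = -1, 1
d1 = np.zeros((len(tris), len(edges)))      # d_1 : C^1 -> C^2(X_{A,2})
for i, t in enumerate(tris):
    for r in range(3):                      # r-th face, sign (-1)^r
        d1[i, eidx[t[:r] + t[r + 1:]]] = (-1) ** r

L1 = d0 @ d0.T + d1.T @ d1                  # reduced 1-Laplacian of X_{A,2}
print(int((np.linalg.eigvalsh(L1) < 1e-8).sum()))  # 5 = dim of harmonic space
```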
Theorem \ref{t:lmr} implies that if $A$ is a subset of $G={\fam\bbfam\twelvebb Z}_p$ of size $|A|=m \geq k+1$, then $\tilde{H}_{k-1}(X_{A,k};{\fam\bbfam\twelvebb C})=0$ and
hence $\mu_{k-1}(X_{A,k})>0$. Returning to the case of general finite abelian groups $G$, it is then natural to ask for better lower bounds on
the spectral gap $\mu_{k-1}(X_{A,k})$. Note that any $(k-1)$-simplex $\sigma \in \Delta_{n-1}$
is contained in at most $m$ simplices of $X_{A,k}$ of dimension $k$, and therefore
$\mu_{k-1}(X_{A,k}) \leq m+k$ (see (\ref{e:upmu2}) in Section \ref{s:hodge}). Let $\log$ denote natural logarithm. Our main result asserts, roughly speaking, that if $k\geq 1$ and $\epsilon>0$ are fixed and $A$ is a random subset of $G$ of size $m=\lceil c(k,\epsilon) \log n \rceil$, then
$\mu_{k-1}(X_{A,k}) >(1-\epsilon) m$ asymptotically almost surely (a.a.s.).
The precise statement is as follows.
\begin{theorem}
\label{t:logp}
Let $k$ and $\epsilon>0$ be fixed. Let $G$ be an abelian group of order $
n> \frac{2^{10} k^8}{\epsilon^8}$, and
let $A$ be a random subset of $G$ of size
$m=\lceil\frac{4k^2\log n}{\epsilon^2}\rceil$. Then
\begin{equation*}
\label{e:sprob}
{\rm Pr}\big[~\mu_{k-1}(X_{A,k}) < (1-\epsilon)m~\big] < \frac{6}{n}.
\end{equation*}
\end{theorem}
\noindent
{\bf Remarks:}
\\
1. Alon and Roichman \cite{AR94} proved that for any $\epsilon>0$ there exists a constant $c(\epsilon)>0$ such that for any group $G$ of order $n$, if $S$ is a random subset of $G$ of size
$\lceil c(\epsilon) \log n\rceil$ and $m=|S \cup S^{-1}|$, then the spectral gap of the $m$-regular Cayley graph
${\rm C}(G,S \cup S^{-1})$ is a.a.s. at least $(1-\epsilon)m$. Theorem \ref{t:logp} may be viewed as a sort of high dimensional analogue of the Alon-Roichman theorem for abelian groups.
\\
2. For $0 \leq q \leq 1$ let $Y_k(n,q)$ denote the probability space of random complexes obtained by taking the full $(k-1)$-skeleton of $\Delta_{n-1}$ and then adding each $k$-simplex independently with probability $q$. Let $d=q(n-k)$ denote the expected number of $k$-simplices containing a fixed $(k-1)$-simplex. Gundert and Wagner \cite{GW16} proved that for any $\delta>0$ there exists a $C=C(\delta)$ such that if $q\geq \frac{(k+\delta)\log n}{n}$, then $Y \in Y_k(n,q)$ satisfies
a.a.s. $\mu_{k-1}(Y) \geq (1-\frac{C}{\sqrt{d}})d$.
\\
The paper is organized as follows. In Section \ref{s:hodge} we recall
some basic properties of high dimensional Laplacians and their eigenvalues.
In Section \ref{s:ftc} we study the Fourier images of $(k-1)$-cocycles of sum complexes,
and obtain a lower bound (Theorem \ref{t:lbmk}) on $\mu_{k-1}(X_{A,k})$, in terms of the Fourier transform of the indicator function of $A$. This bound is the key ingredient in the proof of Theorem \ref{t:logp} given in Section \ref{s:pfmain}. We conclude in Section \ref{s:con} with some remarks and open problems.
\section{Laplacians and their Eigenvalues}
\label{s:hodge}
Let $X$ be a finite simplicial complex on the vertex
set $V$. Let $X^{(k)}=\{\sigma \in X:\dim \sigma \leq k\}$ be the $k$-th skeleton of $X$, and let $X(k)$ denote the set of $k$-dimensional simplices in
$X$, each taken with an arbitrary but fixed orientation. The face numbers of $X$
are $f_k(X)=|X(k)|$. A simplicial $k$-cochain is a complex valued skew-symmetric function on
all ordered $k$-simplices of $X$. For $k \geq 0$ let $C^k(X)$
denote the space of $k$-cochains on $X$. The $i$-face of an
ordered $(k+1)$-simplex $\sigma=[v_0,\ldots,v_{k+1}]$ is the
ordered $k$-simplex
$\sigma_i=[v_0,\ldots,\widehat{v_i},\ldots,v_{k+1}]$. The
coboundary operator $d_k:C^k(X) \rightarrow C^{k+1}(X)$ is given
by $$d_k \phi (\sigma)=\sum_{i=0}^{k+1} (-1)^i \phi
(\sigma_i)~~.$$ It will be convenient to augment the cochain
complex $\{C^i(X)\}_{i=0}^{\infty}$ with the $(-1)$-degree term
$C^{-1}(X)={\fam\bbfam\twelvebb C}$ with the coboundary map $d_{-1}:C^{-1}(X)
\rightarrow C^0(X)$ given by $d_{-1}(a)(v)=a$ for $a \in {\fam\bbfam\twelvebb C}~,~v
\in V$. Let $Z^k(X)= \ker (d_k)$ denote the space of $k$-cocycles
and let $B^k(X)={\rm Im}(d_{k-1})$ denote the space of
$k$-coboundaries. For $k \geq 0$ let $\tilde{H}^k(X)=Z^k(X)/B^k(X)~$
denote the $k$-th reduced cohomology group of $X$ with complex
coefficients. For each $k \geq -1$ endow $C^k(X)$ with the
standard inner product $(\phi,\psi)_X=\sum_{\sigma \in X(k)}
\phi(\sigma)\overline{\psi(\sigma)}~~$ and the corresponding $L^2$ norm
$||\phi||_X=(\phi,\phi)^{1/2}$.
\\ Let $d_k^*:C^{k+1}(X) \rightarrow C^k(X)$
denote the adjoint of $d_k$ with respect to these standard
inner products. The reduced $k$-th Laplacian of $X$ is the mapping
$$L_k(X)=d_{k-1}d_{k-1}^*+d_k^*d_k : C^k(X) \rightarrow
C^k(X).$$
The $k$-th Laplacian $L_k(X)$ is a positive semi-definite Hermitian operator on $C^k(X)$. Its
minimal eigenvalue, denoted by $\mu_k(X)$, is the \emph{$k$-th spectral gap} of $X$.
For two ordered simplices $\alpha \subset \beta$
let $(\beta:\alpha) \in \{\pm 1\}$ denote the incidence number between $\beta$ and $\alpha$. Let $\deg(\beta)$ denote the number of simplices $\gamma$ of dimension $\dim \beta+1$ that contain $\beta$. For an ordered $k$-simplex $\sigma=[v_0,\ldots,v_k] \in X(k)$, let $1_{\sigma} \in C^k(X)$ be the indicator $k$-cochain of $\sigma$, i.e. $1_{\sigma}(u_0,\ldots,u_k)={\rm sign}(\pi)$ if $u_i=v_{\pi(i)}$
for some permutation $\pi \in S_{k+1}$, and zero otherwise.
By a simple computation (see e.g. (3.4) in \cite{DR02}), the matrix representation of $L_k$ with respect to the standard basis $\{1_{\sigma}\}_{\sigma \in X(k)}$ of $C^k(X)$ is given by
\begin{equation}
\label{e:matform}
L_k(X)(\sigma,\tau)= \left\{
\begin{array}{ll}
\deg(\sigma)+k+1 & \sigma=\tau, \\
(\sigma :\sigma \cap \tau)\cdot (\tau:\sigma \cap \tau)
& |\sigma \cap \tau|=k~,~\sigma \cup \tau \not\in X, \\
0 & {\rm otherwise}.
\end{array}
\right.~~
\end{equation}
\noindent
{\bf Remarks:}
\\
(i) By (\ref{e:matform})
\begin{equation*}
\label{e:upmu1}
{\rm tr} \, L_{k}(X)=\sum_{\sigma\in X(k)} (\deg(\sigma)+k+1)=(k+2)f_{k+1}(X)+(k+1)f_k(X).
\end{equation*}
Hence
\begin{equation}
\label{e:upmu2}
\begin{split}
\mu_k(X) &\leq \frac{{\rm tr} \, L_k(X)}{f_k(X)}
\leq (k+2)\frac{f_{k+1}(X)}{f_k(X)}+k+1 \\
&\leq \max_{\sigma \in X(k)}\deg(\sigma)+k+1.
\end{split}
\end{equation}
\noindent
(ii) The matrix representation of $L_0(X)$
is equal to $J+L$, where $J$ is the $V \times V$ all ones matrix, and $L$ is the graph Laplacian of the $1$-skeleton $X^{(1)}$ of $X$. In particular, $\mu_0(X)$ is equal to the graphical spectral gap $\lambda_2(X^{(1)})$.
\ \\ \\
In the rest of this section we record some well known properties of the coboundary operators and Laplacians on the $(n-1)$-simplex $\Delta_{n-1}$ (Claim \ref{c:simplex}), and on subcomplexes of $\Delta_{n-1}$ that contain its full $(k-1)$-skeleton (Proposition \ref{p:ulap}). Let ${\rm I}$ denote the identity operator on $C^{k-1}(\Delta_{n-1})$.
\begin{claim}
\label{c:simplex}
$~$
\\
(i) The $(k-1)$-Laplacian on $\Delta_{n-1}$ satisfies $L_{k-1}(\Delta_{n-1})=n\cdot {\rm I}$. \\
(ii) There is an orthogonal decomposition
\begin{equation*}
\label{e:orth}
C^{k-1}(\Delta_{n-1})=\ker d_{k-2}^* \oplus {\rm Im} \, d_{k-2}.
\end{equation*}
(iii)
The operators $P={\rm I}-\frac{1}{n}d_{k-2}d_{k-2}^*$ and $Q=\frac{1}{n}d_{k-2}d_{k-2}^*$ are, respectively, the orthogonal projections of
$C^{k-1}(\Delta_{n-1})$ onto $\ker d_{k-2}^*$ and onto ${\rm Im} \, d_{k-2}$.
\end{claim}
\noindent
{\bf Proof.} Part (i) follows from (\ref{e:matform}). Next observe that
$\ker d_{k-2}^* \perp {\rm Im} \, d_{k-2}$ and
$$\dim \ker d_{k-2}^*+\dim {\rm Im} \, d_{k-2}=\dim \ker d_{k-2}^*+\dim {\rm Im} \, d_{k-2}^*=\dim C^{k-1}(\Delta_{n-1}).$$
This implies (ii).
\\
(iii) By (i):
$$n \cdot {\rm I}=L_{k-1}(\Delta_{n-1})=d_{k-2}d_{k-2}^*+d_{k-1}^*d_{k-1},$$
and hence
\begin{equation*}
\label{e:sim1}
n d_{k-2}^*=d_{k-2}^*d_{k-2}d_{k-2}^*+d_{k-2}^*d_{k-1}^*d_{k-1}=d_{k-2}^*d_{k-2}d_{k-2}^*.
\end{equation*}
It follows that
$$d_{k-2}^*P=d_{k-2}^*-\frac{1}{n}d_{k-2}^*d_{k-2}d_{k-2}^*=0,$$
and therefore ${\rm Im} \, P \subset \ker d_{k-2}^*$. Since clearly ${\rm Im} \, Q \subset {\rm Im} \, d_{k-2}$,
it follows that $P$ is the projection onto $\ker d_{k-2}^*$ and $Q$ is the projection onto $
{\rm Im} \, d_{k-2}$.
{\begin{flushright} $\Box$ \end{flushright}}
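Claim \ref{c:simplex}(i) is also easy to confirm numerically for small parameters. The sketch below (illustrative only; the helper function is ours) builds the coboundary matrices of $\Delta_{5}$ in the standard bases and checks that $L_{1}(\Delta_{5})=6\cdot{\rm I}$:

```python
import numpy as np
from itertools import combinations

def coboundary(n, k):
    """Matrix of d_k : C^k(Delta_{n-1}) -> C^{k+1}(Delta_{n-1}) with
    respect to the bases of increasingly ordered simplices."""
    rows = list(combinations(range(n), k + 2))
    cols = {s: j for j, s in enumerate(combinations(range(n), k + 1))}
    D = np.zeros((len(rows), len(cols)))
    for i, tau in enumerate(rows):
        for r in range(k + 2):                      # r-th face of tau
            D[i, cols[tau[:r] + tau[r + 1:]]] = (-1) ** r
    return D

n, k = 6, 2                     # check L_{k-1}(Delta_{n-1}) = n * I
D_low, D_high = coboundary(n, k - 2), coboundary(n, k - 1)
L = D_low @ D_low.T + D_high.T @ D_high
print(np.allclose(L, n * np.eye(L.shape[0])))  # True
```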
\ \\ \\
The variational characterization of the eigenvalues of Hermitian operators implies that for any complex $X$
\begin{equation}
\label{e:minev}
\begin{split}
\mu_{k-1}(X)&= \min\left\{\frac{(L_{k-1}\phi,\phi)_X}{(\phi,\phi)_X}:
0 \neq \phi \in C^{k-1}(X)\right\} \\
&=\min\left\{\frac{\|d_{k-2}^*\phi\|_X^2+\|d_{k-1}\phi\|_X^2}{\|\phi\|_X^2}:
0 \neq \phi \in C^{k-1}(X)\right\}.
\end{split}
\end{equation}
When $X$ contains the full $(k-1)$-skeleton we have the following stronger statement.
\begin{proposition}
\label{p:ulap}
Let $\Delta_{n-1}^{(k-1)} \subset X \subset \Delta_{n-1}$. Then
\begin{equation}
\label{e:ulap}
\mu_{k-1}(X)
=\min\left\{\frac{\|d_{k-1}\phi\|_X^2}{\|\phi\|_X^2}:
0 \neq \phi \in \ker d_{k-2}^* \right\}.
\end{equation}
\end{proposition}
\noindent
{\bf Proof.} The $\leq$ statement in (\ref{e:ulap}) follows directly from (\ref{e:minev}).
We thus have to show the reverse inequality. First note that if $\psi \in C^{k-1}(X)$ then by Claim \ref{c:simplex}(i)
\begin{equation}
\label{e:maxev}
\begin{split}
\|d_{k-1} \psi\|_X^2 &\leq \|d_{k-1} \psi\|_{\Delta_{n-1}}^2 \\
&\leq \|d_{k-2}^* \psi\|_{\Delta_{n-1}}^2+\|d_{k-1} \psi\|_{\Delta_{n-1}}^2 \\
&= (L_{k-1}(\Delta_{n-1}) \psi,\psi)_{\Delta_{n-1}}=n \|\psi\|_X^2.
\end{split}
\end{equation}
Furthermore, if $\phi \in C^{k-1}(X)$ then by Claim \ref{c:simplex}(iii)
\begin{equation}
\label{e:dpq}
d_{k-1}\phi=d_{k-1}P\phi+d_{k-1}Q\phi=d_{k-1}P\phi,
\end{equation}
and
\begin{equation}
\label{e:dpq1}
\begin{split}
\|d_{k-2}^*\phi\|_X^2 &= (d_{k-2}^*\phi,d_{k-2}^*\phi)_X
=(\phi,d_{k-2}d_{k-2}^*\phi)_X \\
&=n(\phi,Q\phi)_X=n\|Q\phi\|_X^2.
\end{split}
\end{equation}
It follows that
\begin{equation*}
\begin{split}
\mu_{k-1}(X)&=\min\left\{\frac{\|d_{k-2}^*\phi\|_X^2+\|d_{k-1}\phi\|_X^2}{\|\phi\|_X^2}:
0 \neq \phi \in C^{k-1}(X)\right\} \\
&=\min\left\{\frac{n\|Q\phi\|_X^2+\|d_{k-1}(P\phi)\|_X^2}{\|Q\phi\|_X^2+\|P\phi\|_X^2}:
0 \neq \phi \in C^{k-1}(X)\right\} \\
&\geq \min\left\{\frac{\|d_{k-1}(P\phi)\|_X^2}{\|P\phi\|_X^2}:
0 \neq \phi \in C^{k-1}(X)\right\} \\
&= \min\left\{\frac{\|d_{k-1}\psi\|_X^2}{\|\psi\|_X^2}:
0 \neq \psi \in \ker d_{k-2}^*\right\},
\end{split}
\end{equation*}
where the first equality is (\ref{e:minev}), the second equality follows from (\ref{e:dpq}) and
(\ref{e:dpq1}), the third inequality follows from (\ref{e:maxev}) with $\psi=P\phi$, and the last equality is a consequence of Claim \ref{c:simplex}(iii).
{\begin{flushright} $\Box$ \end{flushright}}
\section{Fourier Transform and Spectral Gaps}
\label{s:ftc}
Let $G$ be a finite abelian group of order $n$. Let $\mathcal{L}(G)$ denote the space of complex valued functions on $G$ with the standard inner product
$(\phi,\psi)=\sum_{x \in G}\phi(x) \overline{\psi(x)}$ and the corresponding $L^2$ norm $\|\phi\|=(\phi,\phi)^{1/2}$. Let $\widehat{G}$ be the character group of $G$.
The \emph{Fourier Transform} of $\phi \in \mathcal{L}(G)$, is the function
$\widehat{\phi} \in \mathcal{L}(\widehat{G})$ whose value on the character $\chi \in \widehat{G}$ is given by
$\widehat{\phi}(\chi)=\sum_{x \in G} \phi(x) \chi(-x)$. For $\phi,\psi \in \mathcal{L}(G)$ we have the Parseval identity $(\widehat{\phi},\widehat{\psi})=n (\phi,\psi)$, and in particular
$\|\widehat{\phi}\|^2=n\|\phi\|^2$.
Let $G^k$ denote the direct product $G \times \cdots \times G$ ($k$ times). The character group
$\widehat{G^k}$ is naturally identified with $\widehat{G}^k$.
Let $\widetilde{\ccl}(G^k)$ denote the subspace of skew-symmetric functions in $\mathcal{L}(G^k)$.
Then $\widetilde{\ccl}(G^k)$ is mapped by the Fourier transform onto $\widetilde{\ccl}(\widehat{G}^k)$.
Recall that $\Delta_{n-1}$ is the simplex on the vertex set $G$, and
let $X \subset \Delta_{n-1}$ be a simplicial complex that contains the full $(k-1)$-skeleton of $\Delta_{n-1}$.
As sets, we will identify $C^{k-1}(X)=C^{k-1}(\Delta_{n-1})$ with $\widetilde{\ccl}(G^k)$.
Note, however, that the inner products and norms defined on $C^{k-1}(\Delta_{n-1})$ and on $\widetilde{\ccl}(G^k)$
differ by multiplicative constants:
If $\phi,\psi \in C^{k-1}(\Delta_{n-1})=\widetilde{\ccl}(G^k)$ then
$ (\phi,\psi)=k! (\phi,\psi)_{\Delta_{n-1}}$ and $\|\phi\|=\sqrt{k!} \|\phi\|_{\Delta_{n-1}}$.
\ \\ \\
Let $A \subset G$ and let $k < n=|G|$. Let $\chi_0 \in \widehat{G}$ denote the trivial character of $G$
and let $\widehat{G}_+=\widehat{G} \setminus \{\chi_0\}$.
Let $1_A \in \mathcal{L}(G)$ denote the indicator function of $A$, i.e. $1_A(x)=1$ if $x \in A$ and zero otherwise. Then $\widehat{1_A}(\eta)=\sum_{a \in A}\eta(-a)$ for $\eta \in \widehat{G}$.
The main result of this section is the following lower bound on the spectral gap of $X_{A,k}$.
\begin{theorem}
\label{t:lbmk}
\begin{equation*}
\label{e:lbmk}
\mu_{k-1}(X_{A,k}) \geq |A|-k \max\{ |\widehat{1_A}(\eta)|: \eta \in \widehat{G}_+\}.
\end{equation*}
\end{theorem}
\ \\ \\
The proof of Theorem \ref{t:lbmk} will be based on two preliminary results, Propositions \ref{p:bbb}
and \ref{p:mesti}. The first of these is the following Fourier theoretic characterization of
$\ker d_{k-2}^* \subset C^{k-1}(\Delta_{n-1})$.
\begin{proposition}
\label{p:bbb}
Let $\phi \in C^{k-1}(\Delta_{n-1})=\widetilde{\ccl}(G^k)$. Then $d_{k-2}^*\phi =0$ iff ${\rm supp}(\widehat{\phi}) \subset (\widehat{G}_+)^k$.
\end{proposition}
\noindent
{\bf Proof.} If $d_{k-2}^*\phi=0$ then for all $(x_1,\ldots,x_{k-1}) \in G^{k-1}$:
$$
0=d_{k-2}^*\phi(x_1,\ldots,x_{k-1})=\sum_{x_0 \in G} \phi(x_0,x_1,\ldots,x_{k-1}).
$$
Let $(\chi_1,\ldots,\chi_{k-1})$ be an arbitrary element of $\widehat{G}^{k-1}$ and write
$\chi=(\chi_0,\chi_1,\ldots,\chi_{k-1}) \in \widehat{G}^{k}$. Then
\begin{equation*}
\begin{split}
\widehat{\phi}(\chi) &= \sum_{(x_0,\ldots,x_{k-1}) \in G^k} \phi(x_0,x_1,\ldots,x_{k-1})
\prod_{j=1}^{k-1} \chi_j(-x_j) \\
&=\sum_{(x_1,\ldots,x_{k-1}) \in G^{k-1}} \left(\sum_{x_0 \in G} \phi(x_0,x_1,\ldots,x_{k-1}) \right)
\prod_{j=1}^{k-1} \chi_j(-x_j) =0.
\end{split}
\end{equation*}
The skew-symmetry of $\widehat{\phi}$ thus implies that ${\rm supp}(\widehat{\phi}) \subset (\widehat{G}_+)^k$.
The other direction is similar.
{\begin{flushright} $\Box$ \end{flushright}}
\ \\ \\
For the rest of this section let $X=X_{A,k}$. Fix $\phi\in C^{k-1}(X)=\widetilde{\ccl}(G^k)$.
Our next step is to obtain a lower bound on $\|d_{k-1}\phi\|_X$ via the Fourier transform $\widehat{d_{k-1}\phi}$. For $a \in G$ define a function $f_a \in \widetilde{\ccl}(G^k)$ by
\begin{equation*}
\begin{split}
f_a(x_1,\ldots,x_k)&=d_{k-1}\phi\bigl(a-\sum_{i=1}^{k} x_i,x_1,\ldots,x_k\bigr) \\
&=\phi(x_1,\ldots,x_k)+ \sum_{i=1}^k (-1)^i
\phi\bigl(a-\sum_{j=1}^{k}
x_j,x_1,\ldots,\hat{x_i},\ldots,x_k\bigr).
\end{split}
\end{equation*}
By the Parseval identity
\begin{equation}
\label{e:dphi}
\begin{split}
\|d_{k-1} \phi\|_X^2 &= \sum_{\tau \in X(k)} |d_{k-1}\phi(\tau)|^2 \\
&= \frac{1}{(k+1)!} \sum_{\{(x_0,\ldots,x_k) \in G^{k+1}: \{x_0,\ldots,x_k\} \in X\}}
|d_{k-1}\phi(x_0,\ldots,x_k)|^2 \\
&= \frac{1}{(k+1)!} \sum_{a \in A} \sum_{x=(x_1,\ldots,x_k)\in G^k} |d_{k-1}\phi(a-\sum_{i=1}^k x_i,x_1,\ldots,x_k)|^2 \\
&= \frac{1}{(k+1)!}\sum_{a \in A} \sum_{x \in G^k} |f_a(x)|^2 \\
&= \frac{1}{n^k (k+1)!}\sum_{a \in A} \sum_{\chi \in \widehat{G}^k} |\widehat{f_a}(\chi)|^2.
\end{split}
\end{equation}
We next find an expression for $\widehat{f_a}(\chi)$.
Let $T$ be the
automorphism of $\widehat{G}^k$ given by
$$
T(\chi_1,\ldots,\chi_k)=(\chi_2 \chi_1^{-1},\ldots,\chi_{k}\chi_1^{-1},\chi_1^{-1})~.$$
Then $T^{k+1}=I$ and for $1 \leq i \leq k$
\begin{equation}
\label{e:ti}
T^i(\chi_1,\ldots,\chi_k)=(\chi_{i+1}\chi_i^{-1},\ldots,\chi_k\chi_i^{-1},\chi_i^{-1},\chi_1 \chi_i^{-1},
\ldots,\chi_{i-1}\chi_i^{-1}).
\end{equation}
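In additive notation (for $G={\fam\bbfam\twelvebb Z}_n$ one may identify a character with its exponent, so that products of characters become sums mod $n$), the identity $T^{k+1}=I$ and formula (\ref{e:ti}) are easy to check by machine; the following sketch is illustrative only:

```python
def T(c, n):
    """The automorphism T in additive notation over Z_n: the character
    chi_j is identified with its exponent c_j."""
    c1 = c[0]
    return tuple((cj - c1) % n for cj in c[1:]) + ((-c1) % n,)

n, k = 11, 4
chi = (3, 7, 1, 9)
x = chi
for _ in range(k + 1):
    x = T(x, n)
print(x == chi)  # True: T^{k+1} is the identity
```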
\noindent
The following result is a slight extension of Claim 2.2 in \cite{LMR10}.
Recall that $\chi_0$ is the trivial character of $G$.
\begin{claim}
\label{c:ftfa}
Let $\chi=(\chi_1,\ldots,\chi_k) \in \ghat^k$. Then
\begin{equation}
\label{faone}
\widehat{f_a}(\chi)=\sum_{i=0}^{k}(-1)^{ki}
\chi_i(-a) \widehat{\phi} (T^{i} \chi).
\end{equation}
\end{claim}
\noindent
{\bf Proof:} For $1 \leq i \leq k$ let
$\psi_i \in \mathcal{L}(G^k)$ be given by
$$
\psi_i(x_1,\ldots,x_k)=
\phi\bigl(a-\sum_{j=1}^{k}
x_j,x_1,\ldots,\hat{x_i},\ldots,x_k\bigr).$$
Then
$$
\widehat{\psi_i}(\chi)=
\sum_{(x_1,\ldots,x_k) \in G^k}
\phi\bigl(a-\sum_{j=1}^{k}
x_j,x_1,\ldots,\hat{x_i},\ldots,x_k\bigr)
\prod_{j=1}^k \chi_j(-x_j)~~. $$
Substituting
$$
y_j=\left\{
\begin{array}{ll}
a-\sum_{\ell=1}^k x_{\ell} & j=1, \\
x_{j-1} & 2 \leq j \leq i, \\
x_j & i+1 \leq j \leq k,
\end{array}
\right.
$$
it follows that
$$
\prod_{j=1}^k \chi_j(-x_j)=\chi_i^{-1}(a-y_1)\prod_{j=2}^i (\chi_i^{-1}\chi_{j-1})(-y_j) \prod_{j=i+1}^k (\chi_i^{-1}\chi_j)(-y_j)~.$$
Therefore
\begin{equation}
\label{e:psifour}
\begin{split}
\widehat{\psi_i}(\chi)&=\chi_i(-a)
\sum_{y=(y_1,\ldots,y_k)\in G^k}
\phi(y)\chi_i^{-1}(-y_1)\prod_{j=2}^i (\chi_{j-1} \chi_i^{-1}) (-y_j) \prod_{j=i+1}^k
(\chi_j \chi_i^{-1})(-y_j) \\
&=\chi_i(-a)
\widehat{\phi}
(\chi_i^{-1},\chi_1\chi_i^{-1},\ldots,\chi_{i-1}\chi_i^{-1},\chi_{i+1}
\chi_i^{-1},\ldots,\chi_k\chi_i^{-1}) \\
&=\chi_i(-a) (-1)^{i(k-i)} \widehat{\phi}(T^i \chi)~.
\end{split}
\end{equation}
Now (\ref{faone}) follows from (\ref{e:psifour}) since
$f_a=\phi+\sum_{i=1}^k (-1)^i \psi_i$.
{\begin{flushright} $\Box$ \end{flushright}}
\noindent
For $\phi \in \widetilde{\ccl}(G^k)$ and $\chi=(\chi_1,\ldots,\chi_k) \in \widehat{G}^k$ let
\begin{equation*}
\label{e:dchi}
\begin{split}
&D(\phi,\chi)= \{\chi_i \chi_j^{-1}: 0 \leq i < j \leq k~,
~\widehat{\phi}(T^i \chi)\widehat{\phi}(T^j\chi)\neq 0\} \\
&=\{\chi_j^{-1}:1 \leq j \leq k~,~\widehat{\phi}(\chi)\widehat{\phi}(T^j\chi) \neq 0\} \cup
\{\chi_i \chi_j^{-1}: 1 \leq i < j \leq k~,
~\widehat{\phi}(T^i \chi)\widehat{\phi}(T^j\chi)\neq 0\}.
\end{split}
\end{equation*}
Let $D(\phi)=\bigcup_{\chi \in \widehat{G}^k} D(\phi,\chi) \subset \widehat{G}$.
The main ingredient in the proof of Theorem \ref{t:lbmk} is the following
\begin{proposition}
\label{p:mesti}
$$
\|d_{k-1} \phi\|_X^2 \geq \left(|A|- k\max_{\eta \in D(\phi)} |\widehat{1_A}(\eta)|\right)\|\phi\|_X^2.
$$
\end{proposition}
\noindent
{\bf Proof.}
Let $\chi=(\chi_1,\ldots,\chi_k) \in \widehat{G}^k$.
By Claim \ref{c:ftfa}
\begin{equation}
\label{suma}
\begin{split}
&\sum_{a \in A} |\widehat{f_a}(\chi)|^2=\sum_{a \in A} |\sum_{i=0}^{k}(-1)^{ki}
\chi_i(-a) \widehat{\phi} (T^{i} \chi)|^2 \\
&=\sum_{a \in A} \sum_{i,j=0}^{k}(-1)^{k(i+j)}
(\chi_i \chi_j^{-1})(-a) \widehat{\phi} (T^{i} \chi)\overline{\widehat{\phi} (T^{j} \chi)} \\
&= |A|\sum_{i=0}^k |\widehat{\phi} (T^{i} \chi)|^2+ 2\, \text{Re}\sum_{a \in A} \sum_{0 \leq i<j \leq k}(-1)^{k(i+j)}
(\chi_i \chi_j^{-1})(-a)\widehat{\phi}(T^i\chi)\overline{\widehat{\phi}(T^j\chi)} \\
&= |A|\sum_{i=0}^k |\widehat{\phi} (T^{i} \chi)|^2+ 2\, \text{Re}\sum_{0 \leq i<j \leq k}(-1)^{k(i+j)}
\widehat{1_A}(\chi_i\chi_j^{-1})\widehat{\phi}(T^i \chi)\overline{\widehat{\phi}(T^j\chi)} \\
&\geq |A|\sum_{i=0}^k |\widehat{\phi} (T^{i} \chi)|^2 -2\max_{\eta \in D(\phi,\chi)} |\widehat{1_A}(\eta)|\sum_{0 \leq i<j \leq k} |\widehat{\phi}(T^i \chi)|\cdot|\widehat{\phi}(T^j \chi)|.
\end{split}
\end{equation}
Using (\ref{e:dphi}) and summing (\ref{suma}) over all $\chi \in \widehat{G}^k$ it follows that
\begin{equation*}
\label{sumau}
\begin{split}
&n^k (k+1)!\|d_{k-1} \phi\|_X^2 = \sum_{a \in A} \sum_{\chi \in \widehat{G}^k} |\widehat{f_a}(\chi)|^2 \\
&\geq |A|\sum_{i=0}^k \sum_{\chi \in \widehat{G}^k} |\widehat{\phi} (T^{i} \chi)|^2 -2\max_{\eta \in D(\phi)} |\widehat{1_A}(\eta)|\sum_{0 \leq i<j \leq k}
\sum_{\chi \in \widehat{G}^k} |\widehat{\phi}(T^i \chi)|\cdot|\widehat{\phi}(T^j\chi)| \\
&\geq (k+1)|A|\sum_{\chi \in \widehat{G}^k} |\widehat{\phi}(\chi)|^2- k(k+1)\max_{\eta \in D(\phi)} |\widehat{1_A}(\eta)| \, \sum_{\chi \in \widehat{G}^k} |\widehat{\phi}(\chi)|^2 \\
&=(k+1)n^k \sum_{x \in G^k} |\phi(x)|^2 \left(|A|-k\max_{\eta \in D(\phi)} |\widehat{1_A}(\eta)|\right) \\
&= (k+1)!n^k\|\phi\|_X^2\left(|A|- k\max_{\eta \in D(\phi)} |\widehat{1_A}(\eta)|\right).
\end{split}
\end{equation*}
{\begin{flushright} $\Box$ \end{flushright}}
\noindent
{\bf Proof of Theorem \ref{t:lbmk}.} Let $0 \neq \phi \in C^{k-1}(X_{A,k})=\widetilde{\ccl}(G^k)$ such that $d_{k-2}^*\phi=0$. Proposition \ref{p:bbb} implies that
\begin{equation}
\label{e:supht}
{\rm supp}(\widehat{\phi}) \subset (\widehat{G}_+)^k.
\end{equation}
We claim that
$\chi_0 \not\in D(\phi)$. Suppose to the contrary that $\chi_0 \in D(\phi)$. Then there exists
a $\chi=(\chi_1,\ldots,\chi_k) \in \widehat{G}^k$ such that $\chi_0 \in D(\phi,\chi)$,
i.e. $\chi_i=\chi_j$ for some
$0 \leq i < j \leq k$ such that $\widehat{\phi}(T^i \chi)\widehat{\phi}(T^j \chi) \neq 0$. We consider two cases:
\begin{itemize}
\item
If $i=0$ then $\chi_j=\chi_i=\chi_0$ and therefore
$$0 \neq \widehat{\phi}(T^i\chi)=\widehat{\phi}(\chi)=
\widehat{\phi}(\chi_1,\ldots,\chi_{j-1},\chi_0,\chi_{j+1},\ldots,\chi_k),$$
in contradiction of (\ref{e:supht}).
\item
If $i \geq 1$ then $\chi_j \chi_i^{-1}=\chi_0$, and by (\ref{e:ti})
\begin{equation*}
\label{e:e:supht1}
\begin{split}
0&\neq \widehat{\phi}(T^i \chi)=
\widehat{\phi}(\chi_{i+1}\chi_i^{-1},\ldots,\chi_k\chi_i^{-1},\chi_i^{-1},\chi_1 \chi_i^{-1},
\ldots,\chi_{i-1}\chi_i^{-1}) \\
&= \widehat{\phi} (\chi_{i+1}\chi_i^{-1},\ldots,\chi_{j-1} \chi_i^{-1},\chi_0,\chi_{j+1}\chi_i^{-1},\ldots,
\chi_k\chi_i^{-1},\chi_i^{-1},\chi_1 \chi_i^{-1},
\ldots,\chi_{i-1}\chi_i^{-1}),
\end{split}
\end{equation*}
again in contradiction of (\ref{e:supht}).
\end{itemize}
We have thus shown that $D(\phi) \subset \widehat{G}_+$.
Combining Propositions \ref{p:ulap} and \ref{p:mesti} we obtain
\begin{equation*}
\label{e:fin}
\begin{split}
\mu_{k-1}&(X_{A,k})
=\min\left\{\frac{\|d_{k-1}\phi\|_X^2}{\|\phi\|_X^2}:
0 \neq \phi \in \ker d_{k-2}^* \right\} \\
&\geq \min\left\{
\frac{\left(|A|- k\max_{\eta \in D(\phi)}
|\widehat{1_A}(\eta)|\right)\|\phi\|_X^2 }{\|\phi\|_X^2}:
0 \neq \phi \in \ker d_{k-2}^* \right\} \\
&\geq |A|- k\max_{\eta \in \widehat{G}_+} |\widehat{1_A}(\eta)|.
\end{split}
\end{equation*}
{\begin{flushright} $\Box$ \end{flushright}}
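As a concrete instance of Theorem \ref{t:lbmk}, consider again $G={\fam\bbfam\twelvebb Z}_7$, $k=2$ and $A=\{0,1,3\}$ from the introduction. Here $A$ is a perfect difference set (every nonzero residue occurs exactly once as a difference of two elements of $A$), hence $|\widehat{1_A}(\eta)|^2=3-1=2$ for every $\eta \in \widehat{G}_+$, and the theorem yields $\mu_1(X_{A,2})\geq 3-2\sqrt{2}\approx 0.17$. The Python sketch below (illustrative only) confirms this bound numerically:

```python
import numpy as np
from itertools import combinations

n, k, A = 7, 2, {0, 1, 3}
edges = list(combinations(range(n), 2))
tris = [t for t in combinations(range(n), 3) if sum(t) % n in A]
eidx = {e: i for i, e in enumerate(edges)}

d0 = np.zeros((len(edges), n))              # d_0 : C^0 -> C^1
for i, (u, v) in enumerate(edges):
    d0[i, u], d0[i, v] = -1, 1
d1 = np.zeros((len(tris), len(edges)))      # d_1 : C^1 -> C^2(X_{A,2})
for i, t in enumerate(tris):
    for r in range(3):
        d1[i, eidx[t[:r] + t[r + 1:]]] = (-1) ** r

mu = np.linalg.eigvalsh(d0 @ d0.T + d1.T @ d1).min()   # mu_1(X_{A,2})

# Fourier coefficients of 1_A; all nontrivial ones have modulus sqrt(2)
hat = np.exp(-2j * np.pi * np.outer(np.arange(n), sorted(A)) / n).sum(axis=1)
bound = len(A) - k * np.abs(hat[1:]).max()             # = 3 - 2*sqrt(2)
print(mu >= bound - 1e-9)  # True
```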
\section{Proof of Theorem \ref{t:logp}}
\label{s:pfmain}
Let $k \geq 1$ and $0<\epsilon<1$ be fixed, and let $n> \frac{2^{10} k^8}{\epsilon^8}$,
$m=\lceil\frac{4k^2\log n}{\epsilon^2}\rceil$. Let $G$ be an abelian group of order $n$ and
let $\Omega$ denote the uniform probability space of all $m$-subsets of $G$.
Suppose that $A \in \Omega$ satisfies $|\widehat{1_A}(\eta)| \leq \frac{\epsilon m}{k}$ for all
$\eta \in \widehat{G}_+$. Then by Theorem \ref{t:lbmk}
\begin{equation*}
\begin{split}
\mu_{k-1}(X_{A,k}) &\geq |A|-k \max\{ |\widehat{1_A}(\eta)|: \eta \in \widehat{G}_+\} \\
& \geq |A|-k \cdot \frac{\epsilon m}{k}=(1-\epsilon)m.
\end{split}
\end{equation*}
Theorem \ref{t:logp} will therefore follow from
\begin{proposition}
\label{p:cher}
\begin{equation*}
\label{e:cher}
{\rm Pr}_{\Omega}\left[~A \in \Omega~:~\max_{\eta \in \widehat{G}_+}
|\widehat{1_A}(\eta)|>\frac{\epsilon m}{k}~\right]< \frac{6}{n}.
\end{equation*}
\end{proposition}
\noindent
{\bf Proof.}
Let $\eta \in \widehat{G}_+$ be fixed and let $\lambda=\frac{\epsilon m}{k}$.
Let $\Omega'$ denote the uniform probability space $G^m$, and for $1 \leq i \leq m$ let $X_i$ be the random variable defined on $\omega'=(a_1,\ldots,a_m) \in \Omega'$ by
$X_i(\omega')=\eta(-a_i)$. The $X_i$'s are of course independent and satisfy $|X_i|=1$. Furthermore, as $\eta \neq \chi_0$, the expectation of $X_i$ satisfies $E_{\Omega'}[X_i]=\frac{1}{n}
\sum_{x \in G} \eta(-x)=0$. Hence, by the Chernoff bound (see e.g. Theorem A.1.16 in \cite{AS08})
\begin{equation}
\label{e:chernoff1}
\begin{split}
&{\rm Pr}_{\Omega'}\left[~\omega' \in \Omega'~:~|\sum_{i=1}^m X_i(\omega')|>\lambda~\right]
< 2\exp\left(-\frac{\lambda^2}{2m}\right) \\
&=2\exp\left(-\frac{\epsilon^2m}{2k^2}\right) \leq \frac{2}{n^2}.
\end{split}
\end{equation}
Let $\Omega''=\{(a_1,\ldots,a_m) \in G^m: a_i \neq a_j \text{~for~} i \neq j \}$
denote the subspace of $\Omega'$ consisting of all sequences in $G^m$ with pairwise distinct elements. Note that the assumption $n > 2^{10} k^8 \epsilon^{-8}$ implies that
\begin{equation}
\label{e:mp}
\frac{m^2}{n-m} <1.
\end{equation}
Combining (\ref{e:chernoff1}) and (\ref{e:mp}) we obtain
\begin{equation}
\label{e:chernoff2}
\begin{split}
&{\rm Pr}_{\Omega}\left[A \in \Omega:|\widehat{1_A}(\eta)|>\frac{\epsilon m}{k}~\right] \\
&={\rm Pr}_{\Omega''}\left[~\omega'' \in \Omega'':|\sum_{i=1}^m X_i(\omega'')|
>\frac{\epsilon m}{k}~\right] \\
&\leq {\rm Pr}_{\Omega'}\left[~\omega' \in \Omega'~:~|\sum_{i=1}^m X_i(\omega')| >
\frac{\epsilon m}{k}~\right]
\cdot \left({\rm Pr}_{\Omega'}[~\Omega''~]\right)^{-1} \\
&< \frac{2}{n^2} \cdot \prod_{i=1}^m \frac{n}{n-i+1} \leq \frac{2}{n^2} \cdot
\left(\frac{n}{n-m}\right)^m \\
&\leq \frac{2}{n^2} \cdot \exp\left(\frac{m^2}{n-m}\right) < \frac{6}{n^2}.
\end{split}
\end{equation}
Using the union bound together with (\ref{e:chernoff2}) it follows that
$$
{\rm Pr}_{\Omega}\left[~\max_{\eta \in \widehat{G}_+}|\widehat{1_A}(\eta)|>\frac{\epsilon m}{k}~\right]< \frac{6}{n}.
$$
{\begin{flushright} $\Box$ \end{flushright}}
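The quantities in this proof are easy to simulate. The sketch below (with arbitrary illustrative parameters $n=101$, $m=40$, not those of Theorem \ref{t:logp}) computes the Fourier coefficients of $1_A$ for a random $m$-subset $A\subset {\fam\bbfam\twelvebb Z}_n$ and checks that $\widehat{1_A}(\chi_0)=m$ and that the Parseval identity $\|\widehat{1_A}\|^2=nm$ from Section \ref{s:ftc} holds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 101, 40
A = rng.choice(n, size=m, replace=False)

# hat{1_A}(chi_j) = sum_{a in A} exp(-2 pi i j a / n) for G = Z_n
hat = np.exp(-2j * np.pi * np.outer(np.arange(n), A) / n).sum(axis=1)

print(abs(hat[0] - m) < 1e-9)                        # trivial character: exactly m
print(abs((np.abs(hat) ** 2).sum() - n * m) < 1e-6)  # Parseval: equals n*m
print(np.abs(hat[1:]).max() < m)                     # nontrivial coefficients are smaller
```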
\section{Concluding Remarks}
\label{s:con}
In this paper we studied the $(k-1)$-spectral gap of sum complexes $X_{A,k}$ over a finite abelian group $G$. Our main results include a Fourier theoretic lower bound on $\mu_{k-1}(X_{A,k})$,
and a proof that if $A$ is a random subset of $G$ of size $O(\log |G|)$, then $X_{A,k}$ has a nearly optimal $(k-1)$-th spectral gap.
Our work suggests some more questions regarding sum complexes:
\begin{itemize}
\item
Theorem \ref{t:logp} implies that if $G$ is an abelian group of order $n$, then
$G$ contains many subsets $A$ of size $m=O_{k,\epsilon}(\log n)$ such that
$\mu_{k-1}(X_{A,k})\geq (1-\epsilon)m$. As is often the case with probabilistic existence proofs, it would be interesting to give explicit constructions for such $A$'s.
For $G={\fam\bbfam\twelvebb Z}_2^{\ell}$, such a construction
follows from the work of Alon and Roichman. Indeed, they observed (see Proposition 4 in \cite{AR94}) that by results of \cite{ABNNR92},
there is an absolute constant $c>0$ such that for any $\epsilon>0$ and $\ell$, there is an explicitly constructed $A_{\ell} \subset {\fam\bbfam\twelvebb Z}_2^{\ell}$ of size
$m\leq \frac{c k^3 \ell}{\epsilon^3}=\frac{c k^3 \log_2 |G|}{\epsilon^3}$, such that
$$
|\widehat{1_{A_\ell}}(v)|=|\sum_{a \in A}(-1)^{a \cdot v}| \leq \frac{\epsilon m}{k}
$$
for all $0 \neq v \in {\fam\bbfam\twelvebb Z}_2^{\ell}$. Theorem \ref{t:lbmk} then implies that
$\mu_{k-1}(X_{A_{\ell},k}) \geq (1-\epsilon)m$.
\\
It would be interesting to find explicit constructions with $|A|=O(\log |G|)$ for other groups $G$ as well, in particular for the cyclic group ${\fam\bbfam\twelvebb Z}_p$.
\item
Consider the following non-abelian version of sum complexes. Let $G$ be a finite group of order $n$
and let $A \subset G$. For $1 \leq i \leq k+1$ let $V_i$ be the $0$-dimensional complex on the set $G \times \{i\}$, and let $T_{n,k}$ be the join $V_1*\cdots*V_{k+1}$.
The complex $R_{A,k}$ is obtained by taking the $(k-1)$-skeleton of $T_{n,k}$, together with all
$k$-simplices $\sigma=\{(x_1,1),\ldots,(x_{k+1},k+1)\} \in T_{n,k}$ such that
$x_1\cdots x_{k+1} \in A$. One may ask whether there is an analogue of Theorem \ref{t:logp}
for the complexes $R_{A,k}$, i.e. is there a constant $c_1(k,\epsilon)>0$ such that
if $A$ is a random subset of $G$ of size $m=\lceil c_1(k,\epsilon) \log n \rceil$, then
a.a.s. $\mu_{k-1}(R_{A,k}) >(1-\epsilon) m$?
\end{itemize}
\section{Introduction}
For $k$ fixed, a graph $G$ is {\it $k$-path-pairable} if for any set of $k$ disjoint pairs of vertices, $s_i,t_i$, $1\leq i\leq k$, there exist pairwise edge-disjoint $s_i,t_i$-paths in $G$. The {\it path-pairability number}, denoted $pp(G)$, is the largest $k$ such that $G$ is $k$-path-pairable.
In \cite{pxp} we determine the path-pairability number of the grid graph $G(a,b)=P_{a}\Box P_{b}$, the Cartesian product of two paths on $a$ and $b$ vertices, where there is an edge between
two vertices, $(i,j)$ and $(p,q)$, if and only if $|p-i|+|q-j|=1$, for $1\leq i\leq a$, $1\leq j\leq b$.
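The adjacency rule above is straightforward to implement; the following Python sketch (illustrative only) builds the edge set of $G(a,b)$ and checks the edge count $a(b-1)+b(a-1)$:

```python
from itertools import product

def grid_edges(a, b):
    """Edge set of G(a,b) = P_a x P_b: vertices (i,j) and (p,q) are
    adjacent iff |p - i| + |q - j| = 1."""
    V = list(product(range(1, a + 1), range(1, b + 1)))
    return [(u, v) for i, u in enumerate(V) for v in V[i + 1:]
            if abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]

E = grid_edges(6, 6)
print(len(E))  # 60 = 6*5 + 6*5
```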
\begin{theorem}[\cite{pxp}]
\label{PxP} If $k=\min\{a,b\}$, then
$$pp(G(a,b))=\left\{
\begin{array}{ccllll}
&k-1& \hbox{ for } &k&=2,3,4\\
&3& \hbox{ for } &k&=5 \\
&4& \hbox{ for } &k&\geq 6
\end{array}\right..$$
\end{theorem}
We complete the proof of the formula in Theorem \ref{PxP} by proving our main result:
\begin{theorem}
\label{main}$pp(G(6,6))=4$.
\end{theorem}
In Section \ref{proof} the proof of Theorem \ref{main} is given in two parts. In Proposition \ref{66upper} we present a pairing of ten terminals which does not give a linkage
in $G(6,6)$.
To show that $G(6,6)$ is $4$-path-pairable, Proposition \ref{66} uses a sequence of technical lemmas. These lemmas are listed next
in Section \ref{lemmas}, and they are proved separately in two notes, \cite{heavy} and \cite{escape}.
\section{Technical lemmas}
\label{lemmas}
Let $T=\{s_1,t_1,s_2,t_2,s_3,t_3,$ $s_4,t_4\}$ be the set of eight distinct vertices of the grid $G=P_6\Box P_6$, called {\it terminals}. The set $T$ is partitioned into
four terminal pairs, $\pi_i=\{s_i,t_i\}$, $1\leq i\leq 4$, to be linked in $G$ by edge disjoint paths. A (weak) {\it linkage} for $\pi_i$, $1\leq i\leq 4$, means a set of edge disjoint $s_i,t_i$-paths $P_i\subset G$.
The grid $G$ partitions
into four $P_3\Box P_3$ grids called {\it quadrants}.
We say that a set of terminals in a quadrant $Q\subset G$ {\it escape} from $Q$
if there are pairwise edge disjoint `mating paths' from the terminals into distinct mates (exits) located at the union of a horizontal and a vertical boundary line of $Q$.
A quadrant $Q\subset G$ is considered `crowded' if it contains $5$ or more terminals.
Among the technical lemmas the proof of three lemmas pertaining to crowded quadrants was presented in \cite{heavy}. The technical lemmas for `sparse' quadrants containing at most $4$ terminals are proved in \cite{escape}.
\subsection{Escaping from crowded quadrants}
Let $A$ be a horizontal and let $B$ be a vertical boundary line
of a quadrant $Q\subset G$, and for a subgraph $S\subseteq G$ set $\|S\|=|T\cap S|$.
\begin{lemma}
\label{heavy78}
If $\|Q\|=7$ or $8$,
then there is
a linkage for two or more pairs in $Q$, and there exist edge disjoint escape paths for the unlinked terminals into distinct
exit vertices in $A\cup B$.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\end{lemma}
\begin{lemma}
\label{heavy6}
If $\|Q\|=6$, then there is a linkage for one or more pairs in $Q$, and there exist edge disjoint escape paths for the unlinked terminals into distinct exit vertices of $A\cup B$ such that $B\setminus A$ contains at most one exit.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\end{lemma}
\begin{lemma}
\label{heavy5}
If $\|Q\|=5$ and $\{s_1,t_1\}\subset Q$,
then there is
an $s_1,t_1$-path $P_1\subset Q$,
and the complement of $P_1$ contains edge disjoint escape paths for the three unlinked terminals into distinct exit vertices of $A\cup B$ such that $B\setminus A$ contains at most one exit.\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\end{lemma}
\subsection{Escaping from sparse quadrants}
The vertices of a grid are represented as elements $(i,j)$ of a matrix arranged
in rows $A(i)$ and columns $B(j)$. W.l.o.g. we may assume that $Q$ is the upper left quadrant of $G=P_6\Box P_6$, and thus
$A=A(3)\cap Q$ and $B=B(3)\cap Q$ are the horizontal and vertical boundary lines of $Q$, respectively.
For a vertex set $S\subset V(G)$ and a subgraph $H\subseteq G$, $H-S$ is interpreted as the subgraph obtained by the removal of $S$ and the incident edges from $H$; $S$ is also interpreted as the subgraph of $G$ induced by $S$; $x\in H$ simply means a vertex of $H$. Mating (or shifting) a terminal $w$ to vertex $w^\prime$, called a mate of $w$, means specifying a $w,w^\prime$-path called a {\it mating path}.
Finding a linkage for two pairs is facilitated by the property of a graph being {\it weakly $2$-linked},
and by introducing the concept of a {\it frame}.
A graph $H$ is weakly $2$-linked, if for every $u_1,v_1,u_2,v_2\in H$, not necessarily distinct vertices, there exist edge disjoint $u_i,v_i$-paths in $H$, for $i=1,2$. A weakly $2$-linked graph must be $2$-connected, but $2$-connectivity is not a sufficient condition.
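For example (a small illustration, not needed in the sequel), the $4$-cycle with vertices $a,b,c,d$ in cyclic order is $2$-connected but not weakly $2$-linked: for $u_1=a$, $v_1=c$, $u_2=b$, $v_2=d$, any $u_1,v_1$-path uses both edges incident to $b$ or both edges incident to $d$, so no edge disjoint $u_2,v_2$-path remains.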
The next lemma lists a few weakly $2$-linked subgrids (the simple proofs are omitted).
\begin{lemma}
\label{w2linked}
The grid $P_3\Box P_k$
and the subgrid of
$P_k\Box P_k$ induced by
$(A(1)\cup A(2)) \cup$ $ (B(1)\cup B(2))$ are weakly $2$-linked, for $k\geq 3$.
\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\end{lemma}
We use the $3$-path-pairability
of certain grids proved in \cite{pxp} (see in Theorem \ref{PxP}).
\begin{lemma}
\label{3pp}
The grid $P_4\Box P_k$
is $3$-path-pairable, for $k\geq 4$.
\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\end{lemma}
Let $C\subset G$ be a cycle and let $x$ be a fixed vertex of $C$. Take two edge disjoint paths from a member
of $\pi_j$ to $x$, for $j=1$ and $2$, not using edges of $C$. Then we say that
the union of $C$ and the two paths to $x$ defines a {\it frame} $[C,x]$, for $\pi_1,\pi_2$. A frame
$[C,x]$, for $\pi_1,\pi_2$, helps find a linkage
for the pairs $\pi_1$ and $\pi_2$; in fact, it is enough to mate the other members of the terminal pairs onto $C$ using mating paths edge disjoint from $[C,x]$ and each other.
The concept of a frame facilitates `communication' between quadrants of $G$. For this purpose frames in $G$ can be built on two standard cycles $C_0, C_1\subset G$ as follows.
Let $C_0$ be the innermost $4$-cycle of $G$ induced by $(A(3)\cup A(4))\cap (B(3)\cup B(4))$, and let $C_1$ be the $12$-cycle around $C_0$ induced by the neighbors of $C_0$.
Given a quadrant $Q$ we usually set $x_0=Q\cap C_0$ and we denote by $x_1$ the middle vertex of the path $Q\cap C_1$. (For instance, in the upper right quadrant of $G$, $x_0=(3,4)$ and $x_1=(2,5)$.)
\begin{figure}[htp]
\begin{center}
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=4pt, inner sep=4pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=1.5pt,-,black!100]
\begin{tikzpicture}
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[V] () at (\x,\y) {};}
\foreach \u in {0,1,2,3,4}
\foreach \v in {1,2,3,4,5}
{ \draw (\u,\v) -- (\u+1,\v); \draw (\u,\v-1) -- (\u,\v); }
\draw (0,0) -- (5,0) -- (5,5);
\draw[line width=.5pt,snake=zigzag] (2,2) -- (3,2) -- (3,3) -- (2,3) -- (2,2);
\draw[Wedge,->](4,4)--(5,4) -- (5,3) -- (3.1,3);
\draw[Wedge,->] (4,5) -- (3,5) -- (3,3.1) ;
\draw[line width=.5pt,snake=zigzag] (2.5,4) -- (4,4) -- (4,1) -- (2.5,1);
\node[T](s1) at (4,4){};
\node[txt] () at (4.3,4.3){$s_1$};
\node[T,label=above:$s_2$]() at (4,5){};
\node[T]() at (3,4){};
\node[txt] () at (2.7,4.3){$s_3$};
\node[]() at (3,3){o};
\node[txt] () at (2.7,2.7){$x_0$};
\node[txt] () at (3.7,3.7){$x_1$};
\node[txt] () at (1.65,2.5){$C_0$};
\node[txt] () at (3.5,1.5){$C_1$};
\end{tikzpicture}
\hskip1truecm
\begin{tikzpicture}
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[V] () at (\x,\y) {};}
\foreach \u in {0,1,2,3,4}
\foreach \v in {1,2,3,4,5}
{ \draw (\u,\v) -- (\u+1,\v); \draw (\u,\v-1) -- (\u,\v); }
\draw (0,0) -- (5,0) -- (5,5);
\draw[Wedge,->](4,4) -- (5,4) -- (5,3) -- (4.1,3);
\draw[Wedge,->] (3,4) -- (3,3) -- (3.9,3) ;
\draw[line width=.5pt,snake=zigzag] (2.5,4) -- (4,4) -- (4,1) -- (2.5,1);
\node[T](s1) at (4,4){};
\node[txt] () at (4.3,4.3){$s_1$};
\node[T,label=above:$s_2$]() at (4,5){};
\node[T]() at (3,4){};
\node[txt] () at (2.7,4.3){$s_3$};
\node[]() at (4,3){o};
\node[txt] () at (4.3,2.7){$w$};
\node[txt] () at (3.5,1.5){$C_1$};
\end{tikzpicture}
\end{center}
\caption{Framing $[C_0,x_0]$ for $\pi_1,\pi_2$, and $[C_1,w]$, for $\pi_1,\pi_3$}
\label{frames}
\end{figure}
Let $\alpha\in\{0,1\}$ be fixed, assume that there are two terminals in a quadrant $Q$ belonging to distinct pairs, say $s_1\in\pi_1$, $s_2\in \pi_2$, and let $w\in Q\cap C_\alpha$. We say that $[C_\alpha,w]$ is a {\it framing in $Q$ for $\pi_1,\pi_2$ to $C_\alpha$}, if
there exist edge disjoint mating paths in $Q$
from $s_1$ and from $s_2$ to $w$, edge disjoint from $C_1$
(see examples in Fig.\ref{frames} for framing in the upper right quadrant).
\begin{lemma}
\label{frame}
Let $s_1\in\pi_1,s_2\in\pi_2$ be two (not necessarily distinct) terminals/mates in a quadrant $Q$.
(i) For any mapping $\gamma:\{s_1,s_2\}\longrightarrow\{C_0,C_1\}$,
there exist edge disjoint mating paths in $Q$ from $s_j$ to vertex $s_j^\prime\in\gamma(s_j)$, $j=1,2$, not using edges of $C_1$.
(ii) For any fixed $\alpha\in\{0,1\}$, there is a framing
$[C_\alpha,x_\alpha]$,
for $\pi_1, \pi_2$, where $x_\alpha\in C_\alpha\cap Q$ and the mating paths are in $Q$.
\end{lemma}
\begin{lemma}
\label{12toCa} Let $s_p,s_q,s_r$ be distinct terminals in a quadrant $Q$ belonging to three distinct pairs. Then there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_\alpha$, for some $\alpha\in\{0,1\}$, and there is an edge disjoint mating path in $Q$ from $s_r$ to $C_\beta$,
where $\beta=\alpha+1\pmod 2$, and edge disjoint from $C_1$.
\end{lemma}
\begin{lemma}
\label{Caforpq}
Let $s_1,s_2,s_3$ be distinct terminals in a quadrant $Q$ (belonging to distinct pairs);
let $y_0\in Q$ be a corner vertex of $Q$ with degree three in $G$, and
let $z\in \{x_0,y_0\}$ be a fixed corner vertex of $Q$. Then,
(i) for some $1\leq p<q\leq 3$, there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_0$,
and there is an edge disjoint mating path in $Q$ from the third terminal to $C_1$;
(ii) for some $1\leq p<q\leq 3$, there is a framing in $Q$ for $\pi_p,\pi_q$ to $C_1$,
and there is an edge disjoint mating path in $Q$ from the third terminal to $z$.
\end{lemma}
\begin{lemma}
\label{exit}
Let $A$ be a boundary line of a quadrant $Q\subset G$. Let $Q_0$ be the subgraph obtained by removing the edges of $A$ from $Q$, and
let $Q_i$, $1\leq i\leq 4$, be one of the subgraphs in Fig.\ref{Qmin} obtained from $Q_0$ by edge removal and edge contraction.
(i) For any $H=Q_i$, $1\leq i\leq 4$, and for any three distinct terminals of $H$ there exist edge disjoint mating paths in $H$ from the terminals into not necessarily distinct vertices in $A$.
\begin{figure}[htp]
\tikzstyle{W} = [double,line width=.5,-,black!100]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{T} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{A} = [rectangle, minimum width=1pt, draw=black,fill=white, inner sep=1.4pt]
\begin{center}
\begin{tikzpicture}
\draw[dashed] (0.5,0.5)--(2.3,0.5)--(2.3,0.9)--(.5,0.9)--(.5,0.5);
\foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1);
\foreach \y in {1.4,2.1}\draw(.7,\y)--(2.1,\y);
\foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4,2.1}\node[V] () at (\x,\y){};
\foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){};
\node() at (.2,.7){A};
\node() at (1.4,0){$Q_0$};
\end{tikzpicture}
\hskip.5cm
\begin{tikzpicture}
\draw (1.4,2.1)--(1.4,1.4)--(2.1,1.4) (.7,1.4)--(1.4,1.4) (.7,.7)--(.7,1.4) (1.4,.7)--(1.4,1.4) (2.1,.7)--(2.1,2.1);
\foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){};
\node[T]()at(2.1,2.1){};\node[T]()at(1.4,2.1){};
\foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){};
\node() at (1.4,0){$Q_1$};
\end{tikzpicture}
\hskip.5cm
\begin{tikzpicture}
\draw (.7,1.4)--(.7,2.1)--(1.4,1.4) (2.1,1.4)--(.7,1.4) (.7,.7)--(.7,1.4) (1.4,.7)--(1.4,1.4) (2.1,.7)--(2.1,2.1);
\foreach \x in {.7,1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){};
\foreach \x in {.7,2.1}\node[T] () at (\x,2.1){};
\foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){};
\node() at (1.4,0){$Q_2$};
\end{tikzpicture}
\hskip.5cm
\begin{tikzpicture}
\draw (.7,2.1)--(2.1,2.1);
\foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1);
\foreach \x in {1.4,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){};
\foreach \y in {.7,2.1}\node[V] () at (.7,\y){};
\node[T]()at(1.4,2.1){};
\node[T]()at(2.1,2.1){};
\foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){};
\node() at (1.4,0){$Q_3$};
\end{tikzpicture}
\hskip.5cm
\begin{tikzpicture}
\draw (.7,2.1)--(2.1,2.1);
\foreach \x in {.7,1.4,2.1}\draw(\x,.7)--(\x,2.1);
\foreach \x in {.7,2.1}\foreach \y in {.7,1.4}\node[V] () at (\x,\y){};
\foreach \y in {.7,2.1}\node[V] () at (1.4,\y){};
\node[T]()at(.7,2.1){};
\node[T]()at(2.1,2.1){};
\foreach \x in {.7,1.4,2.1}\node[A]()at(\x,.7){};
\node() at (1.4,0){$Q_4$};
\end{tikzpicture}
\caption{Mating into $A$ in adjusted quadrants}
\label{Qmin}
\end{center}
\end{figure}
(ii) If $s_1,t_1,s_2$ are not necessarily distinct terminals in $Q_0$ then there is
an $s_1,t_1$-path in $Q_0$ and an edge disjoint mating path from $s_2$ into a vertex of $A$.
(iii) From any three distinct terminals of $Q_0$ there exist pairwise edge disjoint mating paths
into three distinct vertices of $A$. Furthermore, the claim remains true if two terminals not in $A$ coincide.
\end{lemma}
\begin{lemma}
\label{heavy4}
Let $A,B$ be a horizontal and
a vertical boundary line of quadrant $Q$, let $c$ be the corner vertex of $Q$ not in $A\cup B$, and let $b$ be the middle vertex of $B$ (see $Q_0$ in Fig.\ref{except}). Denote by $Q_0$ the grid obtained by removing the edges of $A$ from $Q$, and let $T$ be a set of at most four distinct terminals in $Q_0$.
(i) If $T\subset Q_0-A$ and $c\notin T$,
then for every terminal $s\in T$, there is a linkage
in $Q_0$ to connect $s$ to $b$, and there exist edge disjoint mating paths in $Q_0$
from the remaining terminals of $T$ into not necessarily distinct vertices of $A$.
\begin{figure}[htp]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, minimum width=1pt, draw=black,fill=white, inner sep=1.4pt]
\begin{center}
\begin{tikzpicture}
\draw[dashed] (0.5,0.5)--(2.3,0.5)--(2.3,.9)--(.5,.9)--(.5,0.5);
\foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y);
\foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1);
\foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){};
\foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){};
\node() at (2.1,2.4){B};
\node() at (.3,1){A};
\node() at (1.4,0){$Q_0$};
\node[B,label=left:$c$] () at(.7,2.1){};
\node[A] () at(2.1,1.4){};
\node() at (2.35,1.4){$b$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y);
\foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1);
\foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){};
\foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){};
\node[T,label=left:$s_{3}$] () at (.7,.7){};
\node[T,label=left:$s_{2}$] () at (.7,1.4){};
\node[T,label=left:$s_1$] () at (.7,2.1){};
\node() at (1.4,0){$T_1$};
\node[A] () at(2.1,1.4){};
\node() at (2.35,1.4){$b$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\foreach \y in {1.4,2.1}\draw (.7,\y)--(2.1,\y);
\foreach \x in{.7,1.4,2.1}\draw (\x,.7)--(\x,2.1);
\foreach \x in {.7,1.4,2.1}\foreach \y in {1.4,2.1}\node[V] () at (\x,\y){};
\foreach \x in{.7,1.4,2.1}\node[B] () at (\x,.7){};
\node[T,label=above:$s_{2}$] () at (2.1,2.1){};
\node[T,label=above:$s_{3}$] () at (1.4,2.1){};
\node[T,label=above:$s_{4}$] () at (.7,2.1){};
\node[T] () at(.7,2.1){};
\node[T] () at(2.1,1.4){};
\node() at (1.4,0){$T_2$};
\node() at (2.4,1.4){$s_1$};
\end{tikzpicture}
\caption{Projection to $A$}
\label{except}
\end{center}
\end{figure}
(ii) If $T$ is different from $T_1$ and
$T_2$ in Fig.\ref{except}, then
for $\min\{3,|T|\}$ choices of a terminal $s\in T$,
there is a linkage
in $Q_0$ to connect $s$ to $b$, and there exist edge disjoint mating paths in $Q_0$
from the remaining terminals of $T$ into not necessarily distinct vertices of $A$.
(iii)
If $T$ is one of $T_1$ and
$T_2$ in Fig.\ref{except}, then the claim in (ii) above is true only for $s=s_1$ and $s_2$.
\end{lemma}
\begin{lemma}
\label{boundary}
Let $A,B$ be a horizontal and
a vertical boundary line of a quadrant $Q$.
For every $s_1,t_1,s_2,s_3\in Q$ and $\psi: \{s_2,s_3\} \longrightarrow \{A,B\}$,
there is a linkage for $\pi_1$, and there exist edge disjoint mating paths in $Q$
from $s_j$, $j=2,3$, to distinct vertices $s_j^*\in \psi(s_j)$.
\end{lemma}
\section{Proof of Theorem \ref{main}}
\label{proof}
\begin{proposition}
\label{66upper}
The $6\times 6$ grid is not $5$-path-pairable.
\end{proposition}
\begin{proof}
Eight terminals are located in the upper left quadrant of $G=P_6\Box P_6$ as shown in Fig.\ref{cluster}; $t_1$ and $t_5$ may be placed anywhere in $G$. We claim that there is no linkage for $\pi_i$, $1\leq i\leq 5$. Assume on the contrary that there are pairwise edge disjoint $s_i,t_i$-paths $P_i$, for $1\leq i\leq 5$. Then
$P_1, P_2,P_3, $ and $P_5$ must leave the upper left $2\times 2$ square.
\begin{figure}[htp]
\begin{center}
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\tikzstyle{M} = [circle, draw=black!, minimum width=1pt, inner sep=3.5pt]
\begin{tikzpicture}
\foreach \x in {1,...,3}\draw (\x,0.2)--(\x,3);
\foreach \y in {1,...,3}\draw (1,\y)--(3.8,\y);
\foreach \x in {1,...,3}\foreach \y in {1,...,3}\node[B]()at(\x,\y){};
\foreach \x in {1,2}\foreach \y in {1,2,3}\node[T]()at(\x,\y){};
\foreach \y in {2,3}\node[T]()at(3,\y){};
\draw[->,line width=1pt] (2,2)--(2,1.2);
\draw[->,line width=1pt] (1,1)--(1.8,1);
\draw[->,line width=1pt] (1,2)--(1,1.1);
\draw[->,line width=1pt] (1,1)--(1,.3);
\node[label=above:$s_1$]()at(.6,2.9){};
\node[label=left:$s_2$]()at(1,2){};
\node[label=left:$s_3$]()at(1,1){};
\node[label=right:$s_4$]()at(1.9,.8){};
\node[label=right:$t_4$]()at(2.9,1.8){};
\node[label=above:$t_3$]()at(2,3){};
\node[label=above:$t_2$]()at(3,3){};
\node[label=right:$s_5$]()at(1.9,1.8){};
\node[M]()at(2,1){};
\end{tikzpicture}
\end{center}
\caption{Unresolvable pairings}
\label{cluster}
\end{figure}
By symmetry, we may assume that $P_5$ starts with the edge $s_5 - (3,2)$; furthermore, either $P_1$ or $P_2$ must use the edge $(2,1) - (3,1)$. Then either $P_3$ or one of
$P_1$ and $P_2$ uses the edge $(3,1) - (3,2)$. Thus a bottleneck is formed at
vertex $(3,2)$, since two paths are entering there and $P_4$ must leave it, but only two edges, $(3,2) - (3,3)$ and $(3,2) - (4,2)$, are available, a contradiction.
\end{proof}
\begin{proposition}
\label{66}
The $6\times 6$ grid is $4$-path-pairable.
\end{proposition}
\begin{proof}
We partition the grid $G=P_6\Box P_6$
into four quadrants,
named NW, NE, SW, SE according to their `orientation'.
Given the terminal pairs $\pi_i=\{s_i,t_i\}\subset G$, $1\leq i\leq 4$,
a solution consists of pairwise edge disjoint $s_i,t_i$-paths, $P_i$, for $1\leq i\leq 4$, and is referred to as a {\it linkage} for $\pi_i$, $1\leq i\leq 4$.
Our procedure,
described in terms of a tedious case analysis, is based on the distribution of
$T=\cup_{i=1}^4\pi_i$ in the four quadrants. The distribution of the terminals with respect to the quadrants is described by a so-called {\it q-diagram} $\mathcal{D}$, defined as a (multi)graph with four nodes
labeled with the four
quadrants $Q_1, Q_2,Q_3, Q_4\subset G$, and containing four edges (loops and parallel edges are allowed): for each terminal pair
$\{s_i,t_i\}\subset T$, $1\leq i\leq 4$, there is an edge
$Q_aQ_b\in \mathcal{D}$ if and only if $s_i\in Q_a$, $t_i\in Q_b$.
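For instance (an illustration of the definition), if $s_1,t_1\in NW$, $s_2\in NW$, $t_2\in NE$, and $s_3,s_4\in SW$, $t_3,t_4\in SE$, then $\mathcal{D}$ consists of a loop at the node NW, one NW\,NE edge, and two parallel SW\,SE edges.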
The proof is split into main cases A and B according to whether or not some quadrant contains a terminal pair, that is, whether the diagram $\mathcal{D}$ is loopless or contains a loop. \\
\noindent Case A: no quadrant of $G$ contains a terminal pair. Observe that in this case
the maximum degree in the q-diagram is at most $4$.
A.1:
every quadrant has two terminals (the q-diagram is $2$-regular). There are four essentially different distributions, up to symmetries of the grid, see Fig.\ref{2222}. We may assume that $s_1,s_2\in NW$ and $s_3,s_4\in Q$, where $Q=$SE for the leftmost q-diagram and $Q=$NE for the other ones, as indicated by the blackened nodes of the q-diagrams.
\begin{figure}[htp]
\begin{center}
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt]
\tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt]
\begin{tikzpicture}
\draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(1.4,.7)--(.7,.7);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,.7) {};
\node[label=left:{\small 1}]() at (.9,1.05){};
\node[label=right:{\small 2}]() at (.7,1.6){};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\draw(.7,1.4)--(1.4,.7)--(1.4,1.4)--(.7,.7)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (1,1.1){};
\node[label=right:{\small 2}]() at (.6,1.25){};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\draw(.65,.7)--(.65,1.4)(1.35,1.4)--(1.35,.7);
\draw(.75,.7)--(.75,1.4)(1.45,1.4)--(1.45,.7);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (.9,1.05){};
\node[label=right:{\small 2}]() at (.5,1.05){};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\draw(.65,.7)--(1.35,1.4) (1.35,.7)--(.65,1.4);
\draw(.75,.7)--(1.45,1.4)(1.45,.7)--(.75,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (1.15,1.1){};
\node[label=right:{\small 2}]() at (.6,1.3){};
\end{tikzpicture}
\end{center}
\caption{$\|Q\|=2$, for every quadrant}
\label{2222}
\end{figure}
For each distribution we apply Lemma \ref{frame} (ii) to obtain
a framing in NW for $\pi_1,\pi_2$ to $C_0$ and another framing
in $Q$ for $\pi_3,\pi_4$ to $C_1$. Since the other two quadrants contain two terminals each, it is possible to
mate $t_1,t_2$ into vertices of $C_0$ and $t_3,t_4$ into vertices of $C_1$ by using Lemma \ref{frame} (i). Then the linkage is completed along the cycles $C_0$ and $C_1$.\\
A.2: the maximum degree of the q-diagram is $3$, and there is just one node with maximum degree, let $\|NW\|=3$.
Fig.\ref{A12cases} lists q-diagrams with this property.
\begin{figure}[htp]
\begin{center}
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt]
\tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt]
\begin{tikzpicture}
\draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(1.4,.7)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (.95,1.05){};
\node[label=right:{\small 2}]() at (.6,1.05){};
\node[label=above:{\small 3}]() at (1.05,1.2){};
\node[label=right:{\small 4}]() at (1.15,1.05){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw(.7,.7)--(.7,1.4)--(1.4,1.4)--(.7,.7) (.7,1.4)--(1.4,.7);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} {\node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (.95,1.1){};
\node[label=right:{\small 2}]() at (.55,1.2){};
\end{tikzpicture}
\hskip.8cm
\begin{tikzpicture}
\draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755)
(.7,.7)--(1.4,1.4)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (1.15,1.1){};
\node[label=right:{\small 2}]() at (.65,1.2){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw(.65,.7)--(.65,1.4) (.7,1.4)--(1.4,1.4)--(1.4,.7);
\draw (.75,.7)--(.75,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,1.4) {};
\node[label=left:{\small 1}]() at (.9,1.05){};
\node[label=right:{\small 2}]() at (.5,1.05){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.65,.7)--(.65,1.4) (.7,1.4)--(1.4,.7)--(1.4,1.4);
\draw (.75,.7)--(.75,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[V] () at (.7,1.4) {}; \node[V] () at (1.4,.7) {};
\node[label=left:{\small 1}]() at (.9,1.){};
\node[label=right:{\small 2}]() at (.5,1.){};
\end{tikzpicture}
\end{center}
\caption{$NW$ has $3$ terminals, all other quadrants have fewer}
\label{A12cases}
\end{figure}
Let
$s_1,s_2,s_3\in NW$,
let $Q=$NE for the first four q-diagrams, and let $Q=$SE for the last q-diagram (see blackened nodes in Fig.\ref{A12cases}).
Applying
Lemma \ref{12toCa} with quadrant NW and $p=1, q=2$, we obtain a framing in NW for $\pi_1,\pi_2$ to $C_\alpha$, for some $\alpha\in \{0,1\}$, furthermore, we obtain a mating of $s_3$
into a vertex in $C_\beta\cap NW$, where $\beta=\alpha+1 \pmod 2$. Recall that the remaining quadrants contain at most two terminals.
We use Lemma \ref{frame} (ii) with quadrant $Q$, which yields a framing
in $Q$ for $\pi_3,\pi_4$ to $C_\beta$. The solution is completed by mating the remaining terminals to the appropriate cycles applying Lemma \ref{frame} (i).\\
A.3: there are two quadrants containing three terminals;
let
$\|NW\|=\|Q\|=3$, where $Q=$NE or SE (see Fig.\ref{twomax3}).
\begin{figure}[htp]
\begin{center}
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1.5pt]
\tikzstyle{Q} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=3pt]
\begin{tikzpicture}
\draw (.7,.7)--(1.4,.7);
\draw (.7,1.33)--(1.4,1.33)(.7,1.4)--(1.4,1.4) (.7,1.47)--(1.4,1.47);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=below:{\small 4}]() at (1.05,1.3){};
\node[label=below:{\small (I)}]() at (1.05,.6){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45)
(1.4,.7)--(1.4,1.4) (.7,.7)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=above:{\small 1}]() at (1.05,1.2){};
\node[label=above:{\small 2}]() at (1.05,.85){};
\node[label=left:{\small 3}]() at (.9,1.05){};
\node[label=right:{\small 4}]() at (1.15,1.05){};
\node[label=below:{\small (II)}]() at (1.05,.7){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755)
(1.4,.7)--(1.4,1.4) (.7,.7)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=left:{\small 3}]() at (.9,1.05){};
\node[label=right:{\small 4}]() at (1.15,1.05){};
\node[label=below:{\small (III)}]() at (1.05,.6){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45)
(.7,1.4)--(1.4,.7)--(1.4,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=right:{\small 2}]() at (.7,1.25){};
\node[label=above:{\small 1}]() at (1.05,1.2){};
\node[label=below:{\small (IV)}]() at (1.05,.6){};
\node[label=right:{\small 4}]() at (1.15,1.05){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.7,1.35)--(1.4,1.35) (.7,1.45)--(1.4,1.45)
(.7,1.4)--(1.4,.7)(1.4,1.4)--(.7,.7);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=right:{\small 2}]() at (.7,1.25){};
\node[label=above:{\small 1}]() at (1.05,1.2){};
\node[label=below:{\small (V)}]() at (1.05,.6){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.7,1.37)--(1.4,.6) (.7,1.53)--(1.4,.755)
(1.4,.7)--(1.4,1.4)--(.7,1.4);
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=above:{\small 3}]() at (1.05,1.2){};
\node[label=right:{\small 4}]() at (1.15,1.05){};
\node[label=below:{\small (VI)}]() at (1.05,.6){};
\end{tikzpicture}
\hskip.6cm
\begin{tikzpicture}
\draw (.65,1.37)--(1.35,.6) (.75,1.53)--(1.45,.755)
(.72,1.42)--(1.42,.68) (.7,.7)--(1.4,1.4 );
\foreach \x in {0.7,1.4}
\foreach \y in {0.7,1.4} { \node[Q] () at (\x,\y) {};}
\node[label=left:{\small 4}]() at (1.35,.8){};
\node[label=below:{\small (VII)}]() at (1.05,.6){};
\end{tikzpicture}
\end{center}
\caption{$\|NW\|=\|Q\|=3$}
\label{twomax3}
\end{figure}
Let $s_1,s_2,s_3\in NW$, and $t_1,t_2\in Q$.
For the q-diagram (I) we define $G^*=G-(A(5)\cup A(6))$. We mate the terminals of $\pi_4$ into vertices $s_4^\prime, t_4^\prime\in A(5)\cup A(6)$ along columns of $G$ (a terminal already in $A(5)\cup A(6)$ is its own mate).
Since $G^*\cong P_4\Box P_6$ is $3$-path-pairable
by Lemma \ref{3pp},
there is a linkage for $\pi_1,\pi_2,\pi_3$ in $G^*$. Furthermore,
there is an edge disjoint $s_4^\prime,t_4^\prime$-path in the connected subgrid $A(5)\cup A(6)$ thus completing a solution. \\
For the q-diagrams (II) and (III) let $t_4\in Q$, where $Q=$NE or SE, respectively.
We apply Lemma \ref{exit} (iii) for NW and for $Q$, with horizontal boundary line $A=A(3)\cap NW$ and
$A=A(3)\cap NE$ or $A=A(4)\cap SE$,
respectively.
\begin{figure}[htp]
\begin{center}
\tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\begin{tikzpicture}
\draw[line width=1.5pt] (1,2)--(3,2);
\draw[line width=1.5pt] (.5,2)--(.5,1.5)--(2,1.5)--(2,2);
\draw[->,double,snake](1.5,2)--(1.5,1.1);
\draw[->,double,snake](2.5,2)--(2.5,1.1);
\foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y);
\foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3);
\foreach \x in {.5,1,1.5,2,2.5,3}\node[A]() at (\x,2){};
\foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3}
\node[V]() at (\x,\y){};
\node() at (4.1,2){$A(3)$};
\node() at (4.1,1.5){$A(4)$};
\node() at (0.2,2.25){$s_2^\prime$};
\node() at (2.7,2.25){$t_4^\prime$};
\node() at (1.3,2.3){$s_3^\prime$};
\node() at (.8,2.3){$s_1^\prime$};
\node() at (3.25,2.25){$t_1^\prime$};
\node() at (2.22,2.25){$t_2^\prime$};
\end{tikzpicture}
\begin{tikzpicture}
\draw[line width=1.5pt] (1,2)--(1,1.5)--(2,1.5);
\draw[line width=1.5pt] (.5,2)--(2.5,2)--(2.5,1.5);
\draw[->,double,snake](1.5,2)--(1.5,1.1);
\draw[->,double,snake](3,1.5)--(3,2.4);
\foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y);
\foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3);
\foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3}
\node[V]() at (\x,\y){};
\foreach \x in {.5,1,1.5}\node[A]() at (\x,2){};
\foreach \x in {2,2.5,3}\node[A]() at (\x,1.5){};
\node() at (2.7,1.3){$t_1^\prime$};
\node() at (0.2,2.25){$s_1^\prime$};
\node() at (1.3,2.3){$s_3^\prime$};
\node() at (.8,2.3){$s_2^\prime$};
\node() at (3.25,1.3){$t_4^\prime$};
\node() at (2.2,1.3){$t_2^\prime$};
\end{tikzpicture}
\end{center}
\caption{Cases (II) and (III)}
\label{A3A4}
\end{figure}
Thus we obtain six distinct mates
$s_1^\prime,s_2^\prime,s_3^\prime\in A(3)\cap NW$ and
$t_1^\prime,t_2^\prime,t_4^\prime\in A(3)\cap NE$ or
$A(4)\cap SE$, see the encircled vertices in Fig. \ref{A3A4}.
Observe that the mating paths are edge disjoint from
the $2\times 6$ grid $G^*=A(3)\cup A(4)$. The mating paths from
$s_3^\prime$ and $t_4^\prime$ can be extended into the neighboring quadrants containing $t_3$ and $s_4$ along the columns of $G$ (zigzag lines in Fig.\ref{A3A4}).
Furthermore, a linkage for $\pi_2$ can be completed by an $s_2^\prime, t_2^\prime$-path in $G^*$ not using edges of
$A(3)$, and a linkage for $\pi_1$ can be completed by an $s_1^\prime, t_1^\prime$-path in $G^*$ not using edges of $A(4)$. \\
The solution for the q-diagrams (IV) and (V) follows a similar strategy using Lemma \ref{heavy4}. Assume that as a result of applying Lemma \ref{heavy4} twice, for quadrants NW and NE, we find a common index $\ell\in\{1,2\}$, say $\ell=1$, that satisfies the following property: there exists a path in NW
from $s_1$ to $s_1^\prime=(2,3)$, and there exists a path in NE from $t_1$ to $t_1^\prime=(2,4)$, furthermore, terminals $s_2,s_3\in NW$, $t_2,t_4\in NE$ are mated into not necessarily distinct vertices $s_2^\prime,s_3^\prime,t_2^\prime,t_4^\prime\in A(3)$ using edge disjoint mating paths.
Now we complete a linkage for $\pi_1$ by adding the edge $s_1^\prime t_1^\prime\in A(2)$. Since the mating paths do not use edges of $A(3)$, a linkage for $\pi_2$ can be completed by adding an $s_2^\prime, t_2^\prime$-path in $A(3)$. Next we extend the mating paths from $s_3^\prime$ and $t_4^\prime$ into $s_3^{*},t_4^{*}\in A(4)$ along the columns of $G$. Since $G^*=G-(NW\cup NE)$
contains $t_3,s_3^*,s_4,t_4^{*}$ and since, by Lemma \ref{w2linked}, $G^*\cong P_3\Box P_6$ is weakly $2$-linked, the linkage for $\pi_3,\pi_4$ can be completed in $G^*$.
A common index $\ell\in\{1,2\}$ as above exists by the pigeon hole principle if one of the terminal set in NW or in NE is different from type $T_1$ in Fig.\ref{except}. If the terminals in both quadrants are of type $T_1$, then we have $s_1,s_2,s_3\in B(1)\cap NW$ and $t_1,t_2,t_4\in B(6)\cap NE$. Now we mate $s_1,s_2,t_1,t_2$ into the $3\times 4$
grid $G^\prime$ induced by $(NW\cup NE)\setminus (B(1)\cup B(6))$ along their rows, furthermore, we mate $s_3,t_4$ to vertices $s_3^*, t_4^*\in A(4)$ along their columns. Since $G^\prime$ is $2$-path-pairable, a linkage for $\pi_1,\pi_2$ can be completed in $G^\prime$. Since the weakly $2$-linked
$G^*=G-(NW\cup NE)$ contains
$s_3^*,t_3,s_4,t_4^*$, there are edge disjoint
paths from $s_3^*$ to $t_3$ and from $s_4$ to $t_4^*$ completing
a linkage in $G^*$ for $\pi_3$ and $\pi_4$.\\
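As an illustrative aside (not part of the proof; Python assumed, all function names are ours), the weak $2$-linkedness of $P_3\Box P_6$ asserted by Lemma \ref{w2linked} can be probed mechanically. The following brute-force sketch searches for pairwise edge disjoint paths joining prescribed terminal pairs and confirms one crossing instance in $P_3\Box P_6$.

```python
from itertools import product

def grid_edges(rows, cols):
    """Edge set of the grid P_rows x P_cols; vertices are (row, column) pairs."""
    edges = set()
    for r, c in product(range(1, rows + 1), range(1, cols + 1)):
        if c < cols:
            edges.add(frozenset({(r, c), (r, c + 1)}))
        if r < rows:
            edges.add(frozenset({(r, c), (r + 1, c)}))
    return edges

def simple_paths(s, t, edges):
    """Yield every simple s-t path as the frozenset of edges it uses (DFS,
    preferring steps that decrease the Manhattan distance to t)."""
    def dist(u, v):
        return abs(u[0] - v[0]) + abs(u[1] - v[1])
    stack = [(s, {s}, frozenset())]
    while stack:
        v, visited, used = stack.pop()
        if v == t:
            yield used
            continue
        steps = [(e, next(iter(e - {v}))) for e in edges if v in e]
        # push far steps first, so the nearest step is explored first
        for e, w in sorted(steps, key=lambda ew: dist(ew[1], t), reverse=True):
            if w not in visited:
                stack.append((w, visited | {w}, used | {e}))

def connected(s, t, edges):
    """Is t reachable from s in the graph spanned by the given edges?"""
    frontier, seen = [s], {s}
    while frontier:
        v = frontier.pop()
        if v == t:
            return True
        for e in edges:
            if v in e:
                (w,) = e - {v}
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
    return False

def has_linkage(pairs, edges):
    """Brute force: do pairwise edge disjoint s_i-t_i paths exist for all pairs?"""
    if not pairs:
        return True
    (s, t), rest = pairs[0], pairs[1:]
    if not rest:
        return connected(s, t, edges)
    return any(has_linkage(rest, edges - used)
               for used in simple_paths(s, t, edges))

# One crossing instance in P3 x P6: opposite corner pairs.
E = grid_edges(3, 6)
print(has_linkage([((1, 1), (3, 6)), ((3, 1), (1, 6))], E))  # True
```

The search enumerates simple paths for the first pair and recurses on the remaining edge set; it is exponential in general and is only meant for small instances such as the grids appearing in this proof.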
\begin{figure}[htp]
\begin{center}
\tikzstyle{A} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\begin{tikzpicture}
\draw[line width=1.5pt] (1.5,3)--(1.5,.5)--(3,.5);
\draw[line width=1.5pt] (1,3)--(1,1.5)--(3,1.5);
\draw[->,double](.5,3)--(1.9,3);
\draw[->,double](3,1)--(3,1.9);
\foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y);
\foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3);
\foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3}
\node[V]() at (\x,\y){};
\foreach \x in {.5,1,1.5}\node[T]() at (\x,3){};
\foreach \y in {.5,1,1.5}\node[T]() at (3,\y){};
\node() at (3.3,1){$t_4$};
\node() at (3.3,1.5){$t_1$};
\node() at (3.3,.5){$t_2$};
\node() at (3.3,2.15){$t_4^\prime$};
\node() at (2.15,3.3){$s_3^\prime$};
\node() at (.5,3.3){$s_3$};
\node() at (1,3.3){$s_2$};
\node() at (1.5,3.3){$s_1$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\draw[line width=1.5pt] (1,2)--(1,1)--(2,1) (1.5,3)--(1.5,1.5)--(3,1.5);
\draw[->, line width=1.2] (1.5,3)--(1.9,3);
\draw[->, line width=1.2] (2.5,1.5)--(2.5,1.9);
\foreach \y in {.5,1,1.5,2,2.5,3} \draw (.5,\y)--(3,\y);
\foreach \x in {.5,1,1.5,2,2.5,3} \draw (\x,.5)--(\x,3);
\foreach \x in {.5,1,1.5,2,2.5,3} \foreach \y in {.5,1,1.5,2,2.5,3}
\node[V]() at (\x,\y){};
\foreach \x in {2.5,3}\node[A]() at (\x,1.5){};
\foreach \y in {3}\node[A]() at (1.5,\y){};
\node[A]() at (2,1){}; \node[A]() at (1,2){};
\node[A]() at (2,3){}; \node[A]() at (2.5,2){};
\node() at (2.3,.75){$t_1^\prime$};
\node() at (2.75,1.27){$t_4^\prime$};
\node() at (2.75,2.2){$t_4^*$};
\node() at (3.3,1.27){$t_2^\prime$};
\node() at (1.3,3.3){$s_3^\prime$};
\node() at (2.2,3.3){$s_3^*$};
\node() at (1.3,2.78){$s_2^\prime$};
\node() at (.75,2.3){$s_1^\prime$};
\end{tikzpicture}
\end{center}
\caption{Case (VI)}
\label{VI}
\end{figure}
For the diagram (VI) suppose that $s_1,s_2,s_3\in A(1)\cap NW$ and $t_1,t_2,t_4\in B(6)\cap SE$. Mate $s_3$ into
$s_3^\prime\in B(4)$ along $A(1)$, and mate $t_4$ into
$t_4^\prime\in A(3)$ along $B(6)$.
Since $s_3^\prime,t_3,s_4,t_4^\prime\in NE$, and NE is weakly $2$-linked, a linkage can be completed in NE for $\pi_3,\pi_4$. For the pairs $\pi_1,\pi_2$ a linkage can be obtained easily by taking shortest paths through SW as shown in the left of Fig.\ref{VI}.
Assume now that the terminal set in one of the quadrants NW and SE is not of type $T_1$ as before. Then we apply Lemma \ref{heavy4} for NW with $A=B(3)\cap NW$ and $b=(3,2)$, and we apply Lemma \ref{heavy4} for SE with $A=A(4)\cap SE$ and $b=(5,4)$. Then by the pigeonhole principle, we obtain a common index $\ell\in\{1,2\}$, say $\ell=1$, which satisfies: there exists a path in NW
from $s_1$ to $s_1^\prime=(3,2)$, and there exists a path in SE from $t_1$ to $t_1^\prime=(5,4)$, furthermore, terminals $s_2,s_3\in NW$, $t_2,t_4\in SE$ are mated into not necessarily distinct vertices $s_2^\prime,s_3^\prime\in B(3)$ and $t_2^\prime,t_4^\prime\in A(4)$ using edge disjoint mating paths. We take an $s_2^\prime,t_2^\prime$-path in $B(3)\cup A(4)$ to complete a linkage for $\pi_2$. Then
the mating paths to
$s_3^\prime,t_4^\prime$ are extended into
$s_3^*,t_4^*\in NE$. Since
NE is weakly $2$-linked, a linkage can be completed there for $\pi_3,\pi_4$ (see the right of Fig.\ref{VI}).\\
For q-diagram (VII) we apply Lemma \ref{Caforpq} (i) with $Q=NW$ and $y_0=(1,3)$. W.l.o.g. we assume that there is a framing in $NW$
for $\pi_1,\pi_2$ with $C_1$ and a mating of $s_3$ into $y_0$. We extend this mating path to $s_3^*=(1,4)\in NE$.
Next we apply Lemma \ref{12toCa} with $Q=SE$,
and $p=1, q=2$. Thus we obtain a framing in $SE$ for $\pi_1,\pi_2$ to
$C_\alpha$, for some $\alpha\in\{0,1\}$.
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=3pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\begin{tikzpicture}
\draw[->,line width=1.2pt] (0,5) -- (0,4) -- (.9,4);
\draw[->,line width=1.2pt] (1,5) -- (1,4.1);
\draw[->,line width=1.2pt] (4,2) -- (3.1,2);
\draw[->,line width=1.2pt] (0,3) -- (0,3) -- (2,3) -- (2,4.9);
\draw[->,line width=1.2pt] (2,5) -- (2.9,5);
\draw[->,line width=1.2pt] (5,2) -- (5,0) -- (3,0) -- (3,1.9);
\draw[->,line width=1.2pt] (5,1)--(4.1,1);
\draw[snake,line width=.5pt] (1,2) -- (3,2) -- (3,4) -- (1,4) -- (1,2) (3.1,5) -- (4,5) -- (4,1);
\draw[dashed] (0,4) -- (0,0) -- (3,0) (4,0)--(4,1)--(0,1) (5,2)--(5,5)--(4,5);
\draw[dashed] (0,5) -- (2,5) (3,4)--(5,4) (2,3)--(5,3) (0,2)--(1,2) (4,2)--(5,2);
\draw[dashed] (1,0) -- (1,2) (2,0)--(2,3);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T](s4) at (3,4){};
\node[txt]() at (3.35,4.3) {$s_4$};
\node[T](t4) at (2,1){};
\node[txt]() at (1.7,.7) {$t_4$};
\node[T,label=right:$t_3$](t3) at (5,1){};
\node[T](t1) at (4,2){};
\node[txt]() at (4.35,1.7) {$t_1$};
\node[txt]() at (4.3,.7) {$t_3^\prime$};
\node[T,label=right:$t_2$](t2) at (5,2){};
\node[T,label=above:$s_1$](s1) at (0,5){};
\node[T,label=above:$s_2$](s2) at (1,5){};
\node[T,label=left:$s_3$](s3) at (0,3){};
\node(s3*) at (3,5.35){$s_3^*$};
\node[M] () at (1,4) {};
\node[M] () at (3,2) {}; \node[M] () at (4,1) {};
\node[M,label=above:$y_0$] () at (2,5) {};
\node[M] () at (3,5) {};
\node[txt]() at (1.4,3.6) {$D$};
\node[txt]() at (3.6,3.6) {$P_3^*$};
\end{tikzpicture}
\begin{tikzpicture}
\draw[double,line width=.3pt] (1,2) -- (3,2) -- (3,4) -- (1,4) -- (1,2) (3,5) -- (4,5) -- (4,1) -- (5,1);
\draw[double,line width=.3pt] (0,3) -- (2,3) -- (2,5) -- (3,5)
(0,5) -- (0,4) -- (1,4) (1,5)--(1,4);
\draw[double,line width=.3pt] (4,2) -- (3,2) (5,2) -- (5,0)--(3,0)--(3,2);
\draw[line width=2.2pt] (4,5) -- (5,5) -- (5,3) -- (2,3) -- (2,0);
\draw[line width=2.2pt] (3,5) -- (3,4) -- (5,4);
\draw[line width=2.2pt] (1,1)-- (1,2) -- (0,2) -- (0,0) -- (2,0) -- (2,2);
\draw[dashed] (0,5)--(2,5) (0,4)--(0,2) (5,3)--(5,2)--(4,2) (0,1)--(4,1);
\draw[dashed] (0,1)--(1,1) (4,0)--(4,1) (1,0)--(1,1) (2,0)--(3,0);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[B]()at (4,3) {};
\node[T](s4) at (3,4){};
\node[txt]() at (3.35,4.3) {$s_4$};
\node[T](t4) at (2,1){};
\node[txt]() at (1.7,.7) {$t_4$};
\node[T,label=right:$t_3$](t3) at (5,1){};
\node[T](t1) at (4,2){};
\node[txt]() at (4.35,1.7) {$t_1$};
\node[T,label=right:$t_2$](t2) at (5,2){};
\node[T,label=above:$s_1$](s1) at (0,5){};
\node[T,label=above:$s_2$](s2) at (1,5){};
\node[T,label=left:$s_3$](s3) at (0,3){};
\node[txt]() at (2.6,5.3) {$P_3$};
\node[txt]() at (2.6,3.6) {$P_2$};
\node[txt]() at (1.4,2.4) {$P_1$};
\end{tikzpicture}
\end{center}
\caption{Case (VII)}
\label{VII}
\end{figure}
For $\alpha =1$, a linkage for $\pi_1,\pi_2$ is completed along $C_1$ and $t_3$ is mated in SE to $(4,4)\in C_0$. It remains to build a framing in $NE$ for $\pi_3,\pi_4$ with $C_0$. For this purpose we apply Lemma \ref{frame} with $s_3^*,s_4\in NE$ and mate $t_4$ in SW to $C_0$ not using edges of $C_1$.
For $\alpha =0$, the solution is obtained by combining the frames as follows. Let $D$ be the $8$-cycle spanned by the neighbors of $(3,3)$ (see the left of Fig.\ref{VII}). Observe that no edges of $D$ have been used by the mating paths in the two framings. Thus a linkage $P_1,P_2$ for $\pi_1,\pi_2$ is completed
around $D$. A linkage $P_3$ for $\pi_3$ can be completed by a path
$P^*_3\subset
A(1)\cup B(5)$ from $s_3^*$ to $t_3^\prime=(5,5)$. The right picture in Fig.\ref{VII} shows that $s_4$ and $t_4$ are not disconnected by the linkage built so far: the tree highlighted in the picture
saturates all vertices of $NE\cup SW$ and is edge disjoint from $P_1\cup P_2\cup P_3$.
Hence
there is a linkage for $\pi_4$.\\
A.4: $\|NW\|=4$, let $s_1,s_2,s_3,s_4\in NW$.
Two cases will be distinguished according to whether or not there is a quadrant $Q\neq$NW with three or more terminals.
By symmetry, we may assume that $\|NE\|\geq \|SW\|$.
A.4.1: $\|Q\|\geq 3$, where $Q=$NE or SE.
In each case we apply Lemma \ref{heavy4} twice: for $NW$ with $A=A(3)\cap NW$, $B=B(3)\cap NW$, then for $Q$, with $A=A(3)\cap Q$,
$B=B(4)\cap Q$, if $Q=$NE or
with
$A=B(4)\cap Q$, $B=A(4)\cap Q$,
if $Q=$SE.
Assume that there is a common index $\ell$, $1\leq \ell\leq 4$, resulting from the two applications of Lemma \ref{heavy4}, such that
$s_\ell$ is linked to $(2,3)$ in $NW$
and $t_\ell\in NE$ is linked in $NE$ to $(2,4)$ or
$t_\ell\in SE$ is linked in SE to $(4,4)$, furthermore, the remaining (five or six) terminals are mated into $A(3)\cap NW$ and
into $A(3)\cap NE$ or $B(4)\cap SE$.
First we complete a linkage for $\pi_\ell$ by the inclusion of the edge $(2,3)-(2,4)$. W.l.o.g. assume that $\ell=1$. Lemma \ref{heavy4} also implies that the mating paths leading from $s_2,s_3,s_4$ to the not necessarily distinct mates
$s_2^\prime,s_3^\prime,s_4^\prime$ are not using the edges of
$A(3)$, and similarly, the mating paths in $Q$ to the not necessarily distinct mates
$t_i^\prime\in Q$ are not using the edges of $A(3)$ or $B(4)$, for $Q=$NE or SE.
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\begin{tikzpicture}
\draw[double,line width=.5pt] (0,0) -- (0,2)--(5,2)--(5,0)--(0,0);
\draw[double,line width=.5pt] (3,0) -- (3,2);
\draw[double,line width=.5pt](4,2)--(4,0);
\draw[double,line width=.5pt] (0,1) -- (5,1) (0,2) -- (5,2);
\draw[double,line width=.5pt] (1,0) -- (1,2) (2,0) -- (2,2);
\draw[->,line width=1.2pt] (0,5)--(0,3.1);
\draw[->,line width=1.2pt] (0,3)--(0,2.1);
\draw[->,line width=1.2pt] (1,5)--(2,5)--(2,3.1);
\draw[->,line width=1.2pt] (2,3)--(2,2.1);
\draw[->,line width=1.2pt] (5,4)--(5,3.1);
\draw[->,line width=1.2pt] (5,3)--(5,2.1);
\draw[line width=2.2pt] (1,3)--(1,4)--(4,4)--(4,5);
\draw[line width=2.2pt] (2,3)--(3,3)--(3,4);
\draw[dashed] (5,3)--(3,3)-- (3,2) (3,5)-- (3,4) (0,3)--(2,3);
\draw[dashed] (0,4)--(1,4)--(1,5)--(0,5) (1,2)--(1,3) (2,5)--(5,5)--(5,4)--(4,4)--(4,2) ;
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[B]()at (4,3) {};
\node[M]()at (2,4) {};\node[M]()at (3,3) {};
\node[M]()at (0,3) {};\node[M]()at (5,3) {};
\node[T](s2) at (2,3){};
\node[txt]() at (.3,3.3) {$s_3^\prime$};
\node[T,label=above:$s_3$](s3) at (0,5){};
\node[T,label=above:$s_4$](s4) at (1,5){};
\node[T](s1) at (1,3){}; \node[txt](s1) at (.75,2.7) {$s_1$};
\node[txt]() at (2.3,2.7) {$s_4^\prime$};
\node[txt]() at (3.3,2.7) {$t_2^\prime$};
\node[txt]() at (4.7,3.3) {$t_4^\prime$};
\node[txt](s2) at (1.7,2.7) {$s_2$};
\node[txt](s1') at (1.7,4.3) {$s_1^\prime$};
\node[txt](t1') at (3.3,4.3) {$t_1^\prime$};
\node[T](t2) at (3,4){};
\node() at (3.3,3.75){$t_2$};
\node[T,label=above:$t_1$](t1) at (4,5){};
\node[T,label=right:$t_4$](t4) at (5,4){};
\node[T,label=below:$t_3$](t3) at (3,0){};
\node(s1*) at (2.3,1.75){$s_4^*$}; \node[M] () at (2,2) {};
\node(s2*) at (.3,1.75){$s_3^*$}; \node[M]()at (0,2) {};
\node(t1*) at (4.7,1.7){$t_4^*$}; \node[M]()at (5,2) {};
\node[txt]() at (2.6,3.4) {$P_2$};
\node[txt]() at (2.5,4.4) {$P_1$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\draw[double,line width=.5pt] (0,0) -- (2,0) (0,1)--(2,1)
(0,2)--(2,2);
\draw[double,line width=.5pt] (0,0) -- (0,2)
(1,0) -- (1,2) (2,0) -- (2,2);
\draw[->,line width=1.2pt] (0,3)--(0,2.1);
\draw[->,line width=1.2pt] (1,3)--(1,2.1);
\draw[->,line width=1.2pt] (5,1)--(3.1,1);
\draw[->,line width=1.2pt] (3,1)--(2.1,1);
\draw[->,line width=1.2pt] (1,4)--(1,3.1);
\draw[->,line width=1.2pt] (3,2)--(2.1,2);
\draw[line width=2.2pt] (1,5)--(1,4)--(4,4)--(4,0)--(3,0);
\draw[line width=2.2pt] (0,5)--(0,3)--(3,3)--(3,2)--(5,2);
\draw[dashed] (0,5)--(5,5)--(5,0)--(4,0) (3,5)-- (5,5)-- (5,4) (1,4)--(1,5) (2,5)--(2,2);
\draw[dashed] (0,4)--(0,3)--(1,3) (4,4)--(5,4) (4,5)--(4,2)
(3,5)--(3,3)--(5,3) (2,0)--(3,0)--(3,2) (0,4)--(1,4);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[B]()at (4,3) {};
\node[T](s4) at (1,4){}; \node[txt]() at (1.3,3.7) {$s_4$};
\node() at (1.3,2.7) {$s_4^\prime$};
\node() at (2.3,4.3) {$s_1^\prime$};
\node[T](t4) at (3,2){}; \node()at(2.75,1.75) {$t_4$};
\node[T,label=left:$s_3$](s3) at (0,3){};
\node(s3') at (.3,2.7) {$s_2^\prime$};
\node[T,label=below:$t_1$](t1) at (3,0){};
\node[T,label=right:$t_2$](t2) at (5,2){};
\node[txt](t3') at (3.35,.7) {$t_3^\prime$};
\node[txt](t1') at (4.35,1.7) {$t_1^\prime$};
\node[txt](t2') at (3.3,1.7) {$t_2^\prime$};
\node[T,label=above:$s_2$](s2) at (0,5){};
\node[T,label=above:$s_1$](s1) at (1,5){};
\node[T,label=right:$t_3$](t3) at (5,1){};
\node[M]()at (0,2) {}; \node[M]()at (1,2) {};
\node[M]()at (4,2) {}; \node[M]()at (2,2) {};
\node[M]()at (2,4) {}; \node[M]()at (3,1) {};
\node[M]()at (2,1) {};
\node(s4*) at (1.25,1.75){$s_4^*$};
\node(s3*) at (.3,1.75){$s_3^*$};
\node[M]()at (1,3) {};
\node(t3*) at (1.7,.7){$t_3^*$};
\node(t3*) at (1.758,1.7){$t_4^*$};
\node[txt]() at (2.6,2.6) {$P_2$};
\node[txt]() at (1.4,4.4) {$P_1$};
\end{tikzpicture}
\end{center}
\caption{$\|NW\|=4$, $\|Q\|\geq 3$}
\label{NW4}
\end{figure}
Next a linkage for another pair
$\pi_j$, $2\leq j\leq 4$, is completed along $A(3)$ (if $Q=$NE) or along $A(3)\cup B(4)$ (if $Q=$SE), where $j$ is selected as follows: $j$ is arbitrary provided all mates are distinct; $j$ is an index such that $s_j^\prime\in A(3)$, or $t_j^\prime\in A(3)$, or $t_j^\prime\in B(4)$
is the only vertex hosting two mates in $NW$ (or in $Q$); if both NW and $Q$ contain repeated mates,
then $j\in\{2,3,4\}$ is selected to satisfy that both $s_j^\prime$ and $t_j^\prime$ are repeated mates (such an index $j$ exists by the pigeonhole principle). W.l.o.g. let $j=2$.
Finally, a linkage can be obtained by extending (three or four) mating paths
from the remaining distinct mates into neighbors in $SW\cup SE$ (if $Q=$NE) or into neighbors in SW (if $Q=$SE). Then the linkage for
$\pi_3,\pi_4$ can be completed in the $3\times 6$
grid $SW\cup SE$ or in the quadrant SW, both of which are weakly $2$-linked by Lemma \ref{w2linked} (Fig.\ref{NW4} shows solutions).
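The weak $2$-linkedness of a single quadrant, invoked here via Lemma \ref{w2linked}, is small enough to confirm exhaustively by machine. The sketch below (illustrative only; Python assumed, names ours) checks every choice of four distinct terminals in the $3\times 3$ grid under all three pairings.

```python
from itertools import combinations, product

def neighbors(v, rows, cols):
    """Grid neighbours of v = (r, c) in the rows x cols grid."""
    r, c = v
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(x, y) for x, y in cand if 1 <= x <= rows and 1 <= y <= cols]

def linkage_exists(s1, t1, s2, t2, rows, cols):
    """Are there edge disjoint s1-t1 and s2-t2 paths? (brute-force DFS)"""
    def connected(s, t, banned):
        frontier, seen = [s], {s}
        while frontier:
            v = frontier.pop()
            if v == t:
                return True
            for w in neighbors(v, rows, cols):
                if w not in seen and frozenset({v, w}) not in banned:
                    seen.add(w)
                    frontier.append(w)
        return False

    stack = [(s1, {s1}, frozenset())]  # DFS over simple s1-t1 paths
    while stack:
        v, visited, used = stack.pop()
        if v == t1:
            if connected(s2, t2, used):
                return True
            continue
        for w in neighbors(v, rows, cols):
            if w not in visited:
                stack.append((w, visited | {w}, used | {frozenset({v, w})}))
    return False

# Every choice of four distinct terminals in the 3x3 quadrant, under each of
# the three possible pairings, admits an edge disjoint linkage.
V = list(product(range(1, 4), repeat=2))
for a, b, c, d in combinations(V, 4):
    for (s1, t1), (s2, t2) in (((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))):
        assert linkage_exists(s1, t1, s2, t2, 3, 3)
print("every pairing of four distinct terminals in the 3x3 grid is linked")
```

The same routine applies to the $3\times 6$ grid $SW\cup SE$, only more slowly, since the enumeration is exponential in the grid size.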
Therefore a solution is obtained once a common index $\ell$ can be selected
to link $\pi_\ell$ as above. By the pigeonhole principle there is a common index $\ell$ unless the terminal set in NW is of type $T_2$, and
the terminal set in $Q$ is of type $T_1$ or $T_2$ in Fig.\ref{except} (ii).
We handle the exceptional cases one-by-one.
Let $s_1=(2,3), s_2=(1,3)$, $s_3=(1,2)$ and $s_4=(1,1)$.
For $\|SE\|=4$, if the terminals in SE are located according to type $T_2$ as well, then the argument using the common index $\ell$ can be repeated by switching the roles of NE and SE. Since the pattern $T_2$ is not symmetric about the diagonal, the solution above works.
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\begin{tikzpicture}
\foreach \x in {0,...,5}
\foreach \y in {0,1,2,3,4,5}
{ \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);}
\draw[line width=2.2pt] (2,5)--(4,5)
(1,5)--(1,4)--(3,4)--(3,5) ;
\draw[line width=2.2pt] (0,5)--(0,2)--(3,2)--(3,4)
(2,4)--(2,3)--(5,3)--(5,5) ;
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[M]() at (5,3){}; \node[M]() at (3,2){};
\node[M]() at (0,2){}; \node[M]() at (2,3){};
\node() at (5.3,2.7) {$t_1^\prime$};
\node() at (1.7,2.7) {$s_1^\prime$};
\node[T](s1) at (2,4){}; \node() at (1.75,3.75) {$s_1$};
\node[T,label=above:$s_4$](s4) at (0,5){};
\node[T,label=above:$s_3$](s3) at (1,5){};
\node[T,label=above:$s_2$](s2) at (2,5){};
\node[T,label=above:$t_2$](t2) at (4,5){};
\node[T,label=above:$t_1$](t1) at (5,5){};
\node[T,label=above:$t_3$](t3) at (3,5){};
\node[T](t4) at (3,4){}; \node() at (3.25,3.75) {$t_4$};
\node() at (3.25,1.75) {$t_4^\prime$};
\node() at (.3,1.75) {$s_4^\prime$};
\node() at (.4,2.4) {$P_4$};
\node() at (4.6,3.4) {$P_1$};
\node() at (3.6,4.6) {$P_2$};
\node() at (1.4,4.4) {$P_3$};
\end{tikzpicture}
\end{center}
\caption{$\|NE\|=4$}
\label{diagonal2}
\end{figure}
For $\|NE\|=4$, we have $\{t_3,t_4\}=\{(1,4),(2,4)\}$ and $\{t_1,t_2\}=\{(1,5),(1,6)\}$. Thus there are
two pairs in $A(1)$, say
$\pi_2,\pi_3\subset A(1)$. Their linkage can be done
using the $s_2,t_2$-path $P_2\subset A(1)$
and the $s_3,t_3$-path $P_3\subset (B(2)\cup A(2)\cup B(4))$. The remaining terminals can be mated along their distinct columns into vertices $s_1^\prime,t_1^\prime\in A(3)$ and $s_4^\prime,t_4^\prime\in A(4)$. The linkage
for $\pi_1,\pi_4$ can be completed along $A(3)$ and $A(4)$, respectively (see Fig.\ref{diagonal2}).
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\begin{tikzpicture}
\foreach \x in {0,...,5}
\foreach \y in {0,1,2,3,4,5}
{ \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);}
\draw[->,line width=1.2pt] (5,3)--(5,2.1);
\draw[double,line width=.5pt] (1.7,5.6)--(5.7,5.6)-- (5.7,2.7)--(1.7,2.7)--(1.7,5.6);
\draw[double,line width=.5pt] (1.3,5.6)--(-.4,5.6)--(-.4,-.7)--(5.7,-.7) --(5.7,2.35)--(1.3,2.35)--(1.3,5.6) ;
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[M](t3') at (5,2){};
\node() at (5.3,2) {$t_3^\prime$};
\node[T](s1) at (2,4){}; \node() at (2.3,3.7) {$s_1$};
\node[T,label=above:$s_4$](s4) at (0,5){};
\node[T,label=above:$s_3$](s3) at (1,5){};
\node[T,label=above:$s_2$](s2) at (2,5){};
\node[T,label=right:$t_2$](t2) at (5,4){};
\node[T,label=right:$t_1$](t1) at (5,5){};
\node[T,label=below:$t_4$](t4) at (2,0){};
\node[T,label=right:$t_3$](t3) at (5,3){};
\node[txt]() at (5.4,.4) {$G^\prime$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\foreach \x in {0,...,5}
\foreach \y in {0,1,2,3,4,5}
{ \draw (0,\y)--(5,\y); \draw (\x,0)--(\x,5);}
\draw[double,line width=.5pt] (1.7,5.6)--(5.7,5.6)-- (5.7,-.7)--(3.7,-.7)--(3.7,3.55)--(1.7,3.55)--(1.7,5.6);
\draw[double,line width=.5pt] (1.3,5.6)--(-.4,5.6)--
(-.4,-.7)--(3.3,-.7) --(3.3,3.35)--(1.3,3.35)--(1.3,5.6) ;
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T](s1) at (2,4){}; \node() at (2.3,3.7) {$s_1$};
\node[T,label=above:$s_4$](s4) at (0,5){};
\node[T,label=above:$s_3$](s3) at (1,5){};
\node[T,label=above:$s_2$](s2) at (2,5){};
\node[T,label=below:$t_2$](t2) at (4,0){};
\node[T,label=below:$t_1$](t1) at (5,0){};
\node[T,label=below:$t_3$](t3) at (3,0){};
\node[T](t4) at (2,2){}; \node() at (2.3,2.3){$t_4$};
\node[txt]() at (5.4,.4) {$G^\prime$};
\end{tikzpicture}
\end{center}
\caption{}
\label{Qexcept}
\end{figure}
For $\|NE\|=3$ we have $\{t_1,t_2\}=\{(1,6),(2,6)\}$ and $(3,6)=t_3$ or $t_4$. First we mate the terminal at $(3,6)$ to $(4,6)$. The grid
$G^\prime=(SW\cup SE)\cup (B(1)\cup B(2))$ is
$2$-path-pairable, thus a linkage for $\pi_3,\pi_4$ can be completed in $G^\prime$. In $G-G^\prime$, which is $2$-path-pairable as well, there is a linkage for $\pi_1,\pi_2$ (see the left of Fig. \ref{Qexcept}).
For $\|SE\|=3$ we have $\{t_1,t_2\}=\{(6,5),(6,6)\}$ and $(6,4)=t_3$ or $t_4$. The grid
$G^\prime=(A(1)\cup A(2))\cup (B(5)\cup B(6))\setminus (B(1)\cup B(2))$ and its complement are both $2$-path-pairable. Thus $G^\prime$ contains a linkage for $\pi_1,\pi_2$, and $G-G^\prime$ contains a linkage for $\pi_3,\pi_4$ (see the right of Fig. \ref{Qexcept}).\\
In the remaining cases we have $\|Q\|\leq 2$, for every $Q\neq$ NW.
Since $2\geq \|NE\|\geq \|SW\|$, we have either
$\|SE\|=2$ and $\|NE\|=\|SW\|= 1$ or $\|NE\|=2$.\\
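This little case split can be double-checked mechanically. The following sketch (illustrative only; Python assumed) enumerates the admissible distributions of $t_1,\dots,t_4$ over NE, SW, SE.

```python
from itertools import product

# Distributions (||NE||, ||SW||, ||SE||) of the four terminals t_1, ..., t_4
# over the quadrants other than NW, subject to ||Q|| <= 2 for each of them
# and, by the symmetry assumption, ||NE|| >= ||SW||.
cases = [(ne, sw, se)
         for ne, sw, se in product(range(3), repeat=3)
         if ne + sw + se == 4 and ne >= sw]

# Exactly the split claimed above: ||NE|| = 2, or else
# ||SE|| = 2 with ||NE|| = ||SW|| = 1.
assert all(ne == 2 or (ne, sw, se) == (1, 1, 2) for ne, sw, se in cases)
print(sorted(cases))  # [(1, 1, 2), (2, 0, 2), (2, 1, 1), (2, 2, 0)]
```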
A.4.2: $\|SE\|=2$ and $\|NE\|=\|SW\|= 1$.
We apply Lemma \ref{heavy4} for NW with
$A=A(3)\cap NW$ and $B=B(3)\cap NW$. There are at least two terminals that can be mated into $(2,3)$, hence by the pigeonhole principle, there is an index $1\leq \ell\leq 4$ such that $s_\ell$ is mated to $s_\ell^\prime=(2,3)$ and
$t_\ell\in NE\cup SE$. W.l.o.g. we may assume that $\ell=1$, and the terminals
$s_2,s_3,s_4$ are mated into $s_2^\prime,s_3^\prime,s_4^\prime\in A(3)$ by the lemma.
If $t_1\in NE$, then a linkage for $\pi_1$ is completed by an $s_1^\prime,t_1$-path. Moreover, if
$s_2^\prime,s_3^\prime,s_4^\prime$ are distinct
then the linkage for the remaining terminals can be completed
in the $3$-path-pairable grid $G^*=G-(A(1)\cup A(2))$.
Assume now that $s_2^\prime,s_3^\prime,s_4^\prime$ are not distinct,
let $w\in A(3)\cap NW$ be the mate of two terminals of NW, that is $s_i^\prime=s_j^\prime=w$, for some $2\leq i<j\leq 4$ (actually, one of them is a terminal, $s_i^\prime=s_i$ or $s_j^\prime=s_j$). Since $\|SW\|\leq 1$,
$t_i$ or $t_j$ is a terminal in $NE\cup SE$, say $t_i\in NE\cup SE$; let $t_k$ be the third terminal in $NE\cup SE$ (that is, $t_1,t_i,t_k\in NE\cup SE$).
We plan to specify a linkage for $\pi_1$ by mating $t_1$ to $s_1^\prime$, then specify a linkage for $\pi_i$
by mating $t_i$ to a vertex of $A(3)$; the remaining terminals
can be mated into the weakly $2$-linked quadrant SW, where a linkage for $\pi_j,\pi_k$ can be found. The plan is easy to realize provided $t_1\in NE$: it is enough to mate $t_i\in SE$ along its column to $t_i^\prime\in A(3)$, then to mate $s_j^\prime,s_k^\prime\in A(3)$ to $s_j^*,s_k^*\in A(4)$, and to mate $t_k\in SE$ along its row to $t_k^*\in B(3)$.
Assume now that
$t_1\in SE$.
We introduce three auxiliary terminals in NE,
let $x=(2,4)$, $x^\prime=(3,5)$, and $y=(3,4)$. There exist an $x,x^\prime$-path $X$ and
an edge disjoint path $Y$ from the terminal of NE to $y$ not using edges of $A(3)$. If the terminal of NE is $t_k$, and thus $t_1,t_i\in SE$, then we extend $Y$ to
$t_k^*=(4,3)$ by adding the path $y-(4,4)-t_k^*$, furthermore, we mate
$t_1$ to $t_1^\prime=(4,5)$ and we mate $t_i$ to $t_i^\prime=(4,6)$ (see the left of Fig.\ref{adhoc}).
If the terminal of NE is $t_i$, and thus $t_1,t_k\in SE$,
let $t_i^\prime=y$ be the mate of $t_i$, we mate
$t_1$ to $t_1^\prime=(4,5)$, and we mate $t_k$ to
$t_k^\prime=(6,3)$.
In each case we complete a linkage for $\pi_1$ by adding the path $X$ and the two edges $s_1^\prime x$ and $t_1^\prime x^\prime$; and we complete a linkage for $\pi_i$ by adding the $s_i^\prime, t_i^\prime$-path in $A(3)$. The unpaired terminals/mates from
$A(3)\cup B(4)$ are mated
into the weakly $2$-linked quadrant SW to complete a solution. An example is shown in the left of Fig.\ref{adhoc}.
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\begin{tikzpicture}
\draw[line width=2.2pt] (0,4)--(3,4)(4,3)--(4,2)--(4,1)--(3,1);
\draw[line width=2.2pt] (1,3)--(5,3)--(5,2);
\draw[snake](3,4)--(4,4)--(4,3) (4,5)--(3,5)--(3,3);
\foreach \x in {0,1,2} \draw[double,line width=.5pt] (\x,2)--(\x,0);
\foreach \y in {0,1,2} \draw[double,line width=.5pt] (0,\y)--(2,\y);
\draw[->,line width=1.2pt] (3,0)-- (5,0)--(5,1.9);
\draw[->,line width=1.2pt] (0,5)--(0,3.1);
\draw[->,line width=1.2pt] (0,3)--(0,2.1);
\draw[->,line width=1.2pt] (1,5)--(1,3.1);
\draw[->,line width=1.2pt] (1,3)--(1,2.1);
\draw[->,line width=1.2pt] (3,3)--(3,2)--(2.1,2);
\draw[dashed] (0,5)--(3,5) (4,4)--(4,5)--(5,5)--(5,3)(4,0)--(4,1)--(5,1);
\draw[dashed] (2,5)--(2,2) (2,0)--(3,0)--(3,2)--(5,2)(2,1)--(3,1)(4,4)--(5,4);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T](t1) at (3,1){}; \node()at(3.3,.7){$t_1$};
\node[T,label=above:$t_k$](tk) at (4,5){};
\node[T](ti') at (5,2){}; \node() at (5.3,1.75) {$t_i^\prime$};
\node[T,label=below:$t_j$](tj) at (0,0){};
\node[T,label=below:$t_i$](ti) at (3,0){};
\node[T,label=above:$s_k$](sk) at (0,5){};
\node[T,label=left:$s_1$](s1) at (0,4){};
\node[T,label=above:$s_j$](sj) at (1,5){};
\node[T](si) at (1,3){}; \node() at (.7,2.7) {$s_i$};
\node[M]()at (4,3) {}; \node() at (4.3,3.3) {$x^\prime$};
\node[M]()at (3,4) {}; \node() at (3.3,4.3) {$x$};
\node[M]()at (3,3) {}; \node() at (3.25,3.25) {$y$};
\node() at (.7,3.3) {$w$};
\node[M,label=left:$s_k^\prime$]()at (0,3) {};
\node[M]()at (2,4) {}; \node() at (1.7,4.3) {$s_1^\prime$};
\node[M]()at (4,2) {}; \node() at (4.3,1.7) {$t_1^\prime$}; \node[T](s3) at (1,3){}; \node()at(1.35,3.3){$s_j^\prime$};
\node[M]()at (1,2) {}; \node() at (1.35,2.3) {$s_j^*$};
\node[M]()at (2,2) {}; \node() at (2.35,2.3) {$t_k^*$};
\node[M,label=left:$s_k^*$]()at (0,2) {};
\node() at (3.65,3.65) {$X$};
\node() at (2.7,4.7) {$Y$};
\node() at (.4,.4) {$G^*$};
\end{tikzpicture}
\hskip1cm
\begin{tikzpicture}
\foreach \x in {0,...,5} \draw[double,line width=.5pt] (\x,3)--(\x,0);
\foreach \y in {0,...,3} \draw[double,line width=.5pt] (0,\y)--(5,\y);
\draw[line width=2.2pt] (0,5)--(5,5)--(5,4);
\draw[->,line width=1.2pt] (2,4)--(0,4)--(0,3.1);
\draw[->,line width=1.2pt] (1,5)--(1,3.1);
\draw[->,line width=1.2pt] (2,5)-- (2,3.1);
\draw[->,line width=1.2pt] (4,5)-- (4,3.1);
\draw[dashed] (0,5)--(0,4) (2,4)--(5,4)--(5,3) (3,5) -- (3,3);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T,label=right:$t_4$](t2) at (5,4){};
\node[T](t4) at (2,2){}; \node()at(2.3,1.7){$t_2$};
\node[T,label=below:$t_1$](t1) at (5,0){};
\node[T,label=above:$t_3$](t3) at (4,5){};
\node[T,label=above:$s_1$](s1) at (2,5){};
\node[T,label=above:$s_4$](s4) at (0,5){};
\node[T,label=above:$s_3$](s3) at (1,5){};
\node[T](s2) at (2,4){}; \node() at (1.7,4.3) {$s_2$};
\node[M]()at (2,3) {}; \node() at (1.7,3.3) {$s_1^{*}$};
\node[M]()at (1,3) {}; \node() at (.7,3.3) {$s_3^*$};
\node[M]()at (4,3) {}; \node() at (3.7,3.3) {$t_3^*$};
\node[M]()at (0,3) {}; \node() at (-.3,3.3) {$s_2^*$};
\node() at (4.6,4.6) {$P_4$};
\node() at (.4,.4) {$G^*$};
\end{tikzpicture}
\end{center}
\caption{$\|NW\|=4$}
\label{adhoc}
\end{figure}
A.4.3: $\|NE\|=2$. The solution starts with Lemma \ref{heavy4} applied for NW with $A=A(3)\cap NW$ and $B=B(3)\cap NW$. Since $\|NW\|=4$, there are three terminals that can be mated to $b=(2,3)$ and the other ones to $A$ unless the four terminals in NW are located according to type $T_2$ in Fig.\ref{except}. First we sketch a solution for this exceptional case when
$\{s_1,s_2\}=\{(1,3),(2,3)\}$ and $\{s_3,s_4\}=\{(1,1),(1,2)\}$, furthermore, the two terminals in NE are $t_3,t_4$. The solution on the right of Fig.\ref{adhoc}
starts with a linkage $P_4$ for $\pi_4$ that uses no edges of NW other than those of $A(1)\cap NW$, and with a mating of the other terminals of $NW\cup NE$ into distinct vertices
$s_1^*,s_2^*,s_3^*,t_3^*\in A(3)$. The linkage is completed for $\pi_1,\pi_2,\pi_3$ in the $3$-path-pairable $G^*=G-(A(1)\cup A(2))$.
Therefore, we may assume that the terminals in NW are not in position $T_2$, and when we apply Lemma \ref{heavy4} for NW with
$A=A(3)\cap NW$ and $B=B(3)\cap NW$, there are three terminals one can mate into $(2,3)$. By the pigeonhole principle, there is an index $1\leq \ell\leq 4$ such that $s_\ell$ is mated to $s_\ell^\prime=(2,3)$ and
$t_\ell\in NE\cup SE$. W.l.o.g. we may assume that $\ell=1$, and the terminals
$s_2,s_3,s_4$ are mated into $s_2^\prime,s_3^\prime,s_4^\prime\in A(3)$ by the lemma. If
$s_2^\prime,s_3^\prime,s_4^\prime$ are distinct, then we follow the solution given for the particular case above.
A linkage for $\pi_1$ is specified first, then
the $3$-path-pairability of $G^*=G-(A(1)\cup A(2))$ is used to obtain a linkage for the remaining pairs.
\begin{figure}[htp]
\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\hskip1cm
\begin{tikzpicture}
\foreach \x in {0,1,2} \draw[double,line width=.5pt] (\x,2)--(\x,0);
\foreach \y in {0,1,2} \draw[double,line width=.5pt] (0,\y)--(2,\y);
\draw[line width=2.2pt] (4,5)--(4,4)--(2,4);
\draw[line width=2.2pt] (5,2)--(5,3)--(1,3);
\draw[->,line width=1.2pt] (0,5)--(0,3.1);
\draw[->,line width=1.2pt] (1,5)--(1,3.1);
\draw[dashed] (0,5)--(5,5)--(5,4)(4,1)--(4,0)--(5,0)--(5,2);
\draw[dashed] (0,4)--(2,4)--(2,2)--(5,2) (2,0)--(4,0);
\draw[dashed] (3,5)--(3,0) (4,1)--(5,1)
(0,3)--(1,3)(5,3)--(5,4);
\draw[->,line width=1.2pt] (1,3)--(1,2.1);
\draw[->,line width=1.2pt] (4,3)--(4,2.1);
\draw[->,line width=1.2pt] (4,2)--(4,1)-- (2.1,1);
\draw[->,line width=1.2pt] (0,3)--(0,2.1);
\draw[->,line width=1.2pt] (5,4)--(4,4)--(4,3.1);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[M]() at (4,3){}; \node()at(4.3,2.7){$t_k^\prime$};
\node[T,label=above:$t_1$](t1) at (4,5){};
\node[T](tk) at (5,4){}; \node() at (5.3,3.75) {$t_k$};
\node[T,label=below:$t_j$](tj) at (0,0){};
\node[T](ti) at (5,2){}; \node() at (5.3,1.75) {$t_i$};
\node[T,label=above:$s_k$](sk) at (0,5){};
\node[T,label=left:$s_1$](s1) at (0,4){};
\node[T,label=above:$s_j$](sj) at (1,5){};
\node[T](si) at (1,3){}; \node() at (.7,2.7) {$s_i$};
\node() at (.7,3.3) {$w$};
\node[M]()at (2,4) {}; \node() at (1.7,4.3) {$s_1^\prime$};
\node[M]()at (3,4) {}; \node() at (2.7,4.3) {$t_1^\prime$};
\node[M]()at (3,1) {}; \node[M]()at (4,2) {};
\node[M,label=left:$s_k^\prime$]()at (0,3) {};
\node[M]()at (5,3) {}; \node() at (5.3,2.7) {$t_i^{\prime}$}; \node[M]()at (2,1) {}; \node()at(2.3,.7){$t_k^*$};
\node[T](sj) at (1,3){}; \node() at (1.35,3.3) {$s_j^\prime$};
\node[M]()at (1,2) {}; \node() at (1.35,2.3) {$s_j^*$};
\node[M,label=left:$s_k^*$]()at (0,2) {};
\node[txt]() at (3.6,4.4) {$P_1$};
\node[txt]() at (2.5,2.5) {$P_2$};
\node[txt]() at (.4,.4) {$G^*$};
\end{tikzpicture}
\hskip.3cm
\begin{tikzpicture}
\draw[line width=2.2pt] (0,4)--(4,4)--(4,5);
\draw[snake] (0,3)--(0,1)--(4,1)--(4,3);
\foreach \x in {1,2} \draw[double,line width=.5pt] (\x,3)--(\x,0);
\foreach \y in {0,3} \draw[double,line width=.5pt] (1,\y)--(2,\y);
\draw[->,line width=1.2pt] (0,5)--(0,3.1);
\draw[->,line width=1.2pt] (1,5)--(1,3.1);
\draw[->,line width=1.2pt] (4,4)--(4,3.1);
\draw[->,line width=1.2pt] (0,0)--(.9,0);
\draw[dashed] (0,5)--(5,5)--(5,0) -- (2,0) (0,0)--(0,1);
\draw[dashed] (4,1)--(5,1) (5,2)--(0,2)(0,3)--(1,3);
\draw[dashed] (3,5)-- (3,0) (2,3) --(5,3)(4,0)--(4,1);
\draw[dashed] (2,5)--(2,4)(4,4)--(5,4)(2,3)--(2,4);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T,label=above:$s_2$](s2) at (0,5){};
\node[T,label=left:$s_1$](s1) at (0,4){};
\node[T,label=above:$s_4$](s4) at (1,5){};
\node[T](s3) at (1,3){}; \node() at (.7,2.7) {$s_3$};
\node() at (.7,3.3) {$w$};
\node[T](t3) at (2,2){}; \node() at (2.3,1.7) {$t_3$};
\node[T,label=below:$t_4$](t4) at (0,0){};
\node[T](t2) at (4,4){}; \node() at (4.3,4.3) {$t_2$};
\node[T,label=above:$t_1$](t1) at (4,5){};
\node[M]()at (4,3) {}; \node() at (4.3,2.7) {$t_2^\prime$};
\node[M,label=left:$s_2^\prime$]()at (0,3) {};
\node() at (1.3,3.3) {$s_4^\prime$};
\node[M,label=below:$t_4^*$]()at (1,0) {};
\node[txt]() at (3.6,4.4) {$P_1$};
\node[txt]() at (1.5,.4) {$C$};
\node[txt]() at (3.4,1.4) {$X$};
\node[txt]() at (-.7,1) {$A(\ell)$};
\end{tikzpicture}
\end{center}
\caption{$\|NW\|=4$, $\|NE\|=2$}
\label{Q2}
\end{figure}
Thus we assume that
$s_2^\prime,s_3^\prime,s_4^\prime$ are not distinct.
Let $w\in A(3)\cap NW$ be the mate of two terminals of NW, that is $s_i^\prime=s_j^\prime=w$, for some $2\leq i<j\leq 4$ (actually, one of them is a terminal, $s_i^\prime=s_i$ or $s_j^\prime=s_j$). For $\|SW\|\leq 1$,
$t_i$ or $t_j$ is a terminal in $NE\cup SE$, say $t_i\in NE\cup SE$. Now $t_i$ is mated to $t_i^\prime\in A(3)$ and a linkage for $\pi_i$ is completed by adding the $s_i^\prime,t_i^\prime$-path in
$A(3)$.
Then we mate the unlinked terminals of $NW\cup NE$ into the weakly $2$-linked quadrant SW, where the linkage for the remaining pairs $\pi_j,\pi_k$ is completed (see the left of Fig.\ref{Q2}).
To tackle the last subcase we may assume that $t_1,t_k\in NE$ and
$t_i,t_j\in SW$. W.l.o.g. let $i=3,j=4$, that is
$s_3^\prime=s_4^\prime=w\in A(3)\cap NW$ and $t_3,t_4\in SW$. Let $P_1\subset NW\cup NE$ be a linkage for $\pi_1$, let
$s_2^\prime\in (A(3)\setminus\{w\})\cap NW$ and $t_2^\prime \in A(3)\cap NE$, as obtained before.
There is a row $A(\ell)$, $4\leq \ell\leq 6$, containing no terminal, thus a linkage for $\pi_2$ can be completed by
adding the path $X$ from $s_2^\prime$ to $t_2^\prime$ in the union of their columns and $A(\ell)$.
Define $C\subset SW$ to be the cycle bounded vertically by the two columns not containing $s_2^\prime$ and bounded horizontally by $A(3)$ and by row $A(5)$ if
$\ell=6$ or by row $A(6)$ if $\ell\neq 6$. In this way we obtain a frame $[C,w]$. Since $t_3,t_4\notin X$, if $t_3$ and/or $t_4$ is not in $C$ it can be mated easily into $C$ along its row, thus the linkage for $\pi_3,\pi_4$ is obtained along the frame. An example is shown in the right of Fig.\ref{Q2}.\\
Case B: there is a quadrant containing a pair. Assume that $NW$
contains a pair and that, among the quadrants containing a pair, $NW$ has the largest number of terminals.\\
B.1. $\|NW\|=7$ or $8$. The strategy consists in linking two pairs in NW and
mating the remaining terminals into $G-NW$, which is weakly $2$-linked by Lemma \ref{w2linked}. This plan works due to Lemma \ref{heavy78}.\\
B.2. $\|NW\|=6$.
First we assume that NW contains three pairs, say $\pi_1,\pi_2,\pi_3\subset NW$. We extend NW into the grid $H\cong P_4\Box P_4$ by including the $7$-path $L\subset (A(4)\cup B(4))$ between $(1,4)$ and $(4,1)$.
By Lemma \ref{w2linked}, $H$ is $3$-path-pairable,
therefore, there is a linkage in $H$ for $\pi_1,\pi_2$ and $\pi_3$. Removing the edges of $H$ from $G$
leaves a connected graph, which contains a linkage for $\pi_4$.
Next we assume that $NW$ contains $\pi_1, \pi_2$ and the terminals $s_3,s_4$.
W.l.o.g. assume that $\|NE\|\geq \|SW\|$, and let $A=NW\cap A(3)$, $B=NW\cap B(3)$ and $x_0=(3,3)$. We apply Lemma \ref{heavy6} for $Q=NW$. We obtain a linkage for $\pi_1$ (or $\pi_2$ or both),
and a mating of the remaining terminals into distinct vertices of $A\cup B$ such that $B-x_0$
contains at most one mate.
We extend these mating paths ending at $(A\cup B)\setminus\{x_0\}$ into at most three vertices of
$\{(4,1),(4,2),(1,4),(2,4)\}$. Observe that after this step both quadrants NE and SW contain at most three (not necessarily distinct) terminals/mates.
Let $G^*=G-(A(1)\cup A(2)\cup B(1)\cup B(2))$, and let
$L^*\subset (A(3)\cup B(3))$ be the $7$-path bounding $G^*$.
By applying Lemma \ref{exit} (iii) twice, the
terminals/mates in NE and those in SW can be mated to distinct vertices of $L^*$ without using edges in $L^*$.
By Lemma \ref{3pp}, $G^*\cong P_4\Box P_4$ is $3$-path-pairable, hence the linkage for $\pi_2$ (or $\pi_1$) and $\pi_3,\pi_4$ can be completed in $G^*$.\\
B.3. $\|NW\|=5$.
First we assume that NW contains $\pi_1,\pi_2$ and a terminal $s_3$.
As in case B.2, we extend $NW$ into the grid $H\cong P_4\Box P_4$ by including
the $7$-path $L\subset (A(4)\cup B(4))$ from $(1,4)$ to $(4,1)$.
Let $s_3^*$ be any terminal-free vertex on $L$.
Since $H$ is $3$-path-pairable, there is a linkage in $H$ for the pairs $\pi_1,\pi_2$ and $\{s_3,s_3^*\}$.
Next we mate $s_3^*$ and the remaining terminals of $L$ into $G-H$ using the edges between $H$ and $G-H$.
By Lemma \ref{w2linked}, $G-H=A(5)\cup A(6)\cup B(5)\cup B(6)$ is weakly $2$-linked, thus a linkage can be completed there for the pairs $\pi_3,\pi_4$.
Next we assume that $NW$ contains $\pi_1$ and the terminals $s_2,s_3,s_4$. W.l.o.g. assume that $\|NE\|\geq\|SW\|$, and apply Lemma \ref{heavy5} with $NW$ to obtain a linkage for $\pi_1$
and mates $s_2^\prime,s_3^\prime\in A(3)$, $s_4^\prime\in B(3)\cap NW$.
For $\|NE\|\leq 2$ the solution is completed similarly to case B.2 above.
For $\|NE\|=3$ we extend the mating paths from
$s_2^\prime,s_3^\prime$ to vertices $s_2^*,s_3^*\in A(4)\cap SW$ and the mating path from
$s_4^\prime$ to $s_4^{*}\in B(4)\cap NE$.
If $s_4^{*}\notin\{t_2,t_3,t_4\}$, then we apply Lemma \ref{boundary} to
obtain a linkage for $\pi_4$ and the mating of $t_2,t_3$ to $t_2^*,t_3^*\in A(4)$.
Assume that $s_4^{*}=t_i$, for some $2\leq i\leq 4$.
If $s_4^*=t_4$, then a linkage is obtained for $\pi_4$, and we mate $t_2,t_3$ to $t_2^*,t_3^*\in A(4)$. W.l.o.g. let $s_4^*=t_2=w$. For $w=(3,4)$ we take a $w,t_4$-path in the weakly $2$-linked NE to complete the linkage for $\pi_4$, and we mate $t_3$ to $t_3^\prime\in A(3)\cap NE$; then we mate $t_2,t_3^\prime$ to
$t_2^*,t_3^*\in A(4)$. For $w=(2,4)$
we mate $t_2$ into $(4,4)$ along $B(4)$. Then we take a $w,t_4$-path in the weakly $2$-linked $NE\setminus\{(3,3)\}$ to complete the linkage for $\pi_4$, and we mate $t_3$ to
$t_3^*\in A(4)$.
In each case we have $s_2^*,t_2^*,s_3^*,t_3^*\in A(4)$, thus a linkage for $\pi_2,\pi_3$ can be completed in the
weakly $2$-linked halfgrid $SW\cup SE$.\\
B.4. $\|NW\|\leq 4$. Recall that NW contains a pair, say $\pi_1$, and that $NW$ has the largest number of terminals among the quadrants containing a pair.\\
B.4.1. If $\|NW\|=2$ or $3$, then $\pi_1$ is the only pair in NW, and by the choice of NW, we have $\|NE\|\leq 3$.
Applying Lemma \ref{exit} (ii) for NW and (iii) for NE, there is a linkage in NW for $\pi_1$ and there are distinct matings for the other terminals of $NW\cup NE$ into distinct vertices of
$A(3)$ without using edges of $A(3)$. A linkage
for $\pi_2,\pi_3,\pi_4$ can be completed in the grid $G-(A(1)\cup A(2))\cong P_4\Box P_6$, which is $3$-path-pairable by Lemma \ref{3pp}.\\
B.4.2. Let $\|NW\|=4$. If NW contains two pairs then their linkage can be done in NW
and the linkage of the other two pairs in $G-NW$, since both $NW$ and $G-NW$ are weakly $2$-linked, by Lemma \ref{w2linked}. Thus we may assume that NW contains $\pi_1$ and terminals $s_2,s_3$.
We distinguish cases where $\pi_4$ is contained by some quadrant $Q\neq NW$ or $s_4,t_4$ are in distinct quadrants.
If $\pi_4\subset Q$, then we may assume, by symmetry, that $Q=$NE or SE. For $\pi_4\subset NE$,
we apply Lemma \ref{boundary} twice. The pair $\pi_1$ is linked in NW and $s_2,s_3$ are mated into $A(3)\cap NW$; the pair $\pi_4$ is linked in NE and the remaining terminals are mated to $A(3)\cap NE$.
Then the four (distinct) terminals/mates from $A(3)$ are mated further into
$SW\cup SE$ which is weakly $2$-linked by Lemma \ref{w2linked}. Thus
a linkage for $\pi_2, \pi_3$ can be completed in $SW\cup SE$.
Assume next that $\pi_4\subset SE$. We apply Lemma \ref{boundary} with NW to obtain a linkage for $\pi_1$ and mates of $s_2,s_3$ through the appropriate boundary of NW into the neighboring quadrants. For $s_j$, $j=2,3$, we set $\psi(s_j)\subset A(3)$ if $t_j\in SW$, and
$\psi(s_j)\subset B(3)$ if $t_j\in NE\cup SE$.
Using Lemma \ref{boundary} with SE
we take a linkage for $\pi_4$ and mate $t_j\in SE$ into the neighboring quadrant where $s_j$ is mated from NW.
Then we obtain the linkage for $\pi_2,\pi_3$ in the weakly $2$-linked quadrants SW and/or NE.
The remaining cases, where $\pi_4$ does not belong to any quadrant, are listed in Fig.\ref{1P2S}.
\begin{figure}[htp]\begin{center}
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\begin{tikzpicture}
\draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0);
\draw (1.2,0) -- (2.2,0) -- (2.2,1) -- (1.2,1) -- (1.2,0);
\draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2);
\draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2);
\node[txt](NW) at (.5, 1.9){$s_1$ $t_1$};
\node[txt]() at (.5, 1.45){$s_2$ $s_3$};
\node[txt](SW) at (.5, .7){$\emptyset$};
\node[txt](NE) at (1.7, 1.9){$s_4$};
\node[txt](SE) at (1.7, .7){$t_2$ {$t_3$}};
\node[txt]() at (1.7, .35){$t_4$};
\node[txt]() at (1.15, -.6){(I)};
\end{tikzpicture}
\hskip.8truecm
\begin{tikzpicture}
\draw (0,0) -- (2.2,0) -- (2.2,1) -- (0,1) -- (0,0);
\draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2);
\draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2);
\node[txt](NW) at (.5, 1.9){$s_1$ $t_1$};
\node[txt]() at (.5, 1.45){$s_2$ $s_3$};
\node[txt](NE) at (1.7, 1.9){$t_2$ $s_4$};
\node[txt](SE) at (1.1, .5){$t_3$ {$t_4$}};
\node[txt]() at (1.15, -.6){(II)};
\end{tikzpicture}
\hskip.8truecm
\begin{tikzpicture}
\draw (0,0) -- (2.2,0) -- (2.2,1) -- (0,1) -- (0,0);
\draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2);
\draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2);
\node[txt](NW) at (.5, 1.9){$s_1$ $t_1$};
\node[txt]() at (.5, 1.45){$s_2$ $s_3$};
\node[txt](NE) at (1.7, 1.9){$t_2$ $t_3$};
\node[txt]() at (1.7, 1.45){$s_4$};
\node[txt]() at (1.1, .5){$t_4$};
\node[txt]() at (1.15, -.6){(III)};
\end{tikzpicture}
\hskip.8truecm
\begin{tikzpicture}
\draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0);
\draw (1.2,0) -- (2.2,0) -- (2.2,1) -- (1.2,1) -- (1.2,0);
\draw (0,1.2) -- (1,1.2) -- (1,2.2) -- (0,2.2) -- (0,1.2);
\draw (1.2,1.2) -- (2.2,1.2) -- (2.2,2.2) -- (1.2,2.2) -- (1.2,1.2);
\node[txt](NW) at (.5, 1.9){$s_1$ $t_1$};
\node[txt]() at (.5, 1.45){$s_2$ $s_3$};
\node[txt](SW) at (.5, .7){$t_4$ };
\node[txt](NE) at (1.7, 1.9){ $s_4$};
\node[txt](SE) at (1.7, .7){ $t_2$ $t_3$};
\node[txt]() at (1.15, -.6){(IV)};
\end{tikzpicture}
\end{center}
\caption{$\|NW\|=4$, $\pi_1\subset NW$}
\label{1P2S}
\end{figure}
For type (I), Lemma \ref{boundary} is used for SE and for NW as above. Thus we
obtain a linkage in NW for $\pi_1$, a linkage in SW for $\pi_2,\pi_3$, and a linkage in NE for $\pi_4$.
For type (II), we apply Lemma \ref{boundary} with NW to find a linkage for $\pi_1$ and to mate $s_2$ to $s_2^*\in B(4)\cap NE$ and $s_3$ to $s_3^*\in A(4)\cap SW$. Then Lemma \ref{exit} (ii) is used with NE to complete a linkage in NE for $\pi_2$ and to mate $s_4$ to $s_4^*\in A(4)\cap SE$.
The linkage for $\pi_3,\pi_4$ can be completed
in $SW\cup SE$ which is weakly $2$-linked, by Lemma \ref{w2linked}.
For types (III) and (IV), the linkage for $\pi_2,\pi_3,\pi_4$ will be done by mating the terminals appropriately into
$G^*=G-(A(1)\cup A(2)\cup B(1)\cup B(2))\cong P_4\Box P_4$ which is $3$-path-pairable, by Lemma \ref{3pp}.
\begin{figure}[htp]\begin{center}
\tikzstyle{M} = [circle, minimum width=.5pt, draw=black, inner sep=2pt]
\tikzstyle{B} = [rectangle, draw=black!, minimum width=1pt, fill=white, inner sep=1pt]
\tikzstyle{T} = [rectangle, minimum width=.1pt, fill, inner sep=2.5pt]
\tikzstyle{V} = [circle, minimum width=1pt, fill, inner sep=1pt]
\tikzstyle{C} = [circle, minimum width=1pt, fill=white, inner sep=2pt]
\tikzstyle{A} = [circle, draw=black!, minimum width=1pt, fill, inner sep=1.7pt]
\tikzstyle{txt} = [circle, minimum width=1pt, draw=white, inner sep=0pt]
\tikzstyle{Wedge} = [draw,line width=2.2pt,-,black!100]
\begin{tikzpicture}
\draw[->,line width=1.2pt] (1,2) -- (1,0) -- (1.9,0);
\draw[->,line width=1.2pt] (1,5) -- (1,3.1);
\draw[->,line width=1.2pt] (1,3) -- (1,2) -- (1.9,2);
\draw[->,line width=1.2pt] (0,4) -- (0,2.9);
\draw[->,line width=1.2pt] (0,3) -- (0,2) (0,2) -- (0,1) -- (1.9,1);
\draw[->,line width=1.2pt] (3,5) -- (5,5) -- (5, 3.1);
\draw[->,line width=1.2pt] (3,4) -- (3,3.1);
\draw[->,line width=1.2pt] (4,4) -- (4,3.1);
\draw[double,line width=.5pt] (2,0) -- (5,0) -- (5,3) --(2,3)--(2,0);
\draw[double,line width=.5pt] (3,0) -- (3,3) (4,0) --(4,3);
\draw[double,line width=.5pt] (2,1) -- (5,1) (2,2) --(5,2);
\draw[line width=2.2pt] (0,5) -- (0,4) --(2,4);
\draw[dashed] (0,5) -- (3,5) (2,4)-- (5,4) (0,3)--(2,3) (0,2)--(1,2) (0,1)--(0,0)--(1,0);
\draw[dashed] (2,3) -- (2,5) (3,4)-- (3,5) (4,4)--(4,5);
\foreach \x in {0,1,2,3,4,5}
\foreach \y in {0,1,2,3,4,5}
{ \node[B] () at (\x,\y) {};}
\node[T,label=above:$s_1$](s1) at (0,5){};
\node[T,label=left:$s_2$](s2) at (0,4){};
\node[T,label=above:$s_3$](s3) at (1,5){};
\node[T,label=above:$s_4$](s4) at (3,5){};
\node[T](t1) at (2,4){}; \node[txt]() at (1.7,4.3){$t_1$} ;
\node[T](t3) at (4,4){}; \node[txt]() at (4.3,4.3){$t_3$} ;
\node[T](t2) at (3,4){};\node[txt]() at (3.3,4.3){$t_2$} ;
\node[T](t4) at (1,2){};\node[txt]() at (.7,1.7){$t_4$} ;
\node[M,label=left:$s_2^\prime$](s2') at (0,3){} ;
\node[txt](t2*) at (2.7,2.7){$t_2^*$} ;
\node[M](s4*) at (5,3){} ;
\node[txt] () at (2.3,.35) {$s_3^{*}$};
\node[M]() at (1,3){} ; \node[M]() at (2,1){} ; \node[M]() at (2,0){} ;
\node[M]() at (2,2){} ; \node[M]() at (3,3){} ; \node[M]() at (4,3){} ;
\node[txt]() at (2.35,1.4){$s_2^{*}$} ;
\node[txt]() at (1.35,2.7){$s_3^\prime$} ;
\node[txt]() at (2.35,2.3){$t_4^{*}$} ;
\node[txt]() at (3.7,2.7){$t_3^*$} ;
\node[txt]() at (4.7,2.7){$s_4^*$} ;
\node[txt]() at (4.6,.4){$G^*$} ;
\node[txt]() at (.4,4.4){$P_1$} ;
\end{tikzpicture}
\end{center}
\caption{Solution for a pairing of type (III)}
\label{B32}
\end{figure}
First we apply Lemma \ref{exit} (iii) for $NE$ to mate the terminals in $NE$ into distinct vertices of $A(3)$
with mating paths not using edges in $A(3)$.
Next we use Lemma \ref{boundary} to obtain a linkage for $\pi_1$ and to mate
$s_j$ into $s_j^\prime\in A(3)\cap NW$, for $j=2,3$. If
$s_j^\prime\neq (3,3)$, then we extend its mating path into $A(4)\cap SW$. Applying Lemma \ref{exit} (iii) for the terminals/mates in $SW$ we obtain the mates $s_2^{*}, s_3^{*}, t_4^*\in B(3)$. (In case of type (III) it is possible that $t_4\in SE$, in which case we just take $t_4^*=t_4$.)
Then the linkage for $\pi_2,\pi_3$, and $\pi_4$ can be completed, since all mating paths leading to $G^*$ are edge disjoint from $G^*$. An example is shown in Fig.\ref{B32}.
\end{proof}